Sunday, July 25, 2010

NatSent - National Sentiment

About NatSent

NatSent envisions a more democratic world, a world where democracy is even more accessible and its benefits are continually apparent.


NatSent stands for "the National Sentiment". NatSent is an innovative web platform that leverages the efficiencies of the Internet to empower the voices of individuals. Through NatSent, we hope to enable individuals to identify, follow, frame, and influence the issues that they care about. Our goal is to decrease the transaction costs between voters and politicians, consumers and businesses, citizens and thought leaders. NatSent wants individuals to be able to say what they want, and ultimately get some form of it, without having to go through a lot of red tape.


NatSent is designed to be like a personalized, digital newspaper where users are both the editors and the writers. What makes the front page depends on the users. How an issue is framed also depends on the users. And when NatSent doesn't cater to a specific user's needs, that user can proactively make a difference by posting, responding, and voting on NatSent.


NatSent was founded in 2010 by two Generation Y former roommates from Rice University. Houston, Texas is hot as hell in the summer, but it's home.

Rails Server Deployment - Ruby, Rails, ImageMagick, MySQL, Nginx Installation and configuration

Ruby Rails Server Deployment Instructions

None of this is totally new; it's all out there already. I just thought of keeping it as one compiled note.
(This is scraped from my notes.)
I've used an Ubuntu 9.10 server for the installation.




Install Ruby 1.9.1, Rails 2.3.5, ImageMagick, MySQL 5.1
----------------------------------------------------
Install GCC:
apt-get install gcc
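
If the builds below complain about missing tools (make, headers, and so on), the build-essential meta-package pulls in the full toolchain:
apt-get install build-essential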

Install zlib (optional):
wget http://zlib.net/zlib-1.2.5.tar.gz
tar zxvf zlib-1.2.5.tar.gz
cd zlib-1.2.5
./configure
make
sudo make install


Install OpenSSL (optional):
wget http://mirrors.isc.org/pub/openssl/source/openssl-0.9.8g.tar.gz
tar zxvf openssl-0.9.8g.tar.gz
cd openssl-0.9.8g
./config
make
make install


Ruby 1.9.1
apt-get install libopenssl-ruby1.9.1 libssl-dev

wget ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.1-p376.tar.gz
tar zxvf ruby-1.9.1-p376.tar.gz
cd ruby-1.9.1-p376

To include OpenSSL and zlib
---------------------------
vi ext/Setup   (we are already inside the ruby-1.9.1-p376 directory)
#openssl   (uncomment this line, i.e. remove the leading '#')
#zlib      (uncomment this line as well)

Installation
------------
./configure
make
sudo make install


Verify Ruby Installation
----------------------
whereis ruby

ruby -v
gem -v
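
Since the whole point of editing ext/Setup was to compile the OpenSSL and zlib extensions into Ruby, it's worth a quick optional check that both load:

ruby -ropenssl -rzlib -e 'puts OpenSSL::OPENSSL_VERSION; puts Zlib.zlib_version'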



Install ImageMagick
apt-get install imagemagick

NOTE: If the above step runs into any problems, then also run this one:
apt-get install graphicsmagick-imagemagick-compat


Verify ImageMagick Installation
-------------------------------
whereis identify
whereis convert
convert logo: logo.gif

Install RMagick

apt-get install libmagick9-dev
sudo gem install rmagick
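
To confirm that RMagick actually compiled against ImageMagick, a quick optional sanity check like this should print the ImageMagick version string:

ruby -e "require 'RMagick'; puts Magick::Magick_version"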


Install Rails and other required gems

gem update --system
gem install rails -v=2.3.5

sudo gem install mysql --source http://gems.rubyinstaller.org
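
There is no explicit verification step for these gems in the original notes, so here is a quick check in the same style as the other sections. The second command just confirms the mysql gem loads; if it fails to build its native extension, install libmysqlclient16-dev (next section) first and retry.

Verify Rails Installation
-------------------------
rails -v
ruby -e "require 'mysql'; puts 'mysql gem OK'"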



Install MySQL Server 5.1
sudo apt-get install mysql-server-5.1
sudo apt-get install libmysqlclient16-dev

Verify MySQL Installation
mysql --version
(sample output: mysql Ver 14.14 Distrib 5.1.37, for debian-linux-gnu (x86_64) using EditLine wrapper)
whereis mysql
ps -ef | grep mysql
mysql -uroot -p

Create User in MySQL and set Privileges
CREATE USER 'monty'@'localhost' IDENTIFIED BY 'some_pass';
GRANT ALL PRIVILEGES ON *.* TO 'monty'@'localhost' WITH GRANT OPTION;
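
In practice you will probably want a dedicated database for the app and a grant scoped to it rather than a blanket *.* grant; something like the following, where testapp_production is just a placeholder name:

CREATE DATABASE testapp_production;
GRANT ALL PRIVILEGES ON testapp_production.* TO 'monty'@'localhost';
FLUSH PRIVILEGES;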

Memcached, Nginx and Mongrel Cluster

http://articles.slicehost.com/2009/3/11/ubuntu-intrepid-nginx-rails-and-mongrels
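
The heading mentions memcached but there's no step for it in these notes; on Ubuntu it is just the packaged daemon plus a client gem (memcache-client is a common choice with Rails 2.3):

sudo apt-get install memcached
sudo gem install memcache-client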

Install nginx
sudo apt-get install apache2-utils
sudo aptitude install nginx
sudo /etc/init.d/nginx start

Verify nginx Installation
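A couple of quick checks in the same spirit as the MySQL verification above (the welcome-page check assumes you are testing from the server itself):

ps -ef | grep nginx
wget -qO- http://localhost

The second command should print the default "Welcome to nginx!" page.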

Install Mongrel and Mongrel Cluster
sudo gem install mongrel -v 1.2.0.pre2 --pre --source http://rubygems.org
sudo gem install gem_plugin
sudo gem install fastthread cgi_multipart_eof_fix daemons

sudo gem install mongrel_cluster

mongrel_rails cluster::configure -e production -p 8000 -N 2 -c /home/demo/public_html/testapp -a 127.0.0.1
mongrel_rails cluster::start
mongrel_rails cluster::restart
mongrel_rails cluster::stop
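
To actually serve the app through nginx, point it at the two mongrels started above (ports 8000 and 8001, from the -p 8000 -N 2 options). A minimal virtual host along the lines of the Slicehost article linked above might look like the following; server_name and the paths are placeholders for your own setup, and on Ubuntu the file would typically live in /etc/nginx/sites-available/ with a symlink in sites-enabled/:

upstream mongrel_cluster {
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
}

server {
  listen 80;
  server_name example.com;
  root /home/demo/public_html/testapp/public;

  location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://mongrel_cluster;
  }
}

Reload nginx after enabling the site: sudo /etc/init.d/nginx reload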

Scrape proper content from Wikipedia

Scrape the 1st paragraph & image from a Wikipedia entry

Sometimes we need an automated way of getting the description, and possibly an image too, for the keywords that we use in our projects.

One good place to look for this content is Wikipedia. But when you search Wikipedia for the company Apple, what comes back is probably not the right page. And we can't expect our users to type out the full, unambiguous name of the keyword, like 'Apple Inc'.

The solution is to use Google search combined with Wikipedia: search Google restricted to en.wikipedia.org and take the first result.

Here is the code for getting the description and image from Wikipedia (hoping there is a Wikipedia page for the keyword we search for, which there usually is unless the keyword is really obscure).

require 'hpricot'
require 'open-uri'

# Returns [description, image_url] for the given keyword
def fetch_description(query_item)
  page_title, uri_title = get_wiki_name(query_item)
  return get_wiki_description(page_title, uri_title)
end

# Downloads the image and tacks an original_filename method onto the IO object,
# so it can be handed to an attachment plugin like a normal file upload
def upload_photo(wiki_photo)
  begin
    base_uri = URI.parse(wiki_photo)
    filename = base_uri.path.split('/').last
    return nil if filename.to_s.empty?
    uploaded_data = open(base_uri)
    # Note: a plain "def uploaded_data.original_filename" cannot see the local
    # variable, so define the method through the singleton class instead
    (class << uploaded_data; self; end).send(:define_method, :original_filename) { filename }
    return uploaded_data
  rescue
    return nil
  end
end


# Method to fetch the wiki page and extract the first few <p> tags
def get_wiki_description(page_title, uri_title)
  url = uri_title
  final_content = ""
  wiki_photo = nil
  if url.size > 10   # skip empty/bogus URLs
    buffer = Hpricot(open(url, "User-Agent" => "reader" + rand(10000).to_s).read)
    # Capture the first three paragraphs of text
    content = buffer.search("//div[@id='content']").search("//div[@id='bodyContent']").search("//p")[0..2]
    # Remove the extra spaces and strip HTML tags from the fetched content
    content.each do |c|
      final_content += c.inner_html.gsub(/<\/?[^>]*>/, '').gsub(/&#\d+;/, '').gsub(/\([^\)]+\)/, '').gsub(/\[[^\]]+\]/, '').gsub(/ +/, ' ') + "\n"
    end
    # Grab the first image in the article body as the photo
    # (assumption: the lead/infobox image is the first img tag in bodyContent)
    img = buffer.search("//div[@id='bodyContent']").search("//img").first
    wiki_photo = img.attributes["src"] if img
  end
  return final_content, wiki_photo
end
# Method to get the Wikipedia link from the Google search results
def get_wiki_name(query_item)
  search_keywords = query_item.strip.gsub(/\s+/, '+')
  url = "http://www.google.com/search?q=#{search_keywords}+site%3Aen.wikipedia.org"
  begin
    doc = Hpricot(open(url, "User-Agent" => "reader" + rand(10000).to_s).read)
    # Take the first organic result (div#ires li.g), which the site: operator
    # in the query restricts to en.wikipedia.org
    result = doc.search("//div[@id='ires']").search("//li[@class='g']").first.search("//a").first if doc
  rescue
    return '', ''
  end
  if result
    # Return the result title (HTML tags and literal periods stripped) and the
    # href of the Wikipedia page
    return result.inner_html.gsub(/<\/?[^>]*>/, "").gsub(/\./, ""), result.attributes["href"]
  else
    return '', ''
  end
end


wiki_description, wiki_photo = fetch_description("Apple")
upload_photo(wiki_photo)
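
The original_filename method that upload_photo tacks onto the downloaded IO is there so attachment plugins can treat it like a normal file upload. As a purely hypothetical example, with a Paperclip-style :photo attachment on an imaginary Keyword model, the pieces fit together roughly like this:

# Hypothetical Rails usage -- Keyword and its :photo attachment are not part of
# the code above; they only illustrate where the scraped data could go
wiki_description, wiki_photo = fetch_description("Apple")
keyword = Keyword.new(:name => "Apple", :description => wiki_description)
keyword.photo = upload_photo(wiki_photo) if wiki_photo
keyword.save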

Note: After all of this is done, please make sure to give credit to Wikipedia :)