We're running out of space on our mLab instance, and it would be nice to have better querying options for our data. We need a new, larger, smarter database.
(This is the epic issue for our heuristic method classifier; please reference this in issues.)
The most basic way we can classify the work NGOs do (besides guessing) is to use simple heuristic methods. These are ways of intuiting what our most basic, reducible reactions to information are and attempting to translate those into programmatic instructions. This is complicated given that we have immense prior knowledge and the experiences/collective memory of humanity backing up our judgements.
So how do we solve this? Let's start by assessing the simplest parts of the information we have. Which words do we have? How many? With what frequencies? Where do these words occur? Which parts of webpages are significant and which are not? If you were directing an NGO and trying to get your organization's mission across through your website, what would you say? What are our limitations? These are all considerations we need to make (refer to our brainstorming sessions).
Make a Python function that, given the URL of an NGO website, returns a JSON object of all visible text on the website.
This can be a single block of text, key/value pairs of subpage_name: "subpage_text", or some other dictionary with informative fields like contact numbers, sponsors/affiliates, etc.
This scraper should be thorough and scrape all possible text from the site.
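A minimal sketch of such a scraper, assuming requests and BeautifulSoup; the function name and the shape of the output are placeholders, and it only handles the single page it is given (subpage crawling would be layered on top):

```python
import json
import requests
from bs4 import BeautifulSoup

def scrape_visible_text(url):
    """Return a JSON object (as a string) mapping the URL to its visible text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Drop elements that never contain visible text.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    # Collapse whitespace so the output is one clean block of text.
    text = " ".join(soup.get_text(separator=" ").split())
    return json.dumps({url: text})
```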
Use GlobalGiving's public API to get some records of NGOs (name, URL, category, country; add to the schema as you see fit). This should be a Python script which saves a list of these records as a JSON file.
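A rough sketch of the script, assuming a GlobalGiving API key and their public project-listing endpoint; the endpoint path and response field names here are assumptions and should be checked against the current API docs. Paging and rate limiting are left out.

```python
import json
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
PROJECTS_URL = "https://api.globalgiving.org/api/public/projectservice/all/projects"

def fetch_ngo_records():
    """Fetch one page of projects and flatten them into NGO records."""
    response = requests.get(
        PROJECTS_URL,
        params={"api_key": API_KEY},
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    projects = response.json()["projects"]["project"]  # assumed response shape
    records = []
    for project in projects:
        org = project.get("organization", {})
        records.append({
            "name": org.get("name"),
            "url": org.get("url"),
            "category": project.get("themeName"),
            "country": org.get("country"),
        })
    return records

if __name__ == "__main__":
    with open("ngo_records.json", "w") as f:
        json.dump(fetch_ngo_records(), f, indent=2)
```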
We might include some of the verbiage that we used to describe the problem in the PRD, the newsletter, or the Facebook presentation. Some graphics might be helpful as well.
Sub-issue of #13
Heuristic method that classifies organizations through a bag of words implementation
Generates dictionaries of words relevant to each category by collecting web-scraped words from already-classified organizations
Uses the above dictionaries to classify new organizations by the frequency of categorical words in the text of the new organization's website (e.g. if the website has more animal-related words than any other category, it will be classified as an animal organization); see the sketch below
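A minimal sketch of the bag-of-words idea, assuming the labelled, scraped text is already in hand; the tokenizer and function names are illustrative only:

```python
import re
from collections import Counter

def build_category_counts(labelled_texts):
    """labelled_texts: dict of category -> list of scraped text blobs."""
    counts = {}
    for category, texts in labelled_texts.items():
        words = []
        for text in texts:
            words.extend(re.findall(r"[a-z']+", text.lower()))
        counts[category] = Counter(words)
    return counts

def classify(text, category_counts):
    """Score a new organization's text against each category's word counts."""
    words = re.findall(r"[a-z']+", text.lower())
    scores = {
        category: sum(counts[w] for w in words)  # Counter returns 0 for unseen words
        for category, counts in category_counts.items()
    }
    # The category whose words appear most frequently wins.
    return max(scores, key=scores.get)
```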
If we don't render the website using JS, we run the risk of missing some text in elements generated by JS (these commonly include tables, but really could be anything).
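One possible mitigation, sketched with a headless Selenium Chrome driver; this is just one rendering option, not a tooling decision:

```python
from selenium import webdriver

def fetch_rendered_html(url):
    """Load a page in headless Chrome and return the HTML after JS has run."""
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()
```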
We're going to need a central place to store data; right now we're just keeping it in JSON files. Ideally we'd have a database so we can run more complex queries, make full use of our data, etc.
We're looking at using NOW or mLab; either way, we'd also like to build a Python module that abstracts interaction with the database to make it easier to read/write.
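A sketch of what that abstraction module could look like, assuming a MongoDB-compatible backend (e.g. an mLab connection URI) and pymongo; the class, database, and collection names are placeholders:

```python
from pymongo import MongoClient

class NGODatabase:
    def __init__(self, uri, db_name="ngo_classifier"):
        self._client = MongoClient(uri)
        self._orgs = self._client[db_name]["organizations"]

    def add_records(self, records):
        """Insert a list of NGO record dicts (name, url, category, country, ...)."""
        self._orgs.insert_many(records)

    def find_by_category(self, category):
        """Example of the richer queries we want vs. flat JSON files."""
        return list(self._orgs.find({"category": category}))
```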
We have some data to work with, but it is polluted by the problems encountered in extracting text from websites. As mentioned in #15, we don't pull any valid text from websites which require JS. We also have no way to detect failure when a site returns a 2xx status code (some sites display a 404 page but still return 200). There are also pages filled with ads, and pages advertising domains that are for sale; a rough filtering heuristic is sketched below.
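A rough heuristic for flagging "soft 404" and junk pages; the phrase list and word-count threshold are guesses that would need tuning against our real data:

```python
# Phrases that commonly appear on error pages and parked/for-sale domains.
SOFT_404_PHRASES = [
    "page not found", "404", "this domain is for sale",
    "buy this domain", "domain expired",
]

def looks_like_junk(page_text, min_words=50):
    """Return True if the scraped text looks like an error or placeholder page."""
    text = page_text.lower()
    if any(phrase in text for phrase in SOFT_404_PHRASES):
        return True
    # Very short pages are often placeholders rather than real NGO sites.
    return len(text.split()) < min_words
```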