Focusing

Over the last month I have made a couple of decisions about how to focus this project.  First, I attempted to design a ‘version 2’ that would use project files.  The idea was to give a project a group of categories to use, along with persistent result sets and a dictionary that could be kept up to date with the results.  It didn’t take long to realize that, although this might be a good application, I needed to reduce my scope: what I really needed to build was the underlying set of classes that this sort of application would call to do its searching and parsing.

Coincidentally, my Systems Development instructor suggested that I might make my project more manageable by making it a platform for further research.  Instead of trying to study all of the variables involved, it would be more productive to focus on making the code highly modular and well-documented, and then move on to a study of a single variable.  In future years, students needing a research project could pick up the code and do a more thorough study of the variables that affect the quality of the training sets.  Alternatively, a student could take the SVMTrainer classes and use them to implement a higher-level application.

So for my own purposes I am calling the current model ‘version 3.’  It is built around several basic classes that are meant to be extended.  Here’s a summary, followed by a rough sketch of how the classes fit together:

  • The Searcher class is in charge of going online and retrieving a set of Documents.  (I am considering making the Searcher return a set of results and giving the implementation the job of creating Documents, but this is cleaner for now.)
  • The Document class converts its source text to a bag-of-words representation on construction.  It uses a DocumentParser to do so.  It also remembers whether it is supposed to be a positive or negative example.
  • A WebDocument is just a Document that is constructed with a URL and fetches its own source text.
  • The DocumentParser decides which parts of the document to process and splits that text into the words it wants to put into the word bag, asking a Lexicon for each word’s ID before adding it.
  • The Lexicon tracks all of the words it has seen, the number of times it has seen each one, and a unique ID for each.  It asks a WordFilter to preprocess every word it gets from the DocumentParser.
  • The WordFilter serves a dual purpose: to ignore low-content words (such as pronouns) and to unify different word forms and concepts.  It has been suggested to me that using WordNet synsets here to recognize synonyms would be a good study.
  • Finally, a SetGenerator will take a set of Documents and (potentially using statistics from the DocumentParser) write them out as a training set of normalized word frequencies.  At this point, the Lexicon’s dataset is also saved to disk so it can be reloaded and used to convert any text that needs categorization.
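To make the design concrete, here’s a rough sketch of how these classes might fit together.  The class names match the list above, but the method signatures are only illustrative guesses at the eventual API:

import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.util.List;
import java.util.Map;

// Searcher: goes online and retrieves a set of Documents for a query.
interface Searcher {
    List<Document> search(String query, boolean positive, int maxResults) throws IOException;
}

// WordFilter: drops low-content words and unifies word forms.
interface WordFilter {
    String filter(String word); // canonical form, or null to ignore the word
}

// Lexicon: tracks every word it has seen, how often, and a unique ID.
interface Lexicon {
    int idFor(String word);                  // runs the word through the WordFilter first
    void save(File file) throws IOException; // reloaded later to categorize new text
}

// DocumentParser: decides which parts of a document to process and tokenizes them.
interface DocumentParser {
    Map<Integer, Integer> toWordBag(String sourceText); // word ID -> count
}

// Document: converts its source text to a bag of words on construction.
class Document {
    private final Map<Integer, Integer> wordBag;
    private final boolean positive; // positive or negative example

    Document(String sourceText, boolean positive, DocumentParser parser) {
        this.wordBag = parser.toWordBag(sourceText);
        this.positive = positive;
    }
}

// WebDocument: a Document constructed with a URL that fetches its own source text.
class WebDocument extends Document {
    WebDocument(URL url, boolean positive, DocumentParser parser) throws IOException {
        super(fetch(url), positive, parser);
    }

    private static String fetch(URL url) throws IOException {
        java.util.Scanner in = new java.util.Scanner(url.openStream(), "UTF-8");
        try {
            return in.useDelimiter("\\A").hasNext() ? in.next() : "";
        } finally {
            in.close();
        }
    }
}

// SetGenerator: formats Documents as a training set of normalized word frequencies.
interface SetGenerator {
    void generate(List<Document> documents, File out) throws IOException;
}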

This is basically the content of my poster at CCSC-NW 2008.

While implementing version 3 I switched to the Yahoo! Web Search API.  While writing this post, I just noticed another Yahoo! search service called Term Extraction that could simplify my project… I’m not sure how I missed it before.  I’ll just add that to my list of potential changes.

Oh, and some good news:  shortly after writing the bare-bones implementation of version 3, training on “dogs -cats” with 896 examples returned a XiAlpha-estimate of precision of 47.60%, an encouraging improvement over the precisions I reported at the end of August.  It suggests that the concept really has potential and that further research is merited.

Search Limits

Today I learned about some under-documented limits on Google’s AJAX Search API.  While working on my Searcher class (which will eventually generate training sets for the SVM), I asked Java to print the first 50 page titles that Google returned.  Every time I ran the program I would get a JSONException after 28 results.  On further examination, I found that Google returned the following 400 Bad Request JSON whenever I sent a request with the &start parameter greater than 28:

{
"responseData": null,
"responseDetails": "out of range start",
"responseStatus": 400
}

This seemed a little absurd, considering that in previous queries Google claimed to have found over 14 million results for the same search terms. Naturally, I started digging online to see if anyone else had encountered this magic 28 barrier. I soon learned that the AJAX Search API is limited to 32 results, and that in order to get all 32 you must include &rsz=large in your request, which returns 8 results per request instead of 4.
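In case it helps anyone else, here’s a minimal sketch of paging under those limits.  I’m using the same org.json classes that threw the exception above; the endpoint and field names (responseData, results, titleNoFormatting) are simply what I’ve observed in the API’s responses:

import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;
import org.json.JSONArray;
import org.json.JSONObject;

public class PagingDemo {
    public static void main(String[] args) throws Exception {
        String query = URLEncoder.encode("dogs -cats", "UTF-8");
        // rsz=large yields 8 results per request, and the API stops at 32
        // results total, so the only valid start values are 0, 8, 16, and 24.
        for (int start = 0; start < 32; start += 8) {
            URL url = new URL("http://ajax.googleapis.com/ajax/services/search/web"
                    + "?v=1.0&rsz=large&q=" + query + "&start=" + start);
            Scanner in = new Scanner(url.openStream(), "UTF-8");
            String body = in.useDelimiter("\\A").next();
            in.close();

            JSONArray results = new JSONObject(body)
                    .getJSONObject("responseData").getJSONArray("results");
            for (int i = 0; i < results.length(); i++) {
                System.out.println(results.getJSONObject(i).getString("titleNoFormatting"));
            }
        }
    }
}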

This could really hinder the quality of my training sets. I suppose I can keep accumulating the 100 most recent results for each category (I wrote a nice little class to do just that; a simplified sketch follows), but then it could take a while to build a diverse training set: several days, even if the results changed every day. On the other hand, I read that Yahoo’s web search API offers up to 1000 results with a cap of 5000 queries in 24 hours. Switching to Yahoo might be a good option, if their results are kept as up to date as Google’s. I’ll have to do some research, or maybe make the search interface modular so I can try both.
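For the curious, that class works roughly like this (a simplified sketch, not the actual code): it keeps the N most recent unique results and evicts the oldest once it’s full.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Keeps the N most recent unique result URLs, newest first.
class RecentResults {
    private final int capacity;
    private final Deque<String> urls = new ArrayDeque<String>();
    private final Set<String> seen = new HashSet<String>();

    RecentResults(int capacity) {
        this.capacity = capacity;
    }

    // Ignores duplicates; evicts the oldest entry once over capacity.
    void add(String url) {
        if (!seen.add(url)) {
            return;
        }
        urls.addFirst(url);
        if (urls.size() > capacity) {
            seen.remove(urls.removeLast());
        }
    }

    Iterable<String> mostRecent() {
        return urls;
    }
}

One instance per category, constructed with a capacity of 100, slowly fills up as each day’s search results come in.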