Today I found a link on JSON.org to “I can’t believe it’s not XML!”, written by James Bennett back in 2006. I think it gives good reasons why both JSON and REST are good choices for Google, humorously summarized in this quote: “So it may not be formal or robust, but JSON is cheap, it’s fast, and it works.”
And by now, it’s fairly widespread; having never heard of this protocol, I feel behind the times. I should bounce this off my advisor.
My efforts today revolved around Google’s search API. It took a while to figure out how to search Google from Java. The first lead I found was an old page from Pace University’s CS department, which mentioned a “Google API” and the need for a developer key. It didn’t take me long to find out that they were referring to the SOAP search API. Unfortunately, Google stopped issuing API keys for it back in 2006, so it’s not an option for me. Almost a dead end: eventually I found its replacement, the AJAX Search API, which returns results as JSON, and whose documentation includes a Java code sample that ends with:
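// (builder is a StringBuilder holding the raw JSON response read from the connection)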
JSONObject json = new JSONObject(builder.toString());
// now have some fun with the results...
So I fiddled with the code sample and the JSON classes for a while and got it working, in a limited capacity. The JSON format is actually very clear, once you figure out how to navigate it. I tried a few search queries and discovered that the results returned by the AJAX interface aren’t necessarily the same as the ones returned by Google Web Search. More concerning, I only got four results back at a time, and I found no instructions for getting the next page of results. One line in the output gave a “moreResultsUrl”, but it pointed to a web results page, not another JSON file.
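For my own reference, here’s roughly what the working version boiled down to, as one runnable class. Beyond moreResultsUrl, the endpoint URL, the Referer header, and field names like titleNoFormatting and unescapedUrl are my best reading of the documentation and the responses I got back, so treat this as a sketch rather than the official sample:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLEncoder;

import org.json.JSONArray;
import org.json.JSONObject;

public class AjaxSearchTest {
    public static void main(String[] args) throws Exception {
        // Build the request; the docs ask for an accurate Referer header.
        String query = URLEncoder.encode("support vector machine", "UTF-8");
        URL url = new URL("http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=" + query);
        URLConnection connection = url.openConnection();
        connection.addRequestProperty("Referer", "http://www.example.com/"); // placeholder referer

        // Read the raw JSON response into a string.
        StringBuilder builder = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        for (String line = reader.readLine(); line != null; line = reader.readLine()) {
            builder.append(line);
        }
        reader.close();

        // Parse the response and walk the (four!) results.
        JSONObject json = new JSONObject(builder.toString());
        JSONObject responseData = json.getJSONObject("responseData");
        JSONArray results = responseData.getJSONArray("results");
        for (int i = 0; i < results.length(); i++) {
            JSONObject result = results.getJSONObject(i);
            System.out.println(result.getString("titleNoFormatting") + " :: " + result.getString("unescapedUrl"));
        }

        // The cursor object is where moreResultsUrl showed up.
        JSONObject cursor = responseData.getJSONObject("cursor");
        System.out.println("More: " + cursor.getString("moreResultsUrl"));
    }
}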
I’m beginning a new project this month, to run through December. I’m going to learn how to train a Support Vector Machine (SVM) to categorize text, and then write a program that automatically trains the SVM using web searches to generate training material. Once I have a semblance of a working system, I’ll build a ‘web game’ to evaluate the machine’s accuracy against human feedback. I hope that an automatically trained SVM will be able to catch references to current events in news and pop culture, and use those to help categorize paragraphs of text.
I’ll be using SVMlight (or a related work from Thorsten Joachims of Cornell University) as the SVM backend. I’ve only just finished Probability, so the mathematics involved here is far beyond me; for those interested in the theory of SVMs, there is a tutorial by Chris Burges out of Microsoft Research.
My first challenge in this project is learning how to represent a text document as a vector. The most common representation (and the one used in the Inductive SVM example on Joachims’ page) is the Bag-of-Words, or BOW. There’s a tutorial covering variations on the BOW model by José María Gómez Hidalgo of the Universidad Europea de Madrid. Basically, you build a dictionary for the categorization domain and then assign a value to each word based on whether it appears in the document: zero if it does not, and either a one or a weighted value if it does. I’ll begin with a simple binary document representation while I work out the program flow, and tweak the representation later to see if it improves my results.
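To make that concrete, here’s a rough sketch of the binary representation in Java. The class and its naive tokenizer are mine, not Joachims’; I’m assuming SVMlight’s sparse input format of a label followed by featureId:value pairs with feature ids in ascending order:

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BagOfWords {
    // Dictionary mapping each known word to a 1-based feature id.
    private final Map<String, Integer> dictionary = new LinkedHashMap<String, Integer>();

    // Add every word of a training document to the dictionary.
    public void learnWords(String document) {
        for (String word : tokenize(document)) {
            if (word.length() > 0 && !dictionary.containsKey(word)) {
                dictionary.put(word, dictionary.size() + 1);
            }
        }
    }

    // Binary BOW: a feature is 1 if its word appears in the document;
    // zero-valued features are simply omitted in the sparse format.
    public String toSvmLightLine(String label, String document) {
        List<Integer> ids = new ArrayList<Integer>();
        for (String word : tokenize(document)) {
            Integer id = dictionary.get(word);
            if (id != null && !ids.contains(id)) {
                ids.add(id);
            }
        }
        Collections.sort(ids); // SVMlight wants ascending feature ids
        StringBuilder line = new StringBuilder(label);
        for (int id : ids) {
            line.append(' ').append(id).append(":1");
        }
        return line.toString();
    }

    // Deliberately naive tokenizer: lowercase, split on non-word characters.
    private static String[] tokenize(String document) {
        return document.toLowerCase().split("\\W+");
    }
}

With labels of “+1” or “-1”, each document comes out as a line like “+1 3:1 17:1 42:1”, which is the kind of training file svm_learn reads. Swapping the constant 1 for a weight (term frequency, say) is where I’d start tweaking later.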