8 Sentiments

April 7, 2011

8Sentiments.com is a semi-supervised sentiment analysis engine.

The 8Sentiments model is retrained every day on a large volume of unannotated Twitter data, so it can learn emotions related to current topics.

For instance, on April Fools’ Day, the phrase ‘April Fool’ was learned and associated with the emotion ‘Surprise’.

The current eight emotions are anger, fear, sadness, joy, anticipation, surprise, disgust, and acceptance.

A very simple API is available, and sample code is provided for Java, Ruby, and Python.
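The API itself isn’t documented in this post, so the response shape below is an assumption for illustration only; as a minimal sketch, a Python client might parse a hypothetical JSON response like this:

```python
import json

# Hypothetical response format for the 8Sentiments API -- the actual
# endpoint and JSON schema are assumptions, not taken from the post.
SAMPLE_RESPONSE = '{"text": "April Fool!", "emotions": {"surprise": 0.83, "joy": 0.11}}'

def top_emotion(response_body):
    """Return the highest-scoring emotion from an API response."""
    data = json.loads(response_body)
    emotions = data["emotions"]
    return max(emotions, key=emotions.get)

print(top_emotion(SAMPLE_RESPONSE))  # surprise
```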



New Information Extraction Projects

February 8, 2010

YooName’s named entity recognition technology is now at the heart of new projects in the domain of Online Reputation Management and Monitoring.

  • InfoGlutton aggregates restaurant reviews and classifies them by sentiment (positive, neutral, negative). InfoGlutton is aimed at helping restaurant owners get a complete overview of the ‘digital word-of-mouth’ around their brand.
  • FoodFu reuses InfoGlutton data in a restaurant directory for foodies in search of the best tables in town.
  • DingDining leverages YooName entity recognition trained for the food industry domain and offers a directory of restaurants ranked by awards and distinctions.

And there’s more to come!

YooName is *not* a search engine

April 30, 2009

(and other frequently given answers)

In the last few weeks, YooName traffic increased dramatically (tenfold), and so did the volume of emails. Don’t be offended if I answer your email by linking to this post. I think this is a good place and a good time to address the most frequent concerns:

1. YooName is not a search engine

Don’t expect YooName to return a list of web sites when you issue a query on the demo page. YooName is not a search engine. The confusion arises because we often describe YooName as a potential search engine component, or a novel algorithm for improving web search.

YooName is a self-improving named entity recognition (NER) system. If you know what NER is, then you probably have an idea of how it relates to search engines. If not, the link is less obvious. In short, NER allows structuring textual information, and structured information is important for semantic search technologies.

2. YooName is not a commercial project per se

YooName is a technology showcase for my PhD project.

3. No, I didn’t hire a lawyer to write a formal privacy policy

In order to sign up for the YooName demo, we collect your email. This is the simplest form of verification we could imagine to avoid being scraped by robots and/or mechanical turks. Also, when you send a text to the demo, it is stored in the system for statistics and quality assurance. These are the two most frequent privacy concerns expressed by demo users.

E-mails: I use the demo user email database with the greatest diligence. I do not share it and I do not mass-mail for fun. In fact, in the two years of the demo site’s existence, I haven’t used it yet. As the sign-up form says: “We will not share your e-mail. We may send you news about YooName developments. We will promptly remove your e-mail from our database upon request.”

Texts: The texts you send to the demo are stored and used internally. This information is not shared and is destroyed periodically. Again, if you think you sent sensitive information to the system and want it destroyed, drop me a line and I’ll wipe out the information linked to your username.

Pushing Automation a Step Forward

December 13, 2008

(and hope not to fall off the cliff)

* * *

I recently worked on an implementation of the ‘Stacked Skews Model’, an algorithm proposed by Andrew Carlson and Charles Schafer.

The idea is to train a web page wrapper induction algorithm (let’s call that a ‘wrapper’) to extract information using a small number of already trained wrappers for sites in the same domain. For instance, if you already have four wrappers for hotel booking web sites in hand, then you can use them to bootstrap new wrappers for virtually any hotel booking web site out there.


[Figure: sample web page wrapper annotations]

What’s clever in Carlson and Schafer’s solution is how it overcomes the lack of annotated examples, given the huge search space of such a problem: it works on feature distributions and distribution divergences instead of relying directly on surface evidence. In other words, when the system learns what the name of a hotel is, it learns how each feature is distributed and how similar a solution must be (e.g., a hotel name’s length is around 20 characters, a hotel name often contains the word ‘hotel’ or ‘resort’, etc.). It is basically equivalent to creating one classifier per feature and, as the authors suggest, stacking them using linear regression.
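As a rough sketch of that idea (the target length, substrings, and weights below are illustrative assumptions; in the real system the weights would be fit by linear regression on the seed wrappers):

```python
import math

# One scorer per feature: each scores how well a candidate string fits
# the distribution learned for the 'hotel name' field.
def length_score(candidate, target_len=20):
    """Closeness of the candidate's length to the expected field length."""
    return math.exp(-abs(len(candidate) - target_len) / target_len)

def keyword_score(candidate, keywords=("hotel", "resort", "inn")):
    """1.0 if the candidate contains a characteristic word."""
    text = candidate.lower()
    return 1.0 if any(k in text for k in keywords) else 0.0

def stacked_score(candidate, weights=(0.4, 0.6)):
    """Linear combination of the per-feature scores (stacking)."""
    features = (length_score(candidate), keyword_score(candidate))
    return sum(w * f for w, f in zip(weights, features))

print(stacked_score("Grand Palace Hotel") > stacked_score("Book now!"))  # True
```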

My implementation didn’t work exactly as advertised, which is normal ;) Even if stacked models reduce the feature space and diminish overfitting, the problem is still enormous, and one or two features tend to dominate the stack. However, I made some important progress by playing around with the published ideas.

First, do connect to an ontology. OK, I’m not a big fan of ontological features and only use them as a last resort, but here they made a real difference. When wrapping hotel web sites, connect to the WordNet synset ‘hotel’ and use all synonyms and related words as features.

Also, do use DOM tree features. In their article, Carlson and Schafer limit learning to features on textual information (the current node’s text and the previous node’s text). However, the DOM tree is very useful here. For instance, desirable information tends to be deep and almost in juxtaposition. Also, a hotel name is more likely to be in its own HTML tag (bold, header, etc.) while amenities are often enumerated (lists, tables, etc.).
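As an illustration, the Python standard library’s HTMLParser is enough to collect depth and enclosing-tag features for each text node (the markup below is a made-up example):

```python
from html.parser import HTMLParser

class DomFeatures(HTMLParser):
    """Collect (text, depth, enclosing_tag) for each text node --
    the kind of DOM tree features suggested above."""
    def __init__(self):
        super().__init__()
        self.stack = []   # currently open tags
        self.nodes = []   # (text, depth, enclosing tag)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            tag = self.stack[-1] if self.stack else None
            self.nodes.append((text, len(self.stack), tag))

parser = DomFeatures()
parser.feed("<html><body><h1>Grand Palace Hotel</h1>"
            "<ul><li>Pool</li><li>Wi-Fi</li></ul></body></html>")
for text, depth, tag in parser.nodes:
    print(text, depth, tag)
# The hotel name sits alone in its own <h1>; amenities are enumerated in <li>s.
```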

Finally, in order to reduce overfitting further, I split the feature space into independent groups and applied a voting scheme over the ensemble.
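A minimal sketch of such a voting scheme, with toy feature groups and candidates (the groups and scoring functions are made up for illustration):

```python
from collections import Counter

def vote(candidates, feature_groups):
    """Each feature group votes for its top-scoring candidate;
    the candidate with the most votes wins."""
    votes = Counter(max(candidates, key=group) for group in feature_groups)
    return votes.most_common(1)[0][0]

# Toy independent feature groups scoring a 'hotel name' candidate.
groups = [
    lambda c: -abs(len(c) - 20),     # length-based group
    lambda c: "hotel" in c.lower(),  # lexical group
    lambda c: c.istitle(),           # capitalisation group
]
winner = vote(["Grand Palace Hotel", "cheap rates", "CLICK HERE"], groups)
print(winner)  # Grand Palace Hotel
```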

The New York Times Annotated Corpus

November 1, 2008

The New York Times just released (through LDC) a gigantic corpus including:

Over 1.5 million articles manually tagged by The New York Times Index Department with a normalized indexing vocabulary of people, organizations, locations and topic descriptors. […] Articles are tagged for persons, places, organizations, titles and topics using a controlled vocabulary that is applied consistently across articles. For instance if one article mentions “Bill Clinton” and another refers to “President William Jefferson Clinton”, both articles will be tagged with “CLINTON, BILL”.

According to the documentation, there are hand-assigned meta annotations (describing text content) using a controlled vocabulary:

  • 1.3M persons
  • 600k locations
  • 600k organizations

as well as algorithmically assigned and manually verified online annotations (tagged within the text):

  • 114k persons
  • 124k locations
  • 136k organizations

Thanks Peter for forwarding the news.

Semantic Knowledge Discovery, Organization and Use

August 28, 2008

New York University will host a symposium on Semantic Knowledge Discovery in mid-November. The presentations will consist partly of invited talks by leaders in the field and partly of general submissions.

The focus of NLP research has been shifting towards semantic analysis from syntactic analysis. It has become evident that the methods employed for developing syntactic analyzers, i.e. supervised methods using small annotated corpora, are not the best methods for the semantic task. In order to handle semantics, we need large amounts of knowledge which may be best collected by semi/un-supervised methods from a huge unannotated corpus.

Interesting new research problem

June 25, 2008

This article, found via Digg, highlights an inherent ‘by-design’ flaw of automatic news aggregators, including Google News: they need a significant amount of press coverage before promoting a news item to their front page. As a result, automatic news aggregators are often hours late in covering breaking news.

The solution to the problem of “finding the most important news right now” cannot rely on an hour or so of news history. After one hour, it is no longer breaking news; it is late and repetitive.

Let’s formulate a challenging research problem from that: “Given a novel and unique news item, can you predict that there will be thousands of repetitions and reformulations?”

DayLife Developer Challenge

June 13, 2008

DayLife is staging a challenge from June 3rd to July 25th (extended date):

Build the future of news, in software!

Build an application that uses the Daylife API. No limits here: mashups, portals, widgets, iphone apps, blogging plugins, you name it.

DayLife challenge

DayLife is a news aggregation platform with strong named entity (NE) recognition capability. NEs are also called ‘Topics’, and they fall under the types ‘Person’, ‘Place’ and ‘Organization’.
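The Daylife API’s actual endpoints and field names aren’t shown in this post, so the payload below is a made-up assumption; a sketch of filtering Topics by type might look like this:

```python
import json

# Hypothetical shape of a Daylife API topic listing -- the field names
# and values here are assumptions for illustration, not the real API.
SAMPLE = '''
{"topics": [
  {"name": "Barack Obama", "type": "Person"},
  {"name": "New York", "type": "Place"},
  {"name": "Google", "type": "Organization"}
]}
'''

def topics_of_type(payload, topic_type):
    """Filter the recognized named entities ('Topics') by type."""
    return [t["name"] for t in json.loads(payload)["topics"]
            if t["type"] == topic_type]

print(topics_of_type(SAMPLE, "Person"))  # ['Barack Obama']
```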

Ontology is Overrated

June 5, 2008

This is an extract (a summary by sentence extraction – like the text summarizers of the old days used to do ;) of Clay Shirky’s blog post titled ‘Ontology is Overrated’.

* * *

Today I want to talk about categorization, and […] I want to convince you that many of the ways we’re attempting to apply categorization to the electronic world are actually a bad fit.

What I think is coming instead are […] organic ways of organizing information […], based on two units — the link, which can point to anything, and the tag, which is a way of attaching labels to links.

PART I: Classification and Its Discontents

The question ontology asks is: What kinds of things exist or can exist in the world, and what manner of relations can those things have to each other?

If you’ve got a large, ill-defined corpus, if you’ve got naive users, if your cataloguers aren’t expert, if there’s no one to say authoritatively what’s going on, then ontology is going to be a bad strategy.

One of the biggest problems with categorizing things in advance is that it forces the categorizers […] to guess what their users are thinking, and to make predictions about the future.

When people [are] offered search [e.g., Web search] and categorization [e.g., Web directory] side-by-side, fewer and fewer people [are] using categorization to find things.

Part II: The Only Group That Can Categorize Everything Is Everybody

Now imagine a world where everything can have a unique identifier. This should be easy, since that’s the world we currently live in — the URL gives us a way to create a globally unique ID for anything we need to point to.

And once you can do that, anyone can label those pointers, can tag those URLs, in ways that make them more valuable, and all without requiring top-down organization schemes.

As [Joshua] Schachter says of del.icio.us, “Each individual categorization scheme is worth less than a professional categorization scheme. But there are many, many more of them.” If you find a way to make it valuable to individuals to tag their stuff, you’ll generate a lot more data about any given object than if you pay a professional to tag it once and only once.

Well-managed, well-groomed organizational schemes get worse with scale, both because the costs of supporting such schemes at large volumes are prohibitive, and, as I noted earlier, scaling over time is also a serious problem. Tagging, by contrast, gets better with scale. With a multiplicity of points of view the question isn’t “Is everyone tagging any given link ‘correctly’”, but rather “Is anyone tagging it the way I do?” As long as at least one other person tags something the way you would, you’ll find it […].

We are moving away from binary categorization — books either are or are not entertainment — and into this probabilistic world, where N% of users think books are entertainment.

* * *

Difficult to Pwn IM Language iykwimaityd

June 1, 2008

Researchers at the University of Toronto, Canada, suggest that instant messaging represents “an expansive new linguistic renaissance” (story from New Scientist).

We’ve tried seeding YooName with a list of well-known Internet slang expressions such as LOL, brb, and OMG.

YooName found 993 pages on the Internet containing lexicons (or structured repositories) of Internet slang, and it collected a list of 1,718 unique expressions. Interestingly, more than a quarter of these expressions are ambiguous with other categories of words: for example, brb (be right back) is also a ticker symbol, lol (laugh out loud) is a place in Papua New Guinea, and asap (as soon as possible) is also the name of a company.

We’ve updated the YooName lexicon and rule system to recognize and annotate Internet slang… but because of its high ambiguity and unconventional syntax, it is very difficult to pwn!