Archive for the ‘NE Ecosystem’ Category

Pushing Automation a Step Forward

December 13, 2008

(and hoping not to fall off the cliff)

* * *

I recently worked on an implementation of the ‘Stacked Skews Model’, an algorithm proposed by Andrew Carlson and Charles Schafer.

The idea is to train a web page wrapper induction algorithm (let’s call that a ‘wrapper’) to extract information, using a small number of already trained wrappers for sites in the same domain. For instance, if you already have four wrappers for hotel booking web sites in hand, you can use them to bootstrap new wrappers for virtually any hotel booking web site out there.

[Image: sample web page wrapper annotations]

What’s clever in Carlson and Schafer’s solution is how it overcomes the lack of annotated examples, given the huge search space for such a problem: it works on feature distributions and distribution divergences instead of relying directly on surface evidence. In other words, when the system learns what the name of a hotel is, it learns how each feature is distributed and how similar the solution must be (e.g., hotel name length is around 20 characters, hotel names often contain the word ‘hotel’ or ‘resort’, etc.). It is basically equivalent to creating one classifier per feature and, as the authors suggest, stacking them using linear regression.
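
To make that concrete, here is a minimal sketch of the stacking idea (not the authors’ actual model): per-feature statistics are learned from the already-wrapped sites, each feature yields a divergence-like score for a candidate, and a linear regression combines the scores. The hotel names, the feature set and the distance measure are illustrative assumptions of mine.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical feature extractor for a 'hotel name' field.
def features(text):
    return {
        "length": len(text),
        "has_hotel_word": int(any(w in text.lower() for w in ("hotel", "resort"))),
        "n_tokens": len(text.split()),
    }

# Learn how each feature is distributed over field values extracted
# by the already-trained wrappers (the source sites).
def feature_stats(known_values):
    vecs = [features(v) for v in known_values]
    return {k: (np.mean([v[k] for v in vecs]),
                np.std([v[k] for v in vecs]) + 1e-6)
            for k in vecs[0]}

# One weak scorer per feature: distance between a candidate and the
# learned distribution (a crude stand-in for the paper's divergences).
def per_feature_scores(candidate, stats):
    f = features(candidate)
    return [abs(f[k] - mu) / sd for k, (mu, sd) in stats.items()]

# Stack the per-feature scorers with linear regression:
# label 1 for true field values, 0 for distractors.
known = ["Hilton Garden Inn", "Fairmont Queen Elizabeth Hotel", "Delta Centre-Ville"]
stats = feature_stats(known)
distractors = ["Book now!", "Free WiFi available in all rooms"]
X = [per_feature_scores(c, stats) for c in known + distractors]
y = [1, 1, 1, 0, 0]
stacker = LinearRegression().fit(X, y)
print(stacker.predict([per_feature_scores("Sofitel Montreal Golden Mile", stats)]))
```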

My implementation didn’t exactly work as advertised, which is normal ;) Even if stacked models reduce the feature space and diminish overfitting, the problem is still enormous, and one or two features tend to dominate the stack. However, I made significant progress by playing around with the published ideas.

First, do connect to an ontology. OK, I’m not a big fan of ontological features and only use them as a last resort, but here they made a real difference. When wrapping hotel web sites, connect to the WordNet synset ‘hotel’ and use all synonyms and related words as features.
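
Here is what that could look like with NLTK’s WordNet interface (a sketch; it assumes nltk and its WordNet corpus are installed, and the helper function is my own, not part of any wrapper toolkit):

```python
from nltk.corpus import wordnet as wn

# Collect 'hotel'-related words: synonyms from the synsets themselves,
# plus related words from their hyponyms ('motel', 'resort hotel', ...).
hotel_lexicon = set()
for synset in wn.synsets("hotel"):
    hotel_lexicon.update(l.name().replace("_", " ") for l in synset.lemmas())
    for hypo in synset.hyponyms():
        hotel_lexicon.update(l.name().replace("_", " ") for l in hypo.lemmas())

# Membership in the lexicon becomes a binary feature.
def ontology_feature(text):
    t = text.lower()
    return int(any(word in t for word in hotel_lexicon))

print(sorted(hotel_lexicon))
print(ontology_feature("Sofitel Montreal: a luxury hotel downtown"))  # -> 1
```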

Also, do use DOM tree features. In their article, Carlson and Schafer limit learning to features of the textual information (the current node’s text and the previous node’s text). However, the DOM tree is very useful here. For instance, desirable information tends to sit deep in the tree, almost in juxtaposition. Also, a hotel name is more likely to appear in its own HTML tag (bold, header, etc.), while amenities are often enumerated (lists, tables, etc.).
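
A sketch of what such DOM features could look like with BeautifulSoup; the feature names and tag lists are illustrative choices of mine, not the paper’s:

```python
from bs4 import BeautifulSoup

html = """<html><body><div><h1>Hotel Reina Victoria</h1>
<ul><li>Pool</li><li>Free parking</li></ul></div></body></html>"""
soup = BeautifulSoup(html, "html.parser")

def dom_features(text_node):
    tags = [p.name for p in text_node.parents
            if p.name and p.name != "[document]"]
    return {
        "depth": len(tags),                               # desirable content sits deep
        "own_tag": int(text_node.parent.name in           # hotel names get their own tag
                       ("b", "strong", "h1", "h2", "h3")),
        "enumerated": int(any(t in ("ul", "ol", "table")  # amenities are enumerated
                              for t in tags)),
    }

for node in soup.find_all(string=True):
    if node.strip():
        print(repr(node.strip()), dom_features(node))
```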

Finally, to reduce overfitting further, I split the feature space into independent groups and applied a voting scheme over the resulting ensemble.
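
A toy illustration of that voting scheme, assuming each independent feature group has already picked its best candidate on its own (the group names and the strict-majority rule are illustrative):

```python
from collections import Counter

def vote(group_winners):
    """group_winners maps each feature group to the candidate it ranked first."""
    ballots = Counter(group_winners.values())
    winner, votes = ballots.most_common(1)[0]
    # Accept the extraction only on a strict majority; otherwise abstain.
    return winner if votes > len(group_winners) / 2 else None

print(vote({
    "textual":     "Hotel Reina Victoria",
    "dom":         "Hotel Reina Victoria",
    "ontological": "Best rates guaranteed",
}))  # -> 'Hotel Reina Victoria' (2 of 3 groups agree)
```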

The New York Times Annotated Corpus

November 1, 2008

The New York Times just released (through LDC) a gigantic corpus including:

Over 1.5 million articles manually tagged by The New York Times Index Department with a normalized indexing vocabulary of people, organizations, locations and topic descriptors. [...] Articles are tagged for persons, places, organizations, titles and topics using a controlled vocabulary that is applied consistently across articles. For instance if one article mentions “Bill Clinton” and another refers to “President William Jefferson Clinton”, both articles will be tagged with “CLINTON, BILL”.

According to the documentation, there are hand-assigned meta annotations (describing text content) using a controlled vocabulary:

  • 1.3M persons
  • 600k locations
  • 600k organizations

as well as algorithmically assigned and manually verified online annotations (tagged within the text):

  • 114k persons
  • 124k locations
  • 136k organizations

Thanks, Peter, for forwarding the news.

Semantic Knowledge Discovery, Organization and Use

August 28, 2008

New York University will host a symposium on Semantic Knowledge Discovery in mid-November. The presentations will consist partly of invited talks by leaders in the field and partly of general submissions.

The focus of NLP research has been shifting towards semantic analysis from syntactic analysis. It has become evident that the methods employed for developing syntactic analyzers, i.e. supervised methods using small annotated corpora, are not the best methods for the semantic task. In order to handle semantics, we need large amounts of knowledge which may be best collected by semi/un-supervised methods from a huge unannotated corpus.

Interesting new research problem

June 25, 2008

This article, found via digg, highlights an inherent ‘by-design’ flaw of automatic news aggregators, including Google News: they need a significant amount of press coverage before promoting news to their front page. As a result, automatic news aggregators are often hours late in covering breaking news.

The solution to the problem of “finding the most important news right now” cannot rely on an hour or so of news history. After one hour, it is no longer breaking news; it is late and repetitive.

Let’s formulate a challenging research problem from that: “Given a novel and unique news item, can you predict that there will be thousands of repetitions and reformulations?”

DayLife Developer Challenge

June 13, 2008

DayLife is staging a challenge from June 3rd to July 25th (extended deadline):

Build the future of news, in software!

Build an application that uses the Daylife API. No limits here: mashups, portals, widgets, iPhone apps, blogging plugins, you name it.

[Image: DayLife challenge]

DayLife is a news aggregation platform with strong named entity (NE) recognition capability. NEs are also called ‘Topics’, and they fall under the types ‘Person’, ‘Place’ and ‘Organization’.

Ontology is Overrated

June 5, 2008

This is an extract (a summary by sentence extraction, like those old-time text summarizers used to produce ;) of Clay Shirky’s blog post titled ‘Ontology is Overrated’.

* * *

Today I want to talk about categorization, and [...] I want to convince you that many of the ways we’re attempting to apply categorization to the electronic world are actually a bad fit.

What I think is coming instead are [...] organic ways of organizing information [...], based on two units — the link, which can point to anything, and the tag, which is a way of attaching labels to links.

PART I: Classification and Its Discontents

The question ontology asks is: What kinds of things exist or can exist in the world, and what manner of relations can those things have to each other?

If you’ve got a large, ill-defined corpus, if you’ve got naive users, if your cataloguers aren’t expert, if there’s no one to say authoritatively what’s going on, then ontology is going to be a bad strategy.

One of the biggest problems with categorizing things in advance is that it forces the categorizers [...] to guess what their users are thinking, and to make predictions about the future.

When people [are] offered search [e.g., Web search] and categorization [e.g., Web directory] side-by-side, fewer and fewer people [are] using categorization to find things.

Part II: The Only Group That Can Categorize Everything Is Everybody

Now imagine a world where everything can have a unique identifier. This should be easy, since that’s the world we currently live in — the URL gives us a way to create a globally unique ID for anything we need to point to.

And once you can do that, anyone can label those pointers, can tag those URLs, in ways that make them more valuable, and all without requiring top-down organization schemes.

As [Joshua] Schachter says of del.icio.us, “Each individual categorization scheme is worth less than a professional categorization scheme. But there are many, many more of them.” If you find a way to make it valuable to individuals to tag their stuff, you’ll generate a lot more data about any given object than if you pay a professional to tag it once and only once.

Well-managed, well-groomed organizational schemes get worse with scale, both because the costs of supporting such schemes at large volumes are prohibitive, and, as I noted earlier, scaling over time is also a serious problem. Tagging, by contrast, gets better with scale. With a multiplicity of points of view the question isn’t “Is everyone tagging any given link ‘correctly’”, but rather “Is anyone tagging it the way I do?” As long as at least one other person tags something the way you would, you’ll find it [...].

We are moving away from binary categorization — books either are or are not entertainment — and into this probabilistic world, where N% of users think books are entertainment.

* * *

Top 5 Natural Language Processing Applications

May 13, 2008

Over the last few decades, Natural Language Processing (NLP) has been equally hyped and criticized. All in all, many applications have emerged in the real world following intense and continued research and development. Here’s a list of the most prominent success stories.

Given that this blog is about named entity recognition (NER), itself an NLP application, we would be biased if we included NER in this list. As such, we’ve excluded ourselves from the chart-toppers ;)

#5: Chat bots

"HELLO, MY NAME IS DOCTOR SBAITSO.
I AM HERE TO HELP YOU.
SAY WHATEVER IS ON YOUR MIND FREELY,
OUR CONVERSATION WILL BE KEPT IN THE STRICTEST CONFIDENCE.
MEMORY CONTENTS WILL BE WIPED CLEAN AFTER YOU LEAVE,
SO, TELL ME ABOUT YOUR PROBLEMS."

The first time I chatted with Dr. Sbaitso, I was about 12 years old. Probably more than anything else, it has influenced my career path. Since then, chat bots such as ELIZA, A.L.I.C.E. and Jabberwacky have propelled the art of conversational robots, leading to Automated Service Agent applications (see NextIT).

For their lasting impact on generations of NLP developers, and for the interesting improvements that ensued, chat bots rank #5.

#4: NLP-based search engines

Ask Jeeves pioneered it, Powerset redefined it, but we are all somewhat skeptical when it comes to beating Google's classic vector space models and ranking techniques. Do we really need shallow NLP parsing to answer "When did Einstein die," or will statistical fact extraction suffice?

Though NLP-based search is the Holy Grail of NLPers, it has not yet surpassed current information retrieval techniques. As such, NLP-based search engines rank #4.

#3: Speech recognition

Microsoft and Ford just teamed up to develop in-car speech recognition. But they forgot to include Electronic Voice Alert, a feature of mid-80s luxury Chrysler cars!

In all seriousness, automatic speech recognition (ASR) is a vital application for hands-free computing (for disabled persons, or in certain circumstances such as driving) and for transcription. It is also poised to revolutionize audio-video content retrieval.

For where it came from, and for where it's going, ASR ranks #3.

#2: Machine translation

"It is apparent to me that the possibilities of the aeroplane, which two or three years ago were thought to hold the solution to the [flying machine] problem, have been exhausted, and that we must turn elsewhere." - Thomas Edison, inventor, 1895

The "heavier-than-air" problem that once plagued flight technology is probably the best comparison we can make to AI and machine translation (MT). It was long believed that MT would require a completely automatic understanding of human language before a resolution finally came. But today's Google and Government of Canada systems surpass human translation abilities (can you translate from French to Chinese? Not me.) Their good level of precision makes them useful in many applications.

People are constantly pinpointing these systems' shortcomings, but nobody would contest their second-place ranking on this list.

#1: Knowledge discovery in texts

Have you ever heard of software that finds new relationships and interactions between genes, proteins or cells? By mining large collections of scientific literature, NLP agents can discover and highlight novel and surprising knowledge.

What makes knowledge discovery so promising is the hope that, in the near future, we may monitor all those documents that are simply too abundant to be processed manually. Early forms of knowledge discovery, such as data mining, are already used for Business Intelligence (BI), and outside the NLP world, examples of machine-made inventions already exist.

As a form of technological singularity, and as an emerging field of research for NLP, knowledge discovery gets first place on this list of top NLP applications.

NER Demos on the Web

March 8, 2008

Here’s a list of demos for Named Entity Recognition technologies:

Are you aware of any other demos? Send us the links!

What is a Named Entity?

February 12, 2008

To our surprise, when it comes to defining the task of Named Entity Recognition (NER), nobody seems to question the inclusion of temporal expressions and measures. This probably deserves some historical consideration, since the domain was popularized by information extraction competitions in which, clearly, the date and the money generated by an event were crucial. But we receive a lot of questions about the inclusion of some types, specifically those written as common nouns. Think about sports, minerals, or species. Should they be included in the task? What about genes and proteins, which don’t refer to individual entities but are often included as well?

It seems that anyone who tries to define the task eventually falls back on practical considerations, like filling templates and answering questions.

Let’s try to sort things out, and let’s fall back on practical considerations ourselves.

We identified five different criteria for determining the essence of named entities:

Orthographic criterion: named entities are usually capitalized. Capitalization rules for multiword proper nouns change from one language to the next (e.g., ‘House of Representatives’ vs. ‘Chambre des communes’). In German, all nouns are capitalized. [source]

Translation criterion: named entities usually do not translate from one language to the next. However, the transcribed names of places, monarchs, popes, and non-contemporary authors often share spelling, and are sometimes universal. [source]

Generic/specific criterion: named entities usually refer to single individuals. A mention of “John Smith” refers to an individual, but the gene “P53-wt” or the product “Canon EOS Rebel Xti” refers to multiple instances of entities. [source]

Rigid designation criterion: named entities usually designate rigidly. Proper names and certain natural terms, including biological taxa and types of natural substances (most famously “water” and “H2O”), are rigid designators. [source]

Information Extraction (IE) criterion: named entities fill some predefined “Who? What? Where? When?” template. This surely includes money, measures, dates, times, proper names, and themes such as accidents, murders, elections, etc. [source]

Let’s take a closer look at some examples and the criteria they meet:

God: capitalized, translatable, single individual*, rigid, useful in IE

London: capitalized, translatable, single individual, rigid, useful in IE

John Smith: capitalized, not translatable, single individual, rigid, useful in IE

water: not capitalized, translatable, not a single individual, rigid, useful in IE

Miss America: capitalized, translatable, not a single individual, not rigid, useful in IE

the first Chancellor of the German Empire: not capitalized, translatable, single individual, not rigid*, useful in IE

Canon EOS Rebel Xti: capitalized, not translatable, not a single individual, not rigid, useful in IE

iPhone: not capitalized*, not translatable, not a single individual, not rigid, useful in IE

hockey: not capitalized, translatable, not a single individual, not rigid, useful in IE

$10: not capitalized, not translatable, not a single individual, not rigid, useful in IE

* Alright, it could be up for debate…

No single criterion accurately covers the named entity class. Capitalization is language-specific and sometimes falls short. Translatability is inconsistent. Specificity and rigid designation miss important types, such as money and products. The only criterion that encompasses them all is usefulness in information extraction, but it’s way too broad.

Our definition is a practical one. It stems from the way YooName works:

“The types recognized by NER are any sets of words that intersect with an NER type.”

This is ugly and circular, but it is practical!

We started by including Person, Location and Organization. These sets were ambiguous with products, songs, book titles, fruits, etc., so we added those as new sets. We expanded the number of types to 100, as guided by our definition. We calculated that less than 1% of the millions of entities we have are ambiguous with sets of words that are not handled so far. The problem is that this 1% is so diverse that we’ll need to add thousands of new types.

TextMap juxtaposition algorithm

February 2, 2008

TextMap, the entity search engine, just published their juxtaposition algorithm.

The paper is dense in ideas, on top of being entertaining:

Concordance-Based Entity-Oriented Search, by Mikhail Bautin and Steven Skiena.

The algorithm very roughly goes as follows:

  1. Annotate every entity in every document;
  2. Extract all sentences containing an entity;
  3. Delete duplicate sentences corpus-wide (using MD5 hashing for duplicate detection);
  4. Use Lucene to index tuples of [entity, concatenation of all sentences containing it];
  5. Apply a special ranking function.

The search is conducted with a special scoring scheme (TF-IDF minus sensitivity to document length), and the result of a query (e.g., ‘Montreal’) is a list of entities closely related to it (‘Montreal Canadiens’, ‘Saku Koivu’, etc.).
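
Here is a rough sketch of steps 1 to 4 as I read them, with MD5 deduplication and a plain dictionary standing in for the Lucene index (the sample sentences and annotations are made up):

```python
import hashlib
from collections import defaultdict

# Sentences from steps 1-2, each with the entities annotated in it.
extracted = [
    ("The game was played in Montreal last night.", ["Montreal"]),
    ("Saku Koivu scored twice for Montreal.", ["Saku Koivu", "Montreal"]),
    ("The game was played in Montreal last night.", ["Montreal"]),  # duplicate
]

# Step 3: corpus-wide duplicate removal via MD5 hashing.
seen, unique = set(), []
for sentence, entities in extracted:
    digest = hashlib.md5(sentence.encode("utf-8")).hexdigest()
    if digest not in seen:
        seen.add(digest)
        unique.append((sentence, entities))

# Step 4: one [entity, concatenation of its sentences] tuple per entity.
concordance = defaultdict(list)
for sentence, entities in unique:
    for entity in entities:
        concordance[entity].append(sentence)

for entity, sentences in concordance.items():
    print(entity, "->", " ".join(sentences))
```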

