Part of Speech Tags in Text Mining

In this post, we will look at information extraction from unstructured data. Information extraction has many applications, including business intelligence, resume harvesting, media analysis, sentiment detection, patent search, and email scanning. The nltk book describes a simple pipeline architecture for an information extraction system (Figure 1): the system takes the raw text of a document as its input and generates a list of (entity, relation, entity) tuples as its output. For example, given a document indicating that the company IBM is located in Atlanta, it might generate the tuple ([ORG: 'IBM'] 'in' [LOC: 'Atlanta']).
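
The first stages of that pipeline (sentence segmentation, tokenization, and part-of-speech tagging) map directly onto NLTK calls. Below is a minimal sketch of that preprocessing step, assuming the standard NLTK tokenizer and tagger models are installed; the preprocess() helper name and the example text are mine, for illustration only.

import nltk

def preprocess(document):
    # raw text -> sentences
    sentences = nltk.sent_tokenize(document)
    # sentences -> lists of word tokens
    sentences = [nltk.word_tokenize(sent) for sent in sentences]
    # tokens -> (word, tag) pairs, ready for entity detection
    return [nltk.pos_tag(sent) for sent in sentences]

print(preprocess("IBM is located in Atlanta."))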

The basic technique we will use for entity detection is chunking, which segments and labels multi-token sequences such as noun phrases.

Noun-Phrase Chunking (NP): We will begin by considering the task of noun phrase chunking, or NP-chunking, where we search for chunks corresponding to individual noun phrases. For example, here is some Wall Street Journal text with NP-chunks marked using brackets:


[ The/DT market/NN ] for/IN [ system-management/NN software/NN ] for/IN [ Digital/NNP ]
[ 's/POS hardware/NN ] is/VBZ fragmented/JJ enough/RB that/IN [ a/DT giant/NN ] such/JJ as/IN
[ Computer/NNP Associates/NNPS ] should/MD do/VB well/RB there/RB ./.

A frequently asked question is: what do the part-of-speech tags (VB, JJ, etc.) mean? Training data generally takes a lot of work to create, so a pre-existing corpus is typically used, and these corpora usually use either the Penn Treebank or the Brown Corpus tag set.

The Penn Treebank tags are the ones most commonly used in NLP tasks, and they are listed below:

Tag Description Examples
$ dollar $ -$ --$ A$ C$ HK$ M$ NZ$ S$ U.S.$ US$
`` opening quotation mark ` ``
'' closing quotation mark ' ''
( opening parenthesis ( [ {
) closing parenthesis ) ] }
, comma ,
-- dash --
. sentence terminator . ! ?
: colon or ellipsis : ; ...
CC conjunction, coordinating & ‘n and both but either et for less minus neither nor or plus so therefore times v. versus vs. whether yet
CD numeral, cardinal mid-1890 nine-thirty forty-two one-tenth ten million 0.5 one forty-seven 1987 twenty ’79 zero two 78-degrees eighty-four IX ’60s .025 fifteen 271,124 dozen quintillion DM2,000 …
DT determiner all an another any both del each either every half la many much nary neither no some such that the them these this those
EX existential there there
FW foreign word gemeinschaft hund ich jeux habeas Haementeria Herr K’ang-si vous lutihaw alai je jour objets salutaris fille quibusdam pas trop Monte terram fiche oui corporis …
IN preposition or conjunction, subordinating astride among uppon whether out inside pro despite on by throughout below within for towards near behind atop around if like until below next into if beside …
JJ adjective or numeral, ordinal third ill-mannered pre-war regrettable oiled calamitous first separable ectoplasmic battery-powered participatory fourth still-to-be-named multilingual multi-disciplinary …
JJR adjective, comparative bleaker braver breezier briefer brighter brisker broader bumper busier calmer cheaper choosier cleaner clearer closer colder commoner costlier cozier creamier crunchier cuter …
JJS adjective, superlative calmest cheapest choicest classiest cleanest clearest closest commonest corniest costliest crassest creepiest crudest cutest darkest deadliest dearest deepest densest dinkiest …
LS list item marker A A. B B. C C. D E F First G H I J K One SP-44001 SP-44002 SP-44005 SP-44007 Second Third Three Two * a b c d first five four one six three two
MD modal auxiliary can cannot could couldn’t dare may might must need ought shall should shouldn’t will would
NN noun, common, singular or mass common-carrier cabbage knuckle-duster Casino afghan shed thermostat investment slide humour falloff slick wind hyena override subhumanity machinist …
NNP noun, proper, singular Motown Venneboerger Czestochwa Ranzer Conchita Trumplane Christos Oceanside Escobar Kreisler Sawyer Cougar Yvette Ervin ODI Darryl CTCA Shannon A.K.C. Meltex Liverpool …
NNPS noun, proper, plural Americans Americas Amharas Amityvilles Amusements Anarcho-Syndicalists Andalusians Andes Andruses Angels Animals Anthony Antilles Antiques Apache Apaches Apocrypha …
NNS noun, common, plural undergraduates scotches bric-a-brac products bodyguards facets coasts divestitures storehouses designs clubs fragrances averages subjectivists apprehensions muses factory-jobs …
PDT pre-determiner all both half many quite such sure this
POS genitive marker ‘ ‘s
PRP pronoun, personal hers herself him himself hisself it itself me myself one oneself ours ourselves ownself self she thee theirs them themselves they thou thy us
PRP$ pronoun, possessive her his mine my our ours their thy your
RB adverb occasionally unabatingly maddeningly adventurously professedly stirringly prominently technologically magisterially predominately swiftly fiscally pitilessly …
RBR adverb, comparative further gloomier grander graver greater grimmer harder harsher healthier heavier higher however larger later leaner lengthier less-perfectly lesser lonelier longer louder lower more …
RBS adverb, superlative best biggest bluntest earliest farthest first furthest hardest heartiest highest largest least less most nearest second tightest worst
RP particle aboard about across along apart around aside at away back before behind by crop down ever fast for forth from go high i.e. in into just later low more off on open out over per pie raising start teeth that through under unto up up-pp upon whole with you
SYM symbol % & ' '' ''. ) ). * + ,. < = > @ A[fj] U.S U.S.S.R * ** ***
TO “to” as preposition or infinitive marker to
UH interjection Goodbye Goody Gosh Wow Jeepers Jee-sus Hubba Hey Kee-reist Oops amen huh howdy uh dammit whammo shucks heck anyways whodunnit honey golly man baby diddle hush sonuvabitch …
VB verb, base form ask assemble assess assign assume atone attention avoid bake balkanize bank begin behold believe bend benefit bevel beware bless boil bomb boost brace break bring broil brush build …
VBD verb, past tense dipped pleaded swiped regummed soaked tidied convened halted registered cushioned exacted snubbed strode aimed adopted belied figgered speculated wore appreciated contemplated …
VBG verb, present participle or gerund telegraphing stirring focusing angering judging stalling lactating hankerin’ alleging veering capping approaching traveling besieging encrypting interrupting erasing wincing …
VBN verb, past participle multihulled dilapidated aerosolized chaired languished panelized used experimented flourished imitated reunifed factored condensed sheared unsettled primed dubbed desired …
VBP verb, present tense, not 3rd person singular predominate wrap resort sue twist spill cure lengthen brush terminate appear tend stray glisten obtain comprise detest tease attract emphasize mold postpone sever return wag …
VBZ verb, present tense, 3rd person singular bases reconstructs marks mixes displeases seals carps weaves snatches slumps stretches authorizes smolders pictures emerges stockpiles seduces fizzes uses bolsters slaps speaks pleads …
WDT WH-determiner that what whatever which whichever
WP WH-pronoun that what whatever whatsoever which who whom whosoever
WP$ WH-pronoun, possessive whose
WRB Wh-adverb how however whence whenever where whereby whereever wherein whereof why
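
These descriptions can also be looked up programmatically. As a small aside (assuming the NLTK 'tagsets' data package has been downloaded), nltk.help.upenn_tagset() prints the definition and examples for any tag, and accepts a regular expression to match several tags at once:

import nltk
# nltk.download('tagsets')        # one-time download of the tag documentation
nltk.help.upenn_tagset('JJ')      # definition and examples for a single tag
nltk.help.upenn_tagset('NN.*')    # matches NN, NNP, NNPS and NNS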

One of the most useful sources of information for NP-chunking is part-of-speech tags.

To create a chunk parser, the following steps are required:

Note: Chunking uses a special tag-pattern syntax for the rules that delimit the chunks. These rules must be converted to 'regular' regular expressions before a sentence can be chunked. For more details on chunking, see the chunking chapter of the nltk book.

For example:


from nltk.chunk.regexp import tag_pattern2re_pattern

# a tag pattern: an optional determiner, any number of adjectives, then any kind of noun
tag_pattern = "<DT>?<JJ>*<NN.*>"
regexp_pattern = tag_pattern2re_pattern(tag_pattern)
print(regexp_pattern)

When this code is executed, it will give the following output:


(<(DT)>)?(<(JJ)>)*(<(NN[^\{\}<>]*)>)

Step 1: First define a chunk grammar, consisting of rules that indicate how sentences should be chunked. In Python this is written as a tag-pattern string:

grammar = "NP: {<DT>?<JJ>*<NN>}"  

This rule says that an NP chunk should be formed whenever the chunker finds an optional determiner (DT) followed by any number of adjectives (JJ) and then a noun (NN).

Step 2: Now create an example sentence that has been pre-tagged:

sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]

Step 3: Using the grammar defined in Step 1, create a chunk parser with the nltk.RegexpParser() class:

chunkParser = nltk.RegexpParser(grammar)

Step 4: Now test this chunkParser on the sentence defined in Step 2:

result = chunkParser.parse(sentence)
print(result)

Putting these steps together, following the nltk book, a complete Python script for NP-chunking looks like this:

import nltk

grammar = "NP: {<DT>?<JJ>*<NN>}"
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
chunkParser = nltk.RegexpParser(grammar)
result = chunkParser.parse(sentence)
print(result)

If the above code is executed, it will print the following:


(S
  (NP the/DT little/JJ yellow/JJ dog/NN)
  barked/VBD
  at/IN
  (NP the/DT cat/NN))

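Since parse() returns an nltk.Tree, the NP chunks can also be extracted programmatically rather than just printed. Here is a short sketch (the filter and the join expression are my own, not from the nltk book):

# result is the tree produced by chunkParser.parse(sentence) above
for subtree in result.subtrees(filter=lambda t: t.label() == 'NP'):
    # each leaf of an NP subtree is a (word, tag) pair
    print(" ".join(word for word, tag in subtree.leaves()))

This prints "the little yellow dog" and "the cat", the two NP chunks in the tree above.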

One drawback of the steps above is that I have to manually tag the sentence before the chunk parser can be applied. A better approach is to automate the tokenization and tagging:

Step 1: Tokenize the text using the nltk.word_tokenize() function.

Step 2: Generate POS tags for the tokens from Step 1 using nltk.pos_tag().

import nltk

sentence = "The little red fox jumped over the fence and got entangled in the barbed fence behind it"
text_tokens = nltk.word_tokenize(sentence)   # Step 1: tokenize the raw text
print(nltk.pos_tag(text_tokens))             # Step 2: tag the tokens

And the output when I run the above code is


[('The', 'DT'), ('little', 'JJ'), ('red', 'VBN'), ('fox', 'NN'), ('jumped', 'VBD'), ('over', 'IN'), 
('the', 'DT'), ('fence', 'NN'), ('and', 'CC'), ('got', 'VBD'), ('entangled', 'VBN'), ('in', 'IN'), 
('the', 'DT'), ('barbed', 'VBN'), ('fence', 'NN'), ('behind', 'IN'), ('it', 'PRP')]
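
Putting the two pieces together, the automatically generated tags can be fed straight into the chunk parser, so no manual tagging is needed. Below is a minimal end-to-end sketch, reusing the NP grammar from Step 1 and the sentence above:

import nltk

grammar = "NP: {<DT>?<JJ>*<NN>}"
chunkParser = nltk.RegexpParser(grammar)

sentence = "The little red fox jumped over the fence and got entangled in the barbed fence behind it"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))   # tokenize and tag automatically
print(chunkParser.parse(tagged))                      # chunk the tagged tokens

Note that because the tagger labelled 'red' and 'barbed' as VBN in the run above, those words fall outside the NP chunks that this simple grammar produces.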

Reference

http://www.nltk.org/
