Part of Speech Tags in Text Mining

In this post, we will look at information extraction from unstructured data. Information extraction has many applications, including business intelligence, resume harvesting, media analysis, sentiment detection, patent search, and email scanning. The NLTK book describes the architecture as follows (Figure 1: Simple Pipeline Architecture for an Information Extraction System): the system takes the raw text of a document as its input and generates a list of (entity, relation, entity) tuples as its output. For example, given a document that indicates that the company IBM is located in Atlanta, it might generate the tuple ([ORG: 'IBM'] 'in' [LOC: 'Atlanta']).
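As a minimal sketch of the first stages of this pipeline in NLTK (assuming the relevant NLTK data packages, such as punkt, the POS tagger, and the NE chunker, have been downloaded), entity detection can be run like this; the final relation-extraction step is left out:

import nltk

raw = "IBM is located in Atlanta."
sentences = nltk.sent_tokenize(raw)                  # sentence segmentation
tokens = [nltk.word_tokenize(s) for s in sentences]  # tokenization
tagged = [nltk.pos_tag(t) for t in tokens]           # POS tagging
entities = [nltk.ne_chunk(t) for t in tagged]        # entity detection
print(entities[0])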

The basic technique we will use for entity detection is chunking, which segments and labels multi-token sequences such as noun phrases.

Noun-Phrase Chunking (NP): We will begin by considering the task of noun phrase chunking, or NP-chunking,
where we search for chunks corresponding to individual noun phrases. For example, here is some Wall Street Journal text with NP-chunks marked using brackets:

[ The/DT market/NN ] for/IN [ system-management/NN software/NN ] for/IN [ Digital/NNP ]
[ 's/POS hardware/NN ] is/VBZ fragmented/JJ enough/RB that/IN [ a/DT giant/NN ] such/JJ as/IN
[ Computer/NNP Associates/NNPS ] should/MD do/VB well/RB there/RB ./.

A frequently asked question is: what do the part-of-speech tags (VB, JJ, etc.) mean?
Training data generally takes a lot of work to create, so a pre-existing corpus is typically used.
These usually use the Penn Treebank or Brown Corpus tags.

The Penn Treebank tags are the most commonly used in NLP tasks, and they are listed below.

Tag Description Examples
$ dollar $ -$ --$ A$ C$ HK$ M$ NZ$ S$ U.S.$ US$
`` opening quotation mark ` ``
'' closing quotation mark ' ''
( opening parenthesis ( [ {
) closing parenthesis ) ] }
, comma ,
. sentence terminator . ! ?
: colon or ellipsis : ; …
CC conjunction, coordinating & ‘n and both but either et for less minus neither nor or plus so therefore times v. versus vs. whether yet
CD numeral, cardinal mid-1890 nine-thirty forty-two one-tenth ten million 0.5 one forty-seven 1987 twenty ’79 zero two 78-degrees eighty-four IX ’60s .025 fifteen 271,124 dozen quintillion DM2,000 …
DT determiner all an another any both del each either every half la many much nary neither no some such that the them these this those
EX existential there there
FW foreign word gemeinschaft hund ich jeux habeas Haementeria Herr K’ang-si vous lutihaw alai je jour objets salutaris fille quibusdam pas trop Monte terram fiche oui corporis …
IN preposition or conjunction, subordinating astride among uppon whether out inside pro despite on by throughout below within for towards near behind atop around if like until below next into if beside …
JJ adjective or numeral, ordinal third ill-mannered pre-war regrettable oiled calamitous first separable ectoplasmic battery-powered participatory fourth still-to-be-named multilingual multi-disciplinary …
JJR adjective, comparative bleaker braver breezier briefer brighter brisker broader bumper busier calmer cheaper choosier cleaner clearer closer colder commoner costlier cozier creamier crunchier cuter …
JJS adjective, superlative calmest cheapest choicest classiest cleanest clearest closest commonest corniest costliest crassest creepiest crudest cutest darkest deadliest dearest deepest densest dinkiest …
LS list item marker A A. B B. C C. D E F First G H I J K One SP-44001 SP-44002 SP-44005 SP-44007 Second Third Three Two * a b c d first five four one six three two
MD modal auxiliary can cannot could couldn’t dare may might must need ought shall should shouldn’t will would
NN noun, common, singular or mass common-carrier cabbage knuckle-duster Casino afghan shed thermostat investment slide humour falloff slick wind hyena override subhumanity machinist …
NNP noun, proper, singular Motown Venneboerger Czestochwa Ranzer Conchita Trumplane Christos Oceanside Escobar Kreisler Sawyer Cougar Yvette Ervin ODI Darryl CTCA Shannon A.K.C. Meltex Liverpool …
NNPS noun, proper, plural Americans Americas Amharas Amityvilles Amusements Anarcho-Syndicalists Andalusians Andes Andruses Angels Animals Anthony Antilles Antiques Apache Apaches Apocrypha …
NNS noun, common, plural undergraduates scotches bric-a-brac products bodyguards facets coasts divestitures storehouses designs clubs fragrances averages subjectivists apprehensions muses factory-jobs …
PDT pre-determiner all both half many quite such sure this
POS genitive marker ‘ ‘s
PRP pronoun, personal hers herself him himself hisself it itself me myself one oneself ours ourselves ownself self she thee theirs them themselves they thou thy us
PRP$ pronoun, possessive her his mine my our ours their thy your
RB adverb occasionally unabatingly maddeningly adventurously professedly stirringly prominently technologically magisterially predominately swiftly fiscally pitilessly …
RBR adverb, comparative further gloomier grander graver greater grimmer harder harsher healthier heavier higher however larger later leaner lengthier less-perfectly lesser lonelier longer louder lower more …
RBS adverb, superlative best biggest bluntest earliest farthest first furthest hardest heartiest highest largest least less most nearest second tightest worst
RP particle aboard about across along apart around aside at away back before behind by crop down ever fast for forth from go high i.e. in into just later low more off on open out over per pie raising start teeth that through under unto up up-pp upon whole with you
SYM symbol % & ‘ ” ”. ) ). * + ,. < = > @ A[fj] U.S U.S.S.R * ** ***
TO “to” as preposition or infinitive marker to
UH interjection Goodbye Goody Gosh Wow Jeepers Jee-sus Hubba Hey Kee-reist Oops amen huh howdy uh dammit whammo shucks heck anyways whodunnit honey golly man baby diddle hush sonuvabitch …
VB verb, base form ask assemble assess assign assume atone attention avoid bake balkanize bank begin behold believe bend benefit bevel beware bless boil bomb boost brace break bring broil brush build …
VBD verb, past tense dipped pleaded swiped regummed soaked tidied convened halted registered cushioned exacted snubbed strode aimed adopted belied figgered speculated wore appreciated contemplated …
VBG verb, present participle or gerund telegraphing stirring focusing angering judging stalling lactating hankerin’ alleging veering capping approaching traveling besieging encrypting interrupting erasing wincing …
VBN verb, past participle multihulled dilapidated aerosolized chaired languished panelized used experimented flourished imitated reunifed factored condensed sheared unsettled primed dubbed desired …
VBP verb, present tense, not 3rd person singular predominate wrap resort sue twist spill cure lengthen brush terminate appear tend stray glisten obtain comprise detest tease attract emphasize mold postpone sever return wag …
VBZ verb, present tense, 3rd person singular bases reconstructs marks mixes displeases seals carps weaves snatches slumps stretches authorizes smolders pictures emerges stockpiles seduces fizzes uses bolsters slaps speaks pleads …
WDT WH-determiner that what whatever which whichever
WP WH-pronoun that what whatever whatsoever which who whom whosoever
WP$ WH-pronoun, possessive whose
WRB Wh-adverb how however whence whenever where whereby whereever wherein whereof why
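Incidentally, you do not have to memorize this table; NLTK can print these descriptions and examples for you (assuming the 'tagsets' data package has been downloaded):

import nltk
# nltk.download('tagsets')  # one-time download of the tag documentation
nltk.help.upenn_tagset('JJ')    # describe one tag
nltk.help.upenn_tagset('NN.*')  # describe all tags matching a regexp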

One of the most useful sources of information for NP chunking is POS tagging.

So, to create a chunk parser, the following steps are required.

Note: Chunking uses a special regexp syntax for the rules that delimit the chunks. These rules must be converted to 'regular' regular expressions before a sentence can be chunked. For more details on chunking, see chapter 7 of the NLTK book.

For example:

import nltk
from nltk.chunk.regexp import tag_pattern2re_pattern

# Convert a chunk-grammar tag pattern into a plain regular expression
tag_pattern = "<DT>?<JJ>*<NN.*>"
regexp_pattern = tag_pattern2re_pattern(tag_pattern)
print(regexp_pattern)

When this code is executed, it prints the ordinary regular expression that the tag pattern expands to, something like (<(DT)>)?(<(JJ)>)*(<(NN[^\{\}<>]*)>).


Step 1: First, define a chunk grammar consisting of rules that indicate how sentences should be chunked. To do this, use a regular expression over tags:

grammar = "NP: {<DT>?<JJ>*<NN>}"  

This rule says that an NP chunk should be formed whenever the chunker finds an optional determiner (DT) followed by any number of adjectives (JJ) and then a noun (NN).

Step 2: Now create an example sentence that has already been POS-tagged:

sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]

Step 3: Using the grammar defined in Step 1 above, create a chunk parser with the nltk.RegexpParser() class:

chunkParser = nltk.RegexpParser(grammar)

Step 4: Now test this chunkParser on the sentence defined in Step 2:

result = chunkParser.parse(sentence)
print (result)

Putting it all together, and following the NLTK book, a complete script for NP-chunking looks like this:

import nltk
grammar = "NP: {<DT>?<JJ>*<NN>}"
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
chunkParser = nltk.RegexpParser(grammar)
result = chunkParser.parse(sentence)
print (result)

If the above code is executed, it will print the following tree, with the two noun phrases wrapped in NP chunks:

(S
  (NP the/DT little/JJ yellow/JJ dog/NN)
  barked/VBD
  at/IN
  (NP the/DT cat/NN))

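The result is an nltk.Tree. If you only want the NP chunks themselves rather than the whole tree, one way (a small sketch continuing from the script above) is to filter its subtrees:

# result is the tree returned by chunkParser.parse(sentence) above
for subtree in result.subtrees(filter=lambda t: t.label() == 'NP'):
    print(' '.join(word for word, tag in subtree.leaves()))
# prints:
# the little yellow dog
# the cat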

One drawback of the steps above is that I have to manually tag the sentence before handing it to the chunk parser. A better approach is to automate the tokenization and tagging:

Step 1: Tokenize the text using the nltk.word_tokenize(yourText) function.

Step 2: Generate POS tags for the tokens produced in Step 1:

import nltk

sentence = "The little red fox jumped over the fence and got entangled in the barbed fence behind it"
text_tokens = nltk.word_tokenize(sentence)  # Step 1: tokenize
print(nltk.pos_tag(text_tokens))            # Step 2: tag

And the output when I run the above code is

[('The', 'DT'), ('little', 'JJ'), ('red', 'VBN'), ('fox', 'NN'), ('jumped', 'VBD'), ('over', 'IN'), 
('the', 'DT'), ('fence', 'NN'), ('and', 'CC'), ('got', 'VBD'), ('entangled', 'VBN'), ('in', 'IN'), 
('the', 'DT'), ('barbed', 'VBN'), ('fence', 'NN'), ('behind', 'IN'), ('it', 'PRP')]
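Putting the two steps together with the chunk grammar from earlier gives a fully automated version of the whole exercise; a minimal sketch (the exact tags, and hence the chunks, may differ slightly between NLTK versions):

import nltk

grammar = "NP: {<DT>?<JJ>*<NN>}"
sentence = "The little red fox jumped over the fence and got entangled in the barbed fence behind it"

text_tokens = nltk.word_tokenize(sentence)    # Step 1: tokenize
tagged_tokens = nltk.pos_tag(text_tokens)     # Step 2: POS tag
chunkParser = nltk.RegexpParser(grammar)      # Step 3: build the chunker
result = chunkParser.parse(tagged_tokens)     # Step 4: chunk
print(result)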


MapReduce Patterns, Algorithms, and Use Cases

Ashish Dutt:

an interesting post on big data processing

Originally posted on Highly Scalable Blog:

In this article I digested a number of MapReduce patterns and algorithms to give a systematic view of the different techniques that can be found on the web or in scientific articles. Several practical case studies are also provided. All descriptions and code snippets use the standard Hadoop MapReduce model with Mappers, Reducers, Combiners, Partitioners, and sorting. This framework is depicted in the figure below.

Figure: MapReduce Framework

Basic MapReduce Patterns

Counting and Summing

Problem Statement: There are a number of documents, where each document is a set of terms. It is required to calculate the total number of occurrences of each term across all documents. Alternatively, it can be an arbitrary function of the terms; for instance, given a log file where each record contains a response time, calculate the average response time.


Let's start with something really simple. The code snippet below shows a Mapper that simply…
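The original snippet is in Java and is cut off by the reblog, but as a rough, hypothetical Python stand-in, a Hadoop Streaming-style word count could look like this (the file name and usage line are my own, not from the original post):

import sys
from itertools import groupby

def mapper():
    # Emit "term<TAB>1" for every term in every input document line
    for line in sys.stdin:
        for term in line.split():
            print(term + "\t1")

def reducer():
    # Hadoop delivers mapper output sorted by key, so consecutive lines
    # with the same term can be grouped and their counts summed
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin if line.strip())
    for term, group in groupby(pairs, key=lambda kv: kv[0]):
        print(term + "\t" + str(sum(int(count) for _, count in group)))

if __name__ == "__main__":
    # Usage: cat docs.txt | python wordcount.py map | sort | python wordcount.py reduce
    mapper() if sys.argv[1] == "map" else reducer()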


PySpark in PyCharm on a remote server

Use Case: I want to use my laptop (running Windows 7 Professional) to connect to a CentOS 6.4 master server using PyCharm.

Objective: To write the code in PyCharm on the laptop and then send the job to the server, which will do the processing and return the result back to the laptop or to any other visualization API.

My solution was to get the PyCharm Professional edition (you can download it as a 30-day evaluation version), which lets me configure a remote interpreter. In the PyCharm environment, press the key combination Ctrl+Alt+S to open the Settings window. From there, click on the + sign next to Project: [your project name] (in my case the project name is Remote_Server), as shown:



Figure: PySpark remote interpreter configuration

Now click OK and write a sample program to test the connectivity. A sample program is given below:

import sys

try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    print("Pyspark success")
except ImportError as e:
    print("Error importing Spark Modules", e)
    sys.exit(1)

try:
    conf = SparkConf()
    sc = SparkContext(conf=conf)
    print("Connection succeeded with Master", conf)
    data = [1, 2, 3, 4, 5]
    distData = sc.parallelize(data)
except Exception as e:
    print("Unable to connect to remote server", e)

Now, when you run this code, you should see the PySpark interpreter output as shown.


How to commit changes to a remote file on a server

Task: To configure the spark-defaults.conf file on a remote server using WinSCP.

Error: Each time I try to commit changes to the file, I keep getting the error "Cannot overwrite remote file. Permission denied. Error code: 3. Error message from server: Permission denied".


Solution: In PuTTY, navigate to the directory one level above the one containing the file.
Example: I wanted to edit the spark-defaults.conf file located in /opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/etc/spark/conf.dist

So what I did was cd into /opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/etc/spark
Next, type the command sudo chown username:username directoryname -R
Example: sudo chown ashish:ashish conf.dist -R

Now go into WinSCP and edit the file. Notice that when you save the file, the WinSCP screen flashes briefly, which indicates the change has been committed. You can verify this by opening the file again.


Null record error when inserting or appending a new message into a topic - Apache Kafka

So, I have been working on Apache Kafka using CDH 5.4 with parcels.
Scenario: I have four Linux servers, of which one is the master and the remaining three are slaves.
Task: To configure one of the slaves to act as a Kafka messaging server.
Command: When I execute this command to append a new message to the topic:

hadoop jar /opt/camus/camus-example/target/camus-example-0.1.0-SNAPSHOT-shaded.jar com.linkedin.camus.etl.kafka.CamusJob -P /opt/camus/

I get the error "java.lang.RuntimeException: job failed: null record".

Figure: Camus null record error

Mistake: I had overlooked the Camus properties file and did not configure it properly, which caused this error:
etl.hourly and etl.daily were commented out, so I enabled them.
etl.default.timezone=Singapore (the default timezone was not set, so I set it to Singapore)

This solved the null record error.
Help from the community was instrumental in providing the solution.

What you need to ensure is the timezone you are in, and most important is the setting camus.message.timestamp.field=created_at.
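For reference, here are the relevant properties gathered in one place. This is a hypothetical excerpt reconstructed from the notes above, so check it against your own camus.properties:

# these two were commented out in my file, so I enabled them
etl.hourly=hourly
etl.daily=daily
# the default timezone was not set; I set it to Singapore
etl.default.timezone=Singapore
# the message field that carries the event timestamp
camus.message.timestamp.field=created_at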

The complete properties file is listed on my GitHub page.





Batch Geo-coding in R

  • “Geocoding (sometimes called forward geocoding) is the process of enriching a description of a location, most typically a postal address or place name, with geographic coordinates from spatial reference data such as building polygons, land parcels, street addresses, postal codes (e.g. ZIP codes, CEDEX) and so on.”

The Google geocoding API restricts coordinate lookups to 2,500 per IP address per day. So if you have more addresses than this limit, searching for an alternative solution is cumbersome.

The task at hand was to determine the coordinates of a huge number of addresses, over 10,000 of them. The question was: how to achieve this in R?


> library(RgoogleMaps)
> # caseLoc is the address data frame and caseLocation is the address column
> DF <- with(caseLoc, data.frame(caseLoc, t(sapply(caseLoc$caseLocation, getGeoCode))))
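If you would rather do the same thing from Python, a similar batch loop might look like the sketch below. It uses the third-party geopy package with OpenStreetMap's Nominatim service (which has its own usage policy and rate limits); the addresses list is a made-up example:

from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

addresses = ["1600 Pennsylvania Ave NW, Washington, DC",
             "10 Downing Street, London"]

geolocator = Nominatim(user_agent="batch-geocoding-demo")
# Throttle requests so the free service is not hammered
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

coords = {}
for addr in addresses:
    location = geocode(addr)
    coords[addr] = (location.latitude, location.longitude) if location else None

print(coords)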

The significance of Python's lambda function

For some time I was unable to figure out what in the world the "lambda" function in Python is. Of course, I referred to the official documentation, but it did not help much. Not that I am saying it is poorly written; it is very well written, but the problem is that I needed an easy explanation that my grey cells could easily catch.

So anyway, I have now understood it, and I briefly explain it here in case I need to refer back to it again.

In a one-line definition, these are quick and dirty anonymous functions. Meaning: if you are too tired of writing a full function definition like

def heart_beat(pulse):
    return pulse * 100

doctor = heart_beat(20)
print(doctor)  # 2000

then with a lambda you do not need to define and name the function as above. It can instead be done as follows:

doctor = lambda heart_beat: heart_beat * 100
print(doctor)      # prints the function object itself
print(doctor(20))  # 2000
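Where lambdas really earn their keep is as short, throwaway arguments to functions such as sorted and map. For example:

pulses = [20, 55, 72, 95]

# Sort by distance from a resting pulse of 60; no named helper needed
print(sorted(pulses, key=lambda p: abs(p - 60)))  # [55, 72, 95, 20]

# The same heart_beat scaling, applied across the whole list
print(list(map(lambda p: p * 100, pulses)))       # [2000, 5500, 7200, 9500]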