import streamlit as st
# Main Title
st.markdown("# The Ultimate Guide to Named Entity Recognition with Spark NLP")
# Introduction
st.markdown("""
Named Entity Recognition (NER) is the task of identifying important words in a text and associating them with a category. For example, we may be interested in finding all the personal names in documents, or company names in news articles. Other examples include domain-specific uses such as identifying all disease names in a clinical text, or company trading codes in financial ones.
NER can be implemented with many approaches. In this post, we introduce two methods: using a manually crafted list of entities (gazetteer) or regular expressions, and using deep learning with the NerDL model. Both approaches leverage the scalability of Spark NLP with Python.
""", unsafe_allow_html=True)
st.image("images/ner.png")
# Introduction to Spark NLP
st.markdown("## Introduction to Spark NLP")
st.markdown("""
Spark NLP is an open-source library maintained by John Snow Labs. It is built on top of Apache Spark and Spark ML and provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment.
To install Spark NLP, you can use any package manager, such as conda or pip. For example, with pip you can run `pip install spark-nlp`. For other installation options, check the official documentation.
""", unsafe_allow_html=True)
# Using NerDL Model
st.markdown("## Using NerDL Model")
st.markdown("""
The NerDL model in Spark NLP is a deep learning-based approach to NER tasks. It uses a Char CNNs + BiLSTM + CRF architecture that achieves state-of-the-art results on most datasets. The training data should be a labeled Spark DataFrame in CoNLL 2003 IOB format with annotation-type columns.
""", unsafe_allow_html=True)
# Setup Instructions
st.markdown("### Setup")
st.markdown("To install Spark NLP in Python, use your favorite package manager (conda, pip, etc.). For example:")
st.code("""
pip install spark-nlp
pip install pyspark
""", language="bash")
st.markdown("Then, import Spark NLP and start a Spark session:
", unsafe_allow_html=True)
st.code("""
import sparknlp
# Start Spark Session
spark = sparknlp.start()
""", language='python')
# Example Usage with NerDL Model
st.markdown("### Example Usage with NerDL Model")
st.markdown("""
Below is an example of how to set up and use the NerDL model for named entity recognition:
""", unsafe_allow_html=True)
st.code('''
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
# Document Assembler
document_assembler = DocumentAssembler() \\
    .setInputCol("text") \\
    .setOutputCol("document")

# Sentence Detector
sentence_detector = SentenceDetector() \\
    .setInputCols(["document"]) \\
    .setOutputCol("sentence")

# Tokenizer
tokenizer = Tokenizer() \\
    .setInputCols(["sentence"]) \\
    .setOutputCol("token")

# Word Embeddings (the default pretrained model is glove_100d)
embeddings = WordEmbeddingsModel.pretrained() \\
    .setInputCols(["sentence", "token"]) \\
    .setOutputCol("embeddings")

# NerDL Model
ner_tagger = NerDLModel.pretrained() \\
    .setInputCols(["sentence", "token", "embeddings"]) \\
    .setOutputCol("ner")

# Pipeline
pipeline = Pipeline().setStages([
    document_assembler,
    sentence_detector,
    tokenizer,
    embeddings,
    ner_tagger
])
# Example sentence
example = """
William Henry Gates III (born October 28, 1955) is an American business magnate, software developer, investor, and philanthropist.
He is best known as the co-founder of Microsoft Corporation. Throughout his career at Microsoft, Gates held various positions,
including chairman, chief executive officer (CEO), president, and chief software architect. He was also the largest individual
shareholder until May 2014.
Gates is recognized as one of the foremost entrepreneurs and pioneers of the microcomputer revolution of the 1970s and 1980s.
Born and raised in Seattle, Washington, he co-founded Microsoft with childhood friend Paul Allen in 1975. Initially established
in Albuquerque, New Mexico, Microsoft grew to become the world’s largest personal computer software company.
Gates led Microsoft as chairman and CEO until January 2000, when he stepped down as CEO but continued as chairman and chief
software architect. During the late 1990s, Gates faced criticism for business practices considered anti-competitive, an opinion
upheld by numerous court rulings.
In June 2006, Gates announced his transition to a part-time role at Microsoft while dedicating full time to the Bill & Melinda Gates
Foundation, a private charitable organization he established with his wife, Melinda Gates, in 2000. Gates gradually transferred
his responsibilities to Ray Ozzie and Craig Mundie and stepped down as chairman of Microsoft in February 2014. He then assumed
the role of technology adviser to support the newly appointed CEO, Satya Nadella.
"""
data = spark.createDataFrame([[example]]).toDF("text")
# Transforming data
result = pipeline.fit(data).transform(data)
result.select("ner.result").show(truncate=False)
''', language="python")
st.text("""
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[O, B-PER, I-PER, I-PER, I-PER, O, O, O, O, O, O, O, O, O, B-MISC, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-ORG, I-ORG, O, O, O, O, O, B-ORG, O, B-PER, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-LOC, O, O, O, B-LOC, O, B-LOC, O, B-PER, O, B-ORG, O, O, O, B-PER, I-PER, O, O, O, O, B-LOC, O, B-LOC, I-LOC, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-PER, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-PER, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-PER, O, O, O, O, O, O, O, O, O, O, O, B-ORG, O, O, O, O, O, B-ORG, I-ORG, I-ORG, I-ORG, I-ORG, O, O, O, O, O, O, O, O, O, O, O, B-PER, I-PER, O, O, O, O, O, O, O, O, O, O, B-PER, I-PER, O, B-PER, I-PER, O, O, O, O, O, O, O, B-ORG, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-PER, I-PER, O]|
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
""")
# Using EntityRuler Annotator
st.markdown("## Using EntityRuler Annotator")
st.markdown("""
In addition to the deep learning-based approach, Spark NLP also supports a rule-based method for NER using the EntityRuler annotator. This method involves using a gazetteer or regular expressions to identify entities in the text.
""", unsafe_allow_html=True)
# Example Usage with EntityRuler
st.markdown("### Example Usage with EntityRuler")
st.markdown("""
For NER tasks based on a gazetteer list, we will use the EntityRuler annotator, which has both Approach and Model versions.
Since this annotator finds entities by matching a list of desired names, the EntityRulerApproach annotator stores the given list in the EntityRulerModel parameters. All we need is a JSON or CSV file with the list of names or regex rules. For example, we may use the following entities.json file:
""", unsafe_allow_html=True)
st.code("""
[
{
"label": "PERSON",
"patterns": [
"John",
"John Snow"
]
},
{
"label": "PERSON",
"patterns": [
"Eddard",
"Eddard Stark"
]
},
{
"label": "LOCATION",
"patterns": [
"Winterfell"
]
},
{
"label": "DATE",
"patterns": [
"[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}"
],
"regex": true
}
]
""", language="json")
# Pipeline Setup
st.markdown("### Pipeline Setup")
st.code("""
from sparknlp.base import DocumentAssembler, Pipeline
from sparknlp.annotator import EntityRulerApproach, Tokenizer
document_assembler = DocumentAssembler() \\
    .setInputCol("text") \\
    .setOutputCol("document")

tokenizer = Tokenizer() \\
    .setInputCols(["document"]) \\
    .setOutputCol("token")

entity_ruler = EntityRulerApproach() \\
    .setInputCols(["document", "token"]) \\
    .setOutputCol("entity") \\
    .setPatternsResource("entities.json")
pipeline = Pipeline(stages=[document_assembler, tokenizer, entity_ruler])
""", language="python")
# Example Sentences
st.markdown("### Example Sentences")
st.code('''
example = """Game of Thrones was released in 2011-04-17.
Lord Eddard Stark was the head of House Stark.
John Snow lives in Winterfell."""
data = spark.createDataFrame([[example]]).toDF("text")
pipeline_model = pipeline.fit(data)
''', language="python")
# Save and Load Model
st.markdown("### Save and Load the Model (optional)")
st.code("""
from sparknlp.annotator import EntityRulerModel

pipeline_model.stages[-1].write().overwrite().save("my_entityruler")

entity_ruler = EntityRulerModel.load("my_entityruler") \\
    .setInputCols(["document", "token"]) \\
    .setOutputCol("entity")
""", language="python")
# Result Visualization
st.markdown("### Result Visualization")
st.code("""
import pyspark.sql.functions as F

result = pipeline_model.transform(data)

result.select(
    F.explode(F.col("entity")).alias("entity")
).select(
    F.col("entity.result").alias("keyword"),
    F.col("entity.metadata").alias("metadata")
).select(
    F.col("keyword"),
    F.expr("metadata['entity']").alias("label")
).show()
""", language="python")
st.text("""
+------------+--------+
| keyword| label|
+------------+--------+
| 2011-04-17| DATE|
|Eddard Stark| PERSON|
| John Snow| PERSON|
| Winterfell|LOCATION|
+------------+--------+
""")
# Non-English Languages
st.markdown("## Non-English Languages")
st.markdown("""
The EntityRuler annotator utilizes the Aho-Corasick algorithm, which may not handle languages with unique characters or alphabets effectively. For example:
- Spanish includes the ñ character.
- Portuguese uses ç.
- Many languages have accented characters (á, ú, ê, etc.).
To accommodate these characters, use the `.setAlphabetResource` parameter.
When a character is missing from the alphabet, you might encounter an error like this:
Py4JJavaError: An error occurred while calling o69.fit.
: java.lang.UnsupportedOperationException: Char ú not found on alphabet. Please check alphabet
To define a custom alphabet, create a text file (e.g., `custom_alphabet.txt`) with all required characters:
""", unsafe_allow_html=True)
st.code("""
abcdefghijklmnopqrstuvwxyz
ABCDEFGHIJKLMNOPQRSTUVWXYZ
áúéêçñ
ÁÚÉÊÇÑ
""")
st.markdown("""
Alternatively, you can use predefined alphabets for common languages. For instance, for Spanish:
""", unsafe_allow_html=True)
st.code("""
entity_ruler = (
EntityRulerApproach()
.setInputCols(["sentence"])
.setOutputCol("entity")
.setPatternsResource("locations.json")
.setAlphabetResource("Spanish")
)
""")
# Summary
st.markdown("## Summary")
st.markdown("""
In this article, we covered named entity recognition using both deep learning-based (NerDL) and rule-based (EntityRuler) methods, and showed how to perform the task with the open-source Spark NLP library in Python, which scales easily in the Spark ecosystem. These methods can power natural language processing applications in many fields, including finance and healthcare.
""", unsafe_allow_html=True)
st.markdown("## Community & Support")
st.markdown("""
- Official Website: Documentation and examples
- Slack: Live discussion with the community and team
- GitHub: Bug reports, feature requests, and contributions
- Medium: Spark NLP articles
- YouTube: Video tutorials
""", unsafe_allow_html=True)