Dataset Card for IK-NLP-22 Speech and Language Processing
Dataset Summary
This dataset contains chapters extracted from the Speech and Language Processing book (3rd edition draft of January 2022) by Jurafsky and Martin via a semi-automatic procedure (see below for additional details). Moreover, a small set of conceptual questions associated with each chapter is provided alongside possible answers.
Only the contents of chapters 2 to 11 of the book draft are provided, since these are the chapters relevant to the 2022 edition of the Natural Language Processing course of the Information Science Master's Degree (IK) at the University of Groningen, taught by Arianna Bisazza with the assistance of Gabriele Sarti.
The Speech and Language Processing book was made freely available by its authors, Dan Jurafsky and James H. Martin, on the Stanford University website. The present dataset was created for educational purposes and is based on the draft of the 3rd edition of the book accessed on December 29th, 2021. All rights to the present contents belong to the original authors.
Projects
See the course page for a description of possible research directions.
Languages
The language data of Speech and Language Processing is in English (BCP-47 `en`).
Dataset Structure
Data Instances
The dataset contains two configurations: `paragraphs` (default), containing the full set of parsed paragraphs associated with the respective chapters and sections, and `questions`, containing a small subset of example questions matched with the relevant paragraph and with the answer span extracted.
Paragraphs Configuration
The `paragraphs` configuration contains all the paragraphs of the selected book chapters, each associated with the respective chapter, section, and subsection. An example from the `train` split of the `paragraphs` config is provided below. The example belongs to section 2.3 but not to a subsection, so the `n_subsection` and `subsection` fields are empty strings.
{
  "n_chapter": "2",
  "chapter": "Regular Expressions",
  "n_section": "2.3",
  "section": "Corpora",
  "n_subsection": "",
  "subsection": "",
  "text": "It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)"
}
The text is provided as-is, without further preprocessing or tokenization.
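Assuming the dataset is published on the Hugging Face Hub, the paragraphs can be loaded with the 🤗 Datasets library. The snippet below is a minimal sketch; the repository identifier `GroNLP/ik-nlp-22_slp` is an assumption and may differ from the actual name.

```python
from datasets import load_dataset

# Hypothetical repository identifier; replace it with the actual one if it differs.
slp = load_dataset("GroNLP/ik-nlp-22_slp", "paragraphs")

# Inspect the first parsed paragraph and its chapter/section metadata.
example = slp["train"][0]
print(example["chapter"], "|", example["section"], "|", example["subsection"])
print(example["text"][:120])
```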
Questions Configuration
The `questions` configuration contains a small subset of questions, the top retrieved paragraph relevant to the question, and the answer spans. An example from the `test` split of the `questions` config is provided below.
{
  "chapter": "Regular Expressions",
  "section": "Regular Expressions",
  "subsection": "Basic Regular Expressions",
  "question": "What is the meaning of the Kleene star in Regex?",
  "paragraph": "This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like \"some number of as\" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced \"cleany star\"). The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\". So /a*/ means \"any string of zero or more as\". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means \"zero or more a's or b's\" (not \"zero or more right square braces\"). This will match strings like aaaa or ababab or bbbb.",
  "answer": "The Kleene star means \"zero or more occurrences of the immediately previous character or regular expression\""
}
Data Splits
config | train | test |
---|---|---|
`paragraphs` | 1697 | - |
`questions` | - | 59 |
Dataset Creation
The contents of the Speech and Language Processing book PDF were extracted using the PDF to S2ORC JSON Converter by AllenAI. The texts extracted by the converter were then manually cleaned to remove end-of-chapter exercises and other irrelevant content (e.g., tables, TikZ figures). Some issues in the parsed content were deliberately preserved in the final version to maintain a naturalistic setting for the associated projects, encouraging students to apply data filtering heuristics.
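For instance, running page headers from the PDF (a section number, an uppercase chapter title, and a page number) occasionally remain embedded in the middle of a paragraph. The sketch below shows one possible cleanup heuristic; the regular expression is an illustrative assumption, not an official preprocessing rule, and should be tuned against the actual data.

```python
import re

# Assumed pattern for residual running headers such as "2.1 • REGULAR EXPRESSIONS 3";
# adjust the character class and spacing if the real artifacts differ.
HEADER_RE = re.compile(r"\d+\.\d+\s+•\s+[A-Z ,'\-]+\s+\d+")

def strip_headers(text: str) -> str:
    """Remove running-header fragments left over from PDF extraction."""
    cleaned = HEADER_RE.sub(" ", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_headers("whitespace is not sufficient. 2.1 • REGULAR EXPRESSIONS 3 Some languages, like Japanese,"))
# -> "whitespace is not sufficient. Some languages, like Japanese,"
```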
The question-answer pairs were created manually by Gabriele Sarti.
Additional Information
Dataset Curators
For problems on this 🤗 Datasets version, please contact us at ik-nlp-course@rug.nl.
Licensing Information
Please refer to the authors' websites for licensing information.
Citation Information
Please cite the authors if you use these corpora in your work:
@book{slp3ed-iknlp2022,
  author = {Jurafsky, Daniel and Martin, James},
  year = {2021},
  month = {12},
  pages = {1--235, 1--19},
  title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition},
  volume = {3}
}