
Commit 4db042e
various updates and new answer books
1 parent 985d2b7

6 files changed: +3582 -36 lines

Chapter 2 - First steps.ipynb (+17 -10)
@@ -160,7 +160,7 @@
 "# insert your code here\n",
 "\n",
 "# The following test should print True if your code is correct \n",
-"print(number_of_es == text.count(\"e\"))"
+"print(number_of_es == 78)"
 ],
 "language": "python",
 "metadata": {},
@@ -458,7 +458,8 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"# insert your code here"
+"# insert your code here\n",
+"print(count_in_list(\"the\", words))"
 ],
 "language": "python",
 "metadata": {},
@@ -650,6 +651,11 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
+"infile = open('data/austen-emma-excerpt.txt')\n",
+"text = infile.read()\n",
+"infile.close()\n",
+"words = text.split()\n",
+"\n",
 "for word in words:\n",
 "    print(word, count_in_list(word, words))"
 ],
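`count_in_list` is defined earlier in the chapter; a minimal sketch consistent with how it is used here (the notebook's own definition may differ):

    def count_in_list(item, sequence):
        """Count how often item occurs in sequence."""
        count = 0
        for element in sequence:
            if element == item:
                count += 1
        return count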
@@ -870,7 +876,7 @@
 "collapsed": false,
 "input": [
 "short_text = \"Commas, as it turns out, are so much overestimated.\"\n",
-"short_text = # insert your code here\n",
+"# insert your code here\n",
 "\n",
 "# The following test should print True if your code is correct \n",
 "print(short_text == \"commas as it turns out are so much overestimated.\")"
@@ -1003,7 +1009,7 @@
 "# insert your code here\n",
 "\n",
 "# The following test should print True if your code is correct \n",
-"print(woodhouse_counts = 263)"
+"print(woodhouse_counts == 263)"
 ],
 "language": "python",
 "metadata": {},
@@ -1080,17 +1086,18 @@
 "infile = # insert your code here\n",
 "text = # insert your code here\n",
 "\n",
-"# now clean up the text, and turn all characters to lowercase\n",
+"# now clean up the text, turn all characters to lowercase \n",
+"# and split the text into a list of words\n",
 "text = # insert your code here\n",
 "\n",
 "# next compute the frequency distribution\n",
 "frequency_distribution = # insert your code here\n",
 "\n",
-"# now open the file data/austen-frequency-distribution for writing\n",
+"# now open the file data/austen-frequency-distribution.txt for writing\n",
 "outfile = # insert your code here\n",
 "\n",
 "for word, frequency in frequency_distribution.items():\n",
-"    outfile.write(word + \";\" + str(frequency))\n",
+"    outfile.write(word + \";\" + str(frequency) + '\\n')\n",
 "    \n",
 "# close the outfile\n",
 "outfile.# insert your code here"
@@ -1194,13 +1201,13 @@
 ],
 "metadata": {},
 "output_type": "pyout",
-"prompt_number": 124,
+"prompt_number": 47,
 "text": [
-"<IPython.core.display.HTML at 0x109425c50>"
+"<IPython.core.display.HTML at 0x1091ab310>"
 ]
 }
 ],
-"prompt_number": 124
+"prompt_number": 47
 },
 {
 "cell_type": "markdown",

Chapter 3 - Text analysis.ipynb (+11 -8)
@@ -310,7 +310,7 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"from preprocessing import clean_text"
+"from pyhum.preprocessing import clean_text"
 ],
 "language": "python",
 "metadata": {},
@@ -325,7 +325,7 @@
 "    each sentence and remove all punctuation. Finally split each\n",
 "    sentence into a list of words.\"\"\"\n",
 "    # insert your code here\n",
-"    \n",
+"\n",
 "# these tests should return True if your code is correct\n",
 "print(tokenize(\"This is a sentence. So, what!\") == \n",
 "      [[\"this\", \"is\", \"a\", \"sentence\"], [\"so\", \"what\"]])"
@@ -395,7 +395,10 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"# insert your code here"
+"# insert your code here\n",
+"corpus = []\n",
+"for filename in list_textfiles('data/arabian_nights'):\n",
+"    corpus.append(tokenize(read_file(filename)))"
 ],
 "language": "python",
 "metadata": {},
@@ -921,7 +924,7 @@
 "    # insert your code here\n",
 "\n",
 "# these tests should return True if your code is correct\n",
-"print(story_time([\"story\"] * 130) == 1.0)"
+"print(story_time([[\"story\"]] * 130) == 1.0)"
 ],
 "language": "python",
 "metadata": {},
@@ -1048,7 +1051,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"**3)** In this final exercize we will put everything together what we have learnt so far. We want you to write a function `positions_of` that returns for a given word all sentence positions in the *Arabian Nights* where that word occurs. We are not interested in the positions relative to a particular night, but only to the corpus as a whole. Use that function to find all occurences of the name Sharahzad and store the corresponding indexes in the variable `positions_of_shahrazad`. Do the same thing for the name *Ali*. Store the result in `positions_of_ali`. Finally, find all occurences of *Egypt* and store the indexes in `positions_of_egypt`. Tip: (1) remember that we lowercased the entire corpus! (2) remember that indexes start at 0."
+"**3)** In this final exercise we will put together everything we have learnt so far. We want you to write a function `positions_of` that returns, for a given word, all sentence positions in the *Arabian Nights* where that word occurs. We are not interested in the positions relative to a particular night, but only in the positions relative to the corpus as a whole. Use that function to find all occurrences of the name *Shahrazad* and store the corresponding indexes in the variable `positions_of_shahrazad`. Do the same thing for the name *Ali*. Store the result in `positions_of_ali`. Finally, find all occurrences of *Egypt* and store the indexes in `positions_of_egypt`. Tip: (1) remember that we lowercased the entire corpus! (2) remember that indexes start at 0."
 ]
 },
 {
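A sketch of one possible solution, assuming `corpus` is the list of tokenized nights built above, so that sentence positions run across all nights:

    def positions_of(word, corpus):
        """Return the positions of all sentences in the corpus
        (counted across all nights) in which word occurs."""
        positions = []
        position = 0
        for night in corpus:
            for sentence in night:
                if word in sentence:
                    positions.append(position)
                position += 1
        return positions

    positions_of_shahrazad = positions_of("shahrazad", corpus)
    positions_of_ali = positions_of("ali", corpus)
    positions_of_egypt = positions_of("egypt", corpus)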
@@ -1198,13 +1201,13 @@
 ],
 "metadata": {},
 "output_type": "pyout",
-"prompt_number": 163,
+"prompt_number": 214,
 "text": [
-"<IPython.core.display.HTML at 0x1103e2090>"
+"<IPython.core.display.HTML at 0x110239ad0>"
 ]
 }
 ],
-"prompt_number": 163
+"prompt_number": 214
 },
 {
 "cell_type": "markdown",

Chapter 5 - Building NLP Applications.ipynb (+38 -17)
@@ -1,6 +1,6 @@
 {
 "metadata": {
-"name": "Chapter 5 - Building NLP Applications"
+"name": ""
 },
 "nbformat": 3,
 "nbformat_minor": 0,
@@ -26,7 +26,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In the last chapter we made some tools to prepare corpora for further processing. To be able to tokenise a text is nice, but from a humanities perspective not very interesting. So, what are we going to do with it? In this chapter, you'll implement two major applications that build upon the tools you developed. The first will be a relatively simple program that scores each text in a corpus according to its Automatic Readability Index. In the second application we will build a system that can predict who wrote a certain text. Again, we'll need to cover a lot of ground and things are becoming increasingly difficult now. So, let's get started!"
+"In the last chapter we made some tools to prepare corpora for further processing. To be able to tokenise a text is nice, but from a humanities perspective not very interesting. So, what are we going to do with it? In this chapter, you'll implement two major applications that build upon the tools you developed. The first will be a relatively simple program that scores each text in a corpus according to its *Automatic Readability Index*. In the second application we will build a system that can predict who wrote a certain text. Again, we'll need to cover a lot of ground and things are becoming increasingly difficult now. So, let's get started!"
 ]
 },
 {
@@ -41,7 +41,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The Automatic Readability Index is a readability test designed to gauge the understandability of a text. The formula for calculating the Automated Readability Index is as follows:\n",
+"The *Automatic Readability Index* is a readability test designed to gauge the understandability of a text. The formula for calculating the *Automated Readability Index* is as follows:\n",
 "\n",
 "$$ 4.71 \\cdot \\frac{nchars}{nwords} + 0.5 \\cdot \\frac{nwords}{nsents} - 21.43 $$\n",
 "\n",
@@ -67,19 +67,19 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Write a function `AutomaticReadabilityIndex` that takes three arguments `n_chars`, `n_words` and `n_sents` and returns the ARI given those arguments."
+"Write a function `automatic_readability_index` that takes three arguments `n_chars`, `n_words` and `n_sents` and returns the ARI given those arguments."
 ]
 },
 {
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"def AutomaticReadabilityIndex(n_chars, n_words, n_sents):\n",
+"def automatic_readability_index(n_chars, n_words, n_sents):\n",
 "    # insert your code here\n",
 "\n",
 "# do not modify the code below, it is for testing your answer only!\n",
 "# it should output True if you did well\n",
-"print(abs(AutomaticReadabilityIndex(300, 40, 10) - 15.895) < 0.001)"
+"print(abs(automatic_readability_index(300, 40, 10) - 15.895) < 0.001)"
 ],
 "language": "python",
 "metadata": {},
@@ -96,24 +96,25 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now we need to write some code to obtain the numbers we so wishfully assumed to have. We will use the code we wrote in earlier chapters to read and tokenise texts. We stored all the functions we wrote for our corpus reader in `preprocess.py`. We only need the function `readcorpus` and import it here."
+"Now we need to write some code to obtain the numbers we so wishfully assumed to have. We will use the code we wrote in earlier chapters to read and tokenise texts. The file `preprocessing.py` defines a function `read_corpus` which reads all files with the extension `.txt` in a given directory. It tokenizes each text into a list of sentences, each of which is represented by a list of words. All words are lowercased and all punctuation is removed. We import the function using the following line of code:"
 ]
 },
 {
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"from preprocess import readcorpus"
+"from pyhum.preprocessing import read_corpus"
 ],
 "language": "python",
 "metadata": {},
-"outputs": []
+"outputs": [],
+"prompt_number": 2
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Remember that the function readcorpus returns a generator of `(filename, sentences)` tuples. Sentences are represented by lists of strings, i.e. a list of tokens. Let's write a function `extract_counts` that takes a list of sentences as input and returns the number of characters, the number of words and the number of sentences as a tuple."
+"Let's write a function `extract_counts` that takes a list of sentences as input and returns the number of characters, the number of words and the number of sentences as a tuple."
 ]
 },
 {
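Based on this description, and on the removed note that the old `readcorpus` yielded `(filename, sentences)` tuples, `read_corpus` might look roughly like the sketch below (an assumption, not the file's actual contents; `tokenize` is the Chapter 3 function):

    import os

    def read_corpus(directory):
        """Yield (filename, sentences) pairs for every .txt file in
        directory, where sentences is a list of lowercased,
        punctuation-free tokenized sentences."""
        for filename in sorted(os.listdir(directory)):
            if filename.endswith('.txt'):
                with open(os.path.join(directory, filename)) as infile:
                    yield filename, tokenize(infile.read())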
@@ -147,8 +148,8 @@
 "\n",
 "# do not modify the code below, for testing only!\n",
 "print(extract_counts(\n",
-"    [[\"This\", \"was\", \"rather\", \"easy\", \".\"], \n",
-"     [\"Please\", \"give\", \"me\", \"something\", \"more\", \"challenging\"]]) == (54, 11, 2))"
+"    [[\"this\", \"was\", \"rather\", \"easy\"], \n",
+"     [\"please\", \"give\", \"me\", \"something\", \"more\", \"challenging\"]]) == (53, 10, 2))"
 ],
 "language": "python",
 "metadata": {},
@@ -172,12 +173,11 @@
 "cell_type": "code",
 "collapsed": false,
 "input": [
-"sentences = [[\"This\", \"was\", \"rather\", \"easy\", \".\"], \n",
+"sentences = [[\"this\", \"was\", \"rather\", \"easy\"], \n",
 "             [\"Please\", \"give\", \"me\", \"something\", \"more\", \"challenging\"]]\n",
 "\n",
 "n_chars, n_words, n_sents = extract_counts(sentences)\n",
-"\n",
-"print(abs(AutomaticReadabilityIndex(n_chars, n_words, n_sents) - 4.442) < 0.001)"
+"print(automatic_readability_index(n_chars, n_words, n_sents))"
 ],
 "language": "python",
 "metadata": {},
@@ -209,7 +209,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Write the function `compute_ARI` that takes as argument a list of sentences (represented by lists of words) and returns the Automatic Readability Index for that input."
+"Write the function `compute_ARI` that takes as argument a list of sentences (represented by lists of words) and returns the *Automatic Readability Index* for that input."
 ]
 },
 {
@@ -274,6 +274,25 @@
 "metadata": {},
 "outputs": []
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Remember that in Chapter 3, we plotted different basic statistics using the Python plotting library matplotlib. Can you do the same for all ARIs?"
+]
+},
+{
+"cell_type": "code",
+"collapsed": false,
+"input": [
+"import matplotlib.pyplot as plt\n",
+"\n",
+"# insert your code here"
+],
+"language": "python",
+"metadata": {},
+"outputs": []
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -527,7 +546,9 @@
 "collapsed": false,
 "input": [
 "def add_file_to_database(filename, feature_database):\n",
-"    return update_counts(extract_author(filename), extract_features(filename), feature_database)"
+"    return update_counts(extract_author(filename), \n",
+"                         extract_features(filename), \n",
+"                         feature_database)"
 ],
 "language": "python",
 "metadata": {},
