# Boilerplate Spark stuff:
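# A minimal sketch of one way to set this up, assuming local mode and the
# RDD-based pyspark.mllib API; the app name is illustrative:
from pyspark import SparkConf, SparkContext
from pyspark.mllib.feature import HashingTF, IDF

conf = SparkConf().setMaster("local").setAppName("SparkTFIDF")
sc = SparkContext(conf=conf)
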
# Load documents (one per line).
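# Sketch, assuming a tab-separated file where one field holds the document
# name and another the document text; the path and field indices below are
# illustrative placeholders, not from the original:
rawData = sc.textFile("subset-small.tsv")
fields = rawData.map(lambda x: x.split("\t"))
documents = fields.map(lambda x: x[3].split(" "))
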
# Store the document names for later:
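# Assuming the same field layout as above, with the document name in field 1:
documentNames = fields.map(lambda x: x[1])
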
# Now hash the words in each document to their term frequencies:
#100K hash buckets just to save some memory
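# Sketch using mllib's HashingTF with the bucket count from the note above:
hashingTF = HashingTF(100000)
tf = hashingTF.transform(documents)
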
# At this point we have an RDD of sparse vectors representing each document,
# where each value is the term frequency for the corresponding hash value.
# Let's compute the TF*IDF of each term in each document:
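# One possible implementation with mllib's IDF; tf is cached because it is
# used twice (once to fit the IDF model, once to transform):
tf.cache()
idf = IDF().fit(tf)
tfidf = idf.transform(tf)
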
# Now we have an RDD of sparse vectors, where each value is the TF*IDF score
# for each unique hash value in each document.
# I happen to know that the article for "Abraham Lincoln" is in our data
# set, so let's search for "Gettysburg" (Lincoln gave a famous speech there):
# First, let's figure out what hash value "Gettysburg" maps to by finding the
# index that a sparse vector from HashingTF gives us back for it:
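# Sketch: transform a one-word "document" and read the single index of the
# sparse vector that comes back (variable names are illustrative):
gettysburgTF = hashingTF.transform(["Gettysburg"])
gettysburgHashValue = int(gettysburgTF.indices[0])
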
# Now we will extract the TF*IDF score for Gettysburg's hash value into
# a new RDD for each document:
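# Sketch: index each document's sparse TF*IDF vector at that hash value:
gettysburgRelevance = tfidf.map(lambda x: x[gettysburgHashValue])
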
# We'll zip in the document names so we can see which is which:
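# Sketch: both RDDs were derived from the same source in the same order, so
# zip pairs them up element by element as (score, name):
zippedResults = gettysburgRelevance.zip(documentNames)
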
# And, print the document with the maximum TF*IDF value:
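# Sketch: the (score, name) tuples compare on the score first, so max() gives
# the best match; the printed label is illustrative:
print("Best document for Gettysburg is:")
print(zippedResults.max())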