Readme: This was a fun assignment, but some things needed to be explained in the task description and weren't, such as how to use the WORD and NOT_WORD constants. It also would have been very useful to explain where the text files needed to go; I spent maybe an hour debugging before I looked online and found you just needed to move a file. The sheer complexity of the program, with all of its classes, is simply overwhelming. To make the project better, I would cut down the total number of classes, pre-create the classes the students need to code, and give a better general idea of how the code fits together before the student begins the work. It took me about 8 hours to do, it was a good assignment, and I got help from no one.

On a side note, the board-first finder is RIDICULOUSLY faster than the lexicon-first finder, by about a factor of 100. The lexicon-first finder has to take each word and scan across the whole board looking for that one word, so it scans the board a few hundred thousand times, looking for one word each time. The board-first finder, on the other hand, scans the board once, taking each combination of letters and matching it against a well-sorted lexicon to determine whether it is a word. On top of that, if the current string is not a prefix of any word, it stops searching down that path, further expediting the process.

For 1000 runs on a 4x4 board, both finders got a max score of 889, but the board-first finder took 1.74 seconds on the trie lexicon while the lexicon-first finder took 190.7 seconds. On the simple lexicon, the lexicon-first finder ran in 106 seconds while the board-first finder took 3.4. This means some interaction between the lexicon-first finder and the simple lexicon makes that combination run faster than the trie does.
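The two strategies above can be sketched in a few lines. This is an illustrative Python sketch, not the assignment's actual Java classes; the function names (board_first_search, neighbors) and the tiny board are my own. The key point is the prefix check: any path whose letters are not a prefix of some word is abandoned immediately, which is why scanning the board once beats scanning it once per word.

```python
def neighbors(r, c, rows, cols):
    """All adjacent cells, including diagonals."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

def board_first_search(board, words):
    """Walk the board once; prune any path that is not a prefix of some word."""
    # Precompute every prefix of every word so the pruning test is one lookup.
    prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}
    rows, cols = len(board), len(board[0])
    found = set()

    def dfs(r, c, path, used):
        path += board[r][c]
        if path not in prefixes:
            return                      # dead branch: stop immediately
        if path in words:
            found.add(path)
        for nr, nc in neighbors(r, c, rows, cols):
            if (nr, nc) not in used:
                dfs(nr, nc, path, used | {(nr, nc)})

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, "", {(r, c)})
    return found

board = ["ca",
         "ts"]
words = {"cat", "cats", "act", "dog"}
print(board_first_search(board, words))   # {'cat', 'cats', 'act'}
```

The lexicon-first approach would instead loop `for w in words:` and run a board search for each word, which is where the hundreds of thousands of board scans come from.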
This is because the board-first finder constantly has to look things up, meaning a slow search with the simple lexicon and a quicker lookup in a trie, while the lexicon-first finder takes each word in turn, so it can just run through a traversal of the set, which is fast for the simple lexicon but slower in the trie.

Because it's 4 am and I can't run tests all night, here are the results for the board-first finder on the four lexicons for 1,000 and 10,000 runs, with linearly extrapolated predictions (times in seconds):

Lexicon          1,000 runs   10,000 runs   Predicted 100,000   Predicted 1,000,000
Simple               3.057        29.97             300                  3,000
BinarySearch         3.121        31.1              310                  3,100
Trie                 1.684        16.9              169                  1,690
CompressedTrie      22.822       234              2,320                 23,200

Since the running time is roughly linear in the number of trials, the times increase proportionally. For a 5x5 board, 1,000 runs takes 57 seconds for the board-first finder with the trie, while the lexicon-first finder takes 258; the max score was 1,301. So the time rises steeply with board size, and again increases proportionally with the number of trials.

With the search benchmarking, all four lexicons were about the same size except the compressed trie, which was about 75% as large as the others (roughly 58,000 versus 80,000). All took infinitesimally little time to look up words. The three larger structures took .031 seconds to look up prefixes, while the compressed trie took .046. The iteration times for the simple, binary-search, trie, and compressed-trie lexicons were .016, .015, .296, and .109 seconds respectively.

The .296 for the trie really shows why the lexicon-first finder took so much longer with the trie than with the simple lexicon: the iteration time is slow, and all the lexicon-first finder does is iterate. The prefix-lookup times likewise explain why the compressed trie was slower than the others with the board-first finder. Clearly the compressed trie takes up the least space but is the slowest, while the trie is larger and faster at lookup, but ridiculously slow for iteration. So, which should you use? It depends.
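The lookup-versus-iteration trade-off can be made concrete with a minimal trie. This is an illustrative Python sketch, not the assignment's TrieLexicon; the class and method names (Trie, contains, is_prefix) are my own. Membership and prefix tests only walk one node per letter, which is what the board-first finder exploits, while iterating every word forces a traversal of the whole structure, which is all the lexicon-first finder does.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.add(w)

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def _walk(self, s):
        """Follow s letter by letter; None if no such path exists."""
        node = self.root
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node

    def contains(self, word):          # O(len(word)) lookup
        node = self._walk(word)
        return node is not None and node.is_word

    def is_prefix(self, s):            # the cheap test that lets board-first prune
        return self._walk(s) is not None

    def __iter__(self):                # full traversal: must visit every node
        def gen(node, so_far):
            if node.is_word:
                yield so_far
            for ch in sorted(node.children):
                yield from gen(node.children[ch], so_far + ch)
        return gen(self.root, "")

t = Trie(["cat", "cats", "car", "dog"])
print(t.contains("cat"), t.is_prefix("ca"), t.is_prefix("x"))
print(list(t))   # ['car', 'cat', 'cats', 'dog']
```

A sorted word list is the opposite: iteration is a trivial array walk (hence the .015-.016 second times above), but each lookup costs a search through the list.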