This week, I continued running the algorithms repeatedly to get more accurate results. At first I was running each one 10 times, and Professor Colunga pointed out that 1000 times would be better. While most of the algorithms are fairly fast and finish in around an hour, a couple would take days to run, and together they would take more than a week. At the end of the week, I decided to compromise and run each algorithm 100 times, which is feasible and should still give us a sufficiently high level of precision. As I rewrote parts of the algorithms to store and return more information, I was able to take a really close look at some of them and fix a few flaws.

Also at the end of the week, I met one of Professor Colunga's graduate students, Nicole, and we talked about a paper she wrote. Although I wasn't planning to try any more new algorithms, Nicole pointed me to some useful data on the MCDI website that might be worth trying out. The site has norms for children 16 to 30 months old: the most commonly known words for each age group, gathered from a much larger group of children than the sample I've been working with. This will be much simpler to implement than PLSA, and I think I will be able to try it out and get results before next week is over.
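The reason 100 runs is a reasonable compromise comes down to how the standard error of an averaged score shrinks with the number of runs. Here's a minimal sketch of the idea in Python; `run_algorithm` is a hypothetical stand-in for one stochastic run of any of my algorithms, not the actual code:

```python
import random
import statistics

def run_algorithm(seed):
    # Hypothetical placeholder: a real run would train/evaluate one of
    # the word-learning algorithms and return its accuracy score.
    rng = random.Random(seed)
    return 0.7 + rng.gauss(0, 0.05)

def repeated_runs(n_runs):
    """Run the algorithm n_runs times and summarize the scores."""
    scores = [run_algorithm(seed) for seed in range(n_runs)]
    mean = statistics.mean(scores)
    # The standard error of the mean shrinks like 1/sqrt(n), so 100 runs
    # gives roughly 10x the precision of a single run, while 1000 runs
    # would only add about another 3x at 10x the compute cost.
    se = statistics.stdev(scores) / (n_runs ** 0.5)
    return mean, se

mean, se = repeated_runs(100)
print(f"mean score: {mean:.3f} +/- {se:.3f}")
```

The diminishing-returns tradeoff is the point: going from 10 to 100 runs buys a ~3x tighter estimate, but going from 100 to 1000 buys only another ~3x while multiplying the week-long runtimes by ten.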