The default data for the speaker-adaptive demo synthesized successfully, producing clear audio files. It is remarkable how much the data and parameters shape how closely the voice resembles the designated speaker.

After listening to the synthesized speech, I began working with the prepared data from the Boston Radio Corpus, starting from scratch with the new text and audio sentences. I created a directory for each speaker and wrote a script to copy the .raw and .utt files into their corresponding speaker folders (a sketch of such a script appears at the end of this entry). I also copied data that Erica had prepared from recordings of a student actor and from the Boston Directions Corpus. I borrowed the TDT2 ABC Corpus from our inventory, but it only included audio files; I hope we can get access to the transcriptions as well so that I can add this data to our collection, since the additional material should produce a better voice.

I then updated the makefile to expect the new variables and followed the steps to build the data for the demo. The scripts ran for quite some time, so while they ran I read up on data selection, active learning, building voices from ASR corpora, and synthesizing speech from minimal data. A couple of hours later everything was ready, and I finally ran the demo.
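For the curious, here is a minimal sketch of the kind of copy script described above. The data/ and speakers/ paths and the speakerID_utterance filename convention are illustrative assumptions, not the corpus's actual layout, so the speaker-extraction line would need to match however the real files are named.

```python
#!/usr/bin/env python3
"""Sort .raw and .utt files into per-speaker directories.

Minimal sketch: assumes each filename begins with a speaker ID
followed by an underscore (e.g. f2b_s01.raw); the real corpus
layout may differ.
"""
import shutil
from pathlib import Path

SRC = Path("data")      # hypothetical flat directory holding all files
DST = Path("speakers")  # per-speaker folders are created under here

for f in SRC.iterdir():
    if f.suffix not in (".raw", ".utt"):
        continue                           # ignore other file types
    speaker = f.stem.split("_", 1)[0]      # assumed <speaker>_<utt> naming
    out_dir = DST / speaker
    out_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(f, out_dir / f.name)      # copy, leaving the original in place
```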