Week 1 (May 23 - May 27) During my first week of research under Dr. Anderson of the University of Alabama, I met my fellow research peers, Morgan Hood and Kira Curry, and received an overview of what was expected of us during our assigned project. We were informed that we would be researching how to use different speech recognition platforms to interact with a smart environment, such as a smart home. Next, I explored the new realms of possibility that can be opened up using today's voice recognition software. I spent the rest of the week learning how to develop apps in Visual Studio using the XML and C# programming languages and how to integrate speech recognition software, such as Cortana, into Windows-based applications. For reference, we used Microsoft's Virtual Academy to aid in our integration process. Lastly, due to compatibility issues beyond our control, we learned how to run Windows 10 on a Macintosh machine using VirtualBox.
Week 2 (May 30 - June 3) During my second week of research, Dr. Anderson decided to transition from C#, XML, and Cortana to building a web-based application using the HTML5, CSS3, and JavaScript programming languages. Since I was already familiar with HTML, I started by learning HTML5 and CSS3 to find any improvements I could make toward more efficient coding techniques, especially for future website design projects. As in week one, I used the Microsoft Virtual Academy video series "Learn HTML5 and Get CSS3 Training" to supplement the learning process. I also did practice exercises to ensure I understood the material I was being taught. Lastly, I tried developing a website using strictly HTML5.
Week 3 (June 6 - June 10) During my third week of research, I transitioned to learning JavaScript by starting the "JavaScript Fundamentals" video series within Microsoft's Virtual Academy. However, I was unable to completely finish the tutorials because I attended the NSBC (National Society of Blacks in Computing) Conference in Atlanta, Georgia with Dr. Anderson and Kira Curry from Thursday until the following Sunday. At the conference we mingled with undergraduate, graduate, and doctoral students also interested in computing programs, and we were able to speak with professionals in the computer science and information technology fields.
Week 4 (June 13 - June 17) During my fourth week of research, I continued learning JavaScript within Microsoft's Virtual Academy. Dr. Anderson also officially assigned IBM Bluemix as my platform instead of the previously selected Google speech recognition option. As I began to research ways to implement Bluemix into my project, I discovered compatible programming languages such as Java, Node.js, and Python. Due to my unfamiliarity with command-line programming, I requested assistance from my peers to establish a foundation as I began my smart environment application journey.
Week 5 (June 20 - June 24) After gaining some familiarity with the command line, I explored using Node.js to develop my Bluemix-powered application, but given my limited experience I found it quite difficult to use. As a result, I tried the Eclipse IDE, which I was familiar with from developing Java applications. To develop the application in Eclipse, I decided on using the Java programming language instead of its JavaScript counterpart. After finding an older Bluemix-compatible version of Eclipse (Luna), I began attempting to install the Bluemix tools, which would be used for pushing code to the cloud. I ended the week by writing code to take advantage of IBM's speech-to-text service within my application.
Week 6 (June 27 - July 1) Continuing from last week's progress, I ran into another problem, this time involving pushing and cloning my GitHub/Bluemix code within Eclipse. The client also failed to communicate with the server when executing my code, and after troubleshooting for a few days I decided to try Node.js once again. I began by going through numerous extensive tutorials in hopes of improving my ability to use Node.js, and I was finally able to access the cloud successfully and efficiently. I also learned I could use Bluemix's web-based editor to write code, but with the drawback of not being able to undo actions unless pushing the code each time as a backup. As a result, I wrote code using TextWrangler, which eliminated this drawback and required only simple copying and pasting to push code for execution.
Week 7 (July 4 - July 8) Following up on last week's progress, I successfully established accurate speech recognition using IBM's speech-to-text service. However, I still had to find a way to retrieve the converted results and parse them so they would be accessible to the JavaScript code I had already written. Bluemix seems to use ports, with which I am completely unfamiliar. I was unable to find examples using JavaScript, but I did find one in Python, a language I also have no experience with. Overall, this week was used to figure out how to successfully retrieve data from the cloud, which was then used to open my webpage by executing my written code.
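To illustrate the parsing step, here is a minimal sketch of pulling a transcript out of the JSON that the speech-to-text service returns. The sample response object is hypothetical (its field names follow IBM's documented result format), and extractTranscript is an illustrative helper, not my actual project code.

```javascript
// Hypothetical sample of the JSON shape the Watson speech-to-text
// service returns; field names follow IBM's documented result format.
const sampleResponse = {
  result_index: 0,
  results: [
    { final: true, alternatives: [{ transcript: "turn on the lights ", confidence: 0.91 }] },
    { final: true, alternatives: [{ transcript: "in the kitchen ", confidence: 0.87 }] }
  ]
};

// Join the top alternative of each finalized result into one command
// string that the rest of the JavaScript code can act on.
function extractTranscript(response) {
  return response.results
    .filter(result => result.final)
    .map(result => result.alternatives[0].transcript.trim())
    .join(" ");
}

console.log(extractTranscript(sampleResponse)); // "turn on the lights in the kitchen"
```

With the transcript reduced to a plain string like this, the existing JavaScript code can match it against commands such as opening a webpage.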
Week 8 (July 11 - July 15) This week I spent my time collecting voices to test my speech recognition speed and accuracy, considering that it is a web-based application. I collected voices via recordings sent from friends, family, and strangers from different regions and backgrounds to eliminate any testing bias toward one region. Most of the recordings were made with a cellphone microphone and sent using text messaging, email, or cloud-based programs. To end the week, I created a video demo of my application that will be used next week to demonstrate its current state to high school students who are a part of the SITE program for upcoming upper-division students.
Week 9 (July 18 - July 22) At the beginning of the week, on Monday, we presented a demo of our applications for high school students involved with the SITE program. The sessions mainly consisted of rising freshmen and seniors who are interested in STEM-related fields. During the presentation we also discussed the pros and cons of each platform as well as tips for success in higher academia. Following the SITE session, we visited the engineering lounge area to tour the 3D printing equipment and explore the various uses of 3D-printed materials. We also visited the computer science department lounge, which was used for computing and visual activities. Lastly, we compared our speech recognition software (the platforms being IBM Bluemix, Microsoft Windows Cortana, and PocketSphinx) in terms of accuracy.
Week 10 (July 25 - July 29) Once again, at the beginning of the week, on Monday, we presented a demo of our applications for high school students involved with the SITE program. This being our final week, we finalized our research and compared the data that will be used in our papers. We determined that Morgan Hood's project using Microsoft's Cortana was the most accurate, followed by mine (Denson Ferrell's) in second, and Kira Curry's project using PocketSphinx in last. Overall, I was happy with my results, considering that my program is cloud-based while the other competitors run locally. Also, considering that mine lost only marginally (84% versus 82%), I feel that the ability to access my application from anywhere outweighs the 2% difference. Lastly, we are preparing posters along with our papers that can be used to further expand our research.
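For reference, one simple way an accuracy figure like the percentages above can be computed is to count how many of the reference words a recognizer transcribed correctly. The sketch below is a hypothetical illustration of such a word-level measure, not the exact scoring method behind our numbers.

```javascript
// A simple word-level accuracy measure: the fraction of reference words
// the recognizer got right, compared position by position. This is a
// hypothetical illustration, not the study's exact scoring method.
function wordAccuracy(reference, hypothesis) {
  const ref = reference.toLowerCase().split(/\s+/);
  const hyp = hypothesis.toLowerCase().split(/\s+/);
  const correct = ref.filter((word, i) => hyp[i] === word).length;
  return correct / ref.length;
}

// One substitution ("the" heard as "a") out of five words: 80% accurate.
console.log(wordAccuracy("turn on the kitchen lights", "turn on a kitchen lights")); // 0.8
```

Averaging such per-recording scores over many test recordings yields an overall percentage comparable across platforms.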