Final Report - click here to download PDF
Week 9
I did not realize that there was a miscommunication about the video playback feature, and I had to install the eye tracking software on the computers in the classroom. It took a lot of time, and I had to reschedule the video shoot. Then the software was somehow removed from the computers; I was able to reinstall it more quickly this time and finish the video production for this software. I am completely finished with this summer internship, since I have been working roughly 40 hours a week for almost 8 weeks and have met the 350-hour requirement. I do not need to work during the 10th week, and graduate school at RIT starts in only one week.
Week 8
I worked on the poster and publication paper for this research, Accessible Viewing Device – Low Vision: Magnification of Classroom Views for Low Vision Students. I also prepared the software for the RIT video production team to create a video about the technology. We had a dry run on Thursday to make sure everything worked, and we were all set on Friday to shoot the video and do the video interview. Then we realized we had to recreate the whole video to demonstrate the playback feature; we would have to do that the following week.
Week 7
I worked on the dual screen eye tracking concept. Last week I completed the circle on one computer; now I had to add a second eye tracking circle from a second computer. I set up both computers as web servers so they could communicate with each other. Before doing dual screen eye tracking, I had to make sure the program I created on one PC also worked on the other computer. It worked beautifully.
Then I figured out how to pass the x and y coordinates between the two machines. I used PHP to pull the file from the other computer and echo it, so JavaScript could read the x and y coordinates without any problems.
Both computers worked, and I was able to see the other person's circle and my own circle on the screen, based on our eye movements. My next goal is to make this work with divs, and maybe add a colored frame to the div to show which one I am currently looking at. Since eye movements are very rapid, the circles generate a lot of visual noise in my opinion, and I want to make the motion smoother. My mentor and I discussed a formula to smooth it out: take ten x and y readings and average them. I hope it will work that way.
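Here is a minimal sketch of that smoothing idea: keep a sliding window of the ten most recent gaze samples and place the circle at their average. The window size and the drawCircle() helper are my assumptions, not the final formula.

```javascript
const WINDOW_SIZE = 10;
const samples = [];

function addGazeSample(x, y) {
  samples.push({ x: x, y: y });
  if (samples.length > WINDOW_SIZE) {
    samples.shift(); // drop the oldest reading
  }

  // Average everything currently in the window.
  let sumX = 0, sumY = 0;
  for (const p of samples) {
    sumX += p.x;
    sumY += p.y;
  }
  drawCircle(sumX / samples.length, sumY / samples.length); // hypothetical helper
}
```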
I had to pause the dual eye tracking concept, because I had to fix my AVD software so that it works with actual video files such as MPG. The reason is that RIT video professionals will make a professional video about this software on August 9th, with an actual mock classroom, a professor, and a few students in it. The playback feature is not fully ready because of very low FPS, so I thought we might have to shoot the video twice in one day, and processing all the videos into a series of images takes a lot of time. So I need to modify my code to work with video as well.
This week I also typed up ALL seven weeks of weekly reports on what I have been doing in this research. I wish I had written my reports on time, but doing it now helps in some ways: I now have some ideas for how to solve issues I faced in previous weeks. I could have written each weekly report when it was due; thankfully, I still remembered what I had done. I have also revised my website to make it look more like a research website.
I have also started writing a poster for the Richard Tapia Conference in Seattle, Washington. The deadline is August 9th.
Week 6
This was the week when I worked on developing my research website. I looked around for a better template and found one that I loved very much. I made some CSS tweaks to make it look the way I wanted. I mainly focused on the design of the website itself, and I received feedback that my website does not look like a research website.
I also had to refresh my memory, so I wrote down on a piece of paper what I have done since the first week of research. I wish I had written the weekly reports on time; not doing so was a bad idea on my part. I was too excited to focus on my work and see the results I was producing. Luckily, emails and conferences helped me remember what I have been doing. I made a mini note to outline what I did each week.
I also created a simple JavaScript and HTML page that shows a circle moving around in the browser to indicate where the eye is looking. It worked beautifully. My goal is to have dual eye tracking on one screen with two computers: two computers will be set up with two people looking at their own monitors. I will be able to see what they are looking at on their monitor, and they will be able to see what I am looking at on mine. Both monitors are the same size, so it should not go wrong.
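Here is a minimal sketch of that circle page, assuming a getGaze() helper that returns the latest eye position in screen pixels (the helper name is my assumption).

```html
<div id="gazeCircle" style="position: fixed; width: 30px; height: 30px;
     border-radius: 50%; background: rgba(255, 0, 0, 0.5);
     pointer-events: none;"></div>
<script>
  var circle = document.getElementById('gazeCircle');

  // Poll the latest gaze position and move the circle there.
  setInterval(function () {
    var gaze = getGaze(); // hypothetical: returns { x, y } in pixels
    circle.style.left = (gaze.x - 15) + 'px'; // center the 30px circle
    circle.style.top = (gaze.y - 15) + 'px';
  }, 50);
</script>
```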
My next goal, once that works, is for the system to detect which video the person is currently looking at. The purpose of this is to compare a hearing student and a Deaf student. I hope it will give the Deaf student a cue about when to look at the slides. The hearing student will be watching the professor, while the Deaf student will be watching the interpreter. When the hearing student looks at the slides or PowerPoint, the Deaf student will be made aware through some kind of signal. The signal could be a blink or a colored frame on the video itself, or something like that. This is the cue for the Deaf student that it is time to look at the slide or PowerPoint video.
Week 5
Time flies so fast, and I am already halfway through my final co-op. Last week I really got into the low vision work, so it was time for me to get back on track with the programming. I had to merge the two features, and I was able to do it. I was very proud of myself for accomplishing this.
It is really amazing to see the eye tracking work without the mouse. It felt great to control the videos with just my eyes while still being able to control them with the mouse as well. I know it sounds a bit confusing to control videos with both eye tracking and the mouse, so how does it work?
When the eye is not on the interpreter video, the video automatically pauses. When the eye goes back to the interpreter video, playback speeds up to catch up with the current time. The user can set the catch-up speed to 1x (no catching up at all), 1.5x, 2x, 2.5x, or 3x. There are many situations in a classroom where the interpreter is not signing anything for a while, sometimes because the teacher is not talking and is just drawing something, or for other reasons. If I do not want to watch the interpreter video for a while, I can simply use the mouse to click the "live" button to skip to live. While still in playback, I can also click minus 5 seconds or minus 1 second to review the previous information, or to skip only 5 seconds or 1 second closer to live; maybe there is some information somewhere along the way to live. I can also move the video around and change its size and transparency at any time.
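A minimal sketch of this pause and catch-up behavior, using an HTML5 video element, might look like the following; the isGazeOnVideo() helper, the element ids, and the way live time is tracked are assumptions for illustration.

```javascript
var video = document.querySelector('#interpreterVideo');
var catchUpRate = 2.0;  // user-selectable: 1, 1.5, 2, 2.5, or 3
var liveTime = 0;       // where "live" currently is, in seconds

setInterval(function () {
  liveTime += 0.1;      // the live classroom keeps moving forward

  if (!isGazeOnVideo(video)) {
    video.pause();                        // eyes off the interpreter: pause
  } else if (video.currentTime < liveTime) {
    video.playbackRate = catchUpRate;     // behind live: play faster to catch up
    video.play();
  } else {
    video.playbackRate = 1.0;             // caught up: play at normal speed
    video.play();
  }
}, 100);

// The "live" button skips straight to the live point.
document.querySelector('#liveButton').onclick = function () {
  video.currentTime = liveTime;
};
```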
I added one more feature: the ability to turn off the eye tracking for any reason. Sometimes it can be a pain to have eye tracking controlling the videos, because for the eye tracking to work the person has to stay in the same position for a long time. If the person decides that the interpreter video should stay live at all times, the software lets them turn the eye tracking off at any time, and turn it back on as well.
As you can see, a lot of new features are being added to the AVD software. That is not ideal for research purposes: when I want to test a specific feature, I will remove the ability to control certain other things.
I was not able to show a demonstration of this feature at the CAID (Convention of American Instructors of the Deaf) conference myself. My mentor showed the demonstration at the conference and received good feedback about this software. Unfortunately, I did not realize that he did not know about the new feature I had just created, so it was not shown in the demonstration; the old software was shown instead.
Week 4
Last week was a wonderful experience being part of Effective Access Technology Day. Then I had a wonderful idea: low vision people would definitely benefit from this as well, not only Deaf and hard of hearing students. I am aware of an accessibility feature of Windows 7 called Magnifier; it has different types of zoom views: full screen, lens, and docked. I wanted to play around with the lens view, which is like a magnifying glass. When the user moves the mouse, the rectangular or square glass moves around. The size of the magnifying glass can be set to any number of pixels, and the amount of zoom can be changed to any percentage.
I thought it would be a good idea to have the eye tracking control the mouse movements and let Windows 7's Ease of Access Center handle the magnifying glass movements based on the mouse. Unfortunately, it did not work the way I expected. I explored various freeware and shareware online and found one that works pretty well. The software is called Desktop Zoomer, and it costs only 14 dollars. I have not bought the software yet; it has a trial version that allows up to 30 uses.
I had two low-vision people test the feature. Both individuals have nystagmus, which means fast, uncontrollable movements of the eyes. The movement may be side to side, up and down, or rotary, depending on the cause, and it can be in both eyes or just one.
The first student came, and his calibration of the eye tracking was not successful, so I had to do the calibration for him. I set up the feature for him, and he really liked the idea of the eye-tracking-controlled magnifying concept. While I was observing, I noticed the magnifying glass moved very fast on the screen due to his eye condition. Then I had a professor test the software and the feature. She liked it very much as well. She was not able to calibrate the eye tracking either, so I had to do it for her, and it worked as well.
Then I decided to test it myself to get a taste of what they were experiencing. It was really hard for me, since I do not have the condition myself, while those two individuals are already used to their condition and know how to work around it. It is the same concept as me as a Deaf person: I do not consider myself as having a disability, and I always find ways to accommodate myself. Then I tried to read some text with this feature and found it really hard to do; reading something while it is moving creates a kind of visual noise for me. I still believe the idea will work the way I envisioned it; it just needs some kind of formula that makes the magnifying glass move more smoothly and be easier to follow.
I do not know exactly what formula I am looking for, but I do have some ideas, although I am not sure they will work. I need to compare the zoomed-in area against the actual screen area to figure out whether the eye is still inside that area. If the eye moves outside, the area should move; the glass should not keep chasing the center point and moving at all times. For example, take a 1000x1000 monitor (chosen for easy calculation). The magnifying glass is 500x500 and the zoom level is, say, 50%. If the glass is at the zero point, the upper left corner, then the reading area is the 250x250 region at 100% zoom (normal view), shown inside the 500x500 glass so everything is easier to see. That means as long as the eye stays anywhere within that 250x250 area of the monitor, the glass should not move; it only moves once the eye actually leaves that area.
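A minimal sketch of that dead-zone idea, using the numbers from the example above, might look like this; the moveGlassTo() helper is an assumption.

```javascript
var GLASS_SIZE = 500;            // the magnifying glass is 500x500 pixels
var ZOOM = 2;                    // the glass shows a region half its size: 250x250
var regionSize = GLASS_SIZE / ZOOM;

// Top-left corner of the screen region currently shown in the glass.
var region = { x: 0, y: 0 };

function onGaze(x, y) {
  var inside =
    x >= region.x && x < region.x + regionSize &&
    y >= region.y && y < region.y + regionSize;

  if (!inside) {
    // Re-center the magnified region on the gaze point and move the glass.
    region.x = x - regionSize / 2;
    region.y = y - regionSize / 2;
    moveGlassTo(region.x, region.y); // hypothetical helper
  }
  // If the gaze is still inside the 250x250 region, the glass stays put.
}
```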
This is something to think about later in the research, and I still have other things that I need to finish programming. I need to merge the code, keep this website updated, and so on. I am glad that I am able to extend this software to other types of disabilities.
Week 3
I was supposed to start merging the new feature into the AVD program. However, I was assigned to create a simulation video to show the concept of how the eye tracking would work with AVD. I did not include the new feature in this one; instead I used a temporary approach of mouse-based eye movements, which means the mouse moves to where the person is looking on the screen. I had my own way of manipulating CSS with the mouse: I could make the cursor disappear or draw a circle. The circle is there to show viewers where I am currently looking on the screen. Viewers can see how it works by watching the interpreter pause while I am not looking at it.
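A minimal sketch of that mouse-based simulation might look like this: CSS hides the real cursor and a circle follows the mouse instead, standing in for the gaze point. The class and id names are illustrative.

```html
<style>
  body { cursor: none; }                  /* hide the real mouse cursor */
  #fakeGaze { position: fixed; width: 30px; height: 30px;
              border-radius: 50%; border: 2px solid red;
              pointer-events: none; }
</style>
<div id="fakeGaze"></div>
<script>
  document.addEventListener('mousemove', function (e) {
    var circle = document.getElementById('fakeGaze');
    circle.style.left = (e.clientX - 15) + 'px';
    circle.style.top = (e.clientY - 15) + 'px';
  });
</script>
```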
I created four different videos of exactly the same math lesson. It was really hard to make sure everything stayed in the same time frame. For the first video, I really wanted to show a Deaf person's actual experience in a mainstreamed classroom. I have seen this video many times over almost two years, so I already know what it is talking about; I had to keep reminding myself to pretend I was watching it for the first time. I set up a webcam to record my eye movements close up, and then set up a screen recorder. I used Microsoft Expression Encoder 4 Screen Capture, which records the webcam and the screen at the same time. I was glad it could do that; otherwise it would have been a pain to figure out the timing of when each recording starts and ends.
When the recording was over, I reviewed the short clip (about two minutes) and made sure that every time I looked away from the interpreter on the screen, I removed the professor's voice. When I looked back at the professor, I made sure the professor's voice came back on. This creates a sound cutoff while the professor is still talking. It really gives a two-minute experience of being a deaf person without having to understand or know sign language. It gives you a basic idea of what we are really missing on a daily basis in a mainstreamed classroom, and it shows the importance of implementing AVD in classrooms.
The other three videos do not have the sound cutoff, but there is no sound while the eyes are looking away from the interpreter. Once the eyes go back, the video catches up. The three videos use three different speeds: 2.5x, 2x, and 1.5x. I had to speed up the sound as well, to help convey what it is like to have this feature. It was difficult for me to manipulate the sound, since I am profoundly deaf and have no idea what is being said. I got a little stuck when fixing the sound and ending up with too much or too little of it. What I did was take the two minutes' worth of video and sound, cut the sound, and then shrink it to speed it up. I had to make sure that when the interpreter in the video has "caught up" to live, the sound plays at normal speed again. It was a very complicated task to create those videos.
I also had problems converting videos, and I learned something new: when exporting a video, the original FPS should not be changed to a different FPS. When I finished exporting at the highest possible quality, the video seemed to have some kind of noise; it did not look right and was not very smooth. If the interpreter were not there, the video would look fine, but with lots of hand movements in the air, the hands looked like they had lots of vertical lines. It made the person look less like a human and more like a computer rendering. I am a perfectionist, and I did not want that video as my final product. I figured it out: I had to keep the same FPS, and thankfully all the videos were created the same way. So now I know for next time: if I want to create a video, I should make sure all cameras are set to the same FPS; then the editing and production of the videos should have no problems and no more vertical lines. This took a lot of my time, but it got finished.
I demonstrated the eye tracking AVD and the video at Effective Access Technology Day at the RIT Inn. I used VLC player to loop both videos infinitely. I did not have time to combine the two videos into one, because exporting videos takes a lot of time.
I received some feedback at the conference; I did not even realize that the sped-up sound sounded like a cartoon. It was very interesting, and most people were impressed that sign language can be understood at up to 2.5x speed while voice tops out at around 1.5x. It is my understanding that blind people can have high-speed listening comprehension. I received a lot of positive feedback, and most people were really amazed by the idea of this technology. Some of them tried it themselves; they really liked that when they looked off screen the interpreter paused, and when they looked back it switched to play mode by itself.
Week 2
Since I already had some ideas about how to manipulate the numbers received from the eye tracking device last week, I decided to try to communicate directly from JavaScript to the eye tracking software's API; however, it was not possible. I even tried using a web socket, but it did not work because the handshake was not successful. I found out that any type of web-based socket, such as an HTML5 WebSocket, will only talk to another endpoint that speaks the same protocol over the network; it cannot communicate with a different type of program over a raw port.
So I decided it was not worth my time to struggle with one small thing, the direct communication from the eye tracking API to JavaScript; I needed to get the program running as soon as possible. I already have a Java application that can communicate with the eye tracking API directly, so I decided to have the Java application write a simple text file on the computer. The computer becomes a web server, and JavaScript can read the information being passed along from the eye tracking API.
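A minimal sketch of that workaround on the JavaScript side might look like this: the Java application keeps rewriting a small text file, the machine serves it over HTTP, and the page polls it. The file name and the "x,y" format are assumptions.

```javascript
function pollGaze() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/gaze.txt?ts=' + Date.now()); // timestamp defeats caching
  xhr.onload = function () {
    var parts = xhr.responseText.trim().split(',');
    var x = parseFloat(parts[0]); // normalized 0..1 values from the tracker
    var y = parseFloat(parts[1]);
    handleGaze(x, y); // hypothetical callback used by the rest of the page
  };
  xhr.send();
}

setInterval(pollGaze, 100); // poll ten times per second
```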
Then I created a simple page that displays the information from the eye tracking. I was not sure it was working well, because it only displays x and y positions. Then I used the monitor measurements to convert the eye tracking output into pixel positions on the screen. I was not sure the calculations were correct, so I decided to create a simple draggable and resizable div. When the eye is inside the div, the text changes to true; when the eye is off the div, the text changes to false. I had a minor miscalculation, but I was able to make it work. Then I added two more div tags, and it worked beautifully.
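A minimal sketch of that hit test, checking whether the gaze point (in pixels) falls inside a div's bounding box, might look like this; the element ids are illustrative, not the ones from the actual page.

```javascript
function updateHitTest(gazeX, gazeY) {
  var box = document.getElementById('targetDiv').getBoundingClientRect();
  var inside =
    gazeX >= box.left && gazeX <= box.right &&
    gazeY >= box.top && gazeY <= box.bottom;
  document.getElementById('status').textContent = inside ? 'true' : 'false';
}
```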
The next thing I needed to do was merge the code I had just created with the AVD program. I decided to rewrite almost the whole thing, because I wanted the GUI part of the AVD program to be more jQuery based.
Week 1
This is the final cooperative education requirement for my Bachelor's degree. I have been working on research while also being a full-time student, so I did not have enough time to do some upkeep on the desktop computer. During this week I had to perform some updates and clean up unnecessary files. I found some viruses and removed them. It is a real pain in the neck doing this; I hope that someday I will not have to waste my time on this part and can have someone do it for me.
That way I will have more time to focus on my research work. I actually had to reinstall Windows to be able to install any other general programs. The Windows installer was not working correctly, which gave me a huge headache while figuring out how to fix it; the only way to fix it was to reinstall Windows.
While I was waiting for the Windows installation (I am sure you know how long that takes), I read the API documentation for the Mirametrix eye tracker for my eye-tracking research. I learned that it has no knowledge of the actual screen or monitor size.
It reports X and Y positions between 0 and 1. For example, the upper left corner is 0,0, the upper right corner is 1,0, the lower left is 0,1, and the lower right is 1,1.
Now I was ready to write code based on the information the eye tracker gives me. If the gaze is somewhere between the upper left and upper right corners, it will give me something like 0.5,0; the values are decimals, and most of the time I noticed they had three decimal places. So I had JavaScript detect the screen width and height, then multiply them by the eye-tracking values to get the current pixel position.
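A minimal sketch of that conversion might look like this; it simply scales the normalized readings by the screen size reported by the browser.

```javascript
// The tracker reports gaze as normalized 0..1 values, so multiplying by the
// screen size gives pixel coordinates.
function toPixels(normX, normY) {
  return {
    x: normX * screen.width,   // e.g. 0.5 on a 1920px-wide screen -> 960px
    y: normY * screen.height
  };
}

// Example: a reading of 0.5,0 maps to the middle of the top edge.
var point = toPixels(0.5, 0);
```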
© Accessible Viewing Device