Benessa's Research Journal

Week 1: June 1 - June 4

I met Monica and she gave me a laptop to share with the other two undergraduate research students. I spent the entire week setting up my PC and the laptop. I downloaded, installed, and configured all of the programs that I will need for conducting research. Some of the programs I installed include GhostView, LaTeX, MiKTeX, and WinEdt. For security purposes, I also installed the latest Norton AntiVirus software, the ZoneAlarm firewall, Spybot Search & Destroy, and Ad-aware.

There was a problem with the CD-ROM driver on the laptop that the graduate students and I could not solve. Windows claimed that the driver was either corrupt or missing; however, it was the most up-to-date driver available. Another person in the research group had the exact same laptop, so I even copied his driver onto mine, and it still gave the same error. None of Microsoft's suggestions in the help pages changed anything, so the only solution left was to reinstall Windows. This meant that I got to reinstall and reconfigure all of the programs and the printer that I had installed on the laptop. Lucky me!

Monica also gave me two papers to read about the design principles of the Internet. The first paper was called Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world. The other paper was called The Design Philosophy of the DARPA Internet Protocols. The grad and undergrad researchers then had a discussion of the papers. Click here to read my notes (pdf format).


Week 2: June 7 - June 11

With all of the configuration stuff out of the way, it was time to get down to business. We formalized our research objectives and goals for the summer. I put all of this information on the research page. At one of the meetings this week, Jawwad, a Ph.D. student, gave a presentation on a simulator project he created that uses transit-stub network topologies. At another meeting, Xinjie, another Ph.D. student, gave a presentation about distributed hash tables. Click here to read my notes from all of the meetings (pdf format).

Monica gave me a textbook about UNIX systems programming so that I could work through some client/server programming examples and familiarize myself with the tools I will need to conduct my research. I ran a serial server and client that were implemented using UICI, the Universal Internet Communication Interface library. I then tried running a more sophisticated version of the server that forks a child to handle communication. This approach is much more effective because, after receiving a request, the server can fork a child and then resume listening for additional requests. I also modified the programs to create a bidirectional server and client, since many real-world applications require symmetric bidirectional communication between client and server.
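
As a sketch of that fork-per-request pattern, here is roughly what the server's main loop looks like, written with plain POSIX sockets rather than the book's UICI wrappers (the port number and the echo behavior are placeholders):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8652);               /* placeholder port */
        if (bind(listenfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }
        listen(listenfd, 8);
        signal(SIGCHLD, SIG_IGN);                  /* let the kernel reap children */
        for (;;) {
            int connfd = accept(listenfd, NULL, NULL);
            if (connfd < 0)
                continue;
            if (fork() == 0) {                     /* child: handle this client */
                close(listenfd);
                char buf[256];
                ssize_t n = read(connfd, buf, sizeof(buf));
                if (n > 0)
                    write(connfd, buf, n);         /* echo the request back */
                close(connfd);
                _exit(0);
            }
            close(connfd);                         /* parent: resume listening */
        }
    }

The key line is the parent's close(connfd): the parent hands its copy of the connection to the child and goes right back to accept(), which is why the forking version can take new requests while old ones are still being served.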


Week 3: June 14 - June 19

After working through the textbook examples, I am writing a ping server and client program using the C language. I learned C++ and Java in school, so I am learning about the differences between C and C++ as I work through the book. I like to throw a bunch of cout statements in my C++ programs when I debug. It seems like a pain to use printf in C, but I guess I'll get used to it.

My ping server listens on a well-known port for client requests. It forks a child to respond to the request while the original server process continues listening. The child calls a function that prints the host name and the output of the uptime command. The client pings the server, reads what comes in on the connection and echoes it to standard output. If it can't connect, it retries a set number of times and sleeps a set number of seconds in between tries.
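
The retry logic in the client boils down to a loop like this sketch (the helper name and the getaddrinfo-based setup are my own choices here, not necessarily what the book uses):

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Try to connect up to `tries` times, sleeping `delay` seconds
       between attempts; returns a connected socket or -1 on failure. */
    int connect_with_retry(const char *host, const char *port,
                           int tries, unsigned delay) {
        for (int i = 0; i < tries; i++) {
            struct addrinfo hints, *res;
            memset(&hints, 0, sizeof(hints));
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(host, port, &hints, &res) == 0) {
                int fd = socket(res->ai_family, res->ai_socktype,
                                res->ai_protocol);
                if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
                    freeaddrinfo(res);
                    return fd;                    /* connected */
                }
                if (fd >= 0)
                    close(fd);
                freeaddrinfo(res);
            }
            sleep(delay);                         /* wait before the next try */
        }
        return -1;                                /* gave up */
    }

On success, the client just reads from the returned descriptor and echoes everything to standard output until the server closes the connection.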

I ran into some performance issues when running the uptime command from the ping server, so Monica lent me some books to help in my quest for a solution. I was using system("uptime"), which produced the desired output; however, calling system() is not the recommended way to run an external command from inside a program. Monica also gave me some other UNIX and Pthreads books. In the past couple of days I haven't worked much on the ping server because I have been reading through parts of the books. I read about UNIX commands, networks, buffer overruns, and other security problems.
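
One cleaner approach the books point toward is popen(), which runs the command but hands the program a stream to read the output from, so the server can capture the uptime text and send it to the client itself. A minimal sketch (the function name is mine):

    #include <stdio.h>
    #include <string.h>

    /* Read the first line of `uptime` output into buf; returns 0 on success. */
    int get_uptime(char *buf, size_t len) {
        FILE *p = popen("uptime", "r");
        if (p == NULL)
            return -1;
        if (fgets(buf, (int)len, p) == NULL) {
            pclose(p);
            return -1;
        }
        pclose(p);
        buf[strcspn(buf, "\n")] = '\0';   /* trim the trailing newline */
        return 0;
    }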

Most of the time I sit at the computer programming, reading, and listening to music. When I run into problems, I consult Jawwad, the student manager of the research group, or I ask Monica. Sometimes Jawwad is out of his office, so I ask the other grad students. Everyone around here is always willing to help.


Week 4: June 21 - June 25

I got an account on PlanetLab with the help of Jawwad and another professor. Jawwad showed me how to create the public/private key pair that I need to authenticate myself when I log on to a node. I uploaded the public key to the server, but I keep the private key on my machine. I also had to make up a passphrase that is 10-30 characters long. PlanetLab can only be accessed with ssh (not telnet) because ssh encrypts the session. My research group has a slice to which users are assigned. I can add nodes to the slice and then ssh into them to run experiments. I added 8 nodes to the slice for now and transferred my files from the local Linux machines on campus to some of the PlanetLab nodes on my slice.

Since the ping server was working correctly on the local machines, it was time to move it to the PlanetLab environment for testing. It sounds simple, but I've run into problems. First, the slice I use comes with the bare necessities; by default it is created with a near-empty file system because each user needs different packages. I need gcc, cc, or some other compiler for my C programs, but there are no compilers on the machines. The Linux header files necessary to compile and run my programs weren't there either. That meant I had to install the header files and the compilers myself. I have never had to do this before, because every machine I've ever worked on has had those basic necessities installed for me. Installation was not as simple as it sounds, because at first I did not have permission to install packages on the machines. After some asking around, I found someone who had worked with this, and he said that all I had to do was su to root; by default root has no password, and each user is root of the slice. Then I had to run a script to install Python and yum. Yum is a tool that installs packages such as gcc, man, the Linux header files, etc. I got the gcc compiler installed relatively easily. However, yum could not find the Linux header files, so I still couldn't compile my ping server!


Week 5: June 28 - July 2

This week I feel like I ran nowhere fast and spent the days banging my head against the wall. PlanetLab has been nothing but trouble for me! I subscribed to a support email list for all PlanetLab users and posted my question about the difficulty of installing the Linux header files. Nobody responded to me! I searched the archived support discussions and found that somebody had had a similar problem a few months ago, but no one had responded to him either. I figured that he might have solved the problem himself by now, so I emailed him. He told me that he had to reinstall the Linux kernel and move the Linux header files into a specific directory where the compiler could find them. I followed his steps and my program compiled! Then I hit a brick wall again when it would not run. I kept getting a "protocol not available" message. I was running my TCP ping server, so the protocol should be available! Anyway, Monica told me to put this aside and move on to the UDP server. Then I can move on to using measurement tools and performing timing experiments. I'm looking forward to that; I am so sick of PlanetLab!


Week 6: July 5 - July 9

I wrote a UDP port client that sends a UDP request to a server to find out which services are available on a host. The client generates a pseudo-random integer for the sequence number of the request. The request is marshaled into a struct containing the sequence number, protocol name, and service name. The server receives the datagram and uses the information to call the getservbyname function declared in the netdb.h header file. This function returns the port number in network byte order along with the set of aliases for the service. The server packs this information into the same struct format the client used and sends it back to the client.
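
A sketch of the server's side of that exchange (the struct layout and field sizes are placeholders, and I've left the alias list out for brevity):

    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Request/response format shared by client and server; sending a raw
       struct like this assumes both ends agree on its layout. */
    struct svc_request {
        uint32_t seq;          /* pseudo-random sequence number */
        char proto[16];        /* e.g. "udp" */
        char service[32];      /* e.g. "daytime" */
        uint16_t port;         /* filled in by the server */
    };

    /* Receive one datagram, look up the service, send the struct back. */
    void handle_request(int sockfd) {
        struct svc_request req;
        struct sockaddr_in cli;
        socklen_t clilen = sizeof(cli);
        ssize_t n = recvfrom(sockfd, &req, sizeof(req), 0,
                             (struct sockaddr *)&cli, &clilen);
        if (n != (ssize_t)sizeof(req))
            return;                                 /* malformed request */
        req.proto[sizeof(req.proto) - 1] = '\0';    /* force termination */
        req.service[sizeof(req.service) - 1] = '\0';
        struct servent *sp = getservbyname(req.service, req.proto);
        req.port = sp ? (uint16_t)sp->s_port : 0;   /* already network byte order */
        sendto(sockfd, &req, sizeof(req), 0,
               (struct sockaddr *)&cli, clilen);
    }

The client fills in seq, proto, and service, sends the struct, and matches the reply against the sequence number it generated.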

Now that I had a good grip on network programming, it was time to get some real research done. I tried to use fping on PlanetLab. Fping is like ping, except that it is meant to be used in scripts, its output is easier to parse, and it can accept an input file of IP addresses to ping. I ran into some trouble using fping on PlanetLab, shock shock. So I decided to work on my website. I spent about 6 hours working on it, so I hope it looks better. :)


Week 7: July 12 - July 16

I used the ping and fping commands on my own computer to gather information about message latency between my computer and computers on campus. I wrote a couple of Perl scripts that use fping to gather the round-trip times for a message to be sent and received between two nodes. I had never used Perl before, so I learned the language as I wrote. It's like a cross between Linux shell scripts and C, so it was pretty easy to pick up. My first script reads a list of addresses from an input file, fpings each address, and prints the results to an output file. The second script does the same thing as the first, but with timed loops: it fpings the list of addresses and waits 30 seconds before fpinging each node again. Of course, the program can be altered to wait longer or shorter than 30 seconds. Right now I have it set to loop 5 times, but that can also be changed. I could easily alter it to fping the nodes every 5 minutes for 3 hours, for example. This type of setup would allow me to gather information about the stability of the connection.
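
In C instead of Perl, the timed-loop script amounts to this sketch (the file names, the fping invocation, and the constants are assumptions; fping's options vary a bit between versions):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const int rounds = 5;                 /* loop 5 times ... */
        const unsigned interval = 30;         /* ... 30 seconds apart */
        for (int i = 0; i < rounds; i++) {
            FILE *out = fopen("fping_times.log", "a");
            FILE *p = popen("fping -e < targets.txt 2>&1", "r");
            if (out == NULL || p == NULL)
                return 1;
            char line[256];
            while (fgets(line, sizeof(line), p) != NULL)
                fputs(line, out);             /* append this round's results */
            pclose(p);
            fclose(out);
            if (i + 1 < rounds)
                sleep(interval);
        }
        return 0;
    }

Changing the interval, the number of rounds, or the target list changes the experiment, which is all the flexibility these measurement runs need.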

I have only been able to run these scripts on my own computer and not on PlanetLab. Why? Because, shock shock, fping would not work on PlanetLab. I got fping to work on my own machine by installing raw sockets, ping, and fping. I tried the same steps on a PlanetLab node at Wayne State, but I got the error message "Invalid argument." Jawwad suggested that I try a node at a different location to make sure that the problem is really with PlanetLab and not just the machines on campus. Well, I tried a node in Taiwan, but I got identical results. Since fping worked on my machine and not on PlanetLab, I feel that the problem must be some file or package that I need to install on PlanetLab. However, I have no idea what that might be. Thus, I resorted to posting a question to the PlanetLab support email list. I have not received any feedback yet. Pamela emailed the man who maintains fping, but he has not been able to solve the problem so far.

Week 8: July 19 - July 23

I wrote a script to parse the output of my fping Perl script. I wrote the parse script in C++, so it didn't take very long. The program reads the output file of the fping script and extracts the time it took to send a packet to each node. Right now, it just writes the times to another file. The whole point of using fping is that the input can come from a file and the output is easy to parse. I haven't added additional features yet because I can't run the fping script on PlanetLab. The times from pinging PlanetLab nodes are the values that I am really interested in, but since fping doesn't work on PlanetLab, I have put this project aside for now.
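
The parsing itself is one pattern match per line; a C sketch of the same idea (the "host is alive (x ms)" line format is an assumption that depends on the fping version and flags used):

    #include <stdio.h>

    int main(void) {
        FILE *in = fopen("fping_times.log", "r");
        FILE *out = fopen("times_only.txt", "w");
        if (in == NULL || out == NULL)
            return 1;
        char line[256], host[128];
        double ms;
        while (fgets(line, sizeof(line), in) != NULL) {
            /* expect lines like: "example.com is alive (12.4 ms)" */
            if (sscanf(line, "%127s is alive (%lf ms)", host, &ms) == 2)
                fprintf(out, "%s;%.3f\n", host, ms);   /* semicolon-delimited */
        }
        fclose(in);
        fclose(out);
        return 0;
    }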

Monica, Jawwad, Pamela, and I have continued to look into the problem with fping on PlanetLab. Since we were not getting anywhere, Monica gave Pamela and me a couple of papers to read: End-to-End Internet Packet Dynamics and Scriptroute: A Public Internet Measurement Facility. After reading the papers, Pamela, Jawwad, and I met with Monica and discussed them.

I visited www.scriptroute.org and generated a reverse path tree from my IP address. It took about 30 minutes for the tree to be completed. The tree was very large and complex, so I had to zoom in on the picture to read the names of the nodes and the time values. To see my tree in a readable size, go here. You can read about how the tree was generated by reading this. I decided that the tree would make a unique background, so that's what you see here. If you are really bored, you can copy the image file and zoom in on the nodes to see the IP addresses.

Scriptroute servers accept scripts written in Ruby. The Scriptroute website provides some sample ping and traceroute scripts, which I edited and submitted to a server. This process was successful without much difficulty. One drawback was that I was not able to ping a computer on the Wayne State campus, probably because of firewall issues. One task that Monica had suggested involved sending bursts of 10 pings to 4 IP addresses at a time, waiting a minute, then sending another 10 probes, and so on, for about 1 hour. This sounded like a reasonable task, but the Scriptroute servers put time limits on script execution for security purposes. Thus the server interrupted my script after less than 2 minutes of execution.

There are various ways to get around this problem. One way is to download the Scriptroute code and execute scripts on my own machine. I followed this path on PlanetLab, and much to my surprise, it worked! However, my script was only successful in pinging machines on campus and timed out when trying to ping any other location.

This led me back to the root of the problem. Monica suggested writing a Ruby script client that sends smaller scripts to a web server. The larger Ruby script would have a loop that would accomplish the overall task of sending 10 probes every minute for an hour by resending a small script to the server each time. The server wouldn't time out the script, because it would only ever see the small script that doesn't have the timed loops. This sounded like a really great idea, but there were roadblocks. First, I had to figure out a way to post the message to the server. That means I needed to see the HTML that is generated after submitting the script across the web interface. How do I see that code if I send my script in a browser directly to the server? The answer is a proxy server! Monica said that configuring a proxy server in between my client and the server would be a good way to see the HTML I need to know in order to write the large Ruby script.

Monica sent me a link to a simple HTTP proxy server that prints out the HTML generated when a webpage is requested. I got the proxy server installed easily, but running it was a problem. It had errors! Jawwad helped me try to figure out the problem and suggested I download the latest version of Perl, the language the proxy server is written in, even though I already had Perl on my machine. So I downloaded Perl, but still received the same error messages. Then, after about 2 hours of trying to fix the proxy server, Jawwad came to my office and told me to change one line of code. I did, and ta-da, it worked! Now that the proxy server worked, I just had to configure my browser to point to it instead of connecting directly to the Internet. I started my server, opened a webpage, and the HTML did print out from the proxy server.

Week 9: July 26 - July 30

Now that I knew what HTML was generated when I submitted my script to the server, I could write my own program to repeatedly submit small scripts. I decided to write the program in Java because of the vast resources available for Java, and I did find a wealth of them. However, all I wanted to do was something simple: connect to a server, send it a script, receive the output, and keep doing this every minute for 2 hours. Pamela and I started writing a Java program, but then we decided that Java provided more machinery than we needed at the moment and that it would be much simpler just to write a short Perl script. So, that's what we did. I went to the library and checked out a few Perl books and, with Jawwad's help, we were able to write the script in only an hour.
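
The whole job fits in a sketch like this C version of the loop (the host, path, form field, and loop constants are all assumptions; our real Perl script also had to send the encoded form of the Ruby text, as described below):

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* POST one form body to the server and dump the response to stdout. */
    static void post_once(const char *host, const char *path, const char *body) {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "80", &hints, &res) != 0)
            return;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
            char req[4096];
            int n = snprintf(req, sizeof(req),
                             "POST %s HTTP/1.0\r\n"
                             "Host: %s\r\n"
                             "Content-Type: application/x-www-form-urlencoded\r\n"
                             "Content-Length: %zu\r\n\r\n%s",
                             path, host, strlen(body), body);
            write(fd, req, n);
            char buf[1024];
            ssize_t got;
            while ((got = read(fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t)got, stdout);   /* the timing results */
        }
        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
    }

    int main(void) {
        /* placeholder: the encoded Ruby script goes in the form body */
        const char *body = "script=...encoded+script+text...";
        for (int i = 0; i < 120; i++) {        /* once a minute for two hours */
            post_once("www.scriptroute.org", "/submit", body);  /* path assumed */
            sleep(60);
        }
        return 0;
    }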

The only problem was that the program only worked if we sent the Ruby script in the encoded form that the proxy server showed us. In this form, spaces, !, #, /, and other special characters are converted into their ASCII codes. We couldn't just send the plain Ruby script, so whenever we changed the script, we had to run it through the proxy again to see what the encoded output was, then send that output as the script to the server and get back the timing results. It was not a big problem; if we changed the script, we just ran it through the proxy. Jawwad, Pamela, and I tried to find a solution that would allow us to send the original Ruby script, but decided, after searching for about an hour, to concentrate our efforts on the overall goal.
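
In hindsight, that conversion is just standard form encoding (percent-encoding), which we could have reproduced ourselves. A sketch of the idea, not code we actually used:

    #include <ctype.h>
    #include <stdio.h>

    /* Write s to out with application/x-www-form-urlencoded escaping. */
    void url_encode(const char *s, FILE *out) {
        for (; *s != '\0'; s++) {
            unsigned char c = (unsigned char)*s;
            if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~')
                fputc(c, out);              /* unreserved: copy through */
            else if (c == ' ')
                fputc('+', out);            /* form encoding uses '+' for space */
            else
                fprintf(out, "%%%02X", c);  /* everything else: %XX hex */
        }
    }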

I ran the script once in the afternoon and once at 5:00 am to get a variety of data. It pinged 4 different IP addresses: two in the United States, one in China, and one in Greece. The next day, Jawwad said that it would be best to ping a total of at least 10 nodes so we would have a larger variety of data. So I set up the script to ping 6 more IP addresses in 5 different countries on 3 different continents. I started the script at around 3:00 in the afternoon, but after an hour, the WSU campus network went down. I couldn't connect to the Internet, so my results were interrupted. I decided to stop the program and set it up to run 3 times, pausing for 4 hours in between runs. The next day, I had three good sets of data ready to be analyzed.

In order to analyze the data, I had to write programs to parse the result files. I wanted to import the files into MS Excel, so I added semicolon delimiters between the fields to make the import process easier. I also wanted to analyze the data by IP address to determine what role geography plays in message latency, so I had to further split the times for each IP address into separate files. That meant 6 separate files for the experiment with 6 IP addresses, for a total of 10 different files. (I used 10 IP addresses total.) Monica suggested that I find the average, minimum, maximum, distribution, and standard deviation of the data. These calculations are not difficult because the tools are built into Excel; parsing the raw data into files that were more meaningful and useful was the time-consuming part of the experiment.
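
The statistics Excel computed for each file are only a few lines of C if you ever need them outside a spreadsheet; a sketch (the input file name is hypothetical, one time value per line; compile with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        FILE *in = fopen("times_node1.txt", "r");   /* hypothetical file */
        if (in == NULL)
            return 1;
        double x, sum = 0, sumsq = 0, min = 1e9, max = -1e9;
        long n = 0;
        while (fscanf(in, "%lf", &x) == 1) {
            sum += x;
            sumsq += x * x;
            if (x < min) min = x;
            if (x > max) max = x;
            n++;
        }
        fclose(in);
        if (n < 2)
            return 1;
        double mean = sum / n;
        /* sample standard deviation, matching Excel's STDEV */
        double sd = sqrt((sumsq - n * mean * mean) / (n - 1));
        printf("n=%ld min=%.3f max=%.3f mean=%.3f sd=%.3f\n",
               n, min, max, mean, sd);
        return 0;
    }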

Week 10: August 2 - August 6

Monica suggested using a histogram to show the distribution of the times, so I created histograms for each IP address. While performing calculations on the data, I discovered that some times were missing. I should have had 1200 times for each run (10 pings per burst * 60 bursts per hour * 2 hours), but for some IP addresses, I had only around 1100 times. I had to search through the original data to find out where the rest went. This was no easy task! I discovered that in the middle of one output file there had been an error transmitting the data, which resulted in a loss of 3 out of 10 bursts. At a spot in another file, the server had never processed the script, so I had only 119 instead of 120 sets of data.

Now that I knew where the problems were, I had to go into my spreadsheets and fix the omissions. Sounds simple? I think not! I spent 7 hours fixing the problems! Yes, that's right, 7 precious hours of my life were spent finding and replacing the missing data! Once the times were in their correct places, I had to recalculate the min, max, mean, standard deviation, and histograms.

Monica also gave Pam and me a paper to read called The Timed Asynchronous Distributed System Model. One of the authors of the paper is Christof Fetzer. Monica got to meet him and discuss her research with him at a conference last week. She shared with us some of the ideas that Christof had about the research.

Since this was my last week, I had to wrap everything up and polish my final report and this website. I met with Monica to go over the results of the experiments and to finish writing my final report. There was so much to do at work and at home, since I had to prepare to move back to Tennessee. Overall, I had a good summer. I enjoyed research and would love to be in the DMP again. I'm looking forward to grad school a year from now. Who knows, maybe after I get my Ph.D., I could become a DMP mentor. Hmm...