My Summer Log

Week 1

I arrived here at Wayne State on Tuesday, June 1, and began moving in and getting settled. For the rest of the week, I worked on setting up my computer with the programs that Dr. Monica Brockmeyer told me would be useful for work this summer. Later in the week, I also attended a lecture given by one of the graduate students, Jawwad, about a simulator he wrote that models the Internet and can be used to test the latency between two nodes.

Week 2

This week began with a discussion of the first two papers I was to read for the summer, "The Design Philosophy of the DARPA Internet Protocols" by David D. Clark and "Rethinking the design of the Internet: The end to end arguments vs. the brave new world" by David D. Clark and Marjory S. Blumenthal. Dr. Brockmeyer first met with Benessa, Liya, and me to bring us up to speed on some of the terminology and concepts discussed in the papers. When the graduate students arrived, the discussion turned to our opinions on how the Internet was designed ("Design Philosophy") and then to how and why the Internet's design might change so that the end-to-end arguments are no longer held so strongly. One example we discussed was spam: if the email server provided an additional service of filtering the emails it delivers, spam could be removed before it ever reaches users' accounts, moving functionality away from the endpoints. Also this week, I had a couple of meetings with Dr. Brockmeyer to discuss the objectives I am to work toward this summer. On Thursday, I attended a talk given by one of the graduate students, Xingie, about Distributed Hash Tables and some of the structured peer-to-peer models, like Chord, CAN (Content Addressable Network), Tapestry, and Pastry.
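To keep the core idea behind Chord and its cousins straight in my head, it helps to boil it down to consistent hashing: nodes and keys are hashed onto the same identifier ring, and each key belongs to the first node at or after its position on the ring. Here is a toy Perl sketch of just that idea (the node and key names are made up, and real Chord adds finger tables so a lookup takes O(log n) hops instead of scanning every node):

    #!/usr/bin/perl
    # Toy consistent hashing, the idea underlying Chord-style DHTs:
    # nodes and keys hash onto one ring; a key is stored on the first
    # node clockwise from its position (the key's "successor").
    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my @nodes = qw(nodeA nodeB nodeC nodeD);    # made-up node names

    # Map any identifier onto a 16-bit ring position.
    sub ring_pos { return hex(substr(md5_hex($_[0]), 0, 4)); }

    # Find the successor node for a key.
    sub lookup {
        my ($key)  = @_;
        my $kpos   = ring_pos($key);
        my @sorted = sort { ring_pos($a) <=> ring_pos($b) } @nodes;
        for my $node (@sorted) {
            return $node if ring_pos($node) >= $kpos;
        }
        return $sorted[0];    # wrap around the ring
    }

    for my $key (qw(alpha beta gamma)) {
        printf "key %-6s (pos %5u) -> %s\n", $key, ring_pos($key), lookup($key);
    }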

Week 3

This week was spent mainly reading and working through the examples in a book that Dr. Brockmeyer lent me, UNIX Network Programming by Stevens. The chapters I worked through covered connection-oriented and connectionless communication, as well as measuring server performance. The main example I worked on was a ping server. This required two computers that can talk to each other, and when the server is pinged, it displays the server's hostname (in dotted-name form, rather than the dotted-decimal notation that current systems display when the ping command is run), the time it started, how long it has been up, the number of users, and the load average (the data reported by the uptime command). This project will continue into the following week to finish up a few minor changes, with the goal of deploying it onto PlanetLab to test it over a more realistic network than the simple LAN available for testing on campus.
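The server itself is in C, following Stevens, but the basic idea fits in a short sketch. Here is a minimal Perl version of it (the port number is arbitrary, and I simply shell out to uptime for the users and load-average data rather than computing them myself):

    #!/usr/bin/perl
    # Minimal sketch of the ping-server idea: for each connection,
    # reply with the server's hostname plus the data that the uptime
    # command reports (time up, number of users, load average).
    use strict;
    use warnings;
    use IO::Socket::INET;
    use Sys::Hostname;

    my $port   = 9999;    # arbitrary port for this sketch
    my $server = IO::Socket::INET->new(
        LocalPort => $port,
        Proto     => 'tcp',
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "cannot listen on port $port: $!";

    while (my $client = $server->accept) {
        chomp(my $uptime = `uptime`);
        print $client hostname(), " | ", $uptime, "\n";
        close $client;
    }

Pointing a client (or even telnet) at that port gets back the one-line status reply.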

Week 4

This week began with finishing up the ping server I was working on last week. Then I started to learn more about PlanetLab and how to access the slices and nodes available to me now that I have my account. There was some learning involved, as I was pretty unfamiliar with ssh and scp (I am much more familiar with Telnet and ftp). Once I got used to the new way of transferring files, I began testing my ping server on PlanetLab, using the existing client I had written on my local machine. The next day or so was continued testing on PlanetLab, with added code for measuring how long it took for the server to send a reply back to the ping command. By Friday, I was ready to make some changes to the server I had running on PlanetLab. Much to my surprise, when I went to recompile the server, gcc was unavailable. This hindered the changes I was going to make. As I searched the Internet for information, I realized that very few utilities and packages were pre-installed on the PlanetLab slice I am using. The question remained: how do I get these packages installed so I can continue? So I started searching the Internet for a gcc package to download. Once I had the file on the local machine and had transferred it to the PlanetLab node I was working on, I ran into more problems: I could not unzip or uncompress the files inside the packages. By this time, I figured I should wait until Monday, when I could ask someone who knows more about PlanetLab to point me in the right direction on setting up the nodes. So I called it a day, and the fourth week was over.

Week 5

This week began with learning more about PlanetLab and how to add packages to the minimal system each node starts with. I began by installing python and yum from a script, available on the PlanetLab web site, that I had overlooked last week. This made the following steps of adding more packages much easier. Once all the packages were installed, I realized that the linux-headers were missing, and after much investigation and time, Benessa and I got them installed, so we could finally edit, compile, and run at least some of the programs on PlanetLab. Even with everything properly installed, I was still running into problems getting the ping server that I wrote to run, even though the server I based mine on was running on PlanetLab. I have yet to figure out this problem; maybe I'll come back to it later this summer, after I have had more experience working with PlanetLab.

Week 6

This week I began working with the UDP protocol and wrote a UDP port server: the client sends the server a request containing a protocol and a service name entered by the user; the server processes the request with the getservbyname function and sends back a response containing the port number and any aliases for the given service. I ran into a few problems writing this code. The main obstacles were extracting the data from a structure that uses a double pointer for the service's aliases, and figuring out why only part of the structure was being sent between the client and server. I solved the extraction with *(struct_name->aliases), which yields the first alias available for a given service; the remaining aliases are still not being displayed. Only part of the structure was being sent because the size parameter was wrong; to fix it, I changed the length from a fixed value to one determined by applying the sizeof function to the specific structure.

This week I also started background reading on fping, a utility I will deploy onto PlanetLab to measure the round-trip latency of a message sent from one machine back to the initiator. I began by installing fping on my local Linux machine and, without any problems, had it up and running. The only glitch is that it requires root access to run, since some of the settings it exposes can flood the network. Luckily, I have root access on the local machine, and I also learned that I can change the program's access permissions. After a few tests of fping against various hosts, I became comfortable using it and decided it was time to install fping on PlanetLab, where I ran into some more problems (surprise, surprise...). The installation seemed to be successful, but when I went to run the program, I received an error about an invalid argument; anything more specific than that, I couldn't tell you. Eh, another week that ends with unanswered questions and unsolved problems.
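Coming back to the port server's lookup for a moment: in C, the aliases sit in struct servent's s_aliases field, a NULL-terminated array of strings, which is why dereferencing the double pointer once only yields the first entry; walking the array out to the NULL would yield the rest. Perl's getservbyname wraps the same C call, so the lookup itself can be sketched in a few lines (this is just an illustration of the lookup, not the server):

    #!/usr/bin/perl
    # The service lookup the port server performs, via Perl's builtin
    # getservbyname (a wrapper around the same C library call). Perl
    # returns the aliases as a single space-separated string instead
    # of C's NULL-terminated char ** array.
    use strict;
    use warnings;

    my ($service, $proto) = @ARGV;
    die "usage: $0 service protocol\n" unless $service && $proto;

    my ($name, $aliases, $port) = getservbyname($service, $proto)
        or die "unknown service $service/$proto\n";

    print "service: $name\n";
    print "aliases: ", ($aliases || "(none)"), "\n";
    print "port:    $port\n";

On a typical system, running it with the arguments domain and tcp prints port 53.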

Week 7

This week began with trying to get fping working correctly on PlanetLab. I spent some time installing a new ping server that uses a different socket implementation, thinking that might be where my problem was, but alas, it was not. I finally decided to email one of the maintainers of fping for additional input on my problem. While I waited for a response, I worked on a Perl script that runs fping repeatedly to measure the time it takes to get from the current machine to a list of other hosts. The eventual goal is to deploy fping on the nodes I have access to in PlanetLab, run this script simultaneously on all of them, and then collect the data from fping and compare the results across machines. Since I remained stuck on getting fping to work properly on the PlanetLab machines, I built out the script so that it runs fping every five minutes over a period of two hours. The script also takes the data collected from fping and sorts it by the host being pinged. Additionally, each write to the final output file includes a timestamp, so that results from different runs can be distinguished.
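The real script has more error handling, but stripped down it looks something like this (the host list and output file name are placeholders, not the ones I actually use):

    #!/usr/bin/perl
    # Stripped-down sketch of the measurement script: run fping against
    # a host list every five minutes for two hours, group the output by
    # host, and timestamp each write so separate runs can be told apart.
    use strict;
    use warnings;

    my @hosts   = qw(host1.example.org host2.example.org);    # placeholders
    my $outfile = 'fping_results.txt';                        # placeholder
    my $rounds  = 24;    # 24 rounds x 5 minutes = 2 hours

    open my $out, '>>', $outfile or die "cannot open $outfile: $!";
    for my $round (1 .. $rounds) {
        my %by_host;
        # -q -C 1 prints one summary line per host, "host : 12.3",
        # with "-" in place of the time when the ping timed out.
        for my $line (`fping -q -C 1 @hosts 2>&1`) {
            chomp $line;
            my ($host) = split /\s*:\s*/, $line, 2;
            push @{ $by_host{$host} }, $line;
        }
        print {$out} scalar(localtime), " (round $round)\n";
        print {$out} "$_\n" for map { @{ $by_host{$_} } } sort keys %by_host;
        sleep 300 if $round < $rounds;    # five minutes between rounds
    }
    close $out;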

Week 8

This week I spent my time reading over two new papers that Monica gave me while we wait for a solution for using fping (or possibly something different now...): "Scriptroute: A Public Internet Measurement Facility" and "End-to-End Internet Packet Dynamics." I also spent a good portion of my time learning more about Scriptroute and how I might use this system for my timing experiments instead of fping on PlanetLab. This process was surprisingly easy, considering all the problems I have run into so far. There was already an example Ruby script implementing a ping server, and with a few minor adjustments to make it ping multiple hosts, I was on my way to using Scriptroute to collect the data I need. One problem now stands in my way, though... the developers of Scriptroute imposed a time limit of around 100 seconds on how long a script can execute. This is fine for short experiments, but I need to collect data over long periods of time (around 2 hours!). So now the question is: how can I bypass the Scriptroute web interface and still use the Scriptroute services? That question is going to have to wait until next week, because I am off to Traverse City ("Up North") with my friend Megan from elementary school for a few days. While I am gone, I will continue to read the papers mentioned earlier and think more about this problem and a possible solution; or maybe, if I am lucky, the problem will be solved by the time I come back from Traverse City.

Week 9

Traverse City was a lot of fun! But now back to work! And I must be lucky, because while I was gone, Benessa, Jawwad, and Monica came up with a way to bypass the Scriptroute web interface indirectly. We run our script through the web server by using a proxy server to capture the POST request that the web site sends to the server; we can then replay that request to perform our experiment over long periods of time as a series of short bursts of activity. I have it set up so that a list of clients is pinged 10 times each time the script is sent. The script runs for a period of about two hours, with a minute between each burst of activity. I also added an additional loop to my Perl script so that it can run through the night, with periods of inactivity while the program is "sleeping." But this is yet another short week for me. One of my best friends from home is getting married this Saturday (July 31), and I'm one of her bridesmaids! So I am heading home for a few days to be a part of the wedding, and while I'm gone I hope to do some analysis on all the data I have been collecting this week.
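Stripped of the details, the burst loop looks something like the sketch below. The URL and form fields here are placeholders standing in for whatever the proxy actually captured (the 10-pings-per-client part lives inside the captured script itself):

    #!/usr/bin/perl
    # Sketch of the burst loop: replay the POST request captured with
    # the proxy once a minute for about two hours. The URL and form
    # data are placeholders for whatever the proxy recorded.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua     = LWP::UserAgent->new;
    my $url    = 'http://scriptroute.example.org/submit';      # placeholder
    my %form   = ( script => '...captured script text...' );   # placeholder
    my $bursts = 120;    # 120 bursts x 1 minute = about 2 hours

    for my $i (1 .. $bursts) {
        my $resp = $ua->post($url, \%form);    # one short burst of pings
        print scalar(localtime), " burst $i: ", $resp->status_line, "\n";
        sleep 60 if $i < $bursts;              # a minute between bursts
    }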

Week 10

Wow! I can't believe that this will be my final week up here at Wayne State! So much to do, and it seems like too little time to do it all in. Unfortunately, I did not get as much of the data analyzed as I would have liked while I was home, so I've spent a lot of extra time catching up on it all. I ran the experiment two different times last week, once with four clients and then again with six other clients: a total of ten different clients, each with approximately 3600 lines of data! Did that much data ever seem daunting as I started to look it all over. To make things easier to handle, I wrote a text parser in Perl that puts each client's data into its own file, which I then pulled into an Excel document. There I determined the minimum, maximum, and average latencies, along with the standard deviation and the number of timeouts for each client, both for all of the data combined and for each of the three runs individually. I also added a histogram for each client's data group and began looking at the differences between them, both for a time frame that might show a more stable period and at the timeouts, to see whether there is any way to describe or predict them. Additionally, I looked at the data to see whether, for a particular pair of endpoints, I could determine a maximum threshold beyond which resending a packet would get it to arrive before the original does.
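The parser itself is short. Here is a stripped-down sketch of it, assuming lines in the form fping's -C output produces, with the host name first and "-" marking a timeout; it also computes the summary statistics, though in practice I did those in Excel:

    #!/usr/bin/perl
    # Sketch of the post-processing: split the collected data into one
    # file per client and report min, max, average, standard deviation,
    # and timeout count. Assumes lines like "host : 12.3", with "-" for
    # a timeout; timestamp lines do not match and are skipped.
    use strict;
    use warnings;
    use List::Util qw(min max sum);

    my (%samples, %timeouts);
    while (my $line = <>) {
        next unless $line =~ /^(\S+)\s*:\s*(\S+)/;
        my ($host, $val) = ($1, $2);
        if ($val eq '-') { $timeouts{$host}++ }
        else             { push @{ $samples{$host} }, $val }
    }

    for my $host (sort keys %samples) {
        my @t   = @{ $samples{$host} };
        my $avg = sum(@t) / @t;
        my $sd  = sqrt(sum(map { ($_ - $avg)**2 } @t) / @t);

        # One file per client, ready to pull into Excel.
        open my $fh, '>', "$host.dat" or die "cannot write $host.dat: $!";
        print {$fh} "$_\n" for @t;
        close $fh;

        printf "%-30s min %7.2f max %7.2f avg %7.2f sd %6.2f timeouts %u\n",
            $host, min(@t), max(@t), $avg, $sd, $timeouts{$host} || 0;
    }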