Stephen Eichler's blog
I ran a sanity check on the warts data and detected a bug. I wrote a fix, tested it, and sent it to CAIDA. The fix has been implemented and the next round of measurements has been kicked off.
The mostly unchanged run on the Internet Simulator has produced some errors. Although the magic-number and bus errors are gone, there were still hash-table errors saying that the next hop was not found. I kicked off a completely native run of the Internet Simulator, and after studying the errors from the last run I hope to start a run that allows longer times before stop-set control messages cause an abort.
I have been downloading warts data as analysis nodes finish their runs. I have also been working out the most efficient ways to do this, in particular how to avoid a two-step scp via an intermediate node, and designing bash scripts for carrying out a series of downloads.
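One way to avoid the two-step copy, assuming the intermediate node runs a recent OpenSSH, is scp's ProxyJump option. This is only a sketch: the gateway name, node names, and paths below are placeholders, not the real measurement hosts.

```shell
#!/bin/sh
# Sketch only: gateway.example.org, node1..node3, and /data/*.warts
# are placeholder names, not the actual infrastructure.
GATEWAY="gateway.example.org"   # intermediate node to tunnel through
DEST="$HOME/warts"              # local destination directory

# build_scp_cmd prints a one-step scp command that tunnels through
# the gateway (-o ProxyJump requires OpenSSH 7.3 or later), so the
# files never land on the intermediate node's disk.
build_scp_cmd() {
    echo "scp -o ProxyJump=$GATEWAY $1:/data/*.warts $DEST/$1/"
}

for node in node1 node2 node3; do   # placeholder analysis nodes
    build_scp_cmd "$node"           # prints the command; pipe to sh to execute
done
```

Printing the commands first (rather than running them) makes it easy to eyeball a whole batch before kicking off the downloads.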
Some time has been spent studying the code that drives the Internet Simulator, with a view to making modified versions to investigate the efficiency of Doubletree, and to study the Internet coverage of Atlas and Hubble in their search for black holes.
The simulator is currently running and there is now no sign of the error messages that were appearing on previous runs. It is also making use of multiple cores on Wraith, which is another good sign.
I have also been writing up a simple summary on the current overview of this research.
After getting an ssh connection issue fixed, the CAIDA and PlanetLab scamper runs were kicked off. PlanetLab has one node where scamper does not continue, but there are five CAIDA nodes where this happens. On further analysis I found some driver core dumps, so I recompiled the driver on those nodes. I have also enabled debugging on one of them for now, so I should get a useful core from it.
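For the cores, a non-interactive gdb backtrace is usually enough to see where the driver died. This is a generic sketch, not the actual workflow: the binary name sc_driver and the core file pattern are my assumptions.

```shell
#!/bin/sh
# bt_cmd prints a gdb invocation that loads a binary plus a core file
# and emits a backtrace without entering interactive mode.
bt_cmd() {
    echo "gdb -batch -ex bt $1 $2"
}

# Placeholder binary name and core-file pattern (assumptions):
for core in core.*; do
    [ -e "$core" ] || continue   # skip cleanly if no cores are present
    bt_cmd ./sc_driver "$core"   # prints the command; pipe to sh to run
done
```

Recompiling with debugging symbols, as done above for the one node, makes the backtrace symbolic rather than a list of raw addresses.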
We had a meeting with Tony about the Internet Simulator. A solution to the error messages was found: the simulator appears to be happy to use incomplete memory maps if these are accidentally created by aborting the program at a crucial time. The error condition was repeatable, so Tony wants a tarball of the damaged scenario so he can produce a modified version of the simulator where this does not happen. I have rerun the simulator using the original tarball, without any aborted runs and with my few modifications. This analysis is still running.
After finding that I could not directly access all of the CAIDA nodes, I was given a CAIDA controller node that can access the analysis nodes. However, my attempts to log in to this controller node have failed, so I have had to request further assistance.
The Internet Simulator has produced some new error messages under its modified analysis settings. I went ahead and graphed the results collected under these non-ideal conditions. I will have a chance to consult with Tony McGregor about these issues tomorrow.
Further negotiations with CAIDA have been carried out to allow the traces with the maximum packets per trace to run at a higher rate, 300 pps.
The Internet Simulator was modified and is running. Some effort has gone into finding out how to make small runs quicker by using more cores.
The files for the CAIDA nodes have been prepared, including the different files necessary for analysing with either the maximum number of probes or the reduced limit. Two nodes will use the former and twenty-three the latter.
A modified analysis was created for the Internet Simulator. This used existing available factor levels in a modified pattern, in order to confirm the function of the simulator. It was noticed that the simulator generates some error messages in its native state, and it would be useful to find out whether these are important, or what can be done about them. The simulation is still running, and I note that with the smaller number of simulations only one core on Wraith is being used.
The set-up procedure with CAIDA is progressing and we should be probing soon.
Negotiations were carried out with CAIDA Ark regarding the use of some of their nodes for scamper data collection. Further optimisations of scamper for this environment have been carried out on Yoyo, and the set of messages that scamper reports to the error file has been further reduced. This means it may not be necessary to use screen sessions to manually control each node in order to access error messages.
Scamper has also been upgraded to use a wide range of addresses when running the per destination modes.
An initial document regarding the use of the Internet Simulator has been written. I have been studying the code to determine how the initial changes may be made. A set of parameter settings for the first new scenario has been produced and some optimal values have been obtained from published information on the topic.
Some further optimisation of scamper memory usage was carried out using test runs on Yoyo.
Negotiations with CAIDA about the use of Ark nodes have been carried out, and testing is to begin soon.
A test run of ICMP data collection is being carried out on PlanetLab; ICMP data will be collected there, while UDP and TCP data will be collected on Ark.
Investigation into the Internet Simulator has been carried out to see why only a subset of the possible simulations, based on the available parameter levels, was carried out. I have been learning about the diagnostic programs that come with the simulator, including how to make graphs.
Completed an initial Internet Simulator run, and began to read up on how to produce graphs from the output data.
Ran daily scamper runs on Yoyo to optimise virtual memory and probes-per-second usage by modifying the structure of the experiment. I ran UDP and TCP source-port MDA without ICMP echo mode; hopefully it won't be necessary to run UDP and TCP separately. Windows were limited to 60 or unlimited, PPS to 300 or 200, and total probes to 15,000 or 65,000. The aim was to process about 5,000 addresses per day so that the same can be done on the CAIDA Ark nodes.
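As a back-of-envelope check (my arithmetic, not figures from the measurements), the pps ceiling and the per-destination probe cap bound how many addresses a node can finish in a day:

```shell
#!/bin/sh
PPS=300            # probes-per-second ceiling
MAX_PROBES=15000   # worst-case probes for a single destination
DAY=86400          # seconds in a day

budget=$((PPS * DAY))            # total probes available per day
worst=$((budget / MAX_PROBES))   # destinations/day if every trace hits the cap

echo "daily probe budget: $budget"
echo "worst-case destinations per day: $worst"
```

At 300 pps the worst case works out to 1,728 destinations per day, so reaching roughly 5,000 depends on most traces stopping well short of the 15,000-probe cap.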
Procedures to determine the cause of the low throughput on PlanetLab have been carried out.
Diagnosis on PlanetLab is also limited: screen cannot run, and commands run from cron fail with errors about requiring a tty, even though no password is required for sudo on the nodes.
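The tty errors are typical of sudo's requiretty default, which refuses to run sudo from anything without a terminal, cron included. Where the nodes allow editing sudoers (an assumption; PlanetLab slices may not), a drop-in like the following relaxes it for the measurement account only. The account name and file name here are placeholders:

```
# /etc/sudoers.d/50-cron-no-tty  (hypothetical file; edit with visudo -f)
Defaults:slice_user !requiretty
```

Scoping the override to one user, rather than disabling requiretty globally, limits the change to the cron jobs that actually need it.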
Scamper and the churn driver have both been modified to produce error logs that do not grow too long.
The Internet Simulator is being run in its native state, as received from Tony, to get used to what it does. It has been running on Wraith for about five days so far.