Spent a lot of time preparing for next week's presentation: writing a
report and getting a demo program sorted. SD card reading and writing is
also working well now, and I am in the process of tidying up my code a bit more.
I have been proceeding with setting up PlanetLab for turnover analysis. One of the requirements is to allow data collection to continue from where it got to after a crash or scheduled shutdown. The changes for this have been made and tested. A side benefit is that a list of completed trace sets is compiled, which can be used as a measure of scamper's progress.
The first test run of scamper on a PlanetLab node ran quite fast. This is good news, as a shorter initial time interval may be used, assuming that all 15 PlanetLab nodes run just as quickly. The longer interval of two months may have to be omitted, as PlanetLab slices require renewal and I am not sure how often that can be done.
Spent the first few days of the week working on my presentation and then spent the whole Friday taking care of some tickets.
Previously the server was not handling disconnects from clients, so it would still try to send data to their file descriptors. I fixed that first and then worked on not sending statistics for deprecated (NULL) protocols, which saves both bandwidth and effort.
For the next week I need to tackle threading, which I am not looking forward to.
Added simple configuration file parsing using libconfuse to the new
measured and xferd. This makes it easier to configure where the collectors
are running and how to connect to them as well as other things like
ampname, location of tests etc.
Continued with the restructuring from last week and removed some of the
code that had been duplicated between different parts. Had to slightly
change the point at which the connection to the local broker is
established, because the test processes can't operate on a shared
connection without their own channel, but I haven't yet found a nice way
to create the channel in the parent process.
Made some significant modifications to the structure of NNTSC so that it can be packaged and installed nicely. It is no longer dependent on scripts or config files being in specific locations, and it handles configuration errors robustly rather than crashing with a Python exception. Still a few bugs and tidy-ups to do, particularly processes hanging around even after the main collector is killed.
Managed to get some tunnel user counts from Scott at Lightwire to run through the event detection code. Added a new module to NNTSC for parsing the data, but have not quite got the data into the database for processing yet.
Spent a decent chunk of time helping Meenakshee write and practice her talk for Thursday. Once the talk was done, we got back into the swing of development by fixing some obvious problems with the current collector.
This week I focused on changing the NTP software to make it interface correctly with the FPGA hardware, which involves finding all the locations where the system time value is set or adjusted. I also spent some time trying to get the pulse-per-second signal we receive from the GPS registering in Linux so that it can be used to discipline the clock.
Worked on the i.MX6 driver and got pause frames enabled, but this didn't increase receive performance as much as I hoped. Made the driver advertise its hardware timestamping capability. Looked for the proper clock source (in the documentation and registers) hoping to remove my hardcoded values, but couldn't find it.
Added hardware timestamping code to amplet. After a bit of poking around in the kernel, enabling send timestamps for INET RAW sockets (FILTER RAW is fine) doesn't seem easy, so for now ICMP only gets hardware receive timestamps. That will remain the case until either the kernel supports hardware send timestamps for an INET RAW socket, or we build packets from the link layer on a FILTER RAW socket (which is a lot of work resolving IP to MAC addresses).
I had a meeting with my new chief supervisor, Richard Nelson, and we decided to make a start on PlanetLab. I registered online and was approved; I now have to wait for our PI to allocate me a slice. In the meantime I have been working through the instructions in the PlanetLab user guide and have set up an SSH key. I am also working on stop and start scripts to operate scamper on PlanetLab nodes.
Fixed up the support for IPv6 connections in the current amp xfer to work
when the client only has IPv6 available but the collector has both A and
AAAA records.
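The standard way to handle this case is to resolve with AF_UNSPEC so both address families come back, then try each result in turn: an IPv6-only client simply fails the IPv4 attempts and succeeds on the AAAA result. A sketch under assumed names (the host and port are illustrative, not the amp collector):

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try every address the name resolves to; return a connected fd or -1. */
static int connect_any(const char *host, const char *port) {
    struct addrinfo hints, *res, *rp;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;        /* both IPv4 and IPv6 results */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0) return -1;
    int fd = -1;
    for (rp = res; rp != NULL; rp = rp->ai_next) {
        printf("trying an %s address\n",
               rp->ai_family == AF_INET6 ? "AF_INET6" : "AF_INET");
        fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (fd < 0) continue;
        if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0) break;
        close(fd);                      /* this family unreachable: next */
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void) {
    int fd = connect_any("localhost", "9");  /* port 9 likely refused */
    printf(fd >= 0 ? "connected\n" : "no address connected\n");
    if (fd >= 0) close(fd);
    return 0;
}
```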
Started to write the server portion of the new amp, with a generic
consumer process that reads from the broker and performs the appropriate
test-specific saving functions. With the addition of this and a display
function callback, the tests now operate almost identically whether run as
part of measured or standalone, up to the point where they report.
Spent some time fixing the structure and build system for the new amplet,
as adding the server portion meant that a lot of code was now common
between it and the client.
Made a few modifications to Brendon's detectors which make them perform better across a variety of AMP time-series. In particular, the Plateau detector no longer uses a fixed percentage of the trigger buffer mean as its event threshold; instead it uses several standard deviations above the history buffer mean. Also fixed a problem where, once in an event, we treated all subsequent measurements similar to those that triggered the event as anomalous. This is a problem when the "event" is actually the time series moving to a new normality: our algorithm just kept us in the event state the whole time!
Once I was happy with that, got the eventing code up and running against the events reported by the anomaly detection stage. Had to make a couple of modifications to the protocol used to communicate between the two to get it working properly (there were some hard-coded entries in Brendon's database that needed a more automated way of being inserted). Tried to get the graphing / visualisation stuff going after that, but there are quite a few issues there so that may have to wait a bit.
Started looking into packaging and documenting the usage of all the tools in the chain that we've now got working. First up was Nathan's code, which is proving a bit tricky so far because a) it's python so no autotools and b) his code is rather reliant on other scripts being in certain locations relative to the script being run.
Added another protocol to libprotoident: League of Legends.