A scamper run using three source port assignment methods was completed. Some of the methods use more bits than others, or use them more efficiently. The number of LBs found will be counted, along with the number of successors in each case. The analysis will also be run at reduced confidence.
The algorithm that analyses churn has been updated to reflect LB interface population changes. This will lead into further development to deal with LB asymmetry and to determine whether convergence has occurred.
A scamper driver has been modified to run the new 99% confidence traces before the old ones, and a run has been initiated. This is to determine whether ICMP rate limiting is affecting traces and causing truncation.
Last week wasn't very productive. I was mostly tied up with lectures and revising. I did give Shane my project to start using. He quickly got the hang of how things worked and wrote a module to push data into a database. Hopefully I'll find some time to do some initial benchmarking on reading the data to see how things actually perform.
The last couple of weeks have been fairly unproductive, with my focus being mainly on other classes, but I did get a couple of things done. Write-metadata went for another round of review, this time based on another patchset from VA Linux. I received feedback for that, and have made most of the adjustments. Testing remains to be done on that front.
In terms of evaluating my overall project, I spent a day attempting to cobble together oftest 1.[1,2] to do some interoperability testing. Even with the of1.[1,2] softswitch implementations that come with the testing frameworks, I've had little luck getting any kind of sane output; it looks like this will be my focus in the coming week.
I'd like to be able to have something positive to put in my upcoming honours talk regarding the standards-compliance of the latest OVS code, but we'll see how that pans out.
One of the busiest weeks I've had, so I had very little time to work on 520. Fixed some bugs in last week's code and did some investigation into using non-blocking OpenSSL. I found out that BIO_gets() line reading also works correctly with non-blocking I/O (it returns a line or it doesn't); I'm not sure how I missed that the first time around, as it means I can get rid of all of my nasty pointer arithmetic and newline finding in my receive code. Made a lot of useful progress towards that on Monday (when I am posting this), with much more elegant code. Combining this with BIO_pending() (which is documented in a strange part of the OpenSSL documentation) means I can avoid reading too far in frame bodies without a content length, by reading a byte at a time from the buffer, without falling back to a single byte read per frame-fetching call or being only partly non-blocking. I have also started preparing for my conference presentation.
Ran some more tests on the IPv6 packet filtering in the AMP ICMP test and
it does indeed appear that the errors are due to packets arriving between
the socket being opened and the filter being applied. That makes most of
the warnings much less worrying, and I've lowered the priority on those
that I can confirm aren't an issue. While investigating this I also found
a situation where various test resources weren't being freed in the
traceroute test if they involved IPv6 addresses. Fixed that as well.
Finished updating the protocol between the different parts of the event
detection process to use the new protocol design. Also changed it from
using local unix sockets to running across the network, as our data
sources will likely be on different machines to the eventing system.
Socket input for the time series data is also now supported.

Updated the sample web scripts that display event information to work
with the new database schema, to confirm that everything is still
working as it should.
Pushed out the AMP matrix changes to the NLNOG RING. Also investigated
colouring cells based on current performance vs historical performance
rather than raw latency values, which was a request they had.
Managed to get the ArimaShewhart detector fully integrated into the anomaly detection system and producing "correct" results. Now started turning my attention to using Nathan's software to provide suitable input and store measurements in a database that can be queried by the presentation / graphing side of the project.
The latest 301 assignment was due on Friday, so spent a fair bit of time helping out students who were having a few pointer difficulties.
Finished a draft revised version of my IMC paper - turns out I hadn't gone over the page limit by as much as I had feared so it was relatively easy to get the paper down to a suitable length.
Fixed a bug in libtrace relating to the use of Linux native on loopback interfaces that was reported by Asad. Might be time to think about a new release soon.
So today I tried to test the 1.2 version of my switch, but instead got stuck with peripheral tasks. Installing a switch capable of using of1.2 was the biggest of them, but Brendon took care of the issues I was having there.
Just gotta get them talking to each other now. The issue seems to be making ryu realise that I want it to talk of1.2 rather than 1.0.
But it is hard to spend Fridays not doing what you set out to do in the first place. Might try doing all my wand stuff earlier in the week.
I investigated the problem of reversed results when packet probing is increased. I thought I was on to something when I found that the reverse case I was studying could be resolved by increasing the number of packets allowed. I did another scamper run with max packets increased, but still found the same behaviour. I confirmed the increased packet counts, then started investigating again and found a reverse case where a clump was found in the new 99% situation and the trace is truncated. This suggests another reason for truncated traces in the new 99% situation, which may explain the anomalous results.
In the meantime I coded the analysis for comparing three source port assignment methods: an initial value of the PID and then incrementing, an initial random value and then bit flipping, and true random assignment of port values. I set this scamper experiment running using 99.9% confidence.
The experiment drawing CDF graphs of the packet count needed to find all previously found LBs, for different LB valencies, was rerun using randomly selected subsets of the same data. This was to get some idea of the variability of the data. Two sets of CDF graphs were drawn.
Just a note for future generations -- the correct file to edit to change the system default application for a given MIME type is:
This took a surprisingly long time to figure out - mainly because of the existence of other similar files such as /usr/share/applications/defaults.list and /etc/gnome/defaults.list.
Also, you can check the default application for a given MIME type with: xdg-mime query default <mime-type>. As an example, the MIME type for PDF is "application/pdf", so "xdg-mime query default application/pdf" reports the PDF handler.
Short week this week due to being in Wellington for Thursday and Friday.
While I was there I caught up with Jamie and Sam Russell at REANNZ for a
chat about AMP and perfSONAR deployments on the network. There should be a
lot of new monitors going in shortly and it would be great if we could run
both measurement platforms.
Spent some time investigating error messages that have been showing up
lately in amplet logs. It appears there is some weirdness happening with
raw icmp6 sockets receiving packets that should have been filtered out by
a socket option. Reading through the kernel source it looks like filters
are doing exactly what they should be doing and I now believe it's due to
packets arriving and being buffered in the time between the socket being
created and the filters being set.
Changed the tooltips in the matrix display to all be fetched via ajax
calls, so none of that data is sent to the client initially. This should
speed up page generation (no need to fetch data for the last week) and
shrink the raw page size further. Will hopefully deploy and test this on
the NLNOG RING matrix shortly.