Network Event Monitor
The aim of this project is to develop a system whereby network measurements from a variety of sources can be used to detect and report on events occurring on the network in a timely and useful fashion. The project can be broken down into four major components:
Measurement: The development and evaluation of software to collect the network measurements. Some software used will be pre-existing, e.g. SmokePing, but most of the collection will use our own software, such as AMP, libprotoident and maji. This component is mostly complete.
Collection: The collection, storage and conversion to a standardised format of the network measurements. Measurements will come from multiple locations within or around the network, so we will need a system for receiving measurements from monitor hosts. Raw measurement values will need to be stored in a way that supports querying, particularly for later presentation. Finally, each measurement technology is likely to use a different output format, so each will need to be converted to a standard format suitable for the next component.
This component forms the basis of Nathan's 520 project.
Eventing: Analysis of the measurements to determine whether network events have occurred. Because we are using multiple measurement sources, this component will need to aggregate events that are detected by multiple sources into a single event. This component also covers alerting, i.e. deciding how serious an event is and alerting network operators appropriately.
Presentation: Allowing network operators to inspect the measurements being reported for their network and see the context of the events that they are being alerted on. The general plan here is for web-based zoomable graphs with a flexible querying system.
Spent much of my week working on getting BSOD ready to be wheeled out at Open Day once again. During this process, I managed to find and fix a couple of bugs in the server that were now causing nasty crashes. I also tracked down a bug in the client where the UI elements aren't redrawn properly if the window is resized. This hasn't been a big problem in the past, but newer versions of Gnome like to silently resize full-screen apps, which meant that our UI was disappearing off the bottom of the screen. As an interim fix, I've disabled resizing in the BSOD client, but we really should be handling resize events properly.
Received a bug report for libtrace about the compression detection occasionally giving a false positive for uncompressed ERF traces. This is because the ERF header has no identifying 'magic' at the start, so every now and again the first few bytes (where the timestamp is stored) end up matching the bytes we use to identify a gzip header. I've strengthened the gzip check to use an extra byte so the chance of this happening now is 1 in 16 million. I've also added a special URI format called rawerf: so users can force libtrace to treat traces as uncompressed ERF.
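For illustration, a minimal sketch of what a three-byte gzip magic check might look like (this is not libtrace's actual C implementation, just the idea): gzip members start with the two-byte magic 0x1f 0x8b, and the third byte is the compression method, which is 0x08 (deflate) in practice. Matching three bytes instead of two takes the false-positive chance against random leading bytes (like an ERF timestamp) to roughly 1 in 2^24, i.e. about 1 in 16.7 million.

```python
def looks_like_gzip(buf: bytes) -> bool:
    # gzip members begin with the magic bytes 0x1f 0x8b; the third byte
    # is the compression method, which is 0x08 (deflate) in practice.
    # Checking all three bytes makes an accidental match against an ERF
    # timestamp far less likely than a two-byte check.
    return len(buf) >= 3 and buf[0] == 0x1F and buf[1] == 0x8B and buf[2] == 0x08
```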
Friday was mostly consumed with looking after our displays at Open Day. BSOD continued to impress quite a few people and we were reasonably busy most of the day, so it seemed a worthwhile exercise.
Spent a little time reviewing my old YouTube paper in preparation for discussing it in 513.
Tracked down and fixed a few outstanding bugs in my new and improved anomaly_ts. The main problem was with my algorithm for keeping a running update of the median: a rather obscure bug when inserting a new value that fell between the two values being averaged to calculate the median was causing all sorts of problems.
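To show the tricky case, here is a sketch of median maintenance over a sorted list; the real anomaly_ts code updates its median incrementally, whereas this version re-derives it after a sorted insert, which sidesteps the subtle new-value-between-the-two-middle-elements case entirely:

```python
import bisect

class RunningMedian:
    """Maintain the median of values seen so far via a sorted list.

    A sketch only: re-deriving the median from the sorted list avoids
    the bug-prone case of a new value landing between the two middle
    elements of an even-length series.
    """
    def __init__(self):
        self.sorted_vals = []

    def add(self, value):
        bisect.insort(self.sorted_vals, value)

    def median(self):
        n = len(self.sorted_vals)
        mid = n // 2
        if n % 2:
            return self.sorted_vals[mid]
        # Even count: average the two central values.
        return (self.sorted_vals[mid - 1] + self.sorted_vals[mid]) / 2.0
```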
Added an API to ampy for querying the event database. This will hopefully allow us to add little event markers on our time series graphs. Also integrated my code for querying data for Munin time series into ampy.
Churned out a revised version of my L7 filter paper for the IEEE Workshop on Network Measurements. I have repositioned the paper as an evaluation of open-source payload-based traffic classifiers rather than a critique of L7 filter. I also spent a fair chunk of time replacing my nice pass-fail system for representing results with the exact accuracy numbers because apparently reviewers found the former confusing.
Tried to continue my work in tidying up and releasing various trace sets, but ran into some problems with my rsyncs being flooded out over the faculty network. This was quite a nuisance so we need to be more careful in future about how we move traces around (despite it not really being our fault!).
Managed to get a decent little algorithm going for quickly detecting a change between a noisy and constant time series. Seems to work fairly well with the examples I have so far.
Decided to completely re-factor the existing anomaly_ts code as it was getting a little unkempt, especially if we hope to have students working on it. For instance, there were several implementations of a buffer containing the recent history for a time series spread across the various detector modules. Also, most of the detectors that we had implemented were not being used and were creating a lot of confusion. Our main source file also had a lot of branching based on the metric being used by a time series, e.g. latency, bytes, users.
It took the whole week, but I managed to produce a fresh implementation that was clean, tidy and did not have extraneous code. All of the old detectors were placed in an archive directory in case we need them later. Each time series metric is now implemented as a separate class, so there is a lot less branching in the main source. There is also now a single HistoryBuffer implementation that can be used by any detector, including future detectors.
Released the ISP DSL I traces on WITS -- we are now sharing (anonymised) residential DSL traces for the first time, which will no doubt prove to be very popular.
Finished up the 513 marking (eventually!) and released the marks to the students.
Released a new version of libtrace -- 3.0.17.
Started working on releasing some new public trace sets. Waikato 8 is now available on WITS and the DSL traffic from our 2009 ISP traces will hopefully soon follow. In the process, I found a couple of little glitches in traceanon that I was able to fix before the libtrace release.
Decided that our anomaly detection code does not handle time series that switch from constant to noisy and back again particularly well. A classic example is latency to Google: during working hours it is noisy, but at other times it is constant. We detect the switch, but only after a long time. I would like to detect this change sooner and report it as an event (although not necessarily alert on it). I've started looking into an alternative method of detecting the change in time series style based on a pair of sliding windows: one for the last hour, one for the 12 hours before that. It is working better, but is currently a bit too sensitive to the effect of an individual outlier.
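The sliding-window idea can be sketched like so. This is an assumption-laden illustration, not the project's actual code: the windows hold raw measurements, spread is measured with a standard deviation, and the ratio threshold is made up. It also shows the sensitivity problem, since one large outlier in the short window can dominate its standard deviation.

```python
import statistics

def style_changed(recent, history, ratio=4.0):
    """Compare the spread of a short recent window (e.g. the last hour)
    against a longer historical window (e.g. the 12 hours before that)
    to spot a noisy <-> constant transition.
    """
    if len(recent) < 2 or len(history) < 2:
        return False
    recent_sd = statistics.pstdev(recent)
    hist_sd = statistics.pstdev(history)
    if hist_sd == 0 or recent_sd == 0:
        # One window is perfectly constant; the style has changed
        # iff the other window shows any variation at all.
        return hist_sd != recent_sd
    # Otherwise, flag a change when one window is much noisier.
    return recent_sd / hist_sd > ratio or hist_sd / recent_sd > ratio
```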
Fixed the bugs in the anomaly_ts / eventing chain that I introduced last week. We're back reporting events again on the web dashboard.
Wrote ampy modules for retrieving smokeping and munin data from NNTSC so that Brendon could plot graphs of those time series. Doing this showed up some (more) problems in the graphing which Brendon eventually tracked down to being related to how aggregation was being performed within the NNTSC database.
Spent a large chunk of my week marking the 513 libtrace assignment. It is a much bigger class than previous years (over 30 students) so it was pretty time consuming to mark. In general, it was pleasing to see most students had gotten the basics of passive measurement worked out and hopefully they got some valuable experience from it. My biggest disappointment was how many students didn't read the instructions carefully -- especially those who missed the requirement to write original programs rather than blindly copying huge chunks of the example code.
Another short week, due to being away on Tuesday and Wednesday.
Started writing up a decent description of the design and implementation of NNTSC, which would hopefully make for a decent blog post. It also means that the entire thing is stored somewhere other than in my head...
Revisited the eventing side of our anomaly detection process. Had a long but eventually productive discussion with Brendon about what information needs to be stored in the events database to be able to support the visualisation side. We decided that, given the NNTSC query mechanism, events should have information about the collection and stream that they belong to so that we can easily filter them based on those parameters. We used to use "source" and "destination" for this, but streams are defined using more than just a source and destination now.
Updated anomalyfeed, anomaly_ts and eventing to support the new info that needs to be exported all the way to the eventing program. In the process, I moved eventing into the anomaly_ts source tree (because they shared some common header files) and wrangled automake into building them properly as separate tools. Got to the stage where everything was building happily, but not running so well :(
Very short week this week, but managed to get a few little things sorted.
Added a new dataparser to NNTSC for reading the RRDs used by Munin, a program that Brad is using to monitor the switches in charge of our red cables. The data in these RRDs is a lot noisier than smokeping data, so it will be interesting to see how our anomaly detection goes with that data. Also finally got the AMP data actually being exported to our anomaly detector - the glue program that converted NNTSC data into something that can be read by anomaly_ts wasn't parsing AMP records properly.
Spent a bit of time working on adding some new rules to libprotoident to identify previously unknown traffic in some traces sent to me by one of our users.
Spent Friday afternoon talking with Brian Trammell about some mutual interests, in particular passive measurement of TCP congestion window state and large-scale measurement data collection, storage and access. In terms of the latter, it looks like many of the design decisions we have reached with NNTSC are very similar to those that he had reached with mPlane (albeit mPlane is a fair bit more ambitious than what we are doing), which I think was pretty reassuring for both sides. Hopefully we will be able to collaborate more in this space, e.g. developing translation code to make our data collection compatible with mPlane.
Exporting from NNTSC is now back to a functional state and the whole event detection chain is back online. Added table and view descriptions for more complicated AMP tests: traceroute, http2 and udpstream are now all present. Hopefully we can get the new AMP collecting and reporting data for these tests soon so we can test whether it actually works!
Had some user-sourced libtrace patches come in, so spent a bit of time integrating these into the source tree and testing the results. One simply cleans up the libpacketdump install directory to not create as many useless or unused files (e.g. static libraries and versioned library symlinks). The other adds support for the OpenBSD loopback DLT, which is actually a real nuisance because OpenBSD isn't entirely consistent with other OSes as to the values of some DLTs.
Helped Nathan with some TCP issues that Lightwire were seeing on a link. Was nice to have an excuse to bust out tcptrace again...
Looks like my L7 Filter paper is going to be rejected. Started thinking about ways in which it can be reworked to be more palatable, maybe present it as a comparative evaluation of open-source traffic classifiers instead.
Added a data parser module to NNTSC to process the tunnel user count data that we got from Lightwire. Managed to get the data going all the way through to the event detection program which spat out a ton of events. Spent a bit of time combing through them manually to see whether the reported events were actually worth reporting -- in a lot of cases they weren't, so I've refined the old Plateau and Mode algorithms a bit to hopefully resolve the issues. I also employed the Plunge detector on all time series types, rather than just libprotoident data, and this was useful in reporting the most interesting behaviours in the tunnel user data (i.e. all the users disappearing).
Added a couple of new features to the libtrace API. The first was the ability to ask libtrace to give you the source or destination IP address as a string. This is quite handy because normally processing IP addresses in libtrace involves messing around with sockaddrs, which is not particularly n00b-friendly. The second API feature was the ability to ask libtrace to calculate the checksum at either layer 3 or 4 based on the current packet contents. This was already done (poorly) inside the tracereplay tool, but is now part of the libtrace API. This is quite useful for checksum validation, or if you've modified the packet somehow (e.g. rewritten the IP addresses) and want to recalculate the checksum to match.
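The underlying calculation is the standard RFC 1071 one's-complement Internet checksum. A minimal sketch of it, in Python rather than libtrace's C (the real API operates on a packet's layer 3/4 headers in place; this just sums an arbitrary byte string):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement checksum used by IPv4/TCP/UDP (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # sum 16-bit words
    # Fold the carries back into the low 16 bits.
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Running this over an IPv4 header with the checksum field zeroed yields the value to write into that field; running it over a header with a correct checksum in place sums to zero.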
Also spent a decent bit of time reading over chapters from Meenakshee's report and offering plenty of constructive criticism.
The development of NNTSC took another dramatic turn this week. After conferring with Brendon, we realised that the current design of the data storage tables was not going to support the level of querying and analysis that he wanted for AMP data. This spurred me to quickly write up a prototype for a new NNTSC from scratch that allowed each different data collection method to specify exactly how the data table should look. This means that instead of having one unified data table with the inflexible schema of (stream id, timestamp, data value), we now have an AMP ICMP test data table that is (stream id, timestamp, pkt size, rtt, loss, error code, error type) and a Smokeping data table that is (stream_id, timestamp, uptime, loss, median, ping1, ... ping20).
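To make the contrast concrete, here is a hypothetical illustration of per-collection schemas as data; the column names are paraphrased from the description above, not NNTSC's actual DDL:

```python
# Hypothetical per-collection column layouts -- each collection method
# specifies its own data table rather than squeezing everything into a
# single (stream id, timestamp, data value) table.
DATA_TABLES = {
    "amp_icmp": ("stream_id", "timestamp", "pkt_size", "rtt",
                 "loss", "error_code", "error_type"),
    "smokeping": ("stream_id", "timestamp", "uptime", "loss",
                  "median") + tuple("ping%d" % i for i in range(1, 21)),
}
```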
We've also done away with the central queue and simply given each data parser its own connection to our database. This fixes a problem I was having where trying to read data from a file too fast was causing the queue to fill up and run the machine out of RAM.
Smokeping data collection is now working with the new NNTSC, so I now need to write the data parsing modules for each of the other input sources we used to support as well as re-do all the nice installation script stuff I had done for the previous version of NNTSC.
Made some significant modifications to the structure of NNTSC so that it can be packaged and installed nicely. It is now no longer dependent on scripts or config files being in specific locations and handles configuration errors robustly rather than crashing into a python exception. Still got a few bugs and tidy-ups to do, particularly relating to processes hanging around even after killing the main collector.
Managed to get some tunnel user counts from Scott at Lightwire to run through the event detection code. Added a new module to NNTSC for parsing the data, but have not quite got the data into the database for processing yet.
Spent a decent chunk of time helping Meenakshee write and practice her talk for Thursday. Once the talk was done, we got back into the swing of development by fixing some obvious problems with the current collector.
Made a few modifications to Brendon's detectors which make them perform better across a variety of AMP time-series. In particular, the Plateau detector no longer uses a fixed percentage of the trigger buffer mean as its event threshold - instead it uses several standard deviations from the history buffer. Also fixed some problems where, once in an event, we treated all following measurements that were similar to those that triggered the event as anomalous. This is a problem in cases where the "event" is actually the time series moving to a new normality: our algorithm just kept us in the event state the whole time!
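The new threshold style can be sketched in a few lines; the multiplier k and the use of a simple population standard deviation are illustrative assumptions, not the detector's actual tuning:

```python
import statistics

def plateau_threshold(history, k=3.0):
    """Event threshold derived from the history buffer: k standard
    deviations above its mean, rather than a fixed percentage of the
    trigger-buffer mean. `k` is an assumed, tunable constant.
    """
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    return mean + k * sd
```

The advantage is that the threshold scales with how noisy the series already is, so a naturally jittery series needs a bigger excursion to raise an event than a flat one does.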
Once I was happy with that, got the eventing code up and running against the events reported by the anomaly detection stage. Had to make a couple of modifications to the protocol used to communicate between the two to get it working properly (there were some hard-coded entries in Brendon's database that needed a more automated way of being inserted). Tried to get the graphing / visualisation stuff going after that, but there are quite a few issues there so that may have to wait a bit.
Started looking into packaging and documenting the usage of all the tools in the chain that we've now got working. First up was Nathan's code, which is proving a bit tricky so far because a) it's python so no autotools and b) his code is rather reliant on other scripts being in certain locations relative to the script being run.
Added another protocol to libprotoident: League of Legends.
Spent a day messing around with the event detection software, mainly seeing how Brendon's detectors work with the existing AMP data. The new "is it constant" calculation seems to be working reasonably well, but there are still a lot of issues with some of the detectors. Need to spend a bit of uninterrupted time with it to really see how it all works.
Had a quick look at the latest ISP traces with libprotoident to see if there are any obvious missing protocols I can add to the library. Added one new protocol (Minecraft) and tweaked a few existing protocols.
Spent the rest of the week at NZNOG, catching up on the state of the Internets. Most of the talks were pretty interesting and it was good to meet up with a few familiar faces.
Decided to replace the PACE comparison in my L7 Filter paper with Tstat, a somewhat well-known open-source program that does traffic classification (along with a whole lot of other statistic collection). Tstat's results were disappointing - I was hoping they would be a lot better so that the ineptitude of L7 Filter would be more obvious, but I guess this does make libprotoident look even better.
Fixed a major bug in the lpicollector that was causing us to insert duplicate entries in our IP and User maps. Memory usage is way down now and our active IP counts are much more in line with expectations. Also added a special PUSH message to the protocol so that any clients will know when the collector is done sending messages for the current reporting period.
Spent a fair chunk of time refining Nathan's code to a) just work as intended, b) be more efficient and c) be more user-friendly / deployable. I've got it reading data properly from LPI, RRDs and AMP and exporting data in an appropriate format for our event detection code to be able to read.
Started toying with using the event detection code on our various inputs. Have run into some problems with the math used to determine whether a time series is relatively constant or not - this is used to determine which of our detectors should be run against the data.
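One plausible "is it relatively constant" test, shown purely as a sketch (the threshold and the choice of coefficient of variation are my assumptions, not the project's actual math):

```python
import statistics

def is_relatively_constant(values, cv_threshold=0.1):
    """Flag a series as relatively constant when its coefficient of
    variation (stddev / mean) falls below an assumed threshold.
    """
    mean = statistics.fmean(values)
    if mean == 0:
        # A zero-mean series is constant only if it never varies.
        return statistics.pstdev(values) == 0
    return statistics.pstdev(values) / abs(mean) < cv_threshold
```

A test like this is fragile in exactly the ways described above, e.g. it misbehaves for series that hover near zero, which is one reason the calculation needed revisiting.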
Got the bad news that the libprotoident paper was rejected by TMA over the weekend. A bit disappointed with the reviews - felt like they were too busy trying to find flaws with the 4-byte approach rather than recognising the results I presented that showed it to be more accurate, faster and less memory-intensive than existing OSS DPI classifiers. Regardless, it is back to the drawing board on this one - looks like it might be the libtrace paper all over again.
Continued working with Nathan to get smokeping data successfully into the
event detection system. I generated some random data to fill the
historical buffers and then continued to run it over live data, which
generated a small number of plausible looking events. I'm now looking into
the scalability and resource usage of this as it seems a little higher
than it should be. Also polished the dashboard graphs slightly, changing
them to use more sensible axes and better resolution data.
Spent some time with Richard, Tony and Shane thinking about the future
direction of AMP. We've got some good ideas and have a whiteboard full of
initial planning for the work that needs to be done.
Read draft introductions to a number of 520 reports and gave some
hopefully useful feedback. Everyone seems to be on the right track so far,
looking forward to reading more.
Short week this week - took leave on Thursday and Friday.
Released a new version of libtrace (3.0.15) on Monday. Mostly just a few little bug and build fixes, but it had been a while since the last release. Also submitted a patch for the FreeBSD libtrace port which had been broken for a very long time.
Did a bit more refinement on my Plunge and ArimaShewhart event detectors. They're at a stage now where the number of false positives is close to none. False negatives are a bit harder to identify, of course. The next sensible step is probably to think about testing against real-time data and manually validate the events as they roll in.
Spent a day looking at the latest LPI data from a live analysis I have running on our ISP monitor. Managed to get some up-to-date stats on application usage for last September but haven't had a chance to look over it in detail yet.
I did note a bit of an increase in the amount of unknown UDP traffic, so chased up a few of the more common patterns. Have added 3 new protocols to libprotoident as a result: ZeroAccess (a trojan), VXWorks Exploit and Apple's Facetime / iMessage setup protocol.
Added a new anomaly detector to our network event monitor: the Plunge Detector. The basic aim is to detect situations where an otherwise active time series plunges to a very low (or zero) value. Sounds simple, but kinda tricky to do in a generic fashion. The general algorithm is to track the median and minimum observed values over the past N measurements and then raise an alarm when the current value is significantly below both the median and the minimum.
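The algorithm above can be sketched as follows; the window size and the "significantly below" factor are illustrative assumptions, not the detector's actual tuning:

```python
from collections import deque

class PlungeDetector:
    """Sketch of a plunge detector: alarm when the current value drops
    well below both the median and the minimum of the last N values.
    """
    def __init__(self, window=30, factor=0.5):
        self.history = deque(maxlen=window)  # last N measurements
        self.factor = factor                 # assumed drop threshold

    def update(self, value):
        plunge = False
        if len(self.history) == self.history.maxlen:
            ordered = sorted(self.history)
            median = ordered[len(ordered) // 2]
            minimum = ordered[0]
            # Alarm only when the value is significantly below BOTH
            # the recent median and the recent minimum.
            plunge = (value < self.factor * median and
                      value < self.factor * minimum)
        self.history.append(value)
        return plunge
```

Requiring the value to undercut the minimum as well as the median is what keeps an ordinarily spiky series from alarming on every dip.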
Spent much of the week testing both the new Plunge detector and the Shewhart detector against the various LPI time series in my test data set. Lots of refinement going on with both detectors, but starting to get pretty happy with the results.
Started working towards a new libtrace release - mostly just a few little bug fixes and tidyups. Part of the release process is to test it on a FreeBSD machine, but the old emulation image doesn't work with the new emulation network. Set up a FreeBSD 9 machine so that Brendon could make a new image, which was a lot more painful than it should have been. Managed to get libtrace tested and passed the machine over to Brendon for imaging - I expect a decent rant in his weekly report about that step of the process too :)
Tried to make the generated alerts more efficient and more effective by
very slightly delaying the actual alerting - doing so means that the alert
can contain any other events that arrive immediately after the triggering
event. It also now sends me emails for certain event thresholds, but I
broke the live import of AMP data so need to fix that before I can get
more than the emails generated by my test data.
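The small alerting delay can be sketched like this; the callback, delay value and method names are all hypothetical, but the idea matches the description: hold the alert open briefly so events arriving immediately after the trigger ride along in the same notification.

```python
import time

class DelayedAlerter:
    """Sketch: buffer events for `delay` seconds after the first one,
    then send them all as a single alert.
    """
    def __init__(self, delay, send):
        self.delay = delay        # seconds to hold the alert open
        self.send = send          # callback taking a list of events
        self.pending = []
        self.deadline = None

    def add_event(self, event, now=None):
        now = time.time() if now is None else now
        if not self.pending:
            # First event starts the clock; later ones just pile on.
            self.deadline = now + self.delay
        self.pending.append(event)

    def flush(self, now=None):
        """Call periodically; sends the alert once the delay expires."""
        now = time.time() if now is None else now
        if self.pending and now >= self.deadline:
            self.send(list(self.pending))
            self.pending = []
            self.deadline = None
```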
Started trying to make the information presented in the default web
interface a bit more concise and relevant to what is going on right now.
Trying to use a few graphs to give an initial overview of the recent data
while keeping the ability to go and look at everything in detail as you can now.
The AMP deployment on the NLNOG RING was mentioned during a talk at RIPE
about the RING along with screenshots and links back to WAND. The slides
look pretty good and I think it went well.
Continued making tweaks and changes to the Shewhart anomaly detector in response to erroneous events produced when running it against the full set of protocols supported by libprotoident. It now tends to only pick up major or sudden changes in the time series, which is great when dealing with protocols that aren't very common but may not be the best for more popular protocols.
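For reference, the textbook Shewhart control-chart test that this detector is built around looks like the sketch below; k=3 is the classic default, while the actual detector is being tuned per-protocol as described above:

```python
import statistics

def shewhart_alarm(history, value, k=3.0):
    """Classic Shewhart control-chart test: alarm when a value falls
    more than k standard deviations from the mean of recent history.
    """
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        # Perfectly flat history: any deviation at all is anomalous.
        return value != mean
    return abs(value - mean) > k * sd
```

The trade-off mentioned above falls straight out of this: on a low-volume protocol the history is nearly flat, so small absolute changes still trip the k-sigma limit, whereas a popular, noisy protocol needs a much larger excursion.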
Finished my teaching load for 301 - final lecture was given on Monday and marked the last C programming assignment throughout the week. Definitely enjoyed the opportunity to do something a little different and hopefully it was valuable to the students too. It would be great if we could find a way to keep using some of the material I prepared in future courses.
Alerts due to events can now be triggered on individual events as well as
combined event groups. There are checks in place to try to prevent too
many alerts being generated at any one time or by the same events. The
next step may be to actually generate emails to myself for new events to
test that the thresholds are set appropriately and aren't too annoying.
Finished implementing a fix to help minimise the number of event groups by
rearranging them when possible to get better groups.
Had to rewrite some of the event database queries to be more efficient now
that we have many more historical events being added. The database now
does more of the heavy lifting (as it should) rather than doing it in the application code.