Brendon Jones's blog
Wrote an input module for NNTSC that reads AMP data from a rabbitmq queue
and transforms it appropriately for inserting into the database (and into
the event detection tools). So far it deals correctly with data from the
icmp and dns tests.
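The transform step of such an input module could look something like this minimal sketch. The message layout, queue name, and function names here are illustrative assumptions, not the real NNTSC code, and the pika consumer at the bottom needs a running broker:

```python
import json

def transform_amp_message(body):
    """Convert a JSON AMP report into a (timestamp, source, test, data) row.

    The field names are hypothetical; the real AMP messages may differ.
    """
    msg = json.loads(body)
    return (msg["timestamp"], msg["source"], msg["test"],
            json.dumps(msg["data"]))

def on_message(channel, method, properties, body):
    row = transform_amp_message(body)
    # insert_row(row)  # hand the row to the database / event detection
    channel.basic_ack(delivery_tag=method.delivery_tag)

if __name__ == "__main__":
    import pika  # requires a running rabbitmq broker
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="amp-data", durable=True)
    channel.basic_consume(queue="amp-data", on_message_callback=on_message)
    channel.start_consuming()
```

Keeping the transform as a separate pure function makes it easy to test without a broker.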
Spent some more time trying to understand the complexities of the new AMP
graphing system in order to make some changes. There are a few edge cases
that don't display data in the right way and I believe that more data is
being fetched than is necessary in some cases. Continued to refactor
any code I touched for readability as I went.
Spent most of the week working through the new AMP graphing code to get a
feel for how it works. Had to refactor some portions and improve the
readability to properly understand what was going on. Also managed to
sneak in a few improvements to the loading times by removing queries for
data that wasn't really being used.
Installed the new amplet software onto our test machine to have a working
datasource for the event detection testing. Got it up and running and now
have to integrate the example consumer into NNTSC.
After more discussion with Shane I overhauled the data interface for the
new AMP collector to be entirely python rather than a combination of c and
python. This makes it much easier to write, build and integrate into the
new NNTSC that he has been working on. Wrote python modules for the icmp
and dns tests.
Fleshed out the dns test reporting and printing functions to include all
the information that is available (with output loosely based on dig), and
added more information about the addresses tested in the icmp test when
names were used, to properly differentiate between responders. Also
expanded the test reporting protocols to include version numbers, to
help make sure servers and clients don't report incorrect data if they
get out of sync.
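A versioned report header of the kind described could be sketched like this; the header layout and version value are hypothetical, not the real AMP wire format:

```python
import struct

# A version field up front lets server and client detect a mismatch
# before trying to decode the rest of the message.
REPORT_VERSION = 2
HEADER = struct.Struct("!HH")  # (protocol version, test id), network order

def encode_report(test_id, body):
    return HEADER.pack(REPORT_VERSION, test_id) + body

def decode_report(raw):
    version, test_id = HEADER.unpack_from(raw)
    if version != REPORT_VERSION:
        raise ValueError("report version %d != expected %d" %
                         (version, REPORT_VERSION))
    return test_id, raw[HEADER.size:]
```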
Tidied up some loose ends and corner cases around reloading the name table
or schedule during operation that would cause tests to have old references
to destinations. Also changed the structure of the nametable in order to
have easier lookups of names from addresses and vice versa from all parts
of the code.
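A two-way nametable along those lines could be sketched like this; the class and attribute names are illustrative, not the actual AMP data structure:

```python
# Keeping both mappings in sync makes name->address and address->name
# lookups cheap from any part of the code.
class NameTable(object):
    def __init__(self):
        self.addresses_by_name = {}   # name -> set of addresses
        self.names_by_address = {}    # address -> name

    def add(self, name, address):
        self.addresses_by_name.setdefault(name, set()).add(address)
        self.names_by_address[address] = name

    def remove_name(self, name):
        # Drop the name and every stale address reference in one place,
        # so nothing keeps old references to destinations after a reload.
        for address in self.addresses_by_name.pop(name, set()):
            del self.names_by_address[address]
```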
Spent some time talking with Shane about how best to integrate AMP with
the data collector that Nathan wrote. That looks like the best way for
now to get collection/storage, as that project is ideally meant to provide
nice, easy ways to put data in and fetch it out again. To help facilitate
this I started writing some code to help move the data between C and
Python.
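Moving fixed-layout C struct data into Python usually comes down to something like the following sketch; the record layout here (timestamp, rtt, address) is purely illustrative, not the real AMP format:

```python
import struct

# Hypothetical C-side record: uint32 timestamp, uint32 rtt in
# microseconds, 16-byte address, all in network byte order.
RECORD = struct.Struct("!II16s")

def unpack_record(raw):
    """Turn the raw bytes of one C record into a Python dict."""
    timestamp, rtt_usec, address = RECORD.unpack(raw)
    return {"timestamp": timestamp, "rtt_usec": rtt_usec,
            "address": address}
```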
Spent an afternoon watching student presentations as a warmup for their
presentations to the department next week. We've got a good bunch of
students and the talks are looking good, so hopefully all goes well on the
day.
Added simple configuration file parsing using libconfuse to the new
measured and xferd. This makes it easier to configure where the collectors
are running and how to connect to them, as well as other things like the
ampname, location of tests, etc.
Continued with the restructuring from last week and removing some of the
code that had been duplicated between different parts. Had to slightly
change where the connection to the local broker is established, because
the test processes can't operate on a shared connection without their own
channel, and I haven't yet found a nice way to create the channel in the
parent process.
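The per-process connection pattern might look like this sketch (assuming pika; `open_channel` and `run_in_child` are hypothetical names). The point is that each child makes its own connection after the fork rather than sharing the parent's:

```python
import os

def open_channel():
    """Open a fresh connection and channel; needs a local rabbitmq broker.

    An AMQP connection and its channels cannot safely be shared across
    fork(), so each test process calls this itself after forking.
    """
    import pika
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    return connection, connection.channel()

def run_in_child(func):
    """Fork and run func() in the child, then wait for it to finish."""
    pid = os.fork()
    if pid == 0:
        try:
            func()  # e.g. lambda: open_channel() then report results
        finally:
            os._exit(0)
    os.waitpid(pid, 0)
    return pid
```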
Fixed up the support for ipv6 connections in the current amp xfer to work
when the client only has ipv6 available but the collector has both A and
AAAA records.
Started to write the server portion of the new amp, with a generic
consumer process that reads from the broker and performs the appropriate
test-specific saving functions. With the addition of this and a display
function callback the tests now operate almost identically when run as
part of measured or run standalone up to the point where they report.
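The generic-consumer dispatch could be sketched like this; the test names and save functions are placeholders, not the real AMP code:

```python
# Stand-ins for real test-specific database inserts.
def save_icmp(data):
    return ("icmp", data)

def save_dns(data):
    return ("dns", data)

# One registry maps each test type to its saving function, so a single
# consumer can handle every report that arrives from the broker.
SAVE_FUNCTIONS = {"icmp": save_icmp, "dns": save_dns}

def consume(test, data):
    try:
        save = SAVE_FUNCTIONS[test]
    except KeyError:
        raise ValueError("no saving function registered for %r" % test)
    return save(data)
```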
Spent some time fixing the structure and build system for the new amplet,
as adding the server portion meant that a lot of code was now common
between it and the client.
Updated the AMP web interface to use the new ampy API functions added at
the end of the previous week. Also worked on the look and feel a bit to
pretty it up before it got presented at NZNOG.
Spent most of the week at NZNOG in Wellington. Quite a few interesting
talks this year and it was good to catch up with people (Perry, REANNZ,
and others).
Added headers to the test result reporting messages sent via rabbitmq
to describe metadata that will be present for all tests (source,
timestamp, etc). Wrote some code to decode and use the header information
and had a think about how best to implement consumers on the server side.
Started to implement a basic server within the AMP framework that will
receive messages from the server message queue and print information about
the test results it receives.
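The common metadata headers could be handled along these lines; the header names are illustrative (not the real AMP ones), and the publish at the bottom assumes pika's `BasicProperties` and a running broker:

```python
import time

def make_headers(source, test):
    """Build the metadata headers present on every test report."""
    return {"x-amp-source": source, "x-amp-test": test,
            "x-amp-timestamp": int(time.time())}

def decode_headers(headers):
    """Pull out the common metadata so a generic consumer can route the body."""
    return (headers["x-amp-source"], headers["x-amp-test"],
            headers["x-amp-timestamp"])

if __name__ == "__main__":
    import pika  # requires a running rabbitmq broker
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.basic_publish(
        exchange="", routing_key="report", body=b"...",
        properties=pika.BasicProperties(
            headers=make_headers("amplet1", "icmp"),
            delivery_mode=2))  # persistent message
```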
Made a first attempt at a schema for the new AMP database and have started
to populate it with real data. Spent some time updating the data fetching
functions in ampy to use this real data rather than the hardcoded test
values. Also expanded the API slightly with more options to select
sites/meshes and added simple caching (using memcache) to some of the
functions used by the matrix view.
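The simple caching added to the matrix-view lookups can be sketched with a small wrapper like this; the key format and names are illustrative, and any memcache-style client with `get`/`set` would slot in:

```python
def cached(cache, key, ttl, fetch):
    """Return the cached value for key, calling fetch() only on a miss."""
    value = cache.get(key)
    if value is None:
        value = fetch()
        cache.set(key, value, ttl)
    return value

class DictCache(object):
    """Dict-backed stand-in for a memcache client, for this example."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value, ttl):
        self.store[key] = value

# With python-memcached this would instead be used as:
#   import memcache
#   mc = memcache.Client(["127.0.0.1:11211"])
#   data = cached(mc, "matrix:src:dst", 60, lambda: expensive_query())
```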
Updated the python AMP API to return real data rather than random data. It
fetches data from the existing interface so it is a little slow, but it is
important to have real data to make sure we are doing sensible things with
the graphs. Updated the matrix to properly make use of the real data.
Built a working shovel config for rabbitmq to move data from amplets to
the server in a reliable manner. Figured out how to properly set headers
and other message attributes when reporting data and spent some time
deciding what test information should be in the message header and what
should be in the body. Also tidied up the install process slightly to make
sure all tests always get installed into the correct location (this avoids
problems I was having with old test libraries being used).
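A static shovel definition of the kind described, in the Erlang-term rabbitmq.config style of that era, would look roughly like this; the shovel name, queue name, and destination URI are placeholders, not the real deployment:

```
[{rabbitmq_shovel,
  [{shovels,
    [{amp_reports,
      [{sources,      [{broker, "amqp://"}]},
       {destinations, [{broker, "amqp://collector.example.org"}]},
       {queue,        <<"report">>},
       {ack_mode,     on_confirm},
       {reconnect_delay, 10}]}]}]}].
```

The important properties for reliable transfer are `on_confirm` acknowledgements and the automatic reconnect, which is what makes the shovel preferable to raw federation here.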
Updated the existing xferd to allow it to listen on both an IPv4 and IPv6
socket if available (requested for the NLNOG RING).
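Listening on both families generally means one socket per family rather than relying on a dual-stack socket; a sketch of the pattern (function name illustrative):

```python
import socket

def listen_all(port):
    """Open listening sockets on IPv4 and IPv6, skipping unavailable families."""
    sockets = []
    for family, address in ((socket.AF_INET, "0.0.0.0"),
                            (socket.AF_INET6, "::")):
        try:
            sock = socket.socket(family, socket.SOCK_STREAM)
        except socket.error:
            continue  # this address family isn't available on the host
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if family == socket.AF_INET6:
            # Stop the v6 socket also claiming v4 connections, so the
            # dedicated v4 socket can bind the same port.
            sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        sock.bind((address, port))
        sock.listen(5)
        sockets.append(sock)
    return sockets
```

The caller then select()s across whichever sockets were successfully opened.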
Wrote some simple code to report basic fixed data from an AMP test to a
rabbitmq broker and got the data sent between federated servers (with
proper persistence etc all built in). Federation seems to work in the
wrong direction for our purposes, however, requiring the server to connect
to the broker on what would be our amplet clients. I'm now investigating
the rabbitmq shovel to get the same behaviour but in the right direction.