Meenakshee Mungro's blog
Got a full draft of the report, with a final version of Chapters 2-5: 9 chapters spread over 56 pages of (hopefully) dry enough material.
Plan for the next 2 days is to edit, edit and then edit some more.
Glad it'll be over soon.
Spent the week fixing up chapters 2-5 and emailed Richard a copy (including chapter 6) on Friday night. Didn't get to work on any new chapters during the weekend; had assignments to catch up on.
Plan for this week is to finish chapter 7 by Wednesday, and start on chapters 8 and 9.
Spent the last week working on the report. Have a draft version of Chapters 2-5 that has been checked at least once by Shane, and added most of the content for Chapter 6.
The plan for the coming week is to finish chapter 6, get it checked asap and give Richard a copy of chapters 2-6.
Then, I have a decently sized chapter to work on (7 - Threaded Network Export) and 2 smaller ones (8 - Testing/Evaluation and 9 - Future Work/Conclusion). There's also the intro that I need to edit at some point.
Spent the first half of the week working on the collector. Implemented exporting expired flow records and designed another protocol header and subheader for these records. Cleaned up some repetitive code and added a function to export the ongoing flow buffer when the timer expires (before checking for new ongoing flows). Also added some documentation.
Started working on the report in the middle of the week and so far, have a draft version of the first 4 chapters (excluding the intro). Shane has checked a couple of them already, so the plan for the coming week is to tidy up those chapters and get as much writing done as possible.
Shane suggested sending the protocol names only once, to cut down on the redundant data sent each time and also save on FIFO space and bandwidth. I designed a new protocol subheader for exporting protocol details (id, name, name_len); these are sent to a client as soon as it connects to the server. Then I had to change the old exporting code, removing the parts that added the name and name length and adding the appropriate code for the protocol IDs.
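The protocol-details entry could be serialised roughly like this sketch. The field widths and the function name are my illustrative assumptions, not the collector's actual wire format:

```cpp
#include <arpa/inet.h>  // htonl, htons

#include <cstdint>
#include <string>
#include <vector>

// Append one protocol-details entry (id, name_len, name) to an outgoing
// buffer, with multi-byte fields in network byte order. Hypothetical
// layout, not the real collector code.
static void append_protocol(std::vector<uint8_t> &buf,
                            uint32_t id, const std::string &name) {
    uint32_t net_id = htonl(id);
    uint16_t net_len = htons(static_cast<uint16_t>(name.size()));

    const uint8_t *p = reinterpret_cast<const uint8_t *>(&net_id);
    buf.insert(buf.end(), p, p + sizeof(net_id));
    p = reinterpret_cast<const uint8_t *>(&net_len);
    buf.insert(buf.end(), p, p + sizeof(net_len));
    buf.insert(buf.end(), name.begin(), name.end());  // name, no trailing NUL
}
```

On connect, the server would loop over the known protocols and send one such entry each; every later flow record then only needs to carry the small numeric id.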
Then, I started working on exporting expired flow records to clients every X seconds (where X defaults to 3 minutes, or is a value chosen by the user). I created a subheader for expired protocol records, and a structure for an expired flow record. Each time a flow expired, it was sent to be exported and its data added to the appropriate buffer. The buffer was then written to the FIFO when it filled up.
After I made sure that expired flow records were being exported correctly, I set up a timer which would export these records every X seconds, regardless of whether the buffer was full.
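The two export paths (flush when the buffer fills, plus the periodic timer flush) can be sketched like this; all names and fields here are hypothetical, not the collector's real structures:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical expired-flow record; fields are illustrative only.
struct ExpiredFlow {
    uint32_t ip_a, ip_b;
    uint16_t port_a, port_b;
    uint32_t protocol_id;
    uint64_t bytes_in, bytes_out;
};

// Buffers expired flows and flushes them either when the buffer fills
// or when the periodic timer fires, mirroring the two paths above.
class ExpiredExporter {
public:
    ExpiredExporter(size_t capacity,
                    std::function<void(const std::vector<ExpiredFlow>&)> flush_cb)
        : capacity_(capacity), flush_cb_(flush_cb) {}

    void add(const ExpiredFlow &f) {
        buf_.push_back(f);
        if (buf_.size() >= capacity_)
            flush();                 // path 1: buffer full
    }

    // Path 2: called from the timer every X seconds, full or not.
    void on_timer() { flush(); }

private:
    void flush() {
        if (!buf_.empty()) {
            flush_cb_(buf_);         // e.g. write the records to the FIFO
            buf_.clear();
        }
    }
    size_t capacity_;
    std::vector<ExpiredFlow> buf_;
    std::function<void(const std::vector<ExpiredFlow>&)> flush_cb_;
};
```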
Also got my Background chapter back from Shane and started making the proposed changes.
Spent a major part of the week reading up on threading, adding it to the collector, and using Libfifo with the collector.
First, I added support for Libfifo in order to write the buffer to a memory-backed FIFO with a default size of 100MB (which can be changed via options). Then, I wrote the FIFO out to each of the clients through their fds using the functions provided in the Libfifo API. Tested it and got it working like before.
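This is not the Libfifo API, but the underlying idea can be sketched as a bounded, memory-backed byte FIFO that the exporter pushes into and that later gets drained towards each client's fd:

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <vector>

// Minimal sketch of a bounded in-memory byte FIFO (illustrative only;
// the real collector uses Libfifo).
class ByteFifo {
public:
    explicit ByteFifo(size_t max_bytes) : max_bytes_(max_bytes) {}

    // Refuse the write if it would exceed the configured capacity.
    bool push(const uint8_t *data, size_t len) {
        if (buf_.size() + len > max_bytes_) return false;
        buf_.insert(buf_.end(), data, data + len);
        return true;
    }

    // Drain up to max_len bytes, e.g. to feed a write() to a client fd.
    std::vector<uint8_t> pop(size_t max_len) {
        size_t n = std::min(max_len, buf_.size());
        std::vector<uint8_t> out(buf_.begin(), buf_.begin() + n);
        buf_.erase(buf_.begin(), buf_.begin() + n);
        return out;
    }

    size_t size() const { return buf_.size(); }

private:
    size_t max_bytes_;
    std::deque<uint8_t> buf_;
};
```

The appeal of the FIFO is that the packet-processing side can keep producing at its own pace while slow clients are fed from the buffered bytes, up to the 100MB cap.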
Initially, clients that connected to the server were sent statistics every X seconds (where X is a number specified in the options). Concurrency issues arose when clients tried connecting/disconnecting during a stat export, since the client list would need to be updated while the exporting process was iterating over it. After discussing this with Shane, we decided to use threading and to protect the client list with a mutex whenever it was read from or written to. The server can now handle disconnects/new connections while exporting statistics without crashing.
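The locking scheme can be sketched as follows (names are illustrative, and I'm using C++11 `std::mutex` here for brevity rather than whatever the collector actually uses):

```cpp
#include <algorithm>
#include <mutex>
#include <vector>

// One mutex guards the client list so the periodic export loop and the
// connect/disconnect handlers never race on it. Illustrative sketch.
struct ClientList {
    std::mutex lock;
    std::vector<int> fds;  // connected client file descriptors

    void add(int fd) {
        std::lock_guard<std::mutex> g(lock);
        fds.push_back(fd);
    }

    void remove(int fd) {
        std::lock_guard<std::mutex> g(lock);
        fds.erase(std::remove(fds.begin(), fds.end(), fd), fds.end());
    }

    // The stat export iterates over a snapshot taken under the lock, so a
    // client connecting or dropping mid-export cannot invalidate the loop.
    std::vector<int> snapshot() {
        std::lock_guard<std::mutex> g(lock);
        return fds;
    }
};
```

Iterating over a snapshot keeps the critical section short: the lock is only held while copying the fd list, not for the whole export.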
Spent Friday at home and got started on writing the Background and Libprotoident chapters of the report (ch. 2 & 3 respectively). Worked during the weekend too; nearly done with the Background and almost half-way through the 3rd chapter.
Plan for the next week is to get the Background section's draft done asap, move it to LaTeX, and get it checked before the end of the week if possible. I also have a list of features that I need to tackle in the collector.
Spent the first few days of the week working on my presentation and then spent the whole Friday taking care of some tickets.
Previously, the server was not handling disconnects from clients, so it would still try to send data to their file descriptors. I fixed that first, then worked on not sending statistics for deprecated (NULL) protocols, which saves bandwidth and effort.
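A minimal sketch of the disconnect handling, assuming POSIX sockets: ignore SIGPIPE once at startup so a write to a dead client returns an error instead of killing the process, then treat a failed `write()` as the cue to drop that client. Function names here are mine, not the collector's:

```cpp
#include <csignal>
#include <unistd.h>

// Call once at startup: with SIGPIPE ignored, writing to a closed peer
// returns -1 with errno == EPIPE rather than terminating the process.
static void setup_signals(void) {
    signal(SIGPIPE, SIG_IGN);
}

// Returns false when the full write did not go through, which the caller
// can take as "client likely disconnected, remove its fd from the list".
static bool send_to_client(int fd, const void *data, size_t len) {
    ssize_t n = write(fd, data, len);
    return n == static_cast<ssize_t>(len);
}
```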
For the next week, I need to tackle threading. Which I am not looking forward to.
Spent the first half of the week working on my protocol implementation on the server, and tested it by adding the necessary code to parse the received bytes in the client. It can now send flow records to the client in the same format as the lpi_live output. There are a number of features still to add, but I'll work on those after I get back.
Also started working on adding some new counters for the number of protocols used by local and external IPs during a reporting period. Not working entirely yet, but I'm leaving on holiday for 5 weeks and will try to get some work done while away.
Will be back in the first week of Feb and also plan to start on the report when possible.
Spent the whole week working on the collector and a simple client to test it.
Shane helped with working out a packet format to be used for sending details about flows over a network. After working out the format, I started gradually developing a script (lpicp_export.cc) which formats the data according to the required packet structure. Currently, it adds a header, the name of the monitor (or "unnamed" if not specified), and a subheader.
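A message built to that description might look something like the sketch below. The field names, widths, and version number are assumptions for illustration, not the real lpicp format:

```cpp
#include <arpa/inet.h>  // htons

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical fixed header preceding the monitor name; the subheader for
// the records would follow the name. Layout is illustrative only.
struct LpicpHeader {
    uint8_t  version;
    uint8_t  record_type;   // e.g. stats vs expired-flow records
    uint16_t total_len;     // length of the whole message, network order
    uint16_t name_len;      // length of the monitor name that follows
};

// Build header + monitor name, falling back to "unnamed" as described.
static std::vector<uint8_t> build_message(uint8_t type,
                                          const std::string &monitor) {
    const std::string name = monitor.empty() ? "unnamed" : monitor;
    LpicpHeader hdr;
    hdr.version = 1;
    hdr.record_type = type;
    hdr.total_len = htons(static_cast<uint16_t>(sizeof(hdr) + name.size()));
    hdr.name_len = htons(static_cast<uint16_t>(name.size()));

    std::vector<uint8_t> msg(sizeof(hdr) + name.size());
    std::memcpy(msg.data(), &hdr, sizeof(hdr));
    std::memcpy(msg.data() + sizeof(hdr), name.data(), name.size());
    return msg;
}
```

The client then does the inverse: read the fixed-size header first, use `name_len` to pull out the monitor name, and carry on parsing the subheader.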
After that, I started working on a client which would read in the bytes and parse the values to extract the information sent by the server.
The plan for next week is to make the exporting script send out the counter values and have the client parse the received bytes as before, as well as to look into using threads so that a separate thread writes data out to the DB while the program reads in values from a trace or other input source.
Spent the week working on my collector.
Started with a simple Libtrace skeleton program and added features to it gradually with Shane's help. I played around with it and added code so that it would count the incoming and outgoing flows and output them to the console every 2 minutes using a Libwandevent timer. Also used Libwandevent to add a handler for SIGINT. Then, I used Libflowmanager to keep track of flows and get rid of the ones that had expired, and added counters for the new and expired flows, which were output to the console too. Finally, I had a look at Libprotoident's tool (lpi_live) and modified my code so that it used Libprotoident to identify the application protocol of flows.
Currently, the program outputs the results of processing the packets every n seconds (where n is a value specified in the command line arguments). Next, I have to modify the program to export the output over a network.
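The per-period bookkeeping behind that output can be sketched as below. This is illustrative only: in the real program the periodic trigger is a Libwandevent timer and the protocol names come from Libprotoident.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Per-protocol flow counters that are reported and reset every n seconds.
// Sketch of the idea, not the actual collector code.
class FlowCounters {
public:
    // Called for each new flow once its protocol has been identified.
    void new_flow(const std::string &protocol) { counts_[protocol]++; }

    // Called from the timer callback: hand back the current counts and
    // start a fresh reporting period.
    std::map<std::string, uint64_t> report_and_reset() {
        std::map<std::string, uint64_t> out;
        out.swap(counts_);
        return out;
    }

private:
    std::map<std::string, uint64_t> counts_;
};
```

Swapping the map out rather than iterating it in place keeps the timer callback cheap, which matters once the reporting later moves onto its own thread.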