
Day 0: Neutron

In my opening post for 100 Days of Code, I said I would work on Tapes, but while on vacation I realized just how sensitive we really are to air quality and decided to focus on Neutron instead.

Today I mostly focused on setting up the overall project, but also started to put together the Docker containers and some rough code.

Rather than building the entire platform and logic myself, I decided it'd be best to use a platform that's already built for time series data analysis: InfluxDB. I've used InfluxDB in the past, but never to this extent.

I’ve built a server - Neutron - on DigitalOcean that hosts InfluxDB and CouchDB. InfluxDB serves as the central repository for all time series readings from the Neutrinos, while CouchDB stores the Neutrinos’ configuration data.
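
For the curious, the stack on Neutron boils down to something like this docker-compose sketch (image tags, ports, and credentials here are placeholders, not the actual production values):

```yaml
version: "3"

services:
  influxdb:
    image: influxdb:2.0          # placeholder tag
    ports:
      - "8086:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb2

  couchdb:
    image: couchdb:3             # placeholder tag
    ports:
      - "5984:5984"
    environment:
      COUCHDB_USER: admin        # placeholder credentials
      COUCHDB_PASSWORD: secret
    volumes:
      - couchdb-data:/opt/couchdb/data

volumes:
  influxdb-data:
  couchdb-data:
```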

The Neutrinos are the nodes that have the sensors actually attached to them. They will be deployed on-site and query the configuration database to get their room and customer location assignments. Once the room and customer location are set, a Neutrino will begin polling its sensors, assemble the data into a point-in-time reading, and push it to the Neutron time series database.
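
The config lookup itself is simple since CouchDB speaks plain HTTP. A rough sketch of the Neutrino side (host, database name, credentials, and document schema are all placeholders):

```python
import requests

COUCH_URL = "http://neutron.example:5984"  # placeholder host
NODE_ID = "neutrino-01"                    # placeholder node id

def fetch_node_config(node_id: str) -> dict:
    """Pull this node's room/customer assignment from CouchDB."""
    # CouchDB serves documents over HTTP: GET /{db}/{doc_id}
    resp = requests.get(
        f"{COUCH_URL}/neutrinos/{node_id}",
        auth=("admin", "secret"),  # placeholder credentials
        timeout=5,
    )
    resp.raise_for_status()
    doc = resp.json()
    # Field names assume a simple config document schema
    return {"room": doc["room"], "customer_location": doc["customer_location"]}
```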

Each customer location gets its own InfluxDB bucket and key. A helper script will let me quickly generate buckets and keys without doing the steps manually through the UI.
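
Assuming InfluxDB 2.x, the helper script can lean on the influxdb-client package to create a bucket and a scoped write token in one go. A minimal sketch (URL, org name, and admin token are placeholders):

```python
from influxdb_client import InfluxDBClient, Permission, PermissionResource

client = InfluxDBClient(url="http://neutron.example:8086",
                        token="ADMIN_TOKEN",  # placeholder admin token
                        org="neutron")

def provision_location(location: str):
    """Create a bucket plus a write-only token for one customer location."""
    org = next(o for o in client.organizations_api().find_organizations()
               if o.name == "neutron")
    bucket = client.buckets_api().create_bucket(bucket_name=location, org=org)
    # Scope the token to write access on just this one bucket
    resource = PermissionResource(type="buckets", id=bucket.id, org_id=org.id)
    auth = client.authorizations_api().create_authorization(
        org_id=org.id,
        permissions=[Permission(resource=resource, action="write")],
    )
    return bucket, auth.token
```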

Currently, I have a Docker container that runs the neutrino.py script. It can query CouchDB for node configuration data, generate random time series data, and publish it to InfluxDB. In the coming days I’ll need to build a physical Neutrino (Raspberry Pi w/ BME680 and PMS5003) so I can remove the random data functions and start publishing real data.
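
Until the hardware arrives, the random data generator just fakes the fields the BME680 and PMS5003 will eventually provide. Roughly (measurement name, field names, value ranges, and connection details are my placeholders):

```python
import random
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

def fake_reading(room: str, location: str) -> Point:
    """Build one point-in-time reading with made-up sensor values."""
    return (
        Point("air_quality")
        .tag("room", room)
        .tag("customer_location", location)
        .field("temperature_c", round(random.uniform(18.0, 26.0), 2))  # BME680
        .field("humidity_pct", round(random.uniform(30.0, 60.0), 2))   # BME680
        .field("pm2_5", float(random.randint(0, 50)))                  # PMS5003
        .time(datetime.now(timezone.utc))
    )

client = InfluxDBClient(url="http://neutron.example:8086",
                        token="NODE_TOKEN",  # per-location write token
                        org="neutron")
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(bucket="customer-location-bucket",
                record=fake_reading("lab", "hq"))
```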

I haven’t pushed any code to GitHub yet, because I’m still deciding if I want to self-host Git (which I probably will).

/wes