I wanted to monitor the air quality, temperature and pressure of my garage workshop using a single board computer (SBC) like a Raspberry Pi.
As well as controlling my air filtration, sawdust collection and heating systems, I was curious to know baselines for general particulate matter, noxious gases, pressure and humidity. I also wanted a way to trigger a camera from a motion sensor to record who was going in and out of the workshop.
The collected data is pushed to InfluxDB and then visualized using Grafana.
I would like to trigger other services directly using IFTTT. For example, turning on the heating system when the temperature drops below a specific reading, or starting a camera when motion is detected.
Inspired by this article, I figured I could monitor the airborne sawdust that was not being captured by my sawdust collection system and, if it got above a certain level, trigger my dust filtration unit.
Using the SDS011 sensor I could collect data on two standard sizes of particulate matter (PM). According to the Air Quality Standards in my area, the 10 micron particles (PM10) should not exceed 150 micrograms per cubic meter (μg/m³) based on a 24-hour average. Similarly, the small nasty stuff that can really hurt you, the 2.5 micron particles (PM2.5), should not exceed 35 μg/m³ based on a 24-hour average.
The client code for the SDS011 sensor is sds011.py.
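For reference, here is a minimal sketch of how a reader like sds011.py can parse the sensor's 10-byte serial frames. It is not the repo's actual client; the port path is an assumption for a typical USB serial adapter, and it assumes pyserial is installed.

```python
# Minimal SDS011 reading sketch (illustrative, not the repo's sds011.py).
import serial

def read_pm(port="/dev/ttyUSB0"):
    """Return one (PM2.5, PM10) reading in μg/m³."""
    with serial.Serial(port, baudrate=9600, timeout=2) as ser:
        while True:
            if ser.read(1) != b"\xaa":      # frame head
                continue
            frame = ser.read(9)             # cmd, 6 data bytes, checksum, tail
            if len(frame) != 9 or frame[0] != 0xC0 or frame[8] != 0xAB:
                continue
            if sum(frame[1:7]) % 256 != frame[7]:  # checksum over data bytes
                continue
            pm25 = (frame[2] * 256 + frame[1]) / 10.0
            pm10 = (frame[4] * 256 + frame[3]) / 10.0
            return pm25, pm10

if __name__ == "__main__":
    pm25, pm10 = read_pm()
    print(f"PM2.5 {pm25} μg/m³, PM10 {pm10} μg/m³")
```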
Adafruit sells the amazing BME680, which provides the remaining environmental sensing I wanted: temperature, humidity, barometric pressure and gas readings.
I took advantage of the adafruit_blinka Python library to use the CircuitPython hardware API, which speaks the I2C and SPI protocols that sensors often use. Adafruit explains this, and how to install the library onto your Linux SBC. As explained below, I provide a script to install the necessary sensor support.
The client code for the BME680 sensor is bme680.py.
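As a sanity check, reading the sensor through Blinka's CircuitPython API only takes a few lines. This is a sketch using Adafruit's adafruit-circuitpython-bme680 library over I2C, not the repo's bme680.py itself.

```python
# Quick BME680-over-I2C sketch using the CircuitPython API via Blinka.
import board
import adafruit_bme680

i2c = board.I2C()  # uses the board's default SCL/SDA pins
sensor = adafruit_bme680.Adafruit_BME680_I2C(i2c)

print(f"Temperature: {sensor.temperature:.1f} °C")
print(f"Humidity:    {sensor.humidity:.1f} %")
print(f"Pressure:    {sensor.pressure:.1f} hPa")
print(f"Gas:         {sensor.gas} ohms")
```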
I wanted a way to compare against current weather conditions, so I used the OpenWeather service to fetch data for my location. You can get an API key for free, and while you're limited to 60 calls/minute, that's more than enough. The path to the file holding your key and the OpenWeather location name are configured in the .env file.
The client code for communicating with the OpenWeather service is openweather.py.
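The gist of the fetch is a single HTTP GET against OpenWeather's current-weather endpoint. This sketch uses requests; the function and argument names are illustrative stand-ins for whatever the .env actually configures.

```python
# OpenWeather current-conditions sketch (illustrative, not openweather.py).
import requests

def fetch_current(location, key_path):
    with open(key_path) as f:
        api_key = f.read().strip()
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": location, "appid": api_key, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    main = resp.json()["main"]
    # temp in °C, pressure in hPa, humidity in %
    return main["temp"], main["pressure"], main["humidity"]
```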
I chose to use the Pyroelectric ("Passive") InfraRed Sensor from Adafruit to detect when someone was in the workshop. I figured that would be useful to cross-reference with environmental changes, but I also wanted a way to trigger a camera to at least see who was in there... who's not been putting tools back in the right place and all that.
The client code for the PIR sensor is pir.py.
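Motion handling can be as simple as a gpiozero callback. Here is a sketch assuming the sensor output is wired to GPIO 4, as in my setup; it is not the repo's pir.py.

```python
# PIR sketch using gpiozero (illustrative, not the repo's pir.py).
from gpiozero import MotionSensor
from signal import pause

pir = MotionSensor(4)  # GPIO pin is configured in .env in the real client

def on_motion():
    # The real monitor would record the event and could trigger a camera here.
    print("Motion detected in the workshop")

pir.when_motion = on_motion
pause()  # block forever, servicing callbacks
```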
InfluxDB is being used to store the sensor data generated by the client-side monitor application, and Grafana is being used to visualize the collected data. I used to use Adafruit IO for these things but couldn't afford the ongoing subscription.
The InfluxDB and Grafana services are stood up as container-based apps using Docker, either on a separate system or on the same one running the monitor.
The start_server_stack.sh script will start up these services using a Docker Compose definition, so make sure you have Docker installed first; it just makes things so much easier.
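The repo's own Compose file is the source of truth, but the general shape is two services sharing the .env configuration, roughly like this sketch (image tags and volume names here are illustrative):

```yaml
# Illustrative shape only; see the repo's Docker Compose file for the real thing.
services:
  influxdb:
    image: influxdb:2.7
    ports:
      - "8086:8086"
    env_file: .env
    volumes:
      - influxdb-data:/var/lib/influxdb2

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    env_file: .env
    depends_on:
      - influxdb

volumes:
  influxdb-data:
```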
The configuration for InfluxDB and Grafana is in an environment file. Copy the .env.template file to a .env file, then edit it accordingly before you run the start script.
If you make changes to things after running the start script, just use the following Docker Compose commands to cycle InfluxDB and Grafana...
```
docker compose down -v
docker compose up -d
```

A dashboard called 'Workshop Climate Monitor' is configured in the Grafana instance that is started. The Docker Compose definition also provides the configuration for the InfluxDB datasource. This means you get a ready-made, albeit very simple, dashboard that renders the data being sent to InfluxDB.
Note that provisioned Grafana dashboards, like this one, are read-only inside the Grafana UI. If you want to keep any edits you make to the dashboard in the UI, you have to export the dashboard JSON, overwrite workshop_climate_monitor.json with it, and then restart Grafana using Docker Compose, as described above.
The monitor application, env_monitor.py, runs on the client side. I run it on a Raspberry Pi Zero 2 with the environmental sensors described above attached.
The configuration for the client application is in an environment file. Copy the .env.template file to a .env file. Edit this accordingly before you run the monitor application.
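For illustration, a filled-in client .env might look like the following; every variable name and value here is a guess at the shape, so take the real names from .env.template.

```
# Illustrative only - use .env.template for the actual variable names.
INFLUXDB_URL=http://192.168.1.50:8086
OPENWEATHER_KEY_PATH=/home/pi/.openweather_key
OPENWEATHER_LOCATION=London,GB
PIR_GPIO_PIN=4
LOG_FILE=/home/pi/env_monitor.log
CACHE_FILE=/home/pi/env_monitor_cache.jsonl
CACHE_FLUSH_LIMIT=50
```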
The setup.sh script uses apt and pip to install the libraries the Python application needs to talk to the environmental sensors on the RPi. You only have to run this once; reboot the RPi to make sure the changes have taken effect before running the monitoring application.
You may get a warning saying, "Kernel module 'spi-dev' not found". Ignore that.
After the sensor support is established, you can use the test scripts to see if the attached sensors are working.
Note that in my hardware implementation, the PIR sensor was connected to GPIO pin 4. This is configured in the .env file.
Just run env_monitor.py from within the client directory.
By default, the monitor runs indefinitely, but you can use the --duration option to say how many minutes you want it to run, which is useful when debugging. You can also change the log level using the --loglevel option.
For example, to run the monitor for 2 minutes and see debug messages, use:
```
env_monitor.py -d 2 --loglevel DEBUG
```

Run the monitor as a daemon by using systemd. A systemd Unit file template is provided, but the paths and user need to be updated appropriately.
Use the start_env_monitor_service.sh script to move the Unit file into the correct place, cycle systemd, and start up the monitor as a daemon.
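For reference, once the template's paths and user are filled in, the Unit file ends up looking roughly like this; every path and name below is illustrative, and the repo's template is the source of truth.

```ini
# Illustrative Unit file; adapt paths and user to your install.
[Unit]
Description=Workshop environment monitor
After=network-online.target
Wants=network-online.target

[Service]
User=pi
WorkingDirectory=/home/pi/workshop-climate-monitor/client
ExecStart=/usr/bin/python3 env_monitor.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```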
Now you should be able to reboot the RPi and the monitor will run automatically.
Check the status using

```
systemctl status env_monitor.service
```

Tail the logs using

```
journalctl -u env_monitor.service -f
```

Start and stop using

```
sudo systemctl [start|stop] env_monitor.service
```

Logs are written to the log file location defined in the client/.env configuration. The .env.template file shows an example of this definition.
When running the monitor application as a daemon, systemd manages logging, and you can use the journalctl command to access the logs.
If the network connection goes down and data cannot be written to InfluxDB, the monitor application will cache the data locally. When the connection is restored, the cached data will be written to InfluxDB.
The location of the cache file and the flush limit (the number of items to keep in memory before flushing to the cache file) are defined in the client/.env configuration. The .env.template file shows an example of this definition.
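The pattern is a straightforward write-through with a disk spillover. Here is a sketch of the idea; names like CACHE_FILE and FLUSH_LIMIT are placeholders for the real .env settings, and the actual logic lives in the monitor application.

```python
# Cache-and-flush sketch (illustrative; not the monitor's actual code).
import json

CACHE_FILE = "influx_cache.jsonl"  # real path comes from client/.env
FLUSH_LIMIT = 50                   # in-memory points before spilling to disk

pending = []

def write_point(point, write_fn):
    """Try to write a point to InfluxDB; buffer it locally on failure."""
    try:
        write_fn(point)
    except ConnectionError:
        pending.append(point)
        if len(pending) >= FLUSH_LIMIT:
            with open(CACHE_FILE, "a") as f:
                f.writelines(json.dumps(p) + "\n" for p in pending)
            pending.clear()

def replay_cache(write_fn):
    """When connectivity returns, push cached points back to InfluxDB."""
    try:
        with open(CACHE_FILE) as f:
            for line in f:
                write_fn(json.loads(line))
        open(CACHE_FILE, "w").close()  # truncate once replayed
    except FileNotFoundError:
        pass  # nothing cached yet
```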