Graphing Sensor Data from a Raspberry Pi with Grafana
For a weekend project I picked up an Enviro + Air Quality board with a particulate monitor. My goal was to start recording this data and feed it into Grafana for viewing.
The board itself, with the display showing temperature, humidity and pressure
The particulate graph as an example of the output
There are various other sensor readings: temperature, pressure, humidity, light, gas and noise. I'm sampling the data every second and sending it to Graphite, hosted in a Docker container on an Ubuntu VPS, which Grafana then reads from.
Equipment
- Enviro for Raspberry Pi – Enviro + Air Quality
- PMS5003 Particulate Matter Sensor with Cable
- Raspberry Pi Zero WH (pre-soldered)
- Official Raspberry Pi Universal Power Supply (Pi 3 & Zero Only)
- Micro SD Card
- For setting up: Three Port USB Hub with Ethernet (micro B)
- Also for setting up: Mini HDMI to HDMI adapter
In this post I'll explore:
- Setting up the Pi
- Setting up Graphite and Grafana on a VPS
- Adding the Stats Collector Service
- Graphing the Statistics
Setting up the Pi
I used the Raspberry Pi Imager to install Raspberry Pi OS Lite to my micro SD card, which has everything we need for this project.
Once installed, I set up WiFi and SSH using the configuration tool:
$ sudo raspi-config
Next I followed the Pimoroni instructions to set up the Enviro board and particulate sensor.
By the end of that tutorial you'll be able to run the Python scripts to get sensor data and drive the screen. We'll build on the Python libraries there in the Adding the Stats Collector Service step, so make sure you run the install script and try some of the example scripts.
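As a quick sanity check that the libraries are working, here's a minimal sketch of reading the Enviro+ BME280 with Pimoroni's Python libraries (this assumes the install script from the tutorial has been run; the read_environment helper is my own wrapper, not part of the library):

```python
# Hedged sketch: read temperature, pressure and humidity from the
# Enviro+ BME280 using Pimoroni's libraries. The try/except lets the
# helper be imported on a machine without the sensor attached.
try:
    from smbus2 import SMBus
    from bme280 import BME280
    sensor = BME280(i2c_dev=SMBus(1))
except ImportError:
    sensor = None  # not on a Pi, or libraries not installed

def read_environment(bme):
    """Return (temperature in C, pressure in hPa, humidity in %)."""
    return bme.get_temperature(), bme.get_pressure(), bme.get_humidity()

if sensor is not None:
    print(read_environment(sensor))
```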
Setting up Graphite and Grafana on a VPS
I use Linode's $5/month plan for my VPS. RAM is tight, but Graphite needs only ~256MB and Grafana ~64MB. The official Docker setup instructions are a good resource for getting Docker installed on Ubuntu Server. Note that docker stack deploy requires swarm mode, which you can enable once with docker swarm init.
I keep my docker services in a folder structure like this:
docker/
  grafana/
    stack.yml
    db/   # Data mounted into the Grafana container
  graphite/
    stack.yml
    db/   # Data mounted into the Graphite container
This structure allows you to back up the docker folder to gather both the stack definitions and all of the data the containers use to run. You can run a backup every evening, for example, and sync the folder to Amazon S3.
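As a sketch, a nightly backup along these lines could be dropped into cron (the paths and bucket name here are placeholders — substitute your own, and note the AWS CLI must be installed and configured for the sync step):

```shell
#!/bin/sh
# Sketch of a nightly backup: archive the docker folder, then sync to S3.
set -eu

DOCKER_DIR="${DOCKER_DIR:-$HOME/docker}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
S3_BUCKET="${S3_BUCKET:-s3://example-bucket/docker-backups}"  # placeholder

mkdir -p "$BACKUP_DIR"

# Archive the whole docker folder with a date-stamped name
if [ -d "$DOCKER_DIR" ]; then
    tar -czf "$BACKUP_DIR/docker-$(date +%F).tar.gz" \
        -C "$(dirname "$DOCKER_DIR")" "$(basename "$DOCKER_DIR")"
fi

# Sync to S3 only if the AWS CLI is available
if command -v aws >/dev/null 2>&1; then
    aws s3 sync "$BACKUP_DIR" "$S3_BUCKET" || echo "S3 sync failed"
fi
```

Paired with a crontab entry such as 0 2 * * * /home/user/backup-docker.sh, this gives you a restorable copy of both stacks every night.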
Setting up Grafana
Here are the contents of Grafana's stack.yml, which makes it available on port 3000 with 50% CPU and 64MB of RAM:
version: '3'
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - ./db:/var/lib/grafana
    deploy:
      restart_policy:
        condition: any
      resources:
        limits:
          cpus: '0.5'
          memory: 64M
The Grafana docker container runs under a user with ID 472, so you must create a service user and give them write access to the db folder:
$ adduser grafana --system --no-create-home --uid 472
$ mkdir db
$ chown grafana db
To deploy this stack, run the following command:
$ docker stack deploy -c stack.yml grafana
To see the status of the stack:
$ docker stack ps grafana
ID NAME IMAGE NODE DESIRED STATE
unmhk2cqkgfv grafana_grafana.1 grafana/grafana:latest localhost Running
Grafana should now be available at http://example.com:3000/
Setting up Graphite
Here are the contents of Graphite's stack.yml, which makes it available on port 3001 with 50% CPU and 256MB of RAM:
version: '3'
services:
  graphite:
    image: graphiteapp/graphite-statsd
    ports:
      - 3001:80     # Web UI port
      - 2003:2003   # Ingestion port
      # - 2004:2004
      # - 2023:2023
      # - 2024:2024
      # - 8125:8125/udp
      # - 8126:8126
    volumes:
      - ./db/conf:/opt/graphite/conf
      - ./db/storage:/opt/graphite/storage
      - ./db/functions:/opt/graphite/webapp/graphite/functions/custom
      - ./db/nginx:/etc/nginx
      - ./db/statsd:/opt/statsd/config
      - ./db/logrotate:/etc/logrotate.d
      - ./db/log:/var/log
      - ./db/redis:/var/lib/redis
    deploy:
      restart_policy:
        condition: any
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
To start this service:
$ docker stack deploy -c stack.yml graphite
To check on it:
$ docker stack ps graphite
ID NAME IMAGE NODE DESIRED STATE
cno4umv5gq46 graphite_graphite.1 graphiteapp/graphite-statsd:latest localhost Running
Graphite should now be available at http://example.com:3001/
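Before wiring up the Pi, you can sanity-check the ingestion port by hand. Graphite's plaintext protocol accepts one "path value timestamp" line per metric over TCP. A minimal sketch (send_metric is my own helper; replace example.com with your server's address):

```python
import socket
import time

def send_metric(host, port, path, value, timestamp=None):
    """Send one metric line over Graphite's plaintext protocol."""
    if timestamp is None:
        timestamp = time.time()
    line = f"{path} {value} {int(timestamp)}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

# Example (replace example.com with your server):
# send_metric("example.com", 2003, "enviro.temperature", 21.5)
```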
Securing Graphite
Graphite ships with no authentication out of the box. I added HTTP Basic authentication to it by editing my NGINX config:
$ nano db/nginx/sites-enabled/graphite-statsd.conf
Add the auth_basic and auth_basic_user_file directives to the location / block:
...
location / {
    auth_basic "Graphite";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
...
Then generate an htpasswd file:
$ htpasswd -c db/nginx/.htpasswd <username>
Finally, restart your Graphite stack:
$ docker stack rm graphite
$ docker stack deploy -c stack.yml graphite
Adding the Stats Collector Service
Here is the full Python 3 code I am using to collect the stats every second and update the screen. You must first install the graphyte library to send statistics to the server:
$ pip3 install graphyte
Then, save the Python script as collector.py (change example.com to your server's address): collector.py.
You should be able to run it using the following command:
$ python3 collector.py
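The heart of the collector is a simple sample-and-send loop. Stripped of the sensor and display code, its shape is roughly this (read_sensors and send here are stand-ins; in the real script the sending side is graphyte after a graphyte.init call):

```python
import time

def collect(read_sensors, send, interval=1.0, iterations=None):
    """Sample `read_sensors` every `interval` seconds and forward each
    metric path/value pair via `send`. Runs forever when `iterations`
    is None; a finite count is useful for testing."""
    count = 0
    while iterations is None or count < iterations:
        for path, value in read_sensors().items():
            send(path, value)
        count += 1
        time.sleep(interval)

# In the real collector, something like:
#   collect(read_all_enviro_sensors, graphyte.send)
```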
Installing the Collector as a Service
To ensure that the collector starts when the system reboots and to ensure the script is restarted if it stops, we need to create a systemd service.
The first thing to do is to create the service definition:
$ sudo nano /etc/systemd/system/collector.service
The file should look like this:
[Unit]
Description=collector
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=pi
WorkingDirectory=/home/pi
ExecStart=/usr/bin/python3 collector.py
[Install]
WantedBy=multi-user.target
This assumes the script is located at /home/pi/collector.py and should run as the user pi. Save the file, then run the following command to enable the service:
$ sudo systemctl enable collector.service
Then start it:
$ sudo systemctl start collector.service
To check the status of the service:
$ systemctl status collector.service
And to get logs from the service if it errors:
$ journalctl -u collector.service
Graphing the Statistics
Now for the easy bit: we can start graphing the statistics using Grafana. First, add a Graphite data source:
Then, configure Basic authentication using the credentials you entered when creating the .htpasswd file:
Hit "Save & Test" and you should be presented with a green banner.
The next step is to create a dashboard:
And finally, you can create a graph based on the data from your Raspberry Pi:
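For reference, a Graphite target for one of the series might look like the following (this assumes the collector publishes under an enviro. prefix — adjust to whatever metric paths your collector uses):

```
aliasByNode(summarize(enviro.temperature, '1min', 'avg'), 1)
```

This averages the one-second samples into one-minute buckets, which keeps the graph readable over longer time ranges.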