In this workshop you will learn how to:

- set up a Raspberry Pi as a small data server
- install InfluxDB, Telegraf and Grafana (the "TIG" stack)
- collect MQTT sensor data from your plant monitor into an InfluxDB bucket
- build dashboards to visualise the data

This workshop assumes you are working on an RPi 4 with the latest 64-bit version of Raspberry Pi OS (Raspbian) installed.

The Raspberry Pi website has a really simple imager for setting up a Pi, and there are lots of tutorials online to get you started configuring network settings etc. This tutorial is useful for setting up a headless device that you need to plug into a network where you don't know which IP address will be assigned.

RPi Imager download page

Once the card is flashed, insert it into the RPi and power it up. On your Mac / PC open up a Terminal / PuTTY session and log into the device using SSH. In my case I used:

ssh pi@staff-pi-casa0014.local

SSH into RPi

To check the OS of the device enter the following:

cat /etc/os-release

Which for me resulted in the following:

Terminal showing OS
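
Since InfluxDB v2 (which we install later) requires a 64-bit OS, it is also worth confirming you are running a 64-bit kernel:

uname -m

which should report aarch64.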

Finally, do a quick update / upgrade to make sure all packages are up to date, and then reboot before moving on to installing the datastore.

sudo apt update
sudo apt upgrade -y
sudo reboot

Excellent, after a few minutes of updates, upgrades and a reboot you should have a shiny new RPi sitting alongside a bunch of others on the lab network. Log back into it through Terminal or PuTTY so that we can continue with the set-up.

Before we start installing InfluxDB, Telegraf and Grafana we will do a little housekeeping.

There will potentially be many RPis on the same network or in the lab, so it is useful to change the device hostname to something identifiable to you. Hopefully you did this already during setup - for example, I used staff-pi-casa0014; the default is raspberrypi. If your hostname is still raspberrypi then change it by following the steps below. If you have already named your device and are happy with the name, continue to the next page.

In the terminal enter the following command to use Nano to update your hostname.

sudo nano /etc/hostname

And repeat for hosts file:

sudo nano /etc/hosts

In the hosts file you need to edit the entry that points to the old hostname (on Raspberry Pi OS this is usually the 127.0.1.1 line) as per the image below.

screenshot of hosts file
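
For reference, after the edit the hostname line in /etc/hosts should look something like this (assuming you chose staff-pi-casa0014 as your hostname):

127.0.1.1       staff-pi-casa0014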

Once done, run a reboot for good measure.

sudo reboot
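
After the reboot, you can confirm the new name has taken effect with:

hostname

which should print the name you entered above.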

InfluxDB downloads are available here. You can ignore the cloud sign-up or join the free tier - your choice; just cancel the cloud sign-up pop-up. We will use v2 (hence the requirement for a 64-bit OS). You should see information similar to below for the downloads, but we will install from the command line on the RPi using the Ubuntu & Debian (Arm 64-bit) option. (Note: the notes below are based on the official influxdata website notes.)

InfluxDB download

Install using the process below:

1] Add the InfluxDB signing key (used to verify the download) and add the repository to the apt sources list so that we can install the package in the next steps.

wget -q https://repos.influxdata.com/influxdata-archive_compat.key
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list

2] With the repository added, we now update the package list again:

sudo apt-get update

3] Finally we install InfluxDB

sudo apt-get install influxdb2 -y

This should result in an output similar to that in the image below:

screenshot of InfluxDB install

Finally, to get InfluxDB up and running at every boot we need to unmask, start and enable the service with systemctl:

sudo systemctl unmask influxdb.service
sudo systemctl start influxdb
sudo systemctl enable influxdb.service

You can check if it is running ok with:

sudo systemctl status influxdb

Press q (or CTRL-C) to exit the status output and get back to the terminal prompt.
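
For a quick sanity check from the command line you can also query the InfluxDB health endpoint (assuming the default port 8086):

curl http://localhost:8086/health

which should return a small JSON response containing "status": "pass".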

You should now be able to browse to your instance of InfluxDB running on the RPi (port 8086) - in my case I browse to http://staff-pi-casa0014.celab:8086/

Screenshot of InfluxDB welcome page

To get started fill out the form - use your UCL username, remember the password you use, for organisation use casa0014, and for the initial bucket name use telegraf.

Screenshot of InfluxDB setup page

Click continue, then select Quick Start, and you should see:

Screenshot of InfluxDB getting started page

One of the great things about InfluxDB is that lots of folk are building essentially similar set-ups, which means templates are appearing that do a lot of the heavy lifting of setting up a system. To get started we will use a built-in template that creates a datastore and dashboard for stats on the RPi system performance.

Open up the settings page ("cog" in left menu) and then select the Templates tab. From here you can look up a template for "Raspberry Pi". Note: we are just setting up the database to store data here - we have not started collecting data yet.

Screenshot of InfluxDB settings page

InfluxDB install is complete - we will now set up a bucket called mqtt-data to hold the data from the MQTT feed of the sensor. Expand the left navigation (button at bottom left of the window), select Buckets and then Create Bucket (button at top right). You should see a dialogue similar to below. Create a bucket called mqtt-data.

Screenshot of InfluxDB setting up a bucket page
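
If you prefer the command line, the same bucket can be created with the influx CLI - a sketch only, assuming the CLI is installed on the Pi and you substitute your own API token:

influx bucket create --name mqtt-data --org casa0014 --host http://localhost:8086 --token YOUR-API-TOKEN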

Telegraf is a time-series data collector - lots of information is available on the influxdata documentation site. They have some really thorough examples and teaching materials if you want to dig deeper into the functionality - it is a very powerful tool. The best way to learn what it can do is to set up some simple examples. These patterns are how we have configured the CE server infrastructure.

Step 1

Get back to the terminal on the RPi and install telegraf using the following command:

sudo apt-get update && sudo apt-get install telegraf -y
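
You can check the install worked with:

telegraf --version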

Step 2

Telegraf has lots of configuration options - the default file can be viewed at:

cat /etc/telegraf/telegraf.conf
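
Most of the file is commented-out documentation; to see only the active settings you can filter out comments and blank lines:

grep -vE '^\s*(#|$)' /etc/telegraf/telegraf.conf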

This file has hundreds of lines so don't worry too much about the contents. To get started we will use a minimal CE setup so that you can see the basic elements of a configuration. But first we will explore data from the RPi template we installed in the previous step to see how the telegraf scripts work. Browse to Load Data and select Telegraf.

Screenshot of InfluxDB telegraf page

Then click on Setup instructions. You have already completed step 1 (install telegraf) but will need to do a couple of additional steps, then complete steps 2 and 3 on the command line by copying and pasting text into the SSH shell on the RPi.

First up, define some environment variables:

export INFLUX_HOST=http://10.129.101.214:8086
export INFLUX_ORG=casa0014 

The third environment variable you need to set is the TOKEN that gives access to the bucket. This is described in step 2 of the screenshot below. Click Generate New API Token and then copy and paste it into the command line.

export INFLUX_TOKEN=---you-will-have-your-own-token-here---

Finally you need to run telegraf and pass in the configuration file it should use - again we need to edit the command to replace the hostname with the IP address. Run this in the same SSH session where you exported the variables above. I used:

telegraf --config http://10.129.101.214:8086/api/v2/telegrafs/0a2093eea36cb000

Screenshot of InfluxDB telegraf start page

The terminal should now show a telegraf script running as per below:

Screenshot of terminal window showing telegraf agent running

Back in InfluxDB you can go to the dashboards view and look at the template we downloaded for the RPi - this is taking data from the sensors on the RPi and inserting it into a bucket called rasp-pi (make sure that bucket is selected at the top of the dashboard if you are not seeing any data).

Screenshot of RPi dashboard using template

That telegraf agent is running live in the terminal using a config file that is stored in the web app (ie it is accessing it via the API). Telegraf also has a default configuration file stored in the local file system which is used whenever the telegraf service is started. To make management of our plant monitor a little simpler we will edit that default configuration file on the RPi.

Open up the sample v2 configuration file on GitHub - we will use this as the basis for setting up your configuration file.

There are several variables that you will need to update. The red arrows below highlight them.

The first OUTPUT PLUGIN contains the settings for the RPi sensor data going into the rasp-pi bucket. You need to copy in your API token from the InfluxDB web page. You will notice that you have two tokens on your Load Data > API Tokens page, with different permissions: the main user token with all privileges, and a restricted one for actions on the rasp-pi bucket. For simplicity copy the user token, since we will use it both here and in the next section.

Telegraf config settings RPI

The second set of plugins pulls in the MQTT sensor data and pushes it into the mqtt-data bucket. This section has 4 items we need to change (see the sketch after the image below):

Telegraf config settings RPI
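
As a rough guide only (the broker address, credentials and topics here are placeholders - the real values are the ones highlighted in the screenshot above), the MQTT section of telegraf.conf follows this general shape:

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]   # InfluxDB is running on the same Pi
  token = "YOUR-API-TOKEN"           # paste your user API token here
  organization = "casa0014"
  bucket = "mqtt-data"
  namepass = ["mqtt_consumer"]       # only send MQTT measurements to this bucket

[[inputs.mqtt_consumer]]
  servers = ["tcp://your-broker-address:1883"]   # placeholder - your MQTT broker
  topics = ["student/CASA0014/plant/#"]          # placeholder - topics to subscribe to
  username = "your-mqtt-username"                # placeholder credentials
  password = "your-mqtt-password"
  data_format = "json"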

Save the file and then copy it across to the RPi. One way to do this is to use Nano to edit the file directly on the Pi. First we will take a copy of the original file for reference, and then create a new simplified one.

sudo mv /etc/telegraf/telegraf.conf /etc/telegraf/telegraf-original.conf
sudo nano /etc/telegraf/telegraf.conf

You should now see an empty nano file - copy and paste your config file contents into it, then press CTRL-X, Y and Enter to save and exit.

Restart the services so that the new configuration is picked up, and check their status:

sudo systemctl stop influxdb
sudo systemctl start influxdb
sudo systemctl status influxdb
sudo systemctl restart telegraf
sudo systemctl status telegraf
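
If telegraf shows as failed, or data is not appearing in the bucket, the service logs are the first place to look:

sudo journalctl -u telegraf --no-pager -n 50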

You should now be able to explore the data through the Data Explorer

Screenshot of Data Explorer

Example code for the Data Explorer is below, and a Dashboards template has been included in the casa0014 GitHub repo.

from(bucket: "mqtt-data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["plant-topics"] == "student/CASA0014/plant/ucjtdjw/temperature")
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")

and with a regular expression...

from(bucket: "mqtt-data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["topic"] =~ /plant.*/)
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")

and with the CE weather station data...

from(bucket: "mqtt-data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["weather-topics"] == "UCL/PSW/Garden/WST/dvp2/loop")
  |> filter(fn: (r) => r["_field"] == "appTemp_C")
  |> keep(columns: ["_value", "_time", "weather-topics", "_field"])
  |> map(fn:(r) => ({ r with 
      _value: float(v: r["_value"]) 
    }))
  |> keep(columns: ["weather-topics", "_value", "_field", "_time"])
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)

In the final step we will install Grafana so that we can visualise the data in InfluxDB. Open the Grafana download page and follow the "Ubuntu and Debian (Arm64)" install instructions:

sudo apt-get install -y adduser libfontconfig1 musl
wget https://dl.grafana.com/enterprise/release/grafana-enterprise_10.1.5_arm64.deb
sudo dpkg -i grafana-enterprise_10.1.5_arm64.deb

Once installed, make sure to follow the in-terminal instructions to add grafana to systemd and then start the service:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server
sudo /bin/systemctl start grafana-server
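
You can confirm Grafana is running with:

sudo systemctl status grafana-server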

You should now be able to browse to port 3000 on your Pi (e.g. http://staff-pi-casa0014.local:3000) and see the Grafana welcome page - the default username and password are admin admin.

Similar to the previous two tools, Grafana also has a documentation page for its config file.

For some useful resources and for the query syntax of the time-series tables, look at:
https://docs.influxdata.com/influxdb/v2.4/

https://www.influxdata.com/university/

https://grafana.com/docs/guides/timeseries/

In this final step we will set up a Grafana dashboard so that you can visualise your sensed data. Grafana was installed above, so open a browser and go to the address http://staff-pi-casa0014.local:3000, where the middle bit is the address of your Raspberry Pi. Note that we are connecting on port 3000. You should be asked to log in - use the default admin admin and skip changing the password when prompted. You should see a welcome screen like:

Screenshot of Grafana Homepage

In the middle of the dashboard you should see a prompt to "Add your first data source". Click on that and fill out the following details:

Screenshot of InfluxDB setup
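
Roughly speaking (a sketch, assuming Grafana and InfluxDB are running on the same Pi and you are using the Flux query language), the key fields are:

Query Language:  Flux
URL:             http://localhost:8086
Organization:    casa0014
Token:           <your InfluxDB API token>
Default Bucket:  mqtt-data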

Select save and test at the bottom of the page to check the connection works.

You can always get back to this settings page by following the Configuration item in the left-hand menu (the icon looks like a small cog) and selecting Data sources.

To create a dashboard select Create Dashboard from the + icon in the left navigation and click Empty Panel.

Screenshot of Grafana create dashboard

In the query panel you can enter queries just like those used in the Dashboard for InfluxDB. At the bottom of the panel you can also select Sample Query to explore query syntax.

Screenshot of Grafana form field

The Flux query language enables you to make some interesting graphs and charts, but it does require a little time to explore and get used to the syntax. You need to start thinking in terms of filtering down the data in the buckets, where timestamped data has tags, topics, measurements and values associated with it. To get started, use the previous queries, copied below.

Make sure to click Apply and Save (top right). You can also play with the graph settings to change the styles, add a panel title etc.

from(bucket: "mqtt-data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["plant-topics"] == "student/CASA0014/plant/ucjtdjw/temperature")
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")

and with the CE weather station data...

from(bucket: "mqtt-data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["weather-topics"] == "UCL/PSW/Garden/WST/dvp2/loop")
  |> filter(fn: (r) => r["_field"] == "appTemp_C")
  |> keep(columns: ["_value", "_time", "weather-topics", "_field"])
  |> map(fn:(r) => ({ r with 
      _value: float(v: r["_value"]) 
    }))
  |> keep(columns: ["weather-topics", "_value", "_field", "_time"])
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)

As a final task, can you think how you could compare your plant data with your classmates? How could you see their data and not just yours?

Remember how the MQTT topic was structured:

- student/CASA0014/plant since it was a plant monitoring device
- ucxxxxx (or whatever you called your plant!) which is hopefully a unique name
- temperature | humidity | moisture for each of the three sensor types

Look up how to use regular expressions in Flux...

from(bucket: "mqtt-data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "mqtt_consumer")
  |> filter(fn: (r) => r["topic"] =~ /plant.*/)
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")

Endnote.

Sometimes if you mess up your RPi set-up it is easier to just rebuild from scratch. This GIST, based on these instructions, is a bash script to install the TIG stack on an RPi 4 (steps 11, 12, 13).