Mix Node-Red and Python

If you want to build some fun Pi projects, but are still working on your Python skills, then mixing Node-Red with Python might be a good option for you.

Node-Red is a low-code application that is extremely powerful for creating Raspberry Pi robotics and Internet of Things (IoT) projects. It has a rich set of libraries for building simple drag-and-drop logic flows.

By default Node-Red custom scripting is done in Javascript; however, Python can also be used. This offers Python users a platform to play with and learn Python basics while taking advantage of Node-Red's low-code interface to manage scheduling and web dashboards.

There are many cases where Raspberry Pi features are only available in Python, so even die-hard Node-Red users could benefit from knowing how to integrate Python into their projects.

In this blog I'll look at two examples that mix Python and Node-Red. The first will create a web dashboard to drive a Raspberry Pi rover. The entire project will only use two Node-Red widgets, where one of the widgets uses Python script within the Node-Red environment. The second project will create an IoT page that shows temperature and humidity data from a BME280 sensor.

Getting Started

Depending on your Raspberry Pi image, Node-Red may already be installed. If not, see the Node-Red documentation or your Pi image for custom installation directions.

There are some excellent dashboard components that can be used to create light-weight web interfaces. For this blog I'll be using the Button State flow, which can be used to create an array of buttons. To install this component, go into the Manage Palette menu item, click on the Install tab, and then search for ui-button.

The next important step is to add a Python-enabled widget; there are a few choices. I chose the python-function-ps component because it has been recently updated, but the other choices also worked on my test projects.

Being able to use Python instead of Javascript in Node-Red is an extremely useful feature; however, it's not bulletproof and some care will be needed when you're using advanced Python libraries.

In the next section I’ll use these two widgets to control a Raspberry Pi rover.

Creating a Raspberry Rover

There are many approaches to creating a car or a rover with a Raspberry Pi. For this project I used:

  • A 2-motor car chassis (~$15)
  • Portable battery (5V, 3 Amp output ~$30)
  • Raspberry Pi with a motor shield
  • 4 alligator clips, and 4 jumper wires
  • some elastic bands and duct tape

For a Raspberry Pi 3 or 4 the portable battery needs to output 3 A. If you are using an older Pi 1 or 2 you can use a standard 2.1 A phone charger.

Due to the power draw it is not recommended that you connect motors directly to a Raspberry Pi; luckily there are quite a few good motor or automation shields available (~$25). If you're feeling adventurous you can build your own motor shield with an L293D chip for ~$2. On this project I used an older PiFace Digital module, which has good Python support but weak Node-Red functionality.

The 2-motor car chassis usually comes without any wiring on the motors. For a quick setup I used a combination of alligator clips and jumper wires to connect the motor terminals to the Pi motor shield. A couple of strips of duct tape are useful for holding the wires in place. Finally, elastic bands can keep the portable battery and the Raspberry Pi attached to the chassis.

To test the hardware setup, I found it best to keep the car chassis raised up with the wheels off the ground. This step allows you to use a standard power plug without killing your battery before you’re ready to play. You may have to do some playing with the wiring to ensure the motors are both turning in the required direction.

The first software step is to install your motor’s Python library. (Note: this step will vary depending on your motor shield). For my hardware the PiFace library is installed by:

pip install pifaceio

At this point, it’s important to test the hardware directly from Python. Check your hardware documentation for some test code to turn on and off a motor.

To test a single motor with Python within Node-Red, three flows can be used: an inject, a python-function-ps, and a debug.

The inject flow is used to create a message payload with either a numeric 0 or 1, to stop or start a motor.

In the python-function-ps flow the incoming Node-Red message (msg) is accessed as a Python dictionary variable. Below are some Python examples to read and manipulate the Node-Red message.

# get the message payload
themsg = msg['payload']
# set the payload 
msg['payload'] = "Good Status"
# create an item in the message structure
msg['temperature'] = 23.5
# clear the entire message
msg.clear()

For the PiFace library, my code needed to do a write_pin command to set a specific pin, and then a write command outputs the requested states for all the pins.

pin = 0
# Pass the msg payload as the pin state
pf.write_pin(pin,msg["payload"])
pf.write()

A debug flow will show if there are any problems in the Python code.

Once the basic testing is complete, the next step is to define a Node-Red dashboard that creates an array of buttons to control the rover.

The Node-Red logic for this project only requires the two new widgets that were installed earlier.

The Button State widget is edited by double-clicking on it. Multiple buttons can be added with custom labels, payloads and colors.

A simple two character string is used for the button's message payload, with the first character being the LEFT motor state, and the second being the RIGHT motor state. A FORWARD command will set both the LEFT and RIGHT motors to 1, with a payload of 11. It's important to note that to turn LEFT, the LEFT motor needs to be turned off and the RIGHT motor needs to run, and vice versa for turning RIGHT.

Inside the python-function-ps flow (see below), the Python pifaceio library is imported (line 4) and a pf object is created (line 5). Next the passed button payload is parsed to make two variables, the LEFT and the RIGHT requested motor state (lines 8-9). The final step is to write to the motor states (lines 11-12).

#
# Set PiFace Digital Pins
# 
import pifaceio
pf = pifaceio.PiFace()

# Get the Left and Right requested state
LEFT = int(msg["payload"][0])
RIGHT = int(msg["payload"][1])

# Set the left and right pin motor values
#   the left motor is on pin 0 and the right on pin 1
pf.write_pin(0,LEFT)
pf.write_pin(1,RIGHT)
pf.write()

return msg

A debug flow isn’t required but it can be useful to verify that the Python code completed cleanly.

The image below shows the rover with a PiFace Digital module mounted on a Pi 1, along with the Node-Red dashboard.

The interesting thing about this first project is that it shows how lean the logic can be if you mix Python with the right Node-Red components (only 2 flows!).

If the motor shield is based on the L293D chip set, then buttons and pins could be added for going in the backward direction.

Easing into Python

There is an excellent selection of Raspberry Pi Python starter projects that can be done. Communicating with I/O, motors or sensors is always a good place to start.

The second example will look at getting temperature and humidity data from a BME280 sensor (~$5). The gathering of the sensor data will be done in Python, then the real time scheduling and the web dashboard will be created in Node-Red.

The BME280 sensor wires to the Pi using I2C (Inter-Integrated Circuit) connections. The SDA (Serial Data) and SCL (Serial Clock) lines are on Raspberry Pi physical pins 3 and 5.

The first step in the project is to enable I2C communications, and then install a Python BME280 library:

# Enable I2C, 0 = enable, 1=disable
sudo raspi-config nonint do_i2c 0
# Install Python BME280 library
pip install RPI.BME280

BME280 sensors are typically on addresses 0x76 or 0x77. To verify the address the i2cdetect command line tool can be used:

# Show the feedback of i2cdetect with sample output
$ i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- -- 
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
70: -- -- -- -- -- -- -- 77  

To ensure that things are working, a quick Python test program can be created.

# bme_test.py - Show values from a BME280 sensor
#
import smbus2
import bme280

# BME280 sensor address (default address could be:  0x76)
address = 0x77

# Initialize I2C bus
bus = smbus2.SMBus(1)

# Load calibration parameters
calibration_params = bme280.load_calibration_params(bus, address)

# Get sampled data
data = bme280.sample(bus, address, calibration_params)

print("Temperature: ", data.temperature)
print("Pressure: ", data.pressure)
print("Humidity: ", data.humidity)

If everything is hooked up and working correctly, some values should appear:

$ python3 bme_test.py
Temperature:  20.943249713495607
Pressure:  996.5068353240587
Humidity:  52.84257199879564

This Python code can be tested in Node-Red with inject and debug flows.

There is a slight modification to the code: instead of doing print statements, the sensor results are passed into the msg dictionary variable. The debug flow is defined to show the complete message, so in the debug pane it is possible to see all the sensor results.
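
Below is a minimal sketch of what the python-function-ps version of the code could look like (it assumes the same smbus2/bme280 setup as bme_test.py, and runs inside the node where the msg dictionary is available):

# Read the BME280 inside a python-function-ps node
import smbus2
import bme280

address = 0x77
bus = smbus2.SMBus(1)
calibration_params = bme280.load_calibration_params(bus, address)
data = bme280.sample(bus, address, calibration_params)

# Instead of printing, pass the readings back in the Node-Red message
msg['payload'] = data.temperature
msg['humidity'] = data.humidity
msg['pressure'] = data.pressure

return msg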

The next step is to show the results on a web page, and this is done with the addition of two new widgets. The first is an old-style mercury thermometer widget (ui-widget-thermometer), and the second is a scheduler (bigtimer). Note: it is also possible to include the Node-Red BME280 component for a comparison check.

The final application uses the same Python code, but a bigtimer widget schedules its execution. The bigtimer has excellent scheduling functionality, but to keep things simple the middle output can be used to send out a pulse every minute.

The thermometer widget shows the temperature value, which is the message payload from the Python script. A chart widget also reads the temperature and is configured as a 2-hour line plot.

A change component is needed to move the humidity to the payload, and this allows a second chart to show the humidity in a bar chart.

Below is a picture of a Raspberry Pi with a BME280 sensor and the Node-Red Dashboard.

Calling External Python Scripts

I found a few cases where the python-function-ps widget crashed. For me this occurred with hardware specific libraries like pyusb.

The built-in Node-Red exec component can be used to run any external program.

Below is an example to pass a Javascript timestamp into a Python program which outputs the date/time as a string without milliseconds.

The exec block is configured to append the upstream msg.payload to the command.

The external Python script uses the sys library with the argv array to read in the passed payload. Python print statements become the top output pin (stdout), which is the msg.payload for downstream components.

Below is the test Python script used to read the Node-Red payload and output a date/time string.

#
# test.py - interface with Node-Red
#           Convert Javascript timestamp to string
import sys
from datetime import datetime

# Print out the timestamp as a string
if len(sys.argv) > 1:
    passedtime = int(sys.argv[1])
    # Strip out milliseconds
    timestamp = int(passedtime/1000)
    print(datetime.fromtimestamp(timestamp))

Summary

Python scripting in Node-Red offers new programmers a great way to build some fun applications without getting too bogged down.

OPC UA Protocol with Python and Node-Red

Industrial operations such as chemical refineries, power plants and mineral processing operations have quite different communications requirements than most IT installations. Some of the key industrial communications requirements include: security, multi-vendor connectivity, time tagging and quality indications.

To meet industrial requirements a communications standard called OPC (OLE for Process Control) was created. The original OPC design was based on Microsoft’s Object Linking and Embedding (OLE) and it quickly became the standard for communications between control systems consoles, historians and 3rd party applications.

The original OPC standard worked well but it had major limitations in the areas of Linux/embedded systems, routing across WANs, and new security concerns. To better address new industrial requirements the OPC UA (Open Platform Communications Unified Architecture) standard was created.

In this article I will create an OPC UA server that will collect sensor data using Python and Node-Red, and the results will be shown in a Node-Red web dashboard.

Install Python OPC UA Server

There are a number of OPC UA open source servers to choose from.

For "C" development applications see the Open62541 project (https://open62541.org/); it offers a C99 architecture that runs on Windows, Linux, VxWorks, QNX, Android and a number of embedded systems.

For lightweight, quick testing, OPC UA servers are available in Python and Node-Red.

The Free OPC-UA Library Project (https://github.com/FreeOpcUa) has a great selection of open source tools for people wishing to learn and play with OPC UA.

To keep things a little simple I will be using the python-opcua library, which is a pure Python OPC-UA server and client. (Note: a more complete Python OPC UA library, https://github.com/FreeOpcUa/opcua-asyncio, is available for more detailed work.) Also, an OPC-UA browser is a useful tool for monitoring OPC UA servers and their tags. To load both of these libraries:

# Install the pure Python OPC-UA server and client
sudo apt install python-opcua
# Install the OPC UA client and the QT dependencies
sudo apt install PyQT*
pip3 install opcua-client

Simple Python OPC-UA Server

As a first project a simple OPC-UA server will be created to add OPC-UA tags and then simulate values.

The first step in getting this defined is to set an endpoint or network location where the OPC-UA server will be accessed from.

The default transport for OPC-UA is opc.tcp. The Python socket library can be used to determine a node’s IP address. (To simplify my code I also hard coded my IP address, opc.tcp://192.168.0.120:4841).
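
If you would rather not hard code the address, a common Python sketch for finding the local IP is shown below (it assumes the node can route to an outside address such as 8.8.8.8; no traffic is actually sent):

import socket

# Open a UDP socket toward a known outside address to discover the local IP
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
my_ip = s.getsockname()[0]
s.close()

endpoint = "opc.tcp://{}:4841".format(my_ip)
print(endpoint)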

The OPC-UA structure is based on objects and folders, and under an object or folder tags are configured. Tags by default have properties like value, time stamp and status information, but other properties like instrument or alarm limits can be added.

Once a tag object is defined, the set_value function is used to simulate the tag values.

# opcua_server1.py - Create an OPC UA server and simulate 2 tags
#
import opcua
import random
import time
 
s = opcua.Server()
s.set_server_name("OpcUa Test Server")
s.set_endpoint("opc.tcp://192.168.0.120:4841")
  
# Register the OPC-UA namespace
idx = s.register_namespace("http://192.168.0.120:4841")
# start the OPC UA server (no tags at this point)  
s.start() 
  
objects = s.get_objects_node()
# Define a Weather Station object with some tags
myobject = objects.add_object(idx, "Station")
  
# Add a Temperature tag with an initial value
myvar1 = myobject.add_variable(idx, "Temperature", 25)
myvar1.set_writable(writable=True)
  
# Add a Windspeed tag with an initial value
myvar2 = myobject.add_variable(idx, "Windspeed", 11)
myvar2.set_writable(writable=True)
 
# Create some simulated data
while True:
    myvar1.set_value(random.randrange(25, 29))
    myvar2.set_value(random.randrange(10, 20))
    time.sleep(5)

The status of the OPC-UA server can be checked using the OPC-UA browser:

# start the Python OPC-UA browser client
opcua-client

Items within an OPC-UA server are defined by their name space index (ns) and their object index. The name space index is returned after a name space is registered. An object's index is defined when a new object is created. For this example the Windspeed tag has a NodeId of "ns=2;i=5", or index 5 on name space 2.

The opcua-client application can view real-time changes to a tag’s value using the subscription option.

In OPC the terms “tags” and “variables” are often used interchangeably. In the instrument lists the hardware signals are usually referred to as “tags”, but within the OPC UA server the term “variables” is used. The key difference is that a variable can also be an internal or soft point such as a counter.

Python OPC-UA Client App

For my Python client application I loaded up a simple gauge library (https://github.com/slightlynybbled/tk_tools):

pip install tk_tools

The Python client app (station1.py) defines an OPC-UA client connection and then it uses the NodeId definition of the Temperature and Windspeed tags to get their values:

# station1.py - Put OPC-UA data into gauges 
#
import tkinter as tk
import tk_tools
import opcua

# Connect to the OPC-UA server as a client
client = opcua.Client("opc.tcp://192.168.0.120:4841")
client.connect()

root = tk.Tk()
root.title("OPC-UA Weather Station 1")

# Create 2 gauge objects
gtemp = tk_tools.Gauge(root, height = 200, width = 400,
            max_value=50, label='Temperature', unit='°C')
gtemp.pack()
gwind = tk_tools.Gauge(root, height = 200, width = 400,
            max_value=100, label='Windspeed', unit='kph') 
gwind.pack()

def update_gauge():
    # update the gauges with the OPC-UA values every 1 second
    gtemp.set_value(client.get_node("ns=2;i=2").get_value())
    gwind.set_value(client.get_node("ns=2;i=5").get_value())
    root.after(1000, update_gauge)

root.after(500, update_gauge)

root.mainloop()
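
As an alternative to polling every second, the python-opcua library also supports subscriptions. A minimal sketch (using the same endpoint and NodeIds as above, and simply printing the changed values) might look like:

# subscribe_station1.py - print OPC-UA value changes using a subscription
import time
import opcua

class SubHandler:
    # Called from the client thread whenever a subscribed value changes
    def datachange_notification(self, node, val, data):
        print("New value for", node, ":", val)

client = opcua.Client("opc.tcp://192.168.0.120:4841")
client.connect()

sub = client.create_subscription(500, SubHandler())  # 500 ms publish interval
sub.subscribe_data_change(client.get_node("ns=2;i=2"))  # Temperature
sub.subscribe_data_change(client.get_node("ns=2;i=5"))  # Windspeed

try:
    while True:
        time.sleep(1)
finally:
    client.disconnect()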

XML Databases

In the earlier Python OPC-UA server example tags were dynamically added when the server was started. This method works fine for simple testing but it can be awkward for larger tag databases.

All industrial control vendors will have proprietary solutions to create OPC-UA tag databases from process control logic.

Users can also create their own tag databases using XML. The OPC-UA server tag database can be imported and exported to XML using the commands:

# to export from the online system to an XML file:
# where: s = opcua.Server()
s.export_xml_by_ns("mytags.xml")
# to import an XML file:
s.import_xml("mytags2.xml","")

The XML files can be viewed in a web browser, and unfortunately the format is a little ugly. The XML files have a header area with a large number of options. The NamespaceUris section is the custom area that defines the OPC UA end point address.

After the header there are object and variable definitions (<UAVariable>). In these sections the variable's NodeId, tag name and description are defined.

The Free OPC-UA Modeler can help with the creation of XML tag databases. To install and run it:

$ pip install opcua-modeler
$ opcua-modeler

The OPC-UA modeler will read existing XML files and then allow for objects, tags and properties to be inserted into the XML structure.

CSV to XML

A CSV file is an easy format for defining tag databases. For example a file mytags.csv could be defined with 3 fields: tagname, description and default value.

$ cat mytags.csv
# field: tag, description, default-value
TI-101,temperature at river, 25
PI-101,pressure at river, 14

A basic CSV to XML import tool can be created to meet your project requirements. There are a number of good programming options to do this migration. For my project I created a small Bash/AWK program to translate the 3 field CSV file to the required OPC-UA XML format.

The first awk section prints out the header information. The second awk section reads the input (CSV) text line by line and pulls out each of the three fields ($1, $2 and $3) and prints out the XML with these fields inserted in the output.

#!/usr/bin/bash
# csv2xml.sh - create an OPC UA XML file from CSV
# 
 
# add the xml header info
awk ' BEGIN {
  print "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
  print "<UANodeSet xmlns=\"http://opcfoundation.org/UA/2011/03/UANodeSet.xsd\"" 
  print "           xmlns:uax=\"http://opcfoundation.org/UA/2008/02/Types.xsd\""
  print "           xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"" 
  print "           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">"
  print "<NamespaceUris>"
  print "  <Uri>http://192.168.0.120:4841</Uri>" ; # This address would be passed in
  print "</NamespaceUris>"
}'

# Read the input CSV format and process to XML
awk ' {
   FS="," ; # separate fields with a comma
# Skip any comment lines that start with a #
  if ( substr($1,1,1) != "#" )
  {
    i = i+1 ; # increment the NodeID index
    print "<UAVariable BrowseName=\"1:"$1"\" DataType=\"Int32\" NodeId=\"ns=1;i="i"\" ParentNodeId=\"i=85\">"
    print "  <DisplayName>"$1"</DisplayName>" ; # set the display name to the 1st field
    print "  <Description>"$2"</Description>" ; # set the description to the 2nd field
    print "      <References>"
    print "        <Reference IsForward=\"false\" ReferenceType=\"HasComponent\">i=85</Reference>"
    print "      </References>"
    print "    <Value>"
    print "      <uax:Int32>"$3"</uax:Int32>" ; # set the default value to the 3rd field
    print "    </Value>"
    print "</UAVariable>"
  }   
}
END{ print "</UANodeSet>"} '

To run this script to read a CSV file (mytags.csv) and create an XML file (mytags.xml):

cat mytags.csv | ./csv2xml.sh > mytags.xml

Node-Red OPC UA Server

There is a good OPC UA node (https://flows.nodered.org/node/node-red-contrib-opcua) that includes a server and most common OPC UA features. This node can be installed within Node-Red using the "Manage Palette" option.

To set up a Node-Red OPC UA server and a browser, define an OpcUa Server node to use the Node-Red IP address and set a custom nodeset directory. For my example I set the directory to /home/pi/opcua and copied the XML file that I created from CSV (mytags.xml) into it.

The OpcUa Browser node will send messages directly into the debug pane. This browser node allows me to see the objects/variables that I defined in my XML file.

The next step is to look at writing and reading values.

The simplest way to communicate with an OPC UA server is to use an OpcUa Item node to define the NodeId and an OpcUa Client node to do some action. For the OpcUa Client node the Endpoint address and an action need to be defined.

In this example the pressure (PI-101) has a NodeID of “ns=5;i=2”, and this string is entered into the OpcUA item node. The OpcUA Client node uses a Write action. When a Write action is issued a Good or Bad status message is returned.

The OpcUa Client node supports a number of different actions. Rather than doing a Read action like in the Python client app, a Subscribe can be used. A Subscribe action will return a value whenever the value changes.

Node-Red Dashboards with the Python OPC UA Server

For the last example I will use the Python OPC UA server from the first example. The Temperature and Windspeed tags will use the same simulation code, but an added WaveHeight tag will be a manually entered value from Node-Red.

A Node-Red application can connect to the Python OPC UA server and present that data in a Node-Red dashboard.

This example subscribes to two real-time inputs (Temperature and Windspeed) and presents the values in gauges. The OpcUA Item nodes define the OPC UA NodeId’s to be used.

All the OpcUa Client nodes will need their Endpoints defined to the Python OPC UA server address.

The subscribed data values are returned as a 2-item array (because the data type is an Int64). The Gauge node will only read the first payload array item (which is 0), so a small function node copies the second payload item (msg.payload[1]) to the message payload:

// Copy the second payload array item to be the payload
//  Note: msg.payload[0] = 0 and the Dashboard Gauge needs to use the value at payload[1]
msg.payload = msg.payload[1]
return msg;

For this example a manual input was included. The WaveHeight is subscribed to like the other tags, and the slider position is updated to its value. The slider can also be used to manually set the value by having the slider output passed to an OpcUa Client node with a WRITE action.

After the logic is complete the Deploy button will make the application live. The Node-Red dashboard can be viewed at: http://node-red-ip:1880/ui

Final Comments

This is a quick and dirty set of examples on how to use Python and Node-Red with OPC UA.

OPC UA has a ton of other features that can be implemented, like Alarms and Events and Historical Data.

Also it should be noted that most high end OPC UA servers support accessing the OPC UA items via their browse names. So instead of accessing a point using "ns=5;i=6", a browse name string can be used, such as "ns=5;s=MYTAGNAME".
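
With the Python client that kind of read could look something like the line below (assuming the server actually exposes that string NodeId):

value = client.get_node("ns=5;s=MYTAGNAME").get_value()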

OpenPLC on a Raspberry Pi

For home automation projects there are a lot of good software packages like Home Assistant and Node Red that can be used to control and view sensors and devices.

If you are interested in looking at an industrial controls approach to your automation projects then OpenPLC is a good package to consider.

A PLC (Programmable Logic Controller) is an industrially hardened hardware device that manages I/O and logic using the IEC 61131-3 standard.

OpenPLC is open source software that runs on a Raspberry Pi, Linux or Windows PC and it offers users a great way to learn industrial control concepts, programming languages and communications protocols.

In this article I will create three small Raspberry Pi projects using the IEC 61131-3 ladder logic, function block and structured text programming languages. Finally, I will have these projects pass their data via Modbus TCP to a Node-Red dashboard.

Getting Started

The OpenPLC software comes in three packages: a logic editor, the runtime component, and a graphic builder. See https://www.openplcproject.com/getting-started/ for specific instructions for your installation.

For my installation I put the OpenPLC editor on my Ubuntu PC so that I could do remote configuration. I loaded the OpenPLC runtime on a Raspberry Pi. The OpenPLC runtime web interface is used to load and monitor logic.

I didn't install the OpenPLC graphic builder; instead I used Node-Red dashboards as my final user interface.

OpenPLC has a good number of optional communications packages and slave I/O components. A typical layout could be as below.

For my application I created a project with three programs: a ladder program, a function block program and a structured text program. The Resource object (Res0) defines global variables that can be used by all programs, and the task cycle times. This is a small project so I put all the programs into the same task execution (task0). For a larger project I might put all my digital logic into a fast task execution (20 ms) and my analog logic into a slower task execution (250 ms).

I set up the Raspberry Pi with a push button on pin 17 and an LED on pin 23.

On the Raspberry Pi, GPIO pins are referenced using IEC 61131-3 addressing. So the push button at BCM pin 17 (physical pin 11) is addressed by %IX0.3, an input bit on bus 0 at bit 3. The LED at BCM pin 23 (physical pin 16) is addressed by %QX0.2, an output bit on bus 0 at bit 2.

It’s important to note that OpenPLC has allocated all the left side (odd) pins as inputs and all the right side (even) pins as outputs.

Ladder Diagrams (LD)

Ladder logic was the first of the IEC 61131-3 programming languages; it was developed as a graphic representation of circuit diagrams for relay logic hardware. The term "ladder" comes from the fact that the logic looks a little like a ladder, with a vertical power rail on the left side and a vertical ground rail on the right side, and a series of horizontal lines or "rungs" wiring hardware components between the rails.

Most electricians feel very comfortable using Ladder logic and it is a good programming method for managing digital logic. If you come from a programming background Ladder logic may feel a little strange at first; for example, even simple AND/OR logic to light an LED is drawn as contacts and coils on a rung.

For my Ladder program I wanted to light an LED for 3 seconds with a single push of a button. In the OpenPLC editor I referenced an external variable PB1 (it's defined in the Resource object Res0) and I created two local variables: LED2, my output LED, and TOF0, an off-delay timer.

IEC 61131-3 has a wide range of functions that can be used in Ladder rungs. In this example a TOF function was inserted after the push button, and the time parameter is wired in as a variable.

Function Block Diagrams (FBD)

One of the limitations of Ladder logic is that managing analog logic can be a little messy; for this reason, Function Block Diagrams (FBD) were developed.

If you feel comfortable using graphic programming applications like Node-Red then you shouldn’t have any problems working in Function Block Diagrams.

For my FBD program I wanted to count the number of times the LED was lit and output the value to a Modbus holding register.

As in the Ladder program, the external PB1 variable is referenced. A new output, CNT_FB, is defined as output word 100 (%QW100).

The FBD uses a Rising Edge Trigger (R_TRIG) to catch when the LED turns on. The output from R_TRIG is a boolean so the value is converted to an INT and added to the value of CNT_FB.

Structured Text (ST)

One of the advantages of Function Block Diagrams is that they are very readable and somewhat self-documenting. The downside of FBD is that it can be messy for complex conditional logic.

Structured Text (ST) was developed as a programming option that can work along with the other 61131-3 languages. Structured Text is block structured and syntactically resembles Pascal.

For my Structured Text program I wanted to implement the same functionality as the earlier Function Block Diagram program. Doing this in ST took only 3 lines of code vs. 5 function blocks.

In my ST program I added a simple IF condition to reset the push button counter if the value reached 1000.

It's important to note that library functions such as R_TRIG are available in all the 61131-3 programming languages. It is also possible to create your own custom functions in one programming language and they can then be used in all the other languages.

Running OpenPLC Programs

After the three programs have been compiled and saved they can be installed into the OpenPLC runtime application. To manually start the runtime application:

pi@pi4:~ $ cd OpenPLC_v3
pi@pi4:~/OpenPLC_v3 $ sudo ./start_openplc.sh &

The OpenPLC runtime will start a Web application on port 8080 on the Raspberry Pi. After logging into the web interface, the first step is to select the “Hardware” option and set the OpenPLC Hardware Layer to “Raspberry Pi”. Next select the “Programs” option and upload the OpenPLC configuration file. After a new configuration file is uploaded and compiled, the final step is to press the “Start PLC” button.

The “Monitoring” option can be used to view the status of variables in the PLC configuration.

Modbus with Node-Red

Modbus is one of the earliest and most common communication protocols used to connect industrial devices together. Modbus can be used on serial interfaces (Modbus RTU) or on Ethernet networks (Modbus TCP); both are supported by OpenPLC.

Node-Red has a number of Modbus TCP nodes that can be used. I found that node-red-contrib-modbustcp worked well for my application. New nodes can be added to Node-Red using the "Manage Palette" option.

A simple Node-Red application that can monitor the LED and counter statuses would use three modbustcp input nodes, plus a text node and two numeric nodes.

The Modbus read call returns 16 bits of information, so a small function was created (“Only pass Item 0”) to change the msg payload to be just the first item in the array:

msg.payload = msg.payload[0];
return msg;

Modbus supports 4 object types: coils, discrete inputs, input registers and holding registers.

For this project the LED's IEC address is %QX0.2, which would be a coil at address 2. The function block counter (CNT_FB) address of %QW100 is holding register 100 (CNT_ST is holding register 0).

Modbus Writing from Node-Red

The Ladder logic program was updated to light the LED from either the push button or a hold register. The hold register (%QW1) is an integer so the value is converted to a boolean then “OR”-ed with the push button interface.

In Node-Red a slider node is used to pass a 0/1 to a modbustcp output node that writes to holding register 1.

The Node-Red web dashboard is accessed at: http://your_rasp_pi:1880/ui/

Final Comments

OpenPLC is an excellent testing and teaching tool for industrial controls.

Home Assistant History on Node-Red Charts

Home Assistant is an open source home automation platform that can monitor and control smart home devices, and it integrates with many other common systems.


The Home Assistant installation is targeted at Raspberry Pis, but other hardware options are available.

I was very impressed how easy it was to install Home Assistant and get a basic home integration system up and running.

There are a huge number of integration solutions (1500+) that connect to most of the mainstream products. However, if you want to do some custom programming with connections to Arduinos, other Raspberry Pis or PCs, there isn't an easy "out of the box" solution. To solve this requirement Home Assistant has included Node-Red as an add-on.

Node-RED is a visual programming tool for wiring together hardware devices, APIs and online services.

For information on how to install Node-Red on Home Assistant see the HA documentation. (I wrote a blog about my installation.)

Home Assistant History

The default installation of Home Assistant has history enabled, and the data is stored in a local SQLite database (home-assistant_v2.db) within your configuration directory unless the recorder integration is set up differently.

Charts of sensor history can be shown in the Home Assistant Overview pages or as dialogs.

Using Node-Red with HA History

The Node-Red installation has a number of Home Assistant nodes that allow sensor data to be read and created. For viewing history the HA get history node is used.

A simple manual test circuit to get history would have: an injector, a get_history and a debug node.

By double-clicking on the get history node its configuration can be defined. For this example a sensor entity id of sensor.cpu_temp is used with a 10 minute (10m) time scale. When the injector is toggled the debug window will show an array of data and time entries.

3 Button History Chart

The logic to create a 3 button history chart would use: 3 dashboard buttons, 3 get history nodes, a Javascript function, and a dashboard chart node.

The Node-Red chart node typically does local historical storage within the node itself. However, the chart node can also be used to read external data and show a line chart of it; in that case nothing is stored locally.

A javascript function node can be used to format the data into the required form:

//
// Format the HA History results to match the charts JSON format
//

var series = ["HA Values"];
var labels = ["Data Values"];
var data = "[[";
var thetime;
 
for (var i=0; i < msg.payload.length; i++) {
    thetime = (msg.payload[i].last_changed); // Note: check your format?
    data += '{ "x": "' + thetime + '", "y":' + msg.payload[i].state + '}';
    if (i < (msg.payload.length - 1)) {
        data += ","
    } else {
        data += "]]"
    }
}
var jsondata = JSON.parse(data);
msg.payload = [{"series": series, "data": jsondata, "labels": labels}];
return msg;

The payload data will be in the format of:
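
An example of the shape is shown below (the timestamps and values are made up for illustration; the real x values come from the sensor's last_changed field):

[{
    "series": ["HA Values"],
    "data": [[
        { "x": "2020-04-16T14:10:11+00:00", "y": 47.2 },
        { "x": "2020-04-16T14:15:42+00:00", "y": 48.1 }
    ]],
    "labels": ["Data Values"]
}]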

If everything is formatted correctly, the Node-Red dashboard page will show the selected sensor history as a line chart.

For a more flexible presentation it would be good to use adjustable time periods.

Final Thoughts

For simple historical storage I found that the built-in History worked fine, however if you’re looking for custom long term storage then the InfluxDB add-on to HA might be a better solution.

Monitor Linux Servers with SSH/command line tools and Node-Red

There are a number of technologies and packages available for monitoring computer hardware. For medium to large systems an SNMP (Simple Network Management Protocol) approach is usually the preferred solution. However if you have a smaller system with older servers there are excellent light weight command line utilities that can be used.

These command line utilities can be remotely run using SSH (Secure Shell) and the output is parsed to return only the data value. This value can then be graphically shown in a Node-Red web dashboard.


In this blog I will show some examples using iostat to monitor CPU utilization, and  lm-sensors and hddtemp to monitor temperatures.

 

iostat – CPU Utilization

The iostat utility is part of the sysstat package and it is probably already loaded on your system; if not, it can be installed by:

 sudo apt-get install sysstat

When run, iostat will generate a report of CPU, device and file system utilization.

$ iostat
Linux 4.15.0-72-generic (lubuntu) 	2020-04-16 	_i686_	(4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          19.48    0.01    7.96    0.65    0.00   71.90

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00       1543          0
loop1             0.21         0.21         0.00     107980          0
loop2             0.13         0.13         0.00      66224          0
loop3             0.00         0.00         0.00       1141          0
loop4             0.00         0.00         0.00          8          0
sda               1.92        13.08        31.89    6722321   16395304

To find a specific result, use an iostat option and some Bash code. For example, to find just the CPU %idle:

$ iostat -c
Linux 4.15.0-72-generic (lubuntu) 	2020-04-16 	_i686_	(4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          19.45    0.01    7.94    0.65    0.00   71.95

$ # get the 4th line of just stats
$ iostat -c | sed -n 4p
 19.45 0.01 7.94 0.65 0.00 71.95

$  # get the 4th line, 6th string
$ iostat -c | sed -n 4p | awk '{print $6}'
 71.95

lm-sensors – Chip based temperature sensors

To install lm-sensors on Ubuntu enter:

sudo apt-get install lm-sensors

The next step is to detect which sensors are available and need to be monitored:

 sudo sensors-detect

This step will give a number of prompts about which sensors to scan. Once the scan is complete the sensors command will return results for the hardware that it found:

pete@lubuntu:~$  sensors  
dell_smm-virtual-0
Adapter: Virtual device
Processor Fan: 2700 RPM
CPU:            +42.0°C  
Ambient:        +34.0°C  
SODIMM:         +34.0°C  

acpitz-virtual-0
Adapter: Virtual device
temp1:        +48.5°C  (crit = +107.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +43.0°C  (high = +87.0°C, crit = +105.0°C)
Core 0:        +42.0°C  (high = +87.0°C, crit = +105.0°C)
Core 1:        +40.0°C  (high = +87.0°C, crit = +105.0°C)

A specific sensor chip can be shown with: sensors <chip-name>. The grep command can be used to get a specific line in the output. For example, to get just the Core 0 temperature:

pete@lubuntu:~$ sensors | grep 'Core 0'
Core 0:        +45.0°C  (high = +87.0°C, crit = +105.0°C)

The awk command can be used to get just the temperature, (the third string on the line).

pete@lubuntu:~$ sensors | grep 'Core 0' | awk '{print $3}'
+45.0°C

Later in Node-Red I will show this value in a graph or chart.

 

hddtemp – Monitor Hard Drive Temperatures

hddtemp is a hard drive temperature monitoring package. It can be installed by:

sudo apt-get install hddtemp

By default hddtemp requires superuser rights; to make the results available to non-superusers enter:

sudo chmod u+s /usr/sbin/hddtemp

To see the temperature of a hard drive, enter the drive's device name. For example, to see /dev/sda:

pete@lubuntu:~$ hddtemp /dev/sda
/dev/sda: WDC WD3200BPVT-75JJ5T0: 34°C

Again the awk command can be used to parse the output to get just the temperature. For the /dev/sda example the temperature was the fourth string.

pete@lubuntu:~$ hddtemp /dev/sda | awk '{print $4}'
34°C

Psensor – a Sensors Graphic Widget

This is a little off topic but it is worth mentioning. Psensor offers a widget that will show CPU idle time, and it monitors data from lm-sensors and hddtemp.


Psensor is installed by:

sudo apt-get install psensor

Psensor is a slick utility for local monitoring, but it isn’t really designed to pass information to a central monitoring server.

Remotely Running Commands

Rather than having remote Linux servers send data to a central node, the central node can periodically poll the remote servers for data.

SSH (Secure Shell) can be used to run remote commands. The only issue is that SSH needs a user to enter a password. This is fine for manual testing, but for automated polling it is a problem. There are a couple of solutions:

  1. ssh-keygen – can generate an ssh key pair that is stored in a user directory. This allows the standard ssh client to run without entering a password.
  2. sshpass – an ssh client that includes the username/password as a command line option.

The ssh-keygen approach is recommended for most applications because it does not expose passwords. For the testing I will show the sshpass method and then in the Node-Red project I will use the ssh-keygen approach.
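
A minimal sketch of the ssh-keygen setup is shown below (the user and host are the same test machine used in the examples that follow):

# Generate a key pair on the central polling node (accept the defaults)
ssh-keygen -t rsa
# Copy the public key to the remote server
ssh-copy-id pete@192.168.0.116
# Verify that a remote command now runs without a password prompt
ssh pete@192.168.0.116 hostname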

sshpass is included in many standard distributions and it can be installed by:

sudo apt-get install sshpass

An sshpass example to get some CPU/board and hard drive temperatures would be:

$ sshpass -p pete ssh pete@192.168.0.116 sensors dell_smm-virtual-0
dell_smm-virtual-0
Adapter: Virtual device
Processor Fan: 3044 RPM
CPU: +31.0°C 

$ sshpass -p pete ssh pete@192.168.0.116 hddtemp /dev/sda 
/dev/sda: HTS548040M9AT00: 39°C

Using some grep and awk calls the output can be shortened to just show the temperatures:

$ sshpass -p pete ssh pete@192.168.0.116 sensors | grep temp1 | awk '{print $2}'
+30.5°C

$ sshpass -p pete ssh pete@192.168.0.116 hddtemp /dev/sda | awk '{print $3}'
39°C

With this basic set of commands it is now possible to use Node-Red to periodically poll Linux servers for data.

Node-Red

Node-Red is a web based visual programming environment. Node-Red has a wide variety of nodes that can be wired together to create logic.

Node-Red is pre-installed with the Raspbian images. To install it on other systems see: https://nodered.org/#get-started

For this project I added two nodes:

  • bigssh – a ssh node that saves and uses ssh keygen credentials
  • bigtimer – a timer node, that is used to poll for data

These components can be installed either manually or via the "Manage Palette" menu option.


A basic test circuit to manually poll a Linux server and return a temperature, would use an injector, a bigssh and a debug node.


The logic to poll a Linux server every minute and put the results on a Node-Red web dashboard would use a bigtimer, a bigssh, a gauge and a chart node.

The bigssh node can run any of those useful parsed commands that we worked on earlier on a remote node. For example, the command for the temp1 value (on acpitz-virtual-0) is:

sensors | grep temp1 | awk '{print $2}'

The bigtimer node has a good selection of scheduling and timing functions. By default the middle output pin will generate a pulse every minute.


After the logic is complete click on the “Deploy” button on the right side of the menu bar. The Node-Red web dashboard is available at: http://node-red:1880/ui/


Final Comments

In this blog I only looked at three command line utilities; there are many others that could use the same technique.

Micro:bits and Node-Red

The BBC Micro Bit (micro:bit) is an open source hardware ARM-based embedded system designed by the BBC for use in computer education in the UK. The device is half the size of a credit card and has an ARM Cortex-M0 processor, accelerometer and magnetometer sensors, Bluetooth and USB connectivity, a display consisting of 25 LEDs, and two programmable buttons.

Depending on where you purchase it the price ranges between $15-$20. So this is a very attractive module for the beginning programmer.

The micro:bit module has 2 buttons to interface with it and a small 5×5 LED screen. This is good for small tests but it's a little limiting.

For the most part the micro:bit is a standalone unit, so in this blog I wanted to show how to put micro:bit information onto a Node-Red web dashboard that can be viewed from a smart phone, tablet or PC.


Micro:bits Setup

The micro:bit has a USB connection that can be used for communications with PCs or Raspberry Pis. For my setup I used a Raspberry Pi Zero W with a microUSB-to-USB adapter to connect to the micro:bit.


The micro:bit can be programmed via a nice web interface; for details see: https://microbit.org/guide/quick/. For this application I programmed with blocks.

My logic had the temperature and light sensor values written out every 10 seconds in the format T=xxx,L=xxx, with a comma separator between the data pieces. Button presses were sent as either "A=1," or "B=1,".


Node-Red Setup

Node-Red is pre-installed on the Raspberry Pi image; if you want to use a PC instead, see the Node-Red installation documentation.

Node-Red has a serial port component (https://flows.nodered.org/node/node-red-node-serialport) that can be loaded manually or via the Palette Manager.

The first step is to insert a serial input node and define the serial interface. Double-click on the serial input node and edit the serial connection. The interface will vary with your setup, but Node-Red will show a list of possible USB ports. The default baud rate of the micro:bit's USB port is 115200. I used a timeout of 200 ms to get the messages, but you could also look for a terminating character (the comma "," could be used).


The logic used 4 Javascript function nodes to parse the micro:bit messages.


“Get Temp Value” Function:


// Pull out the temperature
//
var themsg = msg.payload;

if (themsg.indexOf("T=") > -1) {
    var msgitems = themsg.split(",");
    var temp = msgitems[0];
    temp = temp.substring(2,4);
    msg.payload = temp;
    return msg;
}

“Get Light Value” Function:

// Pull out the Light Sensor Value
//
var themsg = msg.payload;

if (themsg.indexOf("T=") > -1) {
    var msgitems = themsg.split(",");
    var light = msgitems[1];
    light = light.substring(2,5);
    msg.payload = light;
    return msg;
}

“Check Button A” Function:

// If the message is Button A pressed
// "A=1,"
if (msg.payload == "A=1,") {
msg.payload =  1;
return msg;
}

“Check Button B” Function:

// If the message is Button B pressed
// "B=1,"
if (msg.payload == "B=1,") {
msg.payload =  1;
return msg;
}

Chart nodes are used to show the results. (Note: you’ll need to create a dashboard name).

For the button presses a 1-to-0 transition is needed after a button press, otherwise the chart will always show a value of 1. This transition is done using a trigger node.

The final web dashboard is available at: http://your_node_red_ip:1880/ui.


Final Comments

The next step will be to add the ability to have Node-Red write values to the micro:bit. This would be done with the Node-Red serial output node. Micro:bits have a serial read function that would then process the command.

InfluxDB with Node-Red

There are a lot of excellent databases out there. Almost all databases can support time tagged information and if you have regularly sampled data everything works well. However if you have irregularly sampled data things can get a little more challenging.

InfluxDB is an open-source time series database (TSDB). It is written in Go and optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics.

InfluxDB has a number of great features:

  • when data is added, a time stamp is automatically added if it isn't already included
  • InfluxDB manages aggregation over time (e.g. means over an hour)
  • Open Source Web Trending packages like Grafana and Chronograf will talk directly to InfluxDB
  • an SQL-like query language with a time based syntax

In this blog I wanted to document my notes on:

  • How to add sampled data from Node-Red to Influx
  • How to view Influx historical data in a Node-Red chart

Why Use Node-Red with Influx

With the great Web trending interfaces like Grafana and Chronograf why use Node-Red?

  • I really like Grafana, but I didn’t find it to be 100% mobile friendly, whereas Node-Red is designed for mobile use.
  • if you’re inputting data or doing logic in Node-Red it makes sense to keep the interface logic there also.

The downside of using Node-Red is that you will have to make your own charting controls.

Getting Started with InfluxDB

The official installation document  lists the various options based on your OS. For a simple Raspberry Pi or Ubuntu installation I used:

sudo apt-get install influxdb

The influxdb configuration/setup is modified by:

sudo nano /etc/influxdb/influxdb.conf

After configuration changes Influx can be restarted by:

sudo service influxdb restart

The Influx command line interface (CLI) is useful for getting started and checking queries. It is started by entering: influx (note: it might be slow to come up initially).

Below I’ve opened the influx CLI and created a new database called nrdb.

~$ influx
Connected to http://localhost:8086 version 1.7.9
InfluxDB shell version: 1.7.9
> create database nrdb
> show databases
name: databases
name
----
_internal
pidata
nrdb
>

Node-Red and Influx

Node-Red is pre-installed on Raspberry Pi. If you need to install Node-Red on a Window, MacOS or Linux node see the installation instructions.

For my testing I used the following definitions:

  1. nrdb – the InfluxDB database
  2. mytemps – the measurement variable for my temperatures
  3. Burlington, Hamilton – two locations for the temperatures
  4. temperatures – the actual temperatures

Two additional Node-Red libraries were installed for this project (the InfluxDB and BigTimer nodes).

These libraries can either be installed using npm or within Node-Red using the "Manage Palette" option.


For this project I created two sets of logic. The first set used the BigTimer node to write a new simulated input every minute (via the middle output pin of BigTimer), or to manually push in a value. The second part of the logic used a selected time period to query the data and present it in a chart and table.


The first step is to drop in an InfluxDB output node and then configure the Influx server, database and measurement.


A Javascript function node ("Simulate an Input") is used to format the fields and values. The first passed item holds the field values, and the second item holds the tags. Note: there are a number of different ways to use this node.
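
A minimal sketch of the "Simulate an Input" function node is shown below (the field and tag names are this project's; it assumes the InfluxDB output node accepts a two-object payload array of fields then tags):

// Simulate an Input - build a fields object and a tags object
msg.payload = [
    { temperature: Math.round(Math.random() * 20) },  // field value(s)
    { location: "Burlington" }                        // tag value(s)
];
return msg;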


The Big Timer middle output will send a value out every minute. I added an Inject Node (“Force Test”) so I could see more values.

To test that things are running, the influx cli can be used:

> use nrdb
Using database nrdb
> show measurements
name: measurements
name
----
mytemps
> select * from mytemps
name: mytemps
time location temperature
---- -------- -----------
1580584703785817412 Burlington 17
1580584706364427345 Burlington 5
1580584761862704310 Burlington 8

Show Influx Data in a Node-Red Dashboard

For a simple Dashboard I wanted to use a dropdown node (as a time selector), a chart and a table.

The drop down node has a selection of different times.


The payload from the dropdown node would be something like: 1m, 5m, 15m. A Javascript function node (“New Time Scale”) used this payload and created an InfluxDB query.
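
A sketch of that function node is shown below (it assumes the InfluxDB input node reads the query from msg.query when its own query field is left blank):

// New Time Scale - build an InfluxDB query from the dropdown payload (e.g. "5m")
msg.query = "select time,temperature from mytemps where location='Burlington'" +
            " and time > now() - " + msg.payload;
return msg;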


This syntax can be tested in the influx cli:

> select time,temperature from mytemps where location='Burlington' and time > now() - 5m
name: mytemps
time temperature
---- -----------
1580588829859372644 12
1580588889896729245 6
1580588949931621672 17
1580589009972333308 8
1580589069980649689 12

The InfluxDB input node only contains the InfluxDB server information. The query is passed in from the Javascript function node ("New Time Scale").

A Javascript function node ("Format Influx Results") is used to put the msg.payload into a format that the chart node can use.


//
// Format the InfluxDB results to match the charts JSON format
//

var series = ["temp DegC"];
var labels = ["Data Values"];
var data = "[[";
var thetime;

for (var i=0; i < msg.payload.length; i++) {
    thetime = Number(msg.payload[i].time); // Some manipulation of the time may be required
    data += '{ "x":' + thetime + ', "y":' + msg.payload[i].temperature + '}';
    if (i < (msg.payload.length - 1)) {
        data += ","
    } else {
        data += "]]"
    }
}
var jsondata = JSON.parse(data);
msg.payload = [{"series": series, "data": jsondata, "labels": labels}];
return msg;

Once all the logic has been updated, click on the Deploy button. The Node-Red dashboard can be accessed at: http://node-red_ip:1880/ui. Below is an example:


Final Comments

This project was not 100% finished; there are still some cleanup items to do, such as:

  • use real I/O
  • make the times a little cleaner in the table
  • better time selections for the chart.

Also, to better explain things I only used one location, but multiple locations and data points could be inserted, queried and charted.

Sqlite and Node-Red

Sqlite is an extremely light weight database that does not run a server component.

In this blog I wanted to document how I used Node-Red to create, insert and view SQL data on a Raspberry Pi. I also wanted to show how to reformat the SQL output so that it could be viewed in a Node-Red Dashboard line chart.

Installation

Node-Red is pre-installed on the Pi Raspbian image. I wasn't able to install the Sqlite node using the Node-Red palette manager, so instead I did a manual install as per the directions at: https://flows.nodered.org/node/node-red-node-sqlite.

cd ~/.node-red
npm i --unsafe-perm node-red-node-sqlite
npm rebuild

Create a Database and Table

It is possible to create a database and table structures totally in Node-Red.

I connected a manual inject node to a sqlite node.


In the sqlite node an SQL create table command is used to make a new table. Note: the database file is automatically created.

For my example I used a 2-column table with a timestamp and a value.
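
The exact SQL will depend on your design; a minimal create statement matching the column names used later in this blog (thetime and thetemp) could be:

CREATE TABLE IF NOT EXISTS temps (thetime INTEGER, thetemp REAL);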


Insert Data into Sqlite

Data can be inserted into Sqlite a number of different ways. A good approach for a Rasp Pi is to pass some parameters into an SQL statement.


The sqlite node can use a "Prepared Statement" with a msg.params item to pass in data. For my example I created two variables: $thetime and $thevalue.
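
The prepared statement configured in the sqlite node would then be something like (column names assumed to match the table above):

INSERT INTO temps (thetime, thetemp) VALUES ($thetime, $thevalue);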


A function node can be used to format a msg.params item.


// Create a Params variable
// with a time and value component
//
msg.params = { $thetime:Date.now(), $thevalue:msg.payload }
return msg;

Viewing Sqlite Data

A “select” statement is used in an sqlite node to view the data.

A simple SQL statement to get all the data for all the rows in this example would be:

select * from temps;

A debug node can be used to view the output.

Custom Line Chart

Node-Red has a nice dashboard component that is well formatted for web pages on mobile devices.

To add the dashboard components use the Node-Red palette manager and search for: node-red-dashboard.

By default the chart node will create its own data vs. time storage. For many applications this is fine however if you want long term storage or customized historical plots then you will need to pass all the trend data to the chart node.

For some details on passing data into charts see: https://github.com/node-red/node-red-dashboard/blob/master/Charts.md#stored-data

Below is an example flow for creating a custom chart with 3 values with timestamps.

The JavaScript code needs to create a structure with series, data and labels definitions:


msg.payload = [{
    "series": ["A"],
    "data": [
        [{ "x": 1577229315152, "y": 5 },
         { "x": 1577229487133, "y": 4 },
         { "x": 1577232484872, "y": 6 }
        ]
    ],
    "labels": ["Data Values"]
}];

return msg;

This will create a simple chart:

custom_chart_image

For reference, below is an example of the data structure for three I/O points with timestamps:


// Data Structure for: Three data points with timestamps

msg.payload = [{
    "series": ["A", "B", "C"],
    "data": [
        [{ "x": 1577229315152, "y": 5 },
         { "x": 1577229487133, "y": 4 },
         { "x": 1577232484872, "y": 2 }
        ],
        [{ "x": 1577229315152, "y": 8 },
         { "x": 1577229487133, "y": 2 },
         { "x": 1577232484872, "y": 11 }
        ],
        [{ "x": 1577229315152, "y": 15 },
         { "x": 1577229487133, "y": 14 },
         { "x": 1577232484872, "y": 12 }
        ]
    ],
    "labels": ["Data Values"]
}];

Sqlite Data in a Line Chart

To manually update a line chart with some Sqlite data I used the following nodes:

The SQL select statement will vary based on which time period or aggregate data is required. For the last 8 values I used:

select * from temps LIMIT 8 OFFSET (SELECT COUNT(*) FROM temps)-8;

The challenging part is to format the SQL output to match the required format for the Line Chart. You will need to iterate over each data row (payload object) and format a JSON string.

//
// Create a data variable
//
var series = ["temp DegC"];
var labels = ["Data Values"];
var data = "[[";

for (var i=0; i < msg.payload.length; i++) {
    data += '{ "x":' + msg.payload[i].thetime + ', "y":' + msg.payload[i].thetemp + '}';
    if (i < (msg.payload.length - 1)) {
        data += ","
    } else {
        data += "]]"
    }
}
var jsondata = JSON.parse(data);
msg.payload = [{"series": series, "data": jsondata, "labels": labels}];

return msg;

To view the Node-Red Dashboard enter: http://pi_address:1880/ui


Final Comments

For a small standalone Raspberry Pi project, using Sqlite as a database is an excellent option. Because a Pi is limited in data storage, I would need to include a function to limit the amount of data stored.

Apache Kafka with Node-Red

Apache Kafka is a distributed streaming and messaging system. There are a number of other excellent messaging systems such as RabbitMQ and MQTT. Where Kafka is being recognized is in the areas of high volume performance, clustering and reliability.

Like RabbitMQ and MQTT, Kafka messages are organized into topics. Topics can be produced (published) and consumed (subscribed to). Where Kafka differs is in the storage of messages: Kafka stores all produced topic messages up until a defined timeout.

Node-Red is an open source visual programming tool that connects to Raspberry Pi hardware and it has web dashboards that can be used for Internet of Things presentations.

In this blog I would like to look at using Node-Red with Kafka for Internet of Things type of applications.

Getting Started

Kafka can be loaded on a variety of Unix platforms and Windows. A Java installation is required for Kafka to run; Java can be installed on an Ubuntu system by:

sudo apt-get install default-jdk

For Kafka downloads and installation instructions see: https://kafka.apache.org/quickstart. Once the software is installed and running, there are a number of command line utilities in the Kafka bin directory that allow you to do some testing.
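
For example, a test topic can be created before producing to it (the flags vary by Kafka version; newer releases use --bootstrap-server, older ones use --zookeeper localhost:2181):

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic iot_test1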

To test writing messages to a topic called iot_test1, use the kafka-console-producer.sh  command and enter some data (use Control-C to exit):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic iot_test1
11
22
33

To read back and listen for messages:

 bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic iot_test1 --from-beginning
11
22
33

The Kafka server is configured in the config/server.properties file. A couple of the things that I tweaked in this file were:

# advertised the Kafka server node ip
advertised.listeners=PLAINTEXT://192.168.0.116:9092
# allow topics to be deleted
delete.topic.enable=true

Node-Red

Node-Red is a web browser based visual programming tool that allows users to create logic by "wiring" node blocks together. Node-Red has a rich set of add-on components that includes things such as Raspberry Pi hardware, web dashboards, email, Twitter, SMS etc.

Node-Red has been pre-installed on Raspbian since 2015. For full installation instructions see:  https://nodered.org/#get-started

To add a Node-Red component select the "Palette Manager", and in the Install tab search for kafka. I found the node-red-contrib-kafka-manager component to be reliable (but there are others to try).

For my test example I wanted to create a dashboard input that could be adjusted, then read back the data from the Kafka server and show the result in a gauge.

This logic uses:

  • Kafka Consumer Group – to read a topic(s) from a Kafka server
  • Dashboard Gauge – to show the value
  • Dashboard Slider – allows a user to select a numeric number
  • Kafka Producer – sends a topic message to the Kafka server

Double-click on the Kafka nodes and in the 'edit configuration' dialog create and define a Kafka broker (or server). Also add the topic that you wish to read from or write to.


Double-click on the gauge and slider nodes and define a Dashboard group. Also adjust the labels, range and sizing to meet your requirements.


After the logic is complete hit the Deploy button to run the logic. The web dashboard is available at: http://your_node_red_ip:1880/ui.


Final Comment

I found Node-Red and Kafka to be easy to use in a simple standalone environment. However when I tried to connect to a Cloud based Kafka service (https://www.cloudkarafka.com/) I quickly realized that there is a security component that needs to be defined in Node-Red. Depending on the cloud service that is used some serious testing will probably be required.