rt-ai is a new concept in edge processing that makes it easy to build AI- and ML-enhanced stream processing networks. rt-ai leverages hardware acceleration within embedded devices to filter raw data into highly salient messages for higher-level processing. The goal is to simplify testing and evaluation of models by allowing reuse of supporting infrastructure, minimizing custom code.
Stream Processing Networks
rt-ai Stream Processing Networks (SPNs) are constructed from Stream Processing Elements or SPEs. An SPE typically falls into one of three classes:
- Source SPE. This is a source of data at the start of an SPN. Examples include video capture, audio capture and sensor data capture.
- Inline SPE. This is an SPE that takes input, performs some sort of processing and then outputs a revised version of the input. An example might be semantic labeling, where the input is a video frame and the output is the same frame annotated with semantic labels.
- Sink SPE. This is a point at which data leaves the SPN. Data could be saved to a file, passed off to an Apache NiFi instance, displayed on a GUI and so on.
SPEs can have zero or more input pads (subscribers) and zero or more output pads (publishers). SPEs are joined into pipelines by connecting publishers to subscribers. Almost any publish/subscribe system could be used to implement the communications, but rt-ai currently uses MQTT. Since publish/subscribe supports one-to-many and many-to-one connections, it is possible to build arbitrary Stream Processing Network architectures with SPEs.
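The pad wiring described above can be sketched in a few lines. rt-ai itself uses MQTT for the transport, but the toy in-process broker below stands in for it so the example is self-contained; the topic names, SPE names and message format are illustrative assumptions, not rt-ai's actual conventions.

```python
# Minimal publish/subscribe sketch of a three-SPE pipeline:
# source -> inline (labelling) -> sink. A toy in-process broker
# stands in for MQTT; all names here are hypothetical.
from collections import defaultdict

class Broker:
    """Toy stand-in for an MQTT broker: maps a topic to its subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
sink_output = []

# Source SPE: one output pad publishing raw frames.
def source_spe(frame):
    broker.publish("camera/raw", frame)

# Inline SPE: input pad on camera/raw, output pad on camera/labelled.
def inline_spe(frame):
    labelled = {**frame, "labels": ["person"]}  # stand-in for semantic labelling
    broker.publish("camera/labelled", labelled)

# Sink SPE: input pad on camera/labelled; here it just collects messages.
def sink_spe(frame):
    sink_output.append(frame)

broker.subscribe("camera/raw", inline_spe)
broker.subscribe("camera/labelled", sink_spe)

source_spe({"frame_id": 1})
```

Because a topic can have any number of subscribers and a subscriber can listen on any number of topics, the same wiring supports the one-to-many and many-to-one fan-outs mentioned above.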
An important concept in rt-ai is that SPNs can be distributed across multiple machines. This is particularly important in a heterogeneous environment where, for example, only some machines have powerful GPUs. Also, some data sources might only be connected to specific machines in the system, requiring that the appropriate SPE be located there.
Stream Processing Network Management
As anyone who has tried building processing networks knows, things become unwieldy very fast as the network grows in complexity. Just keeping track of MQTT topic names is a pain. rt-ai solves this problem with a tool called rtaiDesigner. This is a GUI-based tool that allows the manager of an SPN to build the SPN in a graphical editor, configure all the SPEs as required and then deploy SPE code and configuration to the target machines. rtaiDesigner takes care of all the boring details such as topic names. Once the SPN is running, rtaiDesigner can also be used to inspect data at various points in the network to help visualize its operation.
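To make the topic-name bookkeeping concrete, one plausible scheme is to derive each topic deterministically from the SPE's position in the designed graph. This is a hypothetical illustration, not rt-ai's actual naming convention, which rtaiDesigner manages internally:

```python
# Hypothetical topic-naming scheme; rtaiDesigner's real convention
# may differ. The point is that names are derived, never hand-written.
def topic_name(spn, spe, pad):
    """Derive a unique MQTT-style topic from an SPE's graph position."""
    return f"{spn}/{spe}/{pad}"

# Each link in the designed graph wires a subscriber to the publisher's
# generated topic, with no manual bookkeeping by the SPN manager.
link = topic_name("demo_spn", "video_capture", "out0")
```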
Building Stream Processing Clusters (SPCs)
To centralize management of the machines available for SPN deployment, rt-ai creates an out-of-band management network that provides a management communications path between all machines and the rt-ai management tools.
rt-ai can make putting together SPCs quite easy. Each machine needs the various support libraries required by the SPE library installed, along with the management app, rtaiSPEManager, which must run on every machine. This app supports remote deployment from rtaiDesigner and also manages the execution state of the SPEs hosted on that machine. To simplify this process, a Docker container can be used to install the required support and rtaiSPEManager on every machine. rtaiSPEManager instances and rtaiDesigner can automatically construct the management overlay network if they are on the same LAN. SPCs spanning multiple LANs require manual configuration of secure tunnels on an appropriate gateway machine.
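A sketch of how such same-LAN autodiscovery might work is shown below: a manager answers a UDP probe with its MAC-derived identity. The port, message format and use of loopback for the demonstration are assumptions for illustration, not rt-ai's actual protocol (a real deployment would broadcast on the LAN).

```python
# Sketch of LAN autodiscovery over UDP, as rtaiSPEManager and
# rtaiDesigner might perform it. Probe string, JSON reply and
# loopback addressing are illustrative assumptions.
import json
import socket
import threading
import uuid

def manager_responder(sock):
    """rtaiSPEManager side: answer one discovery probe with our MAC id."""
    data, addr = sock.recvfrom(1024)
    if data == b"RTAI_DISCOVER":
        mac = f"{uuid.getnode():012x}"  # stdlib stand-in for the NIC's MAC
        sock.sendto(json.dumps({"mac": mac}).encode(), addr)

def discover(target):
    """rtaiDesigner side: probe an address and collect the reply."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    probe.settimeout(2.0)
    probe.sendto(b"RTAI_DISCOVER", target)
    data, _ = probe.recvfrom(1024)
    probe.close()
    return json.loads(data)

# Demonstration on loopback; a real manager would listen on a LAN port.
responder = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
responder.bind(("127.0.0.1", 0))
threading.Thread(target=manager_responder, args=(responder,), daemon=True).start()
reply = discover(responder.getsockname())
```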
Small embedded devices can be even simpler to bring into an SPC. For example, if Raspberry Pis are being used, a memory card can be mass-produced with the required support and the rtaiSPEManager app and then fitted to multiple devices. Because the management overlay uses MAC addresses rather than IP addresses to identify machines, and can also autodiscover machines on the same LAN, identical cards can be used with no further configuration (this has yet to be tested, however!).
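The reason identical cards need no per-device configuration is that each machine's identity comes from its network hardware, not from anything on the card. A minimal sketch of deriving such an identity, using the stdlib's `uuid.getnode()` as a stand-in for reading the interface's hardware address:

```python
# Derive a stable machine identity from the MAC address, so identical
# SD-card images can boot with no per-device configuration.
import uuid

def machine_id():
    """Format the 48-bit MAC as the usual colon-separated string."""
    mac = uuid.getnode()
    return ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
```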
SPE Development and Testing
SPEs can be pretty simple to implement. Python and C++ SPEs are currently supported. New SPEs can be developed by incorporating the appropriate SDK and then adding custom code to handle messages from input pads and generate messages for output pads.
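The SDK's actual API isn't shown in this document, so the following is a self-contained sketch of the handler pattern just described, with a stubbed base class and hypothetical method names (`on_input`, `publish`) standing in for the real rt-ai SDK:

```python
# Sketch of the SPE development pattern: subclass an SDK base class and
# override the input-pad handler. Class and method names are hypothetical
# stand-ins, not the real rt-ai SDK API.

class SPEBase:
    """Stub for the SDK base class: records published messages."""
    def __init__(self):
        self.published = []

    def publish(self, pad, message):
        self.published.append((pad, message))

    def on_input(self, pad, message):
        raise NotImplementedError

class LabelSPE(SPEBase):
    """Inline SPE: annotate each incoming frame and republish it."""
    def on_input(self, pad, message):
        annotated = {**message, "labels": ["person"]}  # stand-in for a model
        self.publish("out0", annotated)

spe = LabelSPE()
spe.on_input("in0", {"frame_id": 7})
```

The custom code is confined to `on_input`; everything else (transport, topics, deployment) would be handled by the SDK and rtaiDesigner.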
rtaiDesigner has a test mode in which all SPEs are run on the local machine (of course, this only works if the machine has the required resources). Alternatively, all other SPEs in the SPN can be run as normal with just the SPE under development running on the local machine. This allows traditional debuggers and development tools to be used while working with real-time data.