Embedding AI inference in the Spatial Networking Cloud

rt-ai is a system for graphically composing edge AI stream processing networks, either distributed across multiple processing nodes or running entirely on a single node. In the latter case, shared memory can be used for transfers between the functional blocks, making it almost as efficient as monolithic code. So when it comes to embedding AI inference in a Spatial Networking Cloud (SNC), rt-ai makes perfect sense. However, the underlying network styles are completely different – SNC uses a highly dynamic series of multicast and end-to-end virtual links whereas rt-ai uses static MQTT or shared memory links. Each style makes sense for its intended application, so it is necessary to create a bridge between the two worlds.

Right now, bridging is done (for video streams) with the GetSNCVideo and PutSNCVideo stream processing elements (SPEs), which can be added to any video stream processing network (SPN). GetSNCVideo grabs SNC video frames from the configured stream source and then acts as an rt-ai source for the downstream SPEs. Once processing has been completed, the frames can be re-injected into SNC using the PutSNCVideo SPE. Similar bridges can be built for sensor data or any other type of data that needs to be passed through an rt-ai SPN.

Originally, rt-ai had its own SPEs for collecting sensor data but this led to quite a bit of duplication between rt-ai and SNC. The embedding technique completely removes the need for this duplication as rt-ai SPNs can hook into any SNC data stream, no matter what hardware generated it.

The screen capture above shows an example that I am using as part of the driveway detection system that has been running for quite a long time now to detect vehicles or people moving around the driveway – this post describes the original system. The heart of this is an NCS 2 inference engine with some post-processing code to generate email alerts when something has been detected. All of the SPEs in this case are running on the same Raspberry Pi 4, which is humming along nicely, running inference on a 1280 x 720, 10fps video stream. Now that this SPN has been embedded in SNC, it is possible to save all of the annotated video using standard SNC storage if required, or else further process the stream and add to the metadata with anything that connects to SNC.

rt-ai SPNs can be used to create synth modules (basically SPN macros) that can be replicated any number of times and individually configured to process different streams. Alternatively, a single SPN can process data from multiple SNC video streams using an SNC fan out SPE, similar to this one.

So what does this do for rt-ispace? The whole idea of rt-ispace is that ubiquitous sensing and other real-time data streams are collected in SNC, AI inference distills the raw streams into meaningful data and then the results are fed to SHAPE for integration into real world augmentations. Embedding rt-ai SPNs in SNC provides the AI data distillation in a highly efficient and reusable way.

Using shared memory for rt-ai inter-SPE transfers

The screen capture above couldn’t have been obtained previously as it is passing uncompressed (RGB888) video between rt-ai SPEs on the same node (a Jetson Nano in this case). The CVideoView window shows the output of a simple network that uses the CSSDJetson SPE to classify objects; it also computes the frames per second and the latency of received frames. The source of the frames is a Logitech C920 webcam running at 1280 x 720, 30fps. The latency is around 128 ms at around 15fps.
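
For reference, this is roughly how a viewer can derive those numbers, assuming each frame carries its capture timestamp in its metadata. The function and field names below are illustrative, not the actual CVideoView code.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

// Called for every received frame; captureTimestampMs is assumed to be the
// frame's capture time in milliseconds since the epoch, carried in its metadata
void onFrameReceived(int64_t captureTimestampMs)
{
    using namespace std::chrono;
    const int64_t nowMs =
        duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();

    static int frames = 0;
    static int64_t windowStartMs = nowMs;

    const int64_t latencyMs = nowMs - captureTimestampMs;   // end-to-end pipeline latency
    frames++;

    if (nowMs - windowStartMs >= 1000) {                    // report roughly once per second
        std::printf("%.1f fps, latency %lld ms\n",
                    frames * 1000.0 / double(nowMs - windowStartMs),
                    static_cast<long long>(latencyMs));
        frames = 0;
        windowStartMs = nowMs;
    }
}
```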

This screen capture shows what happens when shared memory isn’t used. Actually, the latency here is misleading: it seems to be the link from the CUVCCam SPE to the MQTT server that is the bottleneck when running uncompressed video. The latency climbs monotonically until no memory is left, because there is no throttling on that interface – normally it isn’t a problem.

There doesn’t seem to be much benefit when passing smaller messages between SPEs.

This screen capture above shows shared memory being used when transferring JPEG frames. The one below is with shared memory support turned off.

This just shows that bouncing off the MQTT server within the same node is pretty efficient, at least when compared to the latency of the inference.

Being able to pass large messages around efficiently, even if only point to point within the same node, is quite a step forward by itself. For example, it makes it practical to create networks that pass RGBD frames around.

Shared memory support in rt-ai2 uses the Qt QSharedMemory and QSystemSemaphore wrappers to make things simple. When a design is generated, rtaiDesigner determines if shared memory has been enabled for the network, if the publisher and subscriber are on the same node and if the connection is point to point (i.e. exactly one subscriber). If so, the publisher and subscriber SPEs are told to use shared memory instead of MQTT for that particular connection. The SPE configuration file for the publisher SPE also includes the shared memory slot size to use and how big the pending transmission queue should be. The system is set up at the moment to always use three shared memory slots forming a rotating buffer. The shared memory slots are created by the publisher and attached by the subscriber.
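
To make this more concrete, here is a minimal sketch of what the publisher side of such a link could look like using the Qt wrappers. The keys, slot size and length-prefixed framing are purely illustrative and not the actual rt-ai2 implementation; in the real system the slot size and queue length come from the SPE configuration file.

```cpp
#include <QByteArray>
#include <QSharedMemory>
#include <QSystemSemaphore>
#include <cstring>

static const int kSlots = 3;                    // rotating buffer of three slots
static const int kSlotSize = 4 * 1024 * 1024;   // illustrative slot size

struct ShmPublisher {
    // All keys are illustrative; they just have to match the subscriber side
    QSharedMemory shm{"rtai-link-0"};
    QSystemSemaphore freeSlots{"rtai-link-0-free", kSlots, QSystemSemaphore::Create};
    QSystemSemaphore filledSlots{"rtai-link-0-filled", 0, QSystemSemaphore::Create};
    int nextSlot = 0;

    // The publisher creates the shared memory region holding all of the slots
    bool init() { return shm.create(kSlots * kSlotSize); }

    void publish(const QByteArray& msg) {
        // The real SPE keeps a pending transmission queue; this sketch simply
        // blocks until a slot has been freed by the subscriber
        freeSlots.acquire();
        char *slot = static_cast<char *>(shm.data()) + nextSlot * kSlotSize;
        const int maxPayload = kSlotSize - int(sizeof(int));
        const int n = msg.size() > maxPayload ? maxPayload : int(msg.size());
        std::memcpy(slot, &n, sizeof(n));               // simple length-prefixed framing
        std::memcpy(slot + sizeof(n), msg.constData(), n);
        nextSlot = (nextSlot + 1) % kSlots;             // rotate through the three slots
        filledSlots.release();                          // unblock the subscriber's receive thread
    }
};
```

The two counting semaphores give the classic bounded-buffer behavior: the publisher can only run ahead of the subscriber by the number of slots in the rotating buffer.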

To minimize latency, every time the publisher places a new message in the next shared memory slot, it releases a QSystemSemaphore to unblock a thread in the subscriber, which can then extract the message, free the shared memory slot and process the received message.
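
A matching sketch of the subscriber side looks like this, again with hypothetical keys and constants that have to match the publisher; the real SPE code also handles errors, timeouts and shutdown.

```cpp
#include <QByteArray>
#include <QSharedMemory>
#include <QSystemSemaphore>
#include <cstring>

static const int kSlots = 3;                    // must match the publisher
static const int kSlotSize = 4 * 1024 * 1024;   // must match the publisher

struct ShmSubscriber {
    QSharedMemory shm{"rtai-link-0"};           // same illustrative keys as the publisher
    QSystemSemaphore freeSlots{"rtai-link-0-free", kSlots, QSystemSemaphore::Open};
    QSystemSemaphore filledSlots{"rtai-link-0-filled", 0, QSystemSemaphore::Open};
    int nextSlot = 0;

    // The subscriber attaches to the region that the publisher created
    bool init() { return shm.attach(QSharedMemory::ReadOnly); }

    // Typically run in its own thread: block until a slot is filled, copy the
    // message out, free the slot and hand the message on for processing
    QByteArray receive() {
        filledSlots.acquire();                  // released by the publisher for each new message
        const char *slot = static_cast<const char *>(shm.constData()) + nextSlot * kSlotSize;
        int n = 0;
        std::memcpy(&n, slot, sizeof(n));
        QByteArray msg(slot + sizeof(n), n);    // copy the payload out of the slot
        nextSlot = (nextSlot + 1) % kSlots;
        freeSlots.release();                    // the slot can now be reused by the publisher
        return msg;
    }
};
```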

This implementation of shared memory seems to work very well and is highly reliable. In principle, it could be extended to support multiple subscribers by replicating the shared memory slot structure for each subscriber.

Jetson Nano SSD-Mobilenet-v2 SPE for rt-ai

Following on from the earlier work with the Jetson Nano, the SSD-Mobilenet-v2 model is now running as an rt-ai Stream Processing Element (SPE) for Jetson and so is fully integrated with the rt-ai system. Custom models created using transfer learning can also be used – it’s just a case of setting the model name in the SPE’s configuration and placing the required model files on the rt-ai data server. Since models are automatically downloaded at runtime if necessary, it’s pretty trivial to change the model being used on an existing Stream Processing Network (SPN).

The screen capture above shows the rt-ai design that generated the implementation. Here I am using the UVCCam SPE so that the video is sourced from a webcam, but any of the other rt-ai video sources (such as RTSPCam) could be used, simply by replacing the camera SPE in the design using the graphical editor – in fact, this design originally used RTSPCam.

Using 1280 x 720 video frames, the SSDJetson SPE processes around 17fps. This is not bad but less than the 21fps achieved by the monolithic example code. The problem is that, in order to support one-to-many and many-to-one connections in graphically designed, heterogeneous, multi-node networks, rt-ai currently uses MQTT brokers to move data and to provide multicast where necessary. Even when the broker and the SPEs are running on the same node, this is obviously less efficient than pointer passing within monolithic code.

This “inefficiency of generality” isn’t really visible on powerful x86 machines but has an impact on devices like the Jetson Nano and Raspberry Pi. The solution to this is to recognize such local links and side-step the MQTT broker interface using shared memory. This optimization will be done automatically in rtaiDesigner when it generates the configurations for each SPE in an SPN, flagging appropriate nodes as sources or sinks of shared memory links when both source and sink SPEs reside on the same node.
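
The check itself is simple. Here is a sketch of the kind of per-connection test rtaiDesigner could apply when generating the configurations – the types are illustrative stand-ins, not the real rtaiDesigner classes.

```cpp
#include <string>
#include <vector>

// Illustrative stand-ins for the designer's internal representation
struct Spe        { std::string nodeName; };
struct Connection { Spe publisher; std::vector<Spe> subscribers; };

// Use shared memory only for point-to-point links where both ends live on the same node
bool useSharedMemory(const Connection& c, bool sharedMemoryEnabledForNetwork)
{
    return sharedMemoryEnabledForNetwork
        && c.subscribers.size() == 1
        && c.subscribers[0].nodeName == c.publisher.nodeName;
}
```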

Shared nothing – sometimes being selfish is the way to go

Lock-free code is all the rage these days but it’s not just a fad. Having recently quantified the performance impact of a single lock on shared memory, it’s easy to understand why eliminating locks (and indeed any other kind of kernel interaction) is the key to high performance.

A logical consequence of this is that threads must share no state (memory, disk or anything else) with any other thread unless it can be done in a safe manner without requiring synchronization. While there are some patterns that can be used for this, in general the solution is the shared nothing (or sharded) architecture where each thread works completely independently.
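
As a minimal, self-contained illustration of the idea (not taken from any particular system), work here is partitioned by hashing a key so that each thread only ever touches its own private state; no locks are needed because nothing is shared.

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

struct Worker {
    std::unordered_map<std::string, long> counts;   // private state, never shared
    void handle(const std::string& key) { counts[key]++; }
};

int main()
{
    const unsigned numWorkers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<Worker> workers(numWorkers);

    // In a real system each worker would poll its own input queue; here each thread
    // just filters the full key set down to its own shard by hashing the key
    const std::vector<std::string> keys = {"alpha", "beta", "gamma", "delta"};
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < numWorkers; i++) {
        threads.emplace_back([&, i] {
            for (const auto& k : keys)
                if (std::hash<std::string>{}(k) % numWorkers == i)
                    workers[i].handle(k);            // only thread i ever touches workers[i]
        });
    }
    for (auto& t : threads) t.join();
    return 0;
}
```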

Coupled with core-locked threads, shared nothing architectures are capable of extracting the last drop of performance out of the underlying hardware. Suddenly that multi-core CPU looks like a very loosely coupled bunch of bare-metal processors.

One core == one thread

Back in the dark ages, when CPUs only had one, two or maybe four cores, the idea of dedicating an entire core to a single thread was ridiculous. Then it became apparent that the only way to scale CPU performance was to integrate more cores onto a single CPU chip. People started wondering how to use all these cores in a meaningful way without getting bogged down in delays from cache coherency, locks and other synchronization issues.

Turns out the answer may well be to hard-allocate threads to cores – just one thread locked into each core. This means that almost all of an application can be free of kernel interaction. This is how DPDK gets its speed, for example: it uses user-space polling to minimize latency and maximize performance.
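
On Linux, locking a thread onto a core is essentially a one-liner with the glibc affinity API; something like this (the helper name is mine):

```cpp
#include <pthread.h>
#include <sched.h>

// Lock the calling thread onto a single core (Linux/glibc); after this the kernel
// will never migrate the thread, which keeps caches warm and removes scheduler noise
int pinToCore(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}
```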

I have been running some tests using one thread per core with DPDK and lock-free shared memory links. So far, on my old i7-2700K dev machine (with another machine generating test data over a 40Gbps link), I have been seeing over 16Gbps of throughput through DPDK into the shared memory link using a single core without even trying to optimize the code. It’s kind of weird seeing certain cores holding at 100% continuously, even if they are doing nothing, but this is the new reality.
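
The lock-free shared memory links in that test are essentially single-producer/single-consumer ring buffers. A minimal sketch of the idea (not the actual link code) looks like this – each index is written by exactly one thread, so plain atomics are enough and no kernel calls are involved:

```cpp
#include <atomic>
#include <cstddef>

// Single-producer/single-consumer ring; N must be a power of two.
// head_ is written only by the producer, tail_ only by the consumer.
template <typename T, size_t N>
class SpscRing {
public:
    bool push(const T& item) {
        const size_t head = head_.load(std::memory_order_relaxed);
        const size_t next = (head + 1) & (N - 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;                        // full: caller drops or retries
        buf_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    bool pop(T& item) {
        const size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                        // empty
        item = buf_[tail];
        tail_.store((tail + 1) & (N - 1), std::memory_order_release);
        return true;
    }

private:
    T buf_[N];
    std::atomic<size_t> head_{0};
    std::atomic<size_t> tail_{0};
};
```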

Jetson Nano and rt-ai

The Jetson Nano is an obvious platform for rt-ai to support, to go with the existing Intel NCS2 and Coral edge platforms. One nice plus is that the Jetson Nano comes basically ready to go, all set up for inference.

The screen capture above shows the Nano running the detectnet-camera example code using a webcam as the source generating 1280 x 720 frames and SSD-Mobilenet-v2 as the model. Performance was not bad at 21fps running at 10W, 16fps running at 5W. The heatsink did get very hot in a pretty short space of time, however!

Installing the rt-ai runtime was no problem at all and it was easy to utilize the H.264 accelerated pipeline in rt-ai’s RTSP camera capture module. The screen capture above shows this running along with a viewer, demonstrating basic rt-ai functionality.
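
For anyone wanting to reproduce the accelerated capture path outside of rt-ai, something along these lines works on the Nano, assuming OpenCV has been built with GStreamer support and the JetPack decoder elements (nvv4l2decoder, nvvidconv) are available; the actual rt-ai RTSP capture module is structured differently and the camera URL is a placeholder.

```cpp
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // rtsp://camera/stream is a placeholder URL; nvv4l2decoder/nvvidconv are the
    // JetPack hardware decode and conversion elements
    const std::string pipeline =
        "rtspsrc location=rtsp://camera/stream latency=0 ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    cv::Mat frame;
    while (cap.read(frame)) {
        // hand the decoded frame to the rest of the pipeline here
    }
    return 0;
}
```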

Next up is to roll the detection code into an rt-ai Stream Processing Element (SPE). This will generate identical metadata to the existing SSD detectors, allowing full compatibility between server GPU, Jetson, NCS 2 and Coral SSD detectors.

rt-ai will enable Intelligent Spaces

The idea of creating spaces that understand the needs of the people moving within them – Intelligent Spaces – has been a long term personal goal. Our ability today to create sensor data (video, audio, environmental etc) is incredible. Our ability to make practical use of this enormous body of data is minimal. The question is: how can ubiquitous sensing in a space be harnessed to make the space more functional for people within it?

rt-ai could be the basis of an answer to this question. It is designed to receive large volumes of multi-sensor data, extract meaningful information and then take control actions as necessary. This closes the local loop without requiring external cloud server interaction. This is important because creating a space with ubiquitous sensing raises all kinds of privacy issues. Because rt-ai keeps all raw data (such as video and audio) within the space, privacy is much less of a concern.

I believe that a key to making a space intelligent is to harness artificial intelligence concepts such as online learning of event sequences and anomaly detection. It is not practical for anyone to sit down and program a system to correctly recognize normal behavior in a space and what actions might be helpful as a result. Instead, the system needs to learn what is normal and develop strategies that might be helpful. Reinforcement via user feedback can be used to refine responses.

A trivial example would be someone moving through a dark space at night. It might be helpful to provide light at a suitable intensity to safely help a person navigate the space. The system could deduce this by having experienced other people moving through the space, turning lights on and off as they go. Meanwhile, face recognition could be employed to see if the person is known to the space and, if not, an assessment could be made as to whether an alert needs to be generated. Finally, a video record could be put together of the person moving through the space, using assembled clips from all relevant cameras, and stored (on-site) for a time in case it is useful.

Well, that’s a trivial example to describe but not at all trivial to implement. However, my goal is to see if AI techniques can be used to approach this level of functionality. In practical terms, this means developing a series of rt-ai modules using TensorFlow or similar to perform feature extraction, anomaly detection and sequence prediction that are then glued together with sensor and control modules to form a complete system requiring minimal supervised training to perform useful functions.

rt-ai: real time stream processing and inference at the edge enables intelligent IoT

Real-time inference at the edge enables decision making in the local loop with low latency and no dependence on the cloud. rt-ai includes a flexible and intuitive infrastructure for joining together stream processing pipelines in distributed, restricted processing power environments. It is very easy for anyone to add new pipeline elements that fully integrate with rt-ai pipelines.

Edge processing and control is essential if there is to be scalable use of intelligent IoT. I believe that dumb IoT, where everything has to be sent to a cloud service for processing, is a broken and unscalable model. The bandwidth requirements alone of sending all the data back to a central point will rapidly become unworkable, and latency guarantees are difficult to impossible in this model. Two advantages of rt-ai – keeping raw data at the edge where it belongs while upstreaming only salient information to the cloud, and minimizing the CPU cycles required in power-constrained environments – are the keys to scalable intelligent IoT.

The ghost in the AI machine

The driveway monitoring system has been running full time for months now and it’s great to know if a vehicle or a person is moving on the driveway up to the house. The only bad thing is that it will give occasional false detections like the one above. This only happens at night and I guess there’s enough texture of the right kind to trigger the “person” response with very high confidence. Those white streaks might be rain or bugs being illuminated by the IR light. It also only seems to happen when the trash can is out for collection – it is in the frame about half way out from the center to the right.

It is well known that the image recognition capabilities of convolutional networks aren’t always exactly what they seem and this is a good example of the problem. Clearly, in this case, the MobileNet feature detectors have picked up things in small areas with a particular spatial relationship and added them together to come to the completely wrong conclusion. My problem is how to deal with these false detections. A couple of ideas come to mind. One is to run a different model in parallel and only generate an alert if both detect the same object at (roughly) the same place in the frame, as sketched below. The other is to use semantic segmentation instead of a second CNN, detecting the object in a somewhat different way.
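
As a sketch of the first idea, an alert would only be generated when detections from the two models agree on the class and overlap sufficiently. The types and the threshold below are illustrative; the real SPEs emit their own metadata format.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Illustrative detection types, not the actual SPE metadata
struct Box       { float x0, y0, x1, y1; };
struct Detection { std::string label; float confidence; Box box; };

float iou(const Box& a, const Box& b)
{
    const float ix = std::max(0.0f, std::min(a.x1, b.x1) - std::max(a.x0, b.x0));
    const float iy = std::max(0.0f, std::min(a.y1, b.y1) - std::max(a.y0, b.y0));
    const float inter = ix * iy;
    const float uni = (a.x1 - a.x0) * (a.y1 - a.y0) + (b.x1 - b.x0) * (b.y1 - b.y0) - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

// Alert only when some detection from each model matches in class and location
bool bothModelsAgree(const std::vector<Detection>& modelA,
                     const std::vector<Detection>& modelB,
                     float iouThreshold = 0.5f)
{
    for (const auto& a : modelA)
        for (const auto& b : modelB)
            if (a.label == b.label && iou(a.box, b.box) >= iouThreshold)
                return true;
    return false;
}
```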

Whatever the cause, it is a good practical demonstration of the fact that these simple neural networks don’t in any way understand what they are seeing. However, they can certainly be used as the basis of a more sophisticated system which adds higher-level understanding to raw detections.

Object detection on the Raspberry Pi 4 with the Neural Compute Stick 2


Following on from the Coral USB experiment, the next step was to try it out with the NCS 2. Installation of OpenVINO on Raspbian Buster was straightforward. The rt-ai design was basically the same as for the Coral USB experiment but with the CoralSSD SPE replaced with the OpenVINO equivalent called CSSDPi. Both SPEs run ssd_mobilenet_v2_coco object detection.

Performance was pretty good – 17fps with 1280 x 720 frames. This is a little better than the Coral USB accelerator attained but then again the OpenVINO SPE is a C++ SPE while the Coral USB SPE is a Python SPE and image preparation and post processing takes its toll on performance. One day I am really going to use the C++ API to produce a new Coral USB SPE so that the two are on a level playing field. The raw inference time on the Coral USB accelerator is about 40mS or so meaning that there is plenty of opportunity for higher throughputs.