Embedding AI inference in the Spatial Networking Cloud

rt-ai is a system for graphically composing edge AI stream processing networks, either distributed across multiple processing nodes or all running on a single node. In the latter case, shared memory can be used for transfers between the functional blocks, making it almost as efficient as monolithic code. So when it comes to embedding AI inference in a Spatial Networking Cloud (SNC), rt-ai makes perfect sense. However, the underlying network styles are completely different – SNC uses a highly dynamic series of multicast and end-to-end virtual links whereas rt-ai uses static MQTT or shared memory links. Each style makes sense for its intended application, so a bridge is needed between the two worlds.
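To give a feel for the rt-ai side of this, here is a minimal sketch of what a static MQTT link carrying video frames might look like, written against paho-mqtt. The topic name and metadata layout are just assumptions for illustration, not the actual rt-ai wire format.

```python
# Minimal sketch of a static MQTT link carrying a video frame plus metadata.
# Topic name and metadata fields are illustrative assumptions only.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()                     # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)          # assumes a local broker
client.loop_start()

with open("frame.jpg", "rb") as f:         # any JPEG frame will do
    jpeg = f.read()

metadata = {
    "timestamp": time.time(),
    "width": 1280,
    "height": 720,
    "format": "jpeg",
}

# Pack a JSON header, a null separator and the raw JPEG bytes into one payload.
payload = json.dumps(metadata).encode() + b"\x00" + jpeg
client.publish("rtai/driveway/camera/video", payload, qos=0)

client.loop_stop()
client.disconnect()
```

Because the topic is fixed at design time, the downstream SPE just subscribes to the same topic and the link behaves like a static pipe, which is exactly the property that makes shared memory an easy drop-in replacement on a single node.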

Right now, bridging is done (for video streams) with the GetSNCVideo and PutSNCVideo stream processing elements (SPEs) that can be added to any video stream processing network (SPN). GetSNCVideo grabs video frames from the configured SNC stream source and then acts as an rt-ai source for the downstream SPEs. Once processing has been completed, the frames can be re-injected into SNC using the PutSNCVideo SPE. Similar bridges can be created for sensor data or any other type of data that needs to be passed through an rt-ai SPN.
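To make the bridging pattern concrete, here is a very rough sketch of what a GetSNCVideo-style element could do: subscribe to the configured SNC stream and forward each frame onto a static rt-ai link. The snc_client module and its calls are invented stand-ins for the real SNC interface, which isn't shown here.

```python
# Sketch of a GetSNCVideo-style bridge: pull frames from an SNC stream and
# re-publish them on a static rt-ai MQTT link for the downstream SPEs.
# The snc_client module and its API are invented for illustration only.
import paho.mqtt.client as mqtt
import snc_client                          # hypothetical SNC client library

OUT_TOPIC = "rtai/driveway/bridge/video"   # assumed rt-ai link topic

rtai_link = mqtt.Client()                  # paho-mqtt 1.x style constructor
rtai_link.connect("localhost", 1883)
rtai_link.loop_start()

def on_frame(frame_bytes, metadata):
    """Called for every frame received from the SNC stream source."""
    rtai_link.publish(OUT_TOPIC, frame_bytes)

# Subscribe to the configured SNC video stream and forward every frame.
snc = snc_client.connect()
snc.subscribe("driveway/camera/video", on_frame)
snc.run_forever()
```

PutSNCVideo is just the mirror image: it takes the output of the last rt-ai SPE in the chain and re-injects the processed frames back into SNC.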

Originally, rt-ai had its own SPEs for collecting sensor data, but this led to quite a bit of duplication between rt-ai and SNC. The embedding technique completely removes the need for this duplication, as rt-ai SPNs can hook into any SNC data stream, no matter what hardware generated it.

The screen capture above shows an example that I am using as part of the driveway detection system that I have been running for quite a long time now to detect vehicles or people moving around the driveway – this post describes the original system. The heart of this is an NCS 2 inference engine with some post processing code to generate email alerts when something has been detected. All of the SPEs in this case are running on the same Raspberry Pi 4, which is humming along nicely, running inference on a 1280 x 720, 10fps video stream. Now that this SPN has been embedded in SNC, it is possible to save all of the annotated video using standard SNC storage if required, or else further process the streams and add to the metadata with anything else that connects to SNC.
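The alerting side of the post processing is simple to picture: look at the detections coming out of the inference SPE, decide whether anything interesting is present and, if so, send an email. A minimal sketch of that logic is below; the detection dictionary format, class names, thresholds and addresses are all assumptions, and the real SPE also annotates and forwards the video rather than just alerting.

```python
# Sketch of the alerting side of a post processing SPE: filter detections
# produced by the inference SPE and email an alert when something interesting
# appears. Detection format, thresholds and addresses are assumptions.
import smtplib
import time
from email.message import EmailMessage

ALERT_CLASSES = {"person", "car", "truck"}
COOLDOWN_SECS = 300                # at most one alert every five minutes
_last_alert = 0.0

def handle_detections(detections):
    """detections: list of dicts like {"label": "car", "confidence": 0.87}."""
    global _last_alert
    hits = [d for d in detections
            if d["label"] in ALERT_CLASSES and d["confidence"] > 0.6]
    if hits and time.time() - _last_alert > COOLDOWN_SECS:
        _last_alert = time.time()
        send_alert(hits)

def send_alert(hits):
    msg = EmailMessage()
    msg["Subject"] = "Driveway alert: " + ", ".join(d["label"] for d in hits)
    msg["From"] = "driveway@example.com"    # assumed addresses
    msg["To"] = "me@example.com"
    msg.set_content("Detected: " + ", ".join(
        f"{d['label']} ({d['confidence']:.2f})" for d in hits))
    with smtplib.SMTP("localhost") as smtp: # assumes a local mail relay
        smtp.send_message(msg)
```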

rt-ai SPNs can be used to create synth modules (basically SPN macros) that can be replicated any number of times and individually configured to process different streams. Alternatively, a single SPN can process data from multiple SNC video streams using an SNC fan-out SPE, similar to this one.
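For the fan-out case, the important thing is that frames from different SNC streams arrive tagged with their source, so one SPN can keep the results separate. A rough sketch of the pattern, again using the invented snc_client API and made-up stream names, might be:

```python
# Sketch of an SNC fan-out feeding a single SPN: subscribe to several SNC
# video streams and pass each frame, tagged with its source, into the same
# processing path. snc_client and the stream names are invented stand-ins.
import snc_client                          # hypothetical SNC client library

STREAMS = ["driveway/camera/video", "frontdoor/camera/video"]

def process(source, frame_bytes):
    """Shared processing path; 'source' identifies which stream a frame
    came from so results and alerts can be kept separate."""
    print(f"{source}: {len(frame_bytes)} bytes")

snc = snc_client.connect()
for stream in STREAMS:
    # Bind the stream name into each callback so the source tag survives.
    snc.subscribe(stream, lambda frame, meta, s=stream: process(s, frame))
snc.run_forever()
```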

So what does this do for rt-ispace? The whole idea of rt-ispace is that ubiquitous sensing and other real-time data streams are collected in SNC, AI inference distills the raw streams into meaningful data, and the results are then fed to SHAPE for integration into real-world augmentations. Embedding rt-ai SPNs in SNC provides the AI data distillation in a highly efficient and reusable way.