Completed ZeroSensors all ready for long-term data collection

Finally, this is a ZeroSensor all ready to go into full-time service, capturing video, audio and environmental data. The goal is to use this data, along with that from other cameras around the space, as training data for machine learning systems.

One specific goal is to create an anomaly detector with minimal supervision. As much as possible, it will learn from experience. This is kind of tricky as it requires detecting sequences of unknown length, depending on the circumstances. I am intrigued by the ideas behind the Universal Translator but not sure how much could carry over to this application. This paper reviews some of the techniques usually applied, at least for video processing. The situation here is a little different as there are quite different types of features involved. My plan is to preprocess the video and audio to recognize salient features (using object detection or whatever) and then input these features, along with the environmental sensor data, as uniform time-slotted data sets to the anomaly detector. This doesn’t help with detecting the length of an interesting sequence – that’s the fun part of the project.
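As a rough illustration of the time-slotted idea (the slot length, object class list and function names below are just assumptions for the sketch, not the actual ZeroSensor code), each slot could be reduced to one fixed-length feature vector combining detected objects, an audio level and the environmental readings:

from collections import Counter

SLOT_SECONDS = 5.0                               # assumed slot length
OBJECT_CLASSES = ["person", "dog", "car"]        # assumed detector vocabulary

def make_slot_vector(detections, audio_rms, temperature, humidity, light):
    """Build one fixed-length feature vector for a single timeslot.

    detections -- list of object-class labels seen during the slot
    audio_rms  -- mean audio amplitude over the slot
    temperature, humidity, light -- environmental sensor readings
    """
    counts = Counter(detections)
    object_features = [counts.get(c, 0) for c in OBJECT_CLASSES]
    return object_features + [audio_rms, temperature, humidity, light]

# Example: a slot with two person detections and a quiet audio level.
print(make_slot_vector(["person", "person"], 0.02, 21.5, 40.0, 310.0))
# [2, 0, 0, 0.02, 21.5, 40.0, 310.0]

Whatever the final feature set looks like, the point is that every slot has the same shape, so the anomaly detector only ever sees a regular sequence of vectors.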

Adding audio support to the ZeroSensor

Something conspicuously missing from the original ZeroSensor concept was audio support. That’s now been remedied with the SPH0645 board from Adafruit. I was originally a bit put off by the somewhat extended software installation but, in the end, this is the best approach.

This is the new ZeroSynth synth module for the ZeroSensor. Basically it just consists of an audio capture stream processing element added to the existing video and sensor capture SPEs.

The capture above shows the simple test design with the ZeroSynth module, which once again uses the DeepLabV3 SPE.

rtaiView can be used to view and review the three streams captured from the ZeroSensor. The audio can also be played out of the rtaiView host’s speakers if desired.

The intention is not so much to process audio for content, like words, at this stage. It’s more that the presence or absence of sounds or transients, in conjunction with other sensed events, is likely to be useful. There seem to be two basic choices: either use just an average amplitude level during a specific timeslot, or actually try to recognize sound sources during a timeslot. Either of these then becomes a feature for input into an inference engine, along with other detected features from the video and sensor streams.
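To make the first choice concrete, here is a minimal sketch of reducing each timeslot to a single RMS amplitude feature; the 16 kHz sample rate, 16-bit samples and slot length are just assumptions for illustration, not taken from the ZeroSensor code:

import numpy as np

SAMPLE_RATE = 16000          # assumed capture rate for the I2S microphone
SLOT_SECONDS = 5.0           # assumed timeslot length

def rms_per_slot(samples):
    """Split a 1-D int16 sample array into slots and return one RMS value per slot."""
    slot_len = int(SAMPLE_RATE * SLOT_SECONDS)
    n_slots = len(samples) // slot_len
    slots = samples[:n_slots * slot_len].reshape(n_slots, slot_len)
    normalized = slots.astype(np.float64) / 32768.0
    return np.sqrt(np.mean(normalized ** 2, axis=1))

# Example with synthetic data: 20 seconds of low-level noise -> 4 slot features.
audio = (np.random.randn(SAMPLE_RATE * 20) * 500).astype(np.int16)
print(rms_per_slot(audio))

The second choice (recognizing sound sources) would produce a richer feature, but even a single amplitude value per slot is enough to flag transients against the other streams.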

Real time edge inference monitoring with rt-ai

rt-ai is progressing nicely and now supports multi-node operation (i.e. multiple networked servers participating in a processing network) along with real-time monitoring. The screen capture shows a simple processing network where the video feed from a camera is passed through a DeepLab-v3+ stream processing element (SPE) and then on to two separate media viewers. At the top of each SPE block in the Designer window is some text like Cam(Default). Here, Cam is the name given to the SPE while Default is the name of the node (server) on which the SPE is running. In this design there are two nodes, Default and rtai0.

The code underlying the common SPE API communicates with the Designer window and supplies the stats about bytes and messages in and out. Soon, this path will also allow SPE-specific real-time parameter tweaking from the Designer window.
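Purely as an illustration of the kind of record that path might carry (the field names below are my own placeholders, not the actual rt-ai message format), a per-SPE stats report could look something like this:

import json
import time

def make_stats_record(spe_name, node_name, bytes_in, bytes_out, msgs_in, msgs_out):
    """Bundle an SPE's traffic counters into a JSON record for the Designer."""
    return json.dumps({
        "spe": spe_name,              # e.g. "Cam"
        "node": node_name,            # e.g. "Default"
        "timestamp": time.time(),
        "bytesIn": bytes_in,
        "bytesOut": bytes_out,
        "messagesIn": msgs_in,
        "messagesOut": msgs_out,
    })

print(make_stats_record("Cam", "Default", 1250000, 1248000, 150, 150))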

To add a node to the system, it just needs to have all of the prerequisites installed and to run a special NodeManager SPE. The NodeManager also communicates with the Designer and supports SPE deployment and runtime control, activated when the user presses the Deploy design button. Moving an SPE between nodes is just a case of reassigning it, generating the design and then deploying it again.

The green outlines around each SPE indicate the state of the SPE and the node on which it is running. When it is all green, as in the first screen capture, this indicates that both SPE and node are running. For the second screen capture, I manually terminated the View2 SPE on rtai0. The inner part of the outline has now gone red. This indicates that the node is up but the SPE is down. If the outline is all red, it means that the node is down and not communicating with the Designer.
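The rule is simple enough to state as a tiny function (names and colours here are just illustrative): the outer part of the outline tracks the node and the inner part tracks the SPE.

def outline_colours(node_up, spe_up):
    """Return (outer, inner) outline colours for an SPE block in the Designer."""
    if not node_up:
        return ("red", "red")        # node down: no communication with the Designer
    if not spe_up:
        return ("green", "red")      # node up but SPE down
    return ("green", "green")        # both running

print(outline_colours(True, False))  # ('green', 'red')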

It’s interesting to note that DeepLab-v3+ is processing around 5 frames per second using a GTX-1080 GPU. The input rate from the camera is 30 frames per second. The processor drops frames while it is still processing an earlier frame, ensuring that queues do not build up and latency is kept to a minimum.
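The frame-dropping policy can be sketched like this (a simplified illustration with assumed names and timings, not the actual rt-ai SPE code): new frames are only accepted when the worker is idle, so anything arriving mid-inference is discarded rather than queued.

import threading
import time

class DropWhileBusyProcessor:
    """Accept a frame only when idle; frames arriving during processing are dropped."""

    def __init__(self, process_fn):
        self.process_fn = process_fn
        self.busy = threading.Event()

    def submit(self, frame):
        if self.busy.is_set():
            return False                      # still working on an earlier frame
        self.busy.set()
        threading.Thread(target=self._run, args=(frame,), daemon=True).start()
        return True

    def _run(self, frame):
        try:
            self.process_fn(frame)
        finally:
            self.busy.clear()

# Example: a 200 ms "inference" step fed at 30 fps keeps roughly 5-6 frames.
proc = DropWhileBusyProcessor(lambda frame: time.sleep(0.2))
kept = 0
for frame in range(30):                      # one second of 30 fps input
    kept += proc.submit(frame)
    time.sleep(1 / 30)
print("kept", kept, "of 30 frames")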