Displaying a system’s DMI information

Linux has a very handy utility, dmidecode, that can be used to display useful information about the hardware configuration of a system. The more general-purpose command hardinfo displays some of the same information, but not all of it.

Basic usage is:

sudo dmidecode

The output can be narrowed down using one of the options described here. In my case, I wanted to know the part number of the DRAM fitted to a system while the system was running and without physically looking at the part. This command selects just the memory information:

sudo dmidecode -t memory
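If only the part numbers are of interest, the output can be filtered further with grep (a quick sketch; the exact field names can vary a little between BIOS vendors):

sudo dmidecode -t memory | grep -i "part number"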

Ignoring the annoying SIG32 signal when debugging SPDK apps in QtCreator

Well, I don’t know if I am doing something wrong but, every time I shut down an SPDK app being debugged via QtCreator, SIG32 gets trapped and pauses execution rather than letting the app clean up and exit. It’s easy to suppress the pop-up window, but I couldn’t work out how to avoid trapping the signal entirely. Fortunately, I found this post which has the solution.

Basically, open up the Options window and select Debugger, then select the GDB tab. Within that, there’s a text box for Additional Startup Commands. In that box, add the line:

handle SIG32 pass nostop noprint

Then press Apply and Ok. From now on, the app won’t get hung up if this signal is generated on exit.
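As a side note, if the same problem shows up when running gdb directly from the command line, the same directive can be added to ~/.gdbinit so that it applies to every gdb session (QtCreator only needs the setting above):

echo "handle SIG32 pass nostop noprint" >> ~/.gdbinit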

Forcing a specific Linux kernel at boot time

Another problem I had with the irdma driver build mentioned in the previous post is that it would not build with the default kernel in my Ubuntu 20.04 install (5.11.0). However, I knew it would build against 5.4.0, which was installed but not automatically selected. The trick is to get grub to default to the desired version. To do this, first look through /boot/grub/grub.cfg and find the menu entry string for the desired version. In my case, this was:

Ubuntu, with Linux 5.4.0-42-generic
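A quick way to list the available menu entry titles without reading the whole file is something like this (a rough sketch that assumes the usual single-quoted titles in grub.cfg):

sudo grep -E "menuentry |submenu " /boot/grub/grub.cfg | cut -d "'" -f 2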

Then, edit /etc/default/grub and change the GRUB_DEFAULT line to this:

GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.4.0-42-generic"

Finally, run:

sudo update-grub

to activate the new default option and then reboot.
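After the reboot, it is easy to confirm that the intended kernel is actually running:

uname -r

In this case, that should report 5.4.0-42-generic.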

Solving a kernel module signing issue

I am currently working on getting an E810 100G Ethernet NIC up and running and had a bit of trouble building the irdma driver due to a signing problem:

At main.c:160:
- SSL error:02001002:system library:fopen:No such file or directory: ../crypto/bio/bss_file.c:69
- SSL error:2006D080:BIO routines:BIO_new_file:no such file: ../crypto/bio/bss_file.c:76
sign-file: certs/signing_key.pem: No such file or directory

After some research, I came across the solution here, which seemed to work for me. Specifically, the solution is (reproduced here just in case I can’t find the original again!):

cd /lib/modules/$(uname -r)/build/certs

sudo tee x509.genkey > /dev/null << 'EOF'
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = myexts
[ req_distinguished_name ]
CN = Modules
[ myexts ]
basicConstraints=critical,CA:FALSE
keyUsage=digitalSignature
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
EOF

sudo openssl req -new -nodes -utf8 -sha512 -days 36500 -batch -x509 -config x509.genkey -outform DER -out signing_key.x509 -keyout signing_key.pem
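With the key and certificate in place, the driver build should be able to sign its modules. Once the module has been built and installed, the signature fields can be checked with modinfo (assuming the module ends up being named irdma):

modinfo irdma | grep -i sig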

Debugging DPDK and SPDK applications in QtCreator: the root issue

There doesn’t seem to be any easy way to get SPDK and DPDK to work without root permission (ok, there is a way, but this technique works for these and other things). That’s not impossible to deal with when running code from a terminal, but it does present a problem when debugging with QtCreator (or any other IDE that directly calls the debugger). There is a trick that allows gdb to be run with root permission without requiring a password (largely as described here). First, run:

sudo visudo

and add this line at the bottom of the file:

<username> ALL=(root) NOPASSWD:/usr/bin/gdb

where <username> is the appropriate user name. Now all that’s needed is to call sudo gdb instead of gdb. This can be done by creating a simple executable script called sudo-gdb containing:

#!/bin/bash
# Run gdb as root, passing all command line arguments through
sudo gdb "$@"
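The script also needs to be made executable and put somewhere convenient; the destination here is just an example, since QtCreator is given the full path anyway:

chmod +x sudo-gdb
sudo cp sudo-gdb /usr/local/bin/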

QtCreator then needs to know to use this version of the debugger. Go to Tools -> Options and select Kits. Select the Debugger tab, click on Add and enter the full path to the new script. Then click on the Kits tab, select the new debugger in the Debugger drop-down and click on Ok.

Yes, I should probably do it with the permissions trick and I will. One day.

Adding a password protected share to Ubuntu

This is one of those posts that helps me remember how to do something I once knew how to do but would inevitably forget again one millisecond later. In this case, it is how to configure a directory that is shared on the network, but only with registered users.

First off, create a new group for the users (I will use the group name sharegroup as an example):

sudo addgroup sharegroup

Then add yourself to the group:

sudo usermod -aG sharegroup $USER

Add any other users to the group in the same way. Note that a group change only takes effect for a user after they log out and back in.
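Group membership can be checked at any time with:

id $USER

since this reads the group database directly rather than the current session.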

Now it is time to prepare the directory to be shared. Create (or choose) the directory and share it using the local network share option from the GUI. Alternatively, edit /etc/samba/smb.conf directly by adding something like this at the bottom:

[<sharename>]
  path = <full path to shared directory>
  writeable = yes
  guest ok = no
  read only = no
  browsable = yes
  create mask = 0770
  directory mask = 0770
  valid users = @sharegroup

and then restart samba:

sudo systemctl restart smbd

Then the directory’s group ownership must be set to the new group. Make the shared directory the current directory and enter:

sudo chgrp -R sharegroup *
sudo chgrp sharegroup .

Be very careful any time the -R option is used with sudo, as it is easy to completely mess up the OS! Generally, it is best to change into the directory being modified first; that way, there’s less chance of making unintentional modifications.

Now change the permissions to allow group members to operate on files properly:

sudo chmod -R g+rw *
sudo chmod g+rw .
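Optionally, setting the setgid bit on the directory means that files created locally (outside samba) also inherit the group; files created via the share are already handled by the create and directory masks above:

sudo chmod g+s .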

Then the users must be added to samba – for example:

sudo smbpasswd -a $USER

This will ask for a password to be used to access the share. That’s it!
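A couple of quick sanity checks can be run at this point (assuming the samba client tools are installed): testparm validates the smb.conf syntax and smbclient should list the new share when given the password set above.

testparm -s
smbclient -L //localhost -U $USER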

Using UWB asset tags to dynamically create rt-ispace augmentations

Essentially, an asset tag is a small device (UWB-based in this case) that can be used to locate and instantiate an augmentation in an rt-ispace environment completely dynamically. The augmentation follows the position and orientation of the asset tag, making for a very simple way to implement augmented spaces. If engineered properly, an asset tag could be a very basic piece of hardware: little more than a UWB radio, a MEMS IMU and a battery. Instead of WiFi as in this prototype, pose updates could be sent over the UWB infrastructure to simplify things further. Ideally, these tags would be extremely cheap and could be placed anywhere in a space as an easy way of adding augmentations. The augmentations can be proxy objects (all aspects of a proxy object augmentation can be modified by remote servers) and can be as simple or complex as desired.

There are some similarities and differences with the ArUco marker system for instantiating augmentations. An ArUco marker can provide an ID, but that has to be matched with a previously instantiated object that has the same ID attached; asset tags don’t require any pre-configuration like that. Another problem with ArUco markers is that they are very sensitive to occlusion – even a wire running across a marker might make it undetectable. Asset tags are not affected by occlusion and so will function correctly in a much wider range of circumstances. They do require UWB-enabled spaces, however. In the end, both styles of augmentation instantiation have their place.

Note that the asset tag doesn’t need to contain the actual asset data (although it could if desired). All it needs to do is provide the URL of a repository where the asset (either a Unity assetbundle or a glTF blob) can be found. The asset is then streamed dynamically when it needs to be instantiated. The tag also provides information about where to find function servers in the case of a proxy object. The rt-ispace user app (in this case an iOS app running on an iPad Pro) doesn’t need to know anything about asset tags – they just inject normal-looking (but transient) augmentation updates into the rt-ispace system so that augmentations magically appear. Obviously, this kind of flexibility could easily be abused and, in real life, a proper security strategy would need to be implemented in most cases. For development, though, it’s nice for things to just work!

One application that I like is a shared space where people can bring along their virtual creations in the form of asset tags and just place them in the rt-ispace space so that any user in the space can see them.

Another idea is that items in stores could have rt-ispace asset tags attached to them (like security tags today) so that looking at an item with an AR device would perhaps demonstrate some kind of feature. Manufacturers could supply the asset and function servers, freeing the retail store from having to implement something for every stocked item.

The video above shows the augmentation tracking the UWB tag around the space, with the IMU controlling the augmentation’s orientation. For now, the hardware is a complete hack with multiple components, but it does prove that the concept is viable. The UWB tag (the white box on the floor under the figure’s right foot) controls the location of the augmentation in the physical space. A Raspberry Pi fitted with an IMU provides orientation information and sends the resulting pose via WiFi to the rt-ispace servers. The augmentation is the usual glTF sample, CesiumMan.

Linking AR augmentations to physical space using the ArUco marker system

Following on from the earlier work with ArUco markers, rt-ispace can now associate ArUco markers with augmentations in a space. The image above shows two glTF sample models attached to two different ArUco marker codes (23 and 24 in this case). Since these models are animated, a video also seems appropriate!

The image and video were obtained using an iPad Pro running the rt-ispace app that forms the front end of the rt-ispace system. A new server, EdgeAnchor, receives the AR video stream from the iPad via the assigned EdgeAccess, detecting any ArUco markers that may be in view. The video stream also contains the iPad camera intrinsics and AR camera pose, which allows EdgeAnchor to determine the physical pose of the marker relative to the camera view. The marker detection results are sent back to the iPad app (via EdgeAccess), which then matches the ArUco IDs to instantiated augmentations and calculates the world-space pose for each augmentation. There are some messy calculations in there, but it actually works very well.

The examples shown are set up to instantiate the augmentation based on a horizontal marker. However, the augmentation configuration allows for a 6-dof offset to the marker. This means that markers can be hung on walls with augmentations either on the walls or in front of the walls, for example.

A single EdgeAnchor instance can be shared among many rt-ispace users as no state is retained between frames, allowing the system to scale very nicely. Also, there is nothing specific to ArUco markers here: in principle, EdgeAnchor could support multiple marker types, providing great flexibility. The only requirement is that the marker detection results in a 6-dof pose relative to the camera.

Previously, I had been resistant to the use of markers, preferring to use the spatial mapping capabilities of the user device to provide spatial lock and location of augmentations. However, there are many limitations to those systems, especially where there is very limited visual texture or depth variation to act as a natural anchor. Adding physical anchors means that augmentations can be reliably placed in very featureless spaces, which is a big plus in terms of creating a pleasant user experience.

Adding ArUco marker detection to rt-ai

There are many situations where it is necessary to establish the spatial relationship between a camera in a space and 3D points within the same space. One particular application of interest is the ability to use markers to accurately locate holograms in a space so that AR headset users see the holograms locked in place, even as they look or move around the space. OpenCV includes ArUco marker detection, so that seemed like a good place to start. The screen capture above shows the rt-ai ArUco marker detector identifying the pose of a few example markers.

This is the simple rt-ai test design with the new ArUcoDetect stream processing element (SPE). The UVC camera was running at 1920 x 1080, 30 fps, and the ArUco SPE had no trouble keeping up with this.

This screen capture is a demonstration of the kind of thing that might be useful in an AR application. The relative pose of the marker has been detected, allowing a 3D application to replace the marker with an associated hologram.

While the detection is quite stable, the ArUco SPE implements a configurable filter to help eliminate occasional artifacts, especially in the blue (z) axis, which can swing around quite a bit under some circumstances due to the pose ambiguity problem. The trick is to tune the filter to eliminate any residual pose jitter while maintaining adequate response to headset movement.

One challenge here is management of camera intrinsic parameters. In this case, I was using a Logitech C920 webcam for which calibration intrinsics had been determined using a version of the ChArUco calibration sample here. It wouldn’t be hard for the CUVCCam SPE to include camera intrinsic parameters in the JSON associated with each frame, assuming it could detect the type of UVC camera and pick up a pre-determined matrix for that type. Whether that’s adequate is TBD. In other situations, where the video source is able to supply calibration data, the problem goes away. Anyway, more work needs to be done in this area.

Since rt-ai stream processing networks (SPNs) can be integrated with SHAPE via the Conductor SPE (an example of the Conductor is here), an AR headset running the SHAPE application could stream front-facing video to the ArUco SPN, which would then return the relative pose of detected markers that have previously been associated with SHAPE virtual objects. This would allow the headset to correctly instantiate the SHAPE virtual objects in the space and avoid the problems of relying on inside-out tracking alone (such as in a spatial environment with a repeating texture that prevents unique identification).

Extreme edge depth video processing: Intel L515 LiDAR + Raspberry Pi and Stereolabs ZED + Jetson Nano

Depth cameras are an important component of rt-ispace, but things just aren’t going to scale if each one needs a server with a GPU just to generate useful data. This means that the extreme edge, consisting of (hopefully) low-cost components that can be widely distributed, needs to be able to interface with depth cameras and make the data available to the wider network.

I have been testing with a Stereolabs ZED camera connected to a Jetson Nano and an Intel L515 LiDAR connected to a Raspberry Pi 4. The depth video stream generated by the rt-ai capture code is 1280 x 720 pixels, JPEG encoded, along with uncompressed 640 x 360 16-bit depth data, with a target frame rate of 15fps. Both systems seem quite capable of capturing and transmitting the data streams, as shown above. The rt-ai design being used is this:

The rtai0 node is the Raspberry Pi 4. In this case, the depth video streams from the cameras are not being processed further on the extreme edge systems. The depth data views, which display data coming directly from the extreme edge systems, show that the Jetson Nano is generating frames at the target rate while the Raspberry Pi 4 is achieving about half that. Both are usable rates for many applications.

The depth video frames are also passed to the OpenPoseGPU stream processing element (SPE). This is an implementation of OpenPose that uses the desktop GPU (a GTX 1080 Ti) to perform pose estimation. The OpenPoseGPU SPE can work with standard video streams but, if given a depth video stream, will work out the depth of each identified joint and add that to the generated metadata.

The total throughput of the OpenPoseGPU SPE is around 14fps. As can be seen in the rt-ai design, the depth video streams are multiplexed into the OpenPoseGPU SPE so that this capability is being shared between the two streams. The FanOut SPE separates the output streams which are then sent to viewers. Due to the limited throughput of the OpenPoseGPU SPE, the data streams are reduced in frame rate by a factor of 2.

So this design, where OpenPose processing is offloaded from the extreme edge, works fine, but it would be far more interesting to do the processing at the extreme edge itself.

The screen capture above shows pose estimation running at the extreme edge, using an Intel NCS 2 to run inference via the OpenPoseVINO SPE on the rtai0 Raspberry Pi 4 node. This does work pretty well but can only achieve 2fps. That might be ok for some applications, but it would be nice to get to around 10fps.

I also tried running trt_pose on the Jetson Nano for extreme edge pose estimation there, but this was not successful. It may be that trying to run the ZED camera and trt_pose on the same Nano is just asking too much. Moving to a Xavier NX would probably make sense as it has double the memory and more power in general, but it is a fair bit more expensive than the Nano and so somewhat less relevant to the extreme edge application.

Work is now moving on to a new architecture using distributed inference to relieve the load on the extreme edge while still achieving usable pose estimation frame rates.