About rt-ispace

rt-ispace (Intelligent Space) is a fun research project that I am using to explore what can be done with augmented reality in the real world. It brings together many software components to realize my vision of a world where every space is augmented with virtual objects – holograms or whatever you want to call them. Augmentations could be as simple as information displays (advertising does come to mind, sadly), virtual objects that someone can interact with, or wholesale replacements of large parts of a real space with augmentations that greatly improve on the original. Yes, the idea is that rt-ispace will make the world a better place…

From an implementation perspective, rt-ispace is the combination of three main components:

  • Spatial Networking Cloud (SNC). This was the first component to be developed. It is general-purpose infrastructure for collecting, storing and retrieving data for intelligent spaces.
  • rt-ai. This is actually the newest component. It allows AI inference functions to be flexibly reused and embedded within a Spatial Networking Cloud. rt-ai makes it really easy to process real-time information streams and produce distilled, high-quality information in real time. Embedding within SNC means that any SNC data stream can be passed through an rt-ai stream processing network and then re-injected into the Spatial Networking Cloud for further processing (a minimal sketch of this flow follows the list).
  • SHAPE. SHAPE stands for Scalable Highly Augmented Physical Environment and is the component that makes the spatial augmentations possible. It implements the various servers that manage and operate augmentations and sub-spaces, and it also provides the universal AR application. Importantly, SHAPE supports both pre-configured, global (i.e. visible to everyone) augmentations and dynamically instantiated group augmentations, visible only to selected users. For example, anyone can leave messages in a space for other members of their group, but those messages won’t be visible to people outside the group.
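
To make the SNC / rt-ai interplay a little more concrete, here is a minimal sketch of that flow in Python. None of these names (FakeSNCClient, the topic strings, detect_objects) come from the actual rt-ispace code base; they are placeholders invented purely for illustration, and the real components obviously do much more.

```python
import json


class FakeSNCClient:
    """Stand-in for a Spatial Networking Cloud client (purely illustrative)."""

    def subscribe(self, topic):
        # A real client would yield a continuous stream of SNC messages;
        # here we just yield one canned frame so the sketch runs.
        yield {"topic": topic, "payload": {"image": "<jpeg bytes>"}}

    def publish(self, topic, message):
        # A real client would re-inject the message into the SNC;
        # here we just print it.
        print(f"re-injected on {topic}: {json.dumps(message)}")


def detect_objects(frame):
    # Placeholder for an rt-ai inference function (e.g. an object detector).
    return [{"label": "person", "confidence": 0.97}]


def run_stream_processing(snc):
    # Consume a raw SNC stream, run inference on each message and re-inject
    # the distilled result so other SNC consumers can pick it up.
    for message in snc.subscribe("camera/main_street/video"):
        detections = detect_objects(message["payload"])
        snc.publish("camera/main_street/detections", {"objects": detections})


if __name__ == "__main__":
    run_stream_processing(FakeSNCClient())
```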

There are two aspects of rt-ispace that I think make it really useful. One is that almost anyone can create new augmentations to enhance a space; the other is that only a single universal application is needed on AR headsets. The key enabler for both is the proxy object. Proxy objects are runtime-downloadable assets that are operated by remote servers (hence the proxy in the name). There’s no need to get involved in the nitty-gritty of writing AR applications. Instead, it can be as simple as generating a few JSON messages and sending them to the proxy objects via the rt-ispace API.
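
To give a rough idea of what that might look like in practice, the sketch below builds one such JSON message and sends it to a proxy object over HTTP. The endpoint URL, the message fields and the use of HTTP itself are all assumptions made for illustration, not the documented rt-ispace API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical message asking a text-panel proxy object to display a string
# at a given position within a sub-space. The field names are invented.
message = {
    "object": "text_panel_01",
    "action": "set_text",
    "params": {
        "text": "Welcome to Main Street",
        "position": {"x": 1.5, "y": 2.0, "z": 0.0},
    },
}

# Hypothetical endpoint; in reality the message would go to whatever server
# operates the proxy object for that sub-space.
response = requests.post(
    "http://ispace.example.local/api/proxy",
    json=message,
    timeout=5,
)
print(response.status_code)
```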

For example, someone wearing an AR headset could walk down Main Street in some town and see (and hear) it as a highly dynamic, informative space, no matter what the real space looks (or sounds) like. Because rt-ispace only requires a single universal app on the user’s AR device, the augmentations automatically update to stay relevant as the user moves from sub-space to sub-space (by walking down the street).

It’s also interesting to think of what effect intelligent spaces would have on homes. If you assume for a moment that residents are always wearing AR headsets (far-fetched at the moment, but one day…), who needs big-screen TVs or art on the wall when you can have a (virtual) 108″ screen or the Mona Lisa on the wall instead?

The entertainment possibilities are endless. Football stadiums, theme parks and the like could all benefit massively from rt-ispace. Displaying real-time information in creative ways (such as tagging players on a field with augmentations that show interesting data) is just part of it. And of course, since it is all virtual, information can be displayed in each user’s own language. Visual guidance on how to get around would be very useful, and theme parks could have entirely virtual attractions that are fully embedded in the real environment.

There are many applications for rt-ispace in industrial settings. The usual thing is to annotate or overlay machinery with real-time data, thermal maps and so on. Another idea from long ago addresses spaces like server farms: as you walk down a row of servers, they get augmented with identification information and real-time performance data. rt-ispace could also be used to guide engineers to faulty servers that need attention. But really this is just scratching the surface.

I started work on what is now rt-ispace (it was called Sentient Space back then) about 20 years ago. The problem then was that the technology to realize the vision didn’t exist. Actually, it still doesn’t, really. What’s missing at the moment are AR headsets with high-quality spatial locking that can be worn by lots of people for long periods of time. However, we are not too far from that reality now (I hope). I believe that we will get to the point where not wearing an AR headset is many times worse than not having a smartphone to hand.

Finally, the photo gallery below shows the progression of headset technology that I have tried over the years. Most of them are now gathering dust somewhere around the house.

This is me, by the way.