

The Networked Interaction Laboratory

The Networked Interaction Laboratory (yes, it's pronounced "NIL") provides a focus for research into human-computer interaction with a strong emphasis on virtual reality applications. The networked part of the name arises from the way in which the distributed hardware and software components that form the NIL infrastructure are controlled and managed.

The NIL is based around a pair of super-high-definition projectors, able to display 4096 × 2400 pixels, back-projecting onto a screen that covers a complete wall of the lab. The size of a projected pixel is about 1 mm². Each projector projects its image through a coloured filter, and users wear spectacles fitted with matching filters. The fields of view of the projectors are carefully aligned, so that users perceive the virtual world stereoscopically and obtain a good impression of depth.

Each projector is driven by a dedicated PC-class computer (not many machines were able to drive 4K graphics cards when the lab was set up). It is vital that these machines update the visual image synchronously: if they do not, the images presented to users' two eyes update out of step and users quickly become nauseated. We achieve this by having a third machine, called nil-command, send viewpoint updates over the local network to the machines that do the rendering: our network is quick enough that this is imperceptible.
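By way of illustration, the fan-out on nil-command might look something like the following Python sketch. This is an assumption about structure rather than the lab's actual code: the line-oriented framing, names, and error handling are all illustrative. On a fast local network every renderer receives each update at essentially the same moment, which is what keeps the views in step.

  import socket
  import threading

  RENDER_PORT = 6666           # display machines connect here
  renderers = []               # open connections to every display machine
  lock = threading.Lock()

  def accept_renderers(server: socket.socket) -> None:
      # Let display machines join the running world at any time.
      while True:
          conn, _addr = server.accept()
          with lock:
              renderers.append(conn)

  def broadcast(viewpoint: str) -> None:
      # Send the same viewpoint update to every renderer, dropping any
      # machine whose connection has gone away.
      with lock:
          for conn in list(renderers):
              try:
                  conn.sendall((viewpoint + "\n").encode())
              except OSError:
                  renderers.remove(conn)

  server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  server.bind(("", RENDER_PORT))
  server.listen()
  threading.Thread(target=accept_renderers, args=(server,), daemon=True).start()
  # The interaction loop (sketched further below) calls broadcast()
  # whenever the viewpoint changes.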

We have found that this approach scales well. For example, a pair of lower-resolution projectors, driven by precisely the same mechanism, display side views of the virtual world, producing a "poor man's CAVE". Moreover, machines are able to join a running virtual world simply by connecting to TCP port 6666 on nil-command.
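A machine that wants to join could be as simple as the following sketch; the hostname and port come from the description above, the line framing is the same assumption as before, and render() stands in for whatever drawing code the machine actually runs.

  import socket

  def render(viewpoint: str) -> None:
      # Placeholder: a real display machine would redraw the scene here.
      print("viewpoint:", viewpoint)

  with socket.create_connection(("nil-command", 6666)) as conn:
      buffer = b""
      while True:
          chunk = conn.recv(4096)
          if not chunk:
              break                      # nil-command has gone away
          buffer += chunk
          while b"\n" in buffer:
              line, buffer = buffer.split(b"\n", 1)
              render(line.decode())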

The focus of the NIL is not on the virtual world per se but rather on interaction with it. This networked architecture gives a natural way to integrate interaction devices: they simply send packets to nil-command that update the viewpoint in the virtual world, and nil-command forwards the result to all the machines involved in displaying it. We have found that the most effective way to do this is for interaction devices to send UDP, rather than TCP, packets containing incremental changes to the viewpoint; if one of these is dropped by the network, the user simply continues their interaction, which causes further packets to be sent.
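On the nil-command side, the ingest loop for those packets might be sketched as follows, using the UDP port described below. The command handling and step size are illustrative, and forward_to_renderers() is a placeholder for the fan-out to the display machines sketched earlier.

  import socket

  STEP = 0.1                       # illustrative step size
  viewpoint = [0.0, 0.0, 0.0]      # hypothetical x, y, z position

  def forward_to_renderers(vp) -> None:
      # Placeholder for the fan-out to the display machines.
      print("viewpoint ->", vp)

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("", 6667))

  while True:
      data, _addr = sock.recvfrom(1024)   # no connection: any device may send
      command = data.decode().strip()
      if command == "move_forward":
          viewpoint[2] += STEP            # a lost datagram is simply skipped
      # ...handling of other commands elided...
      forward_to_renderers(viewpoint)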

The advantage of interacting via UDP is that there is no concept of a "connection" as there is with TCP, so many interaction devices may interact with the virtual world concurrently. Most people will be familiar with using a keyboard and mouse to interact with, say, a game; but in the NIL, you can also use finger motions on a graphics tablet, spoken commands, or gestures via a Microsoft Kinect. We have even interfaced a (static) bicycle, so that you can cycle through our virtual worlds. In fact, a common demonstration is for one person to produce forward motion through a world by cycling while a second controls turning by gestures; this turns out to be great fun!

To make interfacing devices particularly straightforward, the controlling software that runs on nil-command receives textual commands. To move forward by a single step, for example, the message sent to UDP port 6667 is simply

  move_forward

(There are equivalent single-character alternatives if you think this places too much load on the network.) The step size is configurable in the program that implements the virtual world and adjustable via the same command interface.
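From the sending side, an interaction device needs nothing more than a single datagram; the following Python fragment, using the hostname and port given above, is the whole of a minimal client.

  import socket

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.sendto(b"move_forward", ("nil-command", 6667))

There is no connection to set up or tear down, which is what makes attaching a new device so straightforward.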

One potential problem with this arrangement is that the software on nil-command would have to know the physical arrangement of the views produced by the various computers involved in displaying the virtual world, because that software tells each machine the viewpoint to render. We have overcome this by having that software send each machine the right eye's view only, leaving it to the machines themselves to apply any additional transformations. This approach turns out to be very effective. For example, machines can join a running virtual world and obtain the correct view straight away, which is especially useful for remote participation. All machines receive identical copies of the software that implements the virtual world and use their hostname to choose which of several pre-defined transformations to apply to the distributed view geometry. Hence, simply calling the machine that projects the left eye's view nil-left means that it applies the correct transformation. We have found that this greatly simplifies software maintenance.
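A sketch of what that hostname-based selection might look like follows; apart from nil-left, the hostnames and the transformation values are invented for illustration, and a real system would apply a full transformation matrix rather than a simple offset.

  import socket

  # Extra transformation applied to the right-eye view that nil-command
  # distributes, keyed by hostname (offsets in metres, illustrative only).
  TRANSFORMS = {
      "nil-left":  {"eye_offset": -0.065},  # shift to the left eye's position
      "nil-right": {"eye_offset": 0.0},     # use the distributed view as-is
      # side-wall machines for the "poor man's CAVE" would add a rotation
  }

  transform = TRANSFORMS.get(socket.gethostname(), {"eye_offset": 0.0})
  print("applying", transform)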