In one episode of the UK historical comedy series “Blackadder II”, set in England in the reign of Queen Elizabeth I (1558–1603), the protagonist, Edmund Blackadder, is about to set off on a perilous sea voyage to become the first man to sail around the Cape of Good Hope. Lord Melchett, one of the Queen’s courtiers, approaches Blackadder as he is about to set sail …
[Lord Melchett]: “The foremost cartographers of the land have prepared this for you. It’s a map of the area that you’ll be traversing.”
[Melchett hands Blackadder a parchment that is completely blank on both sides. Blackadder looks at Melchett quizzically.]
[Lord Melchett]: “They’d be very grateful if you could just fill it in as you go along.”
The mapping task faced by Blackadder has spawned a rich technology field known as Simultaneous Localization and Mapping (SLAM). SLAM involves a machine, which might be a robot or a mobile/wearable device carried by a person, performing the following pair of tasks concurrently:
1. Dynamically construct and store a map of a previously unknown or changed environment
2. Accurately track the machine’s location within that dynamically constructed map
Construction of a “map” typically involves determining the physical layout of the surrounding area, e.g. rooms, doors, stairs and any important objects or obstacles situated in the environment. Detecting all these features on the fly within an unfamiliar area is a tough challenge for a machine, and involves a sophisticated combination of device sensors, such as laser, sonar or camera, and signal processing.
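To make the two concurrent tasks concrete, here is a deliberately toy sketch (all names hypothetical, and nothing like a real SLAM system, which must fuse noisy sensor readings with probabilistic filters): a robot on a small 2-D grid builds up an occupancy map of the cells it observes while tracking its own position by dead reckoning.

```python
# Toy illustration of the two SLAM tasks: build a map of what the
# sensors observe while tracking the machine's own position.
# WORLD is hypothetical ground truth the "robot" cannot see directly:
# '#' = obstacle, '.' = free space.
WORLD = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def sense(pos):
    """Return the occupancy of the four cells adjacent to pos."""
    r, c = pos
    return {(r + dr, c + dc): WORLD[r + dr][c + dc]
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]}

def slam_run(start, moves):
    """Interleave mapping (record observations) and localization
    (update the pose estimate) at every step."""
    pos, mapped = start, {}
    for dr, dc in moves:
        mapped.update(sense(pos))          # mapping: record what we see
        pos = (pos[0] + dr, pos[1] + dc)   # localization: update pose
    mapped.update(sense(pos))              # sense once more at the goal
    return pos, mapped

pos, mapped = slam_run((1, 1), [(1, 0), (1, 0), (0, 1)])
# pos is where the robot believes it is; mapped covers only the cells
# it has actually observed -- unexplored cells remain unknown.
```

In a real system the pose update would itself be uncertain, which is what makes SLAM hard: errors in localization corrupt the map, and errors in the map corrupt localization.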
There is a lot of interesting work ongoing in combining SLAM with virtual reality (VR) and augmented reality (AR) technologies. Many historical VR and AR systems either had mapping and location information “pre-programmed” into them by the system designer, or used third-party digital mapping services and GPS data to derive a high-level sense of absolute and relative location. SLAM, however, offers the possibility of a VR or AR device learning the fine details of a local environment as the device user turns and moves. As a result, both the type of digital media presented over VR or AR and the correspondence between computer-generated objects and real-world objects could be adapted based on the composition of the real-world scene and the location and orientation of the device user. With SLAM, all this could potentially be done without a priori knowledge of the physical environment, as the device would dynamically build a detailed local map and a sense of its own location within it.
For those of you familiar with computer gaming, the effect is very similar to the “map mode” found in many games: areas your character has already explored are shown in detail, while areas your character has yet to visit are not drawn. The unexplored areas fade out, and their detailed map features are only computed and drawn when those areas are eventually visited.
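The “map mode” effect can be sketched in a few lines (all names here are hypothetical, chosen for illustration): the full map exists in the game’s data, but the renderer only draws tiles within sight range of somewhere the player has actually visited, leaving everything else blank.

```python
# Minimal "fog of war" map renderer: FULL_MAP is the complete level,
# but only tiles near visited positions are revealed to the player.
FULL_MAP = [
    "#####",
    "#..~#",
    "#####",
]

def render_explored(full_map, visited, sight=1):
    """Draw visited tiles and their neighbours; leave the rest as ' '."""
    rows, cols = len(full_map), len(full_map[0])
    out = [[" "] * cols for _ in range(rows)]
    for (r, c) in visited:
        for dr in range(-sight, sight + 1):
            for dc in range(-sight, sight + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    out[rr][cc] = full_map[rr][cc]
    return ["".join(row) for row in out]

# Having visited only (1, 1), the player sees the left of the level;
# the '~' tile further right stays hidden until explored.
view = render_explored(FULL_MAP, visited={(1, 1)})
```

The difference from SLAM is direction: the game hides a map it already has, whereas a SLAM device must construct the map from scratch as it goes.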
Edmund Blackadder would have found a SLAM device very useful on his sea voyage around the Cape of Good Hope! In another relevant TV reference, proponents of the combination of SLAM and virtual reality believe that an environment similar to the “holodeck” in Star Trek will eventually be practical. There are many exciting possibilities for educational simulation and training applications, provided the presentation technology can be fed with sufficient, suitable and easily authored digital content. This content-production aspect is often overlooked and underplayed, yet it is key to delivering the desired user experience and education/training outcomes.