I wanted a rover robot that would explore an area and then map it out once exploration was finished. The robot would move at random, use simple sensors to detect obstacles, and store its movements as vectors in memory. Obstacle locations could then be inferred from the movement vectors, because the robot must change direction each time it detects and responds to an obstacle. Given enough inferred obstacle locations, a map could be produced so that future exploration could be intelligently guided rather than random. Human visualization of the movement and obstacle data would also be a useful capability.
I built a robot prototype that approximates the functions above. I used an Arduino Uno R3 board to control the robot's movements and to store movement data in the EEPROM of the board's microcontroller. At this point the robot operates on the tenuous assumption that running its continuous-rotation servos at a constant speed produces constant movement of the robot itself. I then wrote a program in the Processing language that reads the movement-vector data from the Arduino Uno over a serial cable and plots the movement vectors for the user to interpret. Below are images of the robot and of the path plot generated by Processing. I'm currently working on a Processing algorithm that will make assumptions about the movement vectors to generate a "map" of the robot's surroundings.
Because this cartography technique relies on high-volume data acquisition, and because the environment could immobilize or destroy the explorer robot, generating a map with a single autonomous robot would be slow and fragile. I envision a future system in which a swarm of small, expendable robots is deployed in unison to gather large amounts of data; losing or immobilizing a fraction of the robots would be acceptable and would not significantly impede data gathering.