After running into a flagpole in the Namibian desert and a burnt-out car on the streets of Doncaster, I decided it was time to work on object detection. My previous challenges had all utilized very simple systems, and I wanted to stay within that simple communication paradigm for object detection.
Learning to train solo as a blind runner used two very simple inputs: distance and the feeling underfoot. Combined, these inputs allowed me to learn to train solo along a 5 mile route. Objects were identified by me running into them and memorising where they were relative to an audible distance marker. I had reduced blind navigation to two simple elements, and that was enough to run. Well, with two key assumptions: 1. I knew where all the obstacles were, and 2. there would be no new obstacles. I knew these assumptions were flawed, but I was happy to take on the risk.
Running through the desert solo made the exact same assumptions: I would be aware of all obstacles ahead of time and there would be no surprise obstacles. This allowed for a very simple navigation system, as I had reduced the problem to one of bearing. As long as I knew the bearing I was running and could stick to it, I could navigate the desert. The system, developed along with IBM, used a simple beep scheme to maintain bearing: silence denoted the correct bearing, a low tone meant I had drifted left, and a high tone that I had drifted right. Incredibly simple, but simple is all you need in these situations; an overload of sensors and data doesn't improve the system, it just makes understanding what is going on impossible. By reducing navigation to one simple communication point to the user, in this case me, I was able to navigate the desert solo.
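For the curious, here is a minimal sketch of that beep logic in Python. The tone frequencies, the 5 degree dead zone and the function names are my own illustrative choices, not the actual IBM implementation; a compass heading in degrees is assumed as input.

```python
# Sketch of the bearing-beep logic: silence on course,
# low tone for a leftward drift, high tone for a rightward drift.

LOW_TONE_HZ = 220    # drifted left
HIGH_TONE_HZ = 880   # drifted right
DEAD_ZONE_DEG = 5.0  # assumed tolerance before any beep sounds

def bearing_error(target_deg: float, heading_deg: float) -> float:
    """Signed error in degrees, wrapped to [-180, 180).
    Negative means the runner has drifted left of the target bearing."""
    return (heading_deg - target_deg + 180.0) % 360.0 - 180.0

def tone_for_heading(target_deg: float, heading_deg: float) -> int | None:
    """Return the tone to play, or None for silence (on course)."""
    error = bearing_error(target_deg, heading_deg)
    if abs(error) <= DEAD_ZONE_DEG:
        return None
    return LOW_TONE_HZ if error < 0 else HIGH_TONE_HZ

# Example: running on a target bearing of 90 degrees (due east).
assert tone_for_heading(90, 92) is None          # within the dead zone: silence
assert tone_for_heading(90, 70) == LOW_TONE_HZ   # drifted left: low beep
assert tone_for_heading(90, 115) == HIGH_TONE_HZ # drifted right: high beep
```

The dead zone matters: without it the device would chirp constantly at tiny heading wobbles, which is exactly the kind of sensory overload the system is designed to avoid.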
So where did it go wrong? Well, those key assumptions. The obstacles in this case were a flagpole and a rock field. The flagpole can be engineered out; the rock field, however, runs into the complex-system problem. Even a highly granular descriptive system would not allow the end user to navigate such a rock field. It was a unique and specialized environment that required centimeter-accurate foot positioning; indeed, the correct way to navigate it would be to avoid it entirely!
But could we avoid that burnt-out car and flagpole? Yes, we could. Could we make it a simple system for the user to understand? Absolutely.
The simplest way to communicate an object within a visual field is haptically. It is highly intuitive for the end user, with vibration feedback instantly recognizable as an obstacle. For the sensor, we use a tiny ultrasonic sensor mounted at chest level. The chest was chosen as it always follows the direction of running; we had discounted a head mounting, as people often look in a different direction to the one they are moving in.
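At its core the decision logic is a single comparison. The sketch below is my own simplification, with a hypothetical alert threshold and made-up helper names (read_distance_cm(), set_vibration()) standing in for the real sensor and haptic driver:

```python
ALERT_DISTANCE_CM = 300.0  # assumed threshold; would be tuned to running speed

def should_vibrate(distance_cm: float) -> bool:
    """One bit of communication: vibrate if an obstacle is within range."""
    return distance_cm < ALERT_DISTANCE_CM

# On the device, the loop would poll the chest-mounted sensor and
# drive the motor, roughly (hypothetical hardware helpers):
#     while True:
#         set_vibration(should_vibrate(read_distance_cm()))
#         time.sleep(0.05)  # ~20 Hz keeps the feedback feeling immediate
#
# Simulated approach toward an obstacle: the buzz starts once it is in range.
for distance_cm in (500.0, 400.0, 310.0, 290.0, 150.0):
    state = "VIBRATE" if should_vibrate(distance_cm) else "silent"
    print(f"{distance_cm:.0f} cm -> {state}")
```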
It is an incredibly simple system, but that is all it needs to be. The idea is to explore the minimal communication required for obstacle avoidance. In future revisions we intend to use multiple sensors, but we will be ever careful not to introduce complexity to the point that the simple communication system breaks down. For example, it may be tempting to use a series of sensors all over the body; this, however, increases complexity and creates issues differentiating between the different vibrations and what each one is detecting. Not to mention that human interpretation adds latency to the system, which may result in running into the very obstacle we are trying to avoid.
This all sounds interesting, but does it work? Yes, yes it does. I was over in Munich recently to test an early prototype. With only one sensor, I felt we were so close I was tempted to test it while running. The immediacy of the system is incredible. It is totally intuitive that a vibration denotes an obstacle. Avoiding the obstacle is a simple case of drifting left or right until there is no vibration, then moving on by.
Below is a video of the device in action. I will continue to give updates on the development of the system up until I give it a real workout at a packed city marathon, where I will run solo.