Thank goodness for technology

When my sight began to slip away, I feared losing so many things I love. After all, so much of our daily lives revolves around the ability to connect on a visual level.

My first love has always been technology, and just as touch screens were becoming commonplace, I was unable to see them. How could I possibly interact with technology that was so heavily visual? There wasn't even any tactility to the screen; it was a perfectly smooth piece of glass. No raised buttons to identify what I was pressing, no way to memorise an elaborate process of taps and clicks – I felt lost. Lost but not defeated; I clung steadfastly to the belief that there must be a way to adapt this technology to work to my benefit.

There was an unforeseen advantage to this, and with it a new adaptability. The migration to touch screens forced the industry to reimagine how we would interact with these devices. The result was Apple developing VoiceOver for the iPhone, a gesture-based screen reader. I didn't realise it at the time, but this would be my entry point to making the world accessible.

Now that my phone was equipped with the ability to read on-screen items aloud, it became indispensable. It would be my reading tool for university, with all my books converted to digital form and my phone reading them aloud. It would also become my window onto the world at large – Facebook, Twitter and email all made accessible through this fantastic interface. It even allowed me to help my kids with their homework. It crept into every aspect of my life, becoming more and more indispensable as the days wore on. The unforeseen disadvantage: battery anxiety. My phone was now an extension of me, filling in the gaps that my lack of sight had created.

With the constant creation of new and previously unthinkable technological advancements, I wonder whether my main assistive device will even be a phone. Looking ahead 5-10 years, I foresee a transitional period in the mechanics of interacting with our technology: one that will see a move away from typing on screens and towards spoken language, with a natural migration to a screenless future (or at least one without screens as we know them now). I believe this technology is just on the horizon, and it is something I relish the thought of.

Accessibility – low-hanging fruit

There is a lot of low-hanging fruit ripe for the picking within the inclusive design realm. So in 2017, what fruit do I think is the ripest?

Dark mode. This one feature alone, implemented OS-wide, could make a huge difference to a substantial user base. Not only would it solve a problem for the visually impaired, for whom contrast is a major issue, but also for those with situational requirements where dark mode makes the most sense. Think late at night in bed, when that white screen just makes your eyes ache.

So will there be an appetite for this in 2017? My gut says yes. If rumours hold true and the iPhone moves to an AMOLED display, we will see the introduction of dark mode. This will have a wonderful knock-on effect of influencing design direction for a while. So not only will we see dark mode introduced at the OS level, but we will start to see a whole host of apps fall in line.

The dream scenario? For Apple to introduce a way for apps to toggle in and out of dark mode dependent on user preferences. This may be a visually impaired user using the feature instead of invert colours, or perhaps a sighted user having dark mode set for specific time frames. I think this scenario is less likely than an OS-wide dark theme and waiting for app creators to fall in line, but we can dream.

So let's see if that low-hanging fruit is finally picked this year.

Lightweight night vision goggles

Night blindness is a common issue for people with low vision, especially those with Retinitis Pigmentosa. While your vision may be adequate for mobility in daylight, as the night draws in and contrast begins to drop, night blindness occurs. 
When I had sufficient vision for this to be a problem, I was always tempted by night vision goggles, and there have even been research projects exploring the possibility. The good news is that they can really help with mobility; the bad news is that night vision goggles are expensive, cumbersome and heavy.
Due to these restrictions I never quite took the plunge. But an interesting development once again has me intrigued by night vision. Thanks to a new breakthrough, the advantages of night vision goggles can now be had in a spectacle frame. There is still a need for external power, but it is great to see this moving forwards.
As augmented reality products advance, it would be great to see this technology integrated to enable low-light navigation.

Blind hiring? Use the blind

The technology industry has a diversity problem, as do many others. An immediate point of change is the hiring process, and my interest was piqued by a comment from Leslie Miley of Slack. He proposed that a blind assessment process be used during hiring, stripping applications of identifiable data.

This is an interesting proposition, and similar to one I have been proposing for a while: don't simply do blind assessments, use blind people to do the hiring.

Past the application assessment stage, blind people really come into their own. The inability to see the applicant massively reduces implicit bias. It cannot be overstated how important it is to remove those unconscious biases that we all possess but find difficult to identify. Removing the ability to visually trigger these unconscious biases will go a long way towards diversifying the hiring process.

But couldn’t you just wear a blindfold? Why use someone who is blind?

Apart from this being a terrible gimmick, social interactions can be difficult when you remove the vision of one participant. Blind people, however, have had years to perfect non-visual interactions – to the point where, if I don't have my guide dog or cane with me, no one in a social interaction ever realises I am blind. I can maintain eye contact, which greatly eases the comfort of the other participant and is something a blindfolded interviewer would be unable to do.

Blind people have also spent many years learning how to read people without visual cues – actually listening to someone, rather than being distracted by a layer of visual noise. These advanced listening skills take years to hone, and blind people have been perfecting them their entire lives.

So if you want to diversify your hiring process, start by diversifying your hiring team.

Mixed reality systems

Project Tango seemed like a revelation a couple of years ago: a system that could do 3D mapping of environments in a small package. Now, with the demands of inside-out tracking for gaming, we are starting to see other products hit the market.
I still feel this technology has a long way to go, eventually being shrunk down to a sensor as small as, if not smaller than, today's front-facing phone cameras. Once we arrive at that point, we enter the realm of discreet technology that is capable of augmenting reality in interesting ways.
I really see this being immensely helpful in the assistive technology arena. It will definitely be shaping the future of such products.

Windows running on ARM

Exciting news from Microsoft that future versions of Windows will run on ARM. Perhaps even more impressive, it will emulate 32-bit x86; there is even a demo of Photoshop running on a Qualcomm Snapdragon 820.
If Microsoft can really improve Narrator, as has been mentioned recently, this could be a great unification across all their devices. Not to mention it may instantly solve their lack of apps on mobile, opening the door for a Surface mobile phone.
Computer vision for the blind

With computer vision rapidly improving, it was only a matter of time before we began to see head-mounted computer vision systems. Horus has a unique approach in that it doesn't rely on connectivity for the visual processing, meaning it will work even when the data connection is down. It covers some interesting basics of computer vision for the blind, such as reading and facial recognition. It does, however, suffer from what I consider the ultimate pitfall of these products: it was designed specifically for the blind, meaning the cost is high, as the market is small.
There is definitely space for a head-mounted digital assistant, so with a little shift in the market this product could be aimed at a wider audience, bringing down the cost and making it far more accessible.
However, this is a wonderful step forward and I am looking forward to seeing where products like this go.

Object avoidance for the blind

After running into a flagpole in the Namibian desert and a burnt-out car on the streets of Doncaster, I decided it was time to work on object detection. My previous challenges had all utilized very simple systems, and I wanted to stay within that simple communication paradigm for object detection.

Learning to train solo as a blind runner used two very simple inputs: distance and feeling underfoot. Combined, these inputs allowed me to train solo along a 5-mile route. Objects were identified by me running into them and memorising where they were relative to an audible distance marker. I had reduced blind navigation to two simple elements, and that was enough to run. With one, well, two key assumptions: 1. I knew where all the obstacles were, and 2. there would be no new obstacles. I knew these assumptions were flawed, but I was happy to take on the risk.

Running through the desert solo made the exact same assumptions: I would be aware of all obstacles ahead of time, and there would be no surprise obstacles. This allowed for a very simple navigation system, as I had reduced the problem to one of bearing. As long as I knew the bearing I was running and could stick to it, I could navigate a desert. The system, developed along with IBM, used simple beeps to maintain bearing: silence denoted the correct bearing, a low tone meant I had drifted left, and a high tone that I had drifted right. Incredibly simple, but simple is all you need in these situations; an overload of sensors and data doesn't improve the system, it just puts the process of understanding what is going on beyond comprehension. By reducing navigation to one simple communication point for the user, in this case me, I was able to navigate the desert solo.
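
To make that one-communication-point idea concrete, here is a minimal sketch of the beep logic. To be clear, this is my illustration rather than the actual IBM implementation: the tone frequencies, the 5-degree dead zone, and the read_heading/play_tone hooks are all assumed.

```python
# A minimal sketch of the bearing-to-beep mapping, for illustration only.
# read_heading() and play_tone() stand in for a real compass and audio
# output; the frequencies and dead zone below are assumed values.

TARGET_BEARING = 90.0  # degrees; the bearing to hold (assumed)
DEAD_ZONE = 5.0        # degrees of drift tolerated before any beep (assumed)

def heading_error(current, target):
    """Signed drift in degrees, wrapped to [-180, 180)."""
    return (current - target + 180.0) % 360.0 - 180.0

def bearing_feedback(current_heading):
    """Silence on the correct bearing, a low tone for a drift left,
    a high tone for a drift right."""
    error = heading_error(current_heading, TARGET_BEARING)
    if abs(error) <= DEAD_ZONE:
        return None                    # silence: on course
    return 220 if error < 0 else 880   # Hz: low = left, high = right

# Hypothetical run loop:
# while running:
#     tone = bearing_feedback(read_heading())
#     if tone is not None:
#         play_tone(tone, duration_s=0.2)
```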

So where did it go wrong? Well, those key assumptions: the obstacles in this case were a flagpole and a rock field. The flagpole can be engineered out; with the rock field, however, we run into the complex system problem. Even a highly granular descriptive system would not allow the end user to navigate such a rock field. It was a unique and specialized environment that required centimeter-accurate foot positioning; indeed, the correct way to navigate it would be to avoid it entirely!

But could we avoid that burnt-out car and flagpole? Yes, we could. Could we make it a simple system for the user to understand? Absolutely.

The simplest way to communicate an object within a visual field is haptically. It is highly intuitive for the end user, with vibration feedback instantly recognizable as an obstacle. For the sensor, a tiny ultrasonic unit is mounted at chest level. The chest was chosen as it always follows the direction of running; we had discounted a head mounting, as people often look in a different direction to the one they are moving in.

It is an incredibly simple system, but that is all it needs to be. The idea is to explore the minimal communication required for obstacle avoidance. In future revisions we intend to use multiple sensors, but we will be ever careful not to introduce complexity to the point where the simple communication system is disrupted. For example, it may be tempting to use a series of sensors all over the body; this, however, increases complexity and creates issues with differentiating between the different vibrations. Not to mention that human interpretation adds latency to the system, which may result in running into the very obstacle we are trying to avoid.
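
For a sense of just how little code this single-signal approach needs, here is a rough sketch along the lines of the one-sensor idea, assuming an HC-SR04-style ultrasonic rangefinder and a vibration motor driven from a Raspberry Pi. The pin numbers and the 2-metre trigger distance are my own placeholder values, not the actual prototype's.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical BCM pin assignments and trigger distance, for illustration.
TRIG, ECHO, MOTOR = 23, 24, 18
OBSTACLE_DISTANCE_M = 2.0   # vibrate when something is closer than this

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(MOTOR, GPIO.OUT)

def read_distance_m():
    """Fire a 10 microsecond trigger pulse and time the echo; the speed
    of sound gives distance (round trip, hence the divide by two)."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = time.time()
    while GPIO.input(ECHO) == 0:   # wait for the echo pulse to begin
        start = time.time()
    end = start
    while GPIO.input(ECHO) == 1:   # wait for the echo pulse to end
        end = time.time()
    return (end - start) * 343.0 / 2.0

try:
    while True:
        # One sensor, one signal: vibration on means obstacle ahead,
        # so drift left or right until the vibration stops.
        obstacle = read_distance_m() < OBSTACLE_DISTANCE_M
        GPIO.output(MOTOR, obstacle)
        time.sleep(0.05)           # roughly 20 readings per second
finally:
    GPIO.cleanup()
```

The design point the sketch makes is that the entire user-facing protocol is a single boolean, which is exactly why the feedback is instantly interpretable at running pace.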

This all sounds interesting, but does it work? Yes, yes it does. I was over in Munich recently to test an early prototype. With only one sensor, I felt we were so close that I was tempted to test it while running. The immediacy of the system is incredible. It is totally intuitive that a vibration denotes an obstacle; avoiding it is a simple case of drifting left or right until there is no vibration, then moving on by.

Below is a video of the device in action. I will continue to give updates on the development of the system, up until I give it a real workout at a packed city marathon, where I will run solo.