Using caps lock as the screen reader key on Parallels

The caps lock key is often used across multiple operating systems to trigger screen reader functionality. This can become an issue when running multiple operating systems under Parallels. Typically, I have re-mapped a key to act as the screen reader key under Parallels. This works, but it has two main issues: I lose a key, and I have to perform some ridiculous finger gymnastics for keyboard shortcuts.

I wanted a far more refined solution. Ideally, the caps lock key would automatically detect which operating system's window was currently selected and trigger the appropriate screen reader. The great news is that I finally have this working. There are a few steps to make sure it all works smoothly.

Requirements

  1. Windows installed through Parallels and running in Coherence mode

  2. NVDA installed with insert as one of the trigger keys

  3. Install Karabiner-Elements

  4. Download this and extract the JSON

  5. Copy the JSON file from above into the following folder: ~/.config/karabiner/assets/complex_modifications

  6. Activate the complex modification from above in Karabiner-Elements

Now, when switching between apps running in Coherence mode in Windows and your native macOS applications, the caps lock key will trigger the appropriate screen reader.
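For reference, the heart of that JSON is a single rule: while a Parallels window is frontmost, caps lock is remapped to insert for NVDA; everywhere else it is left alone so macOS VoiceOver picks it up (with caps lock enabled as the VoiceOver modifier in VoiceOver Utility). The sketch below shows the general shape of such a rule rather than the exact contents of the download, and the Parallels bundle identifier pattern is my assumption:

{
  "title": "Caps lock as screen reader key (sketch)",
  "rules": [
    {
      "description": "Caps lock -> insert while a Parallels window is frontmost (bundle identifier pattern is an assumption)",
      "manipulators": [
        {
          "type": "basic",
          "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
          "to": [ { "key_code": "insert" } ],
          "conditions": [
            {
              "type": "frontmost_application_if",
              "bundle_identifiers": [ "^com\\.parallels\\." ]
            }
          ]
        }
      ]
    }
  ]
}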

I run a whole host of other keyboard modifications that I will post about in the future, so keep an eye on the blog.

Google Drive on ARM under Parallels on the Mac

For the last few months I have been perfecting my Windows 11 setup through Parallels on the Mac. While Apple Silicon can run Windows 11 fantastically well, tailoring it to my workflow may take a while. There is still a way to go; however, a substantial hurdle was cleared this week.

I now have access to Google Drive under ARM through File Explorer. It is worth mentioning that Google have publicly said they have no plans to support Google Drive on ARM for Windows, so we are left looking for other solutions. Thankfully, this week I have one. Mountain Duck works well under ARM and is capable of mounting Google Drive as well as other cloud services, giving complete access to Google Drive through File Explorer.

It is a great solution and one step forward in supporting my complete workflow.

Last year -> this year

Entering a new year always makes me ponder the challenges and goals for the year ahead. The past year involved the usual treading of the lines between technology, inclusivity and running, so the year ahead will cover those bases, but in new ways.

Last year my two technology highlights were creating an eye gaze control system and working on user-led accessible hackathons. The eye gaze system saw its first use in a real-time painting robot; its applications, however, are much broader, and it would be great to see it integrated with environmental control in 2019.

The hackathons were also a fantastic success. There was a careful and thoughtful focus on the projects being user led. This allowed a number of disabled people to engage, highlight a goal they had and be a key driver during the hackathon. This is something we hope to grow not only this year but in future years.

This year I intend to work on the sonification of data to enable greater inclusivity within computer science, and particularly machine learning. Interpreting and analysing data is an important step and the current tools are somewhat lacking. This will form my MSc dissertation project, and I am looking forward to getting stuck in.

Through exposure to some incredibly interesting projects during the hackathon work, I also intend to do a few side projects around switch access, with a focus on zero-force switch access, i.e. triggering switches without a physical press of a button.

The inevitable overeating at Christmas has also ensured I commit to some running. My favourite side of running nowadays is helping others achieve their goals. So in the first half of this year I will be training with a few friends and crossing the finish line alongside them at their first races.

There is of course always the thought of pushing the boundaries, something that is never too far away. All I need is for LiDAR to drop in price and that line of possibility will be moved forward once more.

HOWTO change the font size in Safari on the iPad and iPhone

The ability to change font size can have an enormous impact on accessibility. Pinch and zoom is wonderful for this on iOS, but it introduces another problem. Zoom too much and you now have to scroll sideways as well as down to consume content.

There is, however, a little workaround. You can increase and decrease the font size on a per-site basis in Safari. This is done through bookmarks: by adding two bookmarks, one to increase and one to decrease, you can manually set the appropriate font size. Reloading the website will return the font to its original size.

To enable this feature follow the steps below:

  1. In Safari, create a new bookmark; this can be for any website, as we will be editing it shortly
  2. Open your bookmarks, tap Edit and edit your new bookmark
  3. Change the title to either Increase Font or Decrease Font
  4. Copy the appropriate code from below into the link field
  5. Tap Save and repeat so you have both increase and decrease font size bookmarks

Increase font size

javascript:var%20p=document.getElementsByTagName('*');for(i=0;i%3Cp.length;i++)%7Bif(p%5Bi%5D.style.fontSize)%7Bvar%20s=parseInt(p%5Bi%5D.style.fontSize.replace(%22px%22,%22%22));%7Delse%7Bvar%20s=12;%7Ds+=2;p%5Bi%5D.style.fontSize=s+%22px%22%7D

Decrease font size

javascript:var%20p=document.getElementsByTagName('*');for(i=0;i%3Cp.length;i++)%7Bif(p%5Bi%5D.style.fontSize)%7Bvar%20s=parseInt(p%5Bi%5D.style.fontSize.replace(%22px%22,%22%22));%7Delse%7Bvar%20s=12;%7Ds-=2;p%5Bi%5D.style.fontSize=s+%22px%22%7D
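For readability, this is the increase bookmarklet with the URL encoding stripped away; the decrease version is identical except that it uses s -= 2:

// Walk every element on the page and bump its inline font size up by 2px
var p = document.getElementsByTagName('*');
for (var i = 0; i < p.length; i++) {
  var s;
  if (p[i].style.fontSize) {
    // The element already has an inline size, e.g. "14px"
    s = parseInt(p[i].style.fontSize.replace("px", ""));
  } else {
    // No inline size set, so start from an assumed 12px
    s = 12;
  }
  s += 2; // use s -= 2 in the decrease bookmark
  p[i].style.fontSize = s + "px";
}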

Now, whenever you need to adjust the font size on a website, tapping the increase or decrease bookmark will adjust the font on the current site. This is a simple way to increase the accessibility of any website in Safari on the iPad or iPhone.

AirPods, The Most Accessible Headphones

Headphones are an often overlooked but essential piece of equipment for the blind. Accessing a screen reader in the privacy of your own home in a quiet room is a simple affair: you can just use the loudspeaker of your phone or computer. Add some environmental noise, head outside or dare to venture into a coffee shop, and the loudspeaker is no longer an option.

Headphones enable me to use my iPhone both indoors and out and about; I literally couldn't use my iPhone without them. Therefore, over the years I have amassed a rather substantial collection: everything from a cheap pair of JVCs up to a rather expensive pair of active noise cancelling Bose. I am rarely seen without a pair of headphones and have them stuffed in every pocket and every bag.

I am constantly looking for the perfect pair of headphones, the pair that will make using my iPhone that much more accessible. Now I have found that elusive pair: the Apple AirPods.
The AirPods are Apple's truly wireless earbuds: two single earpieces that fit snugly inside their own charging case.

They solve many of the problems a blind user has with headphones. Cables. Cables are a nightmare. Get them tangled in your pocket? Try untangling them when you can't see. It just takes that much longer, to the point where if I quickly need to access my phone I would rather not. The time taken to untangle the headphones ends up being greater than the time I needed to use the phone. So often I would either ignore a notification and vow to take a look when I got home, or place the phone close to my ear to listen. After all, with a screen reader the only way you get privacy is by using headphones. Imagine if all your texts were read aloud. That embarrassing one from your friend is even more embarrassing when everyone in the lift hears it too!

So the wireless nature of the AirPods truly makes them more accessible. I can just quickly and easily slip them in. No cables to untangle: just flip the lid of the charging case and they are in my ears for that quick check of my phone.

This brings me to another of my favourite accessibility features: using only one of the AirPods. When you rely on sound to understand what is happening around you, having one ear focused on the screen reader frees up the other for environmental noise. Handy when walking down the street, and handy at home or in a meeting. Previously, if I received a notification in a meeting and hadn't worn headphones on the way in, I was left with three options: ignore the message, go through the messy untangling process, or interrupt the flow of conversation by having everyone hear my notifications through the loudspeaker. Now I have a fourth option: just slip in one AirPod and I am away.

While out and about, another side effect of being blind is generally having only one hand free. To navigate around I use either my guide dog or a long cane. This leaves me no way to untangle headphones, so I would often go for the loudspeaker approach, gambling with the possibility of dropping my phone as I attempt to juggle it around one-handed.

Now I just slip one AirPod out of the case, pop it in my ear and activate Siri.

There is one other fantastic bonus of using one earpiece: I double the battery life. Not to mention that whenever I take them out of the case they are fully charged.

The AirPods have truly increased the accessibility of my iPhone by enabling me to use it in more daily situations. I no longer have to remove myself from a social space to use my phone; these AirPods are increasing my social ability.

They truly are the most accessible headphones.

Thank goodness for technology

When my sight began to slip away, I feared losing so many things I love. After all, so much of our daily lives revolves around the ability to connect on a visual level.

My first love has always been technology, and just as touch screens were becoming commonplace, I was unable to see them. How could I possibly interact with technology that was so heavily visual? There wasn't even any tactility to the screen; it was a perfectly smooth piece of glass. No raised buttons to identify what I was pressing, no way to memorise an elaborate process of taps and clicks – I felt lost. Lost but not defeated; I clung steadfastly to the belief that there must be a way to adapt this to work to my benefit.

There was an unforeseen advantage – and as a result an adaptability – to this. The migration to touch screens forced the industry to reimagine how we would interact with these devices. The result was Apple developing VoiceOver for the iPhone, a gesture-based screen reader. I didn't realise it at the time, but this would be my entry point to making the world accessible.

Now that my phone was equipped with the ability to read on screen items aloud, the phone became indispensable. It would be my reading tool for university, with all the books converted to digital form and my phone now reading them aloud. It would also become my window to interacting with the world at large – Facebook, Twitter, email all made accessible through this fantastic interface. It even allowed me to help my kids with their homework. It would creep into every aspect of my life becoming more and more indispensable as the days wore on. The unforeseen disadvantage: battery anxiety. My phone was now an extension of me, filling in the gaps that my lack of sight had created.

With the constant creation of new and previously unthinkable technological advancements, I wonder whether my main assistive device will even be a phone. Looking ahead 5-10 years, I foresee a transitional period in the mechanics of interacting with our technology: one that will see a move away from typing on screens and towards spoken language, with a natural migration to a screen-less future (or at least one without screens as we know them now). I believe this technology is just on the horizon, and it is something I relish the thought of.

Accessibility – low-hanging fruit

There is a lot of low-hanging fruit ripe for the picking within the inclusive design realm. So in 2017, what fruit do I think is the ripest?

Dark mode. This one feature alone, implemented OS-wide, could make a huge difference to a substantial user base. Not only would it solve a problem for the visually impaired, for whom contrast is a major issue, but also for those with situational requirements where dark mode makes the most sense. Think late at night in bed, when that white screen just makes your eyes ache.

So will there be an appetite for this in 2017? My gut says yes. If rumours hold true and the iPhone moves to an AMOLED display, we will see the introduction of dark mode. This will have a wonderful knock-on effect of influencing design direction for a while. So not only will we see dark mode introduced at the OS level, but we will start to see a whole host of apps fall in line.

The dream scenario? For Apple to introduce a way for apps to toggle in and out of dark mode depending on the user's preferences. This might be a visually impaired user using the feature instead of invert colours, or perhaps a sighted user having dark mode set for specific time frames. I think this scenario is less likely than an OS-wide dark theme and waiting for app creators to fall in line, but we can dream.

So let's see if that low-hanging fruit is finally picked this year.

Object avoidance for the blind

After running into a flagpole in the Namibian desert and a burnt-out car on the streets of Doncaster, I decided it was time to work on object detection. My previous challenges had all utilized very simple systems, and I wanted to stay within that simple communication paradigm for object detection.

Learning to train solo as a blind runner used two very simple inputs: distance and the feeling underfoot. Combined, these inputs allowed me to learn to train solo along a 5 mile route. Obstacles were identified by me running into them and memorising where they were relative to an audible distance marker. I had reduced blind navigation to two simple elements, and that was enough to run. With one, well two, key assumptions: 1. I knew where all the obstacles were, and 2. there would be no new obstacles. I knew these assumptions were flawed, but I was happy to take on the risk.

Running through the desert solo made the exact same assumptions: I would be aware of all obstacles ahead of time, and there would be no surprise obstacles. This allowed for a very simple navigation system, as I had reduced the problem to one of bearing. As long as I knew the bearing I was running and could stick to it, I could navigate a desert. The system, developed along with IBM, used simple beeps to maintain bearing: silence denoted the correct bearing, a low tone beep meant I had drifted left, and a high tone that I had drifted right. Incredibly simple, but simple is all you need in these situations; an overload of sensors and data doesn't improve the system, it just makes the process of understanding what is going on beyond comprehension. By reducing navigation to one simple communication point to the user, in this case me, I was able to navigate the desert solo.
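As a rough sketch of that feedback logic (my own reconstruction for illustration, not the code we actually ran, and the tolerance value is purely assumed), the mapping is just a signed drift check:

// Map the difference between current and target bearing to one of three signals
function bearingFeedback(currentBearing, targetBearing, toleranceDegrees) {
  // Signed drift in degrees, wrapped into the range -180..180
  var drift = ((currentBearing - targetBearing + 540) % 360) - 180;
  if (Math.abs(drift) <= toleranceDegrees) {
    return "silence"; // on bearing
  }
  return drift < 0 ? "low beep" : "high beep"; // drifted left / drifted right
}

// e.g. bearingFeedback(95, 90, 3) -> "high beep", nudging me back to the left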

So where did it go wrong? Well, those key assumptions: the obstacles in this case were a flagpole and a rock field. The flagpole can be engineered out; with the rock field, however, we run into the complex system problem. Even a highly granular descriptive system would not allow the end user to navigate such a rock field. It was a unique and specialized environment that required centimeter-accurate foot positioning; indeed, the correct way to navigate it would be to avoid it entirely!

But could we avoid that burnt-out car and flagpole? Yes, we could. Could we make it a simple system for the user to understand? Absolutely.

The simplest way to communicate an object within the visual field is haptically. It is highly intuitive for the end user, with vibration feedback instantly recognizable as an obstacle. For sensing, we use a tiny ultrasonic sensor mounted at chest level. The chest was chosen as it always follows the direction of running; we discounted a head mounting, as people often look in a different direction to the one they are moving in.
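The control loop behind that is about as minimal as it sounds. The sketch below is purely illustrative: readDistanceCm, setVibration and the 150 cm threshold are hypothetical stand-ins for the real sensor driver, motor driver and tuned trigger distance.

var OBSTACLE_THRESHOLD_CM = 150; // hypothetical trigger distance, not the tuned value

function update() {
  var distance = readDistanceCm(); // hypothetical driver for the chest-mounted ultrasonic sensor
  setVibration(distance < OBSTACLE_THRESHOLD_CM); // hypothetical driver: vibrate only while something is in range
}

setInterval(update, 100); // poll roughly ten times a second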

It is an incredibly simple system, but that is all it needs to be. The idea is to explore the minimal communication required for obstacle avoidance. In future revisions we intend to use multiple sensors, but we will be ever careful not to introduce complexity to the point where the simple communication system is disrupted. For example, it may be tempting to use a series of sensors all over the body; this, however, increases complexity and creates issues with differentiating between the different vibrations. Not to mention that human interpretation adds latency to the system, which may result in running into the very obstacle we are trying to avoid.

This all sounds interesting, but does it work? Yes, yes it does. I was over in Munich recently to test an early prototype. With only one sensor I felt we were so close I was tempted to test it while running. The immediacy of the system is incredible. It is totally intuitive that a vibration denotes an obstacle. Avoiding the obstacle is a simple case of drifting left or right until there is no vibration, then moving on by.

Below is a video of the device in action. I will continue to give updates on the development of the system up until I give it a real workout at a packed city marathon, where I will run solo.

IBM & CMU assisting in mobility

Mobility for the visually impaired is always difficult, from tasks as simple as heading to Starbucks for a coffee, to jumping on a bus or grabbing a taxi. Let's take the first example: heading to Starbucks is certainly challenging when you are unable to see, but what about when you enter the store? Without sighted assistance, locating the counter, or indeed finding somewhere to sit, is challenging.

Therefore, any technology that aims to address any of these mobility issues is a step in the right direction. At the risk of this blog turning into IBM fandom, it is yet another project IBM are working on.

Along with Carnegie Mellon University, IBM have developed and open sourced a smartphone application that can help you move from point A to point B.

The app, called NavCog, utilises either voice or haptic feedback to aid navigation. It currently uses beacons to assist in the navigation process.

It is great to see the combination of beacons and haptic feedback to aid navigation. Over 4 years ago I was pitching to just about every GPS manufacturer that this could be an interesting direction to head in. My ideas seemed sound when Apple announced the Apple Watch, which used the exact same haptic feedback system I had been proposing. Furthermore, the use of beacon technology to navigate is exactly what I pitched to British Airways a couple of years ago.

I proposed that beacons in Terminal 5 could not only be used to direct potential customers to shops, restaurants and gates, but also help visually impaired customers navigate the terminal.

It is truly great to see all these ideas put together and finally implemented. We now just need a larger rollout of beacon technology!

This system could also be adapted to solve the indoor navigation problem. I was speaking with Google a year or so ago about how Project Tango could be utilised to achieve this. I imagined a haptic feedback device that could assist in real-time indoor navigation. After all, my guide dog may be able to avoid obstacles, but an empty chair is just another obstacle to my guide dog!

Artificial Intelligence and accessibility

Over the past couple of weeks I have been fortunate enough to be exposed to some fantastic technology as well as ideas. Attending WiRED 2015 kickstarted my thought process on how artificial intelligence could be applied to accessible technology.

While attending the conference there were two ideas I wanted to pitch to people: emotion detection to facilitate social situations for the visually impaired, and facial recognition. I felt both of these technologies could greatly improve an individual's ability to socialise. After chatting to a few people and pitching my ideas on how these systems could work from a design, implementation and marketing front, I managed to interest a few companies and institutions.

There is fantastic scope for these technologies and their assistive potential. I concentrated on the emotion detection system initially, as I feel it could have the greatest and speediest impact. I have framed the idea as a product for everyone, rather than a product specifically for the visually impaired, as I believe this to be key for mass market adoption, which in turn will reduce the price significantly and lower that initial barrier to any accessible product: price.

I have yet to find a partner to work with on facial recognition, but I recently read an article highlighting that IBM are working on this. It really does seem, as time goes on, that IBM and I could be a great match!

I also had a grander idea on accessibility while at the conference and was delighted to see it referenced by, yet again, IBM – cognitive assistance. I have been batting around a few ideas on how accessibility could be personalised. After all, there are nuances in an individual's accessibility needs, so why not make the solutions just as nuanced? This could definitely be achieved through a cognitive accessibility assistant that has the capacity to learn.

An accessible system that is capable of learning could aid in tasks such as reading. It would be able to identify how an individual likes to read information and present it in that fashion. A nice example would be skim reading: being able to learn how to read a specific document for certain contextual references would be fantastic. This would certainly have assisted me greatly while at university; the ability to skim read is absolutely a skill I miss.

I continue to be excited by what technology is enabling and how I can become part of the revolution of accessibility.