Today, May 9, 2012, is Global Accessibility Awareness Day (#GAAD). What started as a simple blog post by Los Angeles web developer Joe Devon has grown to include events around the world designed to increase awareness about web accessibility issues. To read more about the day and these various activities, see the official GAAD website and Facebook page.
According to the US Centers for Disease Control and Prevention, “Today, about 50 million Americans, or 1 in 5 people, are living with at least one disability, and most Americans will experience a disability some time during the course of their lives.” In other parts of the world, this number may be significantly higher.
In the interest of full disclosure, Joe Devon is a personal friend of mine, and I must admit that if he were not, I likely wouldn’t have seen his blog post or explored the issues of accessibility as deeply as I have in recent weeks. But I have been exploring, and I’ve been surprised at what I’ve found. In my opinion, Semantic Technology and Assistive Technology are a natural fit for one another, but there seems to be very little discussion or work around the intersection of the two. I have looked, but have not found much collaboration between the two communities. I have also found few individuals who possess much knowledge about both Semantic Tech and Assistive Tech. Of course, if I’ve missed something, please let me know in the comments!
My premise goes something like this:
Because differently abled people access web content with devices unique to their disabilities, those machines need to be able to interpret, compute, translate, and process data. Effective assistive technology relies on making web content TRANSLATABLE, COMPUTABLE, AND PROCESSABLE — BY MACHINES! This should sound very familiar to SemanticWeb.com readers.
Certainly, there are unique things that need to be done to content to make it accessible, and unique semantic markup that needs to be in place to make that same content interpretable by machines. There are nuances and standards in both arenas that should be learned and followed. I strongly suspected that the processes of actually implementing those two sets of standards are very similar, and, having not worked with accessibility standards before, I sought out someone who had experience with both.
I spoke to Jonathan Ingram, founder of Ingserv.com, a web shop in the UK that focuses on accessibility issues. Ingram is also the creator of the web comic Bifter, which has RDFa "under the hood" and is also screen-reader friendly. In our chat, I asked him about the processes of making a site accessible and of adding semantic markup to a site (both of which he has been doing for many years).
“Coding for accessibility and coding semantic markup felt like a natural fit for me. When you’re attacking RDFa, you’re editing the attributes to try to get a result that may not be visually apparent right away. That approach/workflow is exactly the same for accessibility.”
I also asked Brian Sletten, President of Bosatsu Consulting, who added, “Given that RDFa data can be trivially extracted with surrounding context, it seems like a good match for allowing alternate user interfaces to be built somewhat dynamically on top of the UI that is already there. UI concerns for accessibility are not unique to this, but this approach certainly makes zooming, reading, voice commands, etc. easier to handle.”
“Additionally, a lot of the work can be done through templates so it doesn’t have to be a big burden on the original developer (which is important for accessibility adoption, otherwise people will just ignore it).”
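Sletten's point that RDFa can be "trivially extracted" is easy to demonstrate. The sketch below is purely illustrative: the `RDFaExtractor` class and the sample markup are my own inventions (not Bifter's actual markup), and it handles only the simplest `property`/`content` patterns. A real application would use a full RDFa processor such as the one in rdflib.

```python
from html.parser import HTMLParser

class RDFaExtractor(HTMLParser):
    """Minimal sketch: collect (property, value) pairs from RDFa attributes."""

    def __init__(self):
        super().__init__()
        self.pairs = []       # (property, value) pairs found so far
        self._pending = None  # property whose value is the element's text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("property")
        if prop is None:
            return
        if "content" in attrs:
            # Value carried in the content attribute, nothing visible on screen
            self.pairs.append((prop, attrs["content"]))
        else:
            # Value is the element's own text content
            self._pending = prop

    def handle_data(self, data):
        if self._pending is not None and data.strip():
            self.pairs.append((self._pending, data.strip()))
            self._pending = None

# Hypothetical RDFa-annotated fragment using Dublin Core terms
html = """
<div vocab="http://purl.org/dc/terms/">
  <h1 property="title">Bifter Issue 1</h1>
  <span property="creator" content="Jonathan Ingram"></span>
</div>
"""

extractor = RDFaExtractor()
extractor.feed(html)
print(extractor.pairs)
```

Note that the `creator` value never renders on the page at all, which is exactly the kind of "not visually apparent" markup Ingram describes, yet a machine (a screen reader, a crawler, or an alternate UI) can read it without difficulty.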
I believe that because both communities are focused on getting developers to pay attention to code that does not render something immediately visible or demonstrable to most users, we face similar cultural challenges in gaining adoption. I also believe there is a big connection to be drawn between Assistive and Semantic Technologies, and big opportunities — personal and business — to be had.
In honor of Global Accessibility Awareness Day, then, I humbly pose some challenges to our readers:
- Let’s start a discussion! It should be clear by now that I think the SemTech community and the Accessibility community are not working together as closely as they could. Do you agree? What can we do to change that? Please add your thoughts to the comments below.
- Imagine navigating the web if you did not have sight. Or hearing. Or mobility. What if you were an amputee? How would you interact with the various devices you currently use for web content? If you have never experienced having a disability, I encourage you to try at least one of the following for one hour today:
- Disconnect your mouse (i.e., use your keyboard alone).
- Turn off your screen (i.e., use one of the free screen readers available). Here's one: NVDA – http://www.nvda-project.org/ (perhaps try it on Bifter and then on a site you had a hand in building).
- Turn on the accessibility features of your iPhone or other devices and operating systems.
- Watch YouTube videos without sound or captioning. Then watch a short video with auto-captioning turned on; for example, try this "Intro to Semantic Web" video: http://www.youtube.com/watch?v=OGg8A2zfWKg. To turn on auto-captioning, click the small "CC" button in the lower right of the player to open a menu of options and select "Transcribe Audio (BETA)."
After you have tried these things, tell us about the experience in the comments below.