
Tips to make Slideshows Accessible

Slideshows are a common sight on websites today. Websites use slideshows to showcase their products and services. The slides consist mainly of images, at times supplemented with text.

While including slideshows on web pages, authors often tend to overlook their accessibility. As a result, different user groups find it difficult to access the information presented through slideshows. So I decided to write a blog post on how to make slideshows accessible.

Before moving on to the tips for making slideshows accessible, let's first understand which user groups are affected by inaccessible slideshows:

  • Users with learning disabilities, such as attention deficit disorders, who will find it difficult to concentrate on the rest of the page content because of the moving slides.
  • Users with visual impairments who use screen readers and screen magnifiers, as they will not be able to follow the information being read out to them while the slides keep changing.
  • Users with mobility impairments, who rely on the keyboard to access web pages and will find it difficult to navigate between slides if the slideshow controls lack keyboard support.

Here are a few tips for making slideshows accessible (a code sketch illustrating several of them follows the list):

  • Add controls such as Pause, Play, Previous, and Next to allow users to control the slides.
  • Ensure that the slideshow controls are keyboard accessible.
  • Specify descriptive alternate text for slideshow controls if images are used for them.
  • Add instructions to orient screen reader and Braille display users about the presence of a slideshow on the page. Hide the instructions using CSS so that the visual presentation of the page is maintained.
  • If only images are present in the slides, add descriptive alternate text to convey the information.
  • Last but not least, ensure that only the content of the currently visible slide is available to keyboard and screen reader users, and not the content of all the slides.
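
To tie several of these tips together, here is a minimal sketch in TypeScript of a slideshow controller. The markup structure, element IDs, class names and the visually-hidden CSS class are assumptions made for illustration; they are not taken from any particular slideshow library, so adapt them to your own implementation.

// Assumed markup (illustrative):
//   <div id="slideshow">
//     <p class="visually-hidden">Slideshow: use the Previous, Next and Pause
//        buttons to control the slides.</p>
//     <div class="slide"> ... </div>
//     <div class="slide"> ... </div>
//     <button id="prev">Previous</button>
//     <button id="next">Next</button>
//     <button id="pause">Pause</button>
//   </div>
// Assumed CSS for hiding the instructions visually while keeping them
// available to screen readers:
//   .visually-hidden { position: absolute; width: 1px; height: 1px;
//                      overflow: hidden; clip: rect(0 0 0 0); }

const slides = Array.from(document.querySelectorAll<HTMLElement>('#slideshow .slide'));
let current = 0;
let timer: number | undefined;

function showSlide(index: number): void {
  current = (index + slides.length) % slides.length;
  slides.forEach((slide, i) => {
    const active = i === current;
    // Expose only the currently visible slide to screen readers and the keyboard.
    slide.hidden = !active;
    slide.setAttribute('aria-hidden', String(!active));
  });
}

function startAutoPlay(): void {
  timer = window.setInterval(() => showSlide(current + 1), 5000);
}

function stopAutoPlay(): void {
  if (timer !== undefined) {
    window.clearInterval(timer);
    timer = undefined;
  }
}

// Native <button> elements are focusable and respond to Enter/Space by default,
// so the controls stay keyboard accessible without extra key handling.
document.getElementById('prev')?.addEventListener('click', () => { stopAutoPlay(); showSlide(current - 1); });
document.getElementById('next')?.addEventListener('click', () => { stopAutoPlay(); showSlide(current + 1); });
document.getElementById('pause')?.addEventListener('click', stopAutoPlay);

showSlide(0);
startAutoPlay();

Using real buttons and toggling both hidden and aria-hidden keeps the keyboard focus order and the screen reader output limited to the slide that is actually on screen.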

Well, those are my observations on making slideshows accessible. If you would like to add to the list, drop in your comments and I shall add them.


PDF Accessibility Testing with JAWS, NVDA and Window-Eyes

Accessibility testing of PDF files largely involves screen reader testing on the Microsoft Windows platform. For years, JAWS and Window-Eyes were the two screen readers used for testing PDF documents on Windows. However, other screen readers, such as NVDA and Supernova, have joined the party and are today used extensively for testing web pages and PDF documents for accessibility.

So I decided to analyze different screen readers running on the Windows platform for accessing and testing PDF accessibility. Different elements, such as the document title, headings, images, bookmarks, tables, forms, lists, and links, were tested with each screen reader to understand how they interpreted them and how well they read a tagged PDF file.

All the screen readers interpreted a couple of things uniformly and correctly, namely “Untagged Document” and “Document Reading Order”, so these were not included in the analysis.

Here are my findings from the analysis, carried out on the Windows 7 platform with Adobe Reader X, using the leading screen readers JAWS 13.0.527, NVDA 2011.3, Window-Eyes 7.5.3.0, and Supernova 12.07, for accessibility testing of the different elements of a PDF file.

Document Title

Two aspects were checked: reading the file title on pressing a keystroke, and reading the file title while switching between applications (Alt + Tab in Windows):

  • JAWS: Reads the file title correctly in both the cases.
  • NVDA: Reads the file title correctly while switching between applications but fails to read it on pressing a keystroke, instead reading “acrord32”.
  • Window-Eyes: Reads the file title correctly in both the cases.
  • Supernova: Reads the file title correctly in both the cases.
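
Before running such a screen reader test, it can help to confirm that the title metadata is actually set in the PDF. Below is a minimal sketch using the pdf-lib package in TypeScript on Node.js; the file name is a placeholder, and this is only a quick programmatic sanity check on my part, not part of the original analysis.

import { readFileSync } from 'fs';
import { PDFDocument } from 'pdf-lib';

async function checkTitle(path: string): Promise<void> {
  // Load the PDF and read the Title entry from its document metadata.
  const doc = await PDFDocument.load(readFileSync(path));
  const title = doc.getTitle();
  if (title && title.trim().length > 0) {
    console.log(`Document title is set: "${title}"`);
  } else {
    console.log('No document title set; screen readers may announce the file name instead.');
  }
}

checkTitle('sample.pdf').catch(console.error); // placeholder file name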

Headings

The heading structure of the file was checked using the quick navigation key ‘H’ and the heading list utility of each screen reader:

  • JAWS: Reads the tagged headings correctly.
  • NVDA: Reads the tagged headings correctly.
  • Window-Eyes: Fails to identify the tagged headings and instead reads them as plain text.
  • Supernova: Reads the tagged headings correctly.

Bookmarks

Two aspects were checked: navigating between the Bookmarks pane and the document area of Adobe Reader, and moving focus to the marked location:

  • JAWS: Navigation between the Bookmarks pane and the document area works fine, but focus fails to move to the marked location. In fact, selecting a bookmark results in focus moving erratically on the page, sometimes above and sometimes below the marked location. It was noted that visually the focus moves to the correct location, but the JAWS focus tends to land on the user’s last position on that page.
  • NVDA: Similar to JAWS, navigation works fine but focus tends to move to the top of the page instead of the marked location. However, visual focus moves to the correct location.
  • Window-Eyes: Similar to JAWS and NVDA, navigation works fine but focus tends to move to the top of the page instead of the marked location. However, visual focus moves to the correct location.
  • Supernova: Similar to JAWS, NVDA and Window-Eyes, navigation works fine but focus tends to move to the top of the page instead of the marked location. However, visual focus moves to the correct location.

Lists

Different types of lists were analyzed: ordered and unordered lists as well as nested lists:

  • JAWS: Reads the lists and nested lists correctly.
  • NVDA: Reads the lists and nested lists correctly.
  • Window-Eyes: Fails to identify the lists and nested lists and instead reads them as plain text.
  • Supernova: Reads the lists and nested lists correctly.

Images

Different types of images, such as simple images, decorative images, and complex charts, were tested using the quick navigation key ‘G’ as well as by reading the page content continuously:

  • JAWS: Reads the alternate text correctly.
  • NVDA: Reads the alternate text correctly.
  • Window-Eyes: Reads the alternate text but, as with web page reading, does not identify it as a ‘Graphic’, making it difficult for users to differentiate it from the rest of the text on the page. A point to note is that it does read the alternate text on pressing the quick navigation key ‘G’, giving users, with a bit of tweaking, an option to identify the image information on the page.
  • Supernova: Reads the alternate text but does not identify it as a ‘Graphic’, making it difficult for users to differentiate it from the rest of the text on the page. It also fails to read the alternate text on pressing the quick navigation key ‘G’.

Links

Links were checked with the ‘Tab’ key as well as with the list of links utility of each screen reader:

  • JAWS: Reads the links correctly.
  • NVDA: Reads the links correctly.
  • Window-Eyes: Reads the links correctly.
  • Supernova: Reads the links correctly.

Tables

Table identification, table headers, and table structure (number of rows and columns) were tested for simple as well as complex tables:

  • JAWS: Identifies the table and its structure correctly. Also reads the marked table headers correctly for simple as well as complex data tables.
  • NVDA: Identifies the table and its structure correctly. However, it fails to identify the marked table headers for simple as well as complex data tables.
  • Window-Eyes: Identifies the table and its structure correctly. However, it fails to identify the marked table headers for simple as well as complex data tables.
  • Supernova: Identifies the table and its structure correctly. However, it fails to identify the marked table headers for simple as well as complex data tables.

Forms

Different form fields were tested in document reading mode as well as in the Forms/Focus/Browse mode of each screen reader:

  • JAWS: Reads the form field label, role and state information correctly in both the modes.
  • NVDA: Reads the form field label, role and state information correctly in both the modes.
  • Window-Eyes: Reads the form field label, role and state information correctly in document reading mode but fails to read the label of check boxes in ‘Browse Off’ mode. It reads the check box as “Check box Unchecked”.
  • Supernova: Reads the form field label, role and state information correctly in both the modes.

Conclusion

Apart from links, the elements analyzed yielded differing and quite interesting results across the screen readers. All in all, JAWS is leading the pack when it comes to testing PDF accessibility on the Windows platform, and NVDA is in second position with a couple of elements to catch up on. Supernova is in third position, and Window-Eyes has a long way to go.