PDF Accessibility Testing with JAWS, NVDA and Window-Eyes

Accessibility testing of PDF files, to a great extent, involves screen reader testing on the Microsoft Windows platform. Over the years, JAWS and Window-Eyes have been the two screen readers used for testing PDF documents on Windows. However, other screen readers, such as NVDA and Supernova, have joined the party and are today used extensively for testing web pages and PDF documents for accessibility.

So I decided to analyze different screen readers running on the Windows platform for accessing and testing PDF accessibility. Different elements, such as the document title, headings, images, bookmarks, tables, forms, lists, and links, were tested with each screen reader to understand how it interpreted them and how well it read a tagged PDF file.

All the screen readers interpreted a couple of things uniformly and correctly, namely an untagged document and the document reading order, so these were not included in the analysis.

Here are my findings from the analysis, carried out on the Windows 7 platform with Adobe Reader X, using different leading screen readers: JAWS 13.0.527, NVDA 2011.3, Window-Eyes 7.5.3.0 and Supernova 12.07, for accessibility testing of different elements of a PDF file.

Document Title

Two aspects were checked: reading the file title on the press of a keystroke, and reading the file title while switching between applications (Alt + Tab in Windows):

  • JAWS: Reads the file title correctly in both the cases.
  • NVDA: Reads the file title correctly while switching between applications but fails to read it on the press of a keystroke, reading “acrord32” instead.
  • Window-Eyes: Reads the file title correctly in both the cases.
  • Supernova: Reads the file title correctly in both the cases.
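For context, what a screen reader announces as the title depends on two things in the file: the /Title entry in the document information dictionary (or the XMP metadata), and the /DisplayDocTitle flag in the catalog’s viewer preferences, which tells the viewer to expose the title instead of the file name. Below is a minimal Python sketch of the two PDF dictionaries involved; it is not a complete, valid PDF (no xref table or page content), and the object numbers are made-up examples, just to illustrate the entries that matter:

```python
def title_objects(title: str) -> bytes:
    """Sketch the catalog and info dictionaries for a titled PDF."""
    catalog = (
        b"1 0 obj\n"
        b"<< /Type /Catalog /Pages 2 0 R\n"
        # /DisplayDocTitle tells the viewer (and AT) to announce /Title,
        # not the file name
        b"   /ViewerPreferences << /DisplayDocTitle true >> >>\n"
        b"endobj\n"
    )
    info = (
        b"4 0 obj\n"
        # the value read aloud as the document title
        b"<< /Title (" + title.encode("ascii") + b") >>\n"
        b"endobj\n"
    )
    return catalog + info

print(title_objects("Quarterly Accessibility Report").decode("ascii"))
```

When /DisplayDocTitle is false or absent, the viewer exposes the file name instead, which is usually what a screen reader then reads.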

Headings

Heading structure of the file was checked using the quick navigation key ‘H’ and heading list utility of the screen readers:

  • JAWS: Reads the tagged headings correctly.
  • NVDA: Reads the tagged headings correctly.
  • Window-Eyes: Fails to identify the tagged headings and instead reads them as plain text.
  • Supernova: Reads the tagged headings correctly.

Bookmarks

Two aspects were checked: navigating between the Bookmarks pane and the document area of Adobe Reader, and moving focus to the marked location:

  • JAWS: Navigation between the Bookmarks pane and the document area works fine, but focus fails to move to the marked location. In fact, selecting a bookmark results in focus moving erratically on the page, sometimes above and sometimes below the marked location. It was noted that, visually, focus moves to the correct location, but the JAWS cursor tends to land on the user’s last position on that page.
  • NVDA: Similar to JAWS, navigation works fine but focus tends to move to the top of the page instead of the marked location. However, visual focus moves to the correct location.
  • Window-Eyes: Similar to JAWS and NVDA, navigation works fine but focus tends to move to the top of the page instead of the marked location. However, visual focus moves to the correct location.
  • Supernova: Similar to JAWS, NVDA and Window-Eyes, navigation works fine but focus tends to move to the top of the page instead of the marked location. However, visual focus moves to the correct location.
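As background on what the viewer is doing here: a bookmark in a PDF is an outline item whose /Dest entry points at an explicit destination, typically an array such as [page /XYZ left top zoom] (ISO 32000, §12.3.2). The visual jump that all four screen readers got right comes from that array; the wrong reading position suggests their virtual cursors simply do not resynchronize to it. A hedged Python sketch of one such outline item follows, with a made-up object reference and coordinates:

```python
def outline_item(title: str, page_ref: str, top: int) -> str:
    """Sketch one PDF outline (bookmark) item as a dictionary string.

    The /XYZ destination scrolls the view so that `top` (in PDF
    user-space points, measured from the bottom of the page) sits at
    the top of the window; null leaves the zoom level unchanged.
    """
    dest = f"[{page_ref} /XYZ 0 {top} null]"
    return f"<< /Title ({title}) /Dest {dest} >>"

print(outline_item("Chapter 2", "5 0 R", 700))
```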

Lists

Different types of lists, i.e. ordered and unordered lists as well as nested lists, were analyzed:

  • JAWS: Reads the lists and nested lists correctly.
  • NVDA: Reads the lists and nested lists correctly.
  • Window-Eyes: Fails to identify the lists and nested lists and instead reads them as plain text.
  • Supernova: Reads the lists and nested lists correctly.

Images

Different types of images, such as simple images, decorative images and complex charts, were tested using the quick navigation key ‘G’ as well as by reading the page content continuously:

  • JAWS: Reads the alternate text correctly.
  • NVDA: Reads the alternate text correctly.
  • Window-Eyes: Reads the alternate text but, as with web pages, does not identify it as a ‘Graphic’, making it difficult for users to differentiate it from the rest of the text on the page. A point to note is that it does read the alternate text on pressing the quick navigation key ‘G’, so with a bit of tweaking users can still identify the image information on the page.
  • Supernova: Reads the alternate text but does not identify it as a ‘Graphic’, making it difficult for users to differentiate it from the rest of the text on the page. It also fails to read the alternate text on the press of a keystroke (‘G’).

Links

Links were checked with the ‘Tab’ key as well as using the list of links utility of each screen reader:

  • JAWS: Reads the links correctly.
  • NVDA: Reads the links correctly.
  • Window-Eyes: Reads the links correctly.
  • Supernova: Reads the links correctly.

Tables

Table identification, table headers and table structure (number of rows and columns) for simple as well as complex tables were tested:

  • JAWS: Identifies the table and its structure correctly, and reads the marked table headers correctly for simple as well as complex data tables.
  • NVDA: Identifies the table and its structure correctly but fails to identify the marked table headers for simple as well as complex data tables.
  • Window-Eyes: Identifies the table and its structure correctly but fails to identify the marked table headers for simple as well as complex data tables.
  • Supernova: Identifies the table and its structure correctly but fails to identify the marked table headers for simple as well as complex data tables.
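The header marking being tested here is done with /TH structure elements in the tag tree, optionally carrying a /Scope attribute (/Row, /Column or /Both) under the /Table attribute owner (ISO 32000, §14.8.4). A minimal Python sketch of such a tag dictionary, with the cell content entry (/K) omitted for brevity:

```python
def th_element(scope: str = "Column") -> str:
    """Sketch the structure element for a table header cell.

    The /K entry holding the actual cell content is omitted; only the
    header marking that screen readers look for is shown.
    """
    assert scope in ("Row", "Column", "Both")
    # /A attaches an attribute dictionary; /O names its owner (/Table)
    return f"<< /Type /StructElem /S /TH /A << /O /Table /Scope /{scope} >> >>"

print(th_element("Row"))
```

JAWS reading headers correctly while the others do not suggests it is the only one of the four that walks these attributes.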

Forms

Different form fields were tested in document reading mode as well as in the Forms/Focus/Browse mode of each screen reader:

  • JAWS: Reads the form field label, role and state information correctly in both the modes.
  • NVDA: Reads the form field label, role and state information correctly in both the modes.
  • Window-Eyes: Reads the form field label, role and state information correctly in document reading mode but fails to read the label of check boxes in ‘Browse Off’ mode, announcing them only as “Check box Unchecked”.
  • Supernova: Reads the form field label, role and state information correctly in both the modes.
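The “label” the screen readers read for a form field is normally the field dictionary’s /TU entry (the tooltip in Acrobat’s form tools), with the internal field name /T as a fallback. A small Python sketch of a text field dictionary showing both entries; the widget annotation and its geometry are omitted, and the names are made-up examples:

```python
def text_field(name: str, tooltip: str) -> str:
    """Sketch a text form field dictionary; widget annotation omitted.

    /T is the internal field name; /TU is the human-readable tooltip
    that screen readers announce as the field's label.
    """
    return f"<< /FT /Tx /T ({name}) /TU ({tooltip}) >>"

print(text_field("fname", "First name"))
```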

Conclusion

Apart from links, every element analyzed yielded different results across the screen readers, which is quite interesting. All in all, JAWS leads the pack when it comes to testing PDF accessibility on the Windows platform, and NVDA is in second place with a couple of elements to catch up on. Supernova is in third place, and Window-Eyes has a long way to go.

This entry was posted in Accessibility, PDF, Screen Readers, Testing.

22 Responses to PDF Accessibility Testing with JAWS, NVDA and Window-Eyes

  1. Mike Elledge says:

    Hi Priti–

    This is very helpful, especially as we discuss the appropriate eDocuments policy at MSU. I do have a question: How did you set focus for a particular part of the PDF document for the Bookmarks test? Also, would it be possible to post the test document so we can see how it’s tagged? Thanks!

    • priti says:

      Mike, I won’t be able to share the test files. Bookmarks were created using the Bookmark panel & no additional effort was put in to handle the focus.

  2. Mike Moore says:

    It looks like PDF still poses significant barriers for people who are blind and do not use JAWS. A couple of years ago NVDA had better performance than JAWS. I would love to see similar tests with Nova on Windows and with VoiceOver for the Mac.

    My main takeaway from this is that HTML is still the best way to ensure that content is available to everyone. I am particularly disappointed in the PDF form performance, since PDF forms are more useful in certain circumstances than HTML.

    • priti says:

      Hi Mike,

      I hope by ‘nova’ you mean ‘Supernova’ from Dolphin. I shall try to add my findings for Supernova for Windows, but I am not sure about Mac as I don’t own a Mac computer, so chances are that it would be difficult to test the PDFs with VoiceOver as of now, but surely some time in the future.
      As far as forms are concerned, forms can be made accessible but I agree that more effort needs to be put in for the same. As far as screen readers are concerned, I was thrilled with JAWS & NVDA’s interpretation of tagged PDF forms.

  3. Kathy Keller says:

    Thank you for making this effort. This information is interesting and very useful.

  4. Ryan Benson says:

    Very interesting.

    There are three ways to construct bookmarks: 1) use the Bookmarks panel; 2) make a destination and then link a bookmark to that destination; 3) use JavaScript.

  5. Yes, we also need analysis from other screen readers and other platforms, such as Nova and VoiceOver on the Mac/Safari platform. And of course repeated analysis in the future (once in March 2012 isn’t sufficient to base policy on), along with availability of the PDF files themselves, so we can separate the PDF tagging issues from the screen reader support issues and from the end user configuration issues (and I’m assuming there are no platform issues).

    • priti says:

      Hi Phill, Yes I shall be adding findings for Supernova for Windows but won’t be able to add VoiceOver findings as I don’t own a Mac computer. Also I shall do repeated analysis of the same in the future and shall try to make the test files available as well. Hope that helps…

      • Jason Hester says:

        Thanks Priti,

        This is very useful and is within my own findings using JAWS and NVDA primarily for PDF testing.

        I agree that we should do more testing with other screen-readers as well including VoiceOver and maybe even SAToGo.

        In my limited experience with VoiceOver it seems that headings, lists, and links read out well, but I have had some trouble reading tables. I am not sure if that was due to my unfamiliarity with some of VoiceOver’s features.

        Look forward to hearing more in the future and will contribute my findings with other screen-readers where I can.

        • priti says:

          Thanks Jason for sharing a few of your VoiceOver findings! Yes, I shall come up with other screen reader findings as suggested by you all in the near future. I had opted for the three most widely used screen readers on the Windows platform but would surely be happy to build on that.

  6. tony says:

    This is very useful because we are beginning to do a review of our screen readers and if there are other people coming to the same conclusion then I know that we are doing it right.

    Thanks for the post

  7. Mal says:

    Any findings on comparing similar items with Kurzweil? I am particularly interested in how to get around the editing of scanned materials that tend to get laborious.

  8. Gary says:

    As someone who uses assistive technology that ISN’T screen reader software, once again, only a limited audience was considered by ‘accessibility experts’ or consultants. What about those of us who use speech recognition software? Navigating PDF forms for us is just as difficult as for those who use screen readers. Should we start pitting ourselves against one another (e.g., there are more of us than there are of you)? Is that the implication?

    • priti says:

      Hi Gary,
      I completely agree with your concerns and assure you that this review will not be restricted to screen readers but will also include other ATs and automated tools in the near future. As we’ll all agree, “Accessibility is not only about screen readers!” I will ensure that in my future posts I make it clear that reviews consider all user groups and not only screen reader users or visually impaired users. Apologies if this has hurt your sentiments.
      Regards,
      Priti

  9. Neil King says:

    Hi All

    As some of you may be aware, the Australian Government and Vision Australia, with the assistance of Adobe, conducted an in-depth study into the accessibility of PDF back in 2010. The work included a detailed analysis of the support by screen readers and other devices for the PDF format, including those listed in the study above.

    The report detailing the findings from the Australian Government’s study into the Accessibility of the Portable Document Format for people with a disability is available at: http://www.finance.gov.au/publications/pdf-accessibility-study/index.html

    The findings of this report have been instrumental in focusing R&D into this area. Assistive technology developers, Adobe, the W3C, the Australian Government, Vision Australia and others have all made strides in providing greater support and advice to remove the accessibility barriers in relation to the PDF format. As a result of this work, the Australian Government now considers the PDF format to be an accessibility-supported technology that can be used as a standalone format if compliant with WCAG 2.0.

  10. Sandesh says:

    Good work indeed!
    I just remembered my brief period of testing such PDF documents. I tried my hand at JAWS and Supernova; I couldn’t get the opportunity to do the testing with Window-Eyes. All such future studies and findings are awaited.
    thanks

  11. pvagner says:

    Hello,
    I don’t want to hurt anyone, but just to gently remind everyone that the NVDA 2012.1 release overlapped a little with Priti’s work.
    Some of the little things she has mentioned are improved in the new stable version of NVDA.
    The report title script now correctly reports the window title while the Adobe Reader X window has focus. Previously it would only report acrord32.
    There are some significant changes related to interpreting and reporting tables; however, I haven’t tested that myself, so I am unable to say whether it really addresses some of the points mentioned in the article.

    • priti says:

      Hi,

      Thanks a lot for sharing your findings with the new release of NVDA. It’s not about hurting anyone; in fact, readers will benefit from this update. It is much appreciated!

      Regards,
      Priti
