This paper raises the need for quantitative accessibility measurement and proposes three application scenarios where quantitative accessibility metrics are useful: quality assurance within Web Engineering, Information Retrieval, and accessibility monitoring. We propose a quantitative metric that is automatically calculated from the reports of automatic evaluation tools. In order to assess the reliability of the metric, 15 websites (1,363 web pages) are measured based on the results yielded by two evaluation tools: EvalAccess and LIFT. Statistical analysis of the results shows that the metric depends on the evaluation tool. However, Spearman's test shows a high correlation between the results of the different tools. Therefore, we conclude that the metric is reliable for ranking purposes in the Information Retrieval and accessibility monitoring scenarios and can also be partially applied in a Web Engineering scenario.
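The ranking-reliability argument above rests on Spearman's rank correlation: even if two tools yield different absolute scores, a high rank correlation means they order the websites consistently. A minimal sketch of that check, with made-up per-site scores (not the paper's data), might look like this:

```python
# Illustrative sketch: do two evaluation tools rank the same sites consistently?
# Spearman's rho is the Pearson correlation of the rank-transformed scores.
# The score lists below are hypothetical examples, not results from the paper.

def ranks(values):
    """Assign 1-based average ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman's rho as the Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-site accessibility scores from two different tools:
tool_a = [0.12, 0.45, 0.30, 0.80, 0.05]
tool_b = [0.20, 0.55, 0.35, 0.90, 0.10]
print(round(spearman_rho(tool_a, tool_b), 3))  # identical orderings → 1.0
```

A rho near 1 supports using either tool's metric for ranking (e.g. ordering search results), even when the absolute values disagree.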
Accessibility is one of the key challenges that the Internet must currently face to guarantee universal inclusion. Accessible Web design requires knowledge and experience from the designer, who can be assisted by broadly accepted guidelines. Nevertheless, applying guidelines may not be straightforward, and many designers may lack the experience to use them. The difficulty increases because, as research on accessibility progresses, existing sets of guidelines are updated and new sets are proposed by diverse institutions. Therefore, the availability of tools to evaluate accessibility, and eventually repair the detected defects, is crucial. This paper presents a tool, EvalIris, developed to automatically check the accessibility of websites using sets of guidelines that, by means of a well-defined XML structure, can be easily replaced or updated.
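The core idea of EvalIris's design is that guideline sets are data, not code: because they follow a well-defined XML structure, swapping or updating a guideline set means swapping an XML file. The schema and rule names below are invented for illustration (EvalIris's actual format is not shown in the abstract), but a minimal sketch of the pattern could be:

```python
# Hypothetical sketch of XML-defined, replaceable guideline sets.
# The schema below is an invented example, not EvalIris's real format.
import xml.etree.ElementTree as ET

GUIDELINES_XML = """\
<guidelineset name="example-set" version="1.0">
  <guideline id="g1">
    <description>Provide text alternatives for images</description>
    <check element="img" requiredAttribute="alt"/>
  </guideline>
  <guideline id="g2">
    <description>Identify the document language</description>
    <check element="html" requiredAttribute="lang"/>
  </guideline>
</guidelineset>
"""

def load_guidelines(xml_text):
    """Parse a guideline set into a list of checkable rules."""
    root = ET.fromstring(xml_text)
    rules = []
    for g in root.findall("guideline"):
        check = g.find("check")
        rules.append({
            "id": g.get("id"),
            "description": g.findtext("description"),
            "element": check.get("element"),
            "attribute": check.get("requiredAttribute"),
        })
    return rules

rules = load_guidelines(GUIDELINES_XML)
print([r["id"] for r in rules])  # → ['g1', 'g2']
```

An evaluator built this way needs no code changes when a standards body revises its guidelines; only the XML file is replaced.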
This paper presents a framework and system to evaluate the accessibility of web pages according to the individual requirements of users with disabilities. These requirements include not only users' abilities but also users' assistive technologies and the delivery context. To ensure interoperability with other software components, user requirements are specified taking advantage of the extensibility of the W3C CC/PP recommendation and other feature-specification vocabularies. An evaluation tool capable of understanding these specifications generates evaluation reports that are tailored to the user's individual needs. Quantitative accessibility measures resulting from personalized evaluation reports can be used to improve the web browsing experience for users with disabilities, such as through adaptive navigation support and by sorting the results of search engines according to users' personal requirements. In addition, developers benefit from personalized evaluations when developing websites for specific audiences.
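One of the applications named above is re-ranking search results by a per-user accessibility score derived from personalized evaluation reports. The scoring scheme and data below are assumptions for illustration (the abstract does not specify how relevance and accessibility are combined), but the idea can be sketched as:

```python
# Sketch of personalized re-ranking: blend each result's relevance with the
# accessibility score computed for this specific user's profile.
# URLs, scores, and the blending weight are hypothetical examples.

results = [
    {"url": "http://a.example", "relevance": 0.9, "accessibility": 0.4},
    {"url": "http://b.example", "relevance": 0.8, "accessibility": 0.9},
    {"url": "http://c.example", "relevance": 0.7, "accessibility": 0.7},
]

def rerank(results, weight=0.5):
    """Sort results by a weighted mix of relevance and the user's
    personalized accessibility score (weight = accessibility emphasis)."""
    key = lambda r: (1 - weight) * r["relevance"] + weight * r["accessibility"]
    return sorted(results, key=key, reverse=True)

print([r["url"] for r in rerank(results)])
# → ['http://b.example', 'http://c.example', 'http://a.example']
```

With equal weighting, the highly relevant but poorly accessible page drops below pages the user can actually use, which is the point of personalizing the evaluation.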