Federated Learning (FL) is an emerging machine learning paradigm that has attracted intense research interest. It enables training a model across multiple decentralized edge devices or servers holding local data samples, without exchanging those samples. In many application domains, large amounts of properly labeled and complete data are not available in a centralized location; medical image analysis for clinical diagnosis is one example. There are also growing concerns over data and user privacy as Artificial Intelligence becomes ubiquitous in new application domains. Consequently, a great deal of recent research has been conducted across several areas within the nascent field of FL. A variety of surveys on different subtopics exist in the current literature, focusing on specific challenges, design aspects, and application domains. In this paper, we review contemporary work in these areas to understand the challenges and topics that each type of FL survey emphasizes. Furthermore, we categorize FL research in terms of challenges, design factors, and applications, conducting a holistic review of each and outlining promising research directions.
This demonstration proposes a touch-based directional navigation technique for touch interfaces (e.g., iPhone, MacBook) aimed at people with visual disabilities, especially blind individuals. Such interfaces, coupled with text-to-speech (TTS) systems, open up intriguing possibilities for browsing and skimming web content with ease and speed. Apple's seminal VoiceOver system for iOS is an exemplar of bringing touch-based web navigation to blind people. However, it has two major shortcomings, the "fat finger" and "finger fatigue" problems, which this paper addresses with two proposed approaches. A preliminary user evaluation of a system incorporating these ideas suggests that they can be effective in practice.
The Five Ws is a popular concept for information gathering in journalistic reporting. It captures all aspects of a story or incident: who, when, what, where, and why. We propose a framework composed of a suite of cooperating visual information displays that represents the Five Ws, and we demonstrate its use within a healthcare informatics application. Here, the who is the patient, the where is the patient's body, and the when, what, and why form a reasoning chain that can be interactively sorted and brushed. The patient is represented as a radial sunburst visualization integrated with a stylized body map. This display captures all past and present health conditions to serve as a quick overview for the interrogating physician. The reasoning chain is represented as a multistage flow chart composed of date, symptom, data, diagnosis, treatment, and outcome. Our system seeks to improve the usability of information captured in the electronic medical record (EMR), and we show via multiple examples that our framework can significantly lower the time and effort needed to access the patient information required to arrive at a diagnostic conclusion.
Advances in web technology have considerably widened the Web accessibility divide between sighted and blind users. This divide is especially acute when conducting online transactions, e.g., shopping, paying bills, and making travel plans. Such transactions span multiple web pages and require that users find clickable objects (e.g., the "add-to-cart" button) that are essential for transaction progress. While this is fast and straightforward for sighted users, locating clickable objects causes considerable strain for blind individuals using screen-reading technology. Screen readers force users to listen to irrelevant information sequentially and provide no interface for identifying relevant clickable objects. This paper addresses the problem of making clickable objects readily accessible, which can substantially reduce the information overload otherwise experienced by blind users. A static knowledge base of keywords constructed from the captions of clickable objects does not provide enough learning capability to identify clickable objects that lack captions (e.g., image buttons without alternative text). In this paper, we present an Information Retrieval-based technique that uses the context of transaction-centric objects (e.g., "add-to-cart" and "checkout" buttons) to identify and classify them even when their captions are missing. In addition, the technique utilizes a reinforcement mechanism based on user feedback to accommodate previously unseen captions as well as new categories of objects. We provide user-study and experimental evidence of the effectiveness of our algorithm.
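The abstract above describes classifying caption-less clickable objects from their surrounding page context. As a minimal illustrative sketch only (the paper's actual model, keyword sets, and scoring scheme are not given here; the category lists below are invented assumptions), such context-based classification could use a simple bag-of-words cosine similarity:

```python
# Hypothetical sketch: classify a clickable object with no caption by
# comparing the words surrounding it against per-category keyword lists.
# The categories and keywords are illustrative assumptions, not the
# paper's actual knowledge base.
import math
from collections import Counter

CATEGORIES = {
    "add-to-cart": ["add", "cart", "basket", "buy"],
    "checkout": ["checkout", "order", "payment", "proceed"],
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify(context_words: list[str]) -> str:
    """Pick the category whose keywords best match the object's context."""
    ctx = Counter(w.lower() for w in context_words)
    return max(CATEGORIES, key=lambda c: cosine(ctx, Counter(CATEGORIES[c])))

# An image button without alt text, surrounded by product-page text:
print(classify(["Add", "this", "item", "to", "your", "shopping", "cart"]))
# → add-to-cart
```

A reinforcement step of the kind the abstract mentions could, for instance, append context words from user-confirmed classifications to the matching keyword list, letting the classifier absorb previously unseen captions over time.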
In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has been advancing at a rapid pace, assistive technology has not kept up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to quickly accomplish web browsing tasks that were previously slow, hard, or even impossible to complete. In this paper, we propose guidelines for the design of intuitive and accessible web automation that can increase the accessibility and usability of web pages, reduce interaction time, and improve the user browsing experience. Our findings and a preliminary user study demonstrate the feasibility of, and emphasize the pressing need for, truly accessible web automation technologies.