Images on social media platforms are inaccessible to people with vision impairments due to a lack of descriptions that can be read by screen readers. Providing accurate alternative text for all visual content on social media is not yet feasible, but certain subsets of images, such as internet memes, offer affordances for automatic or semi-automatic generation of alternative text. We present two methods for making memes accessible semi-automatically through (1) the generation of rich alternative text descriptions and (2) the creation of audio macro memes. Meme authors create alternative text or audio meme templates, inserting placeholders where the meme text would appear. When a meme with the same image is encountered again, it is automatically recognized from a database of meme templates. The text is then extracted and either inserted into the alternative text template or rendered in the audio template using text-to-speech. In our evaluation of meme formats with 10 Twitter users with vision impairments, we found that most users preferred alternative text memes because the description of the visual content conveys the emotional tone of the character. As the preexisting templates can be automatically matched to memes that use the same visual image, this combined approach can make a large subset of images on the web accessible while preserving the emotion and tone inherent in image memes.
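The template-filling step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the template database, the `describe_meme` function, and the example template text are all assumptions, and the image-recognition and text-extraction stages are stubbed out (a real system would match the image against the template database and run OCR on the overlaid caption).

```python
# Hypothetical sketch: filling an author-written alternative text template
# with text extracted from a recognized meme image. Names and template
# wording are illustrative only.

# Database mapping a recognized meme template to an author-written
# alternative text template with a {text} placeholder for the caption.
TEMPLATE_DB = {
    "distracted-boyfriend": (
        "A man walking with his girlfriend turns to stare at another "
        "woman; the overlaid text reads: {text}"
    ),
}

def describe_meme(template_id: str, extracted_text: str) -> str:
    """Return rich alternative text by filling the matched template
    with the caption text extracted from the meme image."""
    template = TEMPLATE_DB.get(template_id)
    if template is None:
        # Fall back to a generic description when no template matches.
        return "Meme image; no alternative text template available."
    return template.format(text=extracted_text)

print(describe_meme("distracted-boyfriend", "shiny new project / my existing codebase"))
```

Because the template is keyed to the shared base image rather than to any single post, one author-written description can make every future meme built on that image accessible.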