Peya Mowar
peyajm29 [at] cmu [dot] edu

A sketch of the head of Scotty, CMU's mascot scottish terrier. Below its collar is handwritten - 'My heart is in the work' and signed @peyajm29.

For a comprehensive and up-to-date list of my publications, please refer to Google Scholar.

I am a technical HCI researcher whose goal is to make visual information widely accessible. I build, deploy, and evaluate machine learning systems that enable novel forms of interaction. More recently, I have been thinking about:

Accessibility Developer Tools

A long-standing challenge in accessible computing is getting developers to produce the accessible UI code that assistive technologies need to work properly. With the growing adoption of AI coding assistants, we investigate whether these tools can help novice and untrained developers write accessible code by offering feedback and guidance.

Example of how novice UI developers might fail to explicitly prompt the AI coding assistant for accessibility requirements, while the assistant fails to consider accessibility by default.
CodeA11y: Making AI Coding Assistants Useful for Accessible Web Development
Peya Mowar, Yi-Hao Peng, Jason Wu, Aaron Steinfeld, Jeffrey Bigham
Under Review

Tab to Autocomplete: The Effects of AI Coding Assistants on Web Accessibility
Peya Mowar, Yi-Hao Peng, Aaron Steinfeld, Jeffrey Bigham
ASSETS (Posters Track), 2024
paper / poster / video

Accessibility in AI-Assisted Web Development
Peya Mowar
ACM International Web for All Conference (W4A) Doctoral Consortium, 2024
paper / slides

Visual Assistance

Visually impaired users rely on multimodal LMs, such as Be My AI or Seeing AI, to receive information about the visual world. We investigate the transparency of AI outputs to empower users in assessing the reliability of the information provided.


An example image captured by blind users in the VizWiz dataset asking about a cartoon character.
Shifted Reality: Navigating Altered Visual Inputs with Multimodal LLMs
Yuvanshu Agarwal, Peya Mowar
VizWiz Grand Challenge (CVPR Workshop), 2024
paper

We investigate how VQA outputs vary with augmented images that simulate the capture styles of visually impaired users.

Media Accessibility

Through mixed-methods research, we study the challenges that print-impaired people face in accessing news media. We developed a document segmentation system for digitizing print newspapers by iteratively prototyping with 32 blind users. The system is now deployed by Microsoft as a Telegram chatbot that delivers daily news in screen-reader-accessible formats.

A sketch of two hands holding an Indian newspaper, divided into various segments marked by different bounding boxes. These boxes represent the visualization output from an image segmentation system used for digitizing print newspapers.
Breaking the News Barrier: Towards Understanding News Consumption Practices among BVI Individuals in India
Peya Mowar, Meghna Gupta, Mohit Jain
ASSETS, 2024
paper / video / slides

A qualitative study with BVI participants on their news consumption practices, providing insights for designing accessible digital news platforms.

Towards Optimizing OCR for Accessibility
Peya Mowar, Tanuja Ganu, Saikat Guha
AVA: Accessibility, Vision, and Autonomy Meet (CVPR Workshop), 2022
paper

Identification of key visual cues that can significantly enhance the reading experience of print-impaired individuals using aural formats.


To chat, please email me or book a slot in my weekly office hours.
Modified template by Jon Barron
Last updated: November 2024