For a comprehensive and up-to-date list of my publications, please refer to Google Scholar.
I am a technical HCI researcher working to make visual information widely accessible. I build, deploy, and evaluate machine learning systems that enable novel forms of interaction. More recently, I have been thinking about:
Accessibility Developer Tools
A long-standing challenge in accessible computing has been to get developers to produce the accessible UI code necessary for assistive technologies to work properly.
With the increasing adoption of AI coding assistants, we investigate whether these tools can aid novice and untrained developers in writing accessible code by offering feedback and guidance.
CodeA11y: Making AI Coding Assistants Useful for Accessible Web Development
Peya Mowar,
Yi-Hao Peng,
Jason Wu,
Aaron Steinfeld,
Jeffrey Bigham
ACM CHI, 2025
Tab to Autocomplete: The Effects of AI Coding Assistants on Web Accessibility
Peya Mowar,
Yi-Hao Peng,
Aaron Steinfeld,
Jeffrey Bigham
ASSETS (Posters Track), 2024
paper /
poster /
video
Accessibility in AI-Assisted Web Development
Peya Mowar
ACM International Web for All Conference (W4A) Doctoral Consortium, 2024
paper /
slides
Visual Assistance
Visually impaired users rely on multimodal LMs, such as Be My AI or Seeing AI, to receive information about the visual world.
We investigate the transparency of AI outputs so that users can assess the reliability of the information provided.
Media Accessibility
Through mixed-methods research, we study the challenges that print-impaired people face in accessing news media.
We developed a document segmentation system for digitizing print newspapers, iteratively prototyped with 32 blind users.
The system is now deployed by Microsoft as a Telegram chatbot that delivers daily news in screen-reader-accessible formats.