Imagine you are an AAC user. You are typing out a sentence, and mid-thought, someone interrupts with a quick question. You want to answer them immediately and then return to your original thought. But you can’t. The app doesn’t support holding a thought. So the moment is gone.
Or maybe you are an SLP working with a student on a vocabulary activity. You want the device to automatically spotlight the word “mom” and fade the buttons around it every time you ask “Where’s mom?” It is a small change that would focus your student’s attention exactly when you need it. But there is no way to set that up. The app doesn’t know about your session, your student, or that question.
Or perhaps you are a parent who has spent two years watching your son’s muscle fatigue worsen through the day. By 3pm, he starts missing targets on the scan, and you need to change settings to slow it down before he gets frustrated. You have been making this adjustment every single afternoon. You keep thinking: the device should be able to detect when he is struggling and slow down on its own. It seems so obvious. Why doesn’t it exist?
If you have ever hit one of these walls, you already know what your options look like.
You can submit a feature request and hope it gets prioritized. You can spend hours piecing together a workaround. You can try to find another app to solve just this specific use case. Or you can quietly let it go, adjust your practice or your expectations, and find another way through. Most of the time, it gets let go.


No Two Users Are the Same
Every team building software faces the same pressure: limited time, limited resources, and the need to reach as many people as possible. So they build for the majority. For most software, that works.
However, people who rely on AAC span a wide spectrum of motor abilities, communication profiles, cognitive needs, languages, and life situations. A setting that works perfectly for one child can be completely wrong for another. A feature that transforms one adult communicator’s independence is irrelevant to someone else’s. There is no typical AAC user, no average family, no median clinic.
The result is a feature list built around what users have in common. In AAC, that gets you most of the way. It rarely gets you all the way.
The Feature Request Graveyard
Even when you know exactly what you need, the path from that knowledge to a feature in the app you rely on is long and uncertain.
You observe something. You write a support ticket. You explain it to someone from the product team. It becomes a feature request. The request sits in a queue alongside hundreds of others. Eventually, it either gets built, often months or years later, or it quietly disappears.
This is not a failure of care or skill on anyone’s part. A good product team does listen. They genuinely try to understand individual stories. But understanding the problem does not change the prioritization math: a feature that helps three people will never outrank a feature that helps three thousand. The request gets buried because the economics of software development were never designed to serve needs this specific.
What the App Lets You Do (And What It Doesn’t)
Most AAC apps give SLPs and families meaningful control over how the device looks and works.
Think of it in three levels. At the first level, you can change how things look: colors, icons, arrangement. At the second level, you can change what’s there: which words appear, how pages connect, what voice is used. At the third level, you could change how the app behaves: creating new logic, new responses, new patterns that the app never anticipated.
Almost no AAC app lets you reach that third level. The walls are fixed. You can rearrange the furniture, but you cannot move the walls or add a new room.
This is where most of the unmet needs live. Not in the vocabulary or the layout. In the behavior. In the things that would make the device respond to the person using it the way only you know they need.
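To make that third level concrete, here is a minimal sketch of what a behavior rule could look like if an app exposed one, using the fatigue scenario from earlier. Everything in it is hypothetical: the names, the settings object, the thresholds. No AAC app offers an interface like this today, which is exactly the point.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class ScanSettings:
    """Hypothetical scan configuration: how long each button stays highlighted."""
    interval_ms: int = 800


class FatigueRule:
    """Slow the scan down when the recent miss rate suggests fatigue."""

    def __init__(self, settings: ScanSettings, window: int = 20,
                 miss_threshold: float = 0.4, slowdown_ms: int = 200,
                 max_interval_ms: int = 2000):
        self.settings = settings
        self.recent = deque(maxlen=window)  # True = missed target, False = hit
        self.miss_threshold = miss_threshold
        self.slowdown_ms = slowdown_ms
        self.max_interval_ms = max_interval_ms

    def record_selection(self, missed: bool) -> None:
        self.recent.append(missed)
        if len(self.recent) < self.recent.maxlen:
            return  # not enough data yet to judge
        miss_rate = sum(self.recent) / len(self.recent)
        if miss_rate >= self.miss_threshold:
            # Ease off gradually: add a little time, up to a ceiling.
            self.settings.interval_ms = min(
                self.settings.interval_ms + self.slowdown_ms,
                self.max_interval_ms,
            )
            self.recent.clear()  # start a fresh window after each adjustment


# Simulated afternoon: misses pile up, and the scan quietly slows itself down.
settings = ScanSettings()
rule = FatigueRule(settings)
for missed in [True, True, False, True] * 10:
    rule.record_selection(missed)
print(settings.interval_ms)  # has grown from 800: the device adapted on its own
```

The logic itself is trivial; a parent could describe it in two sentences. What is missing is not cleverness but a place for a rule like this to live inside the app.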
What’s Different Now
For most of the history of software, there was a hard line between the people who knew what was needed and the people who could build it. Writing software required specialized skills. Even with the best intentions, a parent or clinician had no way to act on what they knew. Not without the tools, the infrastructure, the team behind every piece of software.
That line is shifting. New AI tools can now take a plain description of what you need and turn it into working software. Not always perfectly, but well enough that the gap between “I know what this child needs” and “I built the thing that does it” is getting smaller faster than most people realize.
This creates a genuine possibility that hasn’t existed before: what if the people who understand the problem most deeply could be empowered to solve it?
An SLP describes exactly what they need for a vocabulary activity. A parent explains the fatigue pattern they have spent years tracking. An AAC user says: “I need a way to hold my thought when someone interrupts me.” These are plain descriptions. And those descriptions could become real features, built for one person, for one context, without waiting for anyone else to prioritize them.
It’s Already Happening
In general education, this is already happening. Teachers are using AI to build micro-tools for their classrooms: interactive exercises, simulations, reading scaffolds, tailored not to a generic student but to the specific class in front of them. It won’t be long before we see the same thing happen in AAC.
The question for AAC companies is not whether to enable this. It is what needs to be in place so that when people do this, they succeed: safe infrastructure to build within, ways for clinicians to review and trust what gets built, solutions that persist as the child grows.
The Part That Isn’t Solved Yet
The problem is that right now, every solution built this way has to be built in complete isolation. A clinician designing something for a student has no way to draw on what others have built and learned. What they create exists outside the ecosystem they work in because there is no structure for it to connect to in the first place. Every update, every fix, every adjustment falls back on them, taking time away from the practice it was built to support.
The fix is not to build better workarounds. It is to build the scaffold inside the ecosystem itself: one that understands the context you are working in, holds what gets built, and keeps it running as the child grows. Where a clinician can hand off to another clinician. Where proper review happens before anything goes live. Where what works for one family can, with the right permissions, inform what another family tries. Grounded in the clinical knowledge that already exists. Speaking your language, not the language of a computer.
Success for such personal software shouldn’t be measured by scale, but by agency. We don’t need a single, massive solution; we need the infrastructure that empowers anyone to build their own. The future of AAC is a thousand specialized solutions, and our challenge is building the sustainable framework that lets them all flourish.
What We Are Trying to Understand
If you are an SLP, a family member, or a communicator who has lived with this gap, who has quietly stopped asking for something because you learned it would never come, we would genuinely like to hear from you.
Not to pitch you on a product. To understand what you know.
What is the thing you have been working around for years? What would it look like if it finally existed? What would it take for you to trust it?
The knowledge of what’s needed has always been yours. What we are trying to figure out is how to build the scaffold that lets you act on it.
Author: Vignesh Pasupathy
Vignesh is a Senior Product Manager leading Avaz Tomorrow, a team within Avaz created to tackle the harder, unsolved problems in the AAC ecosystem. He spent four years at Amazon building contextual advertising systems before joining Avaz, where he led development of the Avaz app from 2019 to 2024 and helped prototype and experiment with novel features including Expressive Tones, SwiftSpeak, and Just-in-Time Customization.