Artificial Intelligence in ABA: Why Data Transparency Matters

By: Dr. David J. Cox, Ph.D., M.S.B., BCBA-D

Reading time: 9 min


You’d be hard-pressed to find a technology news article where artificial intelligence (AI) or machine learning (ML) isn’t mentioned in some capacity. AI/ML are everywhere; novel solutions continuously make headlines; and the pace of reported breakthroughs can sometimes feel like hyperbole.

Further, though AI solutions were once restricted to the realm of tech companies and heavily funded research facilities, they are now in the hands of the people.

For example, an estimated 30% of surveyed professionals admit to using ChatGPT in their work, and 70% want to use AI to help them with their jobs.

AI also regularly makes for dramatic headlines in healthcare, be it outperforming radiologists in diagnosing cancer, identifying new drugs to treat diseases, or predicting your 10-year risk of death from heart attack or stroke.

But, AI isn’t perfect. Hilarious examples aside (insert your favorite ChatGPT or Bard joke here), mistakes that AI makes can have very serious and life-altering consequences. The result is that academics, policy-makers, and governments alike are actively engaged in conversations around how to ethically use AI in society so we can reap its benefits while minimizing its harms.

The ethical use of AI in society involves many topics and many areas of application. One opinion piece certainly can’t cover them all. So, this one focuses on the ethical use of AI in one area of healthcare, for one population: Applied Behavior Analysis (ABA) for individuals with autism spectrum disorders and related developmental disabilities.

For the unfamiliar, ABA is the application of behavior science to help people improve their quality of life. Improving one’s quality of life is obviously applicable to everyone across all industries. Nevertheless, in terms of sheer numbers, ABA is most often associated with educational and clinical interventions for individuals with autism spectrum disorders (ASD).

AI is beginning to be applied to the delivery of healthcare and ABA services for autistic individuals. For example, researchers have used AI to diagnose individuals with ASD, to automate data collection and personalize interventions, and to improve how clinicians analyze intervention effectiveness. Further, where AI is not currently in use, a recently published white paper highlights many ways that AI could be used to deliver ABA services in behavioral health across the entire care continuum. It seems we have only reached the tip of this iceberg.

Sometimes, conversation around the ethical use of a new technology begins only once the tool is available and on the market. But, the above AI use cases for ASD are still experimental. Few (if any) currently exist in the marketplace. This means that ABA practitioners and individuals with ASD can talk about how AI might be ethically used in ABA before it reaches widespread adoption. This seems like a good thing, and an opportunity that not every industry gets.

The ethical use of AI in any industry will require a lot of serious conversations around a variety of topics, each with many nuances. To help get this conversation started, a recent white paper reviewed common topics in AI ethics and how they might inform the rollout of AI in ABA for ASD. Most of these topics will need to be addressed at some point in some capacity. However, one particular topic seems critical to address from the beginning: biased AI models.

Biased Models as the Primary Ethical Concern of Technology Companies

When asked, many people cite data privacy and security as their most important concern around how AI is built and used. This makes sense. In an era of data collection gone crazy, people (myself included) are uncomfortable not knowing what data is being collected on their behavior or how it is being used. Thus, data security and privacy are nontrivial ethical AI topics around which many are actively developing solutions.

But, in terms of the likelihood of direct physical harm, biased AI solutions are just as concerning, if not more so. Biased models are often the culprit behind unfortunate snafus that lead to ethically problematic headlines, such as inequitable face recognition algorithms and the amplification of existing systemic biases in the healthcare system.

In response to the above, a common call is to “solve AI’s inequality problem.” But anyone who claims they can (currently) eliminate biased models betrays a fundamental misunderstanding of how AI works. At least in the near term, solving model bias is likely to be both theoretically and practically impossible.

Theoretically, the famous mathematician Kurt Gödel showed back in 1931 that any sufficiently complex formal system of mathematics will always be incomplete and will rest on assumptions that cannot be proved within the system. At its core, AI involves a set of mathematical equations designed to describe and predict a system. Practically, we also can’t collect data on everything, for everyone, everywhere, all at once.

Translated to our current topic, no AI-based system will contain all the information it needs to do its job perfectly well and to account for current and historical trends. Something will always be missing. But that’s okay. This is not a new problem. 

There have always been (and will always be) limitations to any technological or scientific advancement. Open any scientific journal, flip to any article reporting on a research study, and skim the discussion section. It’s likely that most (all?) of the articles you check will include a section on the limitations of that experiment or study.

As part of their training, scientists often learn to identify and communicate about the limitations of their work. If they don’t, peer reviewers will gladly call them out on those limitations.

In contrast, technology companies rarely point out the limitations of the AI products they publish for mass consumption. The result is that people will use an AI product assuming it is safe and appropriate. And, when those products fail in embarrassing or harmful ways, the publicized results bring attention to the company for all the wrong reasons. And it’s completely avoidable.

Data Transparency as an Ethical Demand of Consumers

As noted above, it’s unlikely any AI system will ever be complete, error free, and without bias. There are many ways that technology companies can mitigate, and are mitigating, bias when developing and maintaining their AI solutions. But consumers of AI products can’t do much about that work.

Consumers can vote with their feet, however. For example, consumer pushback has already changed the way your data privacy and security are managed. It’s the reason you often have to select what data is collected and how it’s used for nearly every website you visit. Consumers can demand the same around data transparency for AI products.

It’s relatively easy for companies to be transparent about the data used to create their AI product. They know where the data came from and whether it was ethically sourced. They know the limitations of the data they used to train their models. They know which groups are and are not well represented. And they know how the performance of their model changes for the various groups in their dataset.
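To make that concrete, here is a minimal sketch, in Python, of the kind of per-group audit a company could run and publish alongside its product. Everything below is hypothetical: the evaluation data, the group labels, and the 10% representation cutoff are stand-ins for a company’s own held-out data and reporting standards.

```python
# A minimal sketch of a per-group transparency audit.
# All data below is fabricated for illustration; a real audit would
# use the company's own held-out evaluation set and group definitions.
import pandas as pd

# Hypothetical evaluation results: one row per model prediction.
eval_df = pd.DataFrame({
    "group":  ["A"] * 80 + ["B"] * 15 + ["C"] * 5,
    "y_true": [1, 0] * 40 + [1] * 10 + [0] * 5 + [1] * 3 + [0] * 2,
    "y_pred": [1, 0] * 40 + [1] * 6 + [0] * 9 + [0] * 5,
})

# How well represented is each group, and how does accuracy vary?
report = (
    eval_df
    .assign(correct=lambda d: d["y_true"] == d["y_pred"])
    .groupby("group")
    .agg(n=("correct", "size"), accuracy=("correct", "mean"))
)
report["share_of_data"] = report["n"] / report["n"].sum()
report["underrepresented"] = report["share_of_data"] < 0.10  # hypothetical cutoff

print(report)  # group A: accuracy 1.00; groups B and C: noticeably lower
```

Publishing a table like this (group sizes plus per-group performance) costs a company little, but it tells a practitioner exactly where the model can and cannot be trusted.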

So, just like scientists who publish peer-reviewed articles, why not simply be transparent about what your model is good at, where it needs more work, and the timelines for when those improvements are coming?

For AI solutions that help individuals with ASD, data transparency shifts from a “nice-to-have” to a “must-have.” Autism spectrum disorders include many different people along a spectrum: a broad range of related characteristics and qualities. Each person with this diagnosis has unique skills, strengths, deficits, and definitions of what will make their life better. As noted by Stuart Duncan, “Autism is one word attempting to describe millions of stories.”

No AI solution can adequately describe or predict every unique story. If face recognition is as tricky as it is, imagine describing and predicting the complex milieu of bio-behavioral-social interactions that change dynamically for any one individual with autism. If an AI product is built using only a subset of all possible stories, then model bias will necessarily exist. But, again, that’s okay. This is simply how science and technology work.
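To see why, consider a small sketch, assuming entirely synthetic data, of what happens when a model is trained mostly on one group’s “stories” while an underrepresented group follows a different pattern. The groups, feature, and label rule below are all fabricated for illustration.

```python
# A synthetic demonstration: a model trained on skewed data performs
# well on the majority group and poorly on the underrepresented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flipped):
    """One feature; the label rule is inverted for the minority group."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y) if flipped else y

x_maj, y_maj = make_group(1000, flipped=False)  # well-represented group
x_min, y_min = make_group(50, flipped=True)     # underrepresented group

model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

print("majority-group accuracy:", model.score(x_maj, y_maj))  # near 1.0
print("minority-group accuracy:", model.score(x_min, y_min))  # near 0.0
```

A single aggregate accuracy (roughly 95% here) would hide that failure entirely; only the per-group breakdown reveals it.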

Let’s just make sure the users of AI products for individuals with autism know what those biases are so they can make an informed decision. And, most importantly, so they can avoid misapplying AI products and causing avoidable harm. The only cost is humility.

If you have questions about the future of AI in ABA Therapy, contact us today.

About the Author


VP of Data Science

Dr. David Cox leads Data Science for RethinkFutures. Dr. Cox has worked within the behavioral health industry for 17 years. He began working in behavioral health by providing and then supervising Applied Behavior Analysis (ABA) programs for individuals with autism spectrum disorders. After 8 years of clinical work, Dr. Cox went back to school to earn an MS in Bioethics, a PhD in Behavior Analysis from the University of Florida, postdoctoral training in behavioral pharmacology and behavioral economics from Johns Hopkins University School of Medicine, and postdoctoral training in data science from the Insight Data Science program.

Since 2014, Dr. Cox’s research and applied work has focused on how to effectively leverage technology, quantitative modeling, and artificial intelligence to ethically optimize behavioral health outcomes and clinical decision-making. Based on his individual and collaborative work, he has published over 45 peer-reviewed articles and three books, and has delivered over 150 presentations at scientific conferences.
