Stuck On Algorithms

Sean Erreger LCSW
Jun 20, 2019

Ready or not… here they come…

Algorithms are playing an important role in our daily lives. They tell us what to shop for, they decide whether we get a loan, and soon they will shape how we make healthcare decisions. This could have significant implications for social work practice. Learning about concepts such as algorithms and artificial intelligence has been a large part of my journey of trying to get “unstuck” about technology issues. By expressing my concern on Twitter, I learned about a tool for getting clarity on algorithms and how social workers can gain a voice in their design. But it’s important to back up just a bit and define what they are…

I found this one-minute video via BBC Learning that sums it up nicely…

This illustrates the need for algorithms to be clear, concise, and accurate. As algorithms, machine learning, and other forms of artificial intelligence tackle more complex problems, this gets tricky. For social work practice, the question is not IF algorithms will impact our practice but WHEN and HOW. This post was inspired by an amazing medical blogger, Dr. Berci Mesko, aka “The Medical Futurist”. He consistently explains how technology will affect medical care.
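To make the definition concrete: an algorithm is just an explicit, repeatable set of steps. A minimal sketch, using a made-up screening rule (not a real clinical instrument), shows how each step — and each threshold — is a visible, auditable design choice:

```python
# A toy illustration of what "an algorithm" means: explicit, repeatable
# steps. This hypothetical screening rule is invented for illustration
# only -- it is NOT a real clinical instrument.

def referral_decision(sleep_problems: bool, low_mood_days: int, support_score: int) -> bool:
    """Return True if the (made-up) rule suggests a follow-up referral."""
    points = 0
    if sleep_problems:
        points += 1
    if low_mood_days >= 5:   # threshold choices like this encode human judgment
        points += 2
    if support_score < 3:    # so does the decision of which inputs to include
        points += 1
    return points >= 3       # the cutoff itself is a design decision

print(referral_decision(True, 6, 2))   # prints True
print(referral_decision(False, 2, 5))  # prints False
```

Because every step is written down, anyone can inspect it and ask whether the thresholds are fair — exactly the kind of scrutiny that becomes hard once the rule is learned from data instead of written by hand.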

In a recent post he describes the medical algorithms currently approved by the Food and Drug Administration (FDA) in the United States. This provides an excellent overview of what algorithms are used for in medicine. What caught my eye were the four he highlighted as relevant to psychiatry.

My enthusiasm for technology has been tempered over the last year as I have learned more about algorithms and machine learning. I recently read and reviewed “Weapons of Math Destruction”, which examines the potential faults of algorithms — arguing that algorithms determining teacher evaluations, college rankings, and criminal justice sentencing can be inherently biased. Social workers should be aware of potential biases in these systems. What I struggled to find was a way to analyze these issues in a concise way.

I began to question concerns about medical algorithms, and my Twitter crew came through…

Those four algorithms for psychiatry are possible signposts. If the FDA approval is based on relative accuracy comparison by humans (example, ADHD), I have questions, but not necessarily surprised.
- Stephen Cummings, LISW 🎙💻 (@spcummings) June 18, 2019

Along with (for some of these) who gets the data, what else is data used for, is there any kind of auditing…
- One Ring (doorbell) to surveil them all… (@hypervisible)
June 18, 2019

Hard to say w/o more detailed breakdown, but one issue is definitely the “usual” question: what populations were used to train the algos?
- One Ring (doorbell) to surveil them all… (@hypervisible)
June 18, 2019

The most helpful resource I found was provided by Dr. Laura Nissen. She discovered AI Blindspot, a project by the MIT Media Lab and others.

Ok that is completely fascinating and I don’t have complete answers. So far I’ve found 2 things I like that seem like promising scaffolding to decide “do I like this?” Or “do I not like this?” Here’s one of them…
- Laura Nissen, PhD, LMSW (@lauranissen) June 18, 2019

The project walks you through the potential errors in building AI and algorithms. It provides a series of cards, each giving examples of an error, along with further resources…

I found the card on “Representative Data” best captured my initial concerns about data diversity: in healthcare, we need to make sure that diverse data sets are used. From the social work perspective, two more notions of algorithmic justice are important.

The concept of Discrimination by Proxy is a critical one. It means an algorithm may “have an adverse effect on vulnerable populations even without explicitly including protected characteristics. This often occurs when a model includes features that are correlated with these characteristics.” An example I have heard about is algorithms that inform criminal justice sentencing: features correlated with race and socio-economic status can end up driving sentencing decisions rather than the factors the system is supposed to weigh.
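A minimal sketch can make the proxy problem concrete. In this hypothetical simulation (all data is invented), a scoring rule never sees the protected attribute at all — yet because zip code is strongly correlated with group membership, the two groups still receive very different outcomes:

```python
# Hypothetical illustration of "discrimination by proxy": a rule that
# never uses the protected attribute can still disadvantage a group
# through a correlated feature (here, a simulated zip-code flag).
import random

random.seed(0)

# Simulate applicants: group membership is hidden from the "model",
# but the zip-code flag agrees with group membership 90% of the time.
applicants = []
for _ in range(1000):
    group = random.random() < 0.5                                 # protected attribute
    zip_high_risk = group if random.random() < 0.9 else not group # correlated proxy
    applicants.append({"group": group, "zip_high_risk": zip_high_risk})

def score(applicant):
    """Naive rule: only looks at zip code -- no protected trait appears anywhere."""
    return 0 if applicant["zip_high_risk"] else 1   # 1 = favorable decision

favorable_group = sum(score(a) for a in applicants if a["group"])
favorable_other = sum(score(a) for a in applicants if not a["group"])
n_group = sum(1 for a in applicants if a["group"])
n_other = len(applicants) - n_group

# The protected group gets favorable decisions far less often, even
# though "group" was never an input to score().
print(f"favorable rate, protected group: {favorable_group / n_group:.2f}")
print(f"favorable rate, everyone else:   {favorable_other / n_other:.2f}")
```

Removing the protected characteristic from the inputs did nothing here — the correlation did the discriminating. That is why auditing outcomes, not just inputs, matters.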

Also important to social workers is the Right To Contest: if one of these common blindspots is found, there should be a means to remedy it. Is there enough transparency in the algorithm to fix the problem? This is important when thinking about empowering the individuals and families we serve.

As more and more decisions are made by algorithms, I found this framework helpful for thinking critically about them. I hope this overview of the issues helps you get “unstuck” about algorithms too.


Blogger, consultant. #socialwork, #mentalhealth, suicide prevention. How tech & social media is changing change… blog: www.StuckOnSocialWork.Com