Your current AI approach is broken

Some weeks ago, a friend asked me how to automate his mental health startup and achieve scale. It wasn't the first time someone had asked me this. Under the technology creed, scale and automation are synonyms for Artificial Intelligence.

I was hesitant to answer. I told him that I am a believer in AI, but that you can't apply AI as it stands today to mental health problems.

Most AI methods are based on function optimization: in layman's terms, the algorithm searches for the best (optimal) solution to a given goal. The problem is that such a goal rarely contains any information about its moral worth.

“A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.”

Stuart J. Russell, "Of Myths and Moonshine," The Myth of AI, Edge, 2014.
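The behaviour Russell describes can be shown in a few lines. This is a toy sketch of my own (the "engagement" and "wellbeing" variables are hypothetical, not from Russell's essay): the objective rewards only one of two variables, and an exhaustive search over both happily leaves the one we actually care about at an extreme corner of the search space.

```python
def objective(engagement, wellbeing):
    # The goal only mentions engagement; wellbeing is invisible to it.
    return -(engagement - 8) ** 2

# Exhaustive search over a grid of (engagement, wellbeing) pairs.
# Every wellbeing value ties for the optimum, so the argmax lands on
# whichever tied point the search visits first -- here, the extreme
# wellbeing = -10, even though nothing "chose" to harm wellbeing.
grid = [(e, w) for e in range(-10, 11) for w in range(-10, 11)]
best = max(grid, key=lambda p: objective(*p))
print(best)  # -> (8, -10)
```

The point is not that the optimizer is malicious; it is that an unconstrained variable has no reason to stay at a safe value, so the solution found may be highly undesirable.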

In other words,

“Building an agent to do