Your AI Model Is Not Objective
Opinion

Where we explore the subjectiveness in AI models and why you should care

I recently attended a conference, and a sentence on one of the slides really struck me. The slide mentioned that the presenters were developing an AI model to replace a human decision, and that the model was, quote, “objective”, in contrast to the human decision. After thinking about it for some time, I vehemently disagreed with that statement, as I feel it tends to isolate us from the people for whom we create these models. This in turn limits the impact we can have.

In this opinion piece I want to explain where my disagreement with AI and objectiveness comes from, and why the focus on being “objective” poses a problem for AI researchers who want to have impact in the real world. It reflects insights I have gathered from my recent research into why many AI models never reach effective implementation.

Photo by Vlad Hilitanu on Unsplash

Where math touches reality

To get my point across, we first need to agree on what exactly we mean by objectiveness. In this essay I use the following definition:

expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations

For me, this definition speaks to something I deeply love about math: within the scope of a mathematical system, we can reason objectively about what is true and how things work. This appealed strongly to me, as I found social interactions and feelings very challenging. I felt that if I worked hard enough I could understand a math problem, while the real world was much more intimidating.

As machine learning and AI are built using math (mostly algebra), it is tempting to extend this same objectiveness to them. I do think that, as a mathematical system, machine learning can be seen as objective: if I lower the learning rate, we should mathematically be able to predict the impact on the resulting AI. However, as our ML models become larger and more black box, configuring them has become more of an art than a science. Intuitions about how to improve the performance of a model can be a powerful tool for the AI researcher. This sounds awfully close to “personal feelings, prejudices, or interpretations”.

But where the subjectiveness really kicks in is where the AI model interacts with the real world. A model can predict the probability that a patient has cancer, but how that prediction feeds into actual medical decisions and treatment involves a lot of feelings and interpretations. What will the impact of treatment be on the patient, and is the treatment worth it? What is the mental state of the patient, and can they bear the treatment?

The subjectiveness does not end with applying the outcome of the AI model in the real world. In how we build and configure a model, many choices have to be made that interact with reality:

- What data do we include in the model, and what do we leave out? Which patients do we decide are outliers?
- Which metric do we use to evaluate our model, and how does it influence the model we end up creating? Which metric steers us towards a real-world solution, and is there a metric that does this at all? (The sketch after this list illustrates how much this choice matters.)
- What do we define the actual problem to be that our model should solve? This influences the decisions we make when configuring the AI model.

So, where the real world engages with AI models, quite a bit of subjectiveness is introduced. This applies both to the technical choices we make and to how the outcome of the model interacts with the real world.
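To make this concrete, here is a minimal sketch using simulated data (all numbers are hypothetical and chosen purely for illustration, not taken from any real study). The same model scores lead to very different clinical decisions depending on the decision threshold we pick, and which trade-off counts as “better” depends on the metric we subjectively favour:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)

# Hypothetical cohort: 1000 patients, roughly 5% of whom have cancer.
y_true = rng.random(1000) < 0.05

# Hypothetical model scores: patients with cancer tend to score higher,
# but the two distributions overlap, as they do for any real model.
y_prob = np.clip(np.where(y_true, 0.65, 0.05) + rng.normal(0, 0.15, 1000), 0, 1)

# The same scores, two different thresholds, two different realities.
for threshold in (0.2, 0.5):
    y_pred = y_prob >= threshold
    print(
        f"threshold={threshold:.1f}  "
        f"patients flagged={y_pred.sum():3d}  "
        f"recall={recall_score(y_true, y_pred):.2f}  "
        f"precision={precision_score(y_true, y_pred):.2f}"
    )
```

A low threshold catches almost every true cancer but flags many healthy patients for follow-up; a high threshold does the reverse. Mathematics can compute both trade-offs objectively, but choosing between them is a clinical and ethical judgment, not a mathematical one.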
But why does this matter?

In my experience, one of the key factors limiting the implementation of AI models in the real world is the lack of close collaboration with stakeholders, be they doctors, employees, ethicists, legal experts, or consumers. This lack of cooperation is partly due to the isolationist tendencies I see in many AI researchers. They work on their models, ingest knowledge from the internet and papers, and try to create the AI model to the best of their abilities. But they focus on the technical side of the AI model and exist in their mathematical bubble.

I feel that the conviction that AI models are objective reassures the AI researcher that this isolationism is fine: the objectiveness of the model means that it can simply be applied in the real world. But the real world is full of “feelings, prejudices and interpretations”, so an AI model that impacts the real world also interacts with these “feelings, prejudices and interpretations”. If we want to create a model that has impact in the real world, we need to incorporate the subjectiveness of the real world. And this requires building a strong community of stakeholders around your AI research that explores, exchanges, and debates all these “feelings, prejudices and interpretations”. It requires us AI researchers to come out of our self-imposed mathematical shell.

Note: if you want to read more about doing research in a more holistic and collaborative way, I highly recommend the work of Tineke Abma, for example this paper.

If you enjoyed this article, you might also enjoy some of my other articles:

- A series of articles on teaching an AI bot to play tic-tac-toe
- Altair plot deconstruction: visualizing the correlation structure of weather data
- Advanced functional programming for data science: building code architectures with function operators