We are already excited about Explainable AI. But explanation from whom, and for whom? Are we even able to explain relatively simple technologies yet? These are the questions I have been asking since Ruha Benjamin's keynote talk today at CHI 2021.
Being a long-term fan, I was eagerly waiting for Ruha Benjamin's keynote talk, and it was captivating. I had a question for her that came from her book; I even forgot it because I was so engrossed in her talk.
However, the live stream paused from time to time. I have a reasonably good internet connection, at least good enough for live streaming. Yet I could not enjoy the talk seamlessly. Then I found out that many people were facing the same problem. Fortunately, the video was seamless when Dr. Benjamin talked about a very important design concept: "friction."
She gave an example of a spiked bench. These benches are designed with sharp upward spikes; when money is deposited in an attached box, the spikes retract and one can sit down. They are often placed in public spaces so that homeless people cannot sit there. Dr. Benjamin described those spikes as "friction" that allows only those who have a dollar to sit on the bench. In other words, the friction transfers voice and agency to the people who hold power.
Later, Dr. Benjamin clarified that she is not against "friction"; in many designs, friction is helpful. Instead, she is disturbed by hidden friction in design.
My mom is a digital immigrant. Every time I teach her any software, I have to put in a great effort. Despite that, every time a pop-up window appears asking for a software update or an advertisement starts, she gets lost. Is it her fault? You could say, oh, just teach her those things, and with time it will get better.
I don't buy this rhetoric. I was trained in computer science at some reasonably good schools, I live in one of the so-called developed countries, and I use one of the best internet infrastructures. Yet I am not able to "explain" why a simple live stream kept breaking down. Is it an engineering problem? It should not be; we have made significant progress. As my internet provider would most probably suggest, should I switch to a higher-speed internet plan? How are those plans and their costs designed? Who controls them? The bottom line is that I cannot "explain" this, even though I think I have good digital literacy.

Don't we face similar "frictions" every day? How many of us can explain them? These frictions are not random; rather, as Dr. Benjamin would say, they are political, intentional, and profit-oriented.
Now, going back to my original question: Explainable AI for whom? For designers, engineers, or the people who use these systems? Is it a coincidence that we are asking questions about explaining AI at about the same time we have learned that deep learning models work, but often we don't know why? Who is asking why my mom cannot "explain" a simpler everyday technology? Or why I, a well-trained digital tech enthusiast, cannot "explain" internet infrastructure when it breaks down?
We have long known that artifacts have politics. Since then, we have advanced so far and so fast. Yet we are missing questions at a very rudimentary level. Should we take a step back and ask even more important questions, even though they don't sound cool? Such as: are simple technologies "explainable" by the people who use them every day? Are we aware of insidious frictions? Who brings these frictions to the forefront?