The notion of accountability, often misimagined by end users as transparency, was a key item of discussion in my dissertation.
When it comes to technology, questions that users may not even think to ask significantly affect a technology's usability. What is it doing? How is it doing it? What are the ground rules and operating parameters? Can these be expressed in simple, intuitive ways?
These things are the keys to predictability and interactivity, and they are questions that are also critical in human-to-human interaction, though we don't often think about them in these terms. But they come up (as Garfinkel pointed out) when they are violated or when exceptions occur. This is one of the reasons that mental illness is so difficult and problematic for us; we struggle to interact with those who are mentally ill because they violate assumptions about precisely the questions above, rendering us unsure about the effects of our actions within the context of the ongoing interaction.
So I’m here to point out that the Google+ transition (from old to new) and Amazon’s author pages are amongst the most recent examples I’ve found of poorly accountable technologies. It’s not clear why they do what they do, what they’re doing, or what will happen as we interact with them. Only after the fact do we know, and of course by then it’s too late to make a decision about whether or not we’d like to do the things that we did—make the gestures that we made—as we related to one another.
Such accountability is often misframed as "transparency" or "documentation," and users tend to bemoan the lack of these when outcomes are unexpected. But in fact, nobody really wants transparency (i.e., an understanding of the actual operations at the machine level). Those things are best left to machine state diagrams of the sort that I used to do as a computer science student all the way back in 1991 (when departments were still teaching in C and Pascal and assembly).
Instead, what people really want to know is what the ground rules of an interaction are and what the outcomes will be, as an interactive totality, of any particular interactive choice that they make. So—not “what is this software or hardware doing”—but rather “what will be the result for this interaction and relationship of any particular action that I might take in response to the system’s actions?”
On this level, these two bits of software fail miserably.
— § —
As a supplemental note, the term "accountability" does not imply "responsibility"; it refers instead to what Garfinkel described as the ability "to provide a sensible and defensible account of" what each party to an interaction is doing. Accountability is essential to interaction because it enables parties both to explain themselves (to others and to themselves) and to come to grips with the very same kinds of explanations provided by the counterparty. Unaccountable activity, particularly in social interaction, tends to lack sensibility—that is to say, people cannot make sense of or integrate the sensations of what has occurred. An "accounting" by both parties and the "accountability" of each party's actions are thus critical both to individual and to mutual understanding.
One can easily see the ways in which such accountability is at the core of most problems in usability and interactivity in the technology space, as has been pointed out by both Suchman (first) and Dourish (later). As it turns out, this concept is also at the core of most of the problems we’ve had building AI systems, though such a point is beyond the scope of a complain-complain post like this one.