The Othering of HCI

Michael Correll
Oct 24, 2023 · 15 min read


Painting of Napoleon reflecting on the shores of St. Helena, with a British sentry and a red-tinged, multihued sunset in the background. “War, the Exile and the Rock Limpet” by JMW Turner, 1842.

What gets people into the field of computer science, keeps them there once they are in, and shows up in rosy predictions and imagined computing futures? Conversely, what are we anxious about, when it comes to computers? What comes out of the Black Mirror writer’s room, shows up in doom-predicting news articles, seeps through our unconscious into horror movies or ethics guidelines? I think, a lot of the time, it’s stuff that computer scientists would call Human-Computer Interaction (HCI). Stuff like how artificial intelligence interacts with people, new interfaces and digital landscapes, social media networks, video games, killer apps and robotic helpers. All of these big opportunities and problems of course touch on many areas of computer science (machine learning, software engineering, databases, graphics, security, networking,… you get the idea), but HCI is, at the very least, solidly in the mix.

But in my experience with computer science departments, interviewing with them or collaborating with them or otherwise trying to pitch my work to them, I’ve started resonating more and more with this tweet from Evan Peck:

I think Evan meant “painted it pink” in a narrow way in this tweet, but to me there are a lot of phenomena I think are covered by that phrase, all of which boil down to ways that HCI is an “other,” a less-serious place to be shunted aside or begrudgingly included for marketing reasons so the serious men in real computer science can get on with the actual work. HCI as “bless your heart” patronized computer science, peripheral, marketing fluff, a place to put the icky people contaminated with their different perspectives or humanities backgrounds or, gasp, politics. I would rankle whenever I was asked (and I was asked!), on my various (usually) ill-fated jaunts on the job market, to make sure that my work was “legible” to CS departments by including little winking asides that, sure, I primarily do HCI work and submit to HCI venues and work with other HCI people, but that’s just for show, and really, in my heart of hearts, I am actually secretly a database guy or a machine learning guy or otherwise “really” CS, with all that HCI stuff as a youthful phase I’ve outgrown, or a trick to convince people in information schools or science and technology studies to work with me (which they usually didn’t, by the way; it turns out that for those schools you need to perform the opposite bit of posturing to convince people that you’re not “really” CS, which I was never really good at either). I hated having to talk about myself and my work in that almost disparaging way, and would think less of both myself and the people I was pitching to whenever I felt like I had to do that kind of intra-departmental camouflage.

Connected with this “pink-washing” is the related phenomenon I’ll call “blue-washing,” where stuff that was formerly “just for girls” gets absorbed and masculinized and made “serious” once it gets prestigious or important or lucrative enough. Computer programming is the prototypical example here: from Ada Lovelace to the codebreakers of Bletchley Park to NASA’s human computer “Hidden Figures” to the iconic photograph of Margaret Hamilton next to the code for the Apollo program, computer programming was predominantly seen as women’s work, and was rewarded accordingly poorly compared to the (at the time) more masculine-coded bits like computer engineering and theorizing. Heck, the very idea that “software engineering” was a field of serious endeavor on par with other sorts of engineering is one of the many things that Hamilton had to fight tooth and nail for. The stereotypical men were there to build the machines and think about what to do with them, and the stereotypical women were just there to make the machines do what they were told by others; a programmer as a combination of a babysitter and a switchboard operator. Odd, then, that now that programming as puzzle solving is at the core of CS-related prestige and moneymaking, issues about whether women “really belong” in CS keep getting raised by the same set of profoundly boring, allegedly empirically-minded people. To keep up with the “painted pink” metaphor, for every “painting a ballpoint pen pink and charging more for it,” there’s an equal and opposite “painting wet wipes with camo and charging more for it.”

To me, then, HCI is suffering from both pink-washing and blue-washing. The care-focused bits get painted pink (“soft” science! not engineer-y enough!). The flashy bits get painted blue (serious stuff! engineering!) and then taken away from us. The gender parts of this analogy begin to break down here, but I am keeping the terms because I want you to keep in mind how these turf wars interplay with issues of gender and belonging (and race! and class! and caste! and…) in “core” CS. But pink- and blue-washing are part of my explanation of why, every six months or so, an AI or AI-adjacent person writes an editorial that goes something like “hey, this artificial intelligence stuff is changing society in a big way; we really should be studying both computers and people.” It’s not just Kissinger or other AI snake oil grifter-criminals writing these things either. It’s usually pretty smart people, people that I know for a fact are aware of the landscape of HCI as a field, that either themselves or through their students have made large and important contributions to the HCI literature. And yet, they write long op-eds about a need for a new science of “human-centered AI” or for a “historically new… human-centric engineering discipline.”

HCI/information school/science and technology studies (STS)/etc. people then take turns attempting to dunk on these people, claiming that the field they are asking for has been around for decades and maybe they should crack open a book sometime. But I never found those kinds of dunks particularly satisfying. To me, those op-eds are evidence that HCI, as an academic field of study, is either totally ignorable or seen as unfit for the purposes it was allegedly created to tackle. Part of the reason for this perceived gap in utility might be external factors: that HCI has been pink-washed so much that big serious people think they can totally ignore it, or, worse, discard it out of hand so they can build a blue-washed replacement that’s more to their liking. But I am more crucially worried about the internal factors: the way that HCI’s own self-conception might be positioning it poorly to make the kinds of big institutional changes I (we? they?) want to have.

I will admit that I am likely projecting here; putting a flashlight behind my own imposter syndrome and then asking everybody to be afraid of the big shadow monster that gets thrown up on the wall. But I can’t be the only one concerned that, in the shadow of ChatGPT’s omnipresence in AI discourse, one of Microsoft’s first moves was to fire its AI ethics team. In a blog post that nobody read because it was too long and not particularly insightful (this is emerging as a personal theme), I talked about how AI critics walk along a tightrope between “capture” (too beholden to the Powers That Be to make useful critiques) and “escape” (too far away from real problems to make useful critiques), but I guess I should have anticipated that, since all tech trends are ephemeral now, if your ethics team slows down the product even a little, you don’t get to walk that tightrope at all: off you go, along with all the other unprofitable drags on business logics.

What keeps HCI, then, at a distance? What strategies do others employ to keep HCI away from where it (at least idealistically) could do the most good? How, internally, does the self-conception of HCI as a discipline keep us separated from these same points of impact? The answers to most of these questions are “the omnipresent and interlocking structures of oppression and dominance that are an inescapable part of neoliberal capitalism,” but I thought I’d at least go a level of detail down from that answer to unload a bit more of my personal anxieties.

HCI as Sprinkles

The first potential issue arises as an aftershock of what I think of as “the great command line interface (CLI) wars.” The conflict, which reverberates today, is the extent to which the CLI is the place where “real” computational work gets done, with the graphical user interface (GUI) as a begrudging sop to users (and even then, with skepticism, since if they aren’t smart enough to use the CLI, should we really be trusting them with our software anyway?). There are two important psychological aspects to this debate. The first is that user interface design is the optional “sprinkles” you add on top of the computational cake after you’ve done the serious work of building out the “core functionality.” The second is a sort of desire to guard all of the power of computation from an onrushing “eternal September” of users who should not be trusted, with the extent to which they think or interact differently than you do serving as a sign of justified skepticism.

Aspects of this “sprinkle-thinking” occur periodically in my collaborations with other computer scientists, especially in my sub-field of visualization. The job of a visualization researcher is all too often envisioned as a service-focused afterthought: to make pretty pictures after the “real” scientists have finished the hard work of collecting the data. Noted statistician (and hyper-racist eugenicist) Ronald Fisher is famously quoted as saying “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” I would perhaps suggest that consulting HCI people only after the data set is collected, or the technological intervention proposed, and even then only to “make it pretty” or “nicer,” is to commit a similar sort of error. I’ve even had people act as though I should be thanking them for the opportunity to make pretty pictures, as though making a pretty picture is what motivates me! (It’s not; what motivates me at this point is a combination of spite, the sunk cost fallacy, and deep and occasionally paralyzing personal neuroses).

HCI Immediately Rolls Over

One response to the perception of being cut out of a good thing (say, the increasing power and prominence of computer science both within and without academia) is to sort of just immediately revert to lapdog mode and hope that you’ll get enough crumbs from the table to carry on. HCI has done this a few times in its history. The first is HCI’s tight connection with the military-industrial complex. There’s a reason so many early foundational papers are about task complexity and interface design: it’s because designing for cockpits and command centers was where all the money was. I find it interesting that many of these government sources and partnerships have largely dried up over the years. Maybe they got what they wanted, or maybe they realized it was faster and cheaper (and less noisy) to develop stuff elsewhere. After designing jet cockpits and dashboards, HCI then had its dalliance with the mobile phone industry, then a turn on the dance floor with social media companies, and now even those sources are cooling on the whole thing.

But the most recent example of rolling over, as far as I’m concerned, is the subordinate posture HCI has developed with ML and AI. I fear there are big parts of HCI that see themselves as fundamentally subservient to ML research: that the job of HCI is or will be primarily to explain ML models to users, to provide interfaces to the models that will be doing the “real” computational work, or to do little tweaks or interventions or audits on ML models to make them nicer or less biased or less harmful. To come back to visualization as an example, from Tukey on down, the premise of visual analytics was that you could use visualization to help people explore and make sense of their data. Now I feel many in visualization are willingly casting visual analytics as a temporary embarrassment to tolerate until machine learning models get good enough that the whole human exploration part becomes unnecessary. They’ve bought into the hype, in other words. Many are the visualization researchers (myself included, I should note) who have felt pressure to zhuzh up their personal statements or grant proposals with a few dashes of ML, because, well, that’s the big dog right now.

A related issue here is that our focus on nuance traps us into inaction. I will have to reset my big sign that mentions how long I’ve gone without quoting Yeats, but really there’s no way to state it other than “the best lack all conviction, while the worst // Are full of passionate intensity.” Academic HCI is often easy to dismiss because it makes itself super dismissable: unable to articulate a strong, uncompromising vision of the world, and so tossed aside or compromised by those that have no such qualms about laying out how they want things to be. It’s the sort of Chidi Anagonye issue where studying something, seeing its complexity and nuance, means that, when it’s time to actually do something, you don’t know what to do, or are unable to make a univocal and unfaltering choice. Or, worse, and this is where I think HCI can flounder, you get suckered along by people without such doubts or concerns.

Finishing School HCI

Another sort of way of defanging HCI, or otherwise putting it at arm’s length, is to treat the whole social and critical project of HCI as sort of a side effect or necessary toll to pay to support the “real” project of HCI, which is to turn out well-behaved apolitical front-end engineers who will help design systems to get people to click on ads. In other words, Big Tech might look the other way while some HCI professor somewhere writes anarchist critiques of surveillance tech or whatever so long as they keep turning out students that have no qualms (or are let out into job markets or life circumstances such that they feel they have no choice) about rolling up their sleeves and contributing to the adtech-riddled capitalist order. That’s certainly the impression I picked up from high-profile firings of critical AI voices like Timnit Gebru: go off and play with your toys and write your articles critiquing this and that, but don’t bite the hand that feeds you.

It’s not just industry-adjacent research where these pressures are felt. Academia, to the extent that it was ever anything other than a glorified multi-level-marketing scam/cult, is increasingly unable to suggest with anything like a straight face that most (or even many) of the people that pass through its gates will emerge as anything like independent academic researchers. Students are placed into situations of occasionally immense economic and personal precarity, asked to make sacrifices about where and how they live, all to prepare them for roles that, increasingly, do not exist as envisioned. Or they can instead get jobs that seem eager to have them, are backed by resources that make stuff like “paying out of pocket to go to conferences” or “living with roommates in your thirties” seem like bad dreams, but where they might be asked to do personally unsavory or unappealing things. It’s against this backdrop that I view many of the calls for ethical or principled or human-centered design (even my own): sideshows or off-gassing of a system for alienated labor that is otherwise ticking along smoothly as designed.

HCI as the Borg

There’s a twitter bot called “Everyword Exists at the Intersection of Art and Te[chnology]” that, well, does what it says: it periodically claims that my buffet or my terrorism or my what-have-you exists at the intersection of art and technology. The joke is that this “intersection” is so cliché as to be practically meaningless. You could live totally off the grid in a cabin in the woods and still make a solid case that whatever art you generate could be placed in those terms. Similarly, just as I used to take note whenever somebody would start their job talk with an (often totally context-less) graph of some curve like “data” exponentially increasing, now I take note when they start their introduction with a Venn diagram: sets labeled “computer science” or “design” or “psychology” or “statistics” with their own special research area just so coincidentally in the intersection of all of them.

The HCI Venn diagram has been getting larger and larger, and with it, the danger of losing its coherency, communicability, and, in some cases, credibility. If I self-describe as an HCI researcher, I then have to do the further political work of specifying, okay, what kind of HCI researcher? The kind that builds things, the kind that conducts quantitative psychology studies, the kind that writes philosophy papers, etc. etc. etc. This diffusion creates all of the standard issues of interdisciplinary work, but with the additional danger that not everybody is even pointed in the same metaphorical direction. And note that my little list is all about methods: are we focused on common problems, regardless of the approaches we take?

A second issue with this ever-expanding Venn diagram is that it makes HCI seem sort of… acquisitive. You may have noticed this already in the little story I told at the beginning to motivate my piece, about how HCI lies at the heart of a lot of thorny computer science problems. Does it, though? Is it really at the center of all of those Venn diagrams? Or does it just aspire to be? For every person like me who is tired of having to give perfunctory “no, I’m real CS!” pitches, I’m sure there are just as many people who feel pressure to put a perfunctory “no, I’m really human-centered!” in their job materials when they would rather just be left alone to work on advances in algorithms or complexity theory. I am worried that it is a mistake to look at a problem just until we find the human, and then assume that this makes it prima facie an HCI problem, and so naturally best addressed by HCI methods. Maybe the lens of how the digital and the human intersect is just a contingent one: is literature an HCI problem, just because most people use word processors? Will HCI ever “let go” of a discipline once the digital components of it become quotidian?

I will once again use visualization as an example. Visualization as the study of how to visually represent data has nothing fundamentally to do with computers, excepting the (admittedly non-trivial) facts that data are often (but do not have to be) stored on computers, and that visualizations are often (but do not have to be) designed and rendered using computers. Cartographers, for instance, were doing visualization research that predates computers as objects of study, and will likely keep doing so long after we’ve broken up all the tech monopolies and upcycled all of our smart phones into flower vases. There are lots of places visualization could have “lived”, as a discipline. Even within visualization, the link between visualization research and HCI research is a bit murky. So maybe my frustration as a visualization researcher is at being “captured” by doctrinal maneuvering that happened before I entered the field, with ripple effects that mean that even moving to other departments wouldn’t help, since those departments are forced to define themselves by the ways they aren’t HCI, with all of the aforementioned prickliness that comes with playing those sorts of games.

So What Do We Do?

The most obvious solution to these problems is to burn “it” all down, where “it” is academia, the tech industry, capitalism, the patriarchy, etc. etc. If somebody wants to engage in some “propaganda of the deed” antics and run some big magnets through some Silicon Valley data centers or whatever, I guess have at it, but maybe there are some steps we can take while we are waiting for the relevant revolutions to spin up. I think some short-term reform in this area is mostly about deciding to articulate a specific and uncompromising vision of what a human-centered relationship with computing looks like. Include some of the standard Steve Jobs “bicycle for the mind” crap if you have to, but suggest that there are real, material consequences for not thinking about the human-scale impact of systems, not just at the end of some process of “innovation,” but at all stages of the process of building technology. In other words, to make the strong claim that, if you’re working with computers, and you aren’t thinking about people, maybe even thinking about people first, you’re in trouble. Not just in a “you haven’t captured enough nuance” way or a “you are using a naïve theoretical lens” way, but in a way with real, material, ethical, and sociological consequences.

Another option is to link up with broader intellectual efforts and political struggles (this is something like a “take our ball and go home” option, but with the ability to keep playing the game, as it were). HCI not just in computer science, or as an appendage to computer science, or even as a commentary on or reaction to computer science, and also probably not as a blob that eventually eats all of computer science, but as a coherent and opinionated field of study. Build something like an explicitly feminist or anarchist HCI that is situated amongst independent structures of power and intellectual traditions beyond a bunch of nerds trying to build better fighter jet cockpits in the 80s. This could mean kicking people off of the “team,” as it were. Or it could mean incorporating people we’ve left out in the cold. Normally here is where I’m expected to call for more theory-building, but, if there isn’t a shared epistemic project in HCI per se (and I’m not quite convinced that there will or even should be), then I’m not sure what theory-building would even look like.

The last, and perhaps likeliest, step is to stop giving a shit. Stop trying to force yourself to belong in a place that doesn’t want you by compromising your vision of the world you want to live in. Conversely, stop trying to get your paws on disciplines that don’t seem to want to have you around as equal partners anyway. Find the problems in the world you want to solve, find the people you want to help solve them with, and, hey, if that results in making a few enemies, so much the better.

Thanks to Crystal Lee, Evan Peck, and Arvind Satyanarayan for feedback (but not necessarily agreement) on this post.
