Some Criticisms of Tech Criticism

Michael Correll
21 min read · Nov 12, 2021
[Image: painting of the Greek philosopher Diogenes living in a barrel, holding a lamp, and surrounded by dogs. For some reason he has clearly defined muscles and is not covered in dirt; sort of a Hollywood version of Diogenes, I think.]
Diogenes looking for an unbiased machine learning model. “Diogenes” by Jean-Léon Gérôme, 1860.

Whatever you do, you’re not going to have complete freedom — you’ll be muzzled in one way or another, but at least you can have some diversity in how you’re muzzled.

Timnit Gebru

The title of this post should give you an idea of the base level of cynicism with which I am approaching this topic, but the positive project I am undertaking here is to get you to ideate a bit about what tech/data criticism ought to look like as a discipline, practice, or organizing force. And, once you’ve done that, to ask you to look at the gap between “is” and “ought” and try to make a critical tech movement that lives up to your expectations. To that end I will explore several modes of failure that others have proposed have, will, or might happen within the space of tech criticism, which I will very broadly and opaquely define as “socio-political critiques around the development and deployment of new technologies (especially computing), or the critical theory around those critiques.” As a spoonful of sugar to help that particular medicine go down, I will begin with a brief argument for what I think the tech criticism project is about and why we need it. I will occasionally swear, also.

Why Tech Criticism?

[T]he computer has acted as fundamentally a conservative force, a force which kept power or even solidified power where [it] already existed.

Joseph Weizenbaum

Technology is about power. It always has been. It is about magnifying the impact and abilities of people, from the plow to the cotton gin to the neural net. Wherever there is power, there are questions about how that power is to be distributed and used. Answering these questions is both a political and an ethical concern (and refraining from asking them implicitly answers them with something of the form “however the current people in charge tell us to distribute and use it,” by the way). So we need, and have always needed, oversight and critique of technology for the usual Spider-Man reasons: with great power comes great responsibility.

We want a strong tech criticism project in the current historical moment for both strategic and tactical reasons. Strategically, there are potentially transformative technologies at play (often around machine learning, social networks, and automation) that have had radical impacts on high-stakes ethical questions of human dignity and flourishing (around warfare, around work, around privacy, etc. etc.). Tactically, we are in the midst of a historical moment (the so-called “techlash”) where there is both an emerging public understanding that our current structures around the use of technology aren’t working (or are working only for a certain privileged few with certain combinations of race, gender, and class), and a growth of organized efforts to resist, reform, or rethink the current technological order.

Altogether, there is both a need to establish our values (and to create policies that instantiate those values), and an ongoing opportunity to change things for the better in concert with other big social movements. These efforts can be bundled together or oriented around many causes, from “data ethics” to “AI ethics” to a more general “tech ethics” to efforts like “data justice” or “data protest” or “algorithmic justice” or any of a number of terms that I will be rather slapdash about using in this post.

Given all of that, here’s where we might screw up. Taking a page from Sage Lazzaro, who is concerned with “independence vs. integration” in AI ethics, I have divided my critiques into two non-exhaustive classes: capture and escape. To use an extremely forced analogy whose presence in this final draft is a result of a lack of imagination on my part, imagine we are trying to launch a satellite into a precise geosynchronous orbit. If we don’t have powerful enough rockets, then Earth’s gravity will draw us back and we will eventually crash into the ground. By contrast, if our rockets are too powerful, then we will reach escape velocity and fly off into interplanetary space, never to be seen again. In the same way, I view two potential categories of bad ends for tech criticism as a movement or discipline: either being captured by the very same big tech companies or hegemonic structures we are supposed to be critiquing, or moving so far away from the political or technological realities on the ground that we cease to have any impact whatsoever. There are many kinds of capture and escape, however, and it is in those nuances that I will spend most of the rest of this post.

Capture

Powerful companies like Google have the ability to co-opt, minimize, or silence criticisms of their own large-scale AI systems — systems that are at the core of their profit motives.

Alex Hanna & Meredith Whittaker

I am taking a very expansive definition of “capture” here, from the literal control of organizations around tech oversight or criticism to some fuzzier Deleuzean notion of “reterritorialization” (this is the last time I will mention Deleuze in this post, please don’t bail on me this early). This looseness with definitions extends not just to the verb “to capture” but also to the nouns of “capturers.” Tech ethics is captured by corporations when, for instance, a company gets to investigate itself to determine wrongdoing, when a select few companies fund (or employ) all the researchers investigating them, and when those same companies directly craft the government regulations that are passed against them (with or without the assistance of the “revolving door” of people moving between government and industry). But tech ethics efforts can also be captured by more fundamental and diffuse structures like capitalism or nationalism, patronage or patriarchy. For instance, many of the pioneering and most prominent voices around the potential harms of the uncritical use of AI (Ruha Benjamin, Joy Buolamwini, Timnit Gebru, etc. etc.) are women of color, but spots on AI advisory boards and panels and inches in newspaper columns on this topic are just as likely to go to Henry fucking Kissinger.

Overt capture is of course a concern (maybe the biggest one), and I don’t want to diminish how important it is that there is independent oversight of technology with the actual power to effect change, but I think there are more subtle forms of capture and control that are just as worrisome.

The Legitimacy Trap

By legitimizing [human computer interaction] and its role in technology production in terms of user experience, user delight, and user acceptance — which were only ever means toward other ends — we have ceded the space from which we could argue for the considerations that were actually at the center of the discipline’s ambitions.

Paul Dourish

Here is my understanding of what is meant by “legitimacy trap,” as grafted by Paul Dourish and others from its original milieu into the tech sphere:

  1. A discipline has big, reformatory goals. For instance, we might be interested in applying lessons from sociology, psychology, and public policy to create an equitable and empowering work environment.
  2. However, this discipline lacks an inroad into the halls of power: it lacks legitimacy in the eyes of, say, businesses.
  3. An opportunity arises, via crisis or some other obvious display of value. For my example, all of those sociologists and public policy folks might look to the American business environment of the 1960s and 70s and say “hey, all of those lawsuits you are facing for Title VII violations and hostile work environments and stuff like that, we can help you with that. Let us in to help train managers and shape standards, and you’ll be sued less.”
  4. This inroad of “legitimacy” calcifies into a much more modest mission. For instance, instead of being a big reformer for an equitable environment, you are now a “human resources department,” and a main part of your job is to stop companies from being sued by their employees. Those grandiose goals you had? Not really part of your job description anymore.
  5. The field is now “trapped” by its own success, and no longer has the ability to intervene in any role outside of the foothold role it used to gain legitimacy in the first place: “Improve working conditions? That’s not your job, you’re here to stop us from getting sued.”

Academic disciplines like Human-Computer Interaction and skillsets like User Experience Design, the very places where we’d expect to do the work of making things fundamentally better for people who use or are subject to technological forces, are in the midst of their own legitimacy crisis. From lofty goals of human flourishing assisted by computing technology, a large part of UX effort now appears to be devoted to getting people to look at ads, to buy things they don’t want, and to not cancel services they inadvertently signed up for, often using slimy “dark patterns” to accomplish these goals. As per Jeff Hammerbacher, “The best minds of my generation are thinking about how to make people click ads. That sucks.”

I am worried that tech criticism is falling into a similar legitimacy trap, where it focuses on very narrow projects that are, at best, only pieces of larger ethical projects, sees initial buy-in from corporations, but then becomes a victim of its own success by achieving this buy-in for only those narrow projects. That is, I don’t think it is totally inconceivable that “data ethics” in the public or commercial spheres becomes concerned with three (and only three) projects: 1) enforcing minimum compliance (or perhaps non-compliance, but in areas where they are unlikely to be sued) with data privacy or data collection legislation like the GDPR, 2) “de-biasing” data sets according to a very narrow set of technical or statistical specifications, and 3) “ethics washing,” where these efforts are used as evidence of “good faith” or “good intentions” to avoid more restrictive regulation or bad press.

The reduction of ethics efforts in machine learning down to “fairness” or “bias” already shows the narrowing of the tech ethics project in that domain. “De-biasing” is often conceived of as a technical fix to a technical problem: the problem is “fixing” existing datasets (by tuning models, by removing “sensitive” features, or by collecting more data from people), rather than determining whether the datasets need to exist, or whether the models being built from these datasets are producing outcomes that are anything other than “absurd.” You will not be able to “de-bias” (in the technical, ML fairness sense) your way into collecting less data, or increasing personal privacy, or putting a stop to fundamentally unethical work.
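To make that narrowness concrete, here is a minimal sketch (in Python, with made-up data and helper functions of my own invention, not anyone’s actual pipeline) of what “fairness” often gets operationalized as in practice: a single group-level statistic, such as the demographic parity difference, that can be computed and minimized without ever asking whether the underlying system should exist at all.

```python
# A minimal sketch of how narrowly "fairness" is often operationalized:
# one aggregate statistic over model outputs, and nothing more.
# The data, groups, and rates below are made up purely for illustration.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive decisions per demographic group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = approve / flag / hire / etc.) and group labels.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
predictions = (rng.random(1000) < np.where(groups == "A", 0.6, 0.4)).astype(int)

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")
# A "de-biased" system is one where this number is small -- which says nothing
# about whether the decisions themselves are just, or whether the system making
# them should exist in the first place.
```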

Keyes et al.’s “A Mulching Proposal” takes these frameworks around bias to their absurd conclusion: just because, say, the robot drones that come to kill us also kill a representative proportion of people from different races or genders doesn’t mean that we’ve created “ethical tech” — maybe the thing to do would be to not make the killer robots at all. To use a slightly less extreme example, Facebook often faces accusations of surfacing too much (or too little, depending on who you talk to) extreme conservative content. The solution here is not, in the largely misspelled words of a dril tweet, to go “turning a big dial taht says ‘Racism’ on it and constantly looking back at the audience for approval like a contestant on the price is right,” but to, you know, have the courage of their convictions (instead of carving out special exceptions to their terms of service whenever they are afraid of getting bad press). But if all you can operationalize is “debiasing” and “fairness,” then you are only going to get outcomes in those technological and mathematical algorithm-tweaking terms.

(Techno-)Capitalist Realism

[T]echnical fairness, like rhetorical equality, cannot sustain its legitimacy in the face of de facto inequality, widening disparities in wealth, unequal access to basic medical and other care, and the differential impacts of climate change […] recognizing, visualizing, and caring for difference is ultimately insufficient if it leaves dominant logics and structures intact.

Anna Lauren Hoffman

From the British tradition of servants and masters switching places on Boxing Day down to more quotidian practices like casual Fridays, hegemonic systems have attempted to create “release valves” for the revolutionary pressures that build up over time (spoilers for The Matrix Reloaded, I guess). Mark Fisher refers to the notion of “capitalist realism” as an understanding that there are no viable alternatives to the current capitalist system, and so even anti-capitalism within such a “realistic” viewpoint acts to reinforce hegemony, as an “unrealistic” outlet that shows just how unimaginable change is, or (at best) a reform effort that is there to smooth out some of the rougher edges. These release valves might look like they are creating threats to the system or otherwise working to upend societal structures, but they are in fact just part and parcel of keeping the existing system alive without making any substantive changes.

For instance, one might think that oil companies like Shell would spend all of their time denying climate change, trying to get people to keep using oil and gas, and putting political pressure on governments to avoid transitioning to renewable energy. And, of course, oil companies are doing that kind of stuff. But Shell also spends a lot of money and time and effort thinking about “green energy” and climate change. This means they get to have it both ways: benefit from the fossil fuel boom while also monetizing the crisis and being ready for the shift to solar and wind with loads of existing infrastructure and market positioning.

I worry that this is the fate of many tech ethics efforts, especially around AI and ML: the aforementioned danger of “ethics-washing,” where the presence of internal ethics efforts is just a marketing attempt to avoid having to make real changes, or to benefit from a perceived marketing boom around ethics (in the same way many companies in agriculture benefit from the substantial grey areas or misinformation around what “organic” means; you could imagine a tech company might similarly benefit from slapping a big “fair and ethical” label on their ML products). These efforts are to some extent designed from the ground up to prevent transformative changes; I am worried, as per Z.M.L., that “[i]t is easier to imagine Facebook causing the end of the world than it is to imagine the end of Facebook.”

I also worry that the mere presence of these token ethics efforts can defang causes that require revolution rather than reform. “There’s no need to regulate us or disband us, look at how many ethics oversight committees we have” is a rhetorical play that is probably already being made and I’m sure will be made more in the future. But there’s also a play that I have seen just as often that is more in the capitalist realism lens, of “it’s totally unreasonable to not use technology at all; best to just work with us to make it a little better.” Liz O’Sullivan has a talk where she refers to “the Devil is in the Dual Use,” where there’s a rhetorical push to highlight the “dual use” properties of technology (that all technology has both good and bad applications) as a way to paint reformists or critics as naïve or unrealistic. E.g. you can use a broom to hit somebody over the head, but you can also use it to sweep the floor, and wouldn’t it be silly to try to ban brooms? And of course it’s impossible to make a broom that is totally bonk-safe and still useful, so it’s silly to demand that I stop hitting you on the head. The almost insulting truism that technology can have both benign and malicious uses is used to elide having to take any particular responsibility for how technology is used, while simultaneously sidelining external critics. It’s “once the rockets are up, who cares where they come down, that’s not my department” with a few extra steps.

Criti-Hype

The problems I explore below develop when people begin working on the ethics and governance of technological situations that aren’t real — and not just “aren’t real” in the sense [that they] aren’t yet real but aren’t even realistic projections of where the science and technology is headed.

— Lee Vinsel

I’ll end my section on capture with something that straddles both elements of capture and escape, namely what Lee Vinsel (rather controversially) called “criti-hype,” which is where tech critics largely buy into the hype and bullshit of the domain they are critiquing, and so turn their focus away from (or inadvertently create or exacerbate) actual problems of immediate or imminent concern. In Maciej Cegłowski’s words, “we obsess over these fake problems while creating some real ones.” For instance, tech ethics researchers might spend all of their efforts trying to determine how to make autonomous cars ethically decide how to value different groups of people when the car is forced into an unavoidable collision, but that’s not actually the current ethical issue with autonomous cars: it’s stuff like the fact that they keep slamming into white trucks after confusing them with clear horizons, that they kill pedestrians they fail to see in time, and that they require lots of rare earth elements and dehumanizing labor that we can’t seem to stop trying to start wars over, etc. etc. By buying into the hype that fully autonomous self-driving cars are just around the corner, tech critics are to some extent distracted from the problems we do currently have.

When we succumb to criti-hype, we focus on the wrong problems, play to sci-fi fantasies, and ignore the actual problems we are facing as we march to the beat of whatever the press agencies or vision statements tell us is going to happen in the near future. To be even more direct, while there will be lots of meaty ethical issues to tackle when deciding how to build optimal artificial intelligences, right now the main problem seems to be that machine learning doesn’t really work a lot of the time (or “works” only in the sense that it is a thin fig leaf over a bunch of human menial labor or decision-making), and yet still demands to be taken seriously as a decision-making tool or a rationale to conduct yet more data collection or surveillance. But even with ML that (largely) doesn’t really work for important problems, we still have to deal with the inequalities and inequities that are associated with all of this flawed ML architecture, like the immense sociological and ecological footprints of collecting all of this data with which to make bullshit models, and then using the resulting bullshit models for bullshit ends.

I should note that criti-hype is not just an academic failing, but shows up whenever we let hype rather than reality capture our concerns. E.g. the U.S. military-industrial complex is really convinced that we are in an “A.I. War” with China, which seems to be mostly an excuse to fund arbitrary pet projects and put the word “cyber” on powerpoint slides, and industry is really certain that it needs to invest in “data science” even when it isn’t really sure what that practically entails.

Escape

My career is possible in part because of an intensely honed ability to see and work with patterns. This is arguably why I’m employable at all. That same ability, applied to anything outside of my papers, is largely rejected as being misguided or “in my head”. In other words, even after being a world expert with a long and productive career, I still have not made it to a stage in tech where my thoughts or analytical abilities seem to particularly matter.

— Margaret Mitchell

There are many negative responses to tech criticism (and tech critics) that I don’t really find interesting enough to devote much in the way of column inches to. The first is a weird sort of obsequious bootlicking where people ride or die for tech billionaires under a conflation of net worth with intelligence. The second is a “but you also participate in society, curious”-level burn: that because we haven’t moved into a Unabomber-style cabin in the woods, we’ve lost the ability to critique the technology we see around us. But a level up from those, and one that it’s just as tempting to glibly put aside, is “well, what have you even accomplished, anyway?” I think this is an objection where those folks have a point: it’s hard to look at the growing net worths of tech CEOs, public approval ratings for big tech that blow away those for governments or scientists, or the continued omnipresence of largely unrestrained tech in our lives and think that we are on the right trajectory. Books are being written, awareness is being raised, documentaries are being filmed, but what real changes are in the pipeline, and how do we expect to enact them?

It’s this tension that I am talking about when I mention “escape”: that the levers we are pulling on and the levers that create change are different; that we are working on the wrong problems, or at the wrong level of abstraction, or with the wrong languages and audiences. That we can be dismissed as luddites, or dwellers in ivory towers, or not sufficiently “realistic” or “nuanced” (or the opposite, that we are not sufficiently idealistic or are too nuanced) to make the changes we need to make. While some of these criticisms are made in the same kind of bad faith as the ones I mention at the top of this section, I still can’t help but be convinced that real change in how our society uses technology is not going to come from a university or a think tank (and certainly not from blogging).

Lockout

The reasons why some technologies live and others die are not strictly technical, but political […] some technologies are sponsored by the advertising industry, while others are constrained by a neocolonial trade embargo. Some are backed by the Pentagon, others crushed by the Vatican.

Rodrigo Ochigame

One of the things that makes captures like the types described in the first half of this post happen is a sort of good cop/bad cop routine. If you play nice with big tech, you get all sorts of rewards (money, publicity, job security… whatever your heart desires). If you don’t, then you get all of those nice things taken away (at the very least). One of the very mildest things that gets taken away is access to internal data (or the computing and storage resources to process such internal data even if you had it). If you want to study how a tech behemoth like Facebook or Google impacts the world, then you often need their data. If you aren’t on the inside, then you don’t get that data, and then they get to say that your analysis is flawed or incomplete and discount what you have to say (although note that this sometimes happens even when they do claim to be releasing their data).

I’ve now heard some form of this story a couple times from a couple people, but I should at least preface it by saying that it’s hearsay. The non-hearsay part is that in 2012 Facebook, in collaboration with social scientists, ran an experiment on “emotional contagion,” where they manipulated 700,000 or so users’ feeds to present mostly positive or mostly negative words, and then looked at the emotional valence of those users’ own posts to see what impact the manipulation had had. The aggregate effects were statistically significant but very small, but the uproar around the study itself (that Facebook was “manipulating emotions” and “transmi[tting] anger”) as well as the seeming lack of internal review or participant consent were both relatively big deals at the time. Now for the hearsay part: you might think that the bad press would encourage Facebook to reconsider these sorts of studies or at least change their internal review standards. But the lesson that Facebook seems to have learned is don’t work directly with social scientists, it leads to bad press. After all, Facebook’s “move fast and break things” style at the time meant that they were performing dozens if not hundreds of simultaneous A/B tests of different interfaces or sorting algorithms or what have you, some of which probably involved manipulations that were just as drastic if not more so; the only difference about this particular one is that, since the academics needed to publish and publicize their work, people heard about it and Facebook got in trouble.
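(As an aside on “statistically significant but very small”: with samples in the hundreds of thousands, even practically negligible effects sail past conventional significance thresholds. Here is a minimal sketch, in Python with entirely made-up numbers rather than the study’s actual data, of that phenomenon.)

```python
# Illustration (with made-up numbers, not the actual study data) of why a tiny
# effect becomes "statistically significant" when n is in the hundreds of thousands.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 350_000  # per condition; the real experiment involved roughly 700k users in total

# Pretend outcome: percentage of positive words in a user's subsequent posts.
control = rng.normal(loc=5.00, scale=3.0, size=n)
treatment = rng.normal(loc=4.97, scale=3.0, size=n)  # a 0.03-point shift

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"difference in means: {treatment.mean() - control.mean():+.3f}")
print(f"p-value: {p_value:.1e}")  # tiny p-value despite a negligible effect size
```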

I’ve heard several similar forms of this, from startup CEOs to librarians (and, er, from professors): don’t work with academics, because they don’t care about “shipping code” or “real problems” and will just slow down your tech projects or make noise. And the message coming loud and clear from Timnit Gebru’s and Margaret Mitchell’s firings from Google Research, but also from earlier shutdowns of researchers or research arms in places from Xerox to Apple to IBM, is that somebody up in the corporate hierarchy is looking at the costs and benefits of having all of these researchers around, and when that balance sheet doesn’t add up they will not hesitate to cut them loose. To the extent that tech criticism is fueled by/enmeshed with academia (I dither on this point), we have to accept that corporations are often just not interested in what we have to sell, have no obligation to listen to us, and have the ability to completely ice us out from impact at will. The ways around this are either to find an “in” (although this might result in a “legitimacy trap” as discussed above) or to develop enough independent power that you can’t be ignored or overlooked (which often involves “playing politics,” or other sorts of worldliness that academics often find distasteful, especially the “apolitical subjects” that are turned out by computer science programs).

Ideal Theory

Ideal theory, I would contend, is really an ideology, a distortional complex of ideas, values, norms, and beliefs that reflects the nonrepresentative interests and experiences of a small minority of the national population — middle-to-upper-class white males — who are hugely over-represented in the professional philosophical population. Once this is understood, it becomes transparent why such a patently deficient, clearly counterfactual and counterproductive approach to issues of right and wrong, justice and injustice, has been so dominant.

Charles W. Mills

The recently departed Charles Mills took issue with what he called “ideal theory” as a method of doing ethics. Just as the old joke about physicists and their abstractions has them “assume a spherical cow in a vacuum,” ideal theory in ethics works from idealized assumptions: e.g., that each of us is an atomized but perfectly rational individual subject, and that our decisions are about choosing from an array of potential options in order to make the most ethical choice. We are never tired or poor or placed into unjust or unequal societies, or grappling with unjust or oppressive institutions, but can decide on matters of right and wrong abstractly, perhaps while puffing on a pipe and sipping a glass of port. “Ah,” after some contemplation, “it would be better for me to go to the symphony tonight than to stay in, since I had promised a friend I would, and it is good to keep promises, and it would also cultivate my aesthetic senses besides.”

Where these idealized models might fail us in tech ethics is where they do not capture historical structures of oppression, differences in our capacities, and the ways we are (or are not) protected or guided by governments and institutions and existing social structures. But to me a particularly damning indictment of the ideal theory lens of ethics as a guiding force for tech criticism is that the core ethical principles at play in tech are rarely in serious doubt. That is, we (for a pretty expansive definition of “we”) know that it’s a bad outcome that, say, the police can coerce an AI product that purports to detect gunshots into dramatically changing its predictions to suit the narratives that the police want to push. We know it’s a bad outcome that schools are forcing kids to install spyware and then accusing them of cheating for moving their eyes too much, or living in homes that are too noisy, or having poor internet (that is, if their skin is not too dark to even show up at all, or if the software even lets them take the test at all rather than just assuming their grade). It’s a bad outcome that the facial recognition technology that police use to put people in jail is so flawed that it routinely flags lawmakers as gang members or drug dealers. While an ethicist might be able to eloquently expand on the particular unethical aspects of these outcomes, it’s not clear to me that that is where the real heart of the debate is. To me, it’s about overturning entrenched power structures, reforming or disbanding corrupt institutions, and creating a society where these kinds of regular harms are not tolerated.

In short, the argument I’m adapting from Mills here is that the problems raised by tech criticism are often not really ethics problems, but socio-political ones. So rather than asking, e.g., “how should a particular tech company CEO best act in order to produce the most ethical outcomes?” the more apt question might be “why do we live in a society where I have to care about the ethical commitments of a particular tech CEO?” That a particular algorithm is biased or unbiased is perhaps of less importance than the fact that I could lose my job or my liberty or my life based on what a number in a computer happens to say at a particular time.

Wrapping Up

That radical critique of technology in America has come to a halt is in no way surprising: it could only be as strong as the emancipatory political vision to which it is attached. No vision, no critique.

— Evgeny Morozov

I spent a lot of words above but really I think my objections can be summed up in a few short sentences:

  1. Tech companies won’t save us. They/we have a perverse set of incentives; their interests are often just about appearing to do good, and even those interests can shift with the market.
  2. Academia won’t save us. Their/our incentives are differently perverse, but are still just as entrenched against making fundamental structural changes, especially when doing so involves action against the systems that fund or support their research.
  3. Our current governments also won’t save us, for pretty much a straight-up combination of the first two rationales above.

The situation is a bit less hopeless than laid out above.

Firstly, some of the points I raise above are, I think, a touch stronger than they need to be: e.g., I think Vinsel’s “criti-hype” notion overstates the extent to which academics buy in to corporate speak, and suggests that they can’t walk and chew gum and attack multiple problems at once; and I think Mills’ “ideal theory” undersells the ability of ethics, as a longstanding and evolving intellectual discipline, to deal with contemporary issues of justice and power. So my endorsement of some of these criticisms is not whole-hearted, and the existence of critiques is not evidence of total non-utility.

The second ray of hope is that, even to the extent that you buy my objections, the verb phrase I repeat above is “save us”: there’s no reason these have to be unipolar all-or-nothing efforts: “un-captured” academics may help shape government policy that provides oxygen for reform efforts in industry, say (hey, it could happen). And of course existing corporations, governments, and universities are not the only levers for political and social change: to me it is telling that many of the recent “wins” for tech critics (such as climate pledges, or bans on facial surveillance) are the fruit of pressure brought to bear by mass movements and unions, issues raised by activists rather than executives.

So we have tactics at our disposal that at least have a hope of working. But these successes are incomplete, fragile, and have to be hard-fought each time. The tech revolution is not going to be won by a whitepaper or a press release. We need to clearly and passionately articulate a vision of a better world (and not just a techno-utopian one) and then fight for it.
