Folk Theories and Fallacies in Tech Ethics
You were not born knowing the theory of gravity, yet when somebody tosses a ball you know roughly where to go to try to catch it (or, at the very least, where to look to see where it’s going to land). Likewise, you do not have to totally understand the thermodynamics of a car engine in order to drive somewhere, or to know not to fill your tank with detergent instead of gasoline. Rather, this kind of stuff is powered by folk theories: the simplified models we have of the world around us and how it operates, often built up through intuition and inference. Folk theories, like all kinds of other heuristics, are often really useful! But folk theories, like all kinds of other heuristics, can also be pretty dangerous and incomplete. It’s nice that we don’t have to be philosophy professors to come to the conclusion that we shouldn’t indiscriminately murder and steal. It’s less nice that, say, we can be pushed around by propaganda or in-group bias or plain old ignorance into doing horrible things to each other.
There are several schools of debate about how these folk theories work in ethics, how “good” they are at approximating the moral principles we claim to have, and even about whether we can or should use moral principles at all or instead just rely on our off-the-cuff judgments (“intuitionism” and “particularism” being two relevant ethical -isms in this space). But I would hope that even the strongest proponent of the “self-evidence” or “non-procedural” nature of ethical decision-making would agree that there are situations where our folk theories do not avail us. They are fallacious or disingenuous or otherwise distractions from attempts to do good. Tech, and responses to burgeoning tech ethics, are where I am acutely concerned about the prevalence of folk theories of ethics, both because it shows a lack of inroads of ethical thought (in favor of “just winging it”) and because it’s a sign that people are deploying lots of rhetorical chaff to avoid having to self-reflect or otherwise take steps to do the right thing.
It’s these failure cases that I discuss here. These are patterns (I’m not certain whether all or most of them rise to the level of conscious belief) that I have encountered when observing how people think about ethics and technology: both in terms of how we develop and deploy technology and in terms of how critiques of, and responses to, ethical concerns in technology are raised or communicated. I have attempted, where possible, to connect each of these folk theories with a representative dril tweet, for the purposes of illustration.
Learned Helplessness
Ethics is a complex field. People can and do have very different (and very strong, and often contradictory) opinions about it. There’s no way to please everybody. One of the central conceits of The Good Place was that you can study ethics for a lifetime and still encounter major doubts and uncertainties when it comes time to make actual real-world choices. All of this messiness and uncertainty is very easy to weaponize into a sort of “aw shucks, who’s to say?” attitude. Doing the right thing is so messy, this folk theory goes, and there are reasonable people on both sides, so best to just not get involved (note that “not get involved” usually means “carry on with the status quo”). This is connected to a related theory, the slippery slope: if we have to do a thorough ethical deep dive once, soon we’ll have to do it for everything, and then people will be unable to do any other work, and wouldn’t that just be so much effort?
Listen, I’ll level with you: it is of course true that philosophy can be used to generate arbitrary levels of complexity and debate around ethical questions. But it can also be used to do that about, like, hot dogs. The existence of complexity or disagreement doesn’t give the automatic win to the status quo (nor does it inherently give the win to “disruption” either). But I see this sort of learned helplessness in the face of cogent and clear ethical petitions as all too common. Or, I think, perhaps a reluctance to even admit that there could be ethical arguments that are strong enough to motivate course corrections. If you open the door a crack for admitting that you should be held to your principles, then sooner or later you have to reckon with your entire self-identity and value system, and that’s a lot of effort when you are just there to make money or write papers or ship code.
Another place where I see “gee, it’s all so complex” as a folk theory to prevent changes or even ethical reflection of any kind is in the fallacy of dual use. A tech company creates something that seems to have a lot of pretty bad externalities (like, maybe they didn’t build the Torment Nexus, but it’s up there), and the response is something like “well, sure, it can be used for evil, but it can also be used for good!” The assumptions being that, therefore: a) it’s out of our hands what particular uses it ends up having, b) we can’t stop or even delay deploying the system, because then we would be preventing potential good, and c) since we are all good people trying to do good, if bad stuff happens it was all an “unintended consequence” and the designers can’t be blamed in any event.
That something is difficult or complex or requires expertise to assess does not remove its moral character. The right thing to do is not always simple, and the thing that is simple is not always the right thing. If consistent and thorough ethical oversight would make your technological innovation or business impossible, my response would be: good, it should be.
Let’s Just Be Rational
Another common assumption is that being upset means you’ve lost the argument. In syllogistic form, it maybe goes a little something like this:
- People who are upset are no longer dispassionate, or are otherwise “not thinking clearly.”
- The person arguing with me is upset.
- Therefore, this person is irrational and so likely wrong.
The consequences of this kind of thinking have been most obviously disastrous on social media and politics, where trolling your opponent until they get mad is then taken as a proxy for winning the argument, but it’s also present in tech ethics as a way to discredit or discount critiques.
In particular, it is perfectly natural that people would be upset by systematic harms arising from technology. Being furious would not even be out of the question. But making these harms legible to technologists often seems to require some kind of weird “rationality whisperer” who can appear calm, cool, and collected about injustice. You can perhaps speculate about what forms these “translators” take in organizations.
This shortcut is also present in bad faith debate patterns like sealioning or “just asking questions”: if you can be sufficiently pestering and annoying while keeping up a veneer of rationality, then successfully annoying your opponent makes you the “winner” of the exchange, even though you’ve shifted the intellectual and emotional burden squarely away from yourself. In general, this is a pattern of dismissal: an ethical harm doesn’t “count” if the people pointing it out aren’t sufficiently sanguine about it.
A person who is upset does not forfeit their moral standing. In fact it’s potentially evidence (if not necessarily conclusive evidence; people are mad about a lot of stuff) that there are active harms taking place that need to be addressed.
There’s No Law Against It
This one is a bit weird in terms of internal consistency, but to me it seems to be a conflation of what is legal with what is moral, in both directions. That is, that following laws is sufficient to be acting ethically, but also that, in order to be considered an ethical person, you have to be acting within laws. By “laws” here I don’t mean just literal legal codes, but also organizational norms or agreements or even just the general idea of “going with the flow.” Perhaps more expansively: just as acting too emotionally means that we can ignore your ethical appeals, acting outside of the system is similarly evidence that you can be ignored.
There’s perhaps an element where (temporal or geographic) absence makes the heart grow fonder. We often look to acts of defiance and revolutionary solidarity overseas or in our pasts as heroic, but are often more divided when that defiance happens in our backyard, or in situations where the history has not yet been written and the winners and losers assigned. Of course some of this caution is valuable (not everybody who stirs up trouble is a hero, and not all heroes stir up trouble), but there’s a gap all the same.
I acknowledge that breaking laws and traditions is not for everybody. Even if you think laws and customs are totally unmoored from ethics and values (which they aren’t), you still have to make tactical decisions about risk and benefit. People do this all the time without making a big deal out of it: stuff like speed limits and parking signs are often the first casualties of convenience. And at least from a look around the news these days, companies seem to be willing to break labor laws and antitrust laws willy-nilly if they think the payoff justifies the risk and expense. As a closing remark, I would also point out that what “counts” as breaking the law, let alone what sort of punishments happen for doing so, are not straightforward and objective facts about the world, but are contingent on who is doing the punishing and to whom.
This is all to say that the methods people use to bring up or protest ethical violations in tech are often criticized as a form of distraction from the ethical violations themselves. For instance, a company might get caught ignoring its own internal ethics team, and decide that the thing to do is launch an internal investigation to try to identify and punish leakers rather than, you know, quit doing bad stuff. Or, conversely, an organization might lean on adherence to the letter of the law or existing custom as justification that its ethical obligations are fulfilled. For instance, university researchers often act like the existence of IRB (institutional review board) approval is prima facie justification that their work is inherently ethical, and then act shocked when everybody gets mad at them.
It’s Just Data
If you are at all paying attention to the tech ethics space, I am sure that you have by now been exposed to the fact that data are not neutral and objective. I fear that this statement has become so common or abstract that you might think of it as some rarefied bit of academese, or something that is “technically correct” at the edges but is not relevant to your day-to-day life. But no, I really mean that every dataset is just something that somebody (or something designed by somebody) entered into a spreadsheet or a notebook, and has no more of a claim to objective truth or correspondence with the phenomenal realm than any other crap that you could put in a notebook. The doodles you draw when you’re bored in a meeting are on par, in this sense, with the most sophisticated models Amazon or whoever can run. Of course, some datasets can be more important than others (if I enter the wrong numbers on my tax forms, I could go to jail), and some datasets are more useful than others (reading today’s weather predictions might help me decide whether to put on a coat or not), but none of them have a lock on truth, and it’s not just a matter of “fixing” them to make them “truthier.”
Where the issue of data’s (non-)objectivity arises in tech ethics is in the connected rhetorical plays that a) data has no ethical character or import (that is, humans can be prejudiced or biased or evil, but once it’s all in a spreadsheet, we’re in the clear) and b) in any apparent ethical lapse, if it’s “in the data” then our hands are tied (that is, if there’s some apparently biased or unequal impact of a technology and it can somehow be justified or pointed to in terms of the backing data, then we’ve solved the problem).
For instance, facial recognition technologies often have poor accuracy on people with dark skin. One response to this fact is to go “ah, well, that’s an issue in the training sets, they have too many Caucasian faces in them” or “ah, that’s due to darker pixels being less differentiable since they have less dynamic range,” etc. etc. These things are true, as far as I know, but what I object to is the idea that once somebody has identified a plausible “data-based” explanation for the harm, the issue is somehow resolved. Even if you ignore the societal harms that led to the issues in the data (like the response curves of film being targeted towards whiter faces, or the lack of diverse training data, or….), the issues with facial recognition technology don’t go away by collecting more data and improving some metric in a table somewhere. I would suggest that the relative performance features of the dataset are maybe less ethically salient than the fact that any facial recognition system is used unquestioningly or without oversight to do things like flag people as gang members or terrorists, determine whether they are cheating on tests, or determine whether they are paying attention in class. More efficiently or more objectively doing an unjust thing is not necessarily a virtue.
As with my discussion of unemotional “rationality whisperers” above (people who are taken seriously in organizations because they aren’t seen as “too emotional”), I wonder if these appeals to data are also part of a similar rhetorical effort. If you translate things from an ethical problem into a data problem, then we don’t need any introspection or insight; we can just keep plugging away with technical fixes until the final dataset is “fair” and “unbiased.” It also means you can ignore people who don’t come to you with sufficient “evidence” that they are being harmed.
A problem doesn’t have to be “datafied” to be important. And, conversely, “good” data doesn’t always produce good outcomes.
Sympathetic Magic
The last folk theory I want to address is the one around a proposed “division of labor” in ethical thinking. Kate Crawford calls this “ethics at arm’s length”, but it goes by many other names like “I’m just an engineer” or “I’m just a researcher” or “we just provide the product, we can’t control how it’s used.” Hopefully the objections to this sort of thinking are clear to you by now: if you’re working with technology, you’re impacting society, and you don’t get to deny responsibility for that impact.
But a new wrinkle I have seen responds to this objection with something like the medieval abuse of Catholic indulgences, perhaps inspired by carbon offsets (as in the tweet “we’ve committed to purchasing ethics offsets, to be net ethical by 2030”), where there is license to be as amoral or ethically unreflective as you like in your particular start-up, research lab, or company so long as you can point to somebody, somewhere, who is doing ethical stuff (and often you are quite publicly proud of those people). Like, sure, we contributed to a genocide or two, but look at how many trees we planted!
The particular form that this folk theory takes seems to be in flux, but in many cases it results in totally disjoint centers of technical work and ethical critique, with the latter used as license for the former to run wild. For instance, you can have a computer science department at a university that is totally separate from the science and technology studies department, have an internal tech ethics research group that is allowed to generate as many thinkpieces as it wants but is not allowed to question the company’s cash cows, or throw a single ethics class or module into a much larger computer science curriculum that otherwise makes no mention of that material and assume that you are therefore turning out reflective and ethical computer scientists once they get their degrees.
There are serious and engaged people in all of these parallel structures, but it irks me to see their forceful and revolutionary proposals used as evidence per se that the system is working, or as some sort of begrudging tax paid out so that people can get on with the “real” work. I’m still not 100% sure why the doctors who look at my teeth are totally separate from the doctors who look at all of the other parts of my body, but that’s a quirky historical accident compared to the absurdity of cordoning off the process of building new things from the process of determining what kinds of harms those new things might do.
Wrapup
If I had to summarize all of these interrelated anxieties, I’d suggest: your guts are good. Trust your guts. But your guts can be wrong about ethics, just like they can be wrong about whether you should really have another cup of coffee even though it’s late. So don’t only trust your guts, and certainly not anybody else’s. Instead, be on the lookout for:
- Ways to be ignored without having your ethical concerns addressed. Somebody doesn’t have to ask nicely and politely to earn the right to be treated fairly. A harm doesn’t have to be unambiguous, or its redress uncontroversial, to matter.
- Ways to be placated without having your ethical concerns addressed. A table in a database somewhere is not going to be the thing that fixes society. Neither is a panel of luminaries at a conference. There’s a lot of work to be done, and somebody is going to have to do it.