Visualization Design Research: Catastrophizing, Collaborating, and Caring

Michael Correll
Mar 7, 2023


Painting of a hospital where a group of doctors are attending to a single patient who is sitting up in bed.
“Una sala del hospital durante la visita del médico en jefe” (“A hospital ward during the visit of the head doctor”) by Luis Jiménez Aranda, 1889.

This blog post accompanies a paper to be presented at CHI 2023, “Troubling Collaboration: Matters of Care for Visualization Design Study” written by Derya Akbaba, Devin Lange, Michael Correll, Alexander Lex, and Miriah Meyer. For more details, read the paper.

Researchers are occasionally asked (or evaluated on) how much they collaborate, and many universities and institutions spend who knows how much time, money, and organizational bloat trying to make more collaborations happen between academics who, one assumes, would prefer to be left alone. I always felt like data visualization researchers had a very good answer for the collaboration problem, which is that data visualization is just inherently interdisciplinary, or at the very least service-oriented: “after all,” is my standard cliché response, “I don’t have interesting data, it’s other people who do.” There’s only so much you can do with toy datasets; your eugenicists measuring flowers in the Swiss Alps or your log of Titanic passengers or what have you. Data visualization research just doesn’t function very well without consistent and continuous access to people with either interesting data or interesting things they want to do with their data.

Heavily influential in the space of what it means to do collaborative visualization work is therefore the design study. Sedlmair et al. define a design study as “a project in which visualization researchers analyze a specific real-world problem faced by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons learned in order to refine visualization design guidelines.” In other words, you (as a visualization researcher) go off, find somebody with interesting data-related problems, have fun with them, build them something, see if it works, and then come back and tell the class what you all did on your summer vacation. When these kinds of projects work, they really work. Some of my favorite visualization papers come from design studies (heck, the design study paper itself is one of my favorite papers), and the fact that the field is full of researchers who can both build and think about what they built is not to be underestimated.


I have always been uncomfortable and anxious about design studies, despite having been a part of quite a few in one role or another. My discomfort is, of course, not diagnostic of a universal concern: I am uncomfortable and anxious about a whole bunch of stuff, and I try not to make that everybody else’s problem. But I’ve now put together what I can only describe as three full-on public diatribes about the problems, so there’s at least something there. In particular, I have been worried enough to give public talks in front of all of my colleagues that focus on the following issues:

  1. Collaborative visualization projects can create weird power imbalances and extractive models of working (with other domains as “problem factories” rather than equal partners in collaborative research) that can make visualization a “bad neighbor” to the other disciplines it purports to be helping.
  2. It’s not clear that we reliably learn useful things from the actual systems we built as part of these collaborations, despite the “heroic” effort often spent on building them.
  3. It’s not clear that all of these collaborations, taken together, will add up to new generalizable knowledge. That is, while the collaborations might be great for the people involved, is the field progressing as a result of all of this work?

It was this background radiation of anxiety that I brought with me when I was asked on board the project I will discuss today, which was an attempt to actually perform some sort of post-mortem analysis on how all of these collaborations went. Were the collaborators in the applied domains still using the tools that the visualization researchers built (or otherwise still getting utility out of them)? Did the collaboration result in good outcomes, not just with respect to specific research problems, but for the people involved (professionally, pedagogically, or even economically)?

To find out, my co-authors interviewed the people involved, and in many cases were able to talk to the full “triad” of perspectives: that of the visualization researcher, the domain collaborator, and the graduate student who was doing much of the actual building. These interviews turned out to be a lot of work! Making sense of these interviews also involved a lot of work. Further chunks of work, still, were involved in figuring out what analytical and philosophical lenses to apply to all of this stuff when we were done. The first author, Derya Akbaba, goes into some of this gnarliness in her post about the work, which is great because that gives me leave to just opine (more) about it without fear that you’ll miss out.

But I do want to at least touch on one lens that we settled on very early in the project, which is that of care, and caring. Care ethics (to the extent that you believe “care ethics” picks out a particular ethical lens, or is the right term for what we actually focused on, etc. etc.; consult Derya’s post linked above for more discussion, but ultimately know that at one point we had people in five time zones simultaneously weighing in on this with dramatically different opinions) asks us to consider, as part of our ordinary ethical assessments, the relationships that we build, nurture, and structure over the course of our lives.

So to address the worries about collaborations I raised above, you need to consider the relationships involved, and who is caring for whom. For instance, my question “Do we learn anything from the systems we build?” requires a few stages of adjustment: first, to split that “we” into the specific roles involved (somebody in the role of a student might want to learn very different things compared to somebody in the role of the domain scientist) and second, to expand “systems we build” into a larger network of relationships between people. These relations are mediated by the specific technological artifact everybody is working on, sure, but they often spill outside of the borders of any particular tool or dataset.

With all of that in mind, what are we doing right or wrong (or carefully or carelessly) when we go out into the world, full of good intentions, to help somebody with their data by building them something cool? I’ll focus on a few tensions here that I kept coming back to. I’ll note that these are, again, somewhat idiosyncratic worries. The paper itself presents a more holistic (and more nuanced) view of how these collaborations work.

Designing for the Junkyard

Building something is tough. Maintaining the thing you’ve built is also tough, and a different kind of tough from building it in the first place. The “find somebody interesting, build something cool for them, and report back” model noticeably doesn’t include the step “and make sure they can keep using the thing you built for years, long after the visualization research paper has been published.” We found that many tools built as part of these collaborations were no longer in use, even only a year or two after the publication of the visualization paper about the system in question. Some tools were never in serious use. Some of this lack of use was technical (bit rot or changes to data pipelines) but it was also (maybe even mostly) personal: students would graduate and/or not be available to do maintenance work, collaborators would move on to new projects, grants would run out or relationships would fizzle out from mutual lack of interest or mismatched expectations. Sometimes these tools died unmourned, as experiments that didn’t pan out, or as time-boxed interventions that outlived their usefulness. But often (especially amongst students) there was lingering guilt or disappointment or friction or even outright anger over the “death” of a project.

So, what to do about this? One option would be to find ways to explicitly value maintenance work in research settings. I recognize that maintenance is tough, and occasionally unfair to ask from students who came to learn how to be scholars, not to be full-time one-person software companies. I personally think we need to be more honest about the fact that most of the tools we build as researchers have (and perhaps should have) pretty limited lifespans, and act accordingly. I’ve used the term “designing for the junkyard” to describe this phenomenon before: a clear-eyed notion that you will have to think of the research value of the tool in terms beyond its indefinite utility, per se, to a specific domain. What can we learn after a tool “dies?” What can we salvage from the “wreckage?” That means things like being able to reuse or recycle “pieces” (like specific visual designs or concepts) in new tools that may have nothing to do with the original domain, or being able to articulate a specific contribution to visualization research rather than just service to an applied domain (“we built a thing” is not a contribution statement, in other words: what can I learn about visualization as a whole from the thing you built?). I also think designing for the junkyard involves being more clear-eyed with our collaborators about what we’re building and why. What are people signing up for, when they agree to collaborate?

Setting Expectations

Part of building relationships of care is having good ideas of your roles and relations to others. Design studies make this difficult, either through mismatches in expectation, or the shifting of roles over the course of the collaboration. For instance, if you’re the visualization researcher, are your domain collaborators your clients (with an almost contractual relationship to build something that helps them), co-researchers (in that you are both engaged in solving the same research problems), or study participants (there to assess the system that you designed for some target population that just so happens to include them)? Or all of those things? Or none of them? Your responsibilities are dramatically different for each of these interpretations! And your collaborators may or may not have the same understanding that you do. This flux was often most keenly felt by the students doing the system-building work, trapped having to satisfy multiple “bosses” with mutually contradictory goals while trying to progress in their own personal and professional lives. But it was also felt by collaborators, who would describe situations that felt like a bait and switch: they would expect a continuously maintained and functional tool but receive a research prototype, or have to wait for features or bug fixes while the visualization folks went off to write up their paper, or even feel like they were nothing but passive providers of datasets and unpaid beta testers rather than active co-researchers. None of these situations is ideal, and many could be avoided by being more upfront (both with ourselves and our prospective collaborators) about what we’re doing and why.

I’m personally a bit skeptical that the outcome of a design study is really, in our heart of hearts, intended to be a professional, robust, and mutually interesting tool most of the time. If it were, why are these tools usually built by just one or two students, often in the middle of the process of learning about software engineering or visualization design? There were of course exceptions to this in our interviews: big teams (or even entire spun-up startups) run more like product development teams than research labs. But those produced their own relationship frictions as well: students may not have signed up for these sorts of service roles when they started the project, and bug fixes don’t result in long publication records. I think we need to stop marketing ourselves as the visualization superheroes who will swoop in and fix all of your problems with bespoke tools, and change our marketing to focus more on prototyping, ideating, and exploration. Who knows, getting people to change how they think about their data might have more longitudinal benefit for applied work than any one bespoke analytics tool might have.

So What Next?

This paper, and the interviews that fed into it, put me maybe closer to the pro-design-study camp than I was before, but by no means quieted all of the inner (and outer) doubts in my head. Among our interviewees were plenty of examples of care and caring that flew in the face of my dread of visualization researchers only sticking around to get a paper out of things before disappearing into the night, or fresh students thrown to the wolves and expected to act like underpaid software engineers for the rest of their graduate careers. People care for each other, and place value on that care even if it’s not explicitly expected or rewarded by the systems in which they are enmeshed. I think there are ways of tweaking those incentive structures to make this kind of care easier, though. Or more common. Or more visible. Or more evenly spread around in relationships. But that care is there all the same.

One place where this care is often not visible, however, is in the research artifacts of the design study, the hundreds of conference papers all of the form “System Name X: A Tool for Task Y in Domain Name Z.” These papers are just the visible parts of the big icebergs of work and knowledge that are involved in building, maintaining, and learning from harmonious collaborative relationships. The benefits and costs of these collaborations cast long shadows over all of the people involved that extend beyond the novelty or utility of the tools created. These parts are all important, and we should talk about them more, both in terms of failures and successes.

