Visualization in the Wild: A Trip Report from VIS 2022

Michael Correll
15 min read · Dec 8, 2022


A lesser person might caption this “I had a whale of a time at the Museum of Osteology in OKC!” but I will refrain.

I recently (for some pretty elastic definition of “recent,” but I’ve been juggling priorities and this recap got dropped on the floor) attended this year’s installment of IEEE VIS, the premiere conference on information visualization. This was the first hybrid installment after two years of pandemic-induced purely virtual events, with the in-person portion hosted in Oklahoma City. I attended in person, eager to talk to actual human beings in a physical space.

First, some notes about hybrid conferences. After CHI 2021 I thought the intricacies of hybrid conferences were so interesting that I never actually got around to turning my notes into a public-facing trip report and just opined about conferences as a medium. After CHI 2022 I only took up half of the recap with similar thoughts. One hopes that eventually these things will get so standardized and quotidian that I will not feel the need to waste any ink reflecting on them at all, but I, at least, am not there yet.

I did not, as far as I know, get COVID at the conference. Some people did. I don’t really know how many. Fewer than at the CHI conference earlier this year, by both count and rate, but with all the possible grains of salt about those figures (huge differences in conference size, community spread, etc. etc.) that I can provide. My phone only gave me one exposure notification, compared to the (as I recall) three I got from CHI. I don’t really know what the relative risk was like, and I also don’t know how I ever would. I felt I had only limited control over my risk management, in any case. I often spent my coffee breaks by the railing of the outdoor patio, sipping tea with my mask around my chin and looking in at the knots of people conversing inside (however, I acknowledge that “awkwardly hovering on the outskirts of social events” is a pretty common experience for me, pandemic or not).

From comparing notes with folks who attended virtually, hybrid conferences are still often two conferences hastily stitched together. There was of course the usual parade of technical difficulties that comes with stress-testing ad hoc infrastructure, some minor and some major, but I still (perhaps naïvely) think those are just first-gen wrinkles that will get ironed out once everybody has enough of these conferences under their belt. Less obviously fixable was the lack of a feeling of a unified event with equal participation and value from both online and in-person attendees. The main line of communication between in-person and virtual attendees was a Discord that I almost never used. The organizers set up “virtual hallways” where remote attendees could connect and wave at people walking by, an attempt to recreate the sort of stochastic hallway interactions I enjoy, but these booths were hard to notice and often had technical issues (one virtual attendee reported desperately trying to get in-person attention and failing, resorting to ever more extreme arm motions and shouting; another reported what they thought was a full conversation with an in-person attendee, only to learn that the audio was not working and they had been pretty much talking to a bemused wall for five minutes).

A thing I liked (that other people seemed not to) was that, for this conference, the intended Q&A process was that everybody, in-person or remote, would submit questions to a per-room sli.do, where they would be upvoted and moderated. There were of course hiccups when this was actually implemented (some session chairs were better at following this process than others, some in-person questioners would grab the mic and start talking regardless of moderation, moderators would often miss the meaning of particular questions with no way for the questioner to repair, and the delay in the live stream meant that, even for written questions, in-person attendees had a head start, etc. etc.), but it’s so close to what I have always wanted from academic Q&A that I was excited to see it tried out. I especially liked how mad some people got about having to actually write out their questions concisely and having to share the stage with other people, like it was an affront to their natural rights as blowhards. Good.

Another thing I liked was that I could, as an in-person attendee, switch between modes. One afternoon I decided to watch the conference from the laptop in my hotel room, a welcome respite from wall-to-wall conversations with new people after a couple of years of talking to mostly the same set of people in mostly thirty-minute chunks. It was much easier to manage FOMO with that kind of flexibility.

Altogether, these observations (combined with the information that VIS next year, in Melbourne, will likely return to a no-live-stream status quo) sort of… disappointed me? I had sort of hoped that we could use pandemic necessities to rethink the academic conference as a venue, and produce more equitable and accessible forms of participation. But it looks like the general consensus was that all of this mucking about with live streaming and virtual attendance options was just a temporary embarrassment to be dispensed with at the earliest convenience. I think that’s a shame, and I hope that other communities don’t make the same determination. At the very least, I continue to hope that there’s less of a need for these hybrid events to be so “heroic”: held together by hope and duct tape, at great personal and monetary expense, and built up of one-off solutions stress-tested at the last minute. We’ve tried a few experiments; let’s get together and see what works.

Inset of Michelle McKinney’s “Scissor-tailed Flycatchers.” In contrast to the (real) skeletons in the header image, these are made out of “woven stainless steel and brass.” Even though it’s the Oklahoma state bird, I didn’t see any flycatchers. I did see lots of catbirds and crows, so not a total wash, birding-wise.

Okay, with that out of the way, time to discuss papers. It was hard for me to pull a theme out from this year. There was a lot of energy building around topics I care about, like visualization rhetoric and communicating to mass audiences, but I think I will wait at least a year or two before I start making pronouncements about how the field in general thinks about things. The list below will be more of a grab bag. On reflection, a commonality in what I chose to highlight (and the source of the title of this post) is messiness: that once visualizations (and how we evaluate them) move out of the lab and into the real world with all of its wild and woolly complexity, things start to break down.

One advantage of being so late with my report is that I don’t have to make the standard caveats about my limited perspective and obvious biases: you can go through the other trip reports and get a more expansive view all by yourself. Nicholas Kruchten has a trip report worth checking out that is somehow both more concise and more detailed than this one ended up being, and there was also a trip report in podcast form from the Data Stories folks with special guest Tamara Munzner that is worth a listen (she put together a big ol’ tweet thread of reflections as well).

But without further ado, here’s a (more or less unordered) list of papers and talks that I found interesting:

Visualization Design Practices in a Crisis: Behind the Scenes with COVID-19 Dashboard Creators

Yixuan Zhang, Yifan Sun, Joseph D. Gaggiano, Neha Kumar, Clio Andris, Andrea G. Parker

I will let the dust settle a little bit more before making a final gut check, but, for me, the academic visualization community does not have much to be proud of when it comes to our efforts during this pandemic. Here came along a vital need to communicate with data to wide audiences, to work our hardest to persuade and inform and potentially save lives and, as far as I’m concerned, we blew it. Our models about efficiency and precision didn’t cover the actual uses (or non-uses) of our COVID dashboards, our academic papers didn’t seem to provide much that the actual practitioners could use in their work, and our instinct at seeing all of this important data was to use it to test out our pet techniques or goose the “broader impacts” of our grant proposals. I still stand by what I suggested back in April 2020, which is that, in the absence of a tight connection with people in public health and public communication, our first instinct as visualization researchers when dealing with COVID data should be to shut the fuck up.

This paper was a look at the people who did have those connections to public health and immediate audiences, and how they designed the dashboards so many of us incorporated into our doomscrolling habits. What they found is fascinating: visualization as I think it really is, a push and pull between stakeholders, an exercise in rhetoric and practicality rather than a technical problem to be finally and unequivocally “solved.” There were two observations that particularly resonated. The first was how, for “real” visualizations, the practicalities of performance, tool availability, and tool knowledge (I’m not going to spend the night learning a new library or language if I’ve got to put something out in the morning) blow other concerns (especially the things that we seem to focus on in research, like support for novel visual designs) out of the water. The second was a phenomenon I’d observed when I was able to be a fly on the wall for the UW Climate Impact Group as they developed their public visualizations: the importance, when designing public-facing visualizations around potentially controversial topics, of minimizing the number of angry emails from cranks that you get. I don’t think the community has really internalized that visualizations can be contested, mis-interpreted, and mis-applied in ways beyond a rather binary notion of “deceptive” or “truthful” visualizations; the truth is vastly more interesting and more complicated. Anyway, lots of interesting stories in this paper: definitely something to stew on and/or prompt an existential crisis, if you are so inclined.

Do you believe your (social media) data? A personal story on location data biases, errors, and plausibility as well as their visualization

Tobias Isenberg, Zujany Salazar, Rafael Blanco, Catherine Plaisant

This was my favorite talk of the conference (maybe tied with the capstone, see below), almost entirely because, as the title said, it was a story. Tobias Isenberg had what seemed like a simple question (“where are there carnivorous plants in the wild that I can go look at?”) and ended up building a sort of data detective story, complete with analyses of time series data for suspicious patterns but, most crucially, actual hiking in actual woods to look at actual plants and landmarks. This was a Rex Stout dual model of data detective: the Nero Wolfe sitting at home looking at visualizations and logs and metadata paired with an Archie Goodwin who goes out (with wisecracks and savoir faire) to pound the pavement.

I think it’s this combination of both visualization from a distance and actual investigation that will be the future of actual data analytics. Nobody should be making decisions based on (just) a value in a bar chart without checking to see what, if anything, that bar chart connects to in the real world. But I will admit that, beyond sympathies with the methods used in this paper, I want to highlight this work just because I think more academic talks (and papers) should be stories with narratives and beginnings and middles and ends and morals and so on. There’s no reason they can’t be.

No Grammar to Rule Them All: A Survey of JSON-style DSLs for Visualization

Andrew McNutt

There has been a cottage industry of different specification languages for visualization, handling everything from specifying the visual design of charts, to their affordances for interactivity, to the statistical tests that evaluate them. Some of these languages interoperate (a lot of them build on/off/with the Vega grammar, for instance), but others don’t. Some have complementary capabilities, others tackle totally different design problems. If you’re a negative person, you can see this as a mounting pile of technical debt that means that anybody who wants to make progress in visualization has to internalize a chunk of different languages with different syntaxes and abstractions, not all of which will be maintained. If you are more positive (and Andrew McNutt seems to be, in this work), then that means that there’s a wide open space of new languages to design and conceptualize, and plenty of work to do to get them all to interoperate and actually be useful (and used) by designers.
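To give a flavor of the genre (a minimal sketch of my own, not an example drawn from the survey): in a JSON-style DSL like Vega-Lite, one of the languages in the Vega family, the visual design of a chart is declared as a data document rather than built up imperatively, which is part of what makes these languages so easy to both proliferate and compose.

```python
import json

# A minimal bar chart in the Vega-Lite dialect (my own toy example).
# The entire visual design is a declarative JSON document: data, mark,
# and mappings from fields to visual channels.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"category": "A", "count": 28},
        {"category": "B", "count": 55},
        {"category": "C", "count": 43},
    ]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}

# Paste the printed output into the online Vega editor to render it.
print(json.dumps(spec, indent=2))
```

The same declarative bones show up, with different vocabularies and different ambitions, across the languages the survey catalogs, which is both the opportunity and the technical debt described above.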

As with the paper above, I mention this work not just for its content but also for its delivery. In keeping with his paper from last year, Andrew McNutt accompanied this work with a zine laying out the major points of the paper. It’s cool. I’ve got it right on my shelf, next to the grab bag of other zines that I’ve collected over the years on subjects like workplace organizing, death, and trash. I like zines, I like public science communication, and I like when academic work isn’t a locked-away pdf and an ephemeral, half-remembered talk. So more stuff like this, please.

How Do We Measure Trust in Visual Data Communication?

Hamza Elhamdadi, Aimen Gaba, Yea-Seul Kim, Cindy Xiong

I’ve been blogging about conferences for at least four years now, and I think a paper by Cindy Xiong has shown up in every single post; I’m not certain how to interpret that other than to suggest that the kind of work she’s been doing, on how framing and visual metaphors and other minute design decisions can create important impacts down the line, is something that I find fascinating and something the community should be doing more of.

This work tackles a particular pet peeve of mine, which is that scientists of all stripes love to take big thorny concepts, reduce them down to simple abstractions in service of operationalizing them, and then conflate the abstraction with das Ding an sich. “Trust” is one of these concepts: in principle, trust is this complex, situational, hard-to-gain-but-easy-to-lose negotiation between people. You might trust me to make you a sandwich but not to perform open heart surgery, say. You might trust me to go up and sing during karaoke, but not trust me to do a passable job once I’m on stage. I can trust someone too much, or too little, given their level of actual honesty or reliability. Yet, all too often (and I think computer science papers are especially guilty of this), when we ask if people “trust”, say, an AI system, this is reduced down to a single 5- or 7-point rating, or a binary choice to agree or disagree with an expert system. This paper (from the BELIV workshop before the main conference) talks about the problems with these sorts of simplistic measures, and discusses some alternatives. The notion of a Bayesian model of trust (where we see how much information from a system causes us to update our priors) is, I think, an especially promising avenue of research that is already bearing fruit.

Dashboard Design Patterns

Benjamin Bach, Euan Freeman, Alfie Abdul-Rahman, Cagatay Turkay, Saiful Khan, Yulei Fan, Min Chen

A while ago, I was on a paper where we complained that dashboards were these omnipresent objects, one of the most commonly designed and consumed types of visualization in the world, and yet the academic community seemed to regard them with disinterest or, occasionally, outright disdain. We even got to end the paper with a section called “A Call To Action,” which was on my academic bucket list (alongside “owning a tweed jacket with leather patches” and “putting ‘Dr.’ as my title in random online forms”). But the central idea was that there was this particular visual genre of dashboards that needed to be understood.

Well, this paper studies that visual genre, from the usual suspects like the actual types of visualizations used in dashboards to the often-overlooked components like the arrangement of views and their narrative structure. They even made a cleverly-designed website that lays all of these forms out (including, in a zine-like twist, little compact printable “cheat sheets” for showing the space at a glance). The main weakness in this paper is actually partially my fault: they build off of our corpus of dashboards (supplemented by several dozen other examples, thankfully) to create their design space, and our corpus was an amalgamation of different dashboard examples we found personally interesting or informative (we had a dashboard of Chernoff faces in there, for pity’s sake) rather than anything particularly representative of common practice (let alone exhaustive of different designs). So I’m curious about how well the design space picked out here aligns with general practice, and if there are even wilder or wackier dashboards out there, waiting to be discovered.

Data Hunches: Incorporating Personal Knowledge into Visualizations

Haihan Lin, Derya Akbaba, Miriah Meyer, Alexander Lex

One problem with data visualization, maybe the fundamental problem, is that you’d have to be an idiot to blindly trust your data: real data is messy, biased, incomplete, and often totally irrelevant to your question at hand. Yet, visualizations all too often hide all of that messiness and present you with, say, a list of very precise-looking bars in a very precise-looking bar chart that does not give you a lot of wiggle room for interpretation or negotiation.

This paper presents an exploration of how we can add a little bit of this wiggle room back, and make visualizations a bit more bi-directional as a communication medium. They focus on what they call “data hunches,” which they envision as articulations of the expert’s prior knowledge about the data, but that’s a big umbrella, incorporating everything from very specific assessments of uncertainty or corrections of values to hazier concepts like collaborative doubt or agreement. What I found generative here were the visual designs for representing these hunches, these sort of sketchy in situ additions that make the formerly solid and uncontestable visualization look like an object in flux, marked up like a whiteboard during a heated brainstorming session, or a poem after a close reading. As a long-time proponent of the idea that people need to do this kind of annotation and marking in order to get their hands dirty and really make sense of their data, I want to see these sorts of designs and affordances in more places, and in more contexts.

“Galileo’s Telescopic Discoveries: Thinking Visually in the History of Science”

Kerry Magruder

As a habitual anxious over-thinker, I often like to reflect and provoke, but I’m never sure how effective these provocations are. I feel like it’s very easy to propose something that looks provocative but ends up either preaching to the choir, dressing up the status quo in radical trappings, or getting dismissed as shtick. The capstone talk here is my new model. It would have been so easy for Kerry Magruder, curator of OU’s History of Science Collections, to give a talk that was just “hey, we’ve got some cool books, wanna see ‘em?” They do, in fact, have some cool books, after all. But this talk went straight for the jugular, seemingly without effort. And all the while he was being very nice about it.

The particular narrative here is how Galileo was able to make his discoveries (like mountains on the moon) and convince others of their truth because he was not “just” a scientific observer, but was skilled in visual thinking and rendering, in thinking with visuals to learn, teach, describe, and explain, and that all of his exercises in perspective drawing, architectural rendering, and art were the “midwife” of his scientific discovery. With this interdisciplinary and practical background, de rigueur in the artisan schools of Renaissance Tuscany, and the resulting “Renaissance man” as a model, Magruder raised three questions that hit me like a ton of bricks (paraphrased and reframed):

  1. Do your visualization techniques have the same cross-disciplinary applicability as something like Galileo’s visual thinking?
  2. Pedagogically, what is your equivalent today of the artisan workshops of Tuscany? Is this conference something similar?
  3. Would someone like Galileo or one of the artists/engineers who taught him fit into your department? What would they advise students to learn?

I did not like my personal answers to any of these questions. I feel like computer science has an outsized influence on visualization research, producing narrow models of what visualization work looks like, how it is taught, and how it is valued. I feel like we do not turn out students with broad expertise in skills like visual design, experimental methodology, communication, and software engineering. And, even when these students do arise, we make it hard to place them where they will do the most good, or to reward them for their inter- or trans-disciplinary work. I want to build a field that lives up to the vision expressed in this talk, and I don’t think we are anywhere close yet. But I will admit a fatalist streak. Maybe you think differently, and future generations will look on your manuscripts with the same sort of reverence we now have for Galileo’s drawings of the craters on the moon.

Wrapup

I am once again over the limits of space and reader patience, so I will leave you with a chunk of one-sentence conclusions:

  1. Hybrid conferences are often messy in practice, but I think we need to keep as much of the hybrid modality as we can.
  2. The final paper is often the least useful or memorable part of an academic contribution: we need to value and reward the “paratext” around papers more.
  3. Academic visualization has a lot of work to do if it wants to provide useful and credible help to the community of people who regularly design visualizations, especially for high stakes and mass audiences.

See you all next year in Melbourne!
