Museums of Tomorrow Roundtable 2023 — words from ‘art museums and technological change’

Seb Chan
Apr 30, 2023

For a week in late April 2023, I was a guest of the Fine Arts Museums of San Francisco’s inaugural Museums of Tomorrow Roundtable, developed in partnership with András Szántó. Behind closed doors, 13 art museum directors from all continents collectively discussed how the field might respond to the seemingly rapid changes in computational technologies. We toured art museums and major tech companies, met with designers and engineers, and began to form a shared language with each other. The week concluded with a public symposium at Stanford University, where speakers were asked to respond to the question: “can technology transform systems of power within culture and its institutions?”

Here’s a slightly tightened-up set of words and slides from my talk. Whilst there is some new material, I have also integrated slides from other recent talks for the ICOM General Conference 2022 (Prague) and the tech conference Everything Open 2023 (Melbourne). These are not new issues for our sector.

Let’s go.

This is not the first time we have asked “will technology transform museums?”

Hopes (and fears) about computers in museums go right back to the 1960s. In fact, in 1968 the Metropolitan Museum of Art held a conference on ‘computers and museums’, and like many things from the late 1960s, we still feel its resonance today. By 1968 the University of Illinois’ PLATO [Programmed Logic for Automatic Teaching Operations], first developed for the ILLIAC I computer, was connected to 20 student terminals. We see the birth of user experience design desires here that still resonate now. One of the startups we met during this last week was making a new product called Plato, too. Its aims felt quite similar to those of its 1968 namesake.

But it turns out that art museums are surprisingly resistant to technological change. There are a lot of reasons for that.

The IT revolution promised by computerisation didn’t really amount to much in art museums. As a sector we were transformed far less than libraries were. In libraries this transformation was also one of professionalisation and standardisation. You might even argue that computerisation, professionalisation and standardisation worked together, supercharging each other in the logic of the late 20th century.

In libraries, this meant data was created, and metadata too. Lots of it. And activities were reorganised around it.

With their wide public interface and mythic appeal, libraries have been at the forefront of most of the questions we in museums currently ask ourselves. They have managed to handle ‘the digital’ at the core of their business better than us. Having dealt with their existential crisis two decades ago, with the arrival of web search and ebooks, they are quite transformed.

Public libraries, especially, have radically changed. Some now employ social workers and even lend out wireless internet connections and streaming subscriptions, doubling down on all the ways that the library can act as a provider of access to knowledge — with a strong emphasis on those who have been structurally excluded.

The public library of today offers a lot more than just access to books.

In Australia we talk a lot about the GLAM sector. That stands for galleries, libraries, archives and museums. It’s a neat way of saying that those four fields share a lot more now than they used to — even if sometimes we don’t think we do. If you’re wondering why galleries are there, it is because what you call an ‘art museum’ we call an ‘art gallery’.

The multimedia wave of the 1990s was a moment of CD-ROMs, interactive kiosks, and the notion of the museum collection as an ‘encyclopaedia’. It was a temporary bubble that didn’t fundamentally change institutions — it was able to be fitted into existing ‘publishing’ workflows.

Smithsonian’s mid-1990s ‘Museum Collection’ CD-ROM with ‘extensive 3D viewing’ and ‘more than 600 objects’

Products like this little eBay discovery from the Smithsonian promised a sparkly future whittled down to just 600 objects. (The first online collection project I did for the Powerhouse Museum featured 100 objects from a vast collection.)

The web was meant to effect more change than that.

I built some of that second wave of museum websites. By the time I found myself working in museums the opportunities of the web had begun to shift from the web as a publishing medium to the web as a social space.

We were optimistic. Maybe because at first it was ‘people like us’ who were online.

At the Powerhouse my teams did a lot of work on openness, open access, open culture, and Creative Commons. This was a ‘non-adversarial openness’ — a ‘view source’ model of culture that we had adopted from how we had learned to build websites ourselves. Contrast this with the more common situation where openness needs to be ‘requested’ (through Freedom of Information requests and ‘enquiries’). Setting our default position to open was seen as an undisputed good.

And it worked.

Powerhouse Museum’s Electronic Swatchbook (2005)

The Powerhouse Museum’s Electronic Swatchbook (2005) showed how digitising content from out-of-copyright historical books of fabric swatches at an item-on-page level brought new forms of access and opened up new vectors of use. It also piloted a search-by-colour feature — something we had been inspired to build by the interface design of early-era Etsy.
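For readers curious about the mechanics, here is a minimal sketch of how a search-by-colour feature like that can work, written in Python with Pillow. The folder path and function names are hypothetical illustrations, not the Powerhouse code: extract a rough dominant colour for each digitised swatch image, then rank swatches by their distance from a target colour.

```python
# A minimal sketch of search-by-colour over a folder of digitised swatch images.
# Hypothetical names throughout; not the Powerhouse implementation.
from pathlib import Path
from PIL import Image

def dominant_colour(path, size=(32, 32)):
    """Downsample the image and return its average RGB as a rough 'dominant' colour."""
    img = Image.open(path).convert("RGB").resize(size)
    pixels = list(img.getdata())
    n = len(pixels)
    return tuple(sum(channel) / n for channel in zip(*pixels))

def build_index(folder):
    """Map each swatch image filename to its dominant colour."""
    return {p.name: dominant_colour(p) for p in Path(folder).glob("*.jpg")}

def search_by_colour(index, target_rgb, limit=10):
    """Return the swatches whose dominant colour is closest to the target colour."""
    def distance(rgb):
        return sum((a - b) ** 2 for a, b in zip(rgb, target_rgb))
    return sorted(index, key=lambda name: distance(index[name]))[:limit]

# Example: find swatches close to a warm red.
# index = build_index("swatches/")
# print(search_by_colour(index, (200, 60, 50)))
```

Production systems tend to use colour histograms or perceptual colour spaces rather than a single average, but the basic idea is the same: turn each image into a small set of numbers and search over those.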

Powerhouse Museum’s OPAC2.0 launched shortly after the Electronic Swatchbook

A year later Powerhouse launched OPAC2.0, which inspired a vibrant features-battle between Powerhouse, Brooklyn Museum, and other in-house web teams at major museums. There were four years of friendly competition, with each institution’s team copying and improving on each other’s features and designs. There was rapid iteration, and a lot of what you still see as ‘features’ on museum collection websites dates back to what was being done between 2007–2010.

Visitors and experts from the far corners of the internet told us things about our collections we either didn’t know or hadn’t had time to formally document in online-accessible databases. They tagged our images when given the chance. They even connected our objects to other digital records and artefacts online. It was a vibrant time.

Social media also arrived, and for a few years it felt like a significant change was upon us.

Those initial steps of the social web coincided with a cultural turn in museums and the continued rise of the museum educator. ‘Education and engagement’ became intertwined buzzwords. Museums began to work with Wikipedia without suspicion, having finally realised that it was better to contribute to a networked encyclopaedia project than to try to build their own ‘authoritative competitor’.

All of this gave the impression that the optimism was warranted.

Powerhouse Museum’s immersive audio-visual experience designed by Jean-Francois Lanzarone with Paula Bray for the Bruno Benini exhibition (2010)

In exhibitions, this same digitisation and ‘atomisation’ of content allowed new in-gallery experiences to be built, like this mirror-and-projector environment made for an exhibition on Australian fashion photographer Bruno Benini at the Powerhouse in 2010.

When I moved to the Cooper Hewitt, the Smithsonian’s design museum in New York, many of the same methods were repeated with my teams there.

While the museum underwent a major bottom-to-top reconstruction over four years, the Cooper Hewitt collection was digitised and the in-building museum experience was reoriented around the affordances that a perpetually available digital collection might offer.

Local Projects developed the wallpaper room experience, using digitised wallpapers from the collection to recreate the ‘feeling’ of a room decorated in historic patterns. Looking back, it can be seen as a conceptual descendant of the Electronic Swatchbook.

For the new Cooper Hewitt online collection, Aaron Cope and Micah Walter built features that maximised the use of the scant metadata that was available — and reimplemented the colour-based navigation done several years earlier in the Electronic Swatchbook. Rapid mass digitisation of the Cooper Hewitt collection meant that even the scantest metadata had value once coupled with an image. Even where the textual metadata couldn’t be researched and expanded upon, new modes of discovery suddenly became possible.

The real transformation didn’t occur with the desktop web browser but with the arrival of a proper mobile web with the iPhone in 2008.

The iPhone also meant that visitors came to our museums armed with networked cameras. Within two years almost all those ‘no photography’ signs vanished. For a brief moment they were replaced with ‘no selfie sticks’ signs.

All that visitor photography was good word-of-mouth advertising.

The art lover’s social media feed is now filled with images of art in museums, in other parts of the world, that they will never get to visit. The experience of art in art museums has been transformed into one that is primarily (or at least initially) experienced through someone else’s images on a social feed. The art museum’s multi-sensory, three-dimensional experience is rendered in two dimensions through a high-resolution but small portable rectangle.

Pre-visit “augmented art reality”, if you will.

This has not been universally welcomed. I took some photos in the Kehinde Wiley exhibition at the de Young, and on the wall of visitor feedback in that same exhibition was this.

Visitor comment at the de Young Museum’s Kehinde Wiley exhibition, April 2023, photo by Seb Chan

You could claim that the smartphone was what really popularised the idea of ‘experience design’ within museums. For the first time, exhibition designers and curators had to pay real attention to visitors and how they behaved. Design firms that used to work primarily with science and history museums started doing work for art museums.

It turned out that this was a bubble period. It was the pre-pandemic era of low-interest-rate economics. Capital was cheap.

It is a bit different now.

The last week here in San Francisco and Silicon Valley has shown us that AI may be one of those inflection moments, like the arrival of mobile, that proves more rapidly transformative for museums.

ACMI in Melbourne has been experimenting with various AI tools for about six years: open source tools, commercial products, and R&D pilots with the major players.

We had a sense that change was coming, and in my pre-CEO role I wanted to ensure we could be on the front foot if and when that change eventuated.

In order to understand potential AI futures we need to dabble in the AI present. Understand the limitations. Understand which limitations are just those of data and computing power, and which are more fundamental.

The results have been mixed.

Using the off-the-shelf tools of the late 2010s to do object detection inside videos ended up with noisy results. The best outcomes have been marginally useful applications and a few silly, playful generative exercises.

Early experiments in object detection and generative descriptions at ACMI
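As an illustration of what those off-the-shelf experiments looked like in practice, here is a rough sketch (not ACMI’s actual pipeline) that samples frames from a video and runs a pre-trained torchvision detector over them. The sampling rate and confidence threshold are arbitrary assumptions.

```python
# A rough sketch of frame-sampled object detection over a video, in the spirit of
# the late-2010s experiments described above. Not ACMI's actual pipeline.
import cv2
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

def detect_in_video(path, every_n_seconds=5, min_score=0.7):
    """Sample one frame every few seconds and collect detected object labels."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    step = int(fps * every_n_seconds)
    labels_seen = set()
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                (result,) = model([tensor])
            for label, score in zip(result["labels"], result["scores"]):
                if float(score) >= min_score:
                    labels_seen.add(categories[int(label)])
        frame_idx += 1
    cap.release()
    return labels_seen
```

Even with a fairly high confidence threshold, approaches like this tend to surface generic labels (‘person’, ‘chair’, ‘tie’) rather than anything curatorially useful, which is roughly the noisiness described above.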

Our emphasis on valuing human curation has set a high bar for AI-based recommenders. In this instance, the personality and narrative of humans has traditionally won out over the at-scale applications of machines.

But I am conscious that we are approaching the future with the mindset of the present. “Past performance is not a guarantee of future results” as investment scheme disclaimers are compelled to warn.

Several weeks ago we launched an experiment in video search powered by OpenAI’s Whisper tools. This example shows how sometimes we can make significant leaps forward from older methods — a little like the long-term and unexpected value of the Electronic Swatchbook back in 2005.
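To make the shape of that experiment concrete, here is a minimal sketch using the open source openai-whisper package (a hypothetical stand-in, not ACMI’s production code): transcribe a video’s audio into timestamped segments, then run a naive keyword search over them.

```python
# A minimal sketch of speech-based video search using the open source
# `openai-whisper` package. Illustrative only; the ACMI implementation differs.
import whisper

model = whisper.load_model("base")

def transcribe_segments(video_path):
    """Transcribe a video's audio track into timestamped text segments."""
    result = model.transcribe(video_path)
    return [
        {"start": seg["start"], "end": seg["end"], "text": seg["text"].strip()}
        for seg in result["segments"]
    ]

def search(segments, query):
    """Naive keyword search over the transcript, returning matching segments."""
    q = query.lower()
    return [seg for seg in segments if q in seg["text"].lower()]

# Example usage (file name is a placeholder):
# segments = transcribe_segments("interview.mp4")
# for hit in search(segments, "projection"):
#     print(f'{hit["start"]:.0f}s: {hit["text"]}')
```

Because Whisper returns timestamps, even this naive keyword match is enough to jump a viewer to the moment in a video where something is said, which is what makes speech a genuinely new vector of access to a moving image collection.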

Undertaking this R&D work in-house has made staff conscious of the labour issues and extractive energy challenges involved in training our own AI models. Without this in-house work, our agency and internal technical literacy to make sense of it all in the future would be reduced.

Artist commissions have also been a useful means to begin the ethical discussions around use. Like many, we have asked: whose data, and whose training models? This also brings into question: whose ethics? Artworks like Joel Sherwood Spring’s DIGGERMODE (2022) have brought First Nations perspectives into these debates — how do we reconcile training datasets with extraction? There are increasing discussions about data sovereignty, the geolocation of cloud services, and the materiality of these technologies.

We know that language has power. It defines and shapes our realities and frames our ways of imagining the future. As such, language models are especially powerful. It is useful to think of Large Language Models (LLMs) as political infrastructure. And these services need huge computational resources, resources that are now only available to a few. Despite the rhetoric of decentralisation, there is enormous centralisation happening.

Public computational capacity is limited after decades of neoliberalism and under-investment in publicly owned infrastructure. In Australia — where WiFi was invented in our publicly funded CSIRO — this capacity may yet be revived, but elsewhere, YMMV (your mileage may vary).

As I continuously ask in talks these days — what new literacies do we need? How do the public, creators, and cultural workers distinguish modern technologies from magic? How might we shift our focus from technologies of information and data, to world-building, narrative, emotion and imagination? How might we better collaborate across the arts and cultural sector?

Just as there are multiple futures, there are multiple presents — and our experience may be very different to yours. We need to remind ourselves that we do have agency in this. There is no ‘natural’ arc to technological futures.

Seb Chan, Stanford, April 2023

