In 2020, Sweden’s second-largest city, Gothenburg, introduced an algorithm to allocate junior high places. The idea was simple: rather than manually reviewing geographical catchment areas, an algorithm would efficiently assign children to schools based on a combination of distances, parental preferences, and school capacity. On paper, it seemed like a no-brainer for the municipality, reducing friction and politics. After all, aren’t algorithms already used to draw up timetables and class compositions in schools?
In practice, it was a different story. When parents received their allocations a few months later, the results were baffling. Children had been placed in schools that sometimes required crossing rivers and fjords to reach, miles away from home, in completely different neighborhoods. Parents complained, but the city suggested that they were being difficult.
Almost a year later, an audit revealed the problem: the algorithm had been instructed to calculate distances “as the crow flies” rather than by actual walking routes, despite the city being split in half by the river Göta älv. Close to 700 children spent the school year studying an hour’s commute away, taking spots intended for local children, who were then allocated to different schools, often further away, displacing more children in turn.
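The failure mode is easy to illustrate. A minimal sketch, using entirely hypothetical coordinates for a home, a school directly across the river, and the nearest bridge (these are illustrative points, not real Gothenburg addresses), shows how a straight-line (great-circle) distance can make a school look close when the actual route is several times longer:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('as the crow flies') distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical coordinates: a home on the south bank, a school on the
# north bank directly opposite, and a bridge crossing well to the east.
home   = (57.700, 11.960)
school = (57.720, 11.960)
bridge = (57.710, 12.010)

# What the allocation algorithm reportedly measured:
crow_flies = haversine_km(*home, *school)

# A crude proxy for the walk the child actually faces (via the bridge):
via_bridge = haversine_km(*home, *bridge) + haversine_km(*bridge, *school)

print(f"straight-line: {crow_flies:.1f} km, via bridge: {via_bridge:.1f} km")
```

With these made-up points, the detour via the bridge is roughly three times the straight-line figure; a real routing engine over the street network would show an even starker gap, which is the gap the audit found the algorithm had ignored.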
I read about this story in the Guardian,1 after a mother whose child was affected by the unjust sorting decided to take the city to court to demand accountability. She lost the case, as the court sided with the municipality, deciding that it was up to the plaintiff to demonstrate that the system had been unlawful.
The Gothenburg case is not an outlier. Algorithms already govern life-defining decisions: welfare claims, social housing, or mortgage approvals. And yet when I tell people I’m interested in mitigating AI risks in urban settings, I usually get the same answer. Surely algorithmic school sorting is the least of our worries? Shouldn’t we be more focused on AI’s genuinely catastrophic risks – engineered pandemics, autonomous weapons, the concentration of power in the hands of a small number of unaccountable technology companies? I take those risks seriously.
My honest answer is that we can’t think about AI without thinking about cities. Cities are where AI learns. The geographer Federico Cugurullo argues that AI is an urban phenomenon: it depends on cities as vast repositories of real-time data, and on the materiality of urban life to train and improve its own intelligence.2 In a sense, the city is not just where AI is deployed, but also what AI feeds on.
This dependence has produced what Cugurullo calls ‘city brains’: large-scale AI systems capable of managing multiple urban domains including transport, safety, health, environmental monitoring, and planning.3 The term is actually quite literal: ‘city brains’ lack physical bodies but possess appendages: CCTV cameras that act as eyes, sensors as ears, networks of perception that allow them to apprehend the surrounding environment and develop situational awareness.
None of this is entirely new. Geographers have been documenting what Graham called ‘software-sorted urbanism’ since the early 2000s – the way that code restructures urban space, extending rights and privileges to some, whilst simultaneously excluding others.4 Thrift and French wrote about the ‘automatic production of space’, the gradual ceding of active human involvement in the formation of space to software and algorithms,5 whilst Kitchin and Dodge coined the term ‘code/space’, the idea that environments are so thoroughly produced by software that they cease to function without it.6
What Cugurullo argues is genuinely new is the scale at which AI technology is currently being deployed in urban environments. The smart city, as Kitchin described it, was a ‘real-time city’: sensors showed what was happening right now, how much energy a building is consuming at this very moment, for instance.7 According to Cugurullo, AI urbanism has extended the timeframe into the future: it doesn’t just show what is taking place but anticipates what will.
This is the logic behind predictive policing.8 I’ve watched Metropolitan Police officers on Whitechapel Road film the entire market to identify individuals they believe are likely to commit crimes – not crimes that have happened, but crimes that a model has suggested might.
Which brings me back to why I think AI urbanism and existential risks aren’t separate conversations. Cities not only concentrate the infrastructure on which advanced AI systems depend, but also the populations most exposed to AI’s uneven consequences.9
Cugurullo raises the question of autonomy as a zero-sum game, since one person’s autonomy grows at the expense of someone else’s. We already know that urban software agents exhibit stereotypical, racist and sexist tendencies.10 How will we look beyond the algorithmic black box to prevent AI from generating uneven outcomes and exacerbating existing power dynamics?
We also know what happens when large technology companies move into cities without adequate governance frameworks – think of growing inequality, gentrification and racial displacement caused by the broader tech industry in the Bay Area.11 As London positions itself as Europe’s AI capital, will we have learnt from experience?12 What place are we ready to give private technology companies such as Anthropic and OpenAI within our cities, and if most urban AI tools are developed in the Global North, how will we account for the ethnic, cultural, political and ideological diversity of cities around the world?
In the original 1933 film, King Kong is captured on a remote island, brought back to New York, and exhibited as spectacle as proof of human mastery over nature. He escapes, climbs the Empire State Building, and is shot down by biplanes, as the city that put him on display destroys what it cannot contain.
I think AI could be our century’s King Kong. Cities around the world have adopted it as a solution to a long list of urban problems. But what will happen if, like King Kong, AI escapes our control and wreaks havoc in our cities? Will our regulatory frameworks and accountability mechanisms be strong enough to shoot it down?
These are the questions I plan to explore in this newsletter.