Math is not neutral (neither are technologies, libraries, syllabi, textbooks, digital assistants, smart homes, etc.)

In this week’s readings on the “Internet of Things,” I was fascinated by the implications of the structural assumptions necessary for a connected world. Adam Greenfield uses two specific examples. The first, involving RAND’s assessment of fire department allotments in NYC, drives home the real-world costs of assuming that data is gathered from and organized through a neutral perspective. People can die or lose their homes or their livelihoods, even in a situation where all parties are, ostensibly, acting in good faith (an article in New York Magazine from 2011 discusses some of the lessons from the 70s that are still being learned in an era of budget cuts). The second, the Dutch persoonskaart‘s direct connection to the Holocaust, is especially chilling in light of current arguments over the 2020 census and its potential inclusion of a citizenship question.

Greenfield challenges the notion that technology develops along predictable problem-to-solution patterns. The trouble with this positivist theory is, of course, that there can be no single solution or set of solutions for the myriad problems posed by cities and everyday lives. Each individual user or client of technology developed for their convenience uses it very differently; no single algorithm can address our always imperfect understanding of an unpredictable and imperfect world.

This makes sense even when some solutions seem intuitive and reasonable, because what is intuitive and reasonable for one person may be exclusionary for another. I tend to use Alexa as a voice timer while cooking because I don’t like the oven timer. Others have Alexa play “Despacito” when something sad happens. Still others activate their Echo by saying “Computer.”

Another good example comes to me from my long-time favorite ride at Disney World, the Carousel of Progress. And, yes, my fave is super-problematic. It’s a misogynistic, materialistic ode to super-capitalism…but the song is catchy.

The last scene in Carousel is relatively new. The family we’ve been following every ten years throughout the 20th century has now skipped ahead into the semi-present. Grandma sets the high score on a space shooter game controlled by a VR headset and what looks like an old NES Power Glove, and Dad atomizes the turkey in his voice-controlled oven. It’s the most dystopic version of the Internet of Things, played for a laugh. I kinda love it and hate that I love it at the same time. It’s telling, I think, that the futurist ideal (and, now, the retrofuturist aesthetic) is so dependent on smart technology that listens and learns.

But what Carousel really tells me is that the needs of this upper-middle-class white family are only sort of met by technology that was designed specifically for them, as if their version of the world and the home and the family was the universal one. Even the bots and animatronic people in Carousel come pre-programmed with institutionalized structural privilege.

David Golumbia explores this idea to a fuller extent in a recent post on Medium on the inherent biases of Artificial General Intelligence and AI. For Golumbia, AGI represents a stage beyond machine learning: machine consciousness. Yet he’s right to recognize that the theorists and most prominent promoters of AGI are also closely intertwined with white supremacist arguments about race and intelligence. It’s that last word, intelligence, that is at stake. An AI, in the sci-fi sense, can be a machine person, fully intelligent and conscious. But to define what is meant by intelligence is to be overly reliant on measures of human intelligence, most of which have troubling origins in structural and societal racism. From a perspective in the humanities, when we think about AI, AGI, smart devices, learning devices, or the Internet of Things, it is important for us to consider exactly what we’re talking about.

More soon!