Week 22: Undoing AR

Adam Greenfield on 1 June 2011

A week of subtle but terrifically important developments for our work, for networked cities everywhere and the people who live in them.

- Whether everyone present quite realized it or not, the first took place at the last-ever Mobile Monday Amsterdam, an event at which I had the privilege of speaking.

Held amidst the awe-inspiring beauty and occasional kitsch of a deconsecrated canalside church, this last MoMo brought luminaries like Jyri Engestrom, Ben Hammersley and Kevin Slavin together, alongside yours truly, to discuss what happens next when “mobile” per se is simply the way the world is. (Matt Cottam’s wonderful pictures of the event can be seen here.)

Everyone was on, but the day clearly belonged to the very last speaker. In a small way, being in the audience for Kevin’s talk put me in mind of the fabled Sex Pistols show at Manchester’s Lesser Free Trade Hall in June of ’76 — the history-making moment of which it’s famously been said that forty people were there, but thousands claimed to have been. What he had to offer was nothing less than a diamond bullet through the heart of augmented reality as it’s currently constructed, and if you happened to be possessed of a certain sort of antennae, you could feel things in the world shift around his words as he uttered them.

Now, I’ve long maintained that there is, and should be, no business model for AR. It’s, at most, a presentation layer, a set of instructions for the contingent rendering of information in a specified context; in this, it’s precisely analogous to CSS in its relationship to HTML. So while I could see a market for authoring tools or development environments, the idea of a dedicated AR browser has always struck me as a little foolish.

But I also happen to think that AR is a profoundly anti-urban(e) technology, and this is the real crux of my beef with its advocates.

It’s not that I’m unsympathetic to the idea of an intercessory intelligence, smoothing the burred edges of our engagements with the world. I happen to be more than a little face-blind, and I often inadvertently hurt or offend people I’ve met before by introducing myself with a hearty “Nice to meet you.” By happy coincidence, Nurri is a super-recognizer, and I’ve seen how often she charms people on second or third meetings — not merely recognizing them, despite significant changes in weight, hair length or context, but remembering the circumstances of their lives in a good deal of detail. It’s a faculty that lets her ask after a partner’s job search or a grandmother’s illness, genuinely and with feeling, and it’s easy to see how important that is to people. Who doesn’t like to feel that they’ve been vividly remembered?

So would I, personally, benefit from a layer that dropped a bright outline over the faces of those I encounter, identifying them and otherwise providing me with all the intimate details Nurri has at her disposal naturally? Of course I would. But I don’t believe that any current combination of technologies, physical platforms, databases and interfaces could deliver that information to me in a way that was less awkward and disruptive to the moment than my existing inability to remember people’s faces — and the same goes double or triple for information about cities, delivered in the mobile context. As currently proposed, the paltry AR layer interposes itself between you and your ability to connect with the much richer circumstances, events and textures of actuality.

Certainly as delivered through mobile devices, contemporary AR imposes significant limits on your ability to derive information from the flow of streetlife. It’s not just the “I must look like a dork” implications of walking down the street with a mobile held visor-like before you, though those are surely present and significant. It’s that the city is already trying to tell you things, most of which are likely to be highly, even existentially salient to your experience of place. I can’t help but think that what you’re being offered through the tunnel vision of AR is starkly impoverished by comparison — and that’s even before we entertain the very high likelihood of that information’s being inaccurate, outdated, or commercial or otherwise exploitative in nature.

Take, as an excellent example from a domain closely parallel to AR, the enhanced Streetside view Microsoft is just today bragging about, and more specifically the very segment of New York City they offer as a case in point. In just this one view — presumably more rigorously vetted than others they might have shared — the Pietrasanta restaurant clearly visible at the corner of W 47th Street and Ninth Avenue is misidentified as Amarone (and in any event the relevant tag denotes a location in the precise middle of the intersection), while the sole service Bing identifies on the block of 47th between Eighth and Ninth is Tahitian Noni Juice & Products…which ought to surprise habitués of the Starbucks on the corner. The point isn’t so much to mock Bing as to point out that the technical accomplishment inherent in these impressive representations of urban space is nowhere matched by a usefully accurate or comprehensive database of urban services. Even for midtown Manhattan, we’re a long way off from offering the pedestrian meaningfully augmentive information.

Kevin’s argument went much deeper than either of these charges, though. It drilled right down to the model of optics that’s inscribed in AR — the way in which it willy-nilly proposes the sovereignty of visual focus as a way of making sense of the world, to the exclusion of other channels of knowledge. It was a devastatingly thorough takedown, technically literate and culturally savvy, and I have a hard time seeing any way AR remains a viable enterprise afterward, either commercially or philosophically.

At any rate, it was both a pleasure and a genuine privilege to watch his argument unfolding in real time. For once in my life, I feel like I can say I was present for the actual moment a bubble burst; I hope Kevin’s talk goes up on video ASAP, so you can see what it looked like. My sincerest gratitude to MoMo’s Marc, Yuri, Sam, Martijn and Maarten for making that moment possible, and to Juha, Matt and Maia for holding this Amsterdam trip to the very high standard set by my previous adventures there.

- I alluded to “developments,” plural. The other majorish thing I wanted to share with you is that this week marks the launch of our initiative to keep the pernicious data-gathering advertisements we call “spyscreens” out of New York City.

As you’ll see in a more comprehensive piece we’ll be posting shortly, we think these screens produce absolutely no benefit to the public, while imposing considerable and insidious costs, and we want New York City to be the first municipality in the world to ban them entirely. (Spyscreen developer Immersive Labs predictably cites Minority Report in their pitch; what they’ve apparently failed to grasp is that the film is a dystopia. We haven’t.)

This is a Jane Jacobs moment for all of us: spyscreens are a blight on our city, and we hope that those of you who happen to be New Yorkers will join us in calling upon Mayor Bloomberg, our City Council, and our borough presidents to take swift and decisive action against them. (A further goal of the initiative is to help develop model regulation that will be useful to folks elsewhere who want to prevent such screens from appearing in their communities.) This is fundamental for us, something we’ll be focusing on alongside our other, more affirmative work.

- Logistics: M remains in Helsinki, doing yeoman work on Urbanflow HEL alongside Nordkapp’s Tia, Kate and Sami; I’m in the studio all week, spec’ing out Projects LAFAYETTE, GANSEVOORT and JANE — all of which you’ll be hearing much more about in short order — and hopefully meeting with Benedetta and Justin of GROUNDlab to discuss progress on PERRY.