This isn't really what I had in mind when I alluded to a half-written essay on tech and accessibility--that was going to be about new technology that actually makes stuff faster or easier, including generative models but also including power tools and kitchen appliances, as tools for increasing accessibility--but Mike Grindle's post about enshittification and then Adam Conover's interview of my beloved Dan Olson in an AI-hype-tempering crossover extravaganza both have me thinking about disability.

These are, in many ways, very similar texts making very similar arguments. They both draw connections between Web3, AI, VR, and more mundane examples of tech execs promising big and idealistic futures then ultimately becoming suspiciously scam-like. Mike is focusing more on older examples that were actually good once, while Dan and Adam are covering a newer realm of promises that never come to fruition at all. Neither one is really about disability.

But they kept talking about the ongoing vague disappointment of VR and I kept thinking about 2D glasses[1].

I work with a lot of pretty techie people, and I often disagree with a lot of them about what tech is cool and necessary. I can be a bit of a cynic. But nothing makes me feel like as much of an outsider to the tech research world (other than the brief moment that everybody was all in on NFTs) as virtual reality.

I have eyestrain, migraine, and motion sickness issues. My eyes are delicate flowers, and my vestibular system is made out of crepe paper. None of this is necessarily disabling in the day to day, thanks to a few one-off expensive purchases, a little planning, and religious dark-moding, but VR is bad for me. The goggles are bad, the rooms are bad if they move too much too fast. It's all bad. Even the high-end ones.

This sort of thing gives you perspective. When looking into a pair of glasses can trigger a two-day headache and make you lose your lunch, it makes you ask what the hell you're getting out of it in the first place.

Putting aside video games, where the accessibility conversation is ongoing and contentious, you're left with some artists and museums trying to offer tours of specific spaces to people who aren't nearby, while most of the remaining VR pundits are reinventing Second Life, but with your boss and ads this time.

Here's the thing: when a website or other software is on a 2D screen, it fits into the rest of your world. It occludes an incomplete portion of your vision. It's just another object. That's part of the point, part of the convenience. Some might argue it's why we're so prone to multitasking with said devices. The dream of immersive VR separates you from the physical world. It demands all of you.

Except... it's not mimicking the world very well. We've really only worked seriously on two senses, and even those ones have some problems.

I have a strong suspicion that the disconnect between the actual focal length a VR screen demands of my individual eye lenses to keep things in unblurry focus and the binocular focal length the goggles are mimicking is the thing that makes it an instant eyestrain trigger. So let's play the implementation detail game. What would it take to fix that?
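For what it's worth, this mismatch has a name in the vision-science literature, the vergence-accommodation conflict, and the arithmetic behind it is simple enough to sketch. The distances and interpupillary measurement below are illustrative assumptions, not figures from any particular headset:

```python
import math

# Illustrative assumptions (not from any real headset): optics that put the
# screen at a fixed focal distance of ~1.5 m, a virtual object rendered at
# 0.5 m, and a typical interpupillary distance of ~63 mm.
FOCAL_PLANE_M = 1.5   # accommodation: where the eye's lens must focus
VIRTUAL_M = 0.5       # vergence: where the stereo images say the object is
IPD_M = 0.063

# Optometrists measure focus in diopters (1 / distance in meters), so the
# accommodation-vergence mismatch is just a difference of reciprocals.
mismatch_diopters = abs(1 / VIRTUAL_M - 1 / FOCAL_PLANE_M)

def vergence_deg(distance_m):
    """Angle the two eyes rotate through to converge on a point at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

print(f"mismatch: {mismatch_diopters:.2f} D")  # ~1.33 D
print(f"eyes converge for the virtual object: {vergence_deg(VIRTUAL_M):.1f} deg")
print(f"eyes converge for the screen plane:   {vergence_deg(FOCAL_PLANE_M):.1f} deg")
```

The lens focuses for the screen while the eyes rotate for the object, two depth signals that agree everywhere in the physical world and disagree constantly in the goggles.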

Barring the possibility of more complex screens with the ability to shine different light in different directions based on where it would, theoretically, have been coming from, you'd have to put the screens further from the eyes. The "immersive rooms" are better, but to really fix the problem, you'd have to have something shining or reflecting light at the same depths as the objects in the virtual space.

Which is to say, you need a bigger room, and some furniture.

And I feel like this keeps happening to me when I imagine more fully immersive VR. The appeal of current VR tech, insofar as there is an appeal to people who aren't me, is that it allows you to mimic places that couldn't practically be made in your physical space for one reason or another. If we imagine all of the kit required to make physical feedback real, to allow you to walk around without hitting walls or fucking up your inner ear, to let me actually participate equally in this new world... we're back to making physical spaces. We're back to physicality, and physical forces. We're back to trying to identify every single human sense and perfectly replicate it, independently yet in sync, and our decisions about which senses do or don't matter will always unveil unknown sensory diversity and lead to some bodies becoming disabled by the new reality[2].

Google Glass, Oculus, PSVR, and all the others are ultimately just screens in front of your eyes. We know so much about what makes good design for screen-based interfaces, and it's not a 2D imitation of the physical world missing half the information that makes it navigable.

In short, I think we should design to the strengths of screens when we're designing virtual worlds, and that the frenzy for VR would die down a little if fewer people thought they were too good for LARPing.


  1. That is to say, a thing that people with sensitivity to 3D movies can spend extra money on in order to be able to participate equally in an experience that even most non-sensitive people agree is rarely worth the headache. That is to say, an ingeniously-invented solution to an accidentally-invented problem. That is to say, a way of making disposable glasses precious by making them "wrong". That is to say, an upcharge on an upcharge. That is to say, the symbol that lives in my heart for being left behind by (some small part of) society for novelty that no one is even sure they want. ↩︎

  2. There is still sensory diversity that is known and not designed for, of course. How is a blind person supposed to engage with the current visual-first virtual reality? Where are any of the cues they use to navigate their existing physical reality? This is hardly a new problem. In the US, we built everything for cars first, and now there are about four places my blind friends can live independently without a car. ↩︎