I’m not so sure about the visionOS Simulator

I am bouncing-off-the-walls excited about Apple Vision Pro. I’m most excited about the opportunity to use the device as multiple screens. I want to put my todo list on the wall beside me. I want to put a concert video on the wall in the corner as if I had a big TV. I want to have an infinite Figma canvas to play in. I’ve even started fooling around with Swift and 3D rendering again so maybe I can make some simple apps. I have plenty of ideas for web experiments I’d love to move into native code.

When I first launched the visionOS Simulator, I was blown away by how smooth and impressive it was. Xcode has iterated a lot over the years, and I’m impressed by how much power it has while still feeling simpler and more straightforward than other IDEs. Good for Apple! But then I struggled to get the Simulator to do things, so I gave up. Bugs are normal; that’s fine.

Then I saw a video of someone else getting the Simulator to do things I couldn’t do. For example, you can move windows around on the screen but only when you’re in the right mode. I was apparently in the wrong mode when I first tried, so I booted it back up again. Much better! Except …

The only thing I want to do is take a window and “hang” it on the wall like a painting. But I cannot figure out how to move things forward and backward along the Z axis. So I have a living room littered with a bunch of windows at strange angles, and nothing hanging nicely on the wall. It’s aggravating.

This doesn’t mean the headset itself will have the same problem. I assume moving things forward and backward in space is going to be pretty intuitive when you’re controlling them with your fingers, Minority Report-style. But gosh, the thing I’m most excited about trying in the Simulator won’t work, and it’s making me less excited.