Foveated streaming! That's a great idea. Foveated rendering is complicated to implement with current rendering APIs in a way that actually improves performance, but foveated streaming seems like a much easier win that applies to all content automatically. And the dedicated 6 GHz dongle should do a much better job at streaming than typical wifi routers.
> Just like any SteamOS device, install your own apps, open a browser, do what you want: It's your PC.
It's an ARM Linux PC that presumably gives you root access, in addition to being a VR headset. And it has an SD card slot for storage expansion. Very cool, should be very hackable. Very unlike every other standalone VR headset.
> 2160 x 2160 LCD (per eye) 72-144Hz refresh rate
Roughly equivalent resolution to Quest 3 and less than Vision Pro. This won't be suitable as a monitor replacement for general desktop use. But the price is hopefully low. I'd love to see a high-end option with higher resolution displays in the future, good enough for monitor replacement.
> Monochrome passthrough
So AR is not a focus here, which makes sense. However:
> User accessible front expansion port w/ Dual high speed camera interface (8 lanes @ 2.5Gbps MIPI) / PCIe Gen 4 interface (1-lane)
Full color AR could be done as an optional expansion pack. And I can imagine people might come up with other fun things to put in there. Mouth tracking?
One thing I don't see here is optional tracking pucks for tracking objects or full body tracking. That's something the SteamVR Lighthouse tracking ecosystem had, and the Pico standalone headset also has it.
More detail from the LTT video: Apparently it can run Android APKs too? Quest compatibility layer maybe? There's an optional accessory kit that adds a top strap (I'm surprised it isn't standard) and palm straps that enable using the controllers in the style of the Valve Index's "knuckles" controllers.
> Foveated streaming! That's a great idea.
Back when I was in Uni, so late 80s or early 90s, my dad was Project Manager on an Air Force project for a new F-111 flight simulator, when Australia upgraded the avionics on their F-111 fighter/bombers.
The sim cockpit had a spherical dome screen and a pair of Silicon Graphics Reality Engines. One of them projected an image across the entire screen at a relatively low resolution. The other projector was on a turret that pan/tilted with the pilot's helmet, and projected a high resolution image, but only in a perhaps 1.5m circle directly in front of where the helmet was aimed.
It was super fun being the project manager's kid, and getting to "play with it" on weekends sometimes. You could see what was happening while wearing the helmet and sitting in the seat if you tried - mostly by intentionally pointing your eyes in a different direction to your head - but when you were "flying around" it was totally believable, and it _looked_ like everything was high resolution. It was also fun watching other people fly it, and being able to see where they were looking, and where they weren't looking and the enemy was sneaking up on them.
I'll share a childhood story as well.
Somewhere between '93 and '95 my father took me abroad to Germany and we visited a gaming venue. It was packed with typical arcade machines, games where you sit in a cart holding a pistol and shoot things on the screen while the cart moves all over the place simulating a bumpy ride, etc.
But the highlight was a full 3D experience shooter. You got yourself into a tiny ring, with a 3D headset and a single puck held in your hand. Rotate the puck and you move. Push the button and you shoot. Look around with your head. Most memorable part - you could duck to avoid shots! The game itself, as I remember it, was full wireframe, akin to Q3DM17 (The Longest Yard) minus the jump pads, but the layout was kind of similar. The player held a dart gun - you had a single shot and you had to wait until the projectile decayed or connected with the other player.
I'm not entirely sure if the game was multiplayer or not.
I often come back to that memory because shortly after, within that time frame, my father took me to a computer fair where I had the opportunity to play Doom/Hexen with the VFX1 (or whatever it was called), and it was supposed to revolutionize the world the way AI is supposed to now.
Then there was a P5 glove with jaw dropping demo videos of endless possibilities of 3D modelling with your hands, navigating a mech like you were actually inside, etc.
It never came.
That sounds like you're describing Dactyl Nightmare. [1] I played a version where you were attacking pterodactyls instead of other players, but it was more or less identical. That experience is what led me to believe that VR would eventually take over. I still, more or less, believe it, even though it's yet to happen.
I think the big barrier remains price, and experiences that focus more on visual fidelity than gameplay. An even bigger problem is that high-end visual fidelity tends to result in motion sickness and other side effects in a substantial chunk of people. But I'm sticking to my guns there - one day VR will win.
It is precisely that! My version was wireframe and I can't recall the dragon, but everything else is exactly like I remembered it!
For me this serves as an example.
A few years later the VFX1 was the hype; years later, Oculus, etc.
But 3D graphics in general - as seen in video games - are similar: minus the recent Lumen stuff, it's still techniques from Graphics Gems in the 80s-90s, just on silicon.
Same thing is happening now to some degree with AI.
Nah, people spend 700 on consoles; the biggest barrier is comfort.
As long as the headsets are heavy, I won't get one, no matter how great the graphics are or how good the game is
And even more so for people with corrective lenses and/or weird eye behaviors.
Didn't stop me from getting two different Oculus headsets (and some custom corrective lense inserts) but ultimately, comfort is what made me give up.
Bigscreen Beyond 2 is 107g
1400 Euro, yeah... No... I would like something light, with just enough power to stream from my PC via WiFi.
That's it. No idea why something like this doesn't exist. (Or does it exist and I just don't know about it?)
I expect part of it is that the contemporary recommendations for VR are extremely meaty - something like 2160x2160 and 120Hz with stereoscopic rendering, meaning you're rendering every frame twice.
That's more than 1.1 billion pixels per second. At 24 bits a pixel that's something like 26Gb/s of raw data. And that's just bandwidth - you also need to hit that 120Hz of latency, in an environment where hiccups or input lag can cause physical discomfort for a user. And then even if you remote everything, you need the headset to have enough juice to decompress and render all of this and hit these desired throughputs.
I'm napkin mathing all of this, and so I'm sure there have been lots of breakthroughs to help along these lines, but it's definitely not a straightforward problem to solve. Of course it's arguable I'm also just falling victim to the contemporary trappings of fidelity > experience, that I was just criticizing.
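The napkin math above is easy to check in a few lines (same assumptions as the comment: 2160x2160 per eye, two eyes, 120Hz, 24 bits per pixel, no compression):

```python
# Napkin math for raw (uncompressed) VR streaming bandwidth.
width = height = 2160   # per-eye resolution
eyes = 2                # stereoscopic: every frame rendered twice
fps = 120               # midrange of the 72-144Hz spec
bits_per_pixel = 24     # 8 bits each for R, G, B

pixels_per_second = width * height * eyes * fps
raw_gbps = pixels_per_second * bits_per_pixel / 1e9

print(f"{pixels_per_second / 1e9:.2f} billion pixels/s")  # ~1.12
print(f"{raw_gbps:.1f} Gb/s raw")                         # ~26.9
```

Which matches the "more than 1.1 billion pixels per second" and "~26Gb/s" figures, and makes it clear why heavy compression (and tricks like foveation) are mandatory for wireless streaming.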
I played that game in Berlin in the late 90s. There were four such pods, iirc, and you could see the other players. The frame rate was about 5 frames per second, so it was borderline unplayable, but it was fun nevertheless.
Later, I found out that it was a game called "Dactyl Nightmare" that ran on Amiga hardware:
Maybe something like this?
https://en.wikipedia.org/wiki/Virtuality_(product)
I think I played with the 1000CS or similar in a bar or arcade at some point in the early 90s.
Yes!
The booth depicted on the 1000CS image looks exactly how I recall it, and the screenshot looks very similar to how I remember the game (minus the dragon, and mine was fully wireframe), and the map layout is close too. It has this Q3DM17 vibe I was talking about.
Isn't it crazy that we had this tech in ~'91 and it's still not quite there yet?
On a similar note - around that time, mid 90s, my father also took me to CeBIT. One building was almost fully occupied by Intel or IBM and they had different sections dedicated to all sorts of cool stuff. One I won't forget was straight out of Minority Report, only many years earlier.
They had a whole section dedicated to showcasing a "smart watch". Imagine Casio G-Shock but with Linux. You could navigate options by twisting your wrist (up or down the menu) and you would press the screen or button to select an option.
They had different scenarios built in the form of an amusement park - from a restaurant where you would walk in with your watch - it would talk to the relay at the door and download the menu for you, just so you could twist your wrist to select your meal and order it without human interaction - and... leave without interaction as well, because the relay at the door would charge you based on your prior selection.
Or - and that was straight out of Minority Report - a scenario of an airport, where you would disembark at your location and walk past a big screen that would talk to your watch and display travel information for you, asking if you'd like to order a taxi to your destination, based on your data.
I remember a guy I know went to Japan/Asia around 1985ish and came back with a watch. It had hands, but also a small LCD display. You could draw numbers on the face with your finger, like 6 then X then 3 then =, and the LCD would show the values, and finally 18.
This is completely uninteresting now, but this was 40 years ago
EDIT: I think Casio AT-552
It was a really interesting and weird time growing up when Japan was the king of tech. I had a friend whose dad was often over there and bringing all sorts of weird stuff back. There was this NES/Famicom game where you played with a sort of gyroscope. I have no idea how you were supposed to play the game, but found the gyroscope endlessly fascinating. Then of course there were the pirated cartridges with 100 in 1 type games. Oh then we found the box full of his dad's "special" games. Ah, good times.
Special games? I thought NES was controlled by Nintendo?
There were some licensed games in Japan that they'd never release in the West, and also a relatively large scene for unlicensed/'bootleg' games. Fun slightly related factoid - the Game Genie was an unlicensed hardware mod and they actually got sued by Nintendo, and won.
I somehow suspect in modern times they'd have lost.
> Isn't it crazy that we had this tech in ~'91 and it's still not quite there yet?
Not really, because feeding us ads and AI slop attracted all the talent.
Oh wow, I also played with this one in what might have been a COMDEX, in the 90s.
I remember the game was a commercially available shooter though, but the machine was exactly the same, with the blue highlights.
>It never came.
Everything you described and more is available from modern home Vr devices you can purchase right now.
Mecha, planes, skyrim, cinema screens. In VR, with custom controllers or a regular controller if you want that. Go try it! It’s out and it’s cheap and it’s awesome. Set IPD FIRST.
[flagged]
My dad had an Apple Newton.
Tell us more about how Microsoft Bob was a user agent LLM? :P
William Gibson's 1984 novel Neuromancer, about 2 AIs with the same creator, locked in conflict, is actually prophetic. About Microsoft Bob and Clippy in the 1990s.
That's really cool. My first job out of college was implementing an image generator for the simulator for the landing signal officer on the USS Nimitz, also using SGI hardware. I would have loved to have seen the final product in person but sadly never had the chance.
I remember there was a flight simulator project that had something like that, or even it was that.
It was called ESPRIT, which I believe stood for "eye slaved programmed retinal insertion technique".
> 2160 x 2160 LCD (per eye) 72-144Hz refresh rate
I doubt that we couldn't create a special-purpose video codec that handles this without trickery. The "per eye" part sounds spooky at first, but how much information is typically different between these frames? The mutual information is probably 90%+ in most VR games.
If we were to enhance something like x264 to encode the 2nd display as a residual of the 1st display, this could become much more feasible from a channel capacity standpoint. Video codecs already employ a lot of tricks to make adjacent frames that are nearly identical occupy negligible space.
This seems very similar (identical?) to the problem of efficiently encoding a 3d movie:
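As a toy illustration of the residual idea (my own sketch, not how x264 actually structures this - real codecs would use block-based motion/disparity compensation - and the synthetic frames below just stand in for rendered eye buffers):

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]

# Smooth synthetic "left eye" frame standing in for a rendered view.
left = (np.sin(xx / 17.0) + np.cos(yy / 23.0)) * 100

# The right eye sees nearly the same scene: shifted by a small
# horizontal disparity, plus a little view-dependent difference.
disparity = 4
right = np.roll(left, disparity, axis=1)
right += np.random.default_rng(0).normal(0, 1, (h, w))

# Naive scheme: encode the right eye from scratch.
naive_energy = np.abs(right).mean()

# Residual scheme: predict the right eye from the already-transmitted
# left eye (here with perfect disparity compensation) and encode only
# the difference, which is tiny and compresses far better.
predicted = np.roll(left, disparity, axis=1)
residual = right - predicted
residual_energy = np.abs(residual).mean()

print(f"naive: {naive_energy:.1f}, residual: {residual_energy:.2f}")
```

The residual's average magnitude is a small fraction of the raw frame's, which is exactly the property entropy coding exploits; the hard part in practice is estimating the disparity/motion field cheaply enough.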
I'm entirely unfamiliar with the vr rendering space, so all I have to go on is what (I think) your comment implies.
Is the current state of VR rendering really just rendering and transporting two video streams independent of each other? Surely there has to be at least some academic prior art on the subject, no?
Foveated streaming is cool. FWIW the Vision Pro does that for their Mac virtual display as well, and it works really well to pump a lot more pixels through.
It's the same number of pixels though, just with reduced bitrate for unfocused regions, so you save time in encoding, transmitting, and decoding, essentially reducing latency.
For foveated rendering, the amount of rendered pixels are actually reduced.
At least when we implemented this in the first version of Oculus Link, the way it worked is that the frame was distorted (AADT [1]) to a deformed texture before compression and then regenerated as rectilinear after compression, as a cheap and simple way to emulate fixed foveated rendering. So it's not that there's some kind of adaptive bitrate applying fewer bits outside the fovea region; it achieves a similar result by giving that region fewer pixels in the image being compressed. Doing adaptive bitrate would work too (and maybe even better), but encoders (especially HW-accelerated ones) don't support that.
Foveated streaming is presumably the next iteration of this, where the eye tracking gives you better information about where to apply this distortion, although I'm genuinely curious how they manage to make this work well - eye tracking is generally high latency but the eye moves very, very quickly (maybe HW and SW have improved, but they allude to this problem, so I'm curious if their argument about using this at a low frequency really improves meaningfully vs more static techniques).
[1] https://developers.meta.com/horizon/blog/how-does-oculus-lin...
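A toy version of that distortion trick (my own sketch, not Meta's actual AADT math): a tangent warp that keeps roughly 1:1 sampling in the middle of the frame and decimates the periphery before encoding, then inverts the warp after decoding:

```python
import numpy as np

A = 1.2  # warp strength in (0, pi/2); bigger = stronger foveation

def compress_axis(n_small, n_full):
    # Centers of the n_small "pre-encode" pixels mapped into full-res
    # coords via a tangent warp: ~1:1 sampling mid-image, heavy
    # decimation toward the edges.
    u = (np.arange(n_small) + 0.5) / n_small * 2 - 1  # warped coord
    s = np.tan(A * u) / np.tan(A)                     # full-res coord
    return np.clip(((s + 1) / 2 * n_full).astype(int), 0, n_full - 1)

def expand_axis(n_full, n_small):
    # Inverse warp: for each rectilinear output pixel, which
    # compressed pixel should we read back?
    s = (np.arange(n_full) + 0.5) / n_full * 2 - 1
    u = np.arctan(s * np.tan(A)) / A
    return np.clip(((u + 1) / 2 * n_small).astype(int), 0, n_small - 1)

# A smooth synthetic frame standing in for a rendered eye buffer.
t = np.sin(np.arange(512) / 10.0)
frame = t[:, None] + t[None, :]

# "Compress": 512x512 -> 256x256, spending most samples in the middle.
idx_c = compress_axis(256, 512)
small = frame[np.ix_(idx_c, idx_c)]

# "Decompress": regenerate a rectilinear 512x512 image (nearest
# neighbor here; a real system filters properly).
idx_e = expand_axis(512, 256)
restored = small[np.ix_(idx_e, idx_e)]

# The middle of the image survives far better than the periphery.
center_err = np.abs(restored[240:272, 240:272] - frame[240:272, 240:272]).mean()
edge_err = np.abs(restored[:32, :32] - frame[:32, :32]).mean()
print(f"center err {center_err:.3f}, edge err {edge_err:.3f}")
```

The warped 256x256 image is what would go through the standard video encoder, so the encoder needs no foveation support at all; eye-tracked foveation would just re-center the warp each frame.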
Although your eye moves very quickly, your brain has a delay in processing the completely new frame you switched to. It's very hard to look left and right with your eyes and read something quickly changing on both sides.
That depends on the specifics of the encode/decode pipeline for the streamed frames. Could be the blurry part actually is lower res and lower bitrate until it's decoded, then upscaled and put together with the high res part. I'm not saying they do that, but it's an option.
It’s the same number of pixels rendered but it lets you reduce the amount of data sent , thereby allowing you to send more pixels than you would have been able to otherwise
I think it works really well to pump the same amount of pixels, just focusing them on the more important parts.
Always PIP, Pump Important Pixels
It lets you pump more pixels in a given bandwidth window.
People are conflating rendering (which is not what I’m talking about) with transmission (which is what I’m talking about).
Lowering the quality outside the in focus sections lets them reduce the encoding time and bandwidth required to transmit the frame over.
Foveated streaming is wild to me. Saccades are commonly as low as 20-30ms when reading text, so guaranteeing that latency over 2.4GHz seems Sisyphean.
I wonder if they have an ML model doing partial upscaling until the eyetracking state is propagated and the full resolution image under the new fovea position is available. It also makes me wonder if there's some way to do neural compression of the peripheral vision optimized for a nice balance between peripheral vision and hints in the embedding to allow for nicer upscaling.
I worked on a foveated video streaming system for 3D video back in 2008, and we used eye tracking and extrapolated a pretty simple motion vector for eyes and ignored saccades entirely. It worked well, you really don't notice the lower detail in the periphery and with a slightly over-sized high resolution focal area you can detect a change in gaze direction before the user's focus exits the high resolution area.
Anyway that was ages ago and we did it with like three people, some duct tape and a GPU, so I expect that it should work really well on modern equipment if they've put the effort into it.
It is amazing how many inventions duck tape found its way into.
Foveated rendering very clearly works well with a dedicated connection, with predictable latency. My question was more about the latency spikes inherent in an ISM general-use band combined with foveated rendering, which would make the effects of the latency spikes even worse.
They're doing it over 6GHz, if I understand correctly, which with a dedicated router gets you to a reasonable latency with reasonable quality even without foveated rendering (with e.g. a Quest 3).
With foveated rendering I expect this to be a breeze.
Even 5.8GHz is getting congested. There's a dedicated router in this case (a USB fob), but you still have to share spectrum with the other devices. And at the 160MHz symbol rate mode on WiFi 6, you only have one channel in the 5.8GHz spectrum that needs to be shared.
You're talking about "Wi-Fi 6" not "6 GHz Wi-Fi".
"6 GHz Wi-Fi" means Wi-Fi 6E (or newer) with a frequency range of 5.925–7.125 GHz, giving 7 non-overlapping 160 MHz channels (which is not the same thing as the symbol rate, it's just the channel bandwidth component of that). As another bonus, these frequencies penetrate walls even less than 5 GHz does.
I live on the 3rd floor of a large apartment complex. 5 GHz Wi-Fi is so congested that I can get better performance on 2.4 in a rural area, especially accounting for DFS troubles in 5 GHz. 6 GHz is open enough I have a non-conflicting 160 MHz channel assigned to my AP (and has no DFS troubles).
Interestingly, the headset supports Wi-Fi 7 but the adapter only supports Wi-Fi 6E.
Not so much of an issue when neighbors with paper-thin walls see that 6GHz as a -87 signal
That said, in the US it is 1200MHz aka 5.925 GHz to 7.125 GHz.
The One Big Beautiful Bill fixed that. Now a large part of this spectrum will be sold out for non-WiFi use.
Different spectrum. They're grabbing old radar ranges.
Also talking about adding more spectrum to the existing ISM 6GHz band.
Here's the overview: https://arstechnica.com/tech-policy/2025/06/senate-gop-budge...
This is part of my job, dealing with spectrum and Washington.
I communicate with the FCC and NTIA fairly often at this point.
You need to pay attention to Arielle Roth, Assistant Secretary of Commerce for Communications and Information Administrator, National Telecommunications and Information Administration (NTIA).
https://policy.charter.com/2025-ntia-spectrum-policy-symposi...
From the article, about the November event:
"... administration’s investment in unlicensed access in 6 GHz ensures the benefits of the entire spectrum band are delivered directly to American families and businesses in the form of more innovation and faster and more reliable connectivity at home and on the go, which will continue to transform and deliver long-lasting impact for communities of all sizes across the country.
Charter applauds Administrator Roth's leadership, and her recognition of the critical role unlicensed spectrum plays today and in the future, both in the U.S. and across the globe."
---
Now here: https://www.ntia.gov/speech/testimony/2025/remarks-assistant...
"... To identify the remainder, NTIA plans to assess four targeted spectrum bands in the range set by Congress: 7125-7400 MHz; 1680-1695 MHz; 2700-2900 MHz; and 4400-4940 MHz."
"On the topic of on-the-ground realities, let’s also not forget what powers our networks today. While licensed spectrum is critical, the majority of mobile traffic is actually offloaded onto Wi-Fi. Born in America, led by America, Wi-Fi remains an area where we dominate, and we must continue to invest in this important technology. With Wi-Fi, the race has already been won. China knows it cannot compete and for that reason looks for ways to sabotage the very ingenuity that made Wi-Fi a global standard."
Roth is not going to take away 6GHz from current ISM allocation.
If Wi-Fi 6E goes upto 7125 and the targeted spectrum band includes 7125 (onwards), what will happen exactly at 7125?
The same thing that happens with every frequency range?
Depending on the spectrum and technology there can be a small slice of guard band between usable portions, which is what we have today.
Nothing there today as provisioned is going to change.
Oh goody! I hope some of it can be used for DRM encrypted TV broadcasts too.
I know you're attempting humor here, but I am not aware of anyone investing in broadcast tv.
I'm just amazed you can do bidirectional ATSC 3.0 with two PlutoSDRs, a minor investment to hack on.
https://www.reddit.com/r/sdr/comments/1ow80n5/help_needed_ho...
More of an issue when your phone's wifi, or your partner watching a show while you game, is eating into that one channel in bursts - particularly since the dedicated fob means it's essentially another network conflicting with the regular WiFi rather than deeply collaborating for better real-time guarantees (not that arbitrary wifi routers would even support real-time scheduling).
MIMO helps here to separate the spectrum use by targeted physical location, but it's not perfect by any means.
IMO there is not much reason to use WiFi 6 for almost anything else. I have a WiFi 6 router set up for my Quest 3 for PC streaming, and everything else sits on its 5GHz network. And since it doesn't really go through walls, I think this is a non-issue?
The Frame itself here is a good example actually - using 6GHz for video streaming and 5GHz for wifi, on separate radios.
My main issue with the Quest in practice was that when I started moving my head quickly (which happens when playing faster-paced games) I would get lag spikes. I did some tuning on the bitrate / beam-forming / router positioning to get to an acceptable place, but I expect / hope that here the foveated streaming will solve these issues easily.
The thing is that I'd expect foveated rendering to increase latency issues, not help them like it does for bandwidth concerns. During a lag spike you're now looking at an extremely downsampled image, instead of what in non-foveated rendering had been just as high quality.
Now I also wonder if an ML model could also work to help predict fovea location based on screen content and recent eye tracking data. If the eyes are reading a paragraph, you have a pretty good idea where they're going to go next, for instance. That way a latency spike that delays eye tracking updates can be hidden too.
My understanding is that the foveated rendering would reduce bandwidth requirements enough that latency spikes become effectively non-existent.
We’ll see in practice - so far all hands-on reviewers said the foveated rendering worked great, with one trying to break it (move eyes quickly left right up down from edge to edge) and not being able to - the foveated rendering always being faster.
I agree latency spikes would be really annoying if they end up being like you suggest.
Enough bandwidth to absolve any latency issues over a wireless connection is not really a thing for a low latency use case like foveated rendering.
What do you do when another device on the main wifi network decides to eat 50ms of time in the channel you use for the eye tracking data return path?
I believe all communication with the dongle is on 6GHz - both the video and the return metadata.
So again, you just make sure the 6GHz band in the room is dedicated to the Frame and its dongle.
The 5GHz is for WiFi.
On the LTT video he also said that Valve had claimed to have tested with a small number of devices in the same room, but hadn’t tried out larger scenarios like tens of devices.
My guess based on that is you likely don't need to totally clear 6GHz in the room the Frame is in, but rather just make sure it's relatively clear.
We’ll know more once it ships and we can see people try it out and try and abuse the radio a bit.
Pretty funny to me that you're backseat engineering Valve on this one. If it didn't have a net benefit they wouldn't have announced it as a feature yet lmao
I'm not saying it doesn't work; I'm asking what special sauce they've added to make it work, and noting that despite the replies I've gotten, foveated streaming doesn't help latency, and in fact makes the effects of latency spikes worse.
Why are you assuming the fob would use the same WiFi channel as your regular 6GHz network? That would be extremely poor channel selection.
MU-MIMO is very nice.
The real trick is not overcomplicating things. The goal is high-fidelity rendering where the eye is currently focusing, so to solve for saccades you just build a small buffer area around the idealized minimum high-res center, and the saccades will safely stay inside that area within the ability of the system to react to the larger overall movements.
Picture demonstrating the large area that foveated rendering actually covers as high or mid res: https://www.reddit.com/r/oculus/comments/66nfap/made_a_pic_t...
It was hard for me to believe as well but streaming games wirelessly on a Quest 2 was totally possible and surprisingly latency-free once I upgraded to wifi 6 (few years ago)
It works a lot better than you’d expect at face value.
At 100fps (mid range of the framerate), you need to deliver a new frame every 10ms anyway, so a 20ms saccade doesn't seem like it would be a problem. If you can't get new frames to users in 30ms, blur will be the least of your problems, when they turn their head, they'll be on the floor vomiting.
> Saccades are commonly as low as 20-30ms when reading text
What sort of resolution are one's eyes actually resolving during saccades? I seem to recall that there is at the very least a frequency reduction mechanism in play during saccades
During a saccade you are blind. Your brain receives no optical input. The problem is measuring/predicting where the eye will aim next and getting a sharp enough image in place over there by the time the movement ends and the saccade stabilizes.
Yeah. I'd love to understand how they tackle saccades. To be fair they do mention they're on 6GHz - not sure if they support 2.4, although I doubt the frequency of the data radio matters here.
I would guess that the “foveated” region that they stream is larger than the human fovea, large enough to contain the saccades movement (with some good-enough probability).
Saccades afaik can move the eye to an arbitrary position, which adds to the latency of finding the iris; basically the software ends up having to search the entire image to reacquire the iris, whereas normally it's doing it incrementally relative to the previous position.
Are you really sure overrendering the fovea region would really work?
Not sure, probably depends on the content too. When you read text, the eye definitely isn’t jumping “arbitrarily”, it’s clustered around what you’re focusing on. Might be different for a FPS game where you’re looking out for ambushes.
I’m not sure what you mean by “look through the entire image to reacquire the iris”? You’re talking about the image from the eye tracking camera?
> You’re talking about the image from the eye tracking camera?
Yes. A normal trick is to search just a bit outside the last known position to make eye tracking cheap computationally and to reduce latency in the common case.
They use a 6 GHz dongle.
> Roughly equivalent resolution to Quest 3 and less than Vision Pro. This won't be suitable as a monitor replacement for general desktop use. But the price is hopefully low.
Question, what is the criteria for deciding this to be the case? Could you not just move your face closer to the virtual screen to see finer details?
There's no precise criteria but the usual measure is ppd (pixels per degree) and it needs to be high enough such that detailed content (such as text) displayed at a reasonable size is clearly legible without eye strain.
> "Could you not just move your face closer to the virtual screen to see finer details?"
Sure, but then you have the problem of, say, using an IMAX screen as your computer monitor. The level of head motion required to consume screen content (i.e., a ton of large head movements) would make the device very uncomfortable quite quickly.
The Vision Pro has about 35ppd and generally people seem to think it hits the bar for monitor replacement. The Meta Quest 3 has ~25ppd and generally people seem to think it does not. The Steam Frame is, specs-wise, much closer to the Quest 3 than the Vision Pro.
There are some software things you can do to increase legibility of details like text, but ultimately you do need physical pixels.
Even the vision pro at 35ppd simply isn't close to the PPD you can get from a good desktop monitor (we can calculate PPD for desktop monitors too, using size and viewing distance).
Apple's "retina" HiDPI monitors typically have PPD well beyond 35 at ordinary viewing distances, even a 1080p 24 inch monitor on your desk can exceed this.
For me personally, 35ppd feels about the minimum I would accept for emulating a monitor for text work in a VR headset, but it's still not good enough for me to even begin thinking about using it to replace any of my monitors.
Oh yeah for sure. Most people seem to accept that 35ppd is "good enough" but not actually at-par with a high quality high-dpi monitor.
I agree with you - I would personally consider 35ppd to be the floor for usability for this purpose. It's good in a pinch (need a nice workstation setup in a hotel room?) but I would not currently consider any extant hardware as full-time replacements for a good monitor.
Most people in what age group?
I'm 53 and the Quest 3 is perfectly good as a monitor replacement.
I'm in the same boat. Due to my vision not being perfect even after correction, a Quest 3 is entirely sufficient.
I keep hearing this argument, and it baffles me. I find that, as I age and my vision gets worse, I need progressively finer text rendering. Using same-size displays (27") at the same distance, with text the same physical size on screen, 1440p gives me a much worse reading experience than 4k with 2x scaling.
Are you saying ppd requirements for comfortable usage vary with age?
They vary with quality of eyesight which usually correlates with age.
I think there is a missing number here: the angular resolution of human eyeballs is believed to be ~60 ppd (some believe it's more like 90).
We get by with lower resolution monitors with lower pixel density all the time.
I think part of getting by with a lower PPD is the IRL pixels are fixed and have hard boundaries that OS affordances have co-evolved with.
(pixel alignment via lots of rectangular things - windows, buttons; text rendering w/ that in mind; "pixel perfect" historical design philosophy)
The VR PPD is in arbitrary orientations which will lead to more aliasing. MacOS kinda killed their low-dpi experience via bad aliasing as they moved to the hi-dpi regime. Now we have svg-like rendering instead of screen-pixel-aligned baked rasterized UIs.
I'm not sure most of us do anymore - see my 1080p/24 inch example.
No one who has bought almost any MacBook in the last 10 years or so has had PPD this low either.
One can get by with almost anything in a pinch, it doesn't mean its desirable.
Pixel density != PPD either, although increasing it can certainly help PPD. Lower density desktop displays routinely have higher PPD than most VR headsets - viewing distance matters!
Not only would it be a chore to constantly lean in closer to different parts of your monitor to see full detail, but looking at close-up objects in VR exacerbates the vergence-accommodation mismatch issue, which causes eye strain. You would need varifocal lenses to fix this, which have only been demonstrated in prototypes so far.
Couldn't you get around that by having a "zoom" feature on a very large but distant monitor?
Yes. You can make a low-resolution monitor (like 800x600px, once upon a time a usable resolution) and/or provide zoom and panning controls
I've tried that combination in an earlier iteration of Lenovo's smart glasses, and it technically works. But the experience you get is not fun or productive. If you need to do it (say to work on confidential documents in public) you can do it, but it's not something you'd do in a normal setup
Yes, but that can create major motion sickness issues - motion that does not correspond to the user's actual physical movements creates a dissonance that is expressed as motion sickness for a large portion of the population.
This is the main reason many VR games don't let you just walk around and opt for teleportation-based movement systems - your avatar moving while your body doesn't can be quite physically uncomfortable.
There are ways of minimizing this - for example some VR games give you "tunnel vision" by blacking out peripheral vision while the movement is happening. But overall there's a lot of ergo considerations here and no perfect solution. The equivalent for a virtual desktop might be to limit the size of the window while the user is zooming/panning.
For a small taste of what using that might be like turn on screen magnification on your existing computers. It's technically usable but not particularly productive or pleasant to use if you don't /have/ to use it.
This all sounds a bit like the “better horse” framing. Maybe richer content shouldn’t be consumed as primarily a virtualized page. Maybe mixing font sizes and over sized text can be a standard in itself.
It's just about what pixel per degree will get you close to the modern irl setup. Obviously it's enough for 80 char consoles but you'd need to dip into large fonts for a desktop.
I did the math on this site, and I'd have to hunch less than a foot from the screen to hit 35 PPD on my work-provided ThinkPad X1 Carbon with a 14" 1920x1200 screen. My usual distance is nearly double that, so normally I'm at roughly 70 PPD.
https://phrogz.net/tmp/ScreenDensityCalculator.html#find:dis...
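For anyone who wants to check this against their own setup, here's a quick sketch of the geometry that calculator uses (the ~11.9" panel width for a 14" 16:10 screen is an assumption, and the figures are approximate):

```python
import math

def pixels_per_degree(h_pixels: int, width_in: float, distance_in: float) -> float:
    """Approximate horizontal pixels per degree for a flat screen viewed
    head-on, using the full horizontal angle the screen subtends."""
    fov_deg = 2 * math.degrees(math.atan(width_in / (2 * distance_in)))
    return h_pixels / fov_deg

# A 14" 16:10 panel is roughly 11.9" wide.
print(round(pixels_per_degree(1920, 11.9, 12)))  # ~1 foot away → 36 ppd
print(round(pixels_per_degree(1920, 11.9, 24)))  # ~2 feet away → 69 ppd
```

Which lines up with the ~35-at-under-a-foot and ~70-at-normal-distance numbers above.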
And foveated streaming has a 1-2ms wireless latency on modern GPUs according to LTT. Insane.
That's pretty quick. I've heard that in ideal circumstances Wi-Fi 6 can get close to 5ms and Wi-Fi 7 can get down to 2ms.
It's impressive if they're really able to get below 2 ms motion-to-photon latency, given that modern consumer headsets with on-device compute are also right at that same 2 ms mark.
Wow, that's just 1 frame of latency at 60 fps.
Edit: Nevermind, I'm dumb. 1/60th of a second is 16 milliseconds, not 1.6 milliseconds.
No, that's between 0.06 and 0.12 frames of latency at 60 fps. It's not even a full frame at 144 Hz (1 s / 144 ≈ 7 ms).
Much less than one frame - a frame at 60 fps is ~16 ms.
60 fps is 16.67 ms per frame.
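The arithmetic the thread keeps tripping over, as a sketch (the 2 ms figure is the one quoted above, not a measurement):

```python
def frames_of_latency(latency_ms: float, refresh_hz: float) -> float:
    """How many display frames a given delay spans at a given refresh rate."""
    return latency_ms / (1000 / refresh_hz)

print(round(frames_of_latency(2, 60), 2))   # 2 ms at 60 Hz → 0.12 of a frame
print(round(frames_of_latency(2, 144), 2))  # 2 ms at 144 Hz → 0.29 of a frame
```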
> Roughly equivalent resolution to Quest 3 and less than Vision Pro. This won't be suitable as a monitor replacement for general desktop use.
The real limiting factor is more likely to be having a large headset on your face for an extended period of time, combined with a battery that isn't meant for all-day use. The resolution is fine. We went decades with low resolution monitors. Just zoom in or bring it closer.
The battery isn't an issue if you're stationary, you can plug it in.
The resolution is a major problem. Old-school monitors used old-school OSes that did rendering suitable for the displays of the time. For example, anti-aliased text was not typically used for a long time. This meant that text on screen was blocky, but sharp. Very readable. You can't do this on a VR headset, because the pixels on your virtual screen don't precisely correspond with the pixels in the headset's displays. It's inevitably scaled and shifted, making it blurry.
There's also the issue that these things have to compete with what's available now. I use my Vision Pro as a monitor replacement sometimes. But it'll never be a full-time replacement, because the modern 4k displays I have are substantially clearer. And that's a headset with ~2x the resolution of this one.
> There's also the issue that these things have to compete with what's available now. [...] But it'll never be a full-time replacement, because the modern 4k displays I have are substantially clearer.
What's available now might vary from person to person. I'm using a normal-sized 1080p monitor, and this desk doesn't have space for a second monitor. That's what a VR headset would have to compete against for me; just having several virtual monitors might be enough of an advantage, even if their resolution is slightly lower.
(Also, I have used old-school VGA CRT monitors; as could be easily seen when switching to a LCD monitor with digital DVI input, text on a VGA CRT was not exactly sharp.)
VR does need a lot of resolution when trying to display text.
Can get away with less for games where text is minimized (or very large)
The weight on your face is half that of Quest 3, they put the rest of the weight on the back which perfectly balances it on your head. It's going to be super comfortable.
Yeah, many people already use something like the BoboVR alternative head strap for the Quest 3, which has an additional battery pack in the back that helps balance the weight of the device in the front.
Which doubles the weight on your head, which increases the inertia you feel when moving around playing active games. The Frame is half the weight on your face, so active games are going to be a lot more comfortable.
Whether or not we used to walk to school uphill both ways, that won't make the resolution fine.
To your point, I'd use my Vision Pro plugged in all day if it was half the weight. As it stands, it's just too much nonsense when I have an ultrawide. If I were 20-year-old me I'd never get a monitor (20-year-old me also told his gf the iPad 1 would be a good laptop for school, so,)
One problem is that in most settings a real monitor is just a better experience for multiple reasons. And in a tight setting like an airplane where VR monitors might be nice, the touch controls become more problematic. "Pardon me! I was trying to drag my screen around!"
> (20 year old me also told his gf iPad 1 would be a good laptop for school, so,)
Yikes. How'd that relationship end up? Haha.
Lol, I laughed then 20 seconds later started taking this literally: I think that was July, it had been two years, and it was over by November (presumably due to my other excellent qualities!) (all joking aside, for younger members in our audience, it was sweet and she was around in my life for at least another decade)
2k x 2k doesn't sound low-res to me; it's like full HD but with twice the vertical resolution. My monitor is 1080p.
Never tried VR set, so I don't know if that translates similarly.
Your 2K monitor occupies something like a 20-degree field of view from a normal sitting position/distance. The 2K resolution in a VR headset covers the entire field of view.
So effectively your 1080p monitor has ~6x the pixel density of the VR headset.
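A back-of-the-envelope version of that comparison (the ~20° monitor view and ~110° headset FOV are assumptions for illustration; with these numbers it comes out closer to 5x, same ballpark):

```python
monitor_ppd = 1920 / 20    # 1080p monitor spanning ~20 degrees of view
headset_ppd = 2160 / 110   # 2160 px per eye spread over ~110 degrees
print(round(monitor_ppd / headset_ppd, 1))  # → 4.9
```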
Thank you for explaining, it makes sense now.
The problem is that the 2k square is spread across the whole FOV of the headset, so when it's replicating a monitor, unless it's ridiculously close to your face, a lot of those pixels are 'wasted' in comparison to a monitor with similar stats.
Totally true, but unlike a real monitor you can drag a virtual monitor close to your face without changing the focal distance, meaning it's no harder on your eyes. (Although it is harder on your neck.)
To get the same pixels per degree as my work laptop, I'd have to put its virtual replacement screen 11 (virtual) inches from my face, and that's probably the lowest-PPD screen in my normal life unless I get a bad desk at work that day. Just pasting screens inches from your nose is not a great solution; you can already do that with a good set of monitor arms, and there's a reason almost no one does.
Why hasn't Meta tried this given the huge amount of R&D they've put into VR and they had literally John Carmack on the team in the past?
They prioritized cost, so they omitted eye tracking hardware. They've also bet more on standalone apps rather than streaming from a PC. These are reasonable tradeoffs. The next Quest may add eye tracking, who knows. Quest Pro had it but was discontinued for being too expensive.
We'll have to wait on pricing for Steam Frame, but I don't expect them to match Meta's subsidies, so I'm betting on this being more expensive than Quest. I also think that streaming from a gaming PC will remain more of a niche thing despite Valve's focus on it here, and people will find a lot of use for the x86/Windows emulation feature to play games from their Steam library directly on the headset.
It will be interesting to see how the X86 emulation plays out. In the Verge review of the headset they mentioned stutters when playing on the headset due to having to 'recompile x86 game code on the fly', but they may offer precompiled versions which can be downloaded ahead of time, similar to the precompiled shaders the Steam Deck downloads.
If they get everything working well I'm guessing we could see an ARM powered Steam Deck in the future.
Despite the fact it uses a Qualcomm chip, I'm curious on whether it retains the ability to load alternative OS's like other Steam hardware.
> Despite the fact it uses a Qualcomm chip, I'm curious on whether it retains the ability to load alternative OS's like other Steam hardware.
I think it should: we have Linux support/custom operating systems on Snapdragon 8 Gen 2 devices right now today, and the 8 Gen 3 has upstream support already AFAIK
If you mean foveated streaming - It’s available on the Quest Pro with Steam Link.
What do you mean? What part have they not tried?
I use a 1920x1080 headset as a monitor replacement. It's absolutely fine. 2160x2160 will be more than workable as long as the tracking is on point.
> But the price is hopefully low.
The main value of Meta VR and AR products is the massive price subsidy which is needed because the brand has been destroyed for all generations older than Alpha.
The current price estimate for the Steam Frame is $1200 vs. the Quest 3 at $600, which is still a very reasonable price given the technology, tariffs, and lack of privacy-invading ads.
Quest 3 is $499 and Quest 3S is $299 in the US
> Very cool, should be very hackable. Very unlike every other standalone VR headset.
That might be the reason I'm going to buy it. I want to support this, and Steam has done a lot to get gaming on Linux going.
I guess there's a market for this but I'm personally disappointed that they've gone with the "cram a computer into the headset" route. I'd much rather have a simpler, more compact dumb device like the Bigscreen Beyond 2, which in exchange should prove much lighter and more comfortable to wear for long time periods.
The bulk and added component cost of the "all in one" PC/headset models is just unnecessary if you already have a gaming PC.
I'm personally quite hyped to see the first commercially available Linux-based standalone VR headset announced. This thing is quite a bit lighter than any of the existing "cram a computer in" solutions.
Strictly speaking, the mobile Oculus/Meta Go/Quest headsets were Linux/Android-based; you can run a Termux terminal with Fedora/Ubuntu on them and use an Android VNC/X app for the 2D graphical part. But I share your SteamOS enthusiasm.
Yeah, this is exactly what I've been waiting for for quite a long time. I'm very excited.
They crammed a computer into the headset, but UNLIKE Meta's offerings, this is indeed an actual computer you can run linux on. Perhaps even do standard computer stuff inside the headset like text editing, Blender modeling, or more.
As a current and frequent user of this form factor (Pico 4, with the top strap, which the Steam Frame will also have as an option, over Virtual Desktop) I can assure you that it's quite comfortable over long periods of time (several hours). Of course it will ultimately depend on the specific design decisions made for this headset, but this all looks really good to me.
Full color passthrough would have been nice though. Not necessarily for XR, but because it's actually quite useful to be able to switch to a view of the world around you with very low friction when using the headset.
There's always going to be a computer in it to drive it. It's just a matter of how generalised it is and how much weight/power consumption it's adding.
It's nice to have some local processing for tracking and latency mitigation. Cost from there to full computer on headset is marginal, so you might as well do that.
You can get a Beyond if that's what you want. It's an amazing device, and will be far more comfortable and higher resolution than this one. Valve has supported Bigscreen in integrating Lighthouse tracking, and I hope that they continue that support by somehow allowing them to integrate the inside-out tracking they've developed for this device in the next version of the Beyond.
That would probably add a lot of extra weight and it would need to make the device bigger.
I don't think it would be too bad. Cameras are tiny. The processing would still happen on the PC, and you could delete the lighthouse tracking sensors. I guess the hardest part would be sending that much camera data back to the PC over the cable.
It's worse than that. Cameras (plus DToF if you want tracking in the dark), IMUs, gyros, and the onboard compute/SoC needed to process all that data. Shipping it all off to a remote computer and making the round trip creates an untenable amount of lag. That's not even accounting for controller and hand tracking.
And once you have the pipeline and computation power to enable inside out tracking all on device, adding an OS is essentially free.
It already has an IMU and gyro, obviously. Time of flight cameras are unnecessary. Steam Frame doesn't have them either. At most you would put IR LEDs for illumination which are tiny but also optional (Quest 3 doesn't have them), and there's no reason they have to be in the headset, you could just have a standalone IR illuminator on your desk.
As for sending data over a cable, there's nothing inherently laggy about it. After all, the display signal already travels over the cable, and the cable transfer is by far not the limiting factor in latency. The camera data is lower bandwidth than the display signal, too.
I agree. Hopefully Bigscreen continues making hardware. I still have the original Bigscreen Beyond and I'm very happy with it, aside from the glare.
How is Linux support?
From the review section:
Nikos Q: Linux Desktop support?
A: Hi, Linux is not officially supported but can absolutely work with the Beyond 2. I'd suggest joining the Bigscreen Beyond Discord server for more information.
Thanks, Bigscreen Support Team
---
Rant: they have disabled text selection on the reviews for some inexplicable reason.
Lol, doesn't sound confidence inspiring. "More info in Discord" is such a non-starter.
I wish Valve every bit of success, if they deliver an open platform people can own and hack.
It's using SteamVR, so it should work.
It's super light compared to Quest 3, half the weight on your face, the rest is on the back which balances the headset. Big Screen Beyond isn't wireless and has a narrower field of view.
> has a narrower field of view.
On the beyond 2, only by 2 degrees horizontally. I don't think that would even be noticeable.
I was worried about the built in computer as well, but then I found out it's only 185g. It is 78g more than the Bigscreen Beyond 2, but it's still pretty light.
I once lived in a place that had a bathroom with mirrors that faced each other. I think I convinced myself that not only is my attention to detail more concentrated at the center, but that my response time was also fastest there (can anyone confirm that?).
So this gets me thinking. What would it feel like to correct for that effect? Could you use the same technique to essentially play the further parts early, so it all comes in at once?
Kind of a harebrained idea, I know, but we have the technology, and I'm curious.
Peripheral vision is extremely good at spotting movement at low resolution and moving the eye to look at it.
I don't know if it's faster, but it's a non-trivial part of the experience.
Yeah, I've heard and noticed that as well (I thought about adding a note about it to my original comment). But what I'm curious about is the timing. What I suspect is that peripheral vision is more sensitive to motion but still lags slightly behind the center of focus. I'm not sure if it depends on how actively you're trying to focus. I'd love to learn more about this, but I didn't find anything when I looked online a bit.
It's good enough to see flickering on CRT monitors at 50-60 Hz, for some people.
I can see the spinning color wheels inside cheaper projectors as rapidly-changing rainbow lights leaking out of their ventilation grilles, but only with peripheral vision and mostly only if I'm moving my head at the same time.
> Foveated streaming! That's a great idea.
It would be interesting to see⁰ how that behaves when presented with weird eyes like mine or worse. Mine don't always point the same way, and which one I'm actually looking through can be somewhat arbitrary from one moment to the next…
Though the flapping between eyes is usually in the presence of changes, however minor, in required focal distance, so maybe it wouldn't happen as much inside a VR headset.
----
[0] Sorry not sorry.
Have a look at this video by Dave2D. In his hands-on, he was very impressed with foveated streaming https://youtu.be/356rZ8IBCps.
Yet this is shaping up to be one of the most interesting VR releases
How the hell would foveated streaming even work? It seems physically impossible: tracking where your eye is looking, sending that information to the host, having it process it, and then streaming the result back seems impossible.
The data you're sending out is just the position and motion vectors of the pupils. And you probably only need about 16 bits for each of these numbers for 2 eyes. So the equivalent of two floating point numbers along a particular channel or 32 bits at minimum. Any lag can be compensated for by simply interpolating the motion vectors.
It actually makes a lot of sense!
Eye-tracking hardware and software specifically focus on low latency, e.g. an FPGA close to the sensor. The resulting packets they send are also ridiculously small (e.g. 2 numbers as the x,y position of the pupils), so ... I can see that happening.
Sure eyes move very VERY fast but if you do relatively small compute on dedicated hardware it can also go quite fast while remaining affordable.
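To get a feel for how little data gaze tracking needs to send upstream, here's a hypothetical packet layout following the parent's 16-bits-per-number suggestion (this format is made up for illustration, not Valve's actual protocol):

```python
import struct

def pack_gaze(lx: float, ly: float, rx: float, ry: float) -> bytes:
    """Quantize normalized [-1, 1] per-eye pupil positions to int16
    and pack them little-endian."""
    q = lambda v: max(-32768, min(32767, round(v * 32767)))
    return struct.pack('<4h', q(lx), q(ly), q(rx), q(ry))

packet = pack_gaze(0.10, -0.25, 0.12, -0.24)
print(len(packet))  # 8 bytes - negligible next to any video stream
```

Even with motion vectors added, you're talking tens of bytes per update versus megabits per frame of video.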
It just needs to be less impossible than not doing it. I.e. sending a full frame of information must be an even more impossible problem.
> Mouth tracking?
What a vile thought in the context of the steam… catalogue.
I'm guessing its main use case will be VR chat syncing mouths to avatars.
The porn industry disagrees.
If the porn industry likes it, it's bad?
Guess we have to get rid of physical home media.
And the internet.
It was a good run I guess.
They're probably thinking of it in comparison to the Apple Vision Pro, which attempts to track the bottom half of your face to inform its 'Personas'; it notably still fails quite badly on bearded people, where it can't see the bottom half of the face well.
I gathered as much, but still.
Funny enough the Digital Foundry folks put a Gabe quote about tongue input in their most recent podcast.