games industry network programming

Can OnLive work, technically? If so, how?

This week, a game service was announced that would stream games to your home TV without you needing to own a console or PC. A lot of people are wondering: are these guys smoking crack?

EDIT: Richard Leadbetter at Eurogamer has an article with some great “side by side compare” video to show theoretical quality you can achieve with off the shelf compressors right now. He comes to a similar conclusion on the issue of latency, although having seen the video I disagree on the quality (he feels it’s unacceptable) – it’s clearly inferior, but it still provides a very nice experience, especially if you’re sitting 6-12 feet away from the screen.


Latency. You need >25 FPS rendering and you want 60 FPS rendering, especially on an HDTV. But … that’s just for the rendering of frames. The responsiveness of the game, the cycle “see something, make decision, press button, wait for game to process inputs, game changes, see new screen” runs at a rather different rate.

Humans can typically easily detect FPS differences up to around 50 FPS. Around 75 FPS seems to be the limit for most people to see any difference at all – even a very subtle “can’t quite consciously see the difference”. Without looking at the research (based purely on experience), if you show a video that alternates between white and black each frame, we can detect frame rate differences up to perhaps 200 FPS.

(there will probably be some maths mistakes in this post, of course. There always are :))


So, you need to be delivering at 1080p. That’s 2,073,600 pixels, each of which carries 4 bytes of colour information, i.e. about 8MB per frame.

Remember: internet connection / broadband speeds are measured in bits, not bytes, so an “8Mbps” connection (the fast connection available ubiquitously today) is only capable of 1 MB a second.

The fastest European and US home broadband connections I’ve yet seen peak at 24 Mbps, i.e. less than 1 % of the speed required to deliver full-screen HD video uncompressed.

If you look at the specs for those special HDMI cables you use to connect up your HDTV, you’ll see they are measured not in Mbps (like internet) but in Gbps – i.e. one thousand times as fast.
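If you want to sanity-check those numbers, here’s the arithmetic as a quick Python sketch – the 1080p resolution, 4 bytes per pixel, and 60 FPS figures are the ones used above:

```python
# Back-of-the-envelope check of the uncompressed HD numbers above.
width, height = 1920, 1080
bytes_per_pixel = 4
fps = 60

bytes_per_frame = width * height * bytes_per_pixel
print(f"per frame: {bytes_per_frame / 1e6:.1f} MB")       # 8.3 MB

bits_per_second = bytes_per_frame * 8 * fps
print(f"uncompressed: {bits_per_second / 1e9:.2f} Gbps")  # 3.98 Gbps

# A 24 Mbps connection as a fraction of that:
print(f"24 Mbps is {24e6 / bits_per_second:.2%} of the required rate")  # 0.60%
```

Hence “less than 1%”: a 24 Mbps line carries about 0.6% of what a raw 1080p/60 stream needs.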

Fast Video Data

But … in practice, some videogames tend to exhibit extremely high frame to frame coherency, with the most extreme examples being slow-moving strategy and platform games, and racing games.

I noted that the screenshots for OnLive seem to have quite a few racing games in there. Hmm.

The question now becomes: how many pixels can change in the gap of time from one frame to the next?

Assuming you can go into a spin in your car where the whole car rotates 360 degrees in 3 seconds, revolving around the camera’s centre, and you had a viewing angle of 120 degrees (most racing games tend to be wide-angle), you’d be shifting all 1920 columns of pixels off the screen every second.

i.e. the equivalent of one full frame of fresh pixels per second. In bandwidth terms, you need to send a mere 64 Mbits per second. Your fastest home internet connection outside Asia would “only” be too slow by a factor of two to three. In the best connected cities in the Asian countries with the very best internet infrastructures, you’d be able to manage this OK. Just.
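The spin scenario works out like this (a quick sketch using the numbers above; the exact figure is nearer 66 Mbps, rounded down to 64 in the text):

```python
# Racing-game spin: 360 degrees in 3 s, seen through a 120-degree FOV,
# so one full screen width of fresh pixel columns scrolls in per second.
width, height, bytes_per_pixel = 1920, 1080, 4

rotation_deg_per_s = 360 / 3                  # 120 deg/s
fov_deg = 120
screens_per_s = rotation_deg_per_s / fov_deg  # 1.0 full screens per second

fresh_bytes_per_s = screens_per_s * width * height * bytes_per_pixel
fresh_mbps = fresh_bytes_per_s * 8 / 1e6
print(f"{fresh_mbps:.0f} Mbps of fresh pixel data")           # 66 Mbps
print(f"vs 24 Mbps: too slow by a factor of {fresh_mbps / 24:.1f}")  # 2.8
```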

Custom Compression

The above assumes two things:

  1. a “perfect” scenario to make compression as easy as possible: as few pixels change as is theoretically possible
  2. the only ways you have of setting pixels are copying from an adjacent pixel or downloading it from the server

In practice, of course, neither is true. OnLive’s super sekrit proprietary video compression is clearly set up to provide a lot more alternative ways to set pixels, which reduces the data requirements, but on the other hand, in practice even in a racing game it tends to be much, much harder to retain pixels for copying.

In a platformer, you’re typically fine, but in any kind of moving-camera game, you’re generally screwed.

Custom compression will get you a long way, but don’t expect to see it help much in FPSes, for instance.
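To make the “copy from the previous frame or download it” model concrete, here’s a toy delta encoder. It’s purely illustrative – real video codecs use motion compensation, transforms and entropy coding, not anything this naive:

```python
# Toy frame-to-frame coherency: only send the pixels that changed
# since the previous frame, as (index, new_value) pairs.
def delta_encode(prev_frame, next_frame):
    """Return a list of (index, new_value) for every changed pixel."""
    return [(i, b) for i, (a, b) in enumerate(zip(prev_frame, next_frame))
            if a != b]

def delta_decode(prev_frame, deltas):
    """Rebuild the next frame by patching the previous one."""
    frame = list(prev_frame)
    for i, value in deltas:
        frame[i] = value
    return frame

prev = [0, 0, 0, 0, 0, 0, 0, 0]
next_ = [0, 0, 9, 0, 0, 7, 0, 0]
deltas = delta_encode(prev, next_)
print(deltas)  # [(2, 9), (5, 7)] -- 2 pixels sent instead of 8
assert delta_decode(prev, deltas) == next_
```

With high coherency (a platformer) the delta list stays tiny; with a fast-moving camera nearly every pixel changes and you’re back to sending the whole frame.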

Ping times

For input processing to precisely match a rendering speed of 60 FPS, you’d need a ping time of 16 ms or less. Unlike bandwidth, there’s no way to make up for occasional drops in connection quality by rendering lower quality video for a few frames without having a fat client (this is what fat clients are great for, and is one reason why many people think OnLive is a stupid idea). You’d have to *guarantee* that 16 ms latency worst-case, and so you’d probably be looking at more like 10 ms latency as your requirement (you have to handle all sorts of other random internet traffic on your system – AND this assumes you don’t share your internet connection with any home PCs that do any web browsing).
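The 16 ms figure is just the per-frame time budget at 60 FPS, which the whole server round trip has to fit inside (a simplification – it ignores encode/decode and display latency, which only make things tighter):

```python
# Per-frame time budget at various frame rates.
for fps in (25, 30, 60):
    budget_ms = 1000 / fps
    print(f"{fps} FPS -> {budget_ms:.1f} ms per frame")
# 25 FPS -> 40.0 ms, 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms
```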

Much has been made of the claim that OnLive is impossible because the best ping times anyone gets to the internet are measured in tens of milliseconds, and the averages are measured in hundreds of milliseconds.

Even limited only by the speed of light, a one-way trip from London to New York can never take much less than about 20 ms – and a round-trip ping over real fibre is closer to 60 ms.
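That physical floor is easy to check. The ~5,570 km great-circle distance and the ~0.66c propagation speed in optical fibre used below are rough assumptions:

```python
# Physical floor on London <-> New York latency.
great_circle_km = 5570                     # London to New York, roughly
c_vacuum_km_per_ms = 299_792.458 / 1000    # ~300 km per millisecond
c_fibre_km_per_ms = c_vacuum_km_per_ms * 0.66  # light is slower in glass

one_way_vacuum = great_circle_km / c_vacuum_km_per_ms
one_way_fibre = great_circle_km / c_fibre_km_per_ms
print(f"one way, vacuum: {one_way_vacuum:.0f} ms")       # 19 ms
print(f"round trip, fibre: {2 * one_way_fibre:.0f} ms")  # 56 ms
```

So a transatlantic server can never get anywhere near the 10-16 ms budget – which is exactly why the next point matters.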

Ah. But … your ping time to your ISP is measured in single-digit milliseconds. The sensible thing for OnLive to do would be to partner with ISPs, and pay them substantial amounts of money to host OnLive servers in each of their POPs around the country, so that each ISP’s consumers get sub-10 ms ping times to OnLive. This works.

Surprisingly few people in the online games industry today seem to have ever realised this was a viable solution to ping time issues. People who were around in the early 1990s remember doing exactly this kind of thing back when ping times were much, much higher – it used to be that every ISP ran its own game servers – but most people who haven’t lived through that don’t seem to ever think of it.

Over the years, I’ve thought of using it, but the only business plan I saw that *demanded* it was the MMOFPS with hundreds or thousands of players per single deathmatch level. Now OnLive comes along and I’d say that OL needs it too. I’m waiting to see an announcement of equity ownership from some major multinational ISP any day now…

9 replies on “Can OnLive work, technically? If so, how?”

Pretty much what I thought: ignore the tech and assume it mostly works (which isn’t that unreasonable), and it will succeed or fail based on relationships with ISPs.

So I suspect it will be a strange local thing, with a very long roll out time.

What it reminds me of is hearing about (as a kid) this thing called cable TV that was popular in America but not something I had the option of experiencing until many many many years later.

“The fastest European and US home broadband connections I’ve yet seen peak at 24 Mbps”

That seems odd. My impression was that 100 Mbps fiber right into apartments was not that uncommon. I’ve had that for the last seven years, and I’ve moved twice to new cities during that time. I’m typically able to utilize about 60-80% of it.

Which cities?

Akamai just published a review of US and worldwide broadband, and (IIRC) reported a mere 50% or so of US citizens were getting over 5Mbps even in the most connected states.

100Mbps for the last seven years anywhere outside of the new super-cities (most of which are in Asia) is insanely high – I know of cases, but these were tiny European cities that got upgraded at “the right time”, or were all super expensive executive apartments, luxury flats, or rare exceptions that happened to be sited right on an internet peering point. Basically … far from what any normal people could get. Where was that?

Think about the economics behind rolling out big iron almost locally to your players. In order to get a good *and constant* 1.5 Mb/s (which they claim is their SD bandwidth) you either need to place the servers at the exchanges, probably VERY uneconomical, or indeed at ISPs.

So yes, you’d need to partner with ISPs, and that might be fine for Europe. It might not be for the US, whose territory is much bigger, meaning some subscribers would probably fall outside the “acceptable latency” range.

But most importantly, you have to think about the underlying network telcos use. You’d think that the link between your copper wires and the ISP is a “perfect” 24 Mb/s (or 8, 4… or whatever is sold to you). But in practice, it will not be. For instance, at least in some of the biggest countries in Europe, telco backbones are built on top of ATM. ATM has several adaptation layers, meaning that you can choose different levels of quality and guarantees. That’s great because it means that you can “reserve” a 64 Kb/s link between 2 hosts in order to perform a glitchless phone call… But that costs (which is why phone calls are so expensive compared to typical data comms), as you need to immobilise resources all along the link. So, in order to cut costs, the link that separates you from your ISP in a typical (European, I guess; I don’t know about the US) setup is implemented using AAL5, which means it’s a best-effort protocol, very much like typical IP.

So, that means in turn that you will indeed experience your X Mb/s to your ISP *most of the time*, but there is *no guarantee* that you will constantly. If the data rate for your video game’s video stream is not constant, you either have to skip frames altogether, or start buffering, which is no good news for your response round trip either.

To come to the conclusion (sorry that was long), this means that in order to support high bandwidth, constant-ish data rate, you need to partner with ISPs *and* someone will have to pay for the low contention ratio on the telco backbone…

I’m not even going into the hardware necessary to run the games. :)
Even though it’s technically possible, my gut feeling is that in practice it’s going to be very expensive… Or very shit.

Skövde where I studied, a really small town connected through OpenNet, whose business plan is to build an open fiber net, and now Uppsala with Bredbandsbolaget. Apparently, 19% of all broadband subscribers have fiber in Sweden according to OECD’s report. With a total of 2 933 014 broadband subscribers that’s ~557 272 subscribers with fiber.

Actually the sensible thing to do would be to sell the server tech to convention centers and LAN parties and so on. Low latency environments where having a few rackmount servers and a bunch of low-spec PCs works very well.
