Categories: computer games, games design, MMOG development, network programming, networking, programming, system architecture, web 2.0

MMO scalability is finally irrelevant for Indie MMOs

Here's a great post-mortem on Growtopia (launched 2012, developed by a team of two).

It’s slightly buried in there, but I spotted this:

“I had guessed conservatively, 600 players online would be our max.”

(a sensible estimate, in 2000–2005, for an unoptimized, badly written server that you threw together quickly because getting your game launched mattered more than wasting time on "beautiful code")

"In mid-April we hit 2,000 concurrent users … our server began to buckle. Round trip to punch something could take a full second and people were constantly being disconnected."

(Both “1 second RTT” and “people being disconnected a lot” are classic signs of a server that is FUBAR and needs some emergency work to fix the scaling problems)

“I would write a “V2 server” … that would [use] multiple cores; it made no sense that our hardware had 16 cores and 32 threads but our entire Growtopia server process was run in a single thread.”

Wait, … what? You’re single-threaded?

Everything I wrote above (about "normal" conservative estimates, etc.) was valid assuming you used multi-threaded code that was badly written, with poor synchronization design (it locked often because you'd been too lazy to think carefully about your code), and so on.

Growtopia isn't unique – but I believe it's symptomatic of what's happened more widely. Off-the-shelf tech has reached the point where scalability is finally irrelevant for Indie MMOs. Just write your code (badly) as multi-threaded, and you'll be fine. (By "off the shelf" I mean: standard libraries + standard VMs + standard hardware + standard OSes.)
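To make that concrete, here's a minimal sketch of what "badly written but multi-threaded, on off-the-shelf tech" can look like: one thread of execution per connection and a single coarse lock around the shared world state. It's written in Go purely for illustration; this is not Growtopia's code, and the port number and the GameWorld type are invented for the example. The point is that even something this naive lets the runtime spread your players across every core the machine has.

```go
// Minimal sketch: one goroutine per client, one coarse mutex around the
// shared world state. Port 5000 and GameWorld are illustrative assumptions.
package main

import (
	"bufio"
	"log"
	"net"
	"sync"
)

// GameWorld is the shared state that every connection touches.
type GameWorld struct {
	mu      sync.Mutex
	players map[string]string // remote address -> last message (stand-in for real state)
}

// Handle applies one client message under the single coarse lock.
func (w *GameWorld) Handle(addr, msg string) string {
	w.mu.Lock() // crude, contended lock: "badly written" but correct
	defer w.mu.Unlock()
	w.players[addr] = msg
	return "ok\n"
}

func main() {
	world := &GameWorld{players: make(map[string]string)}

	ln, err := net.Listen("tcp", ":5000")
	if err != nil {
		log.Fatal(err)
	}

	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		// One goroutine per connection; the runtime schedules them
		// across all available cores with no extra effort from us.
		go func(c net.Conn) {
			defer c.Close()
			sc := bufio.NewScanner(c)
			for sc.Scan() {
				reply := world.Handle(c.RemoteAddr().String(), sc.Text())
				if _, err := c.Write([]byte(reply)); err != nil {
					return
				}
			}
		}(conn)
	}
}
```

You can poke it with telnet or netcat: every line you send gets an "ok" back after touching the shared state under the lock. Swap the map update for real game logic and you have the shape of server I'm describing: nothing clever, just multi-threaded.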

Your assumption today – when you quickly hack together version 0.1 – should be: "this should be OK for 5,000 concurrent users (pessimistically)".

Things have changed…

5k concurrent translates to somewhere between 100k and 200k actual users. If you've got 100k users and you're not making a considerable chunk of money … you're doing something very wrong with your business model :). By the time scalability becomes an issue, you'll have the cash to pay someone (maybe yourself) to write "version 1.0" of your server code.
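A quick back-of-envelope check on where those numbers come from: they imply that peak concurrency is somewhere around 2.5–5% of your total player base. The ratios in this throwaway snippet are my inference from the 100k–200k range above, not something stated explicitly.

```go
// Back-of-envelope: total players implied by 5k CCU at assumed
// peak-concurrency ratios of 5% and 2.5% (my inference, not stated above).
package main

import "fmt"

func main() {
	const ccu = 5000.0
	for _, ratio := range []float64{0.05, 0.025} {
		fmt.Printf("%.1f%% peak concurrency -> %.0f total players\n",
			ratio*100, ccu/ratio)
	}
	// Prints 100000 and 200000, i.e. the range quoted above.
}
```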

To be clear: not "something amazing, highly optimized, super-slick and efficient", but rather "doesn't suck".

When I started in online games / MMOs, scalability was a "mega-critical" issue. I did a lot of work on the theory and application of server optimization (from architecture design, through the choice of programming languages, to the use of low-level calls on specific OSes) – I even got a patent for my work, and wrote a chapter for the Game Programming Gems series.

Today, there’s still a lot of FUD around scalability – and it’s made worse IMHO by the branding/commercialization of scalability (cloud computing, noSQL etc are really marketing ideas from startups and corporates, *not* tech ideas). People are *afraid* of scaling.

For a while, that was a good thing: there are many "failed" MMOs from the 2000s (names omitted to protect the innocent) that were hurried along to their deaths by terrible, non-scalable tech.

But the world has moved on. Server speed tends to advance more slowly than client speed (compare servers to graphics cards…) – but we're now at the point where your servers are already so fast that IT DOESN'T MATTER. Yay!

5 replies on “MMO scalability is finally irrelevant for Indie MMOs”

“Today, there’s still a lot of FUD around scalability – and it’s made worse IMHO by the branding/commercialization of scalability (cloud computing, noSQL etc are really marketing ideas from startups and corporates, *not* tech ideas). People are *afraid* of scaling.”

Hm, that's a very interesting point. I haven't been looking into this for very long – but cloud computing services seem really great these days.

I'd say you were doing something wrong if you were actually running your own servers – when you could rent them instead, have them off your back, and probably get what you're paying for without any setup or investment overhead.

But I’ve never done that so I wouldn’t know – you’re welcome to correct me/fill me in.

Follow the money: the "cloud computing" mantra led to a massive increase in the profitability of hosting companies. This is no accident! To be clear: I like cloud services, and I use them often – but with a narrow focus. They are *not* a good universal solution.

“doing something wrong if you were actually running your own servers”

This is *only* true in a limited set of cases, for two reasons:

1. The overhead/profit margin on cloud services is ENORMOUS – most businesses save a considerable amount of money by running their own servers.

2. The SLA on cloud is TERRIBLE. Witness what happened when AWS went down for a couple of days – most of the cloud providers went off the web, because they were all piggybacking off other cloud providers, who were ultimately using Amazon instead of owning their own hardware.

The benefits of cloud are considerable – mostly: low starting costs, the ability to rapidly "spend more money to get more service", and little technology expertise needed to "get a server running".

…but most startups will rapidly outgrow those benefits. And most SMEs, in my experience, quickly find that these are only short-term benefits that hide the fact that the company has lost its skilled IT department. Soon enough, other parts of their IT fail, and it's a disaster. OTOH, if they have a skilled IT department, maintaining their own servers is very easy.

(And, obviously, both of the issues I list above can be worked around – but typically fixing one makes the other worse. I've seen cloud with great SLAs, but it costs you an arm and a leg. Vice versa, I've seen ultra-cheap cloud that goes head to head with dedicated servers on price – but the internet quality is bad: your server bandwidth is low, and/or your peering connections are very poor, so you get poor download/upload speeds in practice … etc.)

Thanks, that was insightful to read :)

I think that cloud computing can be useful on demand – at least for websites and web applications, but maybe for online games as well.

Like, a combination of your own servers and cloud servers. There are services that let you rent hardware at hourly rates – that seems very useful for peak times, so your game can stay snappy.

How many times was I simply logged out of WoW because there were "too many players" online on that server… I think issues like that could be solved by outsourcing your servers on demand.

But somehow a lot of games still launch that fall apart with just a hundred or so players in the same vicinity.

Bad architecture tends to fail at almost any scale. The problem in the game industry is that a lot of developers have bad habits that lead to bad code. They don't take the time to check whether a problem has already been solved, they don't really believe in testing; it's just not a good development culture in general, and it shows.

Outside of games like Eve, there isn’t a game out there that can handle several hundred players in the same area without seriously degraded functionality or performance on the client.

And a quick comment on cloud services. We did a lot of testing of VMs versus physical hardware where I last worked, and for games, physical hardware won every time on cost/performance. Deterministic response times are important for multiplayer games, and virtual machines are really bad at delivering deterministic performance. Plus, most cloud providers are running ancient hardware. The last time I provisioned hardware for a game we used 8-core Xeon i7s, and that was almost 2 years ago. Most cloud providers are still running 4-6-year-old Opteron chips.
