Consortya is a social massively multiplayer online game in which one user, a musical artist, broadcasts their own music live from their home studio to everyone else. You'll probably need to check out the experience for yourself to really understand it. Here's how we optimized it.
Here is an in-game screenshot showing the updated graphics quality settings.
The last concert with Bunny was so awesome, and everyone had an amazing attitude about it. One thing stood out though and made me sad. Our newest fan noticed his MacBook Pro was working way too hard to run the game. So I thought about it, wondering why a fairly fast computer would have trouble with Consortya. In this blog post, you’ll see some of my findings, and the next time you join a concert in Consortya, you’ll feel the results.
What We’re Measuring
When you run a program on your PC, you can watch its CPU usage in Task Manager. On a Mac, the equivalent tool is Activity Monitor. When CPU usage is high, your computer is working really hard and will probably get warmer to the touch.
We aimed to increase the game's overall frames per second (FPS). A game does two things in a loop: it builds a frame, then it displays that frame. Most monitors can only show 60 frames per second, so why would we try to push the frame rate past 100 FPS? Because a game that can render more frames per second is spending less CPU on each frame. In the version you'll run on your computer, we cap the frame rate at 60 (or 30, depending on your setting).
So, for instance:
1. the game was able to render 60 FPS
2. we increased the performance so that it can render 100 FPS
3. when we then cap the frame rate to 60 FPS, it will theoretically consume 40% less CPU.
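The arithmetic behind step 3 can be sketched out, assuming CPU time scales roughly linearly with the number of frames rendered (a simplification, but a useful one):

```csharp
// Back-of-the-envelope sketch, not actual game code: if each frame costs
// roughly the same, CPU use is proportional to frames rendered per second.
class FrameCapMath
{
    static void Main()
    {
        double uncappedFps = 100.0; // what the optimized game can render
        double cappedFps = 60.0;    // the in-game cap
        double savings = 1.0 - cappedFps / uncappedFps;
        System.Console.WriteLine($"Estimated CPU savings: {savings:P0}"); // about 40%
    }
}
```

In other words, the cap lets us bank the headroom we earned as reduced CPU work instead of extra invisible frames.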
Here you can see the CPU usage decreasing by about 40% on Medium Quality. I nicknamed my PC The Oblivionator.
The first thing I did was run the game through the Unity Profiler. That's a tool that ships with the Unity3D game engine and shows me what the game is spending its time on; it's incredibly useful for finding what might be using up too much CPU. I'm using a package called NGUI to manage the interface. When a clickable view is not being shown, I move that view off screen. The biggest performance increase came from also making the view inactive when I move it off screen, which keeps it from doing any updates at all.
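A minimal sketch of that idea (the names here are hypothetical, not the actual Consortya code): instead of only moving an NGUI view off screen, also deactivate its GameObject so Unity stops running its per-frame callbacks.

```csharp
using UnityEngine;

// Hypothetical helper: hides a UI view by moving it off screen AND
// deactivating it, so its components stop receiving Update() calls.
public class ViewHider : MonoBehaviour
{
    public Vector3 offScreenPosition = new Vector3(10000f, 10000f, 0f);
    private Vector3 onScreenPosition;

    public void Hide(GameObject view)
    {
        onScreenPosition = view.transform.localPosition;
        view.transform.localPosition = offScreenPosition;
        view.SetActive(false); // the key optimization: no more per-frame work
    }

    public void Show(GameObject view)
    {
        view.SetActive(true);
        view.transform.localPosition = onScreenPosition;
    }
}
```

Moving the view off screen keeps its layout state intact; deactivating it is what actually stops the update cost.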
The next thing the profiler showed was an update loop on an object that detects whether your mouse is over another character or a UIView. This was ridiculously wrong (I put a note of it on our internal Wall of Shame): I was calling GameObject.Find, one of the most taxing functions Unity3D offers, inside an Update loop. I'm sure it was some leftover piece of test code, but I moved that call so it runs only once, in Start.
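The fix looks something like this (hypothetical names, but the pattern is the standard one): cache the expensive lookup once in Start() instead of repeating it every frame.

```csharp
using UnityEngine;

// Sketch of the fix: GameObject.Find walks the scene hierarchy,
// so do it once in Start(), not 60+ times per second in Update().
public class HoverDetector : MonoBehaviour
{
    private GameObject cursorTarget;

    void Start()
    {
        // One scene search, cached for the lifetime of this component.
        cursorTarget = GameObject.Find("CursorTarget");
    }

    void Update()
    {
        // Before the fix, the GameObject.Find call lived here,
        // costing a full scene search every single frame.
        if (cursorTarget != null)
        {
            // ... hover detection against the cached reference ...
        }
    }
}
```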
Once that update loop was gone, the profiler started showing calls related to the user's name tag and emoji display. Each user has a name tag and emoji display that follows them, and the default settings recompute it every single frame. I found a great optimization: the name tag updates normally for the first half second, and after that it only has a 10% chance per frame of updating its anchors. So even with dozens of people in the club at once, not everyone's name tag updates at the same time (or every frame, as it had before).
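One way to read that scheme as code (again, hypothetical names, and this is my sketch of the logic rather than the shipping implementation):

```csharp
using UnityEngine;

// Sketch of the staggered name-tag update: settle anchors normally for the
// first half second, then only re-anchor with 10% probability per frame, so
// dozens of tags in the club never all update on the same frame.
public class NameTagUpdater : MonoBehaviour
{
    private float elapsed;

    void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed < 0.5f)
        {
            UpdateAnchors(); // update every frame while the tag settles
            return;
        }
        if (Random.value < 0.1f) // afterwards, a 10% chance per frame
            UpdateAnchors();
    }

    void UpdateAnchors()
    {
        // ... reposition the name tag and emoji display over the character ...
    }
}
```

Because each tag rolls its own random number, the expensive anchor updates spread naturally across frames instead of spiking together.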
Some users reported that their audio stream was cutting out. I measured our average bandwidth at about 1.2 Mbps (megabits per second). That's awfully high. Each user sends a UDP packet containing their location every 1/10th of a second to help the extrapolation algorithm keep track of where every user is. That's a bunch of jargon, but it's essentially how all of your friends in Consortya can see where you are and watch you walking or running smoothly around. It's also often unnecessary, because sometimes the user isn't even moving, just dancing and having fun in the same spot. Previously, when users were in their character's dressing room, I sent the packet every 5 seconds because I knew they weren't moving. I realized that the same optimization would work anywhere if I just check whether the user has moved. So the optimization (which will also help our server) is to send an update packet every 0.1 seconds while the user is moving, and at least once every 5 seconds otherwise.
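A sketch of that throttling logic (hypothetical names; the send function and movement threshold are placeholders, not the real networking code):

```csharp
using UnityEngine;

// Sketch of throttled position updates: fast 0.1 s updates only while the
// player is actually moving, plus a 5 s heartbeat so the server still
// hears from idle players.
public class PositionSender : MonoBehaviour
{
    private const float MoveInterval = 0.1f;    // while moving
    private const float HeartbeatInterval = 5f; // while idle
    private Vector3 lastSentPosition;
    private float timeSinceSend;

    void Update()
    {
        timeSinceSend += Time.deltaTime;
        bool moved = (transform.position - lastSentPosition).sqrMagnitude > 0.0001f;

        if ((moved && timeSinceSend >= MoveInterval) || timeSinceSend >= HeartbeatInterval)
        {
            SendUdpUpdate(transform.position);
            lastSentPosition = transform.position;
            timeSinceSend = 0f;
        }
    }

    void SendUdpUpdate(Vector3 position)
    {
        // ... serialize and send over the game's UDP channel ...
    }
}
```

For a user who stands still and dances, this drops position traffic from 10 packets a second to one every 5 seconds.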
I added a new quality setting called UltraLow. It's targeted at users who don't particularly care how flashy the game looks, but care a lot about how smooth and consistent it runs on their computer. It's a really great setting, but on its own it only offered a slight improvement.
After the earlier optimizations, I noticed we were getting at least 30 FPS more on every display setting on my desktop PC. But on my wife Carin's MacBook, the FPS she could render was nearly the same. So I set Vsync to every other frame for the Low and the new UltraLow settings, which then allowed me to set the target frame rate for those two settings to 30 FPS. We noticed a huge reduction in CPU usage on all computers once we added that. Previously, I had Vsync turned off for the low settings because I figured people wanted the best frame rate they could get. I realized that on a faster computer, CPU usage on those low settings stayed high because the game was rendering as many frames as possible even though it only needed 30 or 60 FPS. This was an awesome way to harness all of the benefits of the earlier optimizations: by capping the frame rate, and making each frame easier to render, we saved the computer a lot of work.
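In Unity terms, that configuration would look roughly like this (the values match what's described in this post, but the class and method are my sketch, not the shipping settings code):

```csharp
using UnityEngine;

// Sketch of the per-quality frame caps: Low/UltraLow sync to every second
// V blank and target 30 FPS; Medium/High sync to every V blank at 60 FPS.
public static class FrameRateSettings
{
    public static void Apply(bool lowQuality)
    {
        if (lowQuality)
        {
            QualitySettings.vSyncCount = 2;   // every second V blank
            Application.targetFrameRate = 30; // fallback cap
        }
        else
        {
            QualitySettings.vSyncCount = 1;   // every V blank
            Application.targetFrameRate = 60; // fallback cap
        }
    }
}
```

One caveat worth knowing: on desktop platforms, when vSyncCount is nonzero Unity paces frames off the display's refresh rate and ignores targetFrameRate, so the explicit target mainly matters as a fallback where Vsync isn't in effect.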
See the way we capped the FPS to the refresh rate of the computer? That makes the game run very smoothly on faster computers, and keeps slower computers from trying too hard.
You can think you know all the answers, but it mostly takes time and clever tricks to pinpoint and fix the bottlenecks in a massively multiplayer online (MMO) experience. I can proudly say that CPU usage has gone way down and the game should run smoother on every graphics setting. So the next time you join a concert in Consortya, I'll get to see you for a lot longer!
Here is all of the data in a huge chart. For reference, UltraLow and Low target 30 FPS with Vsync set to Every Second V Blank; Medium and High target 60 FPS with Vsync set to Every V Blank. I think there is still a lot of room for improvement, because some system and settings combos don't yield much of an FPS increase, although the CPU usage is always better. I also offer in-game settings so users can cap their frame rate and Vsync themselves.
I wrote a unit test and measured the FPS and CPU on each computer for each version at each quality level… I should figure out how to do automated testing. But at least it was a uniform test that no user error could mess up.