You have probably heard of a concept called “emulation” if you read this blog often.  I even mentioned it several times when talking about glitch hunting or just being able to see more in depth what a game is doing.  If you have never heard of it, the featured image of this post illustrates it pretty well: you have Super Paper Mario, a Wii game, running on something that is clearly not a Wii judging by the specs info next to that window (it’s actually my system 🙂 ).  In fact, it’s clearly running inside a program on a system which is, again, nothing like a Wii.  This is mainly what emulation allows you to do, and not only does it sound awesome that you can do that, these programs also have much more extended uses that only they can offer.

First, I just want to share a little bit of my experience with Dolphin in particular, because I actually have contributed (and probably will keep contributing) to the project since it is open source (and learning about the project is what allowed me to write this post).

My experience with Dolphin

I love the GameCube; it’s seriously my favorite console ever, mainly because the quality of its games is insanely high compared to basically every other system I have seen.  Most of the best games I played were on it, and it really was the system that got me into gaming and, by extension, computer science.

So naturally, I became interested in the Dolphin project quite early on (I started using it around version 3.5, which was released about 4-5 years before writing this post; the latest stable release, 5.0, came out about 6 months ago).  Since I didn’t know much about computers at the time, I couldn’t do much and I didn’t have a good enough machine to run it smoothly, but I got interested.  It’s only years later that I really got serious about the project, because it became an invaluable tool for my glitch hunting and I ended up reading every progress report on their development blog.

Eventually, I knew enough C++ to contribute to it.  So far, I mainly worked on making the debugger way more stable (a very useful glitch hunting tool btw) and I also reworked the interface for configuring inputs like controllers or hotkeys, since it was quite a mess.  So I didn’t go TOO deep into its inner workings (idk if I will one day, it’s fun to learn at least), but I am at least much more aware of how difficult it is to make an emulator and of most of the trouble associated with it.  I also happen to know a bit more because I naturally got more active talking with the other main developers on their IRC channel (heck, I was even featured in some progress reports).  If you are interested in their blog (which honestly features very weird bugs and some humorous entries), I recommend you read it; it can be found here: https://dolphin-emu.org/blog/ .  It also really shows how much the emulator has progressed over the last few years, which is in my opinion quite impressive.

I figured I knew enough to write an exhaustive post about emulators, so here it is.

Describing an emulator

An emulator is basically a program that allows you to emulate something, which means you will be trying to simulate a different environment on one that likely has zero support at all for it.  For example, consider a SNES cartridge: you can’t read cartridges THAT big on a PC (not even floppies were that big), so good luck finding a way to physically insert it, and second, you of course can’t read it normally because the connector is proprietary to the SNES.  The thing is, even if you WERE able to understand and read the SNES cartridge format, good luck doing something meaningful with it, because the cart was designed to run on a system that is just not a PC (seriously, the architecture and the hardware differ way too much).  So even then, it’s quite obvious you need to understand much more about the system and the media to be able to do anything.

This is why emulators exist.  Their goal is to simulate that system via mostly software strategies so that its media and peripherals run as if you were on the actual hardware.  That way, that cartridge can be read (though, since it’s still a problem to physically insert it, you have to use a digital dump of it, usually called a ROM or ISO), the game can boot because it boots on a simulated system, and you can even play it because, since you are simulating the whole system, it’s not a stretch that it also simulates peripherals and controllers, sound and of course graphics.
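To give a rough idea of what “simulating a system in software” means, here is a minimal sketch (in C++, since that’s what Dolphin and my contributions are written in) of the heart of an interpreting emulator for a completely made-up CPU.  Everything here (the opcodes, the register count, the RAM size) is hypothetical and exists purely for illustration; a real console has a real instruction set plus a GPU, an audio DSP, timers and much more:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// A made-up, drastically simplified guest CPU: a handful of registers,
// some RAM and a tiny instruction set. Real consoles are far more complex.
struct GuestCpu {
    uint32_t pc = 0;                 // program counter in guest memory
    uint32_t regs[8] = {};           // general purpose registers
    std::vector<uint8_t> ram;        // emulated main memory

    explicit GuestCpu(std::size_t ramSize) : ram(ramSize, 0) {}
};

// Hypothetical opcodes, invented for this sketch.
enum : uint8_t { OP_NOP = 0, OP_LOADI = 1, OP_ADD = 2, OP_HALT = 3 };

// The classic fetch/decode/execute loop: read the next guest instruction
// from emulated memory, figure out what it means, and apply the
// equivalent effect to the emulated state.
void run(GuestCpu& cpu) {
    for (;;) {
        uint8_t opcode = cpu.ram[cpu.pc++];           // fetch
        switch (opcode) {                             // decode
        case OP_NOP:
            break;
        case OP_LOADI: {                              // execute: reg[d] = imm
            uint8_t dst = cpu.ram[cpu.pc++];
            uint8_t imm = cpu.ram[cpu.pc++];
            cpu.regs[dst & 7] = imm;
            break;
        }
        case OP_ADD: {                                // execute: reg[d] += reg[s]
            uint8_t dst = cpu.ram[cpu.pc++];
            uint8_t src = cpu.ram[cpu.pc++];
            cpu.regs[dst & 7] += cpu.regs[src & 7];
            break;
        }
        case OP_HALT:
            return;
        }
    }
}

int main() {
    GuestCpu cpu(64);                     // tiny "console" with 64 bytes of RAM
    uint8_t program[] = { OP_LOADI, 0, 2, // r0 = 2
                          OP_LOADI, 1, 3, // r1 = 3
                          OP_ADD,   0, 1, // r0 += r1
                          OP_HALT };
    std::copy(std::begin(program), std::end(program), cpu.ram.begin());
    run(cpu);                             // cpu.regs[0] is now 5
}
```

A real emulator does conceptually the same thing, just for every instruction of the real CPU and for every other chip in the console, which is exactly why it gets so expensive.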

A little clarification

If you have ever heard of another concept called virtualization, don’t confuse it with emulation, because although they are very similar concepts, there’s one key difference.  Virtualization is basically like emulation, but instead of bringing up a 100% incompatible environment supported by software simulation, you literally create a similar enough environment via a mix of hardware resources and software.  If you don’t understand what I mean, take a look at this shot showing my system running a virtualised machine (aka a virtual machine or VM):

[Screenshot: a Windows 10 virtual machine (left) running next to my physical Linux machine’s specs (right)]

Both machines have relatively similar hardware because what is happening is that the machine on the right (the physical one) is dedicating some of its resources (CPU, RAM, disk space, even PCI and USB devices, etc…) to make the virtualised machine on the left run.  This means that the operating system installed on the virtualised machine could honestly be anything that would work on my physical machine, so I can run Windows 10 even though I use Linux, and I do this because it solves a huge compatibility problem, mostly with games.

But here, it’s a lot more efficient because a huge chunk of the work done to create that environment is done by my own hardware, not mostly by software.  This means it’s not that hard, with slightly beefier hardware, to have a VM running at near full speed, and it’s not even surprising to see VMs running at native speed very easily; you just need slightly better hardware than you would normally need (because you are dedicating some of it).

With an emulator however, EVERYTHING is software based; the emulated machine is just too different from the PC, so there’s no way you can virtualise something like a GameCube or a Wii.  This is a huge performance problem because your CPU has to do most of the work: translating instructions (or dynamically compiling them if the emulator uses a JIT), managing the emulated memory, managing the system, and things like peripherals have to be correctly redirected to the devices you actually want to use (like controller input), so this is all extra work.  Even worse, with graphics-intensive consoles, even a high-end GPU might be desirable (though it’s mostly the CPU that makes a big difference).
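To illustrate the “managing the emulated memory” and “redirecting peripherals” part, here is a hedged sketch of what a single guest memory read could look like.  The memory map and the controller register address are completely made up for this example (every real console has its own memory map and input hardware); the point is just that every memory access the game makes has to be checked, and possibly redirected, by software:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical guest memory map, for illustration only:
// 0x00000000 - 0x01FFFFFF : emulated main RAM
// 0x0C000000              : made-up memory-mapped controller register
constexpr uint32_t RAM_SIZE       = 0x02000000;
constexpr uint32_t PAD_STATUS_REG = 0x0C000000;

struct EmulatedMachine {
    std::vector<uint8_t> ram = std::vector<uint8_t>(RAM_SIZE, 0);
};

uint8_t readHostGamepad() {
    // Stub: a real emulator would poll the host's input API here
    // (SDL, XInput, evdev, ...) and pack it into the guest's expected format.
    return 0;
}

// Every single guest memory access has to go through something like this
// (or a faster equivalent); that overhead is a big part of why the host CPU
// needs to be so much stronger than the console it emulates.
uint8_t readByte(EmulatedMachine& m, uint32_t guestAddress) {
    if (guestAddress < RAM_SIZE)
        return m.ram[guestAddress];   // plain emulated RAM
    if (guestAddress == PAD_STATUS_REG)
        return readHostGamepad();     // redirect to a host device
    return 0xFF;                      // unmapped address: return junk
}
```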

So let’s now detail why you would use an emulator over the original hardware.

The different use cases of an emulator

The first thing that would probably come to mind is yes, you can play console-only games on your PC.  That sounds awesome because it avoids having to buy the hardware, which btw becomes harder to get in some cases (unless you need the hardware to dump your games, in which case 😦 ).  It also allows you, if your PC was already built for gaming, to extend your library more easily instead of having 2 machines.  Last but not least, it offers extended features that aren’t on the original hardware; a classic one is save states, which are like snapshots for a VM: the state of the machine is saved into a file, so when you load it, it’s not just loading the game’s save file, it loads the entire state of the machine at that point.  You also get to use different devices, like a 360 controller on a SNES game (btw, I don’t recommend doing that, but you can).
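Conceptually, a save state really is just “write every piece of emulated state to a file and read it back later”.  Here is a minimal, hypothetical sketch of that idea for a toy machine; a real emulator has to serialize far more (GPU state, audio buffers, pending interrupts, timers…) and usually versions the format so old states keep loading:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Toy emulated machine state, for illustration; real emulators also carry
// GPU state, audio buffers, pending interrupts, timers, and so on.
struct MachineState {
    uint32_t pc = 0;
    uint32_t regs[8] = {};
    std::vector<uint8_t> ram = std::vector<uint8_t>(16 * 1024 * 1024, 0);
};

// A save state is simply every bit of emulated state dumped to a file...
void saveState(const MachineState& m, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(&m.pc), sizeof(m.pc));
    out.write(reinterpret_cast<const char*>(m.regs), sizeof(m.regs));
    out.write(reinterpret_cast<const char*>(m.ram.data()), m.ram.size());
}

// ...and loading it puts the machine back at that exact moment, which is
// something the original console simply cannot do.
void loadState(MachineState& m, const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    in.read(reinterpret_cast<char*>(&m.pc), sizeof(m.pc));
    in.read(reinterpret_cast<char*>(m.regs), sizeof(m.regs));
    in.read(reinterpret_cast<char*>(m.ram.data()), m.ram.size());
}
```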

That alone already covers most use cases, but there are other, more specific and uncommon use cases that are only possible on an emulator.  First, TAS: the emulator is the most important tool for making a TAS if the game isn’t on PC, because how else can you go frame by frame, return to a previous state to redo movements and even create the input file?  You NEED an emulator for this.  The other nice use case is glitch hunting.  I have talked about this a lot already on this blog, but having the game run in a controlled environment like your PC allows many things to be done to analyse the game.  The most common one is running a RAM search like Cheat Engine and being able to search, see and modify the emulated memory in real time (that’s like hardware cheat devices, but far more powerful and with search).  You can also have a debugger, if the emulator supports it, to see the game’s assembly code, so with time you can even figure out what the game is doing and when, or why it crashes, for example.  Glitch hunting on console is very limited, but using an emulator pretty much lets you get the most info you can possibly get on the game as it runs, which is exactly what you want for glitch hunting.
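To give an idea of what a RAM search like Cheat Engine does against emulated memory, here is a hedged sketch of the usual two-step scan: find every address holding a known value (say, your coin count), then let the value change in game and keep only the addresses that followed it.  The layout is made up, and a real tool aimed at a GameCube/Wii game would also have to deal with the guest being big-endian while the PC is little-endian:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// First pass: collect every address in the emulated RAM whose 32-bit value
// matches what we currently see in game (e.g. the coin count).
// Note: this reads host byte order; a real GC/Wii tool would byte-swap.
std::vector<std::size_t> initialScan(const std::vector<uint8_t>& ram, uint32_t value) {
    std::vector<std::size_t> hits;
    for (std::size_t addr = 0; addr + sizeof(uint32_t) <= ram.size(); addr += 4) {
        uint32_t v;
        std::memcpy(&v, &ram[addr], sizeof(v));
        if (v == value)
            hits.push_back(addr);
    }
    return hits;
}

// Later passes: the value changed in game (we gained a coin), so keep only
// the addresses that now hold the new value. A few iterations of this is
// usually enough to pin down the exact address.
std::vector<std::size_t> refineScan(const std::vector<uint8_t>& ram,
                                    const std::vector<std::size_t>& previousHits,
                                    uint32_t newValue) {
    std::vector<std::size_t> hits;
    for (std::size_t addr : previousHits) {
        uint32_t v;
        std::memcpy(&v, &ram[addr], sizeof(v));
        if (v == newValue)
            hits.push_back(addr);
    }
    return hits;
}
```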

There’s however one little thing with these 2 use cases: not only are they uncommon, but when it comes to how accurate the emulator needs to be, they are EXTREMELY demanding.  So demanding that it doesn’t even matter if performance is an issue, because making a TAS has the TASer going way slower than the game would run anyway, and for glitch hunting, performance is just a convenient thing to have and that’s it; you don’t care about having smooth gameplay, you just care about analysing the game.  However, since TASing requires a lot of research on the game, just like glitch hunting, the two tend to get merged into the same idea, which goes as follows: you should only care about what the game is doing, not the emulator.  It’s logical: when I glitch hunt, I should ask “Why is the game doing this?”, “Why does the memory address that controls this get changed here?”, “What does this function do, when is it called and how often?” and the classic “Why are you doing this, game?” 🙂 (seriously, it sometimes gets emotional how weirdly a game reacts to things).  You should NEVER have to ask yourself “Is the emulator doing what the game wants?” because it’s just an unnecessary question to have while it’s already hard to analyse the game in the first place, and if it’s that bad, you might as well test on console with many memory cards.

I find that kind of interesting: most users will care about performance and won’t care as much about accuracy, but having accuracy is what makes these extreme use cases possible as well.

These 2 concerns are btw the main ones any emulator must face, and this is mostly what I am going to detail in the following sections.

Performance concerns

Depending on the emulated console, getting decent performance becomes a huge concern.  If the game natively runs at 60 fps, you ideally want as many people as possible to run said game at 60 fps or better.  The problem is, as I said, that the hardware requirements go way up compared to the original hardware.  To put this in perspective, the GameCube and Wii CPU is a PowerPC one, an architecture barely anyone uses anymore (old Macs used to, but now they are on x64 like everybody else) and which btw was designed more for embedded devices than for high-performance PCs.  However, it takes A MUCH BETTER CPU than that to run Dolphin, which already has quite decent performance.  According to the Dolphin FAQ, the recommended kind of CPU for comfortable play is something like “Newer Core i5 and i7 processors such as the i5-4670K and i5-3570K” (source: https://dolphin-emu.org/docs/faq/#which-cpu-should-i-use), and they don’t even recommend AMD ones due to their lower single-thread performance.  These CPUs are like…..about 2-3 years old as of writing this post, which isn’t that old.

Now, if you are emulating something like a SNES, this of course becomes easier because it doesn’t need that much, but it’s still a concern because of the other main constraint an emulator just HAS to pay attention to.

Accuracy of the emulator

This is a huge one.  Since this is a simulated environment, you want it to be at least accurate enough that your task (mostly playing, here) works fine.  However, it’s not easy; there is a TON of stuff to take care of.  The main problem is having to accurately figure out how the actual hardware works at a VERY low level so that you can try to do the same thing on a PC.  This is called reverse engineering, which essentially is analysing the hardware and understanding it well enough to be able to engineer the same thing again.  This is of course very prone to error (like not understanding everything) and complicates things A LOT.  You aren’t going to magically make a console work exactly as it should inside the emulator; it just isn’t that simple, because if there are accuracy errors, you are going to see the weirdest bugs you could ever imagine when you play your games or, worse, they won’t even boot.

Now if that wasn’t hard enough, it gets worse when you combine these 2 main constraints together.

A trade-off between accuracy and performance

Logically, if you are emulating more things for the sake of accuracy, you are going to give more work to your poor CPU, which already has to do most of the work.  Doing that indirectly decreases performance because you bring a bigger workload, which means the same hardware is going to run slower.  So, not only do you have to make sure your emulator is accurate and offers decent performance, you now have to CHOOSE between them in several cases.  Emulating a feature could add support for more games and features, but at the expense of pushing the hardware requirements even higher.  There’s also the problem that doing things more accurately requires more development effort, so in the end, you can’t have a system so 100% accurate that a laser isn’t precise enough to measure it and still have really good performance.  You just have to make choices.

To bring Dolphin into this, yeah, it went through a lot of these choices, such as dropping support for 32-bit builds to allow better graphics emulation, switching to synchronous audio, which made the audio very slow if you didn’t run at 100% speed (but gave you cleaner audio than before), and several, several, several other decisions that make the emulator more accurate, but harder to run.

One that I personally experienced for at least a year is Dolphin getting a better hardware implementation of a graphical feature called bounding box emulation, which is basically required to play both Paper Mario games because they use it so much in their animations.  Before, it was buggy (the Punies in chapter 2 would just constantly glitch out graphically, and water reflections were also urgh), then it got a working software implementation, but it wasn’t good enough, so they switched to a proper hardware implementation, which brought the requirement of a GPU with at least OpenGL 4.3 support.  Mesa, at the time, didn’t have that on integrated graphics, so I didn’t have the extension I needed (and it took a long time before I got an Nvidia GPU).  It was actually a very justifiable choice because the total number of games that need that feature can be counted on one hand, so you don’t lose much here.  The result is that I ended up using the last revision before that change for over a year until support for the OpenGL extension was finally added in Mesa (it was a bit slow, but it worked).  It was actually refreshing when I could update, because the new features were exciting :).

Anyway, the idea here is that for an emulator, it’s not just about doing the right things so games can be played, but also about making logical decisions on what should be emulated accurately and what shouldn’t in order to preserve decent performance.

The different ways to see this trade-off

This is where the difference between an “accurate” emulator and a “working” emulator comes from.  You see, you could look at this situation and interpret it in many ways, but the 2 most common I saw were to “just make the emulator work” or to care more about accuracy while maintaining reasonable performance.  There are emulators out there that don’t have a very high focus on accuracy (I saw some Nintendo 64 ones that went that way), and their goal is simply to make the game run fine at native speed.  This means you are emulating the bare minimum to get really high performance, and it means you are allowing yourself to “hack” some implementations so that they are not really accurate, but easier to run and seemingly handle most cases.

It would seem to work and…..it actually does in most cases, but the problem is that you are basically asking for trouble.  There are always going to be edge cases that spam one feature that wasn’t emulated correctly so hard that you end up getting the weirdest results ever until you decide to actually do it correctly.  Even worse, don’t think that emulator developers know all the cases used by each game; I mean, they know a lot more than most people, but they can’t know everything.  Do you really think they can for hundreds if not thousands of games?  This raises a problem: if you go with this way of doing the trade-off, you might already have lots of easily encounterable bugs without even being aware of them until one user points it out and it becomes a huge problem for many games.  Lastly, it will end up being a mess if you keep hacking your way through, because you will have to do so many hacks that you might even lock your code base into a state too rigid to add features to in the future.  Oh, and it won’t be very good for TAS or glitch hunting 😦

Now, the other way of seeing this has its cons too: since you do pay attention to accuracy, it will naturally be harder to run, BUT that doesn’t mean you need to be as precise as a laser; you can evaluate whether or not emulating a feature is worth the performance loss.  Also, it requires much more work, since emulating accurately is a lot harder than hacking the implementation until it works, which is another factor to consider when deciding to emulate something.  The pros, however, are very surprising: you end up fixing bugs you weren’t even aware of (it’s logical: if you do things correctly, you don’t need to know which games rely on it, they naturally get fixed), you open the door for extensibility a lot more and, lastly, it brings support for lots of games.  Also, TASers and glitch hunters will likely use the emulator and be very happy that it’s there, so that’s nice too.

Dolphin is the kind of emulator that went the accurate route, and having learned a lot about how the project progressed, I really need to give my opinion on this.

My take on this trade-off

I am aware that a lot of people don’t like how hard it is to run Dolphin, and there have been several controversies over several decisions that were made (the main one I’m thinking of is the synchronous audio switch).  To that, yeah, I agree it is kinda hard to run because of the CPU requirements, even though there have been a lot of performance improvements too.  However, having seen the progress reports, the previously infamous bugs and how the emulator is now compared to what it was before, I will forever swear that the accurate way of doing things is probably the best thing Dolphin offers, and it makes it, in my opinion, one of the best emulators out there.

I remember when I was on Dolphin 3.5 and I really need to say this: IT WAS AWFUL!  It was also bad performance-wise, but oh my god, I had so many problems while trying to play games.  It could be crashes, MANY HORRIBLE audio problems, the weirdest graphics glitches I’ve seen, etc…..  Without me even being aware of it, there were also a lot of things the emulator didn’t support at the time, like only having a basically approximated implementation of environment textures, not being able to boot VC games, several input devices, and the list goes on and on.  I guess this was maybe the residue of not having fully completed the switch away from just making stuff work, because Dolphin 2.0 didn’t use to focus on accuracy but rather on having plugins and kinda hacking its way through; that changed with 3.0, and 3.5 was a release made to give better performance while waiting for 4.0.

4.0, however, is the release that made me realise how awesome of a decision that was.  Not only did it solve many things (and improve performance a bit), it made me realise how the emulator was just going to keep getting better at an accelerating pace.  To better explain this, I need to bring up what happened with audio, because it apparently got a lot of discussion and it imo really represents what accuracy brings.

3.5 had asynchronous audio, and the progress reports detailing how it used to work are actually pretty hilarious: it was so much of a mess that even the comments in the code said things like “WTF is going on here?”.  Source: https://dolphin-emu.org/blog/2015/08/19/new-era-hle-audio/ and https://dolphin-emu.org/blog/2014/11/12/the-rise-of-hle-audio/ .  The gist of it is that it was wrong, inaccurate and just bad.  In practice, well, I am usually someone who pays a lot of attention to audio, and I seriously got super annoyed at how messy it was.  I had channels getting cut, random static, some tracks getting muted so I only heard the SFX; it was basically horrible, and it’s so sad because I was playing Paper Mario on the GameCube, which imo has the best game OST I’ve heard.  Sure, it was glitch hunting, but god, it’s comforting to hear it still.  The thing however is that even though it was what I could describe as a “garbled mess”, it was actually playing at the right speed even though the game was running at like 70%.  The alternative to solve this (low-level audio instead of high-level) would have been impossible; it’s the best audio you can get for accuracy, except my machine couldn’t take it performance-wise, so I was stuck with the most awful audio ever.

When I got 4.0 and it switched to synchronous audio, yes, it was VEEERRRRRYYY slow, like you would essentially hear the audio slowed down to half speed, but you know what?  I didn’t care, I FINALLY had listenable audio.  Hearing good audio slowed down is orders of magnitude better than the mess I was enduring before.  In some cases, I could even run the game fast enough to have very clean audio at almost the right speed…compared to it being wrong depending on how the dice rolled that day (the bugs were that random).  So I had improvable, clean, slow audio vs a random garbled mess at the right speed with no room for improvement; it’s honestly not a hard choice.  Today though, because performance has improved, it’s actually becoming harder to tell the difference between high-level audio and low-level audio (low-level is more accurate, but requires much more to run).  This just shows me how accurate and stable the audio is; getting THIS close to low-level audio while still being nice on performance was a really good decision.  Even if you preferred the garbled mess that was there before, wouldn’t you at least agree that BEING ABLE to avoid incorrect playback is better than it randomly and unavoidably being incorrect?

This is how you could describe the choices the Dolphin team made: they are hard choices, but at the end of the day, they are simply the better alternative for the progress of the project.

On that note, because of such improvements, we went from an emulator that kinda worked alright for playing games as long as you were okay with having issues, to one where I can freaking glitch hunt without worrying about the emulator, where I’m not even scared of using it for casual play (I even bought a 2004 IDE DVD drive recently for the sole purpose of being able to dump my discs), where I can use hardware devices like the actual GC controller through the Wii U adapter, use the Wii Menu and so much more.  It is insane how far this emulator has come; at this rate, I might not even use my GC or Wii U in Wii mode anymore, because although it will never be exact, it’s close enough that it starts to not even matter (unless you are speedrunning, in which case it matters).

So, Dolphin has taught me a lesson: even though it’s a very hard decision and it might not seem that beneficial, it is always, always, always better to have a high focus on accuracy, even for things that seem to have few use cases.  As long as it’s worth it for the performance cost and what it brings, as long as the question is at least asked, it’s always better than ignoring it and “just making it work”.

Conclusion

Despite the debates over whether you should use emulators or original hardware to play games, no matter which you prefer, emulators have their own uses that well justify using them.  However, it is very hard for an emulator to earn its users’ trust: not only does it have to deliver good enough performance in a completely incompatible environment, it also has to be accurate enough to offer more features, more game support and even earn the trust of glitch hunters.  Having followed and contributed to the Dolphin project, I can say that having this kind of emulator is very possible, but it has to go a long way before reaching that point.
