no more entropic (cursed) equipment!

I’ve been reworking a lot of equipment for the next Mysterious Space release. I’ve noticed that when a player finds equipment with entropic effects, they almost immediately consider it garbage not even worth thinking about. equipment spawned with an entropic effect IS given two extra “levels” worth of bonuses, but that rarely makes up for it; even if you DID find a shield with a higher regen rate than your current shield, if it came with a “+10% chance to take an extra 1 damage when hit”, that entropic effect just FEELS lame. “thanks, but I’ll stick with something that doesn’t try to dick me over.”

SO: I decided to remove ALL entropic effects from Mysterious Space. it was kind of painful to do: a lot of code went into supporting them, and most entropic effects were unique things I spent time not only coding, but coming up with: things I hoped might interact with other systems. for example, one entropic effect sometimes scrambled your sensors, but a couple OTHER effects gave you bonuses while your sensors were scrambled. it was my hope that some of these entropic interactions might be taken advantage of by players. in retrospect, the chances of getting JUST the right combination of effects were much too low; in the vast majority of cases, “sometimes scrambles your sensors” is just an annoying effect that’s not worth equipping.

a couple equipment and item effects needed to be adjusted now that entropic effects aren’t around (notably, the Advanced Technology that applies entropy to a random equipment), but I ALSO took this opportunity to tweak some stasis (blessed) effects!

stasis

my main goal in tweaking stasis effects was to make the effect something that can “stack”. it’s always been possible for a piece of equipment to get multiple levels of stasis (or, previously, entropy), but some stasis effects didn’t do anything when you had multiple levels of them. for example “clears all negative effects from your ship” and “fires bullets in all four directions while your sensors are scrambled” aren’t effects that can be made MORE powerful if the item gets a second level of stasis (well, in the “fires bullets…” example, maybe more bullets could be fired, but that seemed less-exciting, since we already have several other effects that fire rings of bullets).

in the end, some effects were entirely rebuilt from scratch, others were tweaked, and a couple – at the time of this writing – STILL don’t stack; I’ll finish them up at a later time.

“what programming language should I learn?”

I’ve been asked this before; here’s AN answer:

it depends, but lots of the ones whose names you’ve heard (C, Java, PHP, JavaScript, Python…) are probably good, and in some senses, “programming is programming”: learning ANY language will help you with ALL languages. every language has its own quirks, for sure, and some have unusual syntax, and/or are built for a specific purpose, but in the end, if you’re interested in programming, you’re going to learn a LOT of languages, and each has something to teach you. which you pick first is kind of up to you, and what you want to do.

here are my opinions of the various languages I know. please note: I do a lot of miscellaneous coding on the side, mostly game development, but my day job is as a web developer and manager, so a lot of my opinions come from a web development point of view.

C/C++

you may have heard of “C-style languages”. these are languages that share a lot of syntax with C (and C++). C and C++ have been around for a long time, and have had an enormous impact on many languages that have come since. despite their age, these languages have not been left to languish and rot; they’ve continued to develop over the decades, and have many, many applications today.

C and C++ are “strongly-typed” languages. this means that you have to tell the language, explicitly, what kind of data you’re working with as you pass it around. “this is an integer”; “this is a string of characters”; etc. a lot of languages are strongly typed, and a lot aren’t (so-called “loosely”-typed); whether or not a language is strongly or loosely typed influences a lot about how you use the language, and each has pros and cons.

in my opinion, the strongest “secret” pro of a strongly-typed language is that it reduces programmer error, and increases the helpfulness of your IDE (Visual Studio, Eclipse, etc). if your IDE knows “this is an array of strings”, then it can give you a LOT of help when working on that piece of data in code, offering up inline drop-downs of available operations you can take on that data, etc.
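
for example, here’s a tiny C# snippet (the names are made up for illustration) showing what “telling the language what kind of data you’re working with” actually buys you:

string[] names = { "ada", "grace" }; // explicitly an array of strings
int count = names.Length;            // the compiler (and your IDE) KNOW that .Length is an int
// names = 5; // refuses to compile: an int is not an array of strings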

one of the greatest weaknesses of a strongly-typed language that I encounter as a web developer is that JSON – one of the most-popular data formats on the internet – is loosely-typed, and strongly-typed languages have a tough time dealing with loosely-typed data.
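
to give you a taste of that friction, here’s a hedged C# sketch using System.Text.Json (the payload and property names are made up): you either declare a class for every shape of JSON you expect, or you poke at the data generically, like this:

using System.Text.Json;

var doc = JsonDocument.Parse("{\"level\": 3, \"name\": \"LittleDude\"}");
int level = doc.RootElement.GetProperty("level").GetInt32(); // verbose, but the types stay explicit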

a final note about C and C++: they are pretty “low-level” languages, in that the instructions you write (especially in C) translate more directly to what your CPU actually does than in most (all?) of the other languages on this list. this relative “closeness” to the CPU grants additional performance and power. some of this power can create more programming hazards (pointers come to mind), but for these reasons, C and C++ are more popular in the hardware coding world than other languages. operating systems are written in C and C++; not in JavaScript.

C# and Java

C# and Java (no relation to JavaScript; more on this later) are both strongly-typed C-style languages that went HAM on “object-oriented” programming.

whether you think object-oriented programming is OBVIOUSLY better or worse than functional programming is a debate you can have with many people on the internet 😛 both are used, both are incredibly useful, and the two can be used together. you’ll learn a lot by learning both approaches. C# provides many methods which support functional programming, but it is best-suited to object-oriented programming. I’m less familiar with Java, but I assume the same is true for it as well.
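
here’s a tiny, made-up taste of what C#’s functional side looks like (LINQ):

using System.Linq;

int[] damages = { 3, 7, 12 };
int total = damages.Where(d => d > 5).Sum(); // no loops, no mutation: just 19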

(“what exactly ARE object-oriented programming and functional programming?” is a question I’m not prepared to answer here. maybe another time.)

C#, as a Microsoft product, is very popular in Microsoft-y parts of the world, but it’s found use in other interesting places, including Unity (a popular game-making engine/IDE), where it is the preferred language.

if you’re interested in making Android apps, Java is the preferred language in that world. (Apple prefers “Objective-C”, which I know basically nothing about, so cannot comment on.)

finally: Java was designed from the start to be cross-platform, in a time when cross-platform was harder to do! as a result of this easy cross-platformness, desktop applications and games are sometimes written in Java, including OpenOffice and Minecraft. I also just learned, while writing this article, that parts of Twitter are written in Java! C# started Windows-only, but became cross-platform with Mono/Xamarin. (also worth noting: C, C++, and all the other languages on this list are also cross-platform.)

just a quick public service announcement before we move on: Java has NOTHING to do with JavaScript. I mention this because…

JavaScript

JavaScript started as a little language for making web sites COOLER. it was named JavaScript because, at the time, Java was insanely popular, and JavaScript’s creators wanted a piece of that popularity pie. JavaScript has since EXPLODED in popularity, not just on the web, but everywhere. it can run web sites entirely (both back-end and front-end), you can make mobile apps with it, you can make games with it (JavaScript is Unity’s second-preferred language, the most-recent versions of RPG Maker use JavaScript, and there are several JS/HTML5 frameworks for games), and you can even make desktop apps with it.

JavaScript is also well-known for JSON – JavaScript Object Notation – which is one of the most popular ways in which web applications and APIs communicate with one another. (XML is also up there, but it has long since fallen out of favor in modern web apps, despite Microsoft’s insistence on continuing to push XML (Microsoft… why are you always so behind on web tech?))

(note: JSON is a very small subset of JavaScript; you don’t have to learn JavaScript to learn JSON, and you won’t learn JavaScript by learning JSON. if you do ANY coding for the web, regardless of language, you’ll probably pick up JSON along the way. almost every modern language either supports JSON out-of-the-box, or there’s a super-popular third-party library available for it.)
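
for reference, here’s a made-up example of JSON; it’s just nested name/value data, and this is honestly most of what there is to it:

{
	"name": "LittleDude",
	"level": 3,
	"inventory": ["shield", "blaster"]
}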

JavaScript is C-style, loosely-typed, and object-oriented, but despite its C-like appearance, JavaScript has some core features and behaviors that will seem SUPER WEIRD to people who learned a more “traditional” C-style language first.

finally, the odd history of JavaScript, and the wide-spread use and many applications that JavaScript has today, mean there are a lot of KINDS of JavaScript, and many ways to do things, many of which are wrong. this increases the learning curve of the language. if you’re developing for browsers, you HAVE to stick to what’s called “ES5”, an older version of JavaScript. even within just ES5, there are a lot of bad ways to do things, but actually, you probably don’t want to program in plain ol’ ES5 anyway, and should instead learn a framework, like Angular (which runs on TypeScript, a strongly-typed superset of JavaScript created by Microsoft that “transpiles” into ES5 JavaScript so it can be used in browsers), or React/Redux (React being an invention of Facebook’s; both are typically written in ES6, which is then transpiled into ES5). and then there’s node.js, for running JavaScript server-side… and even though it’s terrible, there’s still a lot of jQuery in the world… and omg, mobile development and even desktop development in JavaScript are also things!

in short: JavaScript is a huge, complicated world, and it’s moving fast as people find new things to do with it.

but if you’re now feeling turned off and scared by JavaScript, don’t be! my recommendation would be to decide what you want to do with JavaScript, and learn the toolset that’s best for THAT job.

  • do you want to make mobile apps? check out PhoneGap, and Ionic.
  • do you want to make desktop apps? check out Electron.
  • do you want to make games? check out Phaser, PixiJS, or maybe even Unity (although really, for Unity, you should be using C#).
  • do you want to make websites? learn about MEAN stacks, SPAs (single-page applications), Angular, and React/Redux.
  • do you need to maintain decade-old websites, or WordPress sites? learn jQuery, and look for a better job 😛 (how’s that for a hot take?)

most (all?) of these frameworks have excellent “getting started” guides on their websites which will lead you through creating your first project with them.

PHP

PHP is still the undisputed king of server-side web development, though JavaScript and Python have been increasingly nibbling on that pie.

about three-quarters of all websites are running PHP, including Tumblr, Yahoo, Wikipedia, and Facebook (although it may be worth noting that Facebook engineers have said they’d love to get away from PHP).

PHP, like JavaScript, is C-style, loosely-typed, and object-oriented, but unlike JavaScript, PHP is more “traditional” in its approach, and in recent versions is offering more strongly-typed features to those who want to use them.

also like JavaScript, PHP has a long, and let’s say “rich” history, that can make parts of it confusing for new users: weird, old, inconsistent functions, and many features and ways of doing things you simply SHOULD NOT use. as with JavaScript, learning how to program PHP right can be hard.

to help learn PHP right, and since PHP’s main role today is for serving websites and web APIs, I’d recommend learning one of the popular web frameworks for PHP.

  • for a traditional website, check out Symfony, Laravel, or maybe Drupal.
  • for an API, there’s Symfony’s little brother Silex, Laravel’s little brother Lumen, and many other RESTful frameworks.

again, as with JavaScript, most (all?) of these frameworks have excellent “getting started” guides on their websites which will lead you through creating your first project with them.

I would strongly recommend AGAINST learning PHP via an archaic codebase such as WordPress or MediaWiki. these are incredibly popular projects you’ve probably heard of, but they are built on very old code that teaches bad practices rife with security flaws and performance problems. you will honestly be doing yourself a disservice by learning PHP this way, not only by picking up bad habits, but also by NOT learning the design patterns used by more-modern frameworks.

Python

one of my weaknesses as a programmer is in not knowing Python. but I’ll tell you what I can.

Python, unlike everything else on this list, is NOT a C-style language. this isn’t a bad thing, and it does not mean that the language is lacking! all it means is that the syntax is one-of-a-kind. this might sound like it’ll make it harder to learn, or make what you learn in Python less-translatable to other languages, but I would argue that that’s not the case:

  • as I mentioned earlier, JavaScript is a C-style language, but this LOOK can be deceiving, because it ACTS very different from other C-style languages in some super-important ways. similar syntax can sometimes make learning harder, rather than easier.
  • you’re going to learn TONS of syntaxes, anyway. JSON, regular expressions, XML and HTML, CSS, MySQL and SQL… it’s not a problem. your brain is up to the task.
  • Python is relatively unique in that indentation is REQUIRED. proper indentation is a great habit to learn early. it’s NOT required in C, C++, C#, Java, JavaScript, or PHP, but it should be. indentation increases the readability of your code, and is basically a requirement when doing any kind of collaborative work (like, for example, at a job).

as mentioned, Python IS used for web development (ex: Django), but the biggest reason to learn Python – in my mind – is to get into AI. if you’re interested in artificial intelligence, machine learning, and all that, Python is MILES ahead of JavaScript, PHP or C#. (I’ve heard that C/C++ and Java are also decent for AI.)

Others You May Have Heard Of

  • Go. I know nothing about Go. I hear people have taken a liking to it recently? maybe google up some info yourself, and see what people are saying about it. (no relation to the ancient board game of the same name.)
  • (My)SQL. SQL, and its variants, are languages that are used to talk to databases. you’re (almost?) never going to write a full program in SQL, but you MAY write bits of SQL as part of another program that talks to a database. it’s often preferable to use a third-party library to talk to a database, though (Dapper for C#, Doctrine or Eloquent for PHP…), so you can get by without knowing SQL. you will want to learn it eventually, but to start, I wouldn’t make it a priority, and instead focus on learning a bigger language while picking up SQL on the side.
  • HTML & CSS. using a web framework like Symfony, or even a JS framework like Angular, doesn’t mean you won’t use HTML and CSS! if you program for the web (or even for the desktop with something like Electron), you WILL use them! you will have to learn them. fortunately, HTML and CSS are relatively easy to pick up. chances are, you already know a bit of HTML, and I’m willing to bet you’ve encountered hex-coded colors (ex: #FF9900).
  • Ruby. people were going NUTS about Ruby for a hot minute. that minute has passed.
  • CoffeeScript. CoffeeScript – like TypeScript and ES6 – is not supported by browsers, but transpiles into ES5 JavaScript (the kind of JavaScript that browsers DO support). it was invented before ES6 was as popular as it is today, and it’s now much-preferred to write in ES6 rather than CoffeeScript.
  • JSP. it’s Java, so you might think it’s good, but it’s not. back when it was worth talking about, PHP was still better; today, JSP is old, and rarely used.
  • ASP. I like Microsoft – I like their OS, and their gaming console – but they just CAN’T seem to get it right with web-based technologies. their most-popular internet browser was IE6. IIS is miles behind Apache and nginx. ASP has always sucked; they tried again with ASP.NET Core, and failed again. I don’t know why they can’t figure out the internet, recently, but they can’t.
  • Visual Basic. it was slightly exciting in the late 90s/early 2000s. anyone still developing in Visual Basic today is probably doing weird, Access database stuff. you don’t want to be part of that world. there’s a better life for you.

pixel shaders in MonoGame: a tutorial of sorts for 2019

suppose, like me, you’re a crazy person who likes to make 2D games with MonoGame, instead of a super-popular and full-featured IDE like Unity >_>

hello, fellow crazy person!

it’s fun doing stuff yourself, right? it’s slower, and harder, but that’s fine: we’re LEARNING stuff. also, we get full control over things, like the input system (which Unity has its own ideas about, ideas we might find cumbersome for our particular project).

one of the hardest things – for me – has been using PIXEL SHADERS in MonoGame. pixel shaders have their own language, and are pretty low-level, which makes information on them harder to find than, say, how to get input from your gamepad (something EVERYONE wants to do, and which is pretty easy TO do). to top it off, MonoGame has a couple of its own little quirks in how it uses pixel shaders, and its toolset is VERY bad at providing error messages when your pixel shader code is bad! put all this together, and it can be really hard to get started using pixel shaders in MonoGame!

I hope this tutorial can help you out with that!

Assumptions

I’m going to assume that you know C#, and have followed a couple basic MonoGame tutorials. maybe you even know how to load sprites and move them around the screen! (if so, you’re a little ahead of the game, because the first thing I’m going to show you is…)

… Loading & Drawing Sprites!

first, you’re going to need at least a sprite, and a pixel shader. you should have a Content.mgcb file, which, when opened, yields something like this:

[screenshot: pixel shaders.png – the MonoGame Pipeline Tool with content loaded]

if you don’t have this file, or it won’t open in the MonoGame Pipeline Tool, you may have created a blank C# project, instead of using a MonoGame template. you CAN create & setup this file manually, but that’s a whole other weird thing. I may post about how to do that another time; for now, you’ll have to google up some help… or just start over again with a new C# project using a MonoGame template!

also: the screenshot above shows off some pixel shaders. don’t worry that you don’t have any yet. we’ll get to that!

ALRIGHT: let’s load up a sprite (assuming you have one called “Graphics/Characters/LittleDude.png”)

Texture2D littleDude = Content.Load<Texture2D>("Graphics/Characters/LittleDude");

remember that MonoGame doesn’t care about your assets’ file extensions. LittleDude here may be called “LittleDude.png” on the disk, but when asking MonoGame to load content files, you leave the file extension off!

to draw this sprite on the screen, you’d write something like…

SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp);

SpriteBatch.Draw(littleDude, new Vector2(150, 50), new Rectangle(0, 0, 20, 20), Color.White);

SpriteBatch.End();
  1. SpriteBatch.Begin(...) BEGINS the drawing process.
    • SpriteSortMode.Deferred means that the sprites will not be drawn until SpriteBatch.End() is called. this improves performance.
    • BlendState.AlphaBlend means that the alpha channel of the sprite should be respected, and partially-transparent pixels will be blended with whatever is underneath.
    • SamplerState.PointClamp means that if we stretch an image, we want a pixelated effect, rather than any kind of smoothing/blurring. I’ve got a hard-on for the pixel aesthetic, but you might want something different for your game.
  2. SpriteBatch.Draw(...) draws a sprite!
    • littleDude is the sprite to draw; the one we loaded earlier!
    • new Vector2(150, 50) represents where on the screen to draw the image.
    • new Rectangle(0, 0, 20, 20) represents a rectangle of pixels from within the littleDude image to draw. for example, LittleDude.png might be an 80×20 image, where each 20×20 square contains the dude facing in a different direction.
    • Color.White represents the TINT to apply to the sprite. using full white leaves the image unaltered.
  3. SpriteBatch.End() tells MonoGame that we’re done with this batch, and that it should draw everything out to the screen.

Why Even Shade Pixels?

pixel shaders make it easy to do some cool things. here’s a few examples:

  • let players customize the skin, hair, & clothes colors of their character’s sprite (with minimal code)
  • flash the character’s sprite all white, to indicate receiving damage, invincibility, or some other effect, WITHOUT hand-creating all-white versions of your sprites.
  • you could make the whole screen wobble & wave, get super-pixely, or invert a bunch of colors, or some other visual effect like that. (perhaps as part of some status effect?)
  • similarly, you could render everything in grayscale, perhaps to represent a scene from a memory.

some of these things you might imagine being able to code up yourself, but pixel shaders often let you do these things with WAY less code. also, pixel shaders are processed by the GPU, in parallel with whatever the CPU is doing. ALSO also, GPUs are CUSTOM-BUILT to churn through pixel shaders like they’re nothing. (more on this later!)

Writing a Pixel Shader

you’re probably pretty comfy making LittleDude.png, even if it’s just in MSPaint, but how do we make a pixel shader?

let’s start with a simple one: a shader that turns all the pixels in your sprite white (while still respecting alpha transparency!) this is a great way to indicate to the player that a character has received damage, and/or to show that a character is invulnerable. it’s also a good introduction to pixel shaders, so let’s get to it!

sampler inputTexture;

float4 MainPS(float2 textureCoordinates: TEXCOORD0): COLOR0
{
	float4 color = tex2D(inputTexture, textureCoordinates);
	color.rgb = 1.0f;
	return color;
}

technique Technique1
{
	pass Pass1
	{
		PixelShader = compile ps_3_0 MainPS();
		AlphaBlendEnable = TRUE;
		DestBlend = INVSRCALPHA;
		SrcBlend = SRCALPHA;
	}
};

look weird? don’t worry, we’ll go over it together, BUT: I do want to throw a little disclaimer out there: I know some things about pixel shaders, but I also DON’T know some things! this particular shader has several lines that I cannot fully explain to you; I found them online, and I know they work, but I’m NOT aware of the full range of options, etc. I’ll definitely call these out as we get to them; feel free to google up more info on your own!

good?

good!

let’s take it from the top.

  1. sampler inputTexture; at the top of any pixel shader, you can define all kinds of what look like global variables. these are actually PARAMETERS for the entire pixel shader. MonoGame provides a way for you to pass data into a pixel shader. MonoGame ALSO expects your pixel shader to declare a parameter of type “sampler” (here named “inputTexture”; the name itself isn’t magic), and it will set this parameter for you, without you needing to ask it to. this parameter, as you may have guessed, is a reference to the image which the shader is being applied to.
  2. float4 MainPS(...): COLOR0 is the definition of our pixel shader function. the name “MainPS” is not special; it can be whatever you want. the return type, however, must be of type “float4”. a “float4” is an exciting pixel shader data type. think of it as a struct or class with four member variables which are all floats. so far that sounds normal, however the way pixel shaders allow you to access and manipulate the member variables gets weird, as we’ll soon see! anyway: your pixel shader is expected to return a color – the new color for your pixel – and a color is simply four floats: r, g, b, and a. finally: “COLOR0”. this is one of those things I don’t really understand. can you leave it off entirely? I don’t even know. I haven’t tried. feel free to experiment 😛
  3. float2 textureCoordinates: TEXCOORD0 – the parameter for MainPS – is a “float2” representing where in the image we’re pulling pixel data from. this might start to give you an idea of how this method is going to be used, but I’ll just tell you: MainPS is going to be called for every pixel in your image. it’s given the coordinate of the pixel, and expected to return the color you’d like to use for that pixel. a pixel shader COULD simply read the color of the pixel at the given coordinate, and return it, but that wouldn’t be a very interesting pixel shader; we’ll be doing more-interesting things.
  4. float4 color = tex2D(inputTexture, textureCoordinates); alright: so here we’re grabbing the color of the image at the given coordinate. “tex2D” is a pixel shader built-in method that accepts an image, and a coordinate, and returns the color of the pixel at that location. we’re storing the result in a new “float4” called “color”, because we’d like to do something before returning it.
  5. color.rgb = 1.0f; okay, wtf is up with this syntax? you’re probably thinking “what? there’s a variable called ‘rgb’, and I’m assigning 1.0 to it. what’s the big deal?” the big deal is that this is not what’s happening. a “float4” has four properties, and they can be referred to individually as r, g, b, and a, OR as x, y, z, and w. for example, “color.y = 0.5f;” is valid code, and would set the second value (which happens to be green) to 0.5. you can also combine any of the properties you want, for example “color.ra = color.g” would set the red and alpha components to whatever the green component is. weird! it gets weirder, too, but we’ll get there later. for the purposes of this shader, we’re setting r, g, and b all to 1.0 (full white), and leaving alpha alone! (note: this syntax hints at the GPU’s power of parallelization that makes it so fast. GPUs are designed to operate on lists of complex structures all at once by using their HUGE internal bandwidth.)
  6. return color; we’ve altered the underlying color – it’s now full white – so we return it! done!

well, except we’re not QUITE done. that’s the logic of the pixel shader, but there’s also some setup we have to do. here’s what that looks like, again, for easy reference:

technique Technique1
{
	pass Pass1
	{
		PixelShader = compile ps_3_0 MainPS();
		AlphaBlendEnable = TRUE;
		DestBlend = INVSRCALPHA;
		SrcBlend = SRCALPHA;
	}
};

we have to give the GPU a little bit of metadata about how to use our MainPS function. this is where my personal knowledge really starts to break down, but I’ll explain it as best I can.

  • technique Technique1 ... pass Pass1 so: I do know that the name of the technique does not matter, nor does the name of the pass inside. and I know you can have multiple passes in a technique (and I know they can be individually referred to in C#/MonoGame). but the full, formal definition of a “technique” and a “pass” I am NOT aware of. for everything I’ve encountered so far, however, just copy-pasting this basic technique/pass block setup has worked fine.
  • PixelShader = compile ps_3_0 MainPS(); defines which function to actually call for this pixel shader. in this case, MainPS. again: we could have called it anything (“AllWhite” might have been a better name, for example…), so long as the names match. the compile ps_3_0 part tells the MonoGame pipeline how to compile this pixel shader. specifically, “ps_3_0” refers to pixel shader version 3.0. you may have noticed video cards bragging about supporting such-and-such version of pixel shader, or games listing some pixel shader version among their requirements. congratulations! your game now requires pixel shader 3.0! I am not aware of the differences between the various pixel shader versions, but I have been unable to compile a MonoGame project using ps_4_0, and I’ve seen very few examples online of ps_2_0 code, so I’m just sticking with ps_3_0.
  • AlphaBlendEnable = TRUE; DestBlend = INVSRCALPHA; SrcBlend = SRCALPHA; I’m going to cover all of these at once by saying “I have no clue what these really mean.” I mean, from the names and values, we can infer that they enable alpha blending, somehow. in fact, without these lines, a pixel shader will not pull in the alpha value for any of the pixels when using the tex2D function, causing all transparent areas to become solid after passing through the shader. but what other properties exist? why “INVSRCALPHA”? are there other useful values? do we really need all three? unfortunately, I have no idea 😛

hopefully this pixel shader is making a bit more sense to you. feel free to scroll back up and take another look at the code. in fact: feel free to copy this shader wholesale into your own game.

and if you’re not turned off by my lack of knowledge on all the details, let’s move on to actually APPLYING this pixel shader to a sprite!

Loading & Applying a Pixel Shader

ALRIGHT: let’s load that pixel shader up!

this is done very similarly to how you load sprites, or any other content, in MonoGame: add it to your Content.mgcb file, then load it in code with Content.Load:

Effect allWhite = Content.Load<Effect>("Shaders/AllWhite");

done!

applying shaders is a little weird, however. you might expect to do it while drawing an individual sprite, but actually, it’s done as part of a SpriteBatch.Begin call:

SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, effect: allWhite);
	
SpriteBatch.Draw(littleDude, new Vector2(150, 50), new Rectangle(0, 0, 20, 20), Color.White);
	
SpriteBatch.End();

P.S. if the effect: allWhite syntax looks strange to you… yeah, it’s a little strange 😛 it’s an uncommon – but useful! – bit of C# syntax called “named arguments”, which lets you skip tons of optional parameters, and pass in values only for the ones you actually care about. here’s a contrived example that illustrates this:

void CrazyMethod(int a, int b = 2, int c = 0, float? d = null, float e = 5);

...

CrazyMethod(10, d: 5);

CrazyMethod has a lot of optional parameters, and we only wanted to set a value for d. thanks to this helpful C# syntactical goodness, we’re able to easily do this!

back to pixel shaders:

because we have to define the effect as part of SpriteBatch.Begin, and because there is some overhead in starting and ending batches, we would LIKE to batch up as much sprite-drawing as possible! you might think “oh, I’ll write a helper method to draw sprites, which takes an Effect as a parameter, and wraps the Draw call in a SpriteBatch.Begin and SpriteBatch.End”, but you ABSOLUTELY DO NOT WANT TO DO THIS. imagine drawing a level out of 32×32 tiles… if your game runs at 1080p, you might have about 2,000 tiles on screen at a given time (1920/32 = 60 columns, 1080/32 ≈ 34 rows), and that’s just one layer of tiles! doing a SpriteBatch.Begin and SpriteBatch.End for each and every draw will slow things down A LOT.

conceptually, however, we’re going to be THINKING about drawing sprites one at a time, which means we’ll often want to write code this way, too. for example: every player and enemy on the screen MIGHT blink white to indicate damage, but you won’t know until you get to that particular character. also, you probably actually really care about the order your sprites are drawn in, for example a reasonable draw order for your game might be:

  1. background terrain
  2. enemies
  3. bullets
  4. players
  5. foreground terrain
  6. UI elements

maybe a player is blinking white (which also causes a bit of UI related to the player – their health bar – to also blink), and so is an enemy. to reduce SpriteBatch Begin/End calls, it’d in some senses be optimal to group up all three of these, but not only would you need to add a lot of code just to accomplish this grouping, you obviously do not want to sacrifice your draw order just to batch things up (placing the player in front of the foreground terrain, or the UI behind the foreground terrain, would be madness).

so we DON’T want to do tons of SpriteBatch Begin/End blocks, but we ALSO can’t group everything up all the time, AND we want to be able to just kinda’ draw sprites one at a time without thinking too hard about it… how do we accomplish this?

[image: sprites.png]

not to worry: we can write some code that helps us draw sprites in a conceptually-one-at-a-time way, without worrying about SpriteBatch Begin/End calls, AND with a touch of extra smarts to reduce SpriteBatch Begin/End calls. it won’t be 100% optimal, but unless your game goes absolutely HAM on the pixel shaders, a handful of extraneous SpriteBatch Begin/End blocks isn’t going to murder you (a handful is way less than 2,000+!)

here’s a simple implementation. (we’ll expand on it a little later, when we get to full-screen shaders, but don’t worry about it 😛 we’ll get there!)

Effect? currentEffect = null;
	
void StartDrawing()
{
	currentEffect = null;
	SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, effect: currentEffect);
}
	
void DrawSprite(Texture2D sprite, int x, int y, int spriteX, int spriteY, int spriteWidth, int spriteHeight, Effect? effect = null)
{
	if(currentEffect != effect)
	{
		currentEffect = effect;

		SpriteBatch.End();
		SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, effect: currentEffect);
	}
		
	SpriteBatch.Draw(sprite, new Vector2(x, y), new Rectangle(spriteX, spriteY, spriteWidth, spriteHeight), Color.White);
}
	
void FinishDrawing()
{
	SpriteBatch.End();
}

so what’s going on here? how do we use this?

the idea is to End and Begin a new SpriteBatch only when you need to switch shaders. for example, when you’re drawing all the background terrain tiles in a level, you may not change pixel shaders at all. if we use the above code (I’ll show an example of that shortly) in such a situation, it will correctly never End and Begin a second SpriteBatch. at the same time, when we switch to drawing enemies and things, it might get a little mixier, but the above code will handle this as well, ending and starting new sprite batches as pixel shaders change. you may end up with a few extraneous batches in the end, but most games are going to have way fewer enemies on-screen than tiles, anyway.

an example use of the above helper methods would look something like this (the following code is incomplete):

void DrawScene()
{
	StartDrawing();
		
	for(int y = 0; y < levelHeight; y++)
	{
		for(int x = 0; x < levelWidth; x++)
		{
			DrawSprite(levelTileSpriteSheet, x * 32, y * 32, XXXX, YYYY, 32, 32);
			// ^ figure out the sprite sheet offsets XXXX & YYYY based on your level's tilemap
		}
	}

	foreach(var enemy in enemies)
	{
		DrawSprite(...); // draw each enemy on the screen, according to whatever logic they need
	}
		
	foreach(var bullet in bullets)
	{
		DrawSprite(...); // draw each bullet on the screen, according to whatever logic THEY need
	}
		
	// draw the player, accounting for whether or not they're invulnerable:
	DrawSprite(littleGuy, littleGuyX, littleGuyY, 0, 0, 20, 20, littleGuyIsInvulnerable ? allWhite : null);
		
	// maybe draw some UI stuff here...
		
	FinishDrawing();
}

the DrawSprite helper method does the hard work of tracking what the currently-applied “Effect” (pixel shader) is, and only ending and beginning a new sprite batch when that changes. this frees us from worrying about these details: we can simply draw sprites with whatever shader we want, as we want.

Pixel Shader Parameters (and a Full-Screen Shader!)

a while ago, I mentioned that pixel shaders can take PARAMETERS. let’s see how to do that with a shader that over-pixelates your image to varying degrees.

by “over-pixelate”, I mean that every square of 4 (or 9, or 16…) pixels will take on the color of a single pixel within that square. this could be used as part of a pixelating transition (ex: start a scene by applying crazy-high pixelation, and then reduce the pixelation until you reveal the true image), or perhaps as part of a damage visual effect (briefly pixelate the screen when the player takes damage), etc.

here’s the shader code:

sampler inputTexture;
int pixelation;

float4 MainPS(float2 originalUV: TEXCOORD0): COLOR0
{
	// my game runs at 960x540; change to reflect the resolution YOUR game runs at
	originalUV *= float2(960, 540);
		
	float2 newUV;
	newUV.x = round(originalUV.x / pixelation) * pixelation;
	newUV.y = round(originalUV.y / pixelation) * pixelation;
		
	// again: change this to match your screen's resolution
	newUV /= float2(960, 540);
		
	return tex2D(inputTexture, newUV);
}

technique Technique1
{
	pass Pass1
	{
		PixelShader = compile ps_3_0 MainPS();
	}
};

there’s some interesting things going on here. let’s break it down:

  1. int pixelation; represents a parameter for the shader. we haven’t talked about how to use it yet, but don’t worry: we’ll get there soon!
  2. float4 MainPS(float2 originalUV: TEXCOORD0): COLOR0 is our pixel shader function declaration again. here, I’ve called the texture coordinate “originalUV”, instead of “textureCoordinates”. the name is not important; honestly, the only reason I’m using “originalUV” here, is because this pixel shader is a modification of another I found online, and in that original shader, the parameter was called “originalUV”, and I don’t really care what that parameter is called. (as long as it’s not some garbage abbreviation like “origTexCoor”; P.S. graphics people really like to call X and Y coordinates U and V for some reason. I don’t know why. you may have seen things like “UV mapping” in 3D modeling programs, for example. I believe Unity uses UV as well. whatever.)
  3. originalUV *= float2(960, 540) just like you can assign to multiple properties at once (remember color.rgb = 1.0?), you can perform all kinds of other math on multiple properties, as well! this line multiplies the first part of originalUV (x) by the first part of float2(960, 540), which is 960, and the second part (y) by the second part (540). why do we do this step? something I haven’t mentioned before is that texture coordinates always range from 0.0-1.0, no matter the dimensions of the texture. if your texture is 1250×10, then texture coordinates 0.5,0.5 refer to pixel 625,5 (or maybe 624,4 – whatever). but we’re dealing with pixelation here, so we really want to think of our texture in terms of its pixels. multiplying this 0.0-1.0 coordinate value by the dimensions of the source image – which this shader assumes is 960×540 (the size of my game’s screen) – turns the value into a pixel value that we can work with.
  4. then we do the pixelation magic! divided by the pixelation factor, round, and multiply by the pixelation factor. if you haven’t seen math like this before, it’s easier to think about it one coordinate at a time, for example, suppose the pixelation is “3”, meaning every 3 pixels in a line should be the same color. now think of what happens if we take pixels 11, 12, and 13 and run them through this math. first, we divide by 3, and get 3.66, 4, and 4.33. rounding all of those values yields 4. multiply by 3 again, and we’re at pixel 12, for all three pixels. when we use this new value to do a pixel lookup, it means that pixels 11, 12, and 13 will all use the color from pixel 12! similarly, pixels 14, 15, and 16 will all become 15. now, if we do the same on a second axis, we get 3×3 squares where all pixels of that square are the color of the center pixel from the original image!
  5. divide by float2(960, 540) to scale back to the 0.0-1.0 range that pixel shaders expect
  6. use tex2D to return the color of the texture at the divided, rounded, and multiplied location, which finally achieves the pixelation effect!

[image: pixelation.png – the pixelation shader applied to a scene]

the “technique” and “pass” stuff you’ve seen before, so I won’t go over it again; however, note that THIS time all that alpha stuff has been left out. why? because I’m intending to use this shader on the WHOLE screen, where there’s nothing underneath to blend with. I’m not sure how much extra work it adds to the GPU to ask it to think about alpha blending, but since I know I’m not going to need it, I may as well spare the GPU the trouble of thinking about it.

so now let’s see how to tell this shader what “pixelation” value you want. this is a shader parameter, and MonoGame makes it pretty easy to use these.

suppose you loaded this shader in this way:

Effect pixelationShader = Content.Load<Effect>("Shaders/Pixelate");

to set a parameter:

pixelationShader.Parameters["pixelation"].SetValue(3);

now apply the shader as before:

DrawSprite(littleGuy, littleGuyX, littleGuyY, 0, 0, 20, 20, pixelationShader);

but wait: weren’t we going to pixelate the WHOLE display with this shader? the above code would only pixelate littleGuy!

for full-screen shaders, you’ll need to change how you draw. previously, we did this:

StartDrawing();
	
// draw a ton of sprites, maybe with effects
	
FinishDrawing();

we’d LIKE to be able to pass a shader into FinishDrawing, to tell it “hey: finish drawing, but also, do some full-screen pixel shader”. something like:

StartDrawing();
	
// draw a ton of sprites, maybe with effects

// apply the pixelationShader, using a pixelationFactor variable from code:
if(pixelationFactor > 1)
{
	pixelationShader.Parameters["pixelation"].SetValue(pixelationFactor);
	
	FinishDrawing(pixelationShader);
}
else
{
	FinishDrawing();
}

however, FinishDrawing doesn’t support an optional shader argument. also, I wasn’t lying earlier when I said that pixel shaders have to be passed in to SpriteBatch.Begin calls. they really do. so how is passing a shader into FinishDrawing supposed to help anything?

the answer is RENDER TARGETS.

Render Targets?!

render targets.

a render target is an object you can ask MonoGame to draw to, instead of drawing to the screen. it’s a special bit of memory typically kept in the GPU to make things as speedy as possible. once you’ve drawn there, you can then draw the entire render target to the screen. this additional draw will give us an opportunity to do a SpriteBatch.Begin, which is where we’ll pass in a shader, causing it to be applied to the entire screen!

you can use render targets to do other cool things, too. for example, in a split-screen co-op game, it makes a lot of sense to give each player a render target that’s half the size of the physical screen, draw each player’s view to their individual render target, then draw each render target to a different place on the physical screen.

minimaps are another potential use for render targets.

anyway, here’s an updated copy of the helper methods (StartDrawing, FinishDrawing, etc) from above, this time with a render target, and a FinishDrawing method that accepts a pixel shader.

Effect? currentEffect = null;
RenderTarget2D renderTarget;
	
// you'll need to call this once, before you start drawing!
void Initialize()
{
	// my game happens to run at 960x540; yours may run at a different resolution! change this accordingly:
	renderTarget = new RenderTarget2D(GraphicsDevice, 960, 540, false, GraphicsDevice.PresentationParameters.BackBufferFormat, DepthFormat.Depth24);
}
	
void StartDrawing()
{
	currentEffect = null;
		
	// draw to the renderTarget, instead of to the screen:
	GraphicsDevice.SetRenderTarget(renderTarget);
		
	SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, effect: currentEffect);
}
	
void DrawSprite(Texture2D sprite, int x, int y, int spriteX, int spriteY, int spriteWidth, int spriteHeight, Effect? effect = null)
{
	if(currentEffect != effect)
	{
		currentEffect = effect;

		SpriteBatch.End();
		SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, effect: currentEffect);
	}
		
	SpriteBatch.Draw(sprite, new Vector2(x, y), new Rectangle(spriteX, spriteY, spriteWidth, spriteHeight), Color.White);
}
	
void FinishDrawing(Effect? fullScreenShader = null)
{
	SpriteBatch.End();
		
	// no more render target; we'll now draw to the screen!
	GraphicsDevice.SetRenderTarget(null);
		
	SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, effect: fullScreenShader);
		
	// again: change 960x540 to match your resolution. consider keeping the resolution values in a couple "const"s somewhere, so you can easily change them later, if you want.
	SpriteBatch.Draw(renderTarget, new Vector2(0, 0), new Rectangle(0, 0, 960, 540), Color.White);
		
	SpriteBatch.End();
}

you can see that we now track a new object: a RenderTarget2D. we have to initialize this thing before we can use it; that’s what the new Initialize method does. (make sure to call Initialize – just once – before you start drawing anything!)

this new StartDrawing method now instructs MonoGame to draw everything to the renderTarget, instead of the default location (the screen). FinishDrawing has been updated to then draw renderTarget to the screen, applying the given pixel shader (if any).

the SpriteBatch.Draw call inside FinishDrawing is exactly like the call we make for drawing any other sprite, but we ask it to draw from our renderTarget instead of from our littleGuy (or whatever other) sprite. in fact, SpriteBatch.Draw is capable of drawing from Texture2D objects, RenderTarget2D objects, and a few others as well.

you can see how this starts to allow you to easily make a split-screen co-op game, too! if you created TWO RenderTarget2D objects whose widths were half the screen (480, in this case), you could put them side-by-side in the FinishDrawing method with something like:

// assume renderTarget is now an array or list of RenderTarget2D objects, each 480x540 in size:
SpriteBatch.Draw(renderTarget[0], new Vector2(0, 0), new Rectangle(0, 0, 480, 540), Color.White);
SpriteBatch.Draw(renderTarget[1], new Vector2(480, 0), new Rectangle(0, 0, 480, 540), Color.White);

Other Shader Possibilities: Character Customization

a while ago I mentioned that pixel shaders could be useful for making customizable character sprites.

here’s the idea:

  1. draw a grayscale person sprite in your favorite drawing program
    • make sure that all the hair pixels are all the same shade of gray
    • make sure that all the shirt pixels are all the same different shade of gray
    • etc, for whatever parts of the sprite you want to be customizable
  2. write a pixel shader that has some color parameters – ex: “float4 hairColor;” “float4 shirtColor;” etc – and replaces the grays in the character sprite with the colors passed in

pixel shaders allow “if” statements, so it’s pretty easy to read the value of each pixel in the texture, and replace it with a passed-in color, ex:

float4 color = tex2D(inputTexture, textureCoordinates);

// tex2D returns channels in the 0.0-1.0 range, so compare against your gray
// values divided by 255 (if precision bites you, compare with a small tolerance):
if(color.r == 10.0 / 255.0) color.rgb = hairColor.rgb;
else if(color.r == 30.0 / 255.0) color.rgb = shirtColor.rgb;
// etc.

return color;

in this way, you can let your players choose any colors they want for hair, clothes, skin, etc, and it all works with a single sprite and a single pixel shader.

I haven’t written this particular shader myself, so I can’t give you more-complete code, but it should be fairly easy to figure out, and examples of this kind of shader can be found online.
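
the C# side, though, is just the parameter-setting we’ve already seen. assuming a shader loaded as “characterShader”, with “hairColor” and “shirtColor” parameters (all hypothetical names), it’d look something like:

characterShader.Parameters["hairColor"].SetValue(new Vector4(0.8f, 0.6f, 0.2f, 1.0f)); // blonde-ish hair
characterShader.Parameters["shirtColor"].SetValue(new Vector4(0.2f, 0.3f, 0.9f, 1.0f)); // blue shirt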

One More Pixel Shader For You: Grayscale

this one I HAVE written, so I can give it to you.

possible uses include: grayscale a single sprite (zombie player!) or the entire screen (flashback!)

sampler inputTexture;

float4 MainPS(float2 textureCoordinates: TEXCOORD0): COLOR0
{
	float4 color = tex2D(inputTexture, textureCoordinates);
		
	// does this look weird? more on this later:
	color.rgb = color.r * 0.2126 + color.g * 0.7152 + color.b * 0.0722;
		
	return color;
}

technique Technique1
{
	pass Pass1
	{
		AlphaBlendEnable = TRUE;
		DestBlend = INVSRCALPHA;
		SrcBlend = SRCALPHA;
		PixelShader = compile ps_3_0 MainPS();
	}
};

this is very similar to the AllWhite shader from before; so similar, I’m not going to step through everything it does. HOWEVER: what’s up with that weird grayscale math? 0.2126 and all that…

the intuitive answer to “how do I turn RGB into gray” would be to simply average the red, green, and blue components; ex: (color.r + color.g + color.b) / 3. however, human eyes are not equally sensitive to all wavelengths of light – we perceive green as much brighter than blue, for example – so a straight average produces grays that look off. we need to do a little extra math to grayscaleify things in a way that matches what human eyes and brains expect.

you can find this “red * 0.2126, etc” formula, and an explanation about its exact values, on various places around the internet. here’s one such place: https://en.wikipedia.org/wiki/Grayscale
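
as a usage example: combining this shader with the FinishDrawing helper from earlier gives you the full-screen flashback effect. (I’m assuming a content path of “Shaders/Grayscale” here; use whatever you actually named yours.)

Effect grayscale = Content.Load<Effect>("Shaders/Grayscale");

...

StartDrawing();

// draw the whole scene, as usual...

FinishDrawing(grayscale); // everything drawn this frame comes out gray!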

And That’s It!

I hope this tutorial has been helpful for you! I had a lot of trouble finding information on how to get pixel shaders working in MonoGame. hopefully this article can be useful to some people, including my future self!

unfortunately, I cannot offer timely support on anything written here, but if you spot any errors, or have any questions, definitely send them my way! I’ll read them (eventually!) and update this article if needed!

audio convolvers for underwater sound effects (WIP)

today I played around with adding audio convolvers to Mysterious Space.

what are these strange “audio convolver” things? they’re some special algorithms that let you mutate sounds in interesting ways, in real time, to accomplish all kinds of neat effects.

I first encountered them when working on a game in RPG Maker MV. I wanted to add an echo effect to the sounds made by the player, when the player was in a cave. making echo-y versions of the various sound effects (walking, picking up coins, starting fights…), and then choosing which to play based on location, would be time-consuming, error-prone, and difficult to scale up (every new sound added later would require that much more work). further, I was pretty positive that better solutions existed, I just didn’t know what they were! so I started to look around online for how to accomplish the effect I was looking for.

the answer, it turns out, is audio convolution! in particular, a technique where you create an “Impulse Response” by recording a sharp sound (like a clap) made in a space, and then using MAGIC to combine that Impulse Response with whatever sound you want. the result is a new sound that sounds like it’s being played in the location that your Impulse Response was recorded.
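
the MAGIC, at its core, is just multiplying and summing. here’s a naive C# sketch of the operation – nothing like the fast FFT-based implementations real convolvers use, and the names are mine – just to show the shape of the idea:

// every input sample triggers a scaled copy of the impulse response;
// summing all of those copies produces the "played in that space" sound:
float[] Convolve(float[] sound, float[] impulseResponse)
{
	var output = new float[sound.Length + impulseResponse.Length - 1];
	for (int i = 0; i < sound.Length; i++)
		for (int j = 0; j < impulseResponse.Length; j++)
			output[i + j] += sound[i] * impulseResponse[j];
	return output;
}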

you can create an Impulse Response in a large room, a narrow hallway, an echo-y bathroom, wrapped in thick cloth, or whatever you want, and use audio convolution to play back your existing sounds so that they sound like they were recorded in those places/environments.

how this mathematical wizardry works, I don’t know, but happily I don’t HAVE to know (and neither do you!) RPG Maker MV runs on JavaScript, and – to my surprise – vanilla JS has support for audio convolution built in (the Web Audio API’s ConvolverNode)! many other platforms have support for this as well, including FMOD.

FMOD is a library you can use for playing sounds. I use it to handle all the sound and music in Mysterious Space. MonoGame (the library I use to do graphics drawing, input handling, and tons of other stuff) has audio capabilities as well, however it does not support as many audio file formats, and I could never get it to loop sounds seamlessly (important for background music!) (I’m also skeptical that MonoGame supports audio convolutions, but I haven’t looked into it.)

FMOD takes a bit of work to get set up (it’s a very “low-level” library: it provides advanced features, but rarely wraps them up in convenient-to-use ways), but it offers a lot of power, including the power of audio convolvers.
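
for reference, here’s roughly what wiring up FMOD’s built-in convolution reverb DSP looks like from the C# wrapper. consider this a hedged sketch: createDSPByType, setParameterData, and addDSP are real FMOD calls, but the exact impulse response data format (and the parameter index for it) is something you should confirm against the FMOD docs for your version:

FMOD.DSP convolver;
fmodSystem.createDSPByType(FMOD.DSP_TYPE.CONVOLUTIONREVERB, out convolver);

// the impulse response goes in as raw sample data; index 0 is the IR
// parameter in the versions I've seen, but check your fmod_dsp docs:
convolver.setParameterData(0, impulseResponseData);

// attach the DSP to the channel group your underwater sounds play through:
underwaterChannelGroup.addDSP(0, convolver);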

so I spent a bit of time today figuring out how to get audio convolvers going in Mysterious Space using FMOD, but ran into a problem. not a problem with FMOD, but with the design of Mysterious Space itself!

the effect I want to accomplish with audio convolvers in Mysterious Space is to add an underwater effect to all sounds while your ship is underwater. the problem I’ve run into, however, is that Mysterious Space can be played in local, split-screen co-op mode. up to four screens. why is this a problem? because SOME of those screens might be underwater, while some are not! and I’ve never had a need to play sounds differently for different players before, so Mysterious Space is currently set up to play all sounds through a single, master, sound-player. that has to be changed, or else all sounds played everywhere will have to be subjected to the same audio convolutions.

so I modified the sound-player to have multiple channels – up to five (one per player, plus a player-independent one for music) – and made it require that when you ask it to play a sound, you also specify which channel to play that sound on! this way, each player can grab their own channel, and each channel can have different audio convolvers going on.
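
in code terms, the idea looks something like this (hypothetical names; the real sound-player is more involved):

using System.Collections.Generic;

enum SoundChannel { Player1, Player2, Player3, Player4, Music }

class SoundPlayer
{
	// one FMOD ChannelGroup per channel, so each channel can carry its own
	// DSP chain (like an underwater convolver for just one player):
	Dictionary<SoundChannel, FMOD.ChannelGroup> groups = new Dictionary<SoundChannel, FMOD.ChannelGroup>();

	public void Play(FMOD.System fmodSystem, FMOD.Sound sound, SoundChannel channel)
	{
		// playing a sound on a specific group means that group's effects apply:
		fmodSystem.playSound(sound, groups[channel], false, out FMOD.Channel _);
	}
}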

BUT WAIT: that works really easily for sounds which are clearly triggered by an individual player, like menu cursor bleeps, but even shooting a weapon gets complicated… what if I shoot a weapon while I’m underwater, and you – a co-op player – are nearby, but ABOVE water? how do we play that weapon-fire noise through a single speaker system? maybe it would be good enough to resolve this by saying “if player 1 shot it, then player 1 plays the sound according to whatever rules, and player 2 doesn’t play the sound at all”, but what about enemies? what if I’m underwater, and you’re above water, and an enemy that’s underwater shoots a weapon, and an enemy that’s above water shoots a weapon…

two possibilities come to mind:

  1. whichever player is closest to the source of the sound, THAT player is responsible for playing the sound
  2. all players play the sound according to their rules, and we use some additional magic to average the way they each want to play it

I’m thinking #2 is both better, and probably supported by FMOD (even if I don’t quite know how). but I don’t really know. this will require further investigation.

problems like this make me a little sad that I’m not using Unity to make Mysterious Space, where these kinds of problems have already been solved. in Unity, you’d simply attach a “Listener” to each camera, and that’s it! Unity has already figured out what to do when a single sound is heard by multiple “Listeners”. unfortunately, I didn’t know Unity when I started Mysterious Space, and I’m not about to try to rewrite/translate the game into Unity >_> (I am writing Mysterious Space in C# already, which is great, but split-screen logic, input-handling, and SO much core stuff would have to be totally redone…)

and anyway, these are interesting problems to solve. I don’t mind having to figure them out myself, even if it means things take a little longer! 🙂

I haven’t solved the “split-screen audio with convolvers” problem for Mysterious Space yet, and I’m a little tired of working on this particular problem for now, but I’ll get there! and in the meanwhile, there are plenty of other interesting problems to work on solving for Mysterious Space (speech controls??), so I’m going to pivot to one of those 😛

still working on that update; here’s an update on that update :P

I’m still working on that update I mentioned two weeks ago. I got a little carried away trying to figure out voice commands – ran into trouble getting FMOD & Syn.Speech talking to each other – so I’m going to pass on that for now. it’s something I’ve been interested in trying out for a while, though, so you better believe I’ll be back on it in the near future 😛

in lieu of that, I’ve been focusing on more quality-of-life improvements, and minor additions…

[screenshot: alien-cities.png – the four possible city graphics]

here’s a run-down of what HAS been worked on since I last posted:

  • I’ve added some mouse controls to the game, limited to use on certain menus. this is in preparation for a mouse-only play mode that I started a long time ago, but never finished. that mode won’t be ready this update, but this work lays some groundwork.
  • using the “causes random enemies to appear nearby” advanced technology now has an effect when on the sector map!
  • when a gamepad is unplugged during gameplay, you’re now taken to a screen where you can assign controllers to any players that lost theirs. you have the option to save & quit from this screen.
  • on the sector map, when pointing at a planet with a civilization/shop, the planet preview now shows a city skyline (the four possible city graphics are shown above).
  • there’s a few new outpost graphics, and a new request that outposts can have for you.

I’m not sure exactly when I’ll release the update, but I’m now focusing on fixing remaining bugs & polishing up after the big DirectX->OpenGL move, so the release should come relatively soon; maybe a little before Christmas; maybe a little after 🙂

a wild Mysterious Space update appears!

it’s not quite ready yet, but I’ve been working on an update for Mysterious Space!

whoa!

crazy!

a summary of what’s definitely coming

  • support for generic USB gamepads
  • fixed full screen lag that some devices were experiencing
  • better translation capability, and improved French translation
  • a new type of planet: Crystal!
  • various, smaller bug-fixes
  • much smaller game install size

the long version

behind the scenes, this update upgrades various 3rd-party libraries, the effects of which you mostly won’t see (more on “mostly” later :P). so why did I bother, if most of the effects won’t be seen? I want to make sure Mysterious Space continues to work on newer computers; by getting away from super-old libraries, I can (help) make sure that Mysterious Space doesn’t stop working when a Windows (or even Steam) update drops support for something that Mysterious Space happens to be using.

one of the most-significant updates was a switch from DirectX to OpenGL. happily, this was relatively easy, since Mysterious Space uses MonoGame under the hood (though I do have questions about MonoGame, in terms of software longevity, but that’s neither here nor there…). the switch to OpenGL does two major things that you might actually care about:

  1. allows the use of non-Xbox-compatible controllers. for example, I’m now able to play Mysterious Space with an N64-style USB controller – about the weirdest test case I could imagine 😛
  2. makes a cross-platform release of Mysterious Space more possible. (though there would still be a bit of work to do, and it’s not something I’m looking to do right now.)

the switch to OpenGL did NOT actually fix the full-screen lag issue; that was something bad in my full-screen logic (which I also changed, but the details there are probably not super-interesting :P)

finally, I want to mention the much-reduced game size in this release: this was accomplished by encoding the music as ogg files. previously, the music was stored as flac, because I’d had trouble getting ogg files to play reliably. later, for different reasons, I switched to using FMOD for sound and music playback (MonoGame’s player is (was?) very bad at looping music). it turns out FMOD handles ogg files flawlessly – I’d just never tried!
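for anyone curious, the FMOD side of that is delightfully small. a minimal sketch, using FMOD’s Core C# wrapper (“music.ogg” is a placeholder, and all error-checking is omitted):

```csharp
using System;

FMOD.Factory.System_Create(out FMOD.System fmodSystem);
fmodSystem.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

// CREATESTREAM decodes the ogg as it plays (no big up-front load);
// LOOP_NORMAL has FMOD do the seamless looping MonoGame struggled with
fmodSystem.createSound(
    "music.ogg",
    FMOD.MODE.CREATESTREAM | FMOD.MODE.LOOP_NORMAL,
    out FMOD.Sound music);

// a default ChannelGroup means "play on the master group"; you also need
// to call fmodSystem.update() once per frame to keep the mix going
fmodSystem.playSound(music, default, false, out FMOD.Channel _);
```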

the future

some other changes I am gearing up for, but which may or may not be included in this update:

  • a return to itch.io releases (near-term)
  • true pixelated graphics (long-term)
  • voice commands?! (medium term?)

itch.io

Steam is more-prohibitive than itch.io in many ways, and with few added benefits (in my opinion). itch.io has also improved since I first released Mysterious Space: they have a desktop client, and the “butler” deployment tool for developers. I’m definitely not abandoning Steam, but it makes a lot of sense to ALSO release on itch.io.

pixelated graphics

Mysterious Space uses a kind of “fake” pixelated look: the graphics themselves are double-size. you can see this in-game: sometimes graphics seem to be half-a-pixel off from each other. this is because they’re actually 1 pixel off, but all the graphics are drawn at double size! this “fake” pixelization has some advantages and disadvantages:

  • in theory, there’s a performance disadvantage to using images which are 4x as large as they need to be, but Mysterious Space’s demands on your graphics card are actually quite low, so there’s little to no practical disadvantage here.
  • if someone wanted to “reskin” Mysterious Space with higher-res graphics, they totally could, and could easily achieve 4x the detail of the built-in graphics by doing so!
  • Mysterious Space runs natively at 960×540. this resolution was very specifically picked so that up to four screens can fit in a 1920×1080 area (for multi-player), or the game can be drawn at double size to fill 1920×1080 (for single player). however, this 960×540 resolution includes graphics which are ALREADY double-sized, so even though it SEEMS like the game should be able to run at 480×270, it can’t. this limits the scaling options of the game. for example: if you like playing windowed, and 960×540 seems too small, you’re kinda’ SOL, because the only other option is going 1920×1080; but if the game actually ran at 480×270, then 3x scaling to a 1440×810 window would be possible. more fine-grained scaling would also help future-proof Mysterious Space as monitor resolutions change.

the advantages and disadvantages are relatively minor, especially given the amount of work that the change requires (I tried implementing it once, a couple years ago, and gave up, due to just how much work it turned out to be!) but it IS something that bothers me, and something I’d like to revisit.

some related work that I did was a change to how the game arranges and scales the viewports in a multi-player game, especially when playing full-screen. the game now tries a couple different arrangements, and picks whichever fits the viewports in the available area with the largest possible zoom level. it also adds padding between each viewport, if there’s space to do so. finally, when playing single-player, the game scales to fill the entire screen, even if that results in a non-integer scaling factor – unless an integer scaling factor is very close, in which case it prefers the integer factor, and places a small black border around the screen (see the sketch below).
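the “prefer an integer factor when it’s close” bit boils down to something like the following – PickScale and the 5% tolerance are illustrative, not the game’s actual code or numbers:

```csharp
using System;

// a sketch of "prefer integer scaling when it's close"; the 0.05 tolerance
// is a made-up illustrative number, not the game's actual value
static float PickScale(int screenW, int screenH, int gameW, int gameH)
{
    // the largest scale that fits the whole game area on screen
    float fill = Math.Min((float)screenW / gameW, (float)screenH / gameH);
    float snapped = (float)Math.Floor(fill);

    // if an integer factor gets within ~5% of filling the screen, take it
    // (crisp pixels, small black border); otherwise, fill the screen exactly
    return (snapped >= 1f && fill - snapped < 0.05f) ? snapped : fill;
}
```

so PickScale(1920, 1080, 960, 540) gives exactly 2, while PickScale(1600, 900, 960, 540) gives ~1.67 and just fills the window, since the nearest integer factor isn’t close.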

there is some additional work here I could do to improve things for super-wide monitors, or monitors with a vertical orientation, but until/unless I get Mysterious Space running natively at 480×270, I’m not super worried about it.

voice commands?!

yeah, so, the reasoning here is “why not? sounds fun” 😛 these would be a totally optional aspect of the game, of course, allowing you to perform some actions via voice command, ex:

  • “use red (alien artifact)” to use a red alien artifact
  • “use accessory” to activate an accessory
  • maybe even “use tractor beam”?

of course, using your controller or gamepad is almost certainly faster; mainly, I’m just curious to try alternate methods of input, and see how it works! as a developer, voice input is not something I have much experience with, and this sounds like a fun way to experiment. and if it turns out to be dumb, and no one ends up using it, I’m cool with that 😛
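for the curious: I haven’t even picked a speech library yet, but to give a flavor of what command recognition looks like, here’s a tiny fixed-phrase grammar using .NET’s built-in, Windows-only System.Speech – a sketch, not necessarily what I’ll end up using, and HandleCommand is a made-up stand-in:

```csharp
using System.Speech.Recognition; // built into .NET on Windows; just for illustration

var recognizer = new SpeechRecognitionEngine();

// a tiny fixed-phrase grammar: recognition can only ever return one of these,
// which keeps false positives way down compared to free-form dictation
var commands = new Choices("use red", "use accessory", "use tractor beam");
recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

recognizer.SetInputToDefaultAudioDevice();
recognizer.SpeechRecognized += (s, e) => HandleCommand(e.Result.Text); // HandleCommand is hypothetical
recognizer.RecognizeAsync(RecognizeMode.Multiple); // keep listening; don't stop after one phrase
```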

Mysterious Space 0.10.1!

long time no update!

to make a long story short, I’ve been working on various other projects (mostly web-based, trying out a bunch of the bells and whistles that Amazon Web Services has to offer), but in the end, nothing I worked on has really stuck. the games I made haven’t quite been fun enough; certainly not as fun as Mysterious Space!

so I’ve jumped back on Mysterious Space – at least for now – and finished up a release that I’d left half-done. it features a new boss, a new tutorial, a new enemy, new equipment blessings, new particle effects, new firesticks, and probably one or two other new things that are slipping my mind…

check out the full changelog for more.

I don’t know how long I’m going to work on Mysterious Space this time, but I do have a few more minor bugs I’d like to fix, and there’s always a need for more enemies and minibosses, so all I can really say is: we’ll see!

I’ll keep you posted!

(buy Mysterious Space on Steam!)

Mysterious Space 0.9.5 release, and other dev news

I’ve finally released a new version of Mysterious Space – 0.9.5 – which introduces ENDLESS MODE. you can read about that here, but I thought I’d also take a moment to talk about my other projects, which are in varying stages of development.

Space Man (working title)

a Mega Man-like I started a few months ago. I made a TON of progress, but haven’t worked on it much at all this last month or so. I’ve given the source code to a couple friends, and they’ve been playing with it on and off, as time has permitted them. perhaps some collaboration will happen there, later, I don’t know. it’s something I put a lot of work into, and I’d hate to see it go to waste, but there’s a lot of other things I’m interested in working on, too, so… we’ll see…

Not PsyPets (working title)

in 2004 I started a browser-based game called PsyPets. I’ve probably rambled about it before, so I won’t say much about it again, except that it was created around the time that other browser games like Kingdom of Loathing and Gaia Online were starting up, which may give you some idea of the kind of game PsyPets is.

anyway, several years ago I gave the game to another developer, but it’s a game that’s always on some part of my mind. “Not PsyPets” is a mobile-first re-imagining of PsyPets that I’ve been poking at on and off. PsyPets is something which will always be special to me; “Not PsyPets” is definitely something I’d like to give more time to.

Game of Choices (working title)

this is a game I never expected to grab my attention in the way it has. which isn’t to say that it’s got me super-excited – it hasn’t, quite? – but something about it compels me to work on it for a week every couple of months.

it’s a text-only game that I guess you could say is inspired by Oregon Trail and simulationist roguelikes: you and a small group of characters travel in a straight line toward a goal, across a procedurally-generated fantasy world in which all kinds of things – mostly bad – might happen to you. this isn’t a new idea, of course, but it’s still fun to work on from time to time. and there’s something about playing a game in a DOS window… I dunno… I like it 😛

And That’s It

those are the bits of code and ideas that I’ve been working on this last year or so.

I’m definitely going to continue developing and supporting Mysterious Space, but I’m also always going to work on other little projects on the side. whether any of them are things you’ll see on itch.io, or Steam, or anywhere else, I don’t know; I’ll definitely let you know about any interesting progress I may make on them, though!

thanks for reading 🙂

Roguelike Radio 124: Shoot Em Ups

a month-ish ago, three makers of roguelike shoot ’em ups were interviewed on the Roguelike Radio podcast: me, James Whitehead (Really Big Sky), and Chris Park (Starward Rogue). that episode has finally been released!

we talk about what inspired us to make such strange games in the first place, thoughts on bullet hell, different types of in-game abilities and how they affect game-play, and other things.