#7 A sizable update (loads of stuff inside)


First of all, I'm going to be doing fewer of these "update" devlogs, because (1) I noticed they go into itch.io's public listing, and I really just want to share bare-bones notes with subscribers and anyone passing through, and (2) the last few times the updates introduced problems that would've required more updates to patch over.

The reason I'm sharing this update is twofold. (1) I've changed how XInput buttons are mapped, and I want feedback to find out if this solves some problems I got the impression people were having. (2) I had a really awesome idea I wasn't expecting that I want to share. It's pretty much the final leg of the edge-aliasing system I've written glowingly about in the project's description. There are some other things I will list off at the end. And I intend to add some notes about how to play, i.e. how Sword of Moonlight's controls differ from the PlayStation games.

First, about (2): this is really cool. Last year I added a second major component to the AA system. As far as I know this system is unique in the world: it does what games normally do, but with no image processing. AA generally takes up a good chunk of GPU time in today's games and makes them require expensive hardware, but SOM's system, which I invented and developed, does all of this, better, with no GPU time at all. So it's designed to work on any desktop system with a GPU.

This "second system" added AA in texture space, that is, between the pixels that are mapped onto the polygons. It's especially important for cutout techniques, like around blades of grass that are really just "textures" and not polygons. In fact, I think I first developed it for the compass you can see in my demo. The "first system," of course, handles the edges of polygons. I discovered it many years ago now, though I haven't been able to interest anyone in it. Both of these techniques have one shortcoming compared to the traditional methods: they need everything to be still to work. So you only get a nice, clean geometric image when you stop moving, or when the thing you're looking at stops moving. I thought this was maybe as far as it could go. I'm modest; I accepted that, you know, when things are moving you can't get a good look at them anyway. And that's that.

Except you really can get a pretty good look sometimes, especially in high-contrast cases, like a very bright thing on a dark background. There the shortcoming shows up as little artifacts, and they're easiest to notice when moving at a very gradual speed.

But, I had an idea! The leap of logic was that the contrast on the edges can be reasoned about efficiently and adjusted for. Part of how the system works is that it uses two frame buffers that receive the game's picture. They take turns every other frame, and the picture you see is really a half image of both of them. So you see an afterimage in every frame, and the current image is also a half image, so you only get a full image when the two are nearly identical.
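A minimal sketch of that alternating-buffer dissolve (the `Dissolve` struct and the tiny 4-pixel `Frame` are illustrations of mine, not SOM's actual code):

```cpp
#include <array>
#include <cstddef>

using Frame = std::array<float, 4>; // a tiny 4-pixel "frame" for illustration

struct Dissolve
{
    Frame buf[2] = {}; // the two frame buffers that take turns
    int   odd    = 0;  // which buffer receives the next rendered frame

    // Each new frame lands in whichever buffer's turn it is.
    void render(const Frame &f)
    {
        buf[odd] = f;
        odd = 1 - odd;
    }

    // The picture on screen: half of each buffer. Only when both frames
    // are nearly identical does this resolve to a full, clean image.
    Frame present() const
    {
        Frame out{};
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = 0.5f * buf[0][i] + 0.5f * buf[1][i];
        return out;
    }
};
```

When the camera holds still, both buffers converge and the half images add up to the full picture; while moving, each frame carries an afterimage of the previous one.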

Reasoning about how the images contrast is not so simple. This is because the other thing about how the system works, fundamentally, is that it manipulates the "jaggies" so that when they're superimposed (both in those dueling frames and in your eyes) they appear to be one unbroken line. That means that, depending on where the "jaggies" are, different pixels are lit up in different colors. So to reason about their contrast you need to know about groups of pixels. What's new is that each of the frames is now converted into a single "mipmap" (one each): the same image only down-scaled to half size, i.e. one quarter the area. In that image every pixel is the average of a 2-by-2 neighborhood of pixels taken from the original. There's hardware available for doing this, called mipmapping, but that's not what it's being used for in this system.
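That 2-by-2 averaging step can be sketched like so (a single-channel `halve` function of my own devising, assuming even dimensions, standing in for what the mipmap hardware produces):

```cpp
#include <vector>
#include <cstddef>

// Downscale a w-by-h image to half size by averaging each 2x2
// neighborhood -- i.e. what the first mipmap level would hold.
// Assumes w and h are even; one float channel per pixel.
std::vector<float> halve(const std::vector<float> &img,
                         std::size_t w, std::size_t h)
{
    std::vector<float> out((w / 2) * (h / 2));
    for (std::size_t y = 0; y < h / 2; ++y)
        for (std::size_t x = 0; x < w / 2; ++x)
        {
            // Sum the 2x2 block of source pixels, then average.
            float sum = img[(2 * y)     * w + 2 * x]
                      + img[(2 * y)     * w + 2 * x + 1]
                      + img[(2 * y + 1) * w + 2 * x]
                      + img[(2 * y + 1) * w + 2 * x + 1];
            out[y * (w / 2) + x] = sum / 4.0f;
        }
    return out;
}
```

Each half-size pixel now summarizes a whole neighborhood, which is exactly the "group of pixels" knowledge the contrast reasoning needs.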

Now, in the dissolve step, these half images can be compared (and contrasted), so the difference between them is a measure of contrast. The matching pixels are then blended with each other's mipmaps depending on that measure. What this achieves is artificially reduced contrast along moving edges. It's not anti-aliasing per se, but it masks the unwanted artifacts effectively. It applies globally to all pixels, functioning as a near-zero-cost (compared to image processing) addition to the other two techniques, and it eliminates the final weakness of the system, so that it can't be criticized except on the grounds that it's not as lavish as other systems.

It's a pure 3D system, whereas the others are 2D systems; they're just fudging pixels after the fact. This system doesn't have any of the usual weaknesses of AA and none of the costs. It can now do edge detection with one texture access. When I began work on SOM I consciously limited myself to one texture access, because I believe this will make it compatible with all kinds of computers, including my own, which are very modest. I'm only recently thinking of getting an expensive system to be able to work on VR for SOM soon.
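In pseudocode terms, the per-pixel blend might look something like the following. The weighting here is my own guess at the shape of the idea, not SOM's actual math; `mask_contrast`, its arguments, and the clamped-difference weight are all hypothetical:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of the contrast-masking blend: where the two
// half-size images disagree, pull the full-size pixel toward the other
// frame's neighborhood average, artificially lowering contrast along
// moving edges. Single float channel for simplicity.
float mask_contrast(float pixel,      // this frame's full-size pixel
                    float my_mip,     // this frame's half-size sample
                    float other_mip)  // other frame's half-size sample
{
    // Local contrast: how much the two down-scaled images disagree here,
    // clamped to [0, 1] so it can serve as a blend weight.
    float contrast = std::min(1.0f, std::fabs(my_mip - other_mip));

    // Blend toward the other frame's local average by that amount.
    return pixel + contrast * (other_mip - pixel);
}
```

Where the two frames agree the weight is zero and the pixel passes through untouched, so still scenes keep the full geometric sharpness of the underlying system.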

* This is the end of my AA journey. It's come full circle. I'm pleased. I think god's on SOM's side. I wish everyone else would join with us (https://www.patreon.com/swordofmoonlight)

* The new ZIP has this (in a very early form, since I implemented it in one afternoon today), and I have included 3 fully functioning monsters (headeater, kraken, and slime) that I've been working on lately. I want you to see how misbehaved they are. I think before I do a proper full/next demo I want to spend time on the clipper system for NPCs, since it's practically in the same state From Software left it in. It needs to be upgraded to be as good as the PC's clipper.

* One last thing: I've changed the mipmaps (there's that word again) to be generated with a so-called point filter. This is more thinking outside the box. I've had a lot of difficulty with the green caves texture. I got it to be the right color, but it was still mushy around the distance where the black fog breaks in. So I tried different things, since I want it to be true to the original King's Field II and not "muddy," to the extent that's possible. I'm still trying to solve these problems, but I realized one source of mud is mipmaps, because they average the colors, just like I described for the AA system, where that's desirable. But it makes the white chalk and green lichen colors of the caves a gray porridge. So instead of averaging them, I've changed it to just pick a color (to some degree pseudo-randomly) for each pixel in the mipmap, so the real colors aren't degraded. It's not great, but I think it's at least acceptable. In my system the first true mipmap is barely visible, blended in, so the difference is pretty minor. While I was working on this, I found out to my surprise that my Intel GPU blends mipmaps when magnifying textures. I don't know yet if this is normal practice; it seems counterintuitive. So I'm worried the demo may look different for me than it does for you. There's a system that enlarges the original textures off the game disc by making a new one that's four times as large in area. I've made it soften its corners so it's not just 2-by-2 pixels of the exact same smaller image. I wonder if everyone is seeing that image, or if you're seeing, like me, both that and the original images superimposed up close.
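The point-filter idea can be sketched as a variant of the averaging downsample: pick one of the four source pixels instead of blending them, so the mipmap only ever contains colors that actually exist in the original. The coordinate hash below is a placeholder of mine for whatever pseudo-random pick SOM actually uses:

```cpp
#include <vector>
#include <cstddef>

// Point-filtered downscale: each half-size pixel takes the value of ONE
// of its four source pixels (chosen by a cheap coordinate hash) rather
// than their average, so chalk-white and lichen-green never smear into
// gray porridge. Assumes even w and h; one float channel per pixel.
std::vector<float> halve_point(const std::vector<float> &img,
                               std::size_t w, std::size_t h)
{
    std::vector<float> out((w / 2) * (h / 2));
    for (std::size_t y = 0; y < h / 2; ++y)
        for (std::size_t x = 0; x < w / 2; ++x)
        {
            // Hash the destination coordinates to pick which of the
            // 4 source pixels survives (0..3 -> 2x2 offsets).
            std::size_t pick = (x * 73856093u ^ y * 19349663u) % 4;
            std::size_t sx = 2 * x + (pick & 1);
            std::size_t sy = 2 * y + (pick >> 1);
            out[y * (w / 2) + x] = img[sy * w + sx];
        }
    return out;
}
```

The trade-off is exactly as described: no new in-between colors are invented, at the cost of a noisier, less smooth mip level.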

* Almost forgot: the previous update fixed a problem with "object" activation. To do it I had to route it through a different subroutine, and I forgot that it wasn't set up to correctly handle two classes of objects, namely containers that don't play animations. Those are things like a barrel or a pile of bones. Because the timing was off, the title text would appear prematurely and incorrectly when crouching down to examine them.

Files

KING'S FIELD 25th PROJECT.zip 38 MB
Jan 30, 2021
