## Friday, August 30, 2013

### One pixel at a time

Over the course of this project, the way I've rendered images to the screen has changed rather drastically. Well, let me clarify: everything is blitted to the screen in exactly the same way ( OSystem::copyRectToScreen ); what has changed is how I get and modify the pixels that I pass to copyRectToScreen(). (Disclaimer: from past experience, we know that panorama images are stored transposed. I'm not really going to talk about that in this post, though you may see hints of it in the code examples.) So, a brief history:

In my first iteration, an image would be rendered to the screen as such:
1. Load the image from file into a pixel buffer.
2. Choose where to put the image.
3. Choose what portion of the image we want to render to the screen. We don't actually specify the height/width, just the (x, y) top-left corner.
4. Call renderSubRect(buffer, destinationPoint, Common::Point(200, 0)).
5. Create a subRect of the image by clipping the entire image width/height to the boundaries of the window and the boundaries of the image size.
6. If we're in the Panorama or Tilt RenderState, warp the pixels of the subRect. (See the post about the panorama system.)
7. Render the final pixels to the screen using OSystem::copyRectToScreen().
8. If we're rendering a background image (a boolean passed in the arguments), check whether the dimensions of the subRect completely fill the window boundaries. If they don't, then we need to wrap the image so it seems continuous.
9. If we need to wrap, calculate a wrappedSubRect and a wrappedDestination point from the subRect dimensions and the window dimensions.
10. Call renderSubRect(buffer, wrappedDestination, wrappedSubRect).
At first glance, this seems like it would work well; however, it had some major flaws. The biggest problem stemmed from the Z-Vision technology.

To understand why, let's review how pixel warping works:
1. We use math to create a table of (x, y) offsets.
2. For each pixel in the subRect:
1. Look up the offsets for the corresponding (x, y) position
2. Add those offsets to the actual coordinates
3. Look up the pixel color at the new coordinates
4. Write that pixel color to the destination buffer at the original coordinates
Let's give a specific example:
1. We want to render a pixel located at (183, 91)
2. We go to the RenderTable and look up the offsets at location (183, 91)
3. Add (52, 13) to (183, 91) to get (235, 104)
4. Look up the pixel color at (235, 104). In this example, the color is FFFC00 (Yellow).
5. Write the color FFFC00 to (183, 91) in the destination buffer
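The worked example above can be sketched as a tiny standalone function (the names `Offset` and `warpPixel` are mine for illustration, not the engine's):

```cpp
#include <cstdint>
#include <vector>

// One entry of the offset lookup table: a (dx, dy) pair per pixel
struct Offset {
	int16_t dx, dy;
};

// Warp one pixel: read the offset stored for (x, y), fetch the source color
// at the offset position, and return it to be written back at (x, y)
uint16_t warpPixel(const std::vector<Offset> &table,
                   const std::vector<uint16_t> &source,
                   int width, int x, int y) {
	const Offset &o = table[y * width + x];
	int srcX = x + o.dx;
	int srcY = y + o.dy;
	return source[srcY * width + srcX];
}
```

Note the sketch assumes the offset coordinates stay inside the image; as described next, that assumption is exactly what breaks at the image edges.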
The problem occurs when you're at the edges of an image. Let's consider the same scenario, but the image is shifted to the left:

When we try to look up the pixel color at (235, 104) we have a problem. (235, 104) is outside the boundaries of the image.

So, after discussing the problem with wjp, we thought that we could let the pixel warping function ( mutateImage() ) do the image wrapping, instead of doing it in renderSubRectToScreen. Therefore, in renderSubRectToScreen(), instead of clipping subRect to the boundaries of the image, I expand it to fill the entire window. Then inside of mutateImage, if the final pixel coordinates are larger or smaller than the actual image dimensions, I just keep adding or subtracting image widths/heights until the coordinates are in the correct range.
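The keep-adding-or-subtracting idea reduces to a small helper. Here is a sketch of that wrap logic in isolation (`wrapCoord` is my name, not engine code):

```cpp
#include <cstdint>

// Shift a coordinate by whole image widths/heights until it lies in [0, size).
// This is what makes a background image appear continuous when scrolling.
int16_t wrapCoord(int16_t v, int16_t size) {
	while (v >= size)
		v -= size;
	while (v < 0)
		v += size;
	return v;
}
```

A modulo would do the same job in fewer lines, but the offsets are normally less than one image width/height away, so the loop body rarely runs more than once.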
```cpp
void RenderTable::mutateImage(uint16 *sourceBuffer, uint16 *destBuffer, int16 imageWidth, int16 imageHeight, int16 destinationX, int16 destinationY, const Common::Rect &subRect, bool wrap) {
	for (int16 y = subRect.top; y < subRect.bottom; y++) {
		int16 normalizedY = y - subRect.top;
		int32 internalColumnIndex = (normalizedY + destinationY) * _numColumns;
		int32 destColumnIndex = normalizedY * _numColumns;

		for (int16 x = subRect.left; x < subRect.right; x++) {
			int16 normalizedX = x - subRect.left;

			int32 index = internalColumnIndex + normalizedX + destinationX;

			// RenderTable only stores offsets from the original coordinates
			int16 sourceYIndex = y + _internalBuffer[index].y;
			int16 sourceXIndex = x + _internalBuffer[index].x;

			if (wrap) {
				// If the indices are outside the dimensions of the image, shift them until they are in range
				while (sourceXIndex >= imageWidth) {
					sourceXIndex -= imageWidth;
				}
				while (sourceXIndex < 0) {
					sourceXIndex += imageWidth;
				}

				while (sourceYIndex >= imageHeight) {
					sourceYIndex -= imageHeight;
				}
				while (sourceYIndex < 0) {
					sourceYIndex += imageHeight;
				}
			} else {
				// Clamp the yIndex to the size of the image
				sourceYIndex = CLIP<int16>(sourceYIndex, 0, imageHeight - 1);

				// Clamp the xIndex to the size of the image
				sourceXIndex = CLIP<int16>(sourceXIndex, 0, imageWidth - 1);
			}

			destBuffer[destColumnIndex + normalizedX] = sourceBuffer[sourceYIndex * imageWidth + sourceXIndex];
		}
	}
}
```


With these changes, rendering, wrapping, and scrolling all worked well. However, the way the Zork games calculate the background position forced me to slightly change the model.

Script files change location by calling "change_location(<world> <room> <nodeview> <location>)". location refers to the initial position of the background image. Originally, I thought this referred to the distance from the top-left corner of the image. So, for example, location = 200 would create the following image:

However, it turns out that this is not the case. location refers to distance the top-left corner is from the center line of the window:
Therefore, rather than worry about a subRect at all, I just pass in the destination coordinate, and then try to render the entire image (clipping it to window boundaries):
```cpp
void RenderManager::renderSubRectToScreen(Graphics::Surface &surface, int16 destinationX, int16 destinationY, bool wrap) {
	int16 subRectX = 0;
	int16 subRectY = 0;

	// Take care of negative destinations
	if (destinationX < 0) {
		subRectX = -destinationX;
		destinationX = 0;
	} else if (destinationX >= surface.w) {
		// Take care of extreme positive destinations
		destinationX -= surface.w;
	}

	// Take care of negative destinations
	if (destinationY < 0) {
		subRectY = -destinationY;
		destinationY = 0;
	} else if (destinationY >= surface.h) {
		// Take care of extreme positive destinations
		destinationY -= surface.h;
	}

	if (wrap) {
		_backgroundWidth = surface.w;
		_backgroundHeight = surface.h;

		if (destinationX > 0) {
			// Move destinationX to 0
			subRectX = surface.w - destinationX;
			destinationX = 0;
		}

		if (destinationY > 0) {
			// Move destinationY to 0
			subRectY = surface.h - destinationY;
			destinationY = 0;
		}
	}

	// Clip subRect to working window bounds
	Common::Rect subRect(subRectX, subRectY, subRectX + _workingWidth, subRectY + _workingHeight);

	if (!wrap) {
		// Clip to image bounds
		subRect.clip(surface.w, surface.h);
	}

	// Check destRect for validity
	if (!subRect.isValidRect() || subRect.isEmpty())
		return;

	if (_renderTable.getRenderState() == RenderTable::FLAT) {
		_system->copyRectToScreen(surface.getBasePtr(subRect.left, subRect.top), surface.pitch, destinationX + _workingWindow.left, destinationY + _workingWindow.top, subRect.width(), subRect.height());
	} else {
		_renderTable.mutateImage((uint16 *)surface.getPixels(), _workingWindowBuffer, surface.w, surface.h, destinationX, destinationY, subRect, wrap);

		_system->copyRectToScreen(_workingWindowBuffer, _workingWidth * sizeof(uint16), destinationX + _workingWindow.left, destinationY + _workingWindow.top, subRect.width(), subRect.height());
	}
}
```


So to walk through it:

1. If destinationX/Y is less than 0, the image is off the screen to the left/top. Therefore, get the top-left corner of the subRect by negating destinationX/Y.
2. If destinationX/Y is greater than or equal to the image width/height, the image is off the screen to the right/bottom. Therefore, wrap destinationX/Y by subtracting the image width/height.
3. If we're wrapping and destinationX/Y is still positive at this point, it means that the image will be rendered like this:
4. We want it to fully wrap, so we offset the image to the left one imageWidth, and then let mutateImage() take care of actually wrapping.
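The walkthrough above, reduced to a single axis, can be sketched like this (`Normalized` and `normalizeAxis` are illustrative names of my own, not the engine's):

```cpp
#include <cstdint>

// The result of normalizing one axis: where the subRect starts inside the
// image, and where the image lands on the screen
struct Normalized {
	int16_t subRectStart;
	int16_t destination;
};

Normalized normalizeAxis(int16_t destination, int16_t imageSize, bool wrap) {
	Normalized n = {0, destination};
	if (destination < 0) {
		// Off screen to the left/top: start the subRect inside the image
		n.subRectStart = -destination;
		n.destination = 0;
	} else if (destination >= imageSize) {
		// Off screen to the right/bottom: pull back by one image size
		n.destination = destination - imageSize;
	}
	if (wrap && n.destination > 0) {
		// Shift left/up one full image and let the warp function wrap the rest
		n.subRectStart = imageSize - n.destination;
		n.destination = 0;
	}
	return n;
}
```

The same function is applied once for x and once for y before building the subRect.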
The last change to the render system was due not to a problem with the system, but to a problem with the pixel format of the images. All images in Zork Nemesis and Zork Grand Inquisitor are encoded in RGB 555. However, a few of the ScummVM backends do not support RGB 555, so it was desirable to convert all images to RGB 565 on the fly. To do this, all image pixel data is first loaded into a Surface, then converted to RGB 565. After that, it is passed to renderSubRectToScreen().
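For illustration, here is a minimal sketch of what a 555-to-565 conversion of a single pixel looks like (the engine actually relies on Surface::convertToInPlace(); `rgb555To565` is my own name). The only subtlety is that green grows from 5 bits to 6:

```cpp
#include <cstdint>

// Convert one RGB 555 pixel (0RRRRRGGGGGBBBBB) to RGB 565 (RRRRRGGGGGGBBBBB)
uint16_t rgb555To565(uint16_t c) {
	uint16_t r = (c >> 10) & 0x1F;
	uint16_t g = (c >> 5) & 0x1F;
	uint16_t b = c & 0x1F;
	// Green gains a bit: replicate the MSB into the new LSB so that
	// full intensity (31) maps to full intensity (63)
	uint16_t g6 = (uint16_t)((g << 1) | (g >> 4));
	return (uint16_t)((r << 11) | (g6 << 5) | b);
}
```

Replicating the top bit into the bottom bit keeps black at black and white at white, which a plain left shift would not.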

Since I was already preloading the pixel data into a Surface for RGB conversion, I figured that was a good place to do the 'un-transpose-ing', rather than having to do it within mutateImage().
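The 'un-transpose' copy can be sketched in isolation (`unTranspose` is an illustrative name; the engine does this inline while copying into the Surface):

```cpp
#include <cstdint>
#include <vector>

// Element (x, y) of the stored (transposed) image is really pixel (y, x) of
// the true image, so copy it to its proper place
std::vector<uint16_t> unTranspose(const std::vector<uint16_t> &stored,
                                  int trueWidth, int trueHeight) {
	std::vector<uint16_t> out(trueWidth * trueHeight);
	for (int y = 0; y < trueHeight; y++) {
		for (int x = 0; x < trueWidth; x++) {
			out[y * trueWidth + x] = stored[x * trueHeight + y];
		}
	}
	return out;
}
```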

So, with all the changes, this is the current state of the render system:

1. Read image pixel data from file and dump it into a Surface buffer. In the case of a background image, the surface buffer is stored so we only have to read the file once.
```cpp
void RenderManager::readImageToSurface(const Common::String &fileName, Graphics::Surface &destination) {
	Common::File file;

	if (!file.open(fileName)) {
		warning("Could not open file %s", fileName.c_str());
		return;
	}

	// Some files are true TGA, while others are TGZ
	uint32 fileType = file.readUint32BE();

	uint32 imageWidth;
	uint32 imageHeight;
	Graphics::TGADecoder tga;
	uint16 *buffer;
	bool isTransposed = _renderTable.getRenderState() == RenderTable::PANORAMA;
	// All ZEngine images are in RGB 555
	Graphics::PixelFormat pixelFormat555 = Graphics::PixelFormat(2, 5, 5, 5, 0, 10, 5, 0, 0);
	destination.format = pixelFormat555;

	bool isTGZ;

	// Check for TGZ files
	if (fileType == MKTAG('T', 'G', 'Z', '\0')) {
		isTGZ = true;

		// TGZ files have a header and then Bitmap data that is compressed with LZSS
		uint32 decompressedSize = file.readSint32LE();
		imageWidth = file.readSint32LE();
		imageHeight = file.readSint32LE();

		buffer = (uint16 *)(new uint16[decompressedSize]);
		// (the LZSS decompression of the bitmap data into 'buffer' is omitted here)
	} else {
		isTGZ = false;

		// Reset the cursor
		file.seek(0);

		// Decode
		if (!tga.loadStream(file)) {
			warning("Error while reading TGA image");
			return;
		}

		Graphics::Surface tgaSurface = *(tga.getSurface());
		imageWidth = tgaSurface.w;
		imageHeight = tgaSurface.h;

		buffer = (uint16 *)tgaSurface.getPixels();
	}

	// Flip the width and height if transposed
	if (isTransposed) {
		uint16 temp = imageHeight;
		imageHeight = imageWidth;
		imageWidth = temp;
	}

	// If the destination internal buffer is the same size as what we're copying into it,
	// there is no need to free() and re-create
	if (imageWidth != destination.w || imageHeight != destination.h) {
		destination.create(imageWidth, imageHeight, pixelFormat555);
	}

	// If transposed, 'un-transpose' the data while copying it to the destination
	// Otherwise, just do a simple copy
	if (isTransposed) {
		uint16 *dest = (uint16 *)destination.getPixels();

		for (uint32 y = 0; y < imageHeight; y++) {
			uint32 columnIndex = y * imageWidth;

			for (uint32 x = 0; x < imageWidth; x++) {
				dest[columnIndex + x] = buffer[x * imageHeight + y];
			}
		}
	} else {
		memcpy(destination.getPixels(), buffer, imageWidth * imageHeight * _pixelFormat.bytesPerPixel);
	}

	// Cleanup
	if (isTGZ) {
		delete[] buffer;
	} else {
		tga.destroy();
	}

	// Convert in place to RGB 565 from RGB 555
	destination.convertToInPlace(_pixelFormat);
}
```

2. Use the ScriptManager to calculate the destination coordinates
3. Call renderSubRectToScreen(surface, destinationX, destinationY, wrap)    (see above)
1. If destinationX/Y is less than 0, the image is off the screen to the left/top. Therefore, get the top-left corner of the subRect by negating destinationX/Y.
2. If destinationX/Y is greater than or equal to the image width/height, the image is off the screen to the right/bottom. Therefore, wrap destinationX/Y by subtracting the image width/height.
3. If we're wrapping and destinationX/Y is still positive at this point, offset the image to the left/up by one image width/height.
4. If we're in PANORAMA or TILT state, call mutateImage()     (see above)
1. Iterate over the pixels of the subRect
2. At each pixel get the coordinate offsets from the RenderTable
3. Add the offsets to the coordinates of the pixel.
4. Use these new coordinates to get the location of the pixel color
5. Store this color at the coordinates of the original pixel
5. Blit the final result to the Screen using OSystem::copyRectToScreen()

That's it! Thanks for reading. As always, feel free to ask questions or make comments. Happy coding!

-RichieSams

## Sunday, August 18, 2013

### Moving through time

Before I start, I know it's been a long time since my last post. Over the next couple days I'm going to write a series of posts about what I've been working on these last two weeks. So without further ado, here is the first one:

While I was coding in the last couple of weeks, I noticed that every time I came back to the main game from a debug window, the whole window hung for a good 6 seconds. After looking at my run() loop for a bit, I realized what the problem was. When I returned from the debug window, the next frame would have a massive deltaTime, which in turn caused a huge frame delay. This was partially a problem with how I had structured my frame delay calculation, but in the end, I needed a way to know when the game was paused, and to modify my deltaTime value accordingly.

To solve the problem, I came up with a pretty simple Clock class that tracks time, allows pausing (and, if you really wanted, scaling/reversing):
```cpp
/* Class for handling frame-to-frame deltaTime while keeping track of time pauses/un-pauses */
class Clock {
public:
	Clock(OSystem *system);

private:
	OSystem *_system;
	uint32 _lastTime;
	int32 _deltaTime;
	uint32 _pausedTime;
	bool _paused;

public:
	/**
	 * Updates _deltaTime with the difference between the current time and
	 * when the last update() was called.
	 */
	void update();
	/**
	 * Get the delta time since the last frame. (The time between update() calls)
	 *
	 * @return    Delta time since the last frame (in milliseconds)
	 */
	uint32 getDeltaTime() const { return _deltaTime; }
	/**
	 * Get the time from the program starting to the last update() call
	 *
	 * @return Time from program start to last update() call (in milliseconds)
	 */
	uint32 getLastMeasuredTime() const { return _lastTime; }

	/**
	 * Un-pause the clock.
	 * Has no effect if the clock is already un-paused.
	 */
	void start();
	/**
	 * Pause the clock. Any future delta times will take this pause into account.
	 * Has no effect if the clock is already paused.
	 */
	void stop();
};
```


I'll cover the guts of the functions in a bit, but first, here is their use in the main run() loop:
```cpp
Common::Error ZEngine::run() {
	initialize();

	// Main loop
	while (!shouldQuit()) {
		_clock.update();
		uint32 currentTime = _clock.getLastMeasuredTime();
		uint32 deltaTime = _clock.getDeltaTime();

		processEvents();

		_scriptManager->update(deltaTime);
		_renderManager->update(deltaTime);

		// Update the screen

		// Calculate the frame delay based off a desired frame time
		int delay = _desiredFrameTime - int32(_system->getMillis() - currentTime);
		// Ensure non-negative
		delay = delay < 0 ? 0 : delay;
		_system->delayMillis(delay);
	}

	return Common::kNoError;
}
```


And lastly, whenever the engine is paused (by a debug console, by the Global Main Menu, by a phone call, etc.), the ScummVM core calls pauseEngineIntern(bool pause), which can be overridden to implement any engine-internal pausing. In my case, I call Clock::start()/stop():
```cpp
void ZEngine::pauseEngineIntern(bool pause) {
	_mixer->pauseAll(pause);

	if (pause) {
		_clock.stop();
	} else {
		_clock.start();
	}
}
```


All the work of the class is done by update(): it gets the current time using getMillis() and subtracts the last recorded time from it to get _deltaTime. If the clock is currently paused, it subtracts the amount of time that the clock has been paused. Lastly, it clamps the value to non-negative values.
```cpp
void Clock::update() {
	uint32 currentTime = _system->getMillis();

	_deltaTime = (currentTime - _lastTime);
	if (_paused) {
		_deltaTime -= (currentTime - _pausedTime);
	}

	if (_deltaTime < 0) {
		_deltaTime = 0;
	}

	_lastTime = currentTime;
}
```


If you wanted to slow down or speed up time, it would be a simple matter to scale _deltaTime. You could even make it negative to make time go backwards. The full source code can be found here and here.
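As a sketch of that idea (`scaleDelta` is a hypothetical helper, not part of the engine):

```cpp
#include <cstdint>

// Scale the measured delta before handing it to the game logic.
// A scale of 0.5 halves game speed; a negative scale runs time backwards.
int32_t scaleDelta(int32_t deltaMillis, float scale) {
	return (int32_t)(deltaMillis * scale);
}
```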

Well that's it for this post. Next up is a post about the rendering system. Until then, happy coding!

-RichieSams

## Saturday, August 3, 2013

### The making of psychedelic pictures (AKA, the panorama system)

In the game, the backgrounds are very long 'circular' images. By circular, I mean that if you were to put two copies of the same image end-to-end, they would be continuous. So, when the user moves around in the game, we just scroll the image accordingly. However, being that the images are flat, this movement isn't very realistic; it would seem like you are continually moving sideways through an endless room. (Endless staircase memories anyone?)

To counter this, the makers of ZEngine created 'ZVision': they used trigonometry to warp the images on the screen so, to the user, it looked like you were truly spinning 360 degrees. So let's dive into how exactly they did that.

The basic premise is mapping an image onto a cylinder and then mapping it back onto a flat plane. The math is all done once and stored into an offset lookup table. Then the table is referenced to warp the images.
Without warping

With warping

You'll notice that the images are pre-processed as though they were captured with a panorama camera.

Video example:

Here is the function for creating the panorama lookup table:
```cpp
void RenderTable::generatePanoramaLookupTable() {
	// Each entry stores an (x, y) offset pair, so clear the full entry size
	memset(_internalBuffer, 0, _numRows * _numColumns * sizeof(Common::Point));

	float halfWidth = (float)_numColumns / 2.0f;
	float halfHeight = (float)_numRows / 2.0f;

	float fovRadians = (_panoramaOptions.fieldOfView * M_PI / 180.0f);
	float halfHeightOverTan = halfHeight / tan(fovRadians);
	float tanOverHalfHeight = tan(fovRadians) / halfHeight;

	for (uint x = 0; x < _numColumns; x++) {
		// Add an offset of 0.01 to overcome the zero tan/atan issue (vertical line on half of the screen)
		float temp = atan(tanOverHalfHeight * ((float)x - halfWidth + 0.01f));

		int32 newX = int32(floor((halfHeightOverTan * _panoramaOptions.linearScale * temp) + halfWidth));
		float cosX = cos(temp);

		for (uint y = 0; y < _numRows; y++) {
			int32 newY = int32(floor(halfHeight + ((float)y - halfHeight) * cosX));

			uint32 index = y * _numColumns + x;

			// Only store the x,y offsets instead of the absolute positions
			_internalBuffer[index].x = newX - x;
			_internalBuffer[index].y = newY - y;
		}
	}
}
```


I don't quite understand all the math here, so at the moment it is just a cleaned-up version of what Marisa Chan had. If any of you would like to help me understand/clean up some of the math here I would be extremely grateful!
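As a sanity check on the math, here is the newX computation pulled out into a standalone sketch (`panoramaNewX` is my name; the screen dimensions and field of view below are arbitrary test values, not the game's): at the horizontal center of the screen the warp leaves x essentially unchanged, while columns near the edges are pulled inward.

```cpp
#include <cmath>
#include <cstdint>

// Same formulas as generatePanoramaLookupTable(), for a single column x
int32_t panoramaNewX(int x, int numColumns, int numRows,
                     float fovDegrees, float linearScale) {
	float halfWidth = numColumns / 2.0f;
	float halfHeight = numRows / 2.0f;

	float fovRadians = fovDegrees * 3.14159265f / 180.0f;
	float halfHeightOverTan = halfHeight / std::tan(fovRadians);
	float tanOverHalfHeight = std::tan(fovRadians) / halfHeight;

	// The 0.01 offset avoids the degenerate column exactly at the center
	float temp = std::atan(tanOverHalfHeight * ((float)x - halfWidth + 0.01f));
	return (int32_t)std::floor(halfHeightOverTan * linearScale * temp + halfWidth);
}
```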

Putting aside the math for the time being, the function creates a (dx, dy) offset for each (x, y) coordinate. In other words, if we want the pixel located at (x, y), we should instead look at pixel (x + dx, y + dy). So to blit an image to the screen, we do this:
1. Iterate through each pixel
2. Use the (x,y) coordinates to look up a (dx, dy) offset in the lookup table
3. Look up that pixel color in the source image at (x + dx, y + dy)
4. Set that pixel in the destination image at (x,y)
5. Blit the destination image to the screen using OSystem::copyRectToScreen()

Steps 1 - 4 are done in mutateImage()
```cpp
void RenderTable::mutateImage(uint16 *sourceBuffer, uint16 *destBuffer, uint32 imageWidth, uint32 imageHeight, Common::Rect subRectangle, Common::Rect destRectangle) {
	bool isTransposed = _renderState == RenderTable::PANORAMA;

	for (int y = subRectangle.top; y < subRectangle.bottom; y++) {
		uint normalizedY = y - subRectangle.top;

		for (int x = subRectangle.left; x < subRectangle.right; x++) {
			uint normalizedX = x - subRectangle.left;

			uint32 index = (normalizedY + destRectangle.top) * _numColumns + (normalizedX + destRectangle.left);

			// RenderTable only stores offsets from the original coordinates
			uint32 sourceYIndex = y + _internalBuffer[index].y;
			uint32 sourceXIndex = x + _internalBuffer[index].x;

			// Clamp the yIndex to the size of the image
			sourceYIndex = CLIP<uint32>(sourceYIndex, 0, imageHeight - 1);

			// Clamp the xIndex to the size of the image
			sourceXIndex = CLIP<uint32>(sourceXIndex, 0, imageWidth - 1);

			if (isTransposed) {
				destBuffer[normalizedY * destRectangle.width() + normalizedX] = sourceBuffer[sourceXIndex * imageHeight + sourceYIndex];
			} else {
				destBuffer[normalizedY * destRectangle.width() + normalizedX] = sourceBuffer[sourceYIndex * imageWidth + sourceXIndex];
			}
		}
	}
}
```


• Since the whole image can't fit on the screen, we iterate over a subRectangle of the image instead of the whole width/height.
• destRectangle refers to where the image will be placed on the screen. It is in screen space, so we use it to offset the image coordinates when computing the lookup-table index.
• We clamp the coordinates to the height/width of the image to avoid reading outside the image buffer.

You may have noticed the last bit of code hinted at panoramas being transposed. For some reason, the developers chose to store panorama image data transposed. (Perhaps it made their math easier?) By transposed, I mean a pixel (x, y) in the true image is instead stored at (y, x). The image height and width are also swapped, so an image that is truly 1440x320 is stored as 320x1440. If you have any insights into this, I'm all ears. Swapping x and y in code was trivial enough, though. I would like to note that prior to calling mutateImage(), I check whether the image is a panorama, and if so, swap the width and height. So imageWidth and imageHeight in the function are the width/height of the true image, not of the actual source buffer. The code that does the swap can be found in RenderManager::renderSubRectToScreen.

Well, that's it for now. My next goal is to get the majority of the events working so I can load a room and the background image, music, etc. load automatically. So until next time, happy coding!

-RichieSams