We are getting closer and closer to generating an actual cubic panorama. In case you just jumped in from elsewhere: I’m working on a project in which I have to create a cubic panorama with hotspots that light up, and it has to be in AS2, so this is my journal on the way there. In the previous post we learned how to split a plane into multiple segments to reduce the texture distortion errors, and we also mentioned that perspective is not the same as distortion. I played around a bit with perspective in the GridRider intermezzo post, which I will turn into a full game as soon as I’m done with these tutorials. After yet another short night with caffeine and Volbeat to keep me awake, it’s 10 AM on a rainy Sunday morning: a good excuse to post the next bit!
To clarify the difference between perspective and distortion check out the following image (and this post):
So in order to add perspective to a plane I made some adjustments to the DistortedPlane class. Not all of the changes were required by the addition of perspective; I also ran into some issues when creating the GridRider prototype, so I included extensions to fix those as well.
The first change I made was that I wanted to be able to use a tiling texture. This required the ability to scale the bitmap on the plane.
As you might recall from a previous post, the size of a piece of bitmap was derived using the following formulas:
var lInvPieceWidth:Number = lXSegments/_bitmap.width;
var lInvPieceHeight:Number = lYSegments/_bitmap.height;
But now I want my texture to be smaller so that it can tile. This is the same as using a larger bitmap containing copies of the original. So imagine I want the texture to tile twice in the x and y direction; I could create a bitmap called _bitmap2 with width = _bitmap.width*2 and height = _bitmap.height*2:
var lInvPieceWidth:Number = lXSegments/(_bitmap.width*2);
var lInvPieceHeight:Number = lYSegments/(_bitmap.height*2);
Another way to put this is that I’m scaling the texture down by (sx = 0.5, sy = 0.5). This amounts to:
var lInvPieceWidth:Number = (lXSegments * _sx)/_bitmap.width ;
var lInvPieceHeight:Number = (lYSegments * _sy)/_bitmap.height;
Enable the “Animate texture scale” option in the interactive gadget to see this in action. It’s best to keep ‘repeat’ on when scaling down the texture.
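To make the scaling math easy to check, here is a small stand-in in Python (not the actual AS2 class; the names mirror the snippets above) showing that scaling the texture by (sx, sy) gives the same piece sizes as pretending the bitmap is larger:

```python
# Hypothetical Python recreation of the piece-size formulas above.
def inv_piece_sizes(x_segments, y_segments, bmp_w, bmp_h, sx=1.0, sy=1.0):
    """Inverse piece sizes: scaling the texture down by (sx, sy) is the
    same as using a bitmap that is 1/sx times wider and 1/sy taller."""
    inv_piece_w = (x_segments * sx) / bmp_w
    inv_piece_h = (y_segments * sy) / bmp_h
    return inv_piece_w, inv_piece_h

# Scaling a 256x256 texture by 0.5 on a 4x4 grid doubles the tiling,
# exactly like the _bitmap.width*2 variant in the text:
print(inv_piece_sizes(4, 4, 256, 256, 0.5, 0.5))  # (0.0078125, 0.0078125)
print(4 / (256 * 2))                              # 0.0078125, same value
```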
The second thing I thought I needed for the GridRider prototype (though it turned out that wasn’t the way to go) was translating the texture with respect to the plane. To see what happens to our equations we simply substitute in1.x+dx and in1.y+dy for in1.x and in1.y. Putting those values through our equations shows that almost everything stays the same (not shown here, but you can try it for yourself), with the exception of the tx and ty equations (which is to be expected, I guess).
If we look in the previous post, we reached a point where we reduced the tx/ty equations to:
matrix.tx = out1.x - (matrix.a * in1.x) - (matrix.c * in1.y);
matrix.ty = out1.y - (matrix.b * in1.x) - (matrix.d * in1.y);
And we saw that the coordinates for in1 were (x * piecewidth, y * pieceheight).
Looking at tx only (ty goes the same way), we get:
matrix.tx = out1.x - (matrix.a * (x * piecewidth + dx)) - (matrix.c * (y * pieceheight + dy));
However we ‘optimized’ our equations, so we have to write this a little differently:
lMatrix.tx = lULx - ((lLRx - lLLx) * x) - (lMatrix.a * lDx) - ((lLLx - lULx) * y) - (lMatrix.c * lDy);
lMatrix.ty = lULy - ((lLRy - lLLy) * x) - (lMatrix.b * lDx) - ((lLLy - lULy) * y) - (lMatrix.d * lDy);
where (matrix.a * (x * piecewidth + dx)) expands to ((lLRx - lLLx) * x) + (lMatrix.a * lDx), with both terms then subtracted separately above, since matrix.a was (out2.x - out3.x)/piecewidth, and out2.x is lLRx and out3.x is lLLx in our setup.
But I’m probably starting to lose you here. In fact the only thing you have to remember is that when you take the starting equations for mapping an arbitrary triangle to an arbitrary triangle and incorporate the added dx and dy, you will eventually end up with the equations above.
Enable the “Animate texture offset” option in the interactive gadget to see this in action. It’s best to keep ‘repeat’ on when translating the texture.
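If you don’t feel like expanding the algebra by hand, here is a quick numeric check in Python (a stand-in for the AS2; the corner values are made up) that the ‘optimized’ tx with a texture offset (dx, dy) matches the direct formula:

```python
# Verify: ULx - a*(x*pw + dx) - c*(y*ph + dy)  ==  the optimized form.
# Corner names (UL, LL, LR) follow the post; values are arbitrary.
pw, ph = 32.0, 32.0               # piece width/height in texture space
ULx, LLx, LRx = 10.0, 14.0, 50.0  # x coordinates of the mapped corners
a = (LRx - LLx) / pw              # matrix.a, as derived in the previous post
c = (LLx - ULx) / ph              # matrix.c
x, y = 2, 3                       # segment indices on the grid
dx, dy = 5.0, -2.0                # texture offset

direct = ULx - a * (x * pw + dx) - c * (y * ph + dy)
optimized = ULx - (LRx - LLx) * x - a * dx - (LLx - ULx) * y - c * dy
print(direct, optimized)  # both -79.375: the two forms agree
```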
I didn’t need anything beyond that yet. And although the math probably isn’t that complex (I think the key is to first transform the bitmap points and then interpolate between them), the calculations would get more involved and thus slower. Translation and scaling came at a pretty low cost, so we’ll leave it at that.
Separating interpolation from rendering
Because the points can now stay the same while the texture is being transformed, I didn’t want to re-interpolate the points every time before rendering. So I split the render method into a separate interpolation section and a render section, and added some optimizations to check whether interpolation was really required.
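The shape of that optimization is a classic dirty flag. This is just a hypothetical Python sketch of the idea (not the actual AS2 class; all names are mine): texture-only changes leave the interpolated grid valid, geometry changes invalidate it.

```python
# Dirty-flag sketch: re-interpolate only when the geometry changed.
class PlaneSketch:
    def __init__(self, corners):
        self.corners = corners
        self.dx = self.dy = 0.0
        self._points_dirty = True
        self._cache = None
        self.interpolations = 0        # counter, just to observe the effect

    def set_corners(self, corners):
        self.corners = corners
        self._points_dirty = True      # geometry changed: must re-interpolate

    def set_texture_offset(self, dx, dy):
        self.dx, self.dy = dx, dy      # texture-only change: points stay valid

    def render(self):
        if self._points_dirty:         # interpolate only when really required
            self._cache = list(self.corners)   # placeholder for interpolation
            self.interpolations += 1
            self._points_dirty = False
        return self._cache             # placeholder for the actual drawing

p = PlaneSketch([(0, 0), (1, 0), (0, 1), (1, 1)])
p.render()
p.set_texture_offset(1.0, 0.0)
p.render()                             # texture moved, points not redone
print(p.interpolations)                # 1
```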
Direct access to the interpolation array
Although not always advisable, direct access to the interpolation array does give you greater control over what you can do while rendering the plane. Use with caution, or not at all ;).
Now for probably the most difficult part: the perspective rendering of the planes. First of all, without going into perspective projection itself: 3d projection requires 3d coordinates. Up until the point where we actually project the 3d coordinates, we don’t have to change that much: instead of an x and y to interpolate, we now have an x, y and z.
However I didn’t want to turn the DistortedPlane class into something that wouldn’t allow simple 2d distortion anymore, so I did the following:
- instead of Points, the class now uses Point3D objects;
- if you pass a Z coordinate for all 4 corners of the plane, 3d perspective projection is used (and you should set the 3d properties);
- if any of the Z coordinates is missing, we revert to simple 2d image distortion.
So if we are using 3d projection, all z coordinates are interpolated as well.
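To make that concrete, here is a hedged Python sketch of interpolating all three components across the grid. The corner ordering (UL, UR, LL, LR) is my assumption for illustration, not necessarily the class’s internal order:

```python
# Bilinear interpolation of four 3d corner points over a segment grid,
# now carrying z along with x and y.
def interpolate_grid(ul, ur, ll, lr, x_segments, y_segments):
    pts = []
    for j in range(y_segments + 1):
        fy = j / y_segments                     # vertical blend factor
        for i in range(x_segments + 1):
            fx = i / x_segments                 # horizontal blend factor
            pts.append(tuple(
                (ul[k] * (1 - fx) + ur[k] * fx) * (1 - fy)
                + (ll[k] * (1 - fx) + lr[k] * fx) * fy
                for k in range(3)))             # k: x, y and z alike
    return pts

# Center of a 2x2 grid lands halfway in all three components:
pts = interpolate_grid((0, 0, -1), (2, 0, -1), (0, 2, -3), (2, 2, -3), 2, 2)
print(pts[4])  # (1.0, 1.0, -2.0)
```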
The last step before rendering is projecting these interpolated coordinates. Projection is basically just what it says: you have a point at a certain z, but you want to render it on a 2d plane at depth f. If z == f there is nothing to do, since the point already lies on the plane you want to project onto. If the point is twice as far away as the projection plane, the spot where a line to that point intersects the projection plane will be twice as close to the focal point.
That’s the short version, a much better explanation can be found here.
So basically the projection formulas are:
ProjX = -(farplaneDistance/z) * x
ProjY = -(farplaneDistance/z) * y
This assumes objects in front of the camera have a negative z, which seems to be normal in 3d projections since the objects are on ‘the other side’ of the camera.
So you see that as a point moves farther away, its x and y get closer to 0. If you want them to approach FocalX and FocalY instead, use this:
ProjX = -(farplaneDistance/z) * (x – focalX) + focalX;
ProjY = -(farplaneDistance/z) * (y – focalY) + focalY;
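The same formulas in a small Python function (a stand-in for the AS2; `f` here plays the role of farplaneDistance above), keeping the convention that z is negative in front of the camera:

```python
# Perspective projection: focal_x/focal_y shift the vanishing point.
def project(x, y, z, f, focal_x=0.0, focal_y=0.0):
    s = -(f / z)                       # scale factor: 1 when z == -f
    return (s * (x - focal_x) + focal_x,
            s * (y - focal_y) + focal_y)

# A point twice as far as the projection plane lands twice as close:
print(project(100.0, 50.0, -200.0, 100.0))  # (50.0, 25.0)
# Shifting the focal point pulls distant points toward it instead of 0:
print(project(100.0, 50.0, -200.0, 100.0, focal_x=20.0))  # (60.0, 25.0)
```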
So what should you use for your farplane distance? For now whatever looks good. I’ll get back to this later when we are discussing field of view calculations.
One other thing I added was some basic clipping (drag the clip handles in the interactive example). The idea is that we only render segments of which at least one corner is within the clipping rectangle. Better material mapping would be good too (backface culling, two-sided materials), but I’ll keep that for another post.
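The clipping test itself is tiny; a Python sketch of the idea (names are mine, not from the class):

```python
# Draw a segment only if at least one of its corners lies inside
# the clip rectangle. Segments fully outside are skipped.
def segment_visible(corners, clip_left, clip_top, clip_right, clip_bottom):
    return any(clip_left <= x <= clip_right and clip_top <= y <= clip_bottom
               for x, y in corners)

# One corner (5, 5) is inside the 100x100 clip rect, so this renders:
print(segment_visible([(-5, -5), (5, 5), (-5, 5), (5, -5)], 0, 0, 100, 100))  # True
```

Note this is conservative the other way around too: a huge segment whose corners all fall outside the rectangle while its area still overlaps it would be skipped, which is a known limitation of corner-only tests.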
Download the sources: 3d Distortedplane Class V2
You might notice I used nearplane and farplane where I should have used nearplane, farplane and projectionplane; I will fix this in a future version.