Thursday, December 31, 2009

Floating Point Depth Buffer

Update: more complete and updated info about the use of the reversed floating point buffer can be found in the post Maximizing Depth Buffer Range and Precision.

I had always thought that using a floating point depth buffer on modern hardware would solve all the depth buffer problems, similarly to the logarithmic depth buffer but without requiring any changes to shader code, and without the artifacts and potential performance losses of its workarounds. So I was quite surprised when swiftcoder mentioned in this thread that he had found the floating point depth buffer to have insufficient resolution for a planetary renderer.

The value that gets written into the Z-buffer is the value of z/w after projection, and it has an unfortunate shape that gives enormous precision to a very narrow region close to the near plane. In fact, almost half of the possible values lie within twice the distance of the near plane. In the picture below it's the red "curve". The logarithmic distribution (blue curve), on the other hand, is optimal with regard to the object sizes that can be visible at a given distance.


A floating point depth buffer should be able to handle the original z/w curve, because the exponent part corresponds to the logarithm of the number.

However, here's the catch. Depth buffer values converge towards 1.0; in fact most of the depth range produces values very close to it. The resolution of floating point around 1.0 is given entirely by the mantissa, and it's approximately 1e-7. That is not enough for planetary rendering, given the ugly shape of z/w.

The solution, however, is quite easy. Swapping the values of the far and near planes and changing the depth function to "greater" inverts the z/w shape so that it converges towards zero with rising distance, where there is plenty of floating point resolution.

I've also found an earlier post by Humus where he says the same thing, and also gives more insight into the old W-buffers and various Z-buffer properties and optimizations.

Thursday, December 10, 2009

Early vehicle physics test

First test of vehicle physics in the engine. The camera isn't attached to the vehicle yet, so I had to chase it, without much success I admit.
It's also really easy to lose sight of the truck in the big world.

Thursday, November 26, 2009

Collada Importer

Angrypig managed to get the Collada importer into beta stage, so here are some first screens with the new Cessna model and Gaz 66 truck. Too bad neither AO nor shadows are ready yet; the screens would be much nicer with them.


The Cessna cockpit has quite a high resolution; the indicators should come alive soon.


Views from the outside. The propeller is already turning, but there's no motion blur on it yet. Transparency works too, but there's a bug in the model. The windows ask for a bit of dirt here and there, too.








Approximately 3,900 feet above the ground.








Mount Etna in the background.


Gaz 66 truck ..


.. and a fleet of them.

Thursday, November 12, 2009

Horizontal displacement

Recently I've been working on the code that computes fractal data - a bit of reorganization to allow more independent fractal channels, which are used everywhere in the engine, as I was already running short of them. Previously we had one channel where the terrain elevation fractal was computed, another channel with the low-pass filtered terrain slope, and two independent fractal channels. After the redesign we have four independent fractal channels, and additionally the filtered slope is computed independently for the u,v terrain directions.

The two-dimensional slope values allow for a better horizontal displacement effect on the generated terrain, because now it's possible to make the rock bulge from the hill slope in the right direction. In the previous version only the absolute slope value was known, so the fractals extruded the mesh independently in two orthogonal directions, which of course did not always look good.

The displacement equation was additionally parametrized to get more effects out of it. Currently it's possible to vary the dominant wavelength, bias and amplitude of the fractal used.

Here is a quick comparison of what the parameters do.

Bias +0.5 (bulging outwards from the slope), wavelength 19m


Wavelength 38m


Wavelength 76m



In the last screenshot the fractal used for the displacement is only slightly visible, and the bulbous shape of purely slope-dependent displacement shows up.
Here's how the displacement looks when the bias is even larger, i.e. when the sloped parts are pushed even further outwards (bias +1.5):


It's also possible to use negative bias values, which make the sloped parts carve into the hill (bias -1.0):



On the other hand, an amplitude boost can emphasize the effect of the fractal, creating more visible overhangs here and there:


For comparison, here's how the same terrain looks without any horizontal fractal effect at all:


... and without the vertical fractal, only a bicubic subdivision of the original 76m terrain grid:



The next thing to try could be using a texture containing the rough shape of the specific type of erosion one would like to achieve. The current technique still cannot generate proper cliff and canyon walls, but combining it with the shape map lookup should theoretically do the job.

There's also an interactive comparison.

Friday, October 2, 2009

Web Site Update

Some time ago I had an idea to make the web site with a white background. That idea was inspired by a bug in early atmosphere rendering that made Earth look as if submerged in watery milk. I remembered it, managed to reproduce the bug in RenderMonkey, tuned it, and here's how it looks:

earth

The whole new look built around that idea can be seen here: outerra.com

Wednesday, September 2, 2009

Chromium & Google Maps

Another short video, showing the embedded Chromium browser with Google Maps synchronized to the engine camera.

Monday, August 31, 2009

Short flight sim video

It's been one year since we released the first videos from the Outerra engine. So it's about time to release another short one, featuring a lifeless Jalapeno and a pilotless Cessna plane with a broken propeller.

Monday, August 17, 2009

Flight sim?

The two guys I'm working with on the engine, Laco 'angrypig' Hrabcak and Tomas 'jonsky' Mihalovic, worked some time ago on an FNPT II MCC trainer with a Boeing 737-800 cockpit replica (Laco has some photos on his history page).
They are still quite drawn to simulators, so when the engine got a positive reception from the flight sim community, we decided to lean the development slightly more that way.

Recently, Laco got a preliminary Collada loader working, loading a Cessna model. He is also incorporating the JSBSim Flight Dynamics Model library into the engine, while Tomas works on the cockpit.

Here are some first screens.


Here are some screens with the (currently featureless) cockpit; the recent Z-buffer development allows handling the cockpit and terrain without problems.




A look below while grounded.


And some screens from outside.


Wednesday, August 12, 2009

Logarithmic Depth Buffer

I assume pretty much every 3D programmer runs into Z-buffer issues sooner or later. Especially when doing planetary rendering; the distant stuff can be a thousand kilometers away but you still would like to see fine details right in front of the camera.

Previously I dealt with the problem by splitting the depth range in two, using the first part for near objects and the other for distant ones. The boundary was floating, somewhere around 5km - quad-tree tiles up to a certain level used the distant part, and the more detailed tiles, which by the law of LOD occur nearer the camera, used the other part.
Most of the time this worked. But in one case it failed miserably - when a more detailed tile appeared behind a less detailed one.
I was thinking about ways to fix it, grumbling about why we can't have a Z-buffer with a better distribution, when it occurred to me that maybe we can.

Steve Baker's document explains the common problems with the Z-buffer. In short, the depth values are proportional to the reciprocal of Z. This gives enormous precision near the camera but little off in the distance. A common remedy is to move the near clip plane further away, which helps but brings its own problems, mainly that .. the near clip plane is too far.

A much better Z-value distribution is a logarithmic one. It also plays nicely with the LOD used in large scale terrain rendering.
The depth value can be modified with the following equation after it's been transformed by the projection matrix:

    z = log(C*w + 1) / log(C*Far + 1) * w      //DirectX with depth range 0..1
or 

    z = (2*log(C*w + 1) / log(C*Far + 1) - 1) * w   //OpenGL, depth range -1..1
 
Here C is a constant that determines the resolution near the camera, and the final multiplication by w undoes in advance the implicit division by w later in the pipeline.

Note: you can use the value of w after your vertices are transformed by the model-view-projection matrix, since the w component ends up holding the view-space depth. Hence w is used in the equations above.

Update: Logarithmic depth buffer optimizations & fixes
The resolution at distance x, for a given C and n bits of Z-buffer resolution, can be computed as

    Res = log(C*Far + 1) / ((2^n - 1) * C/(C*x+1))

So for example, for a far plane at 10,000 km and a 24-bit Z-buffer this gives the following resolutions:
            1m      10m     100m    1km     10km    100km   1Mm     10Mm
------------------------------------------------------------------------
C=1         1.9e-6  1.1e-5  9.7e-5  0.001   0.01    0.096   0.96    9.6     [m]
C=0.001     0.0005  0.0005  0.0006  0.001   0.006   0.055   0.549   5.49    [m]

Along with the better utilization of the z-value space, it also (almost) gets rid of the near clip plane.

And here comes the result.


Looking into the nose while keeping eye on distant mountains ..


10 thousand kilometers, no near Z clipping and no Z-fighting! HOORAY!

More details

The constant C basically changes the resolution near the camera; I used C=1 for the screenshots, giving a theoretical resolution of 1.9e-6 m. However, the resolution near the camera cannot be fully utilized unless the geometry is finely tessellated too, because the depth is interpolated linearly, not logarithmically. On models such as the guy in the screenshots it is perfectly fine to put the camera on his nose, but on models with long strips whose vertices are a few meters apart, the interpolation bugs can become visible. We will deal with it by requiring a certain minimum tessellation.


Fragment shader interpolation fix

Ysaneya suggested a fix for the artifacts that occur with thin or large triangles close to the camera, when the perspectively interpolated depth values diverge too much from the logarithmic ones: write the correct Z-value at the pixel shader level. This disables the fast-Z mode, but he found the performance hit to be negligible.

Update: more complete and updated info about the logarithmic depth buffer and the reversed floating point buffer can be found in the post Maximizing Depth Buffer Range and Precision.

Thursday, June 25, 2009

Roads

Added support for vector data, currently used for simple roads; later it will also be used for rivers and similar features.
The points are interpolated using a Catmull-Rom spline, and the generated path is baked into the tile's material map with a specific artificial material. The tile height map is modified too, by setting the elevation from the path while gradually blending the borders into the terrain.
The roads can also be slanted.



The actual algorithm runs in a shader, finding the distance to the spline from each point and outputting a blending coefficient and elevation. The asphalt material id is generated where the coefficient is one; otherwise only the height map is modified by blending.
Once the map is created it incurs no additional performance penalty - it's just another set of materials being used.



While the algorithm can also generate the road surface markings as specific materials, the resolution of the tile's material map isn't sufficient for such highly contrasting features. These will have to be drawn separately as an overlay on the tile, possibly using the same algorithm while drawing to the screen.



The algorithm


I'm drawing quads with road segments into an intermediate texture that contains the elevation (as interpolated by the spline) and a blending coefficient, which is 1 on the road surface and slowly falls to 0 at the road borders.
To get the distance to the spline for a particular pixel, I had to transform world coordinates into the road segment's lengthwise and crosswise coordinates.

The final solution got somewhat more complicated, though.

Since linear interpolation on quads is ambiguous, I had to treat the segment as a bilinear surface and compute its inverse to get these lengthwise/crosswise coordinates. See http://stackoverflow.com/questions/808441/inverse-bilinear-interpolation for an explanation.

But as it turned out this wasn't sufficient, because the spline road segment is of course not a bilinear surface. It led to roads that narrowed in sharper turns or disappeared.
The actual road segment surface equation would be:
    p = q(s) + t*n(s)
where q(s) is the spline equation, n(s) is the 2D normal to the spline, s is the lengthwise coordinate and t is the crosswise coordinate. However, solving this directly would lead to a 5th degree polynomial, and one would have to pick the correct root too ...

But ultimately I used this equation to take one step of a Newton-Raphson approximation, seeding it with the value of s computed by the inverse bilinear transform. The approximation converges rapidly, so one step was enough.
The equation for one step of Newton-Raphson is:
    s1 = s0 - f(s0)/f'(s0)
with
    f(s)  = (px - qx(s)) * qx'(s) + (py - qy(s)) * qy'(s)
    f'(s) = (px - qx(s)) * qx"(s) - qx'(s) * qx'(s)
          + (py - qy(s)) * qy"(s) - qy'(s) * qy'(s)
So you need the first and second derivative of the spline equation to compute it.

Friday, May 29, 2009

Horizontal displacement

As always, I don't keep strictly to the plan, and decided to try one of the things I wanted to do someday - horizontal displacement of the terrain.

The fractal map computed for a quadtree node already contains three independent fractal noise channels. The first one computes elevation and is seeded from heightmap data. The other two are used for detail material mixing and other things. There is also a fourth channel containing the global slope.
I modified the shader that computes vertex positions to displace them in the horizontal directions too, using one of the two independent fractal channels. The amount of displacement also varies with the global slope - flat regions are shifted minimally, but sloped parts, which are also treated as rock, are displaced a lot. This makes the rocky parts much more interesting. For the record, the actual equation used for displacing a point on a sphere in tangent space is this:



The next thing to do was to compute the normals of the deformed surface. The article in GPU Gems about deformers provides nice info about the Jacobian matrix that can be used for the job. After some pounding on my math circuits I managed to produce the following Jacobian of the above equation (in tangent space):



The normal is then computed as the cross product of the second and third columns, since the tangent and binormal are {0,1,0} and {0,0,1} respectively.

Finally, here is the result - on the left side the original with only vertical displacement, on the right side vertical & horizontal displacement:



There are still some issues, but the overall effect is quite nice. Of course, collisions with the sloped parts are no longer accurate, and I'll have to do something about that later.

Outerra planetary engine

Friday, May 22, 2009

Progressive download

The complete dataset for Earth with 76m sampling is 192GB; after wavelet compression it is roughly 14GB. That is still too much for direct download, and it will get slightly larger once more detailed data are used for some parts at higher latitudes that are currently only coarsely defined. A possible solution could be to use coarser sampling, but that rapidly washes out the nice terrain features that the fractal cannot easily substitute.
Fortunately it seems we will be able to use the 76m dataset, since progressive download works quite nicely.
Wavelet compression of the elevation data helps here because it not only compresses the data nicely, but perhaps more significantly arranges the data in layers by level of detail. A particular level of detail for quad-tree nodes can then be downloaded when needed.

The result is that landing on the ground in a mountainous area (where the data are largest) requires around 30-40MB of compressed data to be downloaded progressively and cached. Further data (3-8MB) are needed again only after traveling some 300km from the spot. So it seems that even a camera following a cruise missile can be handled, though at that speed it could possibly skip the most detailed data anyway.

Lastly, the downloader uses the libtorrent library, so the data can be downloaded using p2p, initially from an HTTP seed.

Finer sampling manifests itself mainly in mountainous areas, where high frequency features occur most.
Here's a comparison of the High Tatras as rendered by the engine and by nature.


Currently we are working on a material mixer that will allow us to assign different materials to different climatic bands, so the peaks here will not be covered by grass, and mountain pine will grow at higher elevations too.



Friday, March 6, 2009

chromium

Angrypig experimented with the Chromium code and ultimately managed to integrate the browser into our engine. It works fairly well, with mouse and keyboard too. It's still just a prototype, though; the integration with the engine will be reworked and optimized, for example to update only the changed parts of the web page and to make the whole thing asynchronous.

Also, we are going to hook into the V8 engine to handle javascript events in Google Maps, so that the camera moves to the location selected in the maps, and vice versa.








Thursday, February 26, 2009

Procedural terrain algorithm visualization

When I tried to visualize the resolution of the elevation dataset we are currently using, it occurred to me it would be a good idea to also visualize the various steps and enhancements of the procedural refinement algorithm. These more or less show how the algorithm evolved, too.


First, the terrain rendered with bilinear subdivision, just to show the resolution of the elevation data. We currently use a ~150m dataset for the whole Earth. All detail below 150m is generated procedurally.
To have a completely procedural planet, another algorithm would generate a rough map with continental plates and mountain ranges, which would then be used as the basis for further procedural refinement.



The same terrain, now with bicubic subdivision of tiles, producing molten-ice-cream hills. The terrain is textured mostly according to slope, with a slight perturbation by another fractal.



Adding homogeneous wavelet noise to the bicubically subdivided terrain. It doesn't look right, though - there is no erosion except the glacial one, which is visible even in the original data.
It shows the symptoms of simple fractal terrains - different parts look similar.



To fake the erosion, the amplitude of the noise is modulated by slope - flat areas get less noise, while steeper ones get more.
This starts to look right, at least for the valleys, but the slopes are still too monotonous.



The idea for the next step is that anything below the angle of repose is subject to excessive sediment deposition, and is therefore more even and less noisy.
This creates larger continuous grass surfaces on moderate slopes, with rocks here and there.



One problem with slope-dependent noise is that the mountaintops tend to be smooth, because the slope is low there.
The subdivision also spreads the flatness over a wider area.
The solution is to make the amplitude of the noise also rise with positive values of curvature, depending on elevation as well.


Also an interactive comparison here.