Monday, December 20, 2010

Book Title and OpenGlobe Update

Previously, I mentioned we were working on finalizing our book's title. Now it's finalized:

   3D Engine Design for Virtual Globes

I think the "3D Engine Design" branding will help the book get more attention from developers who might not initially think they are interested in virtual globes. Many books have nicknames, so we're calling ours the Virtual Globe Book for short. Let's see if it sticks.

This blog remains Virtual Globe and Terrain Rendering. Since it is about more than just our book (or at least it will be), it doesn't need to stay in sync with the book's title. The URL will not change either.

In other news, all of our example code now runs on Linux. After our first port, two examples needed more work.

Finally, we changed the license for the example code from the Boost license to the MIT license, mainly because the MIT license is more popular. All we care about is that readers, and random people who stumble upon our code, can use it without restriction, including in commercial applications.

Saturday, December 18, 2010

Math Foundations

We haven't been writing chapters in the order they will appear in our book. The most recent chapter, Math Foundations, is no exception. It will be Chapter 2, but is basically the last chapter I am writing. After this, I only have to finish the introduction and smooth out a few other sections. Kevin has a similar amount of work left, so we are on schedule!

This chapter is not on math foundations for general 3D graphics; rather, it is on useful mathematics for virtual globes, with a focus on ellipsoids. This chapter is unique in that it contains some derivations, but we are computer science practitioners, not mathematicians, so it also provides working code in a handy Ellipsoid class. Our colleague, Jim Woodburn, helped significantly with the derivations.

This chapter covers:
  • Geographic (longitude, latitude, height) and WGS84 (x, y, z) coordinates.
  • Ellipsoids
    • Oblate spheroids (Earth's shape).
    • Surface normals: geodetic vs. geocentric.
    • Conversion between geographic and WGS84 coordinates.
    • Scaling an arbitrary point in space to the ellipsoid surface.
    • Curves on ellipsoids.
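To make the coordinate bullets above concrete, here is a small Python sketch of the core formulas. The book's actual code is a C# Ellipsoid class; the constant and function names below are mine, not the book's.

```python
import math

# WGS84 semi-major and semi-minor axes, in meters.
A = 6378137.0
B = 6356752.314245
E2 = 1.0 - (B * B) / (A * A)  # first eccentricity squared

def geodetic_surface_normal(lon, lat):
    """Unit normal to the ellipsoid surface at geodetic (lon, lat) in radians."""
    cos_lat = math.cos(lat)
    return (cos_lat * math.cos(lon), cos_lat * math.sin(lon), math.sin(lat))

def geocentric_surface_normal(p):
    """Normalized position: the normal of the sphere through p, not the ellipsoid."""
    x, y, z = p
    m = math.sqrt(x * x + y * y + z * z)
    return (x / m, y / m, z / m)

def geographic_to_cartesian(lon, lat, height):
    """Convert geodetic (lon, lat, height) to WGS84 Cartesian (x, y, z)."""
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + height) * math.cos(lat) * math.cos(lon)
    y = (n + height) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + height) * math.sin(lat)
    return (x, y, z)

def scale_to_geocentric_surface(p):
    """Scale an arbitrary point along the geocentric ray onto the surface."""
    x, y, z = p
    beta = 1.0 / math.sqrt((x / A) ** 2 + (y / A) ** 2 + (z / B) ** 2)
    return (beta * x, beta * y, beta * z)
```

The geodetic normal depends only on longitude and latitude, while the geocentric normal is just the normalized position; the two differ everywhere except at the equator and poles, which is what the surface-normals example application visualizes.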
I also wrote two related example applications, one demonstrating geodetic and geocentric surface normals, and another for computing curves on an ellipsoid by slicing a plane through it. Here's a video of the latter:

Monday, November 29, 2010

A Tip for Finishing Chapters

It's always fun to start writing a chapter, especially if it is on a particularly interesting topic. Finishing the chapter can be a bit harder - sounds like a software project, right?

Finishing involves lots of details: rewording, double checking equations, actually doing TODOs (or sometimes just removing them), etc. After reading and writing so much about a topic, I want to move on to the next chapter - but I can't, not until I finish what I started.

A tip that helps me finish a chapter is to tell people when I'm going to finish it. In particular, I email potential reviewers, asking if they are interested in reviewing it and telling them when I expect to be done. Now I have to finish; I can't ask these kind people to do me a favor and not follow through.

This tip doesn't work magic, but it helps keep me on track. I can't expect to write 30 pages overnight just because I told someone I would. But if I'm 2/3 through a chapter, I can predict how long it will take to finish at a reasonable pace, and tell reviewers to help ensure I keep that pace.

Wednesday, November 24, 2010

OpenGlobe on Linux

We develop the example code for our book using Windows, but we realize not all readers use Windows. The good news for them is, over the past few days, we got the vast majority of our code to build and run on Linux using Mono. Here's a screen shot of one example from the globe rendering chapter:

We tested using Ubuntu 10.04, which comes prepackaged with MonoDevelop 2.2 and Mono 2.4.4. Our code builds cleanly with no warnings, and, thankfully, all of our unit tests pass:

(Yes, we actually write unit tests for the book code.) We need to investigate our examples that use a third-party Shapefile reader a bit more, but we may wind up writing our own Shapefile reader when the manuscript is done.

I have to say that I am quite impressed with Mono, MonoDevelop, OpenTK, and NVIDIA's Linux drivers. Porting to Linux was even easier than I hoped.

We also plan to test on Mac when Apple releases OpenGL 3.3 drivers.

Sunday, November 21, 2010

Vertex Transform Precision

I just finished writing a chapter on a rendering artifact that many people don't even know exists until they try to use massive world coordinates, and suddenly objects start to bounce or jitter as shown in this video:

Eliminating jitter is important for high precision rendering in virtual globes, and in massive-world games, like flight simulators. When world coordinates are large, yet users can zoom in very close to objects, floating point round-off errors manifest themselves as jitter.

This chapter is on techniques for eliminating jitter. Major topics include:
  • Why exactly jitter artifacts occur.
  • Eliminating jitter by rendering relative to center, i.e., using a floating origin [code].
  • Rendering relative to eye.
    • A slow CPU implementation that I only recommend for certain cases [code].
    • A fast GPU implementation we use in Insight3D [code], an even more precise GPU implementation [code], and a technique I've called Precision LOD [code].
  • Lots of details on the trade-offs between approaches.
If you love (or hate) jitter like I do, also read our coworker's article, which served as the primary reference for this chapter. Also see the DSFUN90 Fortran library, which contains routines for performing double-single arithmetic, i.e., emulating doubles in software.
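As a taste of the double-single idea mentioned above, here is a Python sketch of splitting a 64-bit coordinate into a high/low pair of 32-bit floats, the encoding behind the relative-to-eye GPU techniques. The function names are mine, not the book's.

```python
import struct

def to_float32(value):
    """Round a 64-bit Python float to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', value))[0]

def double_to_two_floats(value):
    """Split a double into a high/low pair of 32-bit floats.

    high carries the leading bits and low the residual; summed in
    double precision, they recover far more of the original value
    than a single 32-bit float can hold. This is how large world
    coordinates can be fed to a GPU that only does 32-bit math.
    """
    high = to_float32(value)
    low = to_float32(value - high)
    return high, low

# A WGS84-scale coordinate: too large for one float to hold precisely.
x = 6378137.2345678
high, low = double_to_two_floats(x)
```

At this magnitude a single 32-bit float has a spacing of 0.5 meters, which is exactly the sub-meter jitter seen in the video; the high/low pair reduces the representation error by many orders of magnitude.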

Tuesday, November 16, 2010

A K Peters Catalog, Amazon, and More

I just received a hard copy of A K Peters' 2010-2011 catalog in the mail. Exciting. What's even more exciting is that our book is listed - with an ISBN and all!

To add to the excitement, I just found our book on Amazon. It is also on the CRC Press website. (A K Peters is now part of CRC Press.)

If you visit the Amazon or CRC Press page, you may wonder why the title is 3D Engine Design for Virtual Globes, not Virtual Globe and Terrain Rendering, which is listed in A K Peters' catalog and our blog. They are actually the same book (check the ISBN).

In the original book proposal, I suggested the title 3D Engine Design for Virtual Globes. After a few months of writing, I suggested a title change to Virtual Globe and Terrain Rendering since there is more content on rendering algorithms than engine design, and I wanted to stress our coverage of terrain. The title still isn't final though. Once we firm it up, we'll let everyone know. Of course, the content remains unchanged.

I also wanted to supplement the description of our book in the A K Peters catalog. I am listed as the only author, but I assure you, Kevin is the coauthor. The catalog must have gone to press before Kevin came on board. Our book is listed at approximately 350 pages. We have actually already surpassed that, and Kevin and I will still be writing furiously for another six weeks. The page count could climb as high as 450 in our draft, but once it goes through copyediting, layout, etc., I can see it becoming shorter as figures are organized, fonts are selected, and so on.

Finally, we plan on posting some sample content in February or March, after the manuscript goes through copyediting. In the meantime, we'll continue writing posts on chapters as we finish them.

Tuesday, November 2, 2010


Short version: I'm on twitter now: pjcozzi.

Longer version:

I remember when I first learned what a blog was back in 2000. I was an intern at Intel, and stumbled across Joel's now famous blog. The first post I read was The Guerrilla Guide to Interviewing. After that I was hooked, and followed Joel almost religiously until 2006 or so. I kept following after that, but not as frequently (maybe grad school had something to do with it). At the start of 2008, AGI launched a blog for Insight3D, and I wrote my first blog post. I also started reading dozens of graphics blogs, many of which are in our blogroll.

Since then, I've gotten used to the idea of writing blogs to share technical ideas or product news, so it was natural to launch this blog for our book. During all this blogging, I somehow managed to ignore twitter. Clearly, I was missing out, so I'm on twitter now: pjcozzi. All I need to do now is make a LinkedIn page and I'll be completely connected! Maybe once the manuscript is done...

Sunday, October 31, 2010

Renderer Design

With about two months left to complete the manuscript, Kevin and I are in overdrive. I suppose it helps that I wrote the example code for the latest chapter almost a year ago, so writing the chapter was just writing, selecting code snippets, and creating figures - in theory anyway. Since this chapter is code-heavy, I went back and fine-tuned a lot of code.

This chapter is on renderer design, that is, designing the layer of a 3D engine that sits between the underlying rendering API, like OpenGL or Direct3D, and the rest of the engine. Since my experience is largely with OpenGL, the chapter focuses on OpenGL, but it still mentions Direct3D in many places for comparison. Although I'd love to fill an entire book on the topic (hmm), the chapter is only 70 pages, so it requires some previous experience with OpenGL or Direct3D, as does our book in general.

As I said a while back, this is the third time I've designed such an abstraction layer, so much of my advice is battle tested. I've learned to favor ease of use and flexibility over performance, although I mention where performance can be improved, perhaps only marginally in the grand scheme of things, at the cost of something else.

The major components of our renderer are shown here:

The highlights of this chapter include:
  • Motivation for a renderer layer: ease of development, portability, flexibility, robustness, performance, additional functionality. I can't come up with a reason not to do it except for "hack something together as fast as possible."
  • Big picture: the device [code] and contexts [code].
  • State management: global render state vs. state objects [code], draw state [code], clear state [code], sorting by state.
  • Shaders: compiling and linking [code], built-in constants [code], vertex attributes [code], uniforms [code], automatic uniforms [code], and shader caching [code].
  • Vertex data: vertex buffers [code], index buffers [code], vertex arrays [code], meshes [code], creating vertex arrays from meshes [code].
  • Textures: read and write pixel buffers [code], 2D textures [code], samplers [code], texture units [code].
  • Framebuffers [code].
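As a rough illustration of the state-management bullets, here is a toy Python sketch of a context that applies only state differences at draw time. All class and attribute names are illustrative, not OpenGlobe's actual API.

```python
# Immutable-ish state bundles passed to draw; the context diffs them
# against the previously applied state so redundant "GL calls" are
# skipped. Sorting draws by state makes these diffs cheap.

class RenderState:
    def __init__(self, depth_test=True, blending=False, face_culling=True):
        self.depth_test = depth_test
        self.blending = blending
        self.face_culling = face_culling

class Context:
    def __init__(self):
        self._applied = None
        self.calls = []  # stands in for the underlying GL calls

    def draw(self, render_state):
        # Apply only the state that actually changed.
        for attr in ("depth_test", "blending", "face_culling"):
            new = getattr(render_state, attr)
            if self._applied is None or getattr(self._applied, attr) != new:
                self.calls.append((attr, new))
        self._applied = render_state
        self.calls.append(("draw_elements",))
```

Two consecutive draws with identical state issue the state calls only once; this is the payoff of state objects over a global, mutable render state.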

Sunday, October 24, 2010

Game Engine Gems, Volume 2

I'm excited that the table of contents for Game Engine Gems, Volume 2 is now available. The book is scheduled to be released at GDC 2011, this coming March.

Although I don't know anything beyond the titles and authors, I am looking forward to many of the articles.

Rémi Arnaud's 3D in a Web Browser sounds timely and is always a hot topic where I work. Recently, I've played with JOGL Java Applets, but I'd also like to look at WebGL. I'm curious to read what Rémi has to say.

Noel Llopis's High-Performance Programming with Data-Oriented Design also sounds worthwhile. I feel my design skills have plateaued over the past year or two; I couldn't even give you a completely legit definition of "Data-Oriented Design." Some reading in this area will be good for me.

I'm happy to see that the Systems Programming part has a good bit of threading coverage, including Julien Hamaide's Thread Communication Techniques, Martin Fleisz's A Cross-Platform Multithreading Framework, and Matthew Johnson's Producer-Consumer Queues. I can't imagine that anyone is writing a new engine without at least some multithreading support. I'm curious to read about communication between threads. I have to admit to being a big fan of message queues, but I'm sure they are not the be-all and end-all. (P.S. Our book also has a chapter on multithreading.)

Finally, I contributed two articles on OpenGL techniques. Here are their abstracts:

Delaying OpenGL Calls
It is a well known best practice to write an abstraction layer over a rendering API such as OpenGL. Doing so has numerous benefits that include improving portability, flexibility, performance, and above all, ease of development. Given OpenGL’ s use of global state and selectors, it can be difficult to implement clean abstractions for things like shader uniforms and frame buffer objects. This chapter presents a flexible and efficient technique for implementing OpenGL abstractions using a mechanism that delays OpenGL calls until they are finally needed at draw time.
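The abstract's core idea can be sketched in a few lines of Python. The class below is a hypothetical illustration, not the chapter's code, with tuples in a log standing in for real OpenGL calls.

```python
# Delaying GL calls: setting a uniform only marks it dirty; the
# actual "GL calls" happen at draw time, after the program is bound.
# This sidesteps OpenGL's selector problem (glUseProgram as global
# state) without binding the program on every set.

class ShaderProgram:
    def __init__(self, gl_log):
        self._gl = gl_log      # records pretend GL calls for illustration
        self._uniforms = {}
        self._dirty = set()

    def set_uniform(self, name, value):
        # No GL call here -- just remember the latest value.
        self._uniforms[name] = value
        self._dirty.add(name)

    def bind_and_clean(self):
        # Called by the context at draw time, when this program is
        # known to be the one in use.
        self._gl.append(("glUseProgram", id(self)))
        for name in sorted(self._dirty):
            self._gl.append(("glUniform", name, self._uniforms[name]))
        self._dirty.clear()
```

Note that setting the same uniform twice between draws costs one GL call, not two, which is part of the efficiency claim in the abstract.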
A Framework for GLSL Engine Uniforms
The OpenGL 3.x and 4.x core profiles present a clean, shader-centric API. Many veteran developers are pleased to say goodbye to the fixed function pipeline and the related API entry points. The core profile also says goodbye to the vast majority of GLSL built-in uniforms, such as gl_ModelViewMatrix and gl_ProjectionMatrix. This chapter addresses the obvious question: what to use in place of GLSL built-in uniforms.
I'll write an actual review when the book comes out (excluding my articles, of course, and with the disclaimer that, as a contributor, I am biased), and I am going to finish reading and reviewing GPU Pro 2 as soon as I am done writing the manuscript for my own book!

Tuesday, October 12, 2010

Welcome Kevin

My coworker, Kevin Ring, has joined me as a coauthor to write a few of the final, but core, chapters. Kevin has served as a reviewer for many of the chapters I've already written and has contributed quite a bit to the book's example code, including the message queue, as well as administering the continuous integration server. He even contributed some book content to the section on implementing a message queue. At this point, he is practically a coauthor anyway.

He will be writing about terrain LOD and imagery tiling, while I finish up chapters on renderer design and precision. We have plenty to do by the end of the year but I think we are in good shape, and I am not one to wait until the last minute to panic.

Tuesday, September 21, 2010

Writing Captions for Figures

Before reading a book or paper, I always browse its figures and read their captions. If it seems the content will interest me, I go back and actually read it. I suspect almost everyone does this.

Jim Kajiya makes an excellent point about this in How to Get Your SIGGRAPH Paper Rejected, which I believe applies equally to books:
Ivan Sutherland once told me that Scientific American articles are constructed so that you can get the point of the article just by reading the captions to the illustrations. Now, I'm not suggesting that you write a technical comic book; but you should take a look at those SIGGRAPH papers you were initially attracted to and see how they went about getting their point across.
Given that a reader is likely to read figure captions before the main text, it is important to write good captions! A trick I use is to write the captions before writing the main text. This has worked pretty well so far. You need to be careful not to include too much in the caption, but a little redundancy isn't bad: I like captions to reiterate a key point from the main text.

Sunday, September 12, 2010

Multithreaded Resource Preparation

I gave myself three weeks to write our chapter on multithreaded resource preparation. I managed to come in right on time, even with Labor Day weekend in the mix. I am exhausted.

This is not an introduction to multithreading, but rather a look at how to apply multithreading to improve the performance and responsiveness of a 3D engine by moving I/O, CPU-intensive algorithms (think triangulation, vertex cache optimization, etc.), and renderer resource creation to worker threads.

The chapter highlights include:
  • Brief review of hardware parallelism
    • CPU: Pipelining, superscalar, SIMD, multithreading, Hyper-Threading, multi-core.
    • GPU: Pipelining, shaders, CPU/GPU parallelism.
  • Architectures for multithreaded resource preparation
    • Using message queues to communicate between threads [code contributed by Kevin Ring].
    • Coarse grain threads [code].
    • Pipeline of fine grain threads.
  • Multithreading with OpenGL
    • One GL thread, multiple worker threads.
    • Multiple threads, one context.
    • Multiple threads, multiple contexts.
      • Shared contexts.
      • CPU vs GL synchronization. Fences [code].
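As a minimal illustration of the message-queue architecture in the bullets above (the book's code is C#; this Python sketch is mine, with a string standing in for the expensive preparation work):

```python
import queue
import threading

# A worker thread does I/O and CPU-heavy preparation, then posts
# finished results to a queue that the rendering thread drains once
# per frame, keeping GL resource creation on the thread that owns
# the context.

done_queue = queue.Queue()

def worker(requests):
    for name in requests:
        prepared = f"processed:{name}"  # stands in for triangulation etc.
        done_queue.put(prepared)

def drain_one_frame():
    # On the rendering thread: create renderer resources from
    # whatever the workers have finished so far, without blocking.
    created = []
    while True:
        try:
            created.append(done_queue.get_nowait())
        except queue.Empty:
            return created

t = threading.Thread(target=worker, args=(["terrain", "imagery"],))
t.start()
t.join()
resources = drain_one_frame()
```

The non-blocking drain is the important part: the frame rate never waits on a slow load, which is the responsiveness argument the chapter makes.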
Also, here's a list of resources on OpenGL multithreading that I found useful:

Wednesday, September 1, 2010

Proofreading your own Writing

Proofreading our own writing is hard because, well, we wrote it. We tend to read what we think we wrote and not what we actually wrote.

When I first write something, it usually isn't that great. That doesn't worry me; I want to get ideas down and avoid staring at a blank screen. I write a few pages, create a PDF, proofread it, and immediately rewrite the parts I don't like or that have blatant grammar errors.

This type of proofreading isn't enough though.

The next day I work on the book, I reread what I wrote the previous day, hopefully after forgetting most of it. I'm able to improve the quality to the point where I'm not ashamed, or perhaps even proud, to show it to reviewers.

This second proofreading also helps me get back into the zone for writing the next section and helps each section flow into the next one. I find getting into the zone for writing much harder than getting into the zone for coding, so I feel this trick goes a long way.

I always proofread my previous day's work before writing. Sometimes, I even proofread work from a few days earlier. By the time I "finish" a chapter, I've probably read it five times - and reviewers still find things!

Sunday, August 22, 2010

Rendering Vector Data

I just finished writing our chapter on rendering vector data on a globe. Leave it to me to write 60 pages on something as simple as rendering polylines, polygons, and points! Of course, it is not quite as simple as it sounds. I'd like to share an overview of the content and screen shots of the example code, which you can download now: Chapter07VectorData in OpenGlobe.

Country polygons, state and river polylines, and city points (billboards)
Country polygons and river polylines

This chapter begins with two short sections on sources of vector data and on avoiding z-fighting between vector data and the globe it is drawn on. The bulk of the chapter is then in these three sections:
  • Polylines
    • Layouts: strips, loops, and indexed lines.
    • Batching and static vertex buffers.
    • Rendering wide lines using a geometry shader [code].
    • Shaders for rendering outlined (two color) lines [code].
  • Polygons
    • Overview of raster techniques for polygon rendering.
    • A pipeline for geometry-based polygon rendering, including:
      • Triangulation: ear clipping [code], including a nifty way to ear clip polygons on an ellipsoid without projecting to a tangent plane [code].
      • Subdivision to make a polygon's triangulation better approximate the ellipsoid [code].
  • Billboards
    • Rendering billboards with a geometry shader [code].
    • Using and packing texture atlases [code].
    • Text rendering.
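To give a flavor of the triangulation bullet, here is a toy Python ear-clipping triangulator for simple 2D polygons. The chapter's code additionally ear clips directly on the ellipsoid without projecting to a tangent plane, which this sketch does not attempt.

```python
def triangulate(polygon):
    """Ear-clip a simple, counterclockwise 2D polygon into triangles.

    At each step, find a convex vertex whose ear triangle contains no
    other polygon vertex, clip it, and repeat until one triangle remains.
    """
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def point_in_triangle(p, a, b, c):
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:
                continue  # reflex vertex: not an ear
            if any(point_in_triangle(p, a, b, c)
                   for p in verts if p not in (a, b, c)):
                continue  # another vertex inside: not an ear
            triangles.append((a, b, c))
            del verts[i]
            break
    triangles.append(tuple(verts))
    return triangles
```

A polygon with n vertices always yields n - 2 triangles, so a square produces two and an L-shape of six vertices produces four.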
The example code also includes a partial ESRI Shapefile reader. The vector, raster, and icon data used is included in the example and can be downloaded from our data page.

Of course, the chapter contains much more than the above bullets, but these are the highlights. Now, I'm off to write the chapter on moving much of this work (disk/network access, triangulation, texture atlas packing, GL resource creation) off the main thread. Since most of the code for this is already done, I'm hoping it will be a pretty easy chapter to write.

Monday, August 2, 2010


I'd like to share some of my notes from SIGGRAPH. I promise this will be my last SIGGRAPH post, then I will return to directly book-related things.
  • Courses: Morgan McGuire organized an excellent course: Stylized Rendering in Games. There was lots of talk on rendering silhouettes, where, like a lot of things, "the devil is in the details." I liked how the Borderlands talk went into almost painful detail on using a Sobel filter for silhouette rendering. BTW, a Sobel filter is also useful for computing terrain normals.

    Beyond Programmable Shading was a hit as always. I go to this course every year and it always has new content. In particular, Jonathan Ragan-Kelley gave a great talk on scheduling the graphics pipeline. It's rare to find good information on this kind of stuff. The course ended with a panel on programmable vs fixed function hardware. For me, the takeaway was that fixed function hardware is fast and uses very little power, so it will remain for algorithms that are unlikely to change (e.g. video decode, rasterization, etc.). My opinion from a developer's perspective is: fixed function is fine for things I have no interest in changing, but I want everything else to be programmable.
  • Panels: There was a really unique panel called "CS 292: The Lost Lecture," in which Richard Chuang, co-founder of PDI, and Ed Catmull, president of Pixar and Disney Animation Studios, reflected on a graphics course Ed Catmull and Jim Blinn taught at Berkeley in 1980, which Richard Chuang attended. The panel included video clips from the course and several memorable quotes (disclaimer: I tried to write these down word for word, but no promises):

    • "It's the kind of thing where you understand it and it is still hard." - Ed Catmull on visible surfaces in 1980.
    • "It would be nice if this thing could do a little more. In fact, it would be good if it were programmable." - Ed Catmull on the GPU in 1980.
    • "Because we didn't know any better, we were on the forefront of a brand new field." - Richard Chuang.
    • "The result of teaching these two classes created our biggest competitor" - Ed Catmull speaking about PDI/Dreamworks.
    • "If we don't have any major surprises then we are becoming too conservative." - Ed Catmull.

    When asked during the panel, Ed Catmull did not object to having the videos of his 1980 course released. It would be great to have such historic videos available.
  • Books: One of my favorite things about SIGGRAPH is all the new books. I'm happy to see the fifth edition of the OpenGL SuperBible is out, and it only covers the core profile! It's about time we got a book covering just "modern" OpenGL. Another book that looks interesting is Writing for Computer Science. It's been out since 2004, so I'm surprised that I haven't run into it before. I probably should have read it before starting on my own book!
  • Posters: Stefan Elsen had an interesting poster on real-time procedural generation of planets using fractals: "WorldSeed: Fractal Worlds in Realtime." His website has much more information including an impressive video. Our poster, GPU Ray Casting of Virtual Globes, was also well received. I actually ran out of hard copies of the abstract!
  • OpenGL: Those notes are in my previous post.
There was a whole lot more to see at SIGGRAPH but these were the highlights for me. Now that all the SIGGRAPH fun is behind us, I need to lock myself in my house and finish this manuscript!

Saturday, July 31, 2010

OpenGL 4.1 and more at SIGGRAPH

After attending SIGGRAPH, I am convinced that now, more than ever, is a great time to be using OpenGL. There was lots of exciting news:
  • The OpenGL 4.1 and GLSL 4.1 specs were released, very shortly followed by NVIDIA drivers. This includes long requested features such as ARB_separate_shader_objects and ARB_get_program_binary. I was pleasantly surprised to see the extra debugging information now available with ARB_debug_output. This will be the first thing I try out.

    Of particular interest to virtual globe developers is ARB_vertex_attrib_64bit which will help the common "jitter" problem on machines with GL 4.x hardware. Dealing with this problem on pre-4.x hardware is a topic in our book. Until it comes out, check out Deron's Precisions, Precisions article.

  • The OpenGL SDK reference pages were updated for 3.3 and 4.1 core profiles! You no longer have to dig through the spec to find reference material for the latest GL features (not that it was that bad). The 2.1 reference pages are still around if you need to look up deprecated functions.

  • If you didn't get to attend the OpenGL BOF (or even if you did), I recommend reading through the slides. There's lots of exciting news, including a lightweight texture file format, KTX, for OpenGL and OpenGL ES, a 0.9 version of a modern GLU: GLU3, and progress towards OpenGL conformance tests.

    As lame as it sounds, I am pumped about the conformance tests. They should really improve the quality of OpenGL drivers, which have already become increasingly stable on recent hardware and operating systems.

  • NVIDIA's OpenGL 4.0 for 2010 presentation is also worth a look. I was glad to see how crowded this session and the BOF were!

Even though I am thrilled with the direction of OpenGL, there are two things I'd like to see:
  • 3.4 - I was expecting 3.4 to be released at the same time as 4.1, but instead 3.x gained new ARB extensions, including the ones listed above minus ARB_vertex_attrib_64bit. Since not all vendors support all ARB extensions, I would have rather seen these great features rolled into 3.4. That way, the features are guaranteed to be implemented, and application developers can simply say their application requires 3.4 instead of 3.4 plus whatever extensions. I would not mind seeing 3.4 released with 4.2 (or 5.0?) and include all applicable extensions from 4.1 and 4.2.
  • I'd also like to see debugging support for modern GLSL shaders. If OpenGL really wants to be a Direct3D superset, it needs better tool support. Right now, there is glslDevil, but it does not support core profile development. There is also NVIDIA's Parallel Nsight (see their SIGGRAPH presentation), which currently has very little OpenGL support, and what support there is requires the paid version. I am under the impression, though, that NVIDIA is working on more OpenGL and GLSL features. They will hit a home run if we can seamlessly debug GLSL shaders!

Monday, July 19, 2010

Geometry Shader Silhouettes without Adjacency Information

Rendering silhouette edges is a classic problem in NPR. It has other uses too - for example, in terrain rendering, it can convey quite a bit about where ridge lines are:

Of course, the above comparison is not very fair. The image on the left is just shaded by height (no lighting), which can hide terrain features, especially for horizon views. Regardless, silhouettes are cool, and most graphics developers are familiar with the standard geometry shader approach based on adjacency information (if not, see Inking the Cube: Edge Detection with Direct3D 10 or Single Pass GPU Stylized Edges). What I'd like to briefly share with you is a geometry shader approach that does not require adjacency info, which means you won't need one index buffer with adjacency info and another without it!
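I won't spoil the full shader here, but the underlying test can be illustrated on the CPU: one adjacency-free heuristic flags triangles whose face normal is nearly perpendicular to the view direction as silhouette candidates, so a geometry shader can emit edge geometry for them directly. This Python sketch is my own simplification, not necessarily the exact test used in the shader.

```python
import math

def is_silhouette_candidate(normal, to_eye, threshold=0.05):
    """Flag a triangle as near the silhouette without adjacency info.

    A triangle whose face normal is nearly perpendicular to the view
    direction (dot product near zero) lies close to the silhouette.
    No neighboring-triangle normals are needed, so no adjacency index
    buffer is required.
    """
    dot = sum(n * e for n, e in zip(normal, to_eye))
    length = (math.sqrt(sum(n * n for n in normal))
              * math.sqrt(sum(e * e for e in to_eye)))
    return abs(dot / length) < threshold
```

The trade-off versus the adjacency-based test is that this classifies whole triangles rather than true silhouette edges, so the threshold controls how thick the detected silhouette band is.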

Tuesday, July 13, 2010

OpenGL at SIGGRAPH 2010 Update

The OpenGL Community Drink at SIGGRAPH has now been scheduled:

   Monday, July 26th, 6pm
   Veranda Bar in the Figueroa Hotel

The Figueroa Hotel is one of the closest hotels to the convention center. See the Hotel Map. There is no need to RSVP, but you are welcome to chime in here, email Christophe, leave a comment below, or just show up; we're not exactly going for anything formal.

Friday, July 9, 2010

OpenGL at SIGGRAPH 2010

One of the many reasons I love SIGGRAPH is it allows me to stay current with the now quickly moving world of OpenGL. This year there are many sessions covering OpenGL:
  • OpenGL BOF: Always has the latest OpenGL news. It might even be the most attended BOF at the conference - the free beer doesn't hurt. It is on Wednesday, 5:15-7:15pm.
  • OpenGL 4.0 for 2010: Hosted by NVIDIA on Wednesday, 10:15-11:30 am.
  • OpenGL Community Drink: Currently being organized by Christophe Riccio (creator of GLM and the OpenGL Samples Pack). This will be a great opportunity to meet people that are active on the forums or just using OpenGL in general. The time and place have not been set yet but it is likely to be Monday night. Join the planning discussion here. This is an informal meeting and not an official SIGGRAPH event.

Thursday, July 1, 2010

GPU Ray Casting of Virtual Globes at SIGGRAPH 2010

SIGGRAPH is just around the corner! I am excited to present our poster: GPU Ray Casting of Virtual Globes. The papers and presentations page contains the abstract, video, and all that good stuff.

The poster sessions are Tuesday and Wednesday, 12:15-1:15pm. If you are interested in GPU ray casting, rendering ellipsoids, or anything related, stop by our poster at location 80B in the West Lobby.

Wednesday, June 30, 2010

GPU Pro Review - 3D Engine Design Section

I recently started reading GPU Pro and it is outstanding! I love the full color and syntax highlighting! Many of the articles are even useful research for our own book. I'd like to write a full review but who knows when I'll get through the 700+ pages. So I'm going to write a review as I read each section. These reviews will be more about the ideas the articles give me than a complete review of the contents, writing style, etc. But isn't that the point of reading anyway - to get new ideas?

The 3D Engine Design section was the first to pique my interest. I read two of its four articles:

Porting Code between Direct3D 9 and OpenGL 2.0 - Wojciech Sterna

A chapter in our book is on designing an abstraction layer over OpenGL so the rest of the book can contain API-agnostic discussion and code examples (with the exception of the use of GLSL for shaders). This isn't just important for book writing; all major graphics applications should have a renderer abstraction layer for flexibility, portability, performance, and most importantly, ease of development.

If you count a graduate class project, this is the third time I've designed such a layer (check it out on SourceForge). Every time, I've done so with OpenGL, so I was excited to see this article on the differences between Direct3D and OpenGL. While designing the layer, I looked at the Direct3D documentation from time to time, but I am not positive the design is ideal for both APIs, so I looked forward to learning more from this article.

And I did learn quite a bit. Perhaps I should be embarrassed to say, but I didn't know anything about the fine grain control available in Direct3D with memory pools. I also felt good learning that many things like vertex buffers and textures seem very similar between the two APIs.

I liked the mention of Cg - since I was recently asked why not use Cg instead of duplicating shaders in GLSL and HLSL. Although it sounds like two slightly different Cg shaders would need to be written anyway. If anyone has experience with maintaining shaders for different APIs, I'd like to hear about it.

Overall, this article was pretty good, and I like that it was only 11 pages and right to the point. If I had to criticize it, I would have liked it to be about Direct3D 10/11 and the OpenGL 3.2 core profile. But I understand that Direct3D 9 is still widely used, and OpenGL 3.2 might not even have been out when this article was written. It would have also been nice to see more tips on designing an abstraction layer that allows for reasonable implementations using both APIs, instead of just the API differences - although the example code more than makes up for this.

Practical Thread Rendering for DirectX 9 - David Pangerl

This article also jumped out at me since I am preparing to write the threading material for our book. My focus is on using threads for out-of-core rendering: reading data from secondary storage, CPU-intensive processing of that data (e.g., computing normals), then finally creating renderer resources (e.g., vertex buffers, textures) - assuming I get all the example code working!

This short article is on another use of threading: using a dedicated thread for issuing rendering commands. The basic idea is to fill a command buffer instead of directly issuing Direct3D calls (I'm pretty sure this would also work with OpenGL). A dedicated rendering thread then executes the command buffer. Filling the command buffer is much faster than calling Direct3D functions, which reduces the CPU usage of your main thread. An interesting statistic from the article: the author found an average Direct3D function call takes 15,231 instructions. Using the dedicated rendering thread, the author saw improvements of up to 200% in tests, and a 15% improvement in a real-time game. Considering all that is going on in a game, 15% is great!
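A stripped-down Python sketch of the command-buffer pattern described above (the names and structure are mine, not the article's):

```python
import threading

# The main thread records cheap command tuples instead of calling the
# rendering API; a dedicated thread later executes them. Recording is
# just a list append, which is why it is so much cheaper than an
# actual Direct3D/OpenGL call.

class CommandBuffer:
    def __init__(self):
        self._pending = []
        self.executed = []

    def record(self, name, *args):
        self._pending.append((name, args))  # no API call here

    def execute_on_render_thread(self):
        def run(commands):
            for name, args in commands:
                # A real implementation would issue the API call here.
                self.executed.append((name, args))
        # Hand the filled buffer to the render thread and start fresh.
        commands, self._pending = self._pending, []
        t = threading.Thread(target=run, args=(commands,))
        t.start()
        t.join()
```

A real implementation would keep the render thread alive and double-buffer the command lists rather than join each frame; the sketch only shows the recording/execution split.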

One thing I'm curious about is what happens when uploading large objects like vertex buffers and textures. It's not clear to me if a copy of the data is made for the rendering thread or if this falls into their category of commands that synchronize with the rendering thread and execute immediately. I'm also curious about operations like compiling and linking shaders, which may be threaded by the driver anyway. I suppose this can happen offline in Direct3D.

It's worth mentioning that if you have an abstraction layer over your rendering API (and you should!), this could all be done behind the scenes with the user having no idea that there is a dedicated rendering thread.

In closing, I think this is a worthwhile idea that I'd like to implement myself at some point. Also, I'm not sure how this compares to multithreaded rendering in Direct3D 11.

Now onto the Game Postmortems section!

Saturday, June 26, 2010


Welcome to our graphics blog! This will be a journal of sorts for our upcoming book tentatively titled Virtual Globe and Terrain Rendering to be published by AK Peters, Ltd. in time for SIGGRAPH 2011. We'll talk about our experiences preparing the manuscript and the heaps of example code (which you can download as we work on it, before the book is available!). The about page has more information on our book, including the planned contents.

Even if you don't think you're interested in our book project, our blog should be useful for real-time computer graphics in general. Many of the techniques used for virtual globe rendering are applicable in all sorts of places - GIS, games, simulations, etc. We anticipate at least as much general graphics content as we do book-related content.

Stay tuned!