Monday, August 22, 2011

Electronic Version of Our Book

We have had several requests for an electronic version of our virtual globe book.  I'm happy to say that it is now available on VitalSource.  You can read it on your desktop or transfer it to your iPhone, iPad, or iPod Touch.  This electronic version is discounted, and there are rental options that cost even less.  Pretty cool.


Update: Unfortunately, VitalSource only accepts credit cards from the US and Canada because of rights issues with some publishers. Our publisher, CRC Press, is working on a new website that will solve this problem. We'll keep you posted.

Sunday, August 14, 2011

SIGGRAPH 2011 Trip Report: Day Five

I spent the morning in the Mobile BOF. One theme I am very happy with is that using HTML5, including WebGL, sounds like a viable strategy for targeting mobile devices. In his talk, Jon Peddie predicted HTML5 will have 100% penetration on mobile platforms by late 2012. In all fairness, Flash already has 100%. Jon's claim, which was echoed elsewhere during the conference, is that WebGL's biggest problem will be misinformation, not technology.

Neil Trevett also highlighted WebGL in his talk, which included a demo of the WebGL Aquarium with 100 fish running on a Tegra 2 tablet! This was an early, not-yet-optimized WebGL implementation in WebKit on Android, but it is very promising.

Someone brought up an interesting question: if we develop in HTML5 instead of native apps, how is Apple, who makes a lot of money through their App Store, going to respond? Are they going to treat it like they did Flash? I don't recall how this was answered, but my take is that Apple will have to allow it; otherwise, consumers will reject products that don't support HTML5. HTML5 is not some corner-case technology; it is the web.

There were several other great presentations at the BOF, but my favorite was Tom Olson's talk on writing portable OpenGL ES 2 code (similar GDC 2011 slides). Between this talk and Aras Pranckevičius' talk earlier in the week, I really gained an appreciation for targeting multiple mobile devices. Although mobile segmentation is better than it used to be, there are still several different operating systems and a wide array of hardware with varying performance.

Just like in desktop OpenGL, OpenGL ES has implementation-dependent limits such as the number of vertex attributes and the number of textures. In my own code, I usually try to stay within these limits by doing things like packing multiple values into a single vertex attribute. In Tom's talk, I also learned that the precision qualifiers can cause cross-platform issues. lowp guarantees at least 10 bits, mediump guarantees 16, and highp guarantees 24. This sounds OK at first, except that some platforms may always use 32 bits, so if we only test on these platforms, we never test with lower precision. In addition, highp isn't always supported in fragment shaders; some platforms will silently ignore it, and others will fail to compile the shader. Ugh. This reminds me of shaders that compile with warnings on NVIDIA, and fail to compile on AMD.
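
To make this concrete, here is a minimal C sketch (my own, not from the talk) of how an application can query these limits and check whether the fragment shader supports highp at all; the exact fallback strategy is up to you.

    /* Query implementation-dependent limits so we know how aggressively to
       pack vertex attributes and textures (OpenGL ES 2.0). */
    GLint maxVertexAttribs, maxTextureUnits;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxVertexAttribs);
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTextureUnits);

    /* Ask how much precision highp floats actually get in the fragment
       shader. If highp is unsupported, the returned precision is zero. */
    GLint range[2];
    GLint precision;
    glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision);

    if (precision == 0)
    {
        /* No highp in fragment shaders on this device: fall back to shaders
           that declare "precision mediump float;" instead. */
    }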

It sounds like some platform-compatibility problems can be solved by cleaning up drivers, and others can be solved by cleaning up the specification. Tom noted that the spec doesn't say what to do with bad code. For example, what does divide-by-zero yield? NaN? Infinity? Zero?

Overall, this was a great BOF. I want to thank Tom Olson for organizing it, and for giving me a time-slot to announce the call for authors for OpenGL Insights.

Books

Real-Time Shadows
No SIGGRAPH would be complete without checking out all the new books. For starters, Real-Time Shadows by Elmar Eisemann, Michael Schwarz, Ulf Assarsson, and Michael Wimmer is out, and looks excellent. At first glance, techniques like shadow mapping and shadow volumes seem simple, but the devil is in the details; efficient, robust implementations that yield high visual quality are quite difficult. Therefore, a lot can be said about shadows even though it is a sub-field of real-time rendering, which is a sub-field of rendering, which is a sub-field of computer graphics, which is a sub-field of computer science, etc.


I was also happy to see proofs for the second edition of Graphics Shaders: Theory and Practice by Mike Bailey and Steve Cunningham. I am a huge fan of the first edition and am glad to see the second edition is updated for OpenGL 4. However, I was surprised to see that immediate mode, built-in vertex attributes, and built-in uniforms were still used in the example code. Perhaps the code will be updated before the book is printed.


I'm also excited about two intro graphics books that I browsed during the conference. I like to go over the fundamentals once in a while because it keeps me somewhat broad, fills gaps (and I have plenty of them!), and lets me see material again from a different perspective, which is a great way to deepen knowledge. 3D Graphics for Game Programming by JungHyun Han looks like a pragmatic, concise introduction to graphics.


I was pleasantly surprised to see a partial draft of the 3rd edition of the classic Computer Graphics: Principles and Practice by James Foley, Andries van Dam, Steven Feiner, and John Hughes. At almost 1800 pages, this thing is heavy and appears to cover just about every topic in computer graphics, from 2D with WPF to GPU architecture with an NVIDIA GeForce 9800 GTX case study. I am definitely getting a copy of this when it comes out (March 2012 according to Amazon), but reading it cover-to-cover is going to take some effort.
3D Engine Design for Virtual Globes

Of course, our book, 3D Engine Design for Virtual Globes, was also new this SIGGRAPH. I was happy to meet many people who were excited about it, and glad to hear that it was selling well. I'm taking my 75 cents to Vegas. Kevin is risking his in biomedical startups.

Final Thoughts


This was my fourth SIGGRAPH; each one keeps getting better. I didn't spend as much time in courses as I usually do, but I enjoyed many BOFs and met several people submitting to OpenGL Insights. As I said after attending my first SIGGRAPH, SIGGRAPH is all about meeting people and sharing ideas.



Full SIGGRAPH Trip Report: day one | two | three | four | five

SIGGRAPH 2011 Trip Report: Day Four

Today had two very important events: the WebGL BOF and the OpenGL BOF. Over the past six months, I have been developing with WebGL full-time, so I have been watching it closely (mainly through the WebGL Camps and Giles' Learning WebGL blog). The WebGL BOF exceeded my already high expectations; it was standing room only, and people outside the room were even trying to peek their heads in. There was a bit of news and lots of demos.

WebGL 1.0.1 is expected to be out in the fall to cover some corner cases. We can also expect compressed textures soon. The most exciting news is that web workers will be able to pass typed arrays without cloning them! We'll have to see how fast it is, but this will make web workers much more useful and suitable for a wider array of tasks, like computing bounding volumes and vertex cache optimization, since hopefully thread-communication overhead will not be the bottleneck. Even more interesting is that Microsoft worked on this specification. Perhaps they are adding WebGL support to IE. If they want to remain a player, I don't see how they can avoid supporting WebGL.

The BOF was full of exciting demos, which are on the WebGL Wiki. I'll highlight a few that I really enjoyed. Ken Russell showed a 3D cloth simulation, written by a few Google interns, that is used to flip through Chrome tabs. Very cool. Neil Trevett showed the WebGL Aquarium demo running on a Xoom tablet using a to-be-released version of the native browser. This is super-important to me because we are banking on WebGL to target both desktop and mobile devices across platforms.

Mark Danks demoed My Robot Nation, a creative business idea combining WebGL and 3D printing. Users model a robot using a WebGL application, and a full-color 3D-printed version of the robot can then be ordered. Pricing wasn't discussed, but I wonder if this will be cheap enough to get widespread adoption among our youth. Mark discussed some interesting implementation details, including that the robot's mesh is never actually stored. Instead, the commands to recreate the robot are stored, and these are replayed to generate both the rendering and the 3D print.

Erik Möller gave an excellent talk and demo on using WebGL and HTML as a game platform. Erik works for Opera, whose browser has only 2-3% market share, but is used on the Nintendo Wii and has more than 20% market share on mobile devices. He discussed Emberwind, a platform game developed in HTML5 by three summer interns (full original version). It has the very handy feature of being able to switch between Canvas 2D and WebGL for rendering, showing WebGL to be significantly faster. Some numbers I saw showed Canvas 2D at 15 fps and WebGL above 60 fps. Part of this was due to batching draw calls together in WebGL using a texture atlas. Erik made the excellent point that WebGL has a higher barrier to entry but allows more flexibility.

The BOF included several other exciting demos, including the BrainBrowser by Nicolas Kassis, which uses XHR2 for transferring binary data over HTTP; PhiloGL by Nicolas Garcia Belmonte; and Chrysaora by Aleksandar Rodic, which is doing the bone simulation on the server. See the wiki for the full list of talks and demos. I want to thank Ken Russell for organizing an awesome event and for giving me a time-slot to announce the call for authors for OpenGL Insights.

The OpenGL BOF was also excellent this year. Of course, the big news was the release of OpenGL 4.2. This release has a number of new features that expose recent hardware capabilities, including ARB_shader_atomic_counters and ARB_shader_image_load_store, both of which allow shader instances to communicate to some extent. Shaders can now have side effects. GLSL shaders are starting to feel an awful lot like CUDA (and OpenCL) kernels.
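
As a rough illustration of what these side effects look like, here is a sketch (my own, not from the BOF) that uses an atomic counter and image load/store from a GLSL 4.20 fragment shader, with the matching C-side buffer setup; names like counterBuffer and overdrawTexture are placeholders.

    /* GLSL 4.20 fragment shader, embedded as a C string: every invocation
       bumps a global counter and records per-pixel overdraw. */
    static const char *fragmentSource =
        "#version 420 core\n"
        "layout(binding = 0, offset = 0) uniform atomic_uint fragmentCount;\n"
        "layout(binding = 0, r32ui) uniform uimage2D overdrawImage;\n"
        "out vec4 color;\n"
        "void main()\n"
        "{\n"
        "    atomicCounterIncrement(fragmentCount);\n"
        "    imageAtomicAdd(overdrawImage, ivec2(gl_FragCoord.xy), 1u);\n"
        "    color = vec4(1.0);\n"
        "}\n";

    /* C side: back the atomic counter with a buffer at binding point 0, and
       bind a R32UI texture (overdrawTexture, created elsewhere) to image
       unit 0. */
    GLuint counterBuffer;
    glGenBuffers(1, &counterBuffer);
    glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, counterBuffer);
    glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counterBuffer);
    glBindImageTexture(0, overdrawTexture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);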

GL 4.2 also introduces ARB_texture_storage, which helps guarantee a texture is complete. This reminds me of using templates for immutability in Longs Peak. I'm glad to see those API designs making their way into OpenGL. For much, much more information on GL 4.2, check out Christophe Riccio's review.
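
For reference, a minimal sketch of the difference (my guess at typical usage, not from the BOF): with glTexStorage2D, the format and the full mipmap chain are allocated up front and become immutable, so the texture can never end up with mismatched levels.

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);

    /* GL 4.2: allocate immutable storage for all 9 mip levels of a 256x256
       RGBA8 texture in one call... */
    glTexStorage2D(GL_TEXTURE_2D, 9, GL_RGBA8, 256, 256);

    /* ...then fill level 0 (and the other levels) with glTexSubImage2D.
       "pixels" is a placeholder for the application's image data. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* The pre-4.2 path, glTexImage2D per level, leaves the texture
       incomplete until every level's size and format are consistent. */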

In other OpenGL news, a version of the conformance test suite for OpenGL 3.3 and selected extensions is expected to be complete in November. GL drivers have been getting much better in recent years, and this test suite is a huge step in the right direction. In his ecosystem update, Jon Leech also mentioned they are tidying up the spec to have less undefined behavior. I want to thank him for mentioning our call for authors for OpenGL Insights.

The BOF ended with an excellent talk, Brink Preferred Rendering with OpenGL, by Mikkel Gjøl. He described the rendering in Brink, Splash Damage's game, including its deferred rendering pipeline, use of occlusion queries, and virtual texturing. He said OpenGL works for AAA games, and had several useful requests, including a lower-level API (not the first time we heard this at SIGGRAPH); performance warnings; and display lists, which are widely used on consoles.


Full SIGGRAPH Trip Report: day one | two | three | four | five

Thursday, August 11, 2011

SIGGRAPH 2011 Trip Report: Day Three

I spent the morning in the Out of Core talks - a topic quite dear to me considering my master's thesis.

Won Chun's talk, Google Body: 3D Human Anatomy in the Browser, was a modified version of a similar talk from WebGL Camp 3. He discussed the mesh compression and WebGL rendering used in Google Body. My favorite part was how vertex cache optimization, which is used to optimize rendering, also helped improve compression by increasing coherence. There was lots of other goodness like how rendering with float vertex components was faster than using short components even though floats require more data. Won's compression code is now open source.

Cyril Crassin gave a very impressive talk titled Interactive Indirect Illumination Using Voxel Cone Tracing: An Insight. It showed fast, approximate, two-bounce global illumination by grouping coherent rays of reflected light into a pre-integrated cone. I obviously do not work in GI, but if you do, this talk is definitely worth checking out.

Another very impressive talk was Out-of-Core GPU Ray Tracing of Complex Scenes presented by Kirill Garanzha. They interactively rendered the Boeing 777 model, a classic "massive model", on an NVIDIA GeForce GTX 480 at 1024x768. For many views, it looked like it was 200-300 ms per frame with a cache size of 21% of the model (360 million polygons). I'm pretty sure only diffuse shading was used, but even so, this is outstanding work.


One course I never miss at SIGGRAPH is Beyond Programmable Shading. A lot of material from this SIGGRAPH course makes its way into our course at Penn. I wasn't able to make all the sessions this year, but I will certainly catch the rest on SIGGRAPH Encore.

A major course theme was system-on-a-chip (SOC), where both CPU and GPU cores are on the same chip. This has the benefit of eliminating the system bus between the CPU and GPU, which is often a bottleneck.


I really enjoyed the panel What Is the Right Cross-Platform Abstraction Level for Real-Time 3D Rendering? with David Blythe, Chas Boyd, Mike Houston, Raja Koduri, Henry Moreton, and Peter-Pike Sloan. They discussed the tension between application developers, middleware developers, OS developers, and hardware vendors when it comes to APIs like Direct3D and OpenGL. It was generally accepted that D3D/OpenGL are at the right level of abstraction, but need tweaking. Various ideas were discussed, including shorter specs; merging compute and rendering APIs; new APIs for system-on-a-chip; and even having one low-level rendering API and multiple high-level rendering APIs, with the argument that there are many abstractions (programming languages) that do the same thing: change the CPU's instruction pointer.

My favorite part of the panel was the discussion of what goes on in the driver. There is significant pressure on hardware vendors to do well on game benchmarks, so many hacks are added to optimize for specific games. The games may not be using best practices, so they are "rewritten in the driver," making the driver really bloated - what a mess. This reminds me of the special allocator mode added to Windows 95 to work around a bug in SimCity, which used memory right after it was freed.

All of these game-specific hacks (I almost called them optimizations) lead to non-obvious fast paths. What vertex format should I use? As an application developer, I don't know. Well, I sort of know, but when killer-next-gen-game comes out and uses double-precision texture coordinates, should I also switch to make my application run faster? Introducing a low-level rendering API would fix many of these problems and remove the bloat from the drivers.

Another interesting topic in this discussion was why closed-source drivers are higher quality than open-source drivers. The reasons are quite understandable: closed-source drivers can hide hardware bugs, hide intellectual property that isn't patented, and hide third-party code. It also sounds like there are far fewer open-source driver developers (40-to-1?), and changes in the Linux kernel can affect the drivers.

Every year, Beyond Programmable Shading ends with a thought-provoking panel, but this panel was by far the best one!


Full SIGGRAPH Trip Report: day one | two | three | four | five

Wednesday, August 10, 2011

SIGGRAPH 2011 Trip Report: Day Two

I started off today at some of the NVIDIA exhibitor sessions. In the OpenGL & CUDA-Based Tessellation talk, Philippe Rollin made an excellent point about tessellation shaders: the result can be cached using transform feedback and reused for several frames. This is just one of many examples of the synergy among recent hardware features. He also convinced me that tessellation can be useful for real-world terrain data by reducing the amount of preprocessing - and everyone hates preprocessing! In the final part of the talk, Miguel Ortega talked about tessellation in Thor. An interesting stat he mentioned is that the biggest asset used 900 4K textures - wow! Movies are quite a bit different than real-time rendering.
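
The capture-and-reuse idea would look roughly like the following sketch (my interpretation, not Philippe's code); names like tessellationProgram, capturedBuffer, patchCount, and capturedVertexCount are placeholders, and the captured varying must be written by the tessellation evaluation shader.

    /* Before linking, tell GL which varying to record. */
    static const char *varyings[] = { "tessellatedPosition" };
    glTransformFeedbackVaryings(tessellationProgram, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(tessellationProgram);

    /* Capture pass: tessellate once, discard rasterization, and record the
       evaluated vertices into capturedBuffer. */
    glEnable(GL_RASTERIZER_DISCARD);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, capturedBuffer);
    glBeginTransformFeedback(GL_TRIANGLES);
    glPatchParameteri(GL_PATCH_VERTICES, 4);
    glDrawArrays(GL_PATCHES, 0, 4 * patchCount);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    /* Subsequent frames: draw the cached vertices directly, skipping the
       tessellation stages entirely. The vertex count can come from a
       GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN query. */
    glBindBuffer(GL_ARRAY_BUFFER, capturedBuffer);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, capturedVertexCount);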

I also stayed for the Parallel Nsight 2.0 and CUDA 4.0 for the Win talk by Jeff Kiel. Parallel Nsight has some very impressive Direct3D debugging capabilities, and I'm looking forward to full OpenGL support. I will be ecstatic when I can set breakpoints in a shader. Parallel Nsight is also a great tool for GPU Compute debugging. I want to work this into our GPU course, but requiring two GPUs will call for some careful logistics. However, it will run on some laptops.


I spent some time in the Advances in Real-Time Rendering course, but I spent the bulk of my afternoon in the How to Write Fast iPhone and Android Shaders in Unity Studio Workshop by Aras Pranckevičius and Renaldas Zioma. So far, this was my favorite talk of the conference. It was full of battle-won tips on optimizing shaders for mobile platforms. I haven't done any mobile development yet, but it sounds messy with all the different architectures. For example, on some architectures you should pack varyings into a vec4, and on others you shouldn't. Some architectures scale better than others as more ALU instructions are used. Some architectures care what precision qualifier you use (lowp, mediump, highp), some don't, and some are slow when swizzling lowp precision variables.
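
For example, the varying-packing tip might look like this tiny GLSL ES vertex shader, written here as a C string (my own illustration, not from the talk): two sets of texture coordinates travel in one vec4 instead of two vec2 varyings.

    static const char *vertexSource =
        "uniform mat4 modelViewProjection;\n"
        "attribute vec4 position;\n"
        "attribute vec2 baseUv;\n"
        "attribute vec2 lightmapUv;\n"
        "varying vec4 packedUvs;   /* xy = base texture, zw = lightmap */\n"
        "void main()\n"
        "{\n"
        "    packedUvs = vec4(baseUv, lightmapUv);\n"
        "    gl_Position = modelViewProjection * position;\n"
        "}\n";

    /* The fragment shader reads packedUvs.xy and packedUvs.zw; on some
       architectures this saves an interpolator, on others it makes no
       difference at all, which is exactly the messiness described above. */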

Some themes were uniform across all architectures, though: baking lighting into textures to avoid heavy ALU instructions like the pow() in specular lighting; combining several post-processing passes into a single pass to save fill rate; and pragmatic front-to-back rendering for early-z, e.g., render the large player, followed by the environment, followed by enemies (which are likely occluded), and finally the skybox. I really enjoyed how realistic this talk was; it even mentioned the reality of optimizations not working and tools crashing. These things happen to me too, and I always tell my friends, "so you really want to be a graphics developer?!" I hope they give a similar talk next year, and that SIGGRAPH gives them a bigger room with more seats.


Full SIGGRAPH Trip Report: day one | two | three | four | five

Monday, August 8, 2011

SIGGRAPH 2011 Trip Report: Day One

Vancouver is awesome. It is clean; the people are unbelievably nice; and the convention center is right on the water and easy to navigate. Let's stop going to LA every other year, and start going to Vancouver!


I spent the afternoon in the Introduction to Modern OpenGL Programming course taught by Edward Angel and Dave Shreiner. It was packed, which shows how important OpenGL has become, especially given OpenGL ES and WebGL. I've been a big fan of this course since I first attended it in 2008. They've done a great job of keeping it up to date. It is a nice introduction to OpenGL, including VBOs, VAOs, GLSL, uniforms, transforms, lighting, and texture mapping. They briefly covered tessellation and geometry shaders, but time was tight. I'm looking forward to this class being a full day in the future.

I usually don't make it to the paper sessions, but I always attend the papers fast forward to get an overview of the research being presented. Even the overflow room was packed this year. One paper that jumped out at me was HDR-VDP-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions by Rafal Mantiuk et al. It describes an algorithm for comparing images. They suggest use cases like determining the quality loss due to compression. I am interested in it for a much simpler use case: unit tests. That is, I want to compare images rendered by our 3D engine on different hardware and different drivers, and I don't want false failures for slight differences. This is a surprisingly hard problem to solve, and at some point, I'd like to look into HDR-VDP for doing so.
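
Today such a test usually amounts to something like the per-channel tolerance check sketched below (my own naive baseline, not HDR-VDP-2); the trouble is that a single tolerance either hides real regressions or flags harmless driver-to-driver differences, which is exactly what a perceptual metric should improve on.

    #include <stdlib.h>

    /* Compare two RGBA8 framebuffer captures of the same size; return 1 if
       every channel differs by at most tolerance, 0 otherwise. */
    int images_match(const unsigned char *a, const unsigned char *b,
                     int width, int height, int tolerance)
    {
        int i;
        for (i = 0; i < width * height * 4; ++i)
        {
            if (abs((int)a[i] - (int)b[i]) > tolerance)
            {
                return 0;
            }
        }
        return 1;
    }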

A tip for SIGGRAPH attendees: you can buy merchandise from previous SIGGRAPHs at a steep discount at the SIGGRAPH Store. I bought two tee-shirts for $3 each. They also have things like polo shirts, hats, and coffee mugs for dirt-cheap.

One more tip: there is a separate registration line for contributors that is much shorter than the normal line.



Full SIGGRAPH Trip Report: day one | two | three | four | five

Saturday, August 6, 2011

The Passionate Programmer Review

Jon McCaffrey's review of The Passionate Programmer by Chad Fowler motivated me to read it myself. I used to read lots of books like this, but over the past six or seven years, I've buried myself so deep in graphics that I've read very few general development books. It's time to change that.

The Passionate Programmer is well worth a read. At about 200 pages, it is short, readable, and inspiring. It contains 53 bite-size chapters on being productive, marketing yourself, staying sharp, etc. I agree with the vast majority of the book's advice. It is a particularly great read for students and new developers; it will get them in the proper mindset to be, as Chad would say, awesome developers. If you are more experienced, you are probably already doing many of the things in this book; if not, you should consider them.

One of my favorite chapters is 4. Be the Worst, which suggests surrounding yourself with the best developers you can, because doing so makes you grow faster and perform better. I couldn't agree more!

You would think that 14. Be a Mentor contradicts the advice of being the worst, but it does not. It is important to be a mentor to new developers because teaching is one of the best ways to learn. Do I really know how the automated build and test system works? I will once I try to explain it to someone.

Being a mentor also helps a team gel and integrate new developers quickly. If there is one thing our industry needs to get better at, it is mentoring. Electricians, plumbers, and tattoo artists do apprenticeships. Software developers do not. How often do we sit down with the experts in our development teams and learn from them? Probably never, or not often enough at best.

In 28. Eight-Hour Burn, Chad argues for working only eight-hour days, but with the utmost intensity. Working longer days leads to burnout, wasted time, and lower long-term productivity. I agree; however, I am a hypocrite. Given my outside writing and teaching activities, I am consistently over 70 hours a week, and sometimes much more. The only justification I have is that writing and teaching are different enough from developing that the burnout isn't as severe. I know I can't maintain this pace forever, though.

Perhaps 39. Let Your Voice Be Heard is my favorite piece of advice. Chad suggests thinking beyond your current employer and contributing to the industry as a whole, first with a weblog (I don't know why he didn't just say blog), and eventually through publications and presentations. I really like this advice because there are a lot of sharp people in our field, and it is great to have everyone share ideas.

I don't think any advice in the book is terrible, but some needs to be put into perspective. For example, in 27. Learn to Love Maintenance, it is argued that maintenance development can allow for freedom, creativity, and direct customer interaction. For old, large applications, this isn't always true. If you are working on a twenty-year-old, multi-million-line application that has seen the hands of hundreds of developers at various stages in their careers, maintenance is often a test of patience. For example, do we really want to continue to design legacy APIs using COM? Is making our code const correct easy when all the code we call is not? Do we really want to integrate old code using a struct for vectors with new code using a class? No. We want to build our skills using modern technology and techniques.

I advise interns and new grads to get in on the ground floor of a new project. They will be given the best opportunities and the most responsibility. They will see the big picture, and will grow their skills faster than if they were bogged down by long build times, legacy code, and legacy mistakes. The contractor who built the house learned much more than the contractor who remodeled the bathroom. With all this said, if you get an opportunity to work on a large, outstanding piece of software - say the Linux kernel, for example - you should go for it.

One final comment: this book is a revised edition of the book My Job Went to India: 52 Ways to Save Your Job. Naming a book is really important, just like naming software, classes, functions, variables, etc. With the original title, I would not have paid this book any attention. The revised title – and I assume content – made it really appealing. Naming is hard.

Overall, The Passionate Programmer is an inspiring, worthwhile read.