Wednesday, November 9, 2011

OpenGlobe moved to GitHub

OpenGlobe, the virtual globe 3D engine we developed for our book, is now on GitHub!
We think GitHub is really cool because it makes it easy to collaborate on open source projects. Please fork at will, and if you do something cool, send us a pull request.

Monday, October 17, 2011

WebGL in Internet Explorer

Currently, even Internet Explorer 10 will not support WebGL. Meanwhile, all the other major browsers support this new standard. I'm still confident that Microsoft will add WebGL support to IE, but as WebGL developers, what can we do in the meantime?
  • Chrome Frame - an IE plugin that uses the Chrome engine to render web pages that request it. If an IE user has this plugin installed, all we need to do is include a single meta tag in our WebGL-enabled pages:
    <meta http-equiv="X-UA-Compatible" content="chrome=1">
    The major downside is that Chrome Frame is a plugin and, therefore, requires an install. However, we don't need administrator privileges to install Chrome Frame, making this option very attractive.
  • IEWebGL - an IE plugin that implements WebGL. It is lightweight, and it was fairly simple to port our large engine to. Unlike Chrome Frame, IE's JavaScript engine still runs our JavaScript, and only the WebGL calls are forwarded to the plugin. The advantage over Chrome Frame is that the install is smaller.
  • jebgl - uses a Java applet to emulate WebGL in IE by forwarding WebGL calls to JOGL. This sounds really promising because it does not require installing a plugin. Unfortunately, I have not been able to get it to work.
  • webgl-compat - a work-in-progress that implements WebGL with Canvas. I can't imagine that this is going to perform well. The first proof-of-concept rendered 25 rotating triangles at 42 fps without depth testing. Today's hardware pushes billions of triangles per second.
  • cwebgl - implements WebGL using JavaScript and Canvas, similar to webgl-compat. Again, I don't think this will meet the performance needs of most applications.
Today, I think our best bet is to use Chrome Frame. It worked well in my tests, can be installed without admin privileges, and supports even IE6 (admittedly, I only tested using IE9). Even when IE supports WebGL, we will need to consider these options for our users who can't upgrade right away.
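Whichever fallback we pick, our pages should first detect whether WebGL is available and only then point the user at Chrome Frame or IEWebGL instead of failing silently. A minimal sketch (the function name is mine; note that today most browsers still expose the context under the prefixed name "experimental-webgl"):

```javascript
// Try each known context name; returns a WebGL context or null.
function getWebGLContext(canvas) {
  var names = ["webgl", "experimental-webgl"];
  for (var i = 0; i < names.length; ++i) {
    try {
      var gl = canvas.getContext(names[i]);
      if (gl) {
        return gl;
      }
    } catch (e) {
      // Some browsers throw instead of returning null.
    }
  }
  return null; // No WebGL: time to suggest one of the options above.
}
```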

Monday, September 26, 2011

WebGL: GPU acceleration for the open web

For the past six months, I've been developing with WebGL full-time. Coming from C++, I was not exactly thrilled about the idea of doing web development and coding in JavaScript. To my surprise, I really enjoy JavaScript, and the development tools are quite good.

Joe Kider kindly allowed me to share my enthusiasm for WebGL with his students in CIS 565: GPU Programming and Architecture at the University of Pennsylvania. My talk focused on the motivation for WebGL; WebGL support in current desktop and mobile browsers; and basic JavaScript:

Download the ppt.

Monday, September 5, 2011

The Google Resume Book Review

I'm not prepping for a job search, but I did just read The Google Resume by Gayle Laakmann McDowell. Gayle is a Penn alum (me too) so I wanted to check out one of her books.

Overall, this is a really outstanding book on the entire hiring process for technical jobs: networking, career fairs, resumes, references, programming interviews, negotiating an offer, excelling once you are hired, etc. It is useful for developers of all experience levels, but it will be most useful for undergraduate and masters students in computer science or a similar major. Its technical focus makes it hit home more than the general advice given by a university's career services department.

Having been on both sides of the hiring process many times myself, I can say that the advice is practical and modern. It also includes lots of stories, like the candidate who used himself as a reference, and the time the author interviewed with Microsoft.

Some of the advice you've probably heard, and some you may not have. The advice I like includes getting to know your professors; GPA isn't everything - excel at something; customize your resume for each company; keep your resume concise; use twitter for networking; have an online presence such as a blog, portfolio, or active forum participation; your initial email is really a cover letter; focus on accomplishments over responsibilities; map out your career 7-10 years ahead; find a mentor; and build relationships. There are also a number of subtle tips like watch what you write in an email because it may get forwarded (side note: I also recommend watching what you write about competitive works in a book proposal because it might get forwarded to the work's author. I am two for two on this!).

The chapter on programming interviews and the appendix on behavior questions are quite good. There is a great section on approaches to algorithm design that is useful way beyond an interview.

The section on evaluating an offer is great because it includes important considerations that I think many people ignore.  For example, we all consider location from the perspective of do we want to live there and cost of living, but do we consider if there will be other job opportunities there in the future?  Also, when looking at an offer, we should consider what the average annual raise is.
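The raise point is easy to underestimate because raises compound. With made-up numbers (nothing from the book), a lower offer with a better average raise can overtake a higher one within a few years:

```javascript
// Salary after compounding an average annual raise for a number of years.
function salaryAfterYears(startingSalary, averageRaisePercent, years) {
  return startingSalary * Math.pow(1 + averageRaisePercent / 100, years);
}

// Hypothetical offers: $70k growing 5%/year vs. $75k growing 1%/year.
var offerA = salaryAfterYears(70000, 5, 5); // ~89,340
var offerB = salaryAfterYears(75000, 1, 5); // ~78,826
```

After five years, the initially lower offer is ahead by more than $10k per year.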

This book is really excellent, but if I had to critique a few minor things, I would have liked to see more emphasis on contributing to open source to gain project experience, especially considering that companies like Google and IBM contribute significantly. The chapter on getting into gaming had lots of quotes but appeared to be less based on experience than the other chapters, and it focused more on social and casual games than the AAA games many people aspire to. In fairness, an entire book could be written on gaming.

All in all, I really recommend this book, especially for students before they start searching for their first internship or co-op.

Monday, August 22, 2011

Electronic Version of Our Book

We have had several requests for an electronic version of our virtual globe book.  I'm happy to say that it is now available on VitalSource.  You can read it on your desktop or transfer it to your iPhone, iPad, or iPod Touch.  This electronic version is discounted, and there are rental options that cost even less.  Pretty cool.

Update: Unfortunately, VitalSource only accepts credit cards from the US and Canada because of rights issues with some publishers. Our publisher, CRC Press, is working on a new website that will solve this problem. We'll keep you posted.

Sunday, August 14, 2011

SIGGRAPH 2011 Trip Report: Day Five

I spent the morning in the Mobile BOF. One theme I am very happy with is that using HTML5, including WebGL, sounds like a viable strategy for targeting mobile devices. In his talk, Jon Peddie predicted HTML5 will have 100% penetration on mobile platforms by late 2012. In all fairness, Flash already has 100%. Jon's claim, which was echoed elsewhere during the conference, is that WebGL's biggest problem will be misinformation, not technical issues.

Neil Trevett also highlighted WebGL in his talk, which included a demo of the WebGL Aquarium with 100 fish running on a Tegra 2 tablet! This was an early, pre-optimized WebGL implementation in WebKit on Android, but it is very promising.

Someone brought up an interesting question: if we develop in HTML5 instead of native apps, how is Apple, who makes a lot of money through their App Store, going to respond? Are they going to treat it like they did Flash? I don't recall how this was answered, but my take is that Apple will have to allow it; otherwise, consumers will reject Apple's products if HTML5 is not supported. HTML5 is not some corner-case technology; it is the web.

There were several other great presentations at the BOF, but my favorite was Tom Olson's talk on writing portable OpenGL ES 2 code (similar GDC 2011 slides). Between this talk and Aras Pranckevičius' talk earlier in the week, I really gained an appreciation for targeting multiple mobile devices. Although mobile segmentation is better than it used to be, there are still several different operating systems and a wide array of hardware with varying performance.

Just like in desktop OpenGL, OpenGL ES has implementation-dependent limits like number of vertex attributes and number of textures. In my own code, I usually try to stay within these limits by doing things like packing multiple values into a single vertex attribute. In Tom's talk, I also learned that the precision qualifiers can cause cross-platform issues. lowp guarantees at least 10-bits, mediump guarantees 16, and highp guarantees 24. This sounds OK at first, except that some platforms may always use 32-bits, so if we only test on these platforms, we never test with lower precision. In addition, highp isn't always supported in fragment shaders; some platforms will silently ignore it, and others will fail to compile the shader. Ugh. This reminds me of shaders that compile with warnings on NVIDIA, and fail to compile on AMD.
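One defensive option is to query precision support up front rather than find out at shader-compile time. WebGL (and GLES) expose getShaderPrecisionFormat, which reports a precision of 0 when the requested qualifier is unsupported; here is a sketch (the helper names are mine):

```javascript
// Returns true if fragment shaders support highp floats on this context.
// Per the spec, the reported precision is 0 when highp is unsupported.
function fragmentHighpSupported(gl) {
  var format = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
  return format !== null && format.precision !== 0;
}

// Prepend the best precision we can actually get to a fragment shader.
function withBestPrecision(gl, fragmentSource) {
  var qualifier = fragmentHighpSupported(gl) ? "highp" : "mediump";
  return "precision " + qualifier + " float;\n" + fragmentSource;
}
```

This at least turns the silent-ignore and fail-to-compile cases into a deliberate choice, although it doesn't help with platforms that quietly compute everything at 32 bits.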

It sounds like some platform-compatibility problems can be solved by cleaning up drivers, and others can be solved by cleaning up the specification. Tom noted that the spec doesn't say what to do with bad code. For example, what does divide-by-zero yield? NaN? Infinity? Zero?

Overall, this was a great BOF. I want to thank Tom Olson for organizing it, and for giving me a time-slot to announce the call for authors for OpenGL Insights.


Real-Time Shadows
No SIGGRAPH would be complete without checking out all the new books. For starters, Real-Time Shadows by Elmar Eisemann, Michael Schwarz, Ulf Assarsson, and Michael Wimmer is out, and looks excellent. At first glance, techniques like shadow mapping and shadow volumes seem simple, but the devil is in the details; efficient, robust implementations that yield high visual quality are quite difficult. Therefore, a lot can be said about shadows even though it is a sub-field of real-time rendering, which is a sub-field of rendering, which is a sub-field of computer graphics, which is a sub-field of computer science, etc.

I was also happy to see proofs for the second edition of Graphics Shaders: Theory and Practice by Mike Bailey and Steve Cunningham. I am a huge fan of the first edition and am glad to see the second edition is updated for OpenGL 4. However, I was surprised to see that immediate mode, built-in vertex attributes, and built-in uniforms were still used in the example code. Perhaps the code will be updated before the book is printed.

3D Graphics for Game Programming

I'm also excited about two intro graphics books that I browsed during the conference. I like to go over the fundamentals once in a while because it keeps me somewhat broad, fills gaps (and I have plenty of them!), and seeing material over again from a different perspective is a great way to deepen knowledge. 3D Graphics for Game Programming by JungHyun Han looks like a pragmatic, concise introduction to graphics.

I was pleasantly surprised to see a partial draft of the 3rd edition of the classic Computer Graphics: Principles and Practice (3rd Edition) by James Foley, Andries van Dam, Steven Feiner, and John Hughes. At almost 1800 pages, this thing is heavy and appears to cover just about every topic in computer graphics, from 2D with WPF to GPU architecture with an NVIDIA GeForce 9800 GTX case study. I am definitely getting a copy of this when it comes out (March 2012 according to Amazon), but reading it cover-to-cover is going to take some effort.
3D Engine Design for Virtual Globes

Of course, our book, 3D Engine Design for Virtual Globes, was also new this SIGGRAPH. I was happy to meet many people who were excited about it, and glad to hear that it was selling well. I'm taking my 75 cents to Vegas. Kevin is risking his in biomedical startups.

Final Thoughts

This was my fourth SIGGRAPH; each one keeps getting better. I didn't spend as much time in courses as I usually do, but I enjoyed many BOFs and met several people submitting to OpenGL Insights. As I said after attending my first SIGGRAPH, SIGGRAPH is all about meeting people and sharing ideas.

Full SIGGRAPH Trip Report: day one | two | three | four | five

SIGGRAPH 2011 Trip Report: Day Four

Today had two very important events: the WebGL BOF and the OpenGL BOF. Over the past six months, I have been developing with WebGL full-time, so I have been watching it closely (mainly through the WebGL Camps and Giles' Learning WebGL blog). The WebGL BOF exceeded my already high expectations; it was standing room only, and even people outside the room were trying to peek their heads in. There was a bit of news and lots of demos.

WebGL 1.0.1 is expected to be out in the fall to cover some corner cases. We can also expect compressed textures soon. The most exciting news is that web workers will be able to pass typed arrays without cloning them! We'll have to see how fast it is, but this will make web workers much more useful and suitable for a wider array of tasks like computing bounding volumes and vertex cache optimization, since hopefully thread communication overhead will not be the bottleneck. Even more interesting is that Microsoft worked on this specification. Perhaps they are adding WebGL support to IE. If they want to continue to be a player, I don't see how they will not support WebGL.

The BOF was full of exciting demos, which are on the WebGL Wiki. I'll highlight a few that I really enjoyed. Ken Russell showed a 3D cloth simulation used to flip through Chrome tabs written by a few Google interns. Very cool. Neil Trevett showed the WebGL Aquarium demo running on a Xoom Tablet using a to-be-released version of the native browser. This is super-important to me because we are banking on using WebGL to target both cross-platform desktop and mobile devices.

Mark Danks demoed My Robot Nation, which is a creative business idea combining WebGL and 3D printing. Users model a robot using a WebGL application. A full-color 3D-printed version of the robot can then be ordered. Pricing wasn't discussed, but I wonder if this will be cheap enough to get widespread adoption among our youth. Mark discussed some interesting implementation details, including that the robot's mesh is never actually stored. Instead, the commands to recreate the robot are stored, which can be used to generate both the rendering and the 3D printing.

Erik Möller gave an excellent talk and demo on using WebGL and HTML as a game platform. Erik works for Opera, whose browser only has 2-3% market share, but is used on the Nintendo Wii and has more than 20% market share on mobile devices. He discussed Emberwind, a platform game developed in HTML5 by three summer interns (full original version). It has the very handy feature of being able to switch between Canvas 2D and WebGL for rendering, showing WebGL to be significantly faster. Some numbers I saw showed Canvas 2D at 15 fps and WebGL above 60 fps. Part of this was due to batching draw calls together in WebGL using a texture atlas. Erik made the excellent point that WebGL has a higher barrier to entry but allows more flexibility.

The BOF included several other exciting demos, including the BrainBrowser by Nicolas Kassis, which uses XHR2 for transferring binary data over HTTP; PhiloGL by Nicolas Garcia Belmonte; and Chrysaora by Aleksandar Rodic, which is doing the bone simulation on the server. See the wiki for the full list of talks and demos. I want to thank Ken Russell for organizing an awesome event and for giving me a time-slot to announce the call for authors for OpenGL Insights.

The OpenGL BOF was also excellent this year. Of course, the big news was the release of OpenGL 4.2. This release has a number of new features that expose hardware capabilities, including ARB_shader_atomic_counters and ARB_shader_image_load_store, both of which allow shader instances to communicate to some extent; shaders can now have side effects. GLSL shaders are starting to feel an awful lot like CUDA (and OpenCL) kernels.

GL 4.2 also introduces ARB_texture_storage, which helps guarantee a texture is complete. This reminds me of using templates for immutability in Longs Peak. I'm glad to see those API designs making their way into OpenGL. For much, much more information on GL 4.2, check out Christophe Riccio's review.

In other OpenGL news, a version of the conformance test suite for OpenGL 3.3 and selected extensions is expected to be complete in November. GL drivers have been getting much better in recent years, and this test suite is a huge step in the right direction. In his ecosystem update, Jon Leech also mentioned they are tidying up the spec to have less undefined-behavior. I want to thank him for mentioning our call for authors for OpenGL Insights.

The BOF ended with an excellent talk, Brink Preferred Rendering with OpenGL, by Mikkel Gjøl. He described the rendering in Brink, developed by Splash Damage, including its deferred rendering pipeline, use of occlusion queries, and virtual texturing. He said OpenGL works for AAA games, and had several useful requests including a lower-level API (not the first time we heard this at SIGGRAPH); performance warnings; and display lists, which are widely used on consoles.

Full SIGGRAPH Trip Report: day one | two | three | four | five

Thursday, August 11, 2011

SIGGRAPH 2011 Trip Report: Day Three

I spent the morning in the Out of Core talks - a topic quite dear to me considering my master's thesis.

Won Chun's talk, Google Body: 3D Human Anatomy in the Browser, was a modified version of a similar talk from WebGL Camp 3. He discussed the mesh compression and WebGL rendering used in Google Body. My favorite part was how vertex cache optimization, which is used to optimize rendering, also helped improve compression by increasing coherence. There was lots of other goodness like how rendering with float vertex components was faster than using short components even though floats require more data. Won's compression code is now open source.

Cyril Crassin gave a very impressive talk titled Interactive Indirect Illumination Using Voxel Cone Tracing: An Insight. It showed fast, approximate, two-bounce global illumination by grouping coherent rays of reflected light in a pre-integrated cone. I obviously do not work in GI, but if you do, this talk is definitely worth checking out.

Another very impressive talk was Out-of-Core GPU Ray Tracing of Complex Scenes presented by Kirill Garanzha. They interactively rendered the Boeing 777 model, a classic "massive model", on an NVIDIA GeForce GTX 480 at 1024x768. For many views, it looked like it was 200-300 ms per frame with a cache size of 21% of the model (360 million polygons). I'm pretty sure only diffuse shading was used, but even so, this is outstanding work.

One course I never miss at SIGGRAPH is Beyond Programmable Shading. A lot of material from this SIGGRAPH course makes its way into our course at Penn. I wasn't able to make all the sessions this year, but I will certainly catch the rest on SIGGRAPH Encore.

A major course theme was system-on-a-chip (SOC), where both CPU and GPU cores are on the same chip. This has the benefit of eliminating the system bus between the CPU and GPU, which is often a bottleneck.

I really enjoyed the panel What Is the Right Cross-Platform Abstraction Level for Real-Time 3D Rendering? with David Blythe, Chas Boyd, Mike Houston, Raja Koduri, Henry Moreton, and Peter-Pike Sloan. They discussed the tension between application developers, middleware developers, OS developers, and hardware vendors when it comes to APIs like Direct3D and OpenGL. It was generally accepted that D3D/OpenGL are at the right level of abstraction, but need tweaking. Various ideas were discussed, including shorter specs; merging compute and rendering APIs; new APIs for system-on-a-chip; and even having one low-level rendering API and multiple high-level rendering APIs, with the argument that there are many abstractions (programming languages) that do the same thing: change the CPU's instruction pointer.

My favorite part about the panel was the discussion on what goes on in the driver. There is significant pressure for hardware vendors to do well on game benchmarks, so many hacks are added to optimize for specific games. The games may not be using best practices, so they are "rewritten in the driver," making the driver really bloated - what a mess. This reminds me of the special allocator mode added to Windows 95 to work around a bug in SimCity, which used memory right after it was freed.

All of these game-specific hacks (I almost called them optimizations) lead to non-obvious fast-paths. What vertex-format should I use? As an application developer, I don't know. Well, I sort of know, but when killer-next-gen-game comes out, which uses double-precision texture coordinates, should I also switch to make my application run faster? Introducing a low-level rendering API would fix many of these problems and remove the bloat from the drivers.

Another interesting topic in this discussion was why closed-source drivers are higher quality than open-source drivers. The reasons are quite understandable: closed-source drivers can hide hardware bugs, hide intellectual property that isn't patented, and hide third-party code. It also sounds like there are far fewer open-source developers (40-to-1?), and changes in the Linux kernel can affect the drivers.

Every year, the Beyond Programmable Shading course ends with a thought-provoking panel, but this panel was by far the best one!

Full SIGGRAPH Trip Report: day one | two | three | four | five

Wednesday, August 10, 2011

SIGGRAPH 2011 Trip Report: Day Two

I started off today at some of the NVIDIA exhibitor sessions. In the OpenGL & CUDA-Based Tessellation talk, Philippe Rollin made an excellent point about tessellation shaders: the result can be cached using transform-feedback, and used for several frames. This is just one of the many examples of the synergy among recent hardware features. He also convinced me that tessellation can be useful for real-world terrain data by reducing the amount of preprocessing - and everyone hates preprocessing! In the final part of this talk, Miguel Ortega talked about tessellation in Thor. An interesting stat he mentioned is that the biggest asset used 900 4K textures - wow! Movies are quite a bit different than real-time rendering.

I also stayed for the Parallel Nsight 2.0 and CUDA 4.0 for the Win talk by Jeff Kiel. Parallel Nsight has some very impressive Direct3D debugging capabilities, and I'm looking forward to full OpenGL support. I will be ecstatic when I can set breakpoints in a shader. Parallel Nsight is also a great tool for GPU Compute debugging. I want to work this into our GPU course, but requiring two GPUs will call for some careful logistics. However, it will run on some laptops.

I spent some time in the Advances in Real-Time Rendering course, but I spent the bulk of my afternoon in the How to Write Fast iPhone and Android Shaders in Unity Studio Workshop by Aras Pranckevičius and Renaldas Zioma. So far, this was my favorite talk of the conference. It was full of battle-won tips on optimizing shaders for mobile platforms. I haven't done any mobile development yet, but it sounds messy with all the different architectures. For example, on some architectures you should pack varyings into a vec4, and on others you shouldn't. Some architectures scale better than others as more ALU instructions are used. Some architectures care what precision qualifier you use (lowp, mediump, highp), some don't, and some are slow when swizzling lowp precision variables.

Some themes were uniform across all architectures though: baking lighting into textures to avoid heavy ALU instructions like the pow() function in specular lighting; combining several post-processing passes into a single pass to save fill rate; and pragmatic front-to-back rendering for early-z, e.g., render the large player, followed by the environment, followed by enemies which are likely occluded, and finally the skybox. I really enjoyed how this talk was realistic, and even mentioned the reality of optimizations not working and tools crashing. These things happen to me too, and I always tell my friends, "so you really want to be a graphics developer?!?!?" I hope they give a similar talk next year, and that SIGGRAPH gives them a bigger room with more seats.

Full SIGGRAPH Trip Report: day one | two | three | four | five

Monday, August 8, 2011

SIGGRAPH 2011 Trip Report: Day One

Vancouver is awesome. It is clean; the people are unbelievably nice; and the convention center is right on the water and easy to navigate. Let's stop going to LA every other year, and start going to Vancouver!

I spent the afternoon in the Introduction to Modern OpenGL Programming course taught by Edward Angel and Dave Shreiner. It was packed, which shows how important OpenGL has become, especially given OpenGL ES and WebGL. I've been a big fan of this course since I first attended it in 2008. They've done a great job of keeping it up to date. It is a nice introduction to OpenGL, including VBOs, VAOs, GLSL, uniforms, transforms, lighting, and texture mapping. They briefly covered tessellation and geometry shaders, but time was tight. I'm looking forward to this class being a full day in the future.

I usually don't make it to the paper sessions, but I always attend the papers fast forward to get an overview of the research being presented. Even the overflow room was packed this year. One paper that jumped out at me was HDR-VDP-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions by Rafal Mantiuk et al. It describes an algorithm for comparing images. They suggest use cases like determining the quality loss due to compression. I am interested in it for a much simpler use case: unit tests. That is, I want to compare images rendered by our 3D engine on different hardware and different drivers, and I don't want false failures for slight differences. This is a surprisingly hard problem to solve, and at some point, I'd like to look into HDR-VDP for doing so.

A tip for SIGGRAPH attendees: you can buy merchandise from previous SIGGRAPHs at a steep discount at the SIGGRAPH Store. I bought two tee-shirts for $3 each. They also have things like polo shirts, hats, and coffee mugs for dirt-cheap.

One more tip: there is a separate registration line for contributors that is much shorter than the normal line.

Full SIGGRAPH Trip Report: day one | two | three | four | five

Saturday, August 6, 2011

The Passionate Programmer Review

Jon McCaffrey's review of The Passionate Programmer by Chad Fowler motivated me to read it myself. I used to read lots of books like these, but over the past six or seven years, I’ve buried myself so deep in graphics that I've read very little general development books. It’s time to change that.

The Passionate Programmer is well worth a read. At about 200 pages, it is short, readable, and inspiring. It contains 53 bite-size chapters on being productive, marketing yourself, staying sharp, etc. I agree with the vast majority of the book's advice. It is a particularly great read for students and new developers; it will get them in the proper mindset to be awesome developers, as Chad would say. If you are more experienced, you are probably already doing many things in this book; if not, you should consider them.

One of my favorite chapters is 4. Be the Worst, which suggests surrounding yourself with the best developers you can, because doing so makes you grow faster and perform better. I couldn't agree more!

You would think that 14. Be a Mentor contradicts the advice of being the worst, but it does not. It is important to be a mentor to new developers because teaching is one of the best ways to learn. Do I really know how the automated build and test system works? I will once I try to explain it to someone.

Being a mentor also helps a team gel and integrate new developers quickly. If there is one thing our industry needs to get better at, it is mentoring. Electricians, plumbers, and tattoo artists do apprenticeships. Software developers do not. How often do we sit down with the experts in our development teams and learn from them? Probably never, or not often enough at best.

In 28. Eight-Hour Burn, Chad argues to only work eight-hour days but work with the utmost intensity. Working longer days leads to burning out, wasting time, and less overall long-term productivity. I agree; however, I am a hypocrite. Given my outside writing and teaching activities, I am consistently over 70 hours a week, and sometimes much higher. The only justification I have is writing and teaching are different enough from developing that the burn out isn’t as severe. I know I can't maintain this pace forever though.

Perhaps 39. Let Your Voice Be Heard is my favorite piece of advice. Chad suggests thinking beyond your current employer and contributing to the industry as a whole, first with a weblog (I don't know why he didn't just say blog), and eventually through publications and presentations. I really like this advice because there are a lot of sharp people in our field, and it is great to have everyone share ideas.

I don't think any advice in the book is terrible, but some needs to be put into perspective. For example, in 27. Learn to Love Maintenance, it is argued that maintenance development can allow for freedom, creativity, and direct customer interaction. For old, large applications, this isn't always true. If you are working on a twenty-year-old, multi-million-line application that has seen the hands of hundreds of developers at various stages in their careers, maintenance is often a test of patience. For example, do we really want to continue to design legacy APIs using COM? Is making our code const correct easy when all the code we call is not? Do we really want to integrate old code using a struct for vectors with new code using a class? No; we want to build our skills using modern technology and techniques.

I advise interns and new grads to get in on the ground floor of a new project. They will be given the best opportunity and the most responsibility. They will see the big picture, and will accelerate their skills faster than being bogged down by long build times, legacy code, and legacy mistakes. The contractor who built the house learned much more than the contractor who remodeled the bathroom. With all this said, if you get an opportunity to work on a large, outstanding piece of software - say the Linux kernel, for example - you should go for it.

One final comment: this book is a revised edition of the book My Job Went to India: 52 Ways to Save Your Job. Naming a book is really important, just like naming software, classes, functions, variables, etc. With the original title, I would not have paid this book any attention. The revised title – and I assume content – made it really appealing. Naming is hard.

Overall, The Passionate Programmer is an inspiring, worthwhile read.

Tuesday, July 5, 2011

Book Signing and More at SIGGRAPH 2011

Stop by the A K Peters / CRC Press booth at the SIGGRAPH exhibition on Tuesday, 08/09/2011, from 1:15-2pm for our virtual globe book signing. Even if you don't have (or want) a copy of our book, stop by to chat about anything related: OpenGL, 3D engine design, computer science education, etc. If you have a copy you want me to sign but can't make the signing, email me, and we'll meet up sometime during the conference. If you're interested in submitting a proposal for OpenGL Insights, also email me if you want to chat during SIGGRAPH.

If you really want to stalk me, and I fully encourage it, stop by the poster session either right before the book signing on Tuesday, 12:15-1pm, or on Wednesday, also 12:15-1pm. I will be presenting a poster I worked on with Deron Ohlarik called A screen-space approach to rendering polylines on terrain. We are quite happy with the technique and are even using it in commercial code. We are also honored that it was mentioned on the Real-Time Rendering blog (full disclaimer: Eric reviewed the abstract before we submitted it. He also reviewed the abstract for the poster I presented last year. Man, I owe him).

In other book-related news, Brano Kemen from Outerra wrote the first book review we are aware of, and there has been some book-related discussion on the NASA World Wind forums. We're excited that a lot of people are excited about our work.

Saturday, July 2, 2011

Virtual Globe Book Updates

We recently added several things to our book's website, including a zip of the example code with build instructions for Windows and Linux (Ubuntu); a zip of most of the book's figures, available under fair use; the bibliography with hyperlinks that all work as of today; and the back cover:

Posting the figures and bibliography is a great idea we borrowed from Real-Time Rendering, who has set the bar for book websites quite high.

More exciting news: Amazon says our book is "in stock but may require an extra 1-2 days to process." Hopefully it is shipping by the time you read this.

Friday, June 24, 2011

Resume Tips for Computer Science Students

Here are some resume tips I've learned over the years. I've interviewed about 100 computer science candidates, mostly interns, and have been a technical recruiter at least a dozen times at university career fairs. I've seen thousands of resumes, and have hopefully learned some things you will find useful. For the most part, I've left out obvious recommendations and, instead, focused on tips you may not have heard - or heard from the same point of view. In my youth, I was heavily influenced by Joel, so I probably borrowed some of these ideas from him.

1. Backup your Buzzwords

Almost every computer science resume has a section that looks like:
Skills:  C, C++, C#, Java, .NET, WPF, OpenGL, Direct3D, GLSL, Visual Studio, Eclipse, NUnit, JUnit, NAnt, ...
I'm tempted to recommend removing this section entirely. It does have some uses, though; it is good for Google searches and HR/manager buzzword hunters. It can also paint a quick profile: Is this person a client-side graphics developer? A server-side PHP/MySQL developer? An architect astronaut?

In order to play the game with the less technical folks, I am OK with candidates including this section, but these skills need to be backed up throughout the resume. For example, if OpenCL is listed but I don't see any evidence elsewhere in the resume, I will be skeptical. If I ask about it in an interview and the candidate doesn't have any experience or even exposure to the topic, they have just lost integrity points.

If you list something in the skills section, make sure it is proven elsewhere in your resume, e.g., "Implemented a GPU-accelerated cloth simulation using OpenCL, which resulted in a 30x performance improvement over a multithreaded C++ version." The buzzword hunters will give you extra points for having multiple instances of the buzzword, and technical folks are more likely to believe you.

2. List Coursework Strategically

This motorcycle has nothing to do with resumes, but it sure is fun to ride.
For a student or recent graduate, it is a great idea to include a "Selected Courses" or, better yet, "Favorite Courses" section that highlights some unique courses you have taken. It is an even better idea to tailor this section, and your resume in general, to each employer. Applying to Google search? Include Distributed Systems and Machine Learning. Applying to Pixar? List Computer Graphics and Physically-Based Animation.

Do not list a course that you cannot discuss intelligently. You should have a reasonable grasp of the subject and a story about a related project. For example, I took machine learning in grad school. I got an A and somehow even passed the PhD qualifier, but I can't do machine learning to save my life. I can pronounce fancy algorithms like boosting and principal component analysis, but I can barely tell you the difference between them - not something worth discussing in an interview.

Do not list required courses! This is common with underclassmen, but I still see it done by upperclassmen who have more interesting things to list. If I know your major is computer science and you go to a reputable school, I also know you took intro to programming, calculus, and discrete math. Don't list them; they get in the way of the good stuff.

Sunday, May 22, 2011

OpenGL Insights: Call for Authors

Christophe Riccio and I are starting an exciting new book project. Please join us:

It is with great enthusiasm that we invite you to contribute to OpenGL Insights, a book containing original articles on OpenGL, OpenGL ES, and WebGL techniques by the OpenGL community and for the OpenGL community: from game programmers to web developers to researchers. OpenGL Insights will be published by A K Peters, Ltd. / CRC Press in time for SIGGRAPH 2012.

Monday, May 16, 2011

Book Update

We have some exciting book-related updates. For starters, we made a small website for our book, and posted several samples:
We also found our book on the cover of the latest CRC Press catalog. Exciting! Finally, we wrote a short book-related article in our company's newsletter.

Things are wrapping up nicely and I suspect the manuscript will be off to the printer any day now.

Sunday, May 8, 2011

Reflections on Teaching GPU Programming and Architecture

I just finished teaching CIS 565: GPU Programming and Architecture, a graduate-level course on programming GPUs using mostly GLSL and CUDA, at the University of Pennsylvania. As this was my first time teaching, I gained a lot of valuable insights, which I will share here. These ideas are about teaching computer science in general, not necessarily about the GPU.

For eye candy, I sprinkled screenshots of our student projects throughout this post.

1. Exams have a hidden agenda.

At least mine do. On the surface, exams are used to evaluate students and, ultimately, assign grades. Historically, this course never had exams. Given its pragmatic, hands-on material, grades were primarily based on coding assignments and a final project.

Screen Space Fluid Rendering - Leftheris Kaleas
This year I added a final - not because I wanted to evaluate students on their ability to think about the GPU under pressure, but because I wanted them to study for the final. It is a valuable exercise to go over material a second time, especially after we've had hands-on experience, to gain new insights. For example, the performance implications of register pressure will seem trivial once the student has some coding experience, but may have been foreign when first introduced. Likewise, going back and looking at GLSL after learning CUDA will lead to a deeper understanding of GLSL.

In general, one reason people go to grad school is to gain a deeper understanding of topics they already have some exposure to. Studying for a final has the same effect on a more local scale.

So if I am only interested in students studying for a final, but not so much in how they do, what did I do? Two things. First, I made the final only worth 10% of the semester grade. That is not enough to ruin someone's grade if they perform poorly, but it can be the difference between an A and a B. The second technique I use is a bit more interesting...

2. Crowdsourcing the final is fun.

To help students study, I requested they make a ten question exam (and answer it!) with questions of different types, e.g., short answer, coding, etc., and of different difficulties, e.g., easy, medium, hard, and challenge. I made this "take-home" portion of their final worth 10%. It is a good studying technique to get in the instructor's head and think about what they are going to ask. I also joked that I might take all of their exams in the time they take my final.

After I handed out the final, I told the class that their exams were so good and hard that I wouldn't be able to take them while they took my final. In fact, I didn't even make their final - they did. Every single question on their final was from one of their exams. The class got a big kick out of it.

When looking through the student-made exams, I was simply amazed at the collective breadth and depth of knowledge of the class. It was also interesting to see different students with different specialties - each exam covered diverse topics, but some were biased towards GPU architecture, while others were heavy on GPU compute, and others favored rendering. Overall, the quality of their questions was quite good. Some of the exams were really hard too - harder than I made the final.

Am I nervous that a student taking this course in the future will read this post or hear from previous students how the final is made? Maybe the entire class will self-organize and send their exams to each other before the final. If a student reviewed every exam, they would know every question that will be on the final. Is that cheating? I don't know, but students rigorously studying for a final sounds like a great problem to have. After all, the main point of my final is to have them revisit the material.

Mobile Depth of Field as Post-Processing using OpenGL ES 2.0 - Han Li and Qing Sun

Monday, April 4, 2011

Under the Hood of Virtual Globes

Update: We have posted a pdf of our slides on our book's website.

We are teaching a half-day course, Under the Hood of Virtual Globes, at COM.Geo this May in Washington D.C. COM.Geo is a conference on computing for geospatial research and applications. Looks like a number of interesting papers were accepted.

We'll post slides from our course when we finish making them; in the meantime, here is the course outline.

Tuesday, March 29, 2011

GPU Course Projects

This semester I am teaching GPU Programming and Architecture at the University of Pennsylvania. My students have chosen some interesting projects ranging from rendering to simulation to GPU computing and beyond. Here are their blogs:
I'm sure they would love to hear ideas from outsiders so don't be shy!

Wednesday, March 9, 2011

Our Book Cover

We are pumped that the cover for our book is ready:

When you work out a book contract, one thing you can negotiate is a say in the cover design. We did not negotiate for this - thankfully, we didn't need to. Our publisher was nice enough to take our suggested cover pretty much as is. We, of course, can't take credit for the work; I would be quite embarrassed to reveal the original artwork I created. Instead, two of our fellow AGIers, Fran Kelly and Jason Martin, did a bang-up job designing the cover and creating the artwork. Even cooler, they used a product we work on, STK, to create the visuals. Total overkill perhaps, but cool nonetheless.

Friday, March 4, 2011

GLSL Engine Uniforms Revisited

Game Engine Gems 2
A few months back, I mentioned my two articles in Game Engine Gems 2. Now that the book is available, I want to revisit a piece of advice in one article, A Framework for GLSL Engine Uniforms.

This article is about implementing a framework for GLSL uniforms that are automatically set by the engine. A shader author simply declares the uniform to use it. Since the engine has a list of names for all engine uniforms, it can identify which engine uniforms a shader uses, and track them accordingly. A shader author is assured that the engine will update a uniform to the appropriate value before the shader is invoked. In addition to standard stuff like transformation matrices, these uniforms are also used for more engine-specific values, e.g., time, sun position, etc.

At the end of the article I suggest:

If you are up for parsing GLSL code, you could also eliminate the need for shader authors to declare engine uniforms by carefully searching the shader's source for them. This task is nontrivial considering preprocessor transformations, multiline comments, strings, and compiler optimizations that could eliminate uniforms altogether.

Well, I should have thought more about this, because you do not need to parse GLSL for uniform names to eliminate the need to declare engine uniforms. Fortunately, it is much easier: since glShaderSource takes an array of strings, just have one string contain the declarations for all the engine uniforms and let the compiler throw out the unused ones.

I've used this approach for a few months now and am quite happy with it. At first, I was not sure about how useful it would be, but I've found it makes copying and pasting code among shaders much easier because I don't have to remember to declare engine uniforms.

Thursday, February 3, 2011

Syntax highlighting C# and GLSL source code with LaTeX and the 'Listings' package

We wanted to syntax highlight the source code listings in our book in order to make them easier to read. Unfortunately, the languages used for most of the listings, C# and GLSL, are not supported out of the box by the LaTeX Listings package. What to do?

Well, C# and GLSL both have roots in the C language, so we started out tagging our code listings as C++ in order to get some minimal highlighting. The results aren't very good with that approach. Keywords that are shared with C++, like float and return, are nicely highlighted, but not so with GLSL-specific keywords like vec3. And wouldn't it be nice if built-in GLSL functions like sqrt and cos were highlighted, like they are in NShader?

I should mention before I get too far that I'm far from a LaTeX expert. What I'm about to describe worked for me, but please let me know if I'm doing something odd.

C# and GLSL language definitions

The Listings LaTeX package makes it fairly easy to define new languages. Several language definitions for C# can be found around the web. Here's ours:
\lstdefinelanguage{CSharp}{
  morekeywords={
    abstract, as, base, break, case,
    catch, checked, class, const, continue,
    default, delegate, do, else, enum,
    event, explicit, extern, false,
    finally, fixed, for, foreach, goto, if,
    implicit, in, interface, internal, is,
    lock, namespace, new, null, operator,
    out, override, params, private,
    protected, public, readonly, ref,
    return, sealed, sizeof, stackalloc,
    static, struct, switch, this, throw,
    true, try, typeof, unchecked, unsafe,
    using, virtual, volatile, while, bool,
    byte, char, decimal, double, float,
    int, object, sbyte, short, string,
    uint, ulong, ushort, void},
}
Surprisingly, we weren't able to find a language definition for GLSL. It was pretty easy to put one together based on the GLSL spec, though:
\lstdefinelanguage{GLSL}{
  morekeywords={
    attribute, const, uniform, varying,
    layout, centroid, flat, smooth,
    noperspective, break, continue, do,
    for, while, switch, case, default, if,
    else, in, out, inout, float, int, void,
    bool, true, false, invariant, discard,
    return, mat2, mat3, mat4, mat2x2, mat2x3,
    mat2x4, mat3x2, mat3x3, mat3x4, mat4x2,
    mat4x3, mat4x4, vec2, vec3, vec4, ivec2,
    ivec3, ivec4, bvec2, bvec3, bvec4, uint,
    uvec2, uvec3, uvec4, lowp, mediump, highp,
    precision, sampler1D, sampler2D, sampler3D,
    samplerCube, sampler1DShadow,
    sampler2DShadow, samplerCubeShadow,
    sampler1DArray, sampler2DArray,
    sampler1DArrayShadow, sampler2DArrayShadow,
    isampler1D, isampler2D, isampler3D,
    isamplerCube, isampler1DArray,
    isampler2DArray, usampler1D, usampler2D,
    usampler3D, usamplerCube, usampler1DArray,
    usampler2DArray, sampler2DRect,
    sampler2DRectShadow, isampler2DRect,
    usampler2DRect, samplerBuffer,
    isamplerBuffer, usamplerBuffer, sampler2DMS,
    isampler2DMS, usampler2DMS,
    sampler2DMSArray, isampler2DMSArray,
    usampler2DMSArray, struct},
}
GLSL has a ton of keywords, especially since we've included the built-in functions and variables in separate keyword lists so that they can be highlighted separately.

Coloring the source code

Now that the languages are defined, we can specify how they are highlighted. The color scheme presented here is loosely based on Visual Studio and NShader:
\lstset{
  backgroundcolor=\color[rgb]{0.95, 0.95, 0.95},
  % ...
  prebreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}},
}
I was surprised to learn that the Listings package does not allow styles to be defined per language; a single style is applied to all languages. This is not as limiting as it first appears, however, because we can define multiple groups of keywords and specify a style for each group individually.
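For instance, each keyword group can get its own color within the one global style (a minimal sketch; the colors here are assumptions, not our book's actual palette):

```latex
\lstset{
  keywordstyle=\color{blue},       % group 1: language keywords
  keywordstyle=[2]\color{teal},    % group 2: built-in functions
  keywordstyle=[3]\color{violet},  % group 3: user-defined type names
}
```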

With the languages and style defined, we can write LaTeX code like this:
\begin{lstlisting}[language=GLSL]
// Vertex shader
in vec4 position;
uniform mat4 og_modelViewPerspectiveMatrix;

void main()
{
    gl_Position = og_modelViewPerspectiveMatrix * position;
}

// Fragment shader
out vec3 fragmentColor;
void main() { fragmentColor = vec3(0.0, 0.0, 0.0); }
\end{lstlisting}
To generate a nice listing like this:

Highlighting type names

You may have noticed that the Visual Studio editor highlights the names of classes, structs, interfaces, and enums. How can we do that with LaTeX? LaTeX, of course, has no idea which identifiers in our source listings are class names and which are the names of variables, methods, etc., so we have to tell it. We can do that by specifying, for each listing, the identifiers that belong to a keyword group:
\begin{lstlisting}[language=CSharp, classoffset=2, morekeywords={GL, EnableCap}]
GL.Enable(EnableCap.DepthTest);
// ... Set other states
\end{lstlisting}
Which produces the following:

Notice that GL and EnableCap are highlighted because they have been explicitly listed, using classoffset and morekeywords, as belonging to keyword group 3 - an offset of 2 from the base.


The Listings package has a few limitations that we were unable to work around:
  • Highlighting is entirely keyword based and cannot take context into account. It's fairly common in C# to have a single identifier that is a class name in one context and a property name in another context. If both usages of the identifier occur in a single listing, there does not appear to be any way to highlight the identifier only where it refers to a class name. Either it's highlighted everywhere, or nowhere. Update 2011/05/27: I found a way to do this after all. See below.
  • The Listings package allows us to specify a keyword prefix so that all keywords starting with a character sequence are highlighted. Unfortunately, only a single prefix can be specified. It would have been helpful to highlight all identifiers in GLSL listings that start with gl or og.
  • There does not appear to be a way to highlight numeric literals (such as 1.23) or operators (such as ==).
If you know a solution to any of these problems, please let me know!

Update 2011/05/27

I wrote above that I was unable to find a way to highlight some occurrences of an identifier but not others within a single listing. It turns out there is a way after all. The trick is to insert a do-nothing "escape to LaTeX" somewhere in the middle of the occurrence of the identifier that you don't want highlighted.

First, define an escapechar. It can be any character you want, so long as the character will not otherwise occur in your listings. Here I've used backtick (`) as my escapechar:
\lstset{escapechar=`}
Then write your listing like this:
\begin{lstlisting}[language=CSharp, caption={RenderState Properties.},classoffset=2,morekeywords={RenderState,PrimitiveRestart,FacetCulling,RasterizationMode,ScissorTest,StencilTest,DepthTest,DepthRange,Blending,ColorMask}]
public class RenderState
{
    public PrimitiveRestart P``rimitiveRestart { get; set; }
    public FacetCulling F``acetCulling { get; set; }
    public RasterizationMode R``asterizationMode { get; set; }
    public ScissorTest S``cissorTest { get; set; }
    public StencilTest S``tencilTest { get; set; }
    public DepthTest D``epthTest { get; set; }
    public DepthRange D``epthRange { get; set; }
    public Blending B``lending { get; set; }
    public ColorMask C``olorMask { get; set; }
    public bool DepthMask { get; set; }
}
\end{lstlisting}
Notice the double backticks (``) in the names of the properties. The first backtick escapes to LaTeX mode, and the second returns to listing mode. You could include LaTeX commands between the backticks, but there's no need; the interruption alone is enough to cause the identifier to not be highlighted. LaTeX will render the listing above like this:

Sunday, January 2, 2011

Manuscript Submitted

Kevin and I submitted our manuscript today. Writing this book was even harder than I thought. It was particularly time consuming because of the amount of code we also wrote. It was great fun though, and all the reviewers kept us engaged.

We'd like to post some samples in the next month or two. Check out the

  Table of contents [pdf]

and let us know what we should post by leaving a comment or emailing us:

Also, those readers subscribed to our blog may have noticed a draft of this post in your reader a week ago. Sorry, I scheduled the post for 2010 instead of 2011!