I already posted this in #gosu a couple of days ago, but it's something I've been working on lately.
It's an implementation of Normal Mapping
for libgosu, using it to "fake" lighting and detail on ordinary images. (See the video at the end of the post; you actually can't see the effect in a screenshot.)
The current implementation needs three passes:
1. Render the normal map of an image (or multiple images) to a texture
2. Render the images that are to be "normal mapped" to a texture
3. Render everything with the vertex/pixel shader enabled
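To give an idea of what the pixel shader in pass 3 computes, here is a minimal pure-Ruby sketch of the core normal-mapping math (this is an illustration of the general technique, not the actual shader code from my extension): the normal map's RGB channels are decoded into a unit normal, and a Lambertian "N dot L" term then scales the brightness of each texel.

```ruby
# Decode an RGB texel (0..255 per channel) into a unit normal vector.
# Normal maps store each component remapped from [-1, 1] to [0, 255].
def decode_normal(r, g, b)
  n = [r, g, b].map { |c| c / 255.0 * 2.0 - 1.0 }
  len = Math.sqrt(n.sum { |c| c * c })
  n.map { |c| c / len }
end

# Lambertian diffuse term: how strongly the light hits this texel.
def diffuse(normal, light_dir)
  dot = normal.zip(light_dir).sum { |a, b| a * b }
  [dot, 0.0].max # clamp: texels facing away from the light stay dark
end

# The typical "light blue" normal-map color (128, 128, 255) is a normal
# pointing straight out of the image; lit head-on it gets full brightness.
flat = decode_normal(128, 128, 255)
diffuse(flat, [0.0, 0.0, 1.0]) # ~1.0
```

In the real shader this runs per fragment on the GPU, sampling the texture from pass 1 for the normal and the texture from pass 2 for the base color.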
To do this I had to extend my PostProcessing-Framework (I should probably rename it) to handle custom vertex shaders as well. I also still need to think of an easy and fast way to combine normal drawing in Gosu with normal mapping.
(At the moment, anything you draw before pass 1 will no longer be visible.)
Unfortunately, developers still have to manage the z-ordering of normal-mapped images themselves, just as with my current implementation of fragment shaders. (It's not possible for extensions to interact with the DrawOpQueue.)

Video demonstration of normal mapping (YouTube)