although it's labelled and often quoted as a graphics book, the black book is really an optimisation manual in disguise imo.
the content on graphics is virtually non-existent, apart from outdated information about old hardware interfaces from before DirectX, OpenGL and any particularly friendly rendering APIs. early Windows had GDI and Win32, but we weren't yet at the point where programmers could throw resources around and have the hardware soak it up without destroying performance.
there is some discussion of VSD techniques, the Bresenham line algorithm etc., but these are also outdated - libraries draw lines for us today, and even AAA games can ship without caring much about culling, just eating the framerate hit...
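(for anyone who's never seen it, the bresenham algorithm in question is tiny - a rough python sketch of the standard integer error-term form, covering all octants:)

```python
def bresenham(x0, y0, x1, y1):
    # classic integer-only line rasterisation: step one pixel at a time,
    # tracking a scaled error term instead of doing any division
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:  # step in x
            err += dy
            x0 += sx
        if e2 <= dx:  # step in y
            err += dx
            y0 += sy
    return points
```

the point is no floats and no divides inside the loop - exactly the kind of thing that mattered on the hardware the book targets.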
i do still highly recommend reading it, just don't expect to learn anything useful about graphics. my main takeaway from this book was 'optimise by measuring, then experimenting, then measuring again to confirm the optimisation'. it's so obvious, but a large number of programmers i've worked with seem to prefer to guess rather than measure...
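to make that concrete: the loop is just measure, change one thing, measure again, and check the result didn't change. a throwaway python sketch using the stdlib timeit module (the two candidate functions are invented for illustration):

```python
import timeit

# two ways to sum squares of 0..9999; guessing which is faster is
# exactly what measuring replaces
def loop_version():
    total = 0
    for i in range(10000):
        total += i * i
    return total

def builtin_version():
    return sum(i * i for i in range(10000))

# measure both candidates...
t_loop = timeit.timeit(loop_version, number=100)
t_builtin = timeit.timeit(builtin_version, number=100)

# ...and confirm the "optimisation" changed nothing but the speed
assert loop_version() == builtin_version()
print(f"loop: {t_loop:.4f}s  builtin: {t_builtin:.4f}s")
```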
I actually started reading the book slowly a few days ago, and I'm now in Chapter 17. So far there hasn't been anything about graphics; it's been mostly low-level optimization and discussion of the differences between the 8086, 286, 386, and 486... yes, it's that outdated. Some ideas apply more generally (optimize your design and algorithms first, profile or measure, etc.), but I don't think you need a book to learn them, and they're close enough to common sense (for programmers) now anyway.
Having said that, I still find the book interesting, and I can't wait to get to the part where Mr. Abrash talks about the software rendering in Quake (which he worked on). It might be outdated information for most of you, but I'm genuinely interested in seeing how far you could push software rendering with modern CPUs; it's sad that progress on that front pretty much ended around the time Unreal was released. If it was good enough for games back in the 90s, why wouldn't it be good enough for games in 2014?
If someone else here happens to be interested in 90s software rendering, make sure you read this bit about the Thief engine.. :-)
You might want to look into Pixomatic... which I believe Mr. Abrash worked on until relatively recently. It's a very full-featured software renderer, and i especially love the small footprint - 255k lib + 4k alloc. (just the VSD on the last AAA game i worked on took 640k for PVS data alone, and my compatriots thought that was a small amount of memory - using whole megabytes for trivia is not uncommon these days, sadly)
there has also been work on rasterising depth buffers on the PS3 SPUs - but i'm not sure any of it is publicly available.
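the core of a depth-only rasteriser is surprisingly small - here's a toy python sketch using barycentric edge functions (illustrative only; i've never seen the SPU code, and real implementations work in SIMD over tiles, but the arithmetic is the same):

```python
# toy depth-only rasteriser: for each pixel inside a triangle,
# interpolate depth with barycentric weights and keep the nearest value
W, H = 16, 16
zbuf = [[float("inf")] * W for _ in range(H)]

def edge(ax, ay, bx, by, px, py):
    # twice the signed area of triangle (a, b, p) -- a barycentric weight
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def raster_depth(v0, v1, v2):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return  # degenerate triangle
    for y in range(H):
        for x in range(W):
            w0 = edge(x1, y1, x2, y2, x, y)
            w1 = edge(x2, y2, x0, y0, x, y)
            w2 = edge(x0, y0, x1, y1, x, y)
            # inside if all weights agree in sign (handles either winding)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                z = (w0 * z0 + w1 * z1 + w2 * z2) / area
                if z < zbuf[y][x]:
                    zbuf[y][x] = z  # nearer fragment wins

raster_depth((1, 1, 0.5), (14, 2, 0.5), (2, 14, 0.5))
```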
real-time 'software' rendering is not quite dead yet... :)
i'm generally of the opinion, though, that 'we' have lost a lot of knowledge. i taught myself C/C++ through quake 3 modding and the quake and quake 2 sources. that was invaluable coming into the modern games industry, where the barrier to entry is now extremely low and consuming megabytes or millions of cycles on trivia doesn't stop your game shipping...
> real-time 'software' rendering is not quite dead yet... :)
> i'm generally of the opinion though that 'we' have lost a lot of knowledge.
Yes, that is kind of what I meant. I know there are still people doing software rendering, and I know it's sometimes used to complement GPU-based rendering. And there are interesting projects out there. So, not quite dead - and I'm not even going to say "yet", because I don't think it's dying at all, quite the contrary.
Still, though, the state of the art has by and large ignored software rendering for many years now, with almost all advances and techniques being developed primarily on the GPU (sometimes with help from the CPU).
So I totally agree we've lost knowledge. I personally hope to regain that knowledge, maybe come up with something new, perhaps even push some boundaries... but we'll see. :-)
I started writing one and got discouraged when per-pixel Phong shading on the teapot ran at something like 10 FPS. A profiler showed I was bottlenecked by simple, seemingly unavoidable work like matrix multiplies in my shaders. Maybe the profiler was misleading, though... now I want to hack on it some more.
You might also want to look into Ingo Wald's thesis on realtime ray tracing. In a ray tracing tutorial, Jacco Bikker wrote that Wald was reporting speeds of several frames per second for scenes of thousands of polygons at a resolution of 1024x768 pixels, on a single 2.5 GHz laptop. That was written ten years ago, in 2004.
i can hint that this is not an impossible problem, even with a fairly high-res teapot :)
p.s. why are you doing any matrix multiplies in a shader? i guess these are per-pixel? per-vertex? even the per-vertex ones can be removed or reduced if you don't mind a little loss of generality and have e.g. a fixed viewpoint or a non-rotating teapot ;)
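to illustrate the kind of reduction i mean: since matrix multiplication is associative, you can fold proj * view * model into a single matrix once per frame, and then pay one transform per vertex instead of three. a rough python sketch (function names invented for illustration):

```python
def mat_mul(a, b):
    # 4x4 row-major matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    # apply a 4x4 matrix to a homogeneous (x, y, z, w) vertex
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def naive(verts, model, view, proj):
    # three matrix-vector multiplies per vertex, every vertex, every frame
    return [transform(proj, transform(view, transform(model, v)))
            for v in verts]

def hoisted(verts, model, view, proj):
    # one matrix-matrix multiply per frame, one transform per vertex
    mvp = mat_mul(mat_mul(proj, view), model)
    return [transform(mvp, v) for v in verts]
```

both give identical results by associativity; the saving grows with the vertex count.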
edit: as a note, the teapot mesh is a bit squashed - this was originally just such an optimisation. it was meant to be rendered rotating around the vertical axis on a 4:3 display, so the mesh has been deformed 3:4, where 3 is the vertical axis and 4 the two horizontal axes... this saves the aspect-ratio-correcting multiply when it's rendered out.
Thanks for the link on Thief rendering. I remember playing the demo (or, more accurately, watching my brother play the demo) and being scared to death, so we never bought it.
All that sneaking around scared me to death. I now play racing games. During the daytime.
I haven't followed progress recently, but one question that's received some research attention is how to make GPU raytracing work on dynamic scenes. Static scenes can be handled fairly well by preprocessing data into efficient spatial data structures like KD-trees or octrees. For example, here's an nVidia paper from 2009 that raytraces scenes made up of huge numbers of voxels, using a representation called a sparse voxel octree: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.156...
Last time I looked at this research, though, how to handle highly dynamic scenes efficiently was a bit of an open question. It seems to largely be a data-structure research problem.
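For a rough idea of why a sparse octree helps: child nodes are only allocated for occupied octants, so empty space costs nothing to store and is cheap to skip. A toy Python sketch (illustrative only - the paper's actual node layout is far more compact than Python objects):

```python
# toy sparse voxel octree over a power-of-two cube: children exist only
# where geometry exists, so querying empty space stops immediately
class Node:
    __slots__ = ("children", "solid")
    def __init__(self):
        self.children = {}   # octant index 0..7 -> Node, only if occupied
        self.solid = False   # set on unit-voxel leaves

def insert(node, x, y, z, size):
    # mark the unit voxel at integer coords (x, y, z) as solid
    if size == 1:
        node.solid = True
        return
    half = size // 2
    # one bit per axis picks which of the 8 sub-cubes the voxel is in
    octant = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
    child = node.children.setdefault(octant, Node())
    insert(child, x % half, y % half, z % half, half)

def query(node, x, y, z, size):
    # True if the unit voxel at (x, y, z) is solid
    if size == 1:
        return node.solid
    half = size // 2
    octant = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
    child = node.children.get(octant)
    if child is None:
        return False  # unallocated octant == empty space
    return query(child, x % half, y % half, z % half, half)
```

The dynamic-scene problem mentioned above is essentially that this structure is cheap to query but expensive to rebuild when the geometry moves every frame.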
https://github.com/jagregory/abrash-black-book