"Again, this was a heavily CPU based technique while the industry was trending towards being CPU-bound." -- isn't the industry now trending in the opposite direction? i.e., everything is becoming a CPU (and it has more and more cores)
GPUs are becoming more CPU-like. They are gaining more general-purpose instructions and supporting additional complexity in computations.
CPUs are becoming more GPU-like. They are increasingly pipelined and parallelized.
However, there are still fundamental differences between CPUs and GPUs that make them suited to different tasks. CPUs tend to support branch prediction (although not always; see game consoles) and a variety of threading and synchronization operations. GPUs gain much of their speed by making assumptions about side effects and causality in their calculations: branching is rarely, if ever, needed at runtime, and almost all of the operations performed by most applications can be encoded as vector operations.
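To make the "encode it as vector operations" point concrete, here is a minimal sketch (plain Python, not tied to any real GPU ISA) of how a per-element branch can be rewritten as a branch-free arithmetic select — the form in which every element follows the same instruction stream, which is what lets GPU hardware run the whole vector in lockstep:

```python
# Hypothetical illustration: rewriting a branch as a select.
# relu_branching uses one conditional per element (CPU-style);
# relu_branchless encodes the condition as a 0/1 mask and
# multiplies, so no element ever diverges (GPU-style).

def relu_branching(xs):
    # One branch per element.
    return [x if x > 0.0 else 0.0 for x in xs]

def relu_branchless(xs):
    # float(x > 0.0) is 1.0 or 0.0 -- a mask, not a branch.
    return [float(x > 0.0) * x for x in xs]

data = [-2.0, -0.5, 0.0, 1.5, 3.0]
assert relu_branching(data) == relu_branchless(data)
```

The same mask-and-multiply pattern is what shader compilers often emit when they flatten small `if` statements into straight-line code.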
The future of powerful computers is probably a hybrid of various processing units, memory architectures, and special purpose hardware. We're going to need software abstractions to deal with this complexity.
Concerning software abstraction layers: What do you think of OpenCL? It's advertised as a way to utilise the combined computing power of multi-cores and GPUs. I am not at all an expert on this but it seems just too good to be true.
I think all of the C-style languages are doomed in the increasingly parallelized world. I don't know much about OpenCL specifically, but all of the HLSL/Cg/GLSL derivatives seem woefully unprepared for tasks that are not "embarrassingly parallel". Some of the newer derivatives might be capable of solving tougher problems, but that doesn't mean they are good at it.
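For readers unfamiliar with the term: "embarrassingly parallel" means each output depends on exactly one input, so all elements can be computed independently and in any order. A quick sketch (plain Python, names are mine) of the distinction the comment is drawing:

```python
# Embarrassingly parallel: each output element depends on one
# input element only, so a per-element kernel suffices.
def scale(xs, k):
    return [k * x for x in xs]

# NOT embarrassingly parallel: a prefix sum, where each output
# depends on the previous one. A naive per-element kernel cannot
# express this; it needs a parallel scan algorithm instead.
def prefix_sum(xs):
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out

assert scale([1, 2, 3], 2) == [2, 4, 6]
assert prefix_sum([1, 2, 3, 4]) == [1, 3, 6, 10]
```

Problems of the second kind are exactly where the simple shader-language model starts to strain.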