In Praise of Programming Massively Parallel Processors:
A Hands-on Approach
Parallel programming is about performance, for otherwise you’d write a
sequential program. For those interested in learning or teaching the topic,
a problem is where to find truly parallel hardware that can be dedicated to
the task, for it is difficult to see interesting speedups if it's shared or only
modestly parallel. One answer is graphics processing units (GPUs), which
can have hundreds of cores and are found in millions of desktop and laptop
computers. For those interested in the GPU path to parallel enlightenment,
this new book from David Kirk and Wen-mei Hwu is a godsend, as it intro-
duces CUDA, a C-like data parallel language, and Tesla, the architecture
of the current generation of NVIDIA GPUs. In addition to explaining the
language and the architecture, they define the nature of data parallel pro-
blems that run well on heterogeneous CPU-GPU hardware. More con-
cretely, two detailed case studies demonstrate speedups over CPU-only C
programs of 10X to 15X for naïve CUDA code and 45X to 105X for expertly
tuned versions. They conclude with a glimpse of the future by describing the
next generation of data parallel languages and architectures: OpenCL and
the NVIDIA Fermi GPU. This book is a valuable addition to the recently
reinvigorated parallel computing literature.
David Patterson
Director, The Parallel Computing Research Laboratory
Pardee Professor of Computer Science, U.C. Berkeley
Co-author of Computer Architecture: A Quantitative Approach
Written by two teaching pioneers, this book is the definitive practical refer-
ence on programming massively parallel processors—a true technological
gold mine. The hands-on learning included is cutting-edge, yet very read-
able. This is a most rewarding read for students, engineers and scientists
interested in using these computational resources to solve today's and
tomorrow’s hardest problems.
Nicolas Pinto
MIT, NVIDIA Fellow 2009
I have always admired