Ebook: Computer Architecture: A Quantitative Approach
We said the fourth edition of Computer Architecture: A Quantitative Approach may have been the most significant since the first edition due to the switch to multicore chips. The feedback we received this time was that the book had lost the sharp focus of the first edition, covering everything equally but without emphasis and context. We're pretty sure that won't be said about the fifth edition.
We believe most of the excitement is at the extremes in size of computing, with personal mobile devices (PMDs) such as cell phones and tablets as the clients and warehouse-scale computers offering cloud computing as the server. (Observant readers may have seen the hint for cloud computing on the cover.) We are struck by the common theme of these two extremes in cost, performance, and energy efficiency despite their difference in size. As a result, the running context through each chapter is computing for PMDs and for warehouse-scale computers, and Chapter 6 is a brand-new chapter on the latter topic.
The other theme is parallelism in all its forms. We first identify the two types of application-level parallelism in Chapter 1: data-level parallelism (DLP), which arises because there are many data items that can be operated on at the same time, and task-level parallelism (TLP), which arises because tasks of work are created that can operate independently and largely in parallel. We then explain the four architectural styles that exploit DLP and TLP: instruction-level parallelism (ILP) in Chapter 3; vector architectures and graphics processing units (GPUs) in Chapter 4, which is a brand-new chapter for this edition; thread-level parallelism in Chapter 5; and request-level parallelism (RLP) via warehouse-scale computers in Chapter 6, which is also a brand-new chapter for this edition. We moved the memory hierarchy earlier in the book to Chapter 2, and we moved the storage systems chapter to Appendix D. We are particularly proud of Chapter 4, which contains the most detailed and clearest explanation of GPUs yet, and of Chapter 6, which is the first publication of the most recent details of a Google warehouse-scale computer.
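To make the DLP/TLP distinction concrete, here is a minimal sketch (ours, not from the book) using Python's standard concurrent.futures module: applying the same operation across many data items illustrates data-level parallelism, while running several unrelated, independent tasks side by side illustrates task-level parallelism. The square and summarize functions are hypothetical stand-ins for real work.

```python
# Illustrative sketch only: contrasting data-level parallelism (DLP)
# and task-level parallelism (TLP) with Python's standard library.
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # One element-wise operation; applying it to many data items at once
    # is data-level parallelism.
    return x * x

def summarize(task_name):
    # A hypothetical, self-contained unit of work; running many such
    # independent tasks side by side is task-level parallelism.
    return f"finished {task_name}"

if __name__ == "__main__":
    data = list(range(8))
    tasks = ["sales report", "inventory check", "payroll run"]

    with ProcessPoolExecutor() as pool:
        # DLP: the same operation over many independent data items.
        squares = list(pool.map(square, data))
        # TLP: distinct tasks that can proceed independently.
        results = list(pool.map(summarize, tasks))

    print(squares)   # [0, 1, 4, 9, 16, 25, 36, 49]
    print(results)
```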