Parallel Programming in OpenMP

The rapid and widespread acceptance of shared-memory multiprocessor architectures has created a pressing demand for an efficient way to program these systems. At the same time, developers of technical and scientific applications in industry and in government…
3 out of 5 stars based on 2 reviews
5 stars: 0%
4 stars: 50%
3 stars: 0%
2 stars: 50%
1 star: 0%
Digital Copy (PDF format): 1 available for $99.99
Original Magazine (physical format): Sold Out

  • Parallel Programming in OpenMP
  • Written by Rohit Chandra
  • Published by Elsevier Science, October 2000
  • The rapid and widespread acceptance of shared-memory multiprocessor architectures has created a pressing demand for an efficient way to program these systems. At the same time, developers of technical and scientific applications in industry and in government…
  • "This book will provide a valuable resource for the OpenMP community." - Timothy G. Mattson, Intel Corporation
  • "This book has an important role to play in the HPC community - both for introducing practicing professionals to OpenMP and for educating students…"


Table of Contents

Foreword vii
Preface xiii
Chapter 1: Introduction 1
1.1 Performance with OpenMP 2
1.2 A First Glimpse of OpenMP 6
1.3 The OpenMP Parallel Computer 8
1.4 Why OpenMP? 9
1.5 History of OpenMP 13
1.6 Navigating the Rest of the Book 14
Chapter 2: Getting Started with OpenMP 15
2.1 Introduction 15
2.2 OpenMP from 10,000 Meters 16
2.2.1 OpenMP Compiler Directives or Pragmas 17
2.2.2 Parallel Control Structures 20
2.2.3 Communication and Data Environment 20
2.2.4 Synchronization 22
2.3 Parallelizing a Simple Loop 23
2.3.1 Runtime Execution Model of an OpenMP Program 24
2.3.2 Communication and Data Scoping 25
2.3.3 Synchronization in the Simple Loop Example 27
2.3.4 Final Words on the Simple Loop Example 28
2.4 A More Complicated Loop 29
2.5 Explicit Synchronization 32
2.6 The reduction Clause 35
2.7 Expressing Parallelism with Parallel Regions 36
2.8 Concluding Remarks 39
2.9 Exercises 40
Chapter 3: Exploiting Loop-Level Parallelism 41
3.1 Introduction 41
3.2 Form and Usage of the parallel do Directive 42
3.2.1 Clauses 43
3.2.2 Restrictions on Parallel Loops 44
3.3 Meaning of the parallel do Directive 46
3.3.1 Loop Nests and Parallelism 46
3.4 Controlling Data Sharing 47
3.4.1 General Properties of Data Scope Clauses 49
3.4.2 The shared Clause 50
3.4.3 The private Clause 51
3.4.4 Default Variable Scopes 53
3.4.5 Changing Default Scoping Rules 56
3.4.6 Parallelizing Reduction Operations 59
3.4.7 Private Variable Initialization and Finalization 63
3.5 Removing Data Dependences 65
3.5.1 Why Data Dependences Are a Problem 66
3.5.2 The First Step: Detection 67
3.5.3 The Second Step: Classification 71
3.5.4 The Third Step: Removal 73
3.5.5 Summary 81
3.6 Enhancing Performance 82
3.6.1 Ensuring Sufficient Work 82
3.6.2 Scheduling Loops to Balance the Load 85
3.6.3 Static and Dynamic Scheduling 86
3.6.4 Scheduling Options 86
3.6.5 Comparison of Runtime Scheduling Behavior 88
3.7 Concluding Remarks 90
3.8 Exercises 90
Chapter 4: Beyond Loop-Level Parallelism: Parallel Regions 93
4.1 Introduction 93
4.2 Form and Usage of the parallel Directive 94
4.2.1 Clauses on the parallel Directive 95
4.2.2 Restrictions on the parallel Directive 96
4.3 Meaning of the parallel Directive 97
4.3.1 Parallel Regions and SPMD-Style Parallelism 100
4.4 threadprivate Variables and the copyin Clause 100
4.4.1 The threadprivate Directive 103
4.4.2 The copyin Clause 106
4.5 Work-Sharing in Parallel Regions 108
4.5.1 A Parallel Task Queue 108
4.5.2 Dividing Work Based on Thread Number 109
4.5.3 Work-Sharing Constructs in OpenMP 111
4.6 Restrictions on Work-Sharing Constructs 119
4.6.1 Block Structure 119
4.6.2 Entry and Exit 120
4.6.3 Nesting of Work-Sharing Constructs 122
4.7 Orphaning of Work-Sharing Constructs 123
4.7.1 Data Scoping of Orphaned Constructs 125
4.7.2 Writing Code with Orphaned Work-Sharing Constructs 126
4.8 Nested Parallel Regions 126
4.8.1 Directive Nesting and Binding 129
4.9 Controlling Parallelism in an OpenMP Program 130
4.9.1 Dynamically Disabling the parallel Directives 130
4.9.2 Controlling the Number of Threads 131
4.9.3 Dynamic Threads 133
4.9.4 Runtime Library Calls and Environment Variables 135
4.10 Concluding Remarks 137
4.11 Exercises 138
Chapter 5: Synchronization 141
5.1 Introduction 141
5.2 Data Conflicts and the Need for Synchronization 142
5.2.1 Getting Rid of Data Races 143
5.2.2 Examples of Acceptable Data Races 144
5.2.3 Synchronization Mechanisms in OpenMP 146
5.3 Mutual Exclusion Synchronization 147
5.3.1 The Critical Section Directive 147
5.3.2 The atomic Directive 152
5.3.3 Runtime Library Lock Routines 155
5.4 Event Synchronization 157
5.4.1 Barriers 157
5.4.2 Ordered Sections 159
5.4.3 The master Directive 161
5.5 Custom Synchronization: Rolling Your Own 162
5.5.1 The flush Directive 163
5.6 Some Practical Considerations 165
5.7 Concluding Remarks 168
5.8 Exercises 168
Chapter 6: Performance 171
6.1 Introduction 171
6.2 Key Factors That Impact Performance 173
6.2.1 Coverage and Granularity 173
6.2.2 Load Balance 175
6.2.3 Locality 179
6.2.4 Synchronization 192
6.3 Performance-Tuning Methodology 198
6.4 Dynamic Threads 201
6.5 Bus-Based and NUMA Machines 204
6.6 Concluding Remarks 207
6.7 Exercises 207
Appendix A: A Quick Reference to OpenMP 211
References 217
Index 221

