000 06369nam a22005655i 4500
001 u375091
003 SIRSI
005 20160812084307.0
007 cr nn 008mamaa
008 100906s2010 gw | s |||| 0|eng d
020 _a9783642156465
_9978-3-642-15646-5
040 _cMX-MeUAM
050 4 _aQA76.9.A43
082 0 4 _a005.1
_223
100 1 _aKeller, Rainer.
_eeditor.
245 1 0 _aRecent Advances in the Message Passing Interface
_h[electronic resource] :
_b17th European MPI Users’ Group Meeting, EuroMPI 2010, Stuttgart, Germany, September 12-15, 2010. Proceedings /
_cedited by Rainer Keller, Edgar Gabriel, Michael Resch, Jack Dongarra.
264 1 _aBerlin, Heidelberg :
_bSpringer Berlin Heidelberg,
_c2010.
300 _aXIV, 308 p. 120 illus.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aLecture Notes in Computer Science,
_x0302-9743 ;
_v6305
505 0 _aLarge Scale Systems -- A Scalable MPI_Comm_split Algorithm for Exascale Computing -- Enabling Concurrent Multithreaded MPI Communication on Multicore Petascale Systems -- Toward Performance Models of MPI Implementations for Understanding Application Scaling Issues -- PMI: A Scalable Parallel Process-Management Interface for Extreme-Scale Systems -- Run-Time Analysis and Instrumentation for Communication Overlap Potential -- Efficient MPI Support for Advanced Hybrid Programming Models -- Parallel Filesystems and I/O -- An HDF5 MPI Virtual File Driver for Parallel In-situ Post-processing -- Automated Tracing of I/O Stack -- MPI Datatype Marshalling: A Case Study in Datatype Equivalence -- Collective Operations -- Design of Kernel-Level Asynchronous Collective Communication -- Network Offloaded Hierarchical Collectives Using ConnectX-2’s CORE-Direct Capabilities -- An In-Place Algorithm for Irregular All-to-All Communication with Limited Memory -- Applications -- Massively Parallel Finite Element Programming -- Parallel Zero-Copy Algorithms for Fast Fourier Transform and Conjugate Gradient Using MPI Datatypes -- Parallel Chaining Algorithms -- MPI Internals (I) -- Precise Dynamic Analysis for Slack Elasticity: Adding Buffering without Adding Bugs -- Implementing MPI on Windows: Comparison with Common Approaches on Unix -- Compact and Efficient Implementation of the MPI Group Operations -- Characteristics of the Unexpected Message Queue of MPI Applications -- Fault Tolerance -- Dodging the Cost of Unavoidable Memory Copies in Message Logging Protocols -- Communication Target Selection for Replicated MPI Processes -- Transparent Redundant Computing with MPI -- Checkpoint/Restart-Enabled Parallel Debugging -- Best Paper Awards -- Load Balancing for Regular Meshes on SMPs with MPI -- Adaptive MPI Multirail Tuning for Non-uniform Input/Output Access -- Using Triggered Operations to Offload Collective Communication Operations -- MPI Internals (II) -- Second-Order 
Algorithmic Differentiation by Source Transformation of MPI Code -- Locality and Topology Aware Intra-node Communication among Multicore CPUs -- Transparent Neutral Element Elimination in MPI Reduction Operations -- Poster Abstracts -- Use Case Evaluation of the Proposed MPIT Configuration and Performance Interface -- Two Algorithms of Irregular Scatter/Gather Operations for Heterogeneous Platforms -- Measuring Execution Times of Collective Communications in an Empirical Optimization Framework -- Dynamic Verification of Hybrid Programs -- Challenges and Issues of Supporting Task Parallelism in MPI.
520 _aParallel computing is on the verge of a new era. Multi-core processors make parallel computing a fundamental skill required by all computer scientists. At the same time, high-end systems have surpassed the Petaflop barrier, and significant efforts are devoted to the development of hardware and software technologies for the next-generation Exascale systems. To reach this next stage, processor architectures, high-speed interconnects and programming models will go through dramatic changes. The Message Passing Interface (MPI) has been the most widespread programming model for parallel systems of today. A key question for upcoming Exascale systems is whether and how MPI has to evolve in order to meet the performance and productivity demands of Exascale systems. EuroMPI is the successor of the EuroPVM/MPI series, a flagship conference for this community, established as the premier international forum for researchers, users and vendors to present their latest advances in MPI and message passing systems in general. The 17th European MPI Users' Group Meeting was held in Stuttgart during September 12-15, 2010. The conference was organized by the High Performance Computing Center Stuttgart at the University of Stuttgart. The previous conferences were held in Espoo (2009), Dublin (2008), Paris (2007), Bonn (2006), Sorrento (2005), Budapest (2004), Venice (2003), Linz (2002), Santorini (2001), Balatonfured (2000), Barcelona (1999), Liverpool (1998), Krakow (1997), Munich (1996), Lyon (1995) and Rome (1994). The main topics of the conference were message-passing systems (especially MPI), and performance, scalability and reliability issues on very large scale systems.
650 0 _aComputer science.
650 0 _aComputer network architectures.
650 0 _aComputer Communication Networks.
650 0 _aSoftware engineering.
650 0 _aComputer software.
650 0 _aComputer simulation.
650 1 4 _aComputer Science.
650 2 4 _aAlgorithm Analysis and Problem Complexity.
650 2 4 _aComputer Communication Networks.
650 2 4 _aSoftware Engineering.
650 2 4 _aProgramming Techniques.
650 2 4 _aComputer Systems Organization and Communication Networks.
650 2 4 _aSimulation and Modeling.
700 1 _aGabriel, Edgar.
_eeditor.
700 1 _aResch, Michael.
_eeditor.
700 1 _aDongarra, Jack.
_eeditor.
710 2 _aSpringerLink (Online service)
773 0 _tSpringer eBooks
776 0 8 _iPrinted edition:
_z9783642156458
830 0 _aLecture Notes in Computer Science,
_x0302-9743 ;
_v6305
856 4 0 _zElectronic book
_uhttp://148.231.10.114:2048/login?url=http://link.springer.com/book/10.1007/978-3-642-15646-5
596 _a19
942 _cLIBRO_ELEC
999 _c202971
_d202971