Recent Advances in the Message Passing Interface [electronic resource] : 17th European MPI Users’ Group Meeting, EuroMPI 2010, Stuttgart, Germany, September 12-15, 2010. Proceedings / edited by Rainer Keller, Edgar Gabriel, Michael Resch, Jack Dongarra.
Material type: Text
Series: Lecture Notes in Computer Science ; 6305
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2010
Description: XIV, 308 p. 120 illus. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783642156465
Subject(s): Computer science | Computer network architectures | Computer Communication Networks | Software engineering | Computer software | Computer simulation | Computer Science | Algorithm Analysis and Problem Complexity | Computer Communication Networks | Software Engineering | Programming Techniques | Computer Systems Organization and Communication Networks | Simulation and Modeling
Additional physical formats: Printed edition: No title
DDC classification: 005.1
LoC classification: QA76.9.A43
Online resources: Electronic book

Item type | Current library | Collection | Call number | Copy number | Status | Due date | Barcode
---|---|---|---|---|---|---|---
Electronic book | Electronic Library | Electronic Books Collection | QA76.9 .A43 | 1 | Not for loan | | 375091-2001
Browsing Electronic Library shelves, Collection code: Electronic Books Collection
QA76.9 .A43 Algorithms in Bioinformatics | QA76.9 .A43 Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques | QA76.9 .A43 Principles and Practice of Constraint Programming – CP 2010 | QA76.9 .A43 Recent Advances in the Message Passing Interface | QA76.9 .A43 Algorithms – ESA 2010 | QA76.9 .A43 Algorithms – ESA 2010 | QA76.9 .A43 Transactions on Computational Science VIII |
Large Scale Systems -- A Scalable MPI_Comm_split Algorithm for Exascale Computing -- Enabling Concurrent Multithreaded MPI Communication on Multicore Petascale Systems -- Toward Performance Models of MPI Implementations for Understanding Application Scaling Issues -- PMI: A Scalable Parallel Process-Management Interface for Extreme-Scale Systems -- Run-Time Analysis and Instrumentation for Communication Overlap Potential -- Efficient MPI Support for Advanced Hybrid Programming Models
Parallel Filesystems and I/O -- An HDF5 MPI Virtual File Driver for Parallel In-situ Post-processing -- Automated Tracing of I/O Stack -- MPI Datatype Marshalling: A Case Study in Datatype Equivalence
Collective Operations -- Design of Kernel-Level Asynchronous Collective Communication -- Network Offloaded Hierarchical Collectives Using ConnectX-2’s CORE-Direct Capabilities -- An In-Place Algorithm for Irregular All-to-All Communication with Limited Memory
Applications -- Massively Parallel Finite Element Programming -- Parallel Zero-Copy Algorithms for Fast Fourier Transform and Conjugate Gradient Using MPI Datatypes -- Parallel Chaining Algorithms
MPI Internals (I) -- Precise Dynamic Analysis for Slack Elasticity: Adding Buffering without Adding Bugs -- Implementing MPI on Windows: Comparison with Common Approaches on Unix -- Compact and Efficient Implementation of the MPI Group Operations -- Characteristics of the Unexpected Message Queue of MPI Applications
Fault Tolerance -- Dodging the Cost of Unavoidable Memory Copies in Message Logging Protocols -- Communication Target Selection for Replicated MPI Processes -- Transparent Redundant Computing with MPI -- Checkpoint/Restart-Enabled Parallel Debugging
Best Paper Awards -- Load Balancing for Regular Meshes on SMPs with MPI -- Adaptive MPI Multirail Tuning for Non-uniform Input/Output Access -- Using Triggered Operations to Offload Collective Communication Operations
MPI Internals (II) -- Second-Order Algorithmic Differentiation by Source Transformation of MPI Code -- Locality and Topology Aware Intra-node Communication among Multicore CPUs -- Transparent Neutral Element Elimination in MPI Reduction Operations
Poster Abstracts -- Use Case Evaluation of the Proposed MPIT Configuration and Performance Interface -- Two Algorithms of Irregular Scatter/Gather Operations for Heterogeneous Platforms -- Measuring Execution Times of Collective Communications in an Empirical Optimization Framework -- Dynamic Verification of Hybrid Programs -- Challenges and Issues of Supporting Task Parallelism in MPI.
Parallel computing is on the verge of a new era. Multi-core processors make parallel computing a fundamental skill required by all computer scientists. At the same time, high-end systems have surpassed the Petaflop barrier, and significant efforts are devoted to the development of hardware and software technologies for the next generation of Exascale systems. To reach this next stage, processor architectures, high-speed interconnects and programming models will go through dramatic changes. The Message Passing Interface (MPI) has been the most widespread programming model for the parallel systems of today. A key question for upcoming Exascale systems is whether and how MPI has to evolve in order to meet their performance and productivity demands. EuroMPI is the successor of the EuroPVM/MPI series, the flagship conference for this community, established as the premier international forum for researchers, users and vendors to present their latest advances in MPI and message-passing systems in general. The 17th European MPI Users’ Group Meeting was held in Stuttgart during September 12-15, 2010. The conference was organized by the High Performance Computing Center Stuttgart at the University of Stuttgart. The previous conferences were held in Espoo (2009), Dublin (2008), Paris (2007), Bonn (2006), Sorrento (2005), Budapest (2004), Venice (2003), Linz (2002), Santorini (2001), Balatonfüred (2000), Barcelona (1999), Liverpool (1998), Krakow (1997), Munich (1996), Lyon (1995) and Rome (1994). The main topics of the conference were message-passing systems, especially MPI, and performance, scalability and reliability issues on very large scale systems.