archive-edu.com » EDU » O » OSC.EDU

  • HPC Training
    Intermediate MPI: A Practical Approach. Description: In this workshop,
    attendees will learn how to: perform true parallel I/O with MPI, at both
    the basic and advanced levels; determine which parallel tasks require the
    use of intercommunicators vs. intracommunicators, and what the difference
    is between them; use their own operators in the generic MPI_Reduce
    routine; distinguish between probing for and receiving messages (for
    example, probing can allow the user to dynamically allocate exactly the
    memory needed); create general (i.e., non-Cartesian) virtual topologies;
    and intermix the C and Fortran 90 languages with the MPI library calls.
    In keeping with the theme of practical parallel programming, this
    workshop relies heavily on complete codes (C and Fortran versions) to
    demonstrate the topics covered. Prerequisites: The …
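
    As an illustration of one of these topics, here is a minimal sketch of
    the probe-then-allocate pattern (not course material; the message size,
    tag, and datatype are illustrative):

      /* Rank 0 sends a message whose size rank 1 does not know in advance;
         rank 1 probes, sizes a buffer exactly, then receives. */
      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char *argv[])
      {
          int rank, count;
          MPI_Status status;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              double data[100] = {0};          /* arbitrary payload */
              MPI_Send(data, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              /* Block until a message is pending, without receiving it */
              MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
              /* Ask how many doubles the pending message holds */
              MPI_Get_count(&status, MPI_DOUBLE, &count);
              /* Allocate exactly the memory needed, then receive */
              double *buf = malloc(count * sizeof(double));
              MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
              printf("rank 1 received %d doubles\n", count);
              free(buf);
          }

          MPI_Finalize();
          return 0;
      }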

    Original URL path: http://archive.osc.edu/supercomputing/training/impi/ (2013-06-13)

  • HPC Training
    Using MPI (Globus Version) on the Alliance Grid Testbed. Description: One
    of the most exciting emerging computing paradigms is parallel programming
    using a geographically distributed grid of diverse machines. This PACS
    workshop, intended for novice grid programmers, will explain in detail a
    step-by-step procedure for running MPI programs on such a computational
    grid. In addition to lectures, the workshop will also provide lab time
    for attendees to use the Alliance Grid Testbed (AGT) to run test parallel
    programs. The AGT consists of nationally distributed clusters with 16-32
    compute nodes (dual 2.4 GHz Pentium 4 Xeon processors, 2 GB of RAM, 60 GB
    of local scratch space, Gigabit Ethernet), one head node, and a storage
    node. After an initial discussion of the salient features, from a user's
    point of view, of the overall structure and capabilities of the AGT, the
    topic will shift to the grid-enabling software itself, i.e., the Globus
    Toolkit. The toolkit components that a grid programmer will be required
    to use will be described; important Globus software tools include those
    for user certificate authentication and for specifying exactly what grid
    resources a program will need. Finally, the coding, compiling, and
    execution of parallel programs using the MPICH-G2 version of the popular
    and standardized Message Passing Interface library will be described
    thoroughly. The MPICH-G2 interface allows the user's MPI code to work
    with Globus to effectively use the grid for …
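
    A key point of MPICH-G2 is that the MPI source itself is unchanged; only
    the build and launch environment differ (for instance, a Globus proxy
    certificate is typically obtained with grid-proxy-init before launching).
    A minimal, grid-agnostic sketch that reports which machine each rank
    landed on, which is useful on geographically distributed clusters:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, size, len;
          char host[MPI_MAX_PROCESSOR_NAME];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          /* On a grid, the processor name reveals each rank's site */
          MPI_Get_processor_name(host, &len);
          printf("rank %d of %d running on %s\n", rank, size, host);
          MPI_Finalize();
          return 0;
      }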

    Original URL path: http://archive.osc.edu/supercomputing/training/mpig/ (2013-06-13)

  • HPC Training
    Parallel Programming with OpenMP. Description: This course is an overview
    of OpenMP parallel programming for shared-memory systems. Users will
    learn how to apply OpenMP compiler directives to their codes, which
    optimization strategies to follow, and which pitfalls to avoid. Topics
    covered will include: Introduction to OpenMP Programming; the parallel do
    Directive; Identifying Data Dependencies; OpenMP Environment Variables;
    Work-Sharing Directives; Synchronization Directives; and Advanced Data
    Scoping. Prerequisites: Familiarity with Fortran or C is preferred;
    experience with parallel programming is helpful but not necessary. Target
    Audience: Current and potential …
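
    A minimal sketch of the work-sharing and data-scoping ideas listed above,
    using the C equivalent of the parallel do directive (the array size and
    values are illustrative, not course material):

      /* Compile with an OpenMP flag, e.g. -fopenmp or -qopenmp. */
      #include <omp.h>
      #include <stdio.h>

      #define N 1000000

      int main(void)
      {
          static double a[N];
          double sum = 0.0;
          int i;

          /* Work-sharing loop: i is private to each thread, and the
             reduction clause combines the per-thread partial sums */
          #pragma omp parallel for private(i) reduction(+:sum)
          for (i = 0; i < N; i++) {
              a[i] = (double)i;
              sum += a[i];
          }

          printf("sum = %.0f using up to %d threads\n",
                 sum, omp_get_max_threads());
          return 0;
      }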

    Original URL path: http://archive.osc.edu/supercomputing/training/openmp/ (2013-06-13)

  • HPC Training
    Using Parallel Numerical Libraries (focus on ScaLAPACK). Description:
    Remember when a user would sit down at a mainframe computer or vector
    supercomputer and know that there would be an extensive library of linear
    algebra routines available for their use? Ever since the advent of
    parallel supercomputers, users have asked for the same thing: isn't there
    a library routine I can call that will perform my matrix-matrix
    multiplication in parallel for me? Well, the answer to this question and
    others like it is finally yes, and such parallel linear algebra libraries
    will be the subject of this course. Specifically, this advanced course
    will focus on the libraries which have become the de facto standards for
    parallel linear algebra: ScaLAPACK, the parallel successor to LAPACK, and
    PBLAS, the parallel successor to the BLAS. Both libraries build upon
    calls to the BLACS library, which sets up the processors in a
    communication grid that matches the problem being solved and distributes
    the appropriate array elements to the correct processor's memory to
    achieve good performance. Since the entire procedure (using BLACS to
    establish the correct processor environment, then preparing for and
    calling the desired PBLAS or ScaLAPACK routine) can be a bit intimidating
    to new users, the course will show numerous examples. The examples will
    cover popular linear algebra tasks such …
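
    A hedged sketch of the first step described above, setting up a BLACS
    process grid via the C interface; the 2x2 grid shape is illustrative
    (run with 4 processes), and the prototypes are declared inline because
    BLACS C-interface headers are not always shipped:

      #include <stdio.h>

      /* C-interface BLACS prototypes */
      void Cblacs_pinfo(int *mypnum, int *nprocs);
      void Cblacs_get(int icontxt, int what, int *val);
      void Cblacs_gridinit(int *icontxt, char *order, int nprow, int npcol);
      void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol,
                           int *myrow, int *mycol);
      void Cblacs_gridexit(int icontxt);
      void Cblacs_exit(int cont);

      int main(void)
      {
          int iam, nprocs, ctxt, nprow = 2, npcol = 2, myrow, mycol;

          Cblacs_pinfo(&iam, &nprocs);       /* who am I, how many of us */
          Cblacs_get(-1, 0, &ctxt);          /* default system context   */
          Cblacs_gridinit(&ctxt, "Row", nprow, npcol);  /* 2x2 row-major */
          Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

          printf("process %d of %d holds grid position (%d,%d)\n",
                 iam, nprocs, myrow, mycol);

          /* ... distribute array blocks and call PBLAS/ScaLAPACK here ... */

          Cblacs_gridexit(ctxt);
          Cblacs_exit(0);
          return 0;
      }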

    Original URL path: http://archive.osc.edu/supercomputing/training/parlib/ (2013-06-13)

  • HPC Training
    Using Parallel Numerical Libraries (focus on NAG). Description: Remember
    when a user would sit down at a vector supercomputer and know that there
    would be an extensive library of linear algebra routines available for
    their use? Ever since the advent of parallel supercomputers, users have
    asked for the same thing: isn't there a library routine I can call that
    will perform my matrix-matrix multiplication in parallel for me? Well,
    the answer to this question and others like it is finally yes, and such
    parallel linear algebra libraries will be the subject of this course.
    Specifically, this course will focus on the NAG Parallel Library. This
    library, based on the Message Passing Interface (MPI), contains 183
    routines that have been specifically developed for use on
    distributed-memory systems. The Library makes use of the Basic Linear
    Algebra Communication Subprograms (BLACS), which use MPI as their
    message-passing kernel, and covers the following areas: Dense Linear
    Algebra (including ScaLAPACK); Eigenvalue and Singular Value Problems;
    Input/Output and Data Distribution; Quadrature; Random Number Generators;
    Sparse Linear Algebra; Sparse Matrix Solvers; and Support/Utility
    Routines. The interfaces are kept as close as possible to those of the
    Fortran Library routines to ensure smoother integration, and the user
    does not, in general, need knowledge of MPI. The course will show
    numerous examples. The examples will cover popular linear algebra tasks …
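
    The NAG Parallel Library's own interfaces are not reproduced here; as a
    hedged stand-in, the following sketch shows the divide-and-combine
    pattern behind one of the areas listed (quadrature), written directly in
    MPI. The integrand and interval are illustrative:

      /* Each rank integrates a cyclic share of [0,1] by the midpoint
         rule; MPI_Reduce combines the partial sums on rank 0. */
      #include <mpi.h>
      #include <stdio.h>

      static double f(double x) { return 4.0 / (1.0 + x * x); }  /* = pi */

      int main(int argc, char *argv[])
      {
          const int n = 1000000;     /* total subintervals (illustrative) */
          int rank, size, i;
          double h, local = 0.0, total;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          h = 1.0 / n;
          for (i = rank; i < n; i += size)   /* cyclic split of the work */
              local += f((i + 0.5) * h) * h;

          MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                     MPI_COMM_WORLD);
          if (rank == 0)
              printf("integral = %.12f\n", total);

          MPI_Finalize();
          return 0;
      }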

    Original URL path: http://archive.osc.edu/supercomputing/training/parlibnag/ (2013-06-13)

  • HPC Training
    Multilevel Parallel Programming. Description: This course is intended as
    an introduction to multilevel parallel programming, a style of parallel
    programming in which both message-passing techniques (such as MPI) and
    shared-memory techniques (such as OpenMP or pthreads) are used. This
    allows high-performance codes to best take advantage of the distributed
    shared-memory architectures of modern parallel supercomputers, such as
    SGI Origins, IBM SP3s, and clusters of commodity SMP systems. Users will
    learn how to apply multilevel parallel programming techniques to their
    problems of interest, how to optimize these techniques for different
    architectures, and how to avoid potential problems. Topics covered will
    include: Introduction to Distributed Shared-Memory Architectures;
    Overview of Message-Passing Techniques; Overview of Shared-Memory
    Techniques; Mixing Message Passing and Shared Memory; Optimization
    Techniques; and Examples …
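
    Multilevel parallelism in miniature: a hedged sketch with MPI ranks
    across nodes and OpenMP threads within each rank (thread counts depend
    on the environment, e.g. OMP_NUM_THREADS):

      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, provided;

          /* FUNNELED: only the main thread of each rank makes MPI calls */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          #pragma omp parallel
          {
              printf("MPI rank %d, OpenMP thread %d of %d\n",
                     rank, omp_get_thread_num(), omp_get_num_threads());
          }

          MPI_Finalize();
          return 0;
      }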

    Original URL path: http://archive.osc.edu/supercomputing/training/multi/ (2013-06-13)

  • HPC Training
    Using Glenn, the IBM Opteron 1350 at the Ohio Supercomputer Center. For
    current training offerings and registration information, please visit
    https://armstrong.osc.edu/events/upcoming.php. Description: The purpose
    of this 2-day course is to show how to effectively use the Glenn Linux
    cluster at OSC. Users will learn to log onto the cluster, use the
    compilers, perform batch processing, and utilize the parallel programming
    facilities of the cluster. Topics covered will include: Hardware
    Overview; the Linux Operating System; User Environment and Storage;
    Program Development; Tools and Libraries; Batch Processing with the
    TORQUE Resource Manager and Moab Scheduler; and Other Sources of
    Information. Target Audience: Those interested …
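
    Batch processing with TORQUE is driven by a job script submitted with
    qsub; a hedged sketch follows (node and core counts, walltime, and the
    executable name are illustrative, not Glenn-specific documentation):

      #!/bin/bash
      #PBS -N myjob
      #PBS -l nodes=2:ppn=8
      #PBS -l walltime=01:00:00
      #PBS -j oe

      cd $PBS_O_WORKDIR    # start in the directory qsub was run from
      mpiexec ./a.out      # launch the MPI executable on the allocated nodes

    The script would be submitted with qsub and monitored with qstat; Moab
    decides when the requested nodes become available.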

    Original URL path: http://archive.osc.edu/supercomputing/training/opt/ (2013-06-13)

  • HPC Training
    Csuri GPU Environment at OSC. For current training offerings and
    registration information, please visit
    https://armstrong.osc.edu/events/upcoming.php. Description: The Ohio
    Supercomputer Center (OSC) is offering a one-day workshop to provide an
    introduction to scientific computing using General-Purpose computing on
    Graphics Processing Units (GPGPU). This workshop will provide an overview
    of the GPGPU resources available at OSC, summarize some of the key
    potential benefits, and survey the most popular programming toolkits and
    techniques currently in use. Topics include: Hardware Overview;
    Programming Environment; GPU Coding Overview; Hybrid MPI/OpenMP/CUDA;
    GPGPU-Enabled Applications; and Example Problems. Participants can expect
    to learn how mathematical computations are accelerated on GPGPU hardware,
    what the potential speedups are, what potential problems can be
    encountered, and what programming …
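
    A minimal CUDA C sketch of the acceleration idea: copy data to the GPU,
    launch many lightweight threads, copy the result back. Array size, block
    size, and names are illustrative; compile with nvcc:

      #include <stdio.h>
      #include <stdlib.h>
      #include <cuda_runtime.h>

      __global__ void vadd(const float *a, const float *b, float *c, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n)                  /* one GPU thread per element */
              c[i] = a[i] + b[i];
      }

      int main(void)
      {
          const int n = 1 << 20;
          size_t bytes = n * sizeof(float);
          float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
                *hc = (float *)malloc(bytes);
          float *da, *db, *dc;
          int i;

          for (i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

          /* Allocate device memory and copy the inputs to the GPU */
          cudaMalloc((void **)&da, bytes);
          cudaMalloc((void **)&db, bytes);
          cudaMalloc((void **)&dc, bytes);
          cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
          cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

          /* Launch enough 256-thread blocks to cover all n elements */
          vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
          cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

          printf("hc[0] = %f\n", hc[0]);   /* expect 3.0 */
          cudaFree(da); cudaFree(db); cudaFree(dc);
          free(ha); free(hb); free(hc);
          return 0;
      }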

    Original URL path: http://archive.osc.edu/supercomputing/training/GPGPU/ (2013-06-13)


