

  • Using the HP Oakley Cluster
    ...units are bytes; can also be expressed in megabytes (e.g. mem=1000MB) or gigabytes (e.g. mem=2GB).
    -l file=amount (OPTIONAL): Request use of amount of local scratch disk space per node. Default units are bytes; can also be expressed in megabytes (e.g. file=10000MB) or gigabytes (e.g. file=10GB). Only required for jobs using more than 10GB of local scratch space per node.
    -l software=package+N (OPTIONAL): Request use of N licenses for package. If omitted, N=1. Only required for jobs using specific software packages with limited numbers of licenses; see the software documentation for details.
    -j oe: Redirect stderr to stdout.
    -m ae: Send e-mail when the job finishes or aborts.
    By default your batch jobs begin execution in your home directory, even if you submit the job from another directory. To facilitate the use of temporary disk space, a unique temporary directory is automatically created at the beginning of each batch job and automatically removed at the end of the job, so it is critical that all files required for further analysis be copied back to permanent storage in your $HOME area before the end of your batch script. You access the directory through the $TMPDIR environment variable. Note that in jobs using more than one node, $TMPDIR is not shared: each node has its own distinct instance of $TMPDIR. Single-CPU sequential jobs should either set the -l nodes resource limit to 1:ppn=1 or leave it unset entirely. The following is an example of a sequential job which uses $TMPDIR as its working area:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -N myscience
        #PBS -j oe
        #PBS -S /bin/bash
        cd $HOME/science
        pbsdcp -s my_program.f mysci.in $TMPDIR
        cd $TMPDIR
        pgf77 -O3 my_program.f -o mysci
        /usr/bin/time ./mysci > mysci.hist
        cp mysci.hist mysci.out $HOME/science
    If you have the above request saved in a file named my_request.job, and my_program.f saved in a subdirectory called science, the following command will submit the request:
        opt-login01$ qsub my_request.job
        1151787.opt-batch.osc.edu
    You can use the qstat command to monitor the progress of the resulting batch job. In the above example, the number 1151787 is the job identifier, or jobid. When the job finishes, the results will appear in the science subdirectory, and the standard output generated by the job will appear in a file called my_job.oN, where N is the jobid. The N differentiates multiple submissions of the same job, since each submission generates a different number. This file will appear in the directory where you executed the qsub command; that directory can be referenced by the environment variable $PBS_O_WORKDIR (from within a PBS batch script only). All batch jobs must set the -l walltime resource limit, as this allows the Moab scheduler to backfill small, short-running jobs in front of larger, longer-running jobs, which in turn helps improve turnaround time for all jobs.
    Sample large-memory serial job:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -l mem=16gb
        #PBS -N cdnz3d
        #PBS -j oe
        #PBS -S /bin/bash
        cd $HOME/Beowulf/cdnz3d
        pbsdcp -s cdnz3d cdin.dat acq.dat cdnz3d.in $TMPDIR
        cd $TMPDIR
        ./cdnz3d > cdnz3d.hist
        cp cdnz3d.hist cdnz3d.out $HOME/Beowulf/cdnz3d
        ja
    The maximum amount of memory available on a node is 64 GB.
    Sample large-disk serial job:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -l file=96gb
        #PBS -N cdnz3d
        #PBS -j oe
        #PBS -S /bin/bash
        cd $HOME/Beowulf/cdnz3d
        pbsdcp -s cdnz3d cdin.dat acq.dat cdnz3d.in $TMPDIR
        cd $TMPDIR
        ./cdnz3d > cdnz3d.hist
        cp cdnz3d.hist cdnz3d.out $HOME/Beowulf/cdnz3d
        ja
    The maximum amount of local disk space available on a node is 1800 GB; jobs in need of more temporary space than that must use the /fs/lustre parallel file system instead. Parallel jobs are now node-exclusive: you will use the entire node if you run a job across more than one node. More details to be added on charging.
    Programming Environment: Compilers. FORTRAN 77, Fortran 90, C, and C++ are supported on the Oakley cluster. The Intel and PGI compiler suites are available; the Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below:
        Language    | Intel Example        | PGI Example
        C           | icc -O2 hello.c      | pgcc -fast hello.c
        Fortran 90  | ifort -O2 hello.f90  | pgf90 -fast hello.f90
    Parallel Programming: MPI. The system uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed InfiniBand interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on MPI, see the Training section of the OSC website. Each program file using MPI must include the MPI header file; the following statement must appear near the beginning of each C or Fortran source file, respectively:
        #include <mpi.h>
        include 'mpif.h'
    To compile an MPI program, use the MPI wrapper scripts, which invoke the Portland Group or Intel compilers depending on which module is loaded prior to executing the compilation command. The MPI compilers take the same options as the compiler they wrap. Here are some examples, each of which produces an executable named a.out:
        mpif77 sample.f
        mpif90 sample.f90
        mpicc  sample.c
        mpiCC  sample.C
    Use the mpiexec command to run the resulting executable in a batch job; this command will automatically determine how many processors to use based on your batch request:
        mpiexec a.out
    Here is the beginning of an MPI job which uses 4 of the InfiniBand-equipped nodes on the Oakley cluster (a hedged sketch of a complete script of this shape follows this entry):
        #PBS -l walltime=1:00:00
        #PBS -l nodes=4:ppn=12
        #PBS -N my_job

    Original URL path: http://archive.osc.edu/supercomputing/computing/oakley/index.shtml (2013-06-13)
    Open archived version from archive
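    The archived snippet truncates the 4-node MPI example just after its first directives. As a hedged completion sketch only, not the original page's script, here is what a full MPI batch job of that shape could look like, assuming a hypothetical executable mpi_prog and input file prog.in, and following the $TMPDIR/pbsdcp pattern shown in the serial examples above:

        #PBS -l walltime=1:00:00              # required on all jobs so Moab can backfill
        #PBS -l nodes=4:ppn=12                # 4 InfiniBand-equipped Oakley nodes, 12 cores each
        #PBS -N my_job
        #PBS -j oe                            # merge stderr into stdout
        #PBS -S /bin/bash
        cd $PBS_O_WORKDIR                     # directory the job was submitted from
        pbsdcp -s mpi_prog prog.in $TMPDIR    # scatter executable and input to each node's $TMPDIR
        cd $TMPDIR
        mpiexec ./mpi_prog > prog.out         # mpiexec sizes the run from the nodes/ppn request
        pbsdcp -g prog.out $PBS_O_WORKDIR     # gather results before $TMPDIR is removed (assumed gather flag)

    The names mpi_prog, prog.in, and prog.out are placeholders, and the pbsdcp -g gather step is an assumption inferred from the scatter (-s) usage shown in the archived examples.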


  • Using the IBM Opteron Cluster
    ...run on -l nodes=x:ppn=y:newdual or -l nodes=x:ppn=y:newquad (y must be at least 9 for the newquad nodes).
    -l mem=amount (OPTIONAL): Request use of amount of memory per node. Default units are bytes; can also be expressed in megabytes (e.g. mem=1000MB) or gigabytes (e.g. mem=2GB). If you need more than 24GB of RAM, you must request 9 or more ppn.
    -l file=amount (OPTIONAL): Request use of amount of local scratch disk space per node. Default units are bytes; can also be expressed in megabytes (e.g. file=10000MB) or gigabytes (e.g. file=10GB). Only required for jobs using more than 10GB of local scratch space per node.
    -l software=package+N (OPTIONAL): Request use of N licenses for package. If omitted, N=1. Only required for jobs using specific software packages with limited numbers of licenses; see the software documentation for details.
    -j oe: Redirect stderr to stdout.
    -m ae: Send e-mail when the job finishes or aborts.
    By default your batch jobs begin execution in your home directory, even if you submit the job from another directory. To facilitate the use of temporary disk space, a unique temporary directory is automatically created at the beginning of each batch job and automatically removed at the end of the job, so it is critical that all files required for further analysis be copied back to permanent storage in your $HOME area before the end of your batch script. You access the directory through the $TMPDIR environment variable. Note that in jobs using more than one node, $TMPDIR is not shared: each node has its own distinct instance of $TMPDIR. Single-CPU sequential jobs should either set the -l nodes resource limit to 1:ppn=1 or leave it unset entirely. The following is an example of a sequential job which uses $TMPDIR as its working area:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -N myscience
        #PBS -j oe
        #PBS -S /bin/ksh
        cd $HOME/science
        cp my_program.f mysci.in $TMPDIR
        cd $TMPDIR
        pgf77 -O3 my_program.f -o mysci
        /usr/bin/time ./mysci > mysci.hist
        cp mysci.hist mysci.out $HOME/science
    If you have the above request saved in a file named my_request.job, and my_program.f saved in a subdirectory called science, the following command will submit the request:
        opt-login01$ qsub my_request.job
        1151787.opt-batch.osc.edu
    You can use the qstat command to monitor the progress of the resulting batch job. In the above example, the number 1151787 is the job identifier, or jobid. When the job finishes, the results will appear in the science subdirectory, and the standard output generated by the job will appear in a file called my_job.oN, where N is the jobid. The N differentiates multiple submissions of the same job, since each submission generates a different number. This file will appear in the directory where you executed the qsub command; that directory can be referenced by the environment variable $PBS_O_WORKDIR (from within a PBS batch script only). All batch jobs must set the -l walltime resource limit, as this allows the Moab scheduler to backfill small, short-running jobs in front of larger, longer-running jobs, which in turn helps improve turnaround time for all jobs.
    Sample large-memory serial job:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -l mem=16gb
        #PBS -N cdnz3d
        #PBS -j oe
        #PBS -S /bin/ksh
        cd $HOME/Beowulf/cdnz3d
        cp cdnz3d cdin.dat acq.dat cdnz3d.in $TMPDIR
        cd $TMPDIR
        ./cdnz3d > cdnz3d.hist
        cp cdnz3d.hist cdnz3d.out $HOME/Beowulf/cdnz3d
        ja
    Single-node jobs that request 16 GB or more of memory will be scheduled on the quad-socket large-memory nodes. The maximum amount of memory available on a node is 64 GB.
    Sample large-disk serial job:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -l file=96gb
        #PBS -N cdnz3d
        #PBS -j oe
        #PBS -S /bin/ksh
        cd $HOME/Beowulf/cdnz3d
        cp cdnz3d cdin.dat acq.dat cdnz3d.in $TMPDIR
        cd $TMPDIR
        ./cdnz3d > cdnz3d.hist
        cp cdnz3d.hist cdnz3d.out $HOME/Beowulf/cdnz3d
        ja
    Single-node jobs that request more than 45 GB of temporary space will be scheduled on the quad-socket nodes. The maximum amount of local disk space available on a node is 1800 GB; jobs in need of more temporary space than that must use the /fs/pvfs parallel file system instead.
    Estimating Queue Time. To get an estimate of how long it will be before a job identified by jobid starts, use the following command:
        showstart jobid
    This queries the Moab scheduler for an estimate of the job's start time. Please keep in mind that this is an estimate and may change over time depending on system load and other factors.
    Programming environment. Glenn supports two programming models of parallel execution: shared memory on exactly one node, through compiler directives and automatic parallelization, and distributed memory across multiple nodes, through message passing. See the sections below for more information.
    Compiling systems. FORTRAN 77, Fortran 90, C, and C++ are supported on the IBM Opteron cluster, which has the Intel and Portland Group suites of optimizing compilers; these tend to generate faster code than the standard GNU compilers. The following examples produce the Linux executable a.out for each type of source file with the Portland Group and Intel compilers. Options which have been found to produce good performance with many, though not necessarily all, programs are given under Recommended Options (a hedged compile-and-submit sketch follows this entry):
        Language | Portland Group | Recommended Options                                                 | Intel         | Recommended Options
        C        | pgcc sample.c  | -Xa -tp x64 -fast -Mvect=assoc,cachesize:1048576                    | icc sample.c  | -O2 -ansi
        C++      | pgCC sample.C  | -A -fast -tp x64 -Mvect=assoc,cachesize:1048576 --prelink_objects   | icpc sample.C | -O2 -ansi

    Original URL path: http://archive.osc.edu/supercomputing/computing/opt/index.shtml (2013-06-13)
    Open archived version from archive
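    For the compile-and-submit workflow described above, a minimal sketch, with hypothetical file names and the options taken from the Recommended Options column, might look like:

        # Compile a C source file with the Portland Group compiler and the recommended options
        pgcc sample.c -Xa -tp x64 -fast -Mvect=assoc,cachesize:1048576 -o sample
        # ...or with the Intel compiler
        icc sample.c -O2 -ansi -o sample
        # Submit the batch script that runs ./sample, then ask Moab for an estimated start time
        qsub my_request.job              # prints a jobid such as 1151787.opt-batch.osc.edu
        showstart 1151787                # estimate only; it can change with system load

    The jobid shown is simply the one from the example above, reused for illustration.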

  • Using the BALE Cluster
    ...$TMPDIR is not shared: each node has its own distinct instance of $TMPDIR. Single-CPU sequential jobs should either set the -l nodes resource limit to 1:ppn=1 or leave it unset entirely. The following is an example of a sequential job which uses $TMPDIR as its working area:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=1
        #PBS -N myscience
        #PBS -j oe
        #PBS -S /bin/ksh
        cd $HOME/science
        cp my_program.f mysci.in $TMPDIR
        cd $TMPDIR
        pgf77 -O3 my_program.f -o mysci
        /usr/bin/time ./mysci > mysci.hist
        cp mysci.hist mysci.out $HOME/science
    If you have the above request saved in a file named my_request.job, and my_program.f saved in a subdirectory called science, the following command will submit the request:
        qsub my_request.job
    You can use the qstat command to monitor the progress of the resulting batch job. When the job finishes, the results will appear in the science subdirectory, and the standard output generated by the job will appear in a file called my_job.oN, where N is a numeric jobid assigned by the batch system. The N differentiates multiple submissions of the same job, since each submission generates a different number. This file will appear in the directory where you executed the qsub command. All batch jobs must set the -l walltime resource limit, as this allows the Moab scheduler to backfill small, short-running jobs in front of larger, longer-running jobs, which in turn helps improve turnaround time for all jobs.
    Estimating Queue Time. To get an estimate of how long it will be before a job identified by jobid starts, use the following command:
        showstart jobid
    This queries the Moab scheduler for an estimate of the job's start time. Please keep in mind that this is an estimate and may change over time depending on system load and other factors.
    Programming environment. The BALE cluster supports two programming models of parallel execution: shared memory on exactly one node, through compiler directives and automatic parallelization, and distributed memory across multiple nodes, through message passing. See the sections below for more information.
    Compiling systems. FORTRAN 77, Fortran 90, C, and C++ are supported on the BALE cluster, which has the Portland Group suite of optimizing compilers; these tend to generate faster code than the standard GNU compilers. The following examples produce the Linux executable a.out for each type of source file:
        C:          pgcc sample.c
        C++:        pgCC sample.C
        FORTRAN 77: pgf77 sample.f
        Fortran 90: pgf90 sample.f90
    For more information on command-line options for each compiling system, see the manual pages (man pgf77, man pgcc, etc.).
    Shared memory. The BALE cluster can automatically optimize single-node sequential programs for shared-memory parallel execution using the -Mconcur compiler option (a hedged single-node batch sketch follows this entry):
        pgf77 -O2 -Mconcur sample.f
        pgf90 -O2 -Mconcur sample.f90
        pgcc  -O2 -Mconcur sample.c
        pgCC  -O2 -Mconcur sample.C

    Original URL path: http://archive.osc.edu/supercomputing/computing/bale/ (2013-06-13)
    Open archived version from archive
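    As a minimal sketch of running an auto-parallelized (-Mconcur) program inside a single-node batch job, with hypothetical file names; the NCPUS setting is an assumption about how PGI-built binaries pick their thread count, so check the pgf77 man page on BALE for the authoritative variable:

        #PBS -l walltime=1:00:00
        #PBS -l nodes=1:ppn=4                    # -Mconcur parallelism stays on a single node
        #PBS -N autopar
        #PBS -j oe
        #PBS -S /bin/ksh
        cd $PBS_O_WORKDIR
        pgf77 -O2 -Mconcur sample.f -o sample    # auto-parallelize loops at compile time
        export NCPUS=4                           # assumed PGI thread-count variable; match ppn
        /usr/bin/time ./sample > sample.hist

    The ppn=4 value is only an illustrative core count, not a documented BALE node size; adjust it to the hardware actually available.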

  • Biosciences Software available at OSC
    - Amber: a molecular simulation program
    - BioPerl: a set of Perl modules which can be used for sequence manipulation
    - BLAST: tools for searching DNA and protein databases for sequence similarity, to identify homologs to a query sequence
    - BLAT: a sequence analysis tool which performs rapid mRNA/DNA and cross-species protein alignments
    - Bowtie: an ultrafast, memory-efficient short-read aligner geared toward quickly aligning large sets of short DNA sequences (reads) to large genomes
    - ClustalW: a general-purpose multiple sequence alignment program for DNA or proteins
    - EMBOSS: a sequence analysis package specially developed for the needs of the molecular biology user community
    - fitmodel: a program that estimates the parameters of various codon-based models of substitution
    - HMMER: profile HMMs for protein sequence analysis
    - MrBayes: a program for the Bayesian estimation of phylogeny
    - NAMD: a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems
    - PAML: a few programs for model fitting and phylogenetic tree reconstruction using nucleotide or amino acid sequence data
    - Partek Pro: an integrated, scalable software environment capable of analysis and transformation of millions of records or millions of variables
    - PAUP: a leading program for performing phylogenetic analysis of bioinformatics sequences
    - RAxML: a fast implementation of maximum-likelihood phylogeny estimation that operates on both nucleotide and protein sequence alignments
    - RECON: a package that performs de novo identification and classification of repeat sequence families from genomic sequences
    - RepeatMasker: a program that screens DNA sequences for interspersed repeats and low-complexity DNA sequences
    - Stata: a complete, integrated statistical package that provides everything needed for data analysis, data management, and graphics

    Original URL path: http://archive.osc.edu/supercomputing/software/bio/index.shtml (2013-06-13)
    Open archived version from archive

  • Chemistry software available at OSC
    - AMBER: a molecular simulation program
    - ChemTools: batch submission wrappers
    - CNS: a flexible macromolecular structure determination software suite for X-ray crystallography and solution NMR spectroscopy
    - COLUMBUS: an ab initio electronic structure program suite
    - CSD: the Cambridge Structural Database, containing complete structure information on hundreds of thousands of small-molecule crystals
    - ESPRESSO (PWscf): a suite of computer codes for electronic structure calculations
    - GAMESS: a flexible ab initio electronic structure program
    - Gaussian 09: a connected system of programs for performing semiempirical, ab initio, and density functional molecular orbital (MO) calculations
    - Gaussian 03: a connected system of programs for performing semiempirical, ab initio, and density functional molecular orbital (MO) calculations
    - GROMACS: a versatile package to perform molecular dynamics
    - Jmol: a simple molecular visualization program
    - LAMMPS: a classical molecular dynamics code designed for high-performance simulation of large molecular systems
    - MacroModel: a molecular modeling software package that allows the graphical construction of complex chemical structures and the application of molecular mechanics and dynamics techniques, in vacuo or in solution
    - MEAD: Poisson-Boltzmann solvation modeling for biological systems
    - NAMD: a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems
    - NWChem: a general-purpose electronic structure program designed for maximum efficiency on massively parallel computers
    - Open Babel: a chemical toolbox, mainly for data format conversion
    - Turbomole: a program package for ab initio electronic structure calculations
    - VASP: the Vienna ab initio simulation package
    Some of these packages are available through OSC's Statewide Software License Distribution.
    Availability by category:
    - Ab Initio: COLUMBUS, GAMESS, Gaussian 09, Gaussian 03, NWChem, Turbomole, VASP
    - Crystal Structure Determination: CNS, CSD
    - Density Functional Theory: Gaussian 09, Gaussian 03, NWChem
    - Empirical Molecular Mechanics and Dynamics: Amber, GROMACS, MacroModel, LAMMPS, MEAD, NAMD, NWChem

    Original URL path: http://archive.osc.edu/supercomputing/software/chemistry/index.shtml (2013-06-13)
    Open archived version from archive

  • Structural Mechanics software available at OSC
    - ABAQUS: a finite element analysis program
    - Altair HyperWorks: a high-performance, comprehensive toolbox of CAE software for engineering design and simulation
    - ANSYS: an engineering package and support routines for general-purpose finite element analysis: statics, mode frequency, stability analysis, heat transfer, magnetostatics, coupled-field analysis, and modeling
    - COMSOL Multiphysics: formerly FEMLAB, a finite element analysis and solver software package for various physics and engineering applications
    - LS-DYNA: an explicit three-dimensional finite element code for analyzing the large-deformation dynamic response of inelastic solids and structures
    - LS-PREPOST: a pre-processor for LS-DYNA
    - Stata: a complete, integrated statistical package that provides everything needed for data analysis, data management, and graphics

    Original URL path: http://archive.osc.edu/supercomputing/software/engineering/index.shtml (2013-06-13)
    Open archived version from archive

  • Fluid Dynamics software available at OSC

    Original URL path: http://archive.osc.edu/supercomputing/software/fluid_dyn/index.shtml (2013-06-13)
    Open archived version from archive

  • Programming Software available at OSC
    - ...that implements MPI extensions for MATLAB and GNU Octave
    - disktmake: a replacement for make, used to define job workflow and execute commands on multiple nodes
    - FFTW: a C subroutine library for computing the discrete Fourier transform (DFT)
    - HDF5: a general-purpose library and file format for storing scientific data
    - Intel Compilers: an array of software development products from Intel, including C++ and FORTRAN compilers and the Math Kernel Library (MKL)
    - Intel MKL: the Intel Math Kernel Library; it contains LAPACK, BLAS, some FFT routines, and a miscellany of other math capabilities
    - Java Virtual Machine: the base platform upon which Java applications are run. The JVM provides a large, flexible, and powerful set of functionality to the developer; additionally, any code written in Java can run anywhere the JVM is installed
    - MATLAB: a technical computing environment for high-performance numeric computation and visualization that integrates numerical analysis, matrix computation, signal processing, and graphics in an easy-to-use environment
    - MINPACK: a library of Fortran routines for the solution of non-linear multivariate minimization problems
    - mpiexec: a replacement program for the script mpirun, which is part of the mpich package; it is used to initialize a parallel job from within a PBS batch or interactive environment (a hedged usage sketch follows this entry)
    - MPI Library: a standard library for performing parallel processing using a distributed-memory model
    - NetCDF: Network Common Data Form, an interface for array-oriented data access
    - Octave: a high-level language primarily intended for numerical computations
    - parallel-command-processor: a program for running a large number of serial processes on a number of allocated nodes/processors
    - Torque: a networked subsystem for submitting, monitoring, and controlling a workload of batch jobs on one or more systems
    - PGI compilers: parallel Fortran, C, and C++ compilers for Intel platforms
    - R: a language and environment for statistical computing and graphics
    - ScaLAPACK

    Original URL path: http://archive.osc.edu/supercomputing/software/programming/index.shtml (2013-06-13)
    Open archived version from archive
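    Since mpiexec is described above as the way to start a parallel job from within a PBS batch or interactive environment, a minimal illustrative sketch is shown below; the walltime and node counts are arbitrary illustration values, not recommendations from the archived page:

        # Request an interactive PBS session, then launch the MPI executable under mpiexec
        qsub -I -l walltime=0:30:00 -l nodes=2:ppn=4
        mpiexec ./a.out                  # mpiexec reads the node/ppn allocation from PBS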


