archive-edu.com » EDU » O » OSC.EDU

Total: 329

  • Partek
    ...existing version using the Add/Remove Programs dialog of the Control Panel. After the uninstallation, find and delete your partekrc.tcl file. It should be in your Windows HOME, typically C:\Documents and Settings\<your username>. If the file cannot be found there, or anywhere else on C:, then check any shared network drives that you use; it will likely be in whatever the Windows system variable HOMEDRIVE is set to (you can see its value from Start > Run > cmd.exe and then typing "set home", without the quotes, at the command prompt). This will have to be done under the HOME of each user that uses the software; that is, each user has a personal copy of the file. Then install the new version; see the instructions below.
    Windows 2000/XP Instructions: You must be logged in as an administrator to install the Partek software. Download the installation file (Partek Discovery Suite for Windows) to your desktop. Double-click on the file name to start the install application. Click Next to begin installing the application. Click Yes to accept the license agreement. Click Next to install the software in the default location, C:\Program Files\Partek. Once you have verified your install options, click Next to begin the install. Click Finish. The Partek software is now installed. Next, you must create the environment variable to access the license server. Right-click on the My Computer icon and select Properties. Select the Advanced tab. Select the button at the bottom entitled Environment Variables. Below the box labeled "User variables for xxxx", select the New button. In the New User Variable window, type the information for the two fields: the variable name is LM_LICENSE_FILE and the variable value is 27001@license2.osc.edu. Click the OK button. You should now see...

    Original URL path: http://archive.osc.edu/supercomputing/statewide/partek.shtml (2013-06-13)
    Open archived version from archive


  • "Customize Your Own" Workshop
    Batch Processing: OSC uses TORQUE to control the batch requests on each of the OSC systems. TORQUE is an open-source resource manager providing control over batch jobs and distributed compute nodes. This affords OSC users access to distributed computing and, with multi-core systems, shared computing. Each system supports different resources; a reference to the different resources can be found on the OSC hardware page. The following is a discussion of the required and recommended uses of batch processing on OSC's systems. The individual links present information concerning the use of the PBS scheduler, the format of batch scripts, best practices, and the various analysis tools available. It is important to note that the batch system is the only means of accessing advanced computing resources on OSC's clusters. Throughout the discussion, references are made... (An illustrative batch script is sketched after this entry.)

    Original URL path: http://archive.osc.edu/supercomputing/training/customize/docs/batch/index.shtml (2013-06-13)
    Open archived version from archive
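
    As an illustration only (not part of the archived page), a minimal TORQUE/PBS batch script might look like the sketch below; the resource limits, the job name, and the executable my_program are placeholders.

        #!/bin/sh
        # Minimal sketch of a PBS batch job; all values are placeholders.
        #PBS -N example_job            # job name
        #PBS -l walltime=1:00:00       # wall-clock time limit
        #PBS -l nodes=1:ppn=4          # one node, four processors per node
        #PBS -j oe                     # merge stdout and stderr into one file
        cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
        ./my_program > output.log      # hypothetical executable

    Such a script would be submitted with "qsub job.sh" and monitored with "qstat".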

  • Using the IBM Opteron Cluster
    The CUDA toolkit contains the CUDA Runtime API and CUDA Driver API libraries needed to run a CUDA application. The CUDA Runtime API is used in most of the examples; the Driver API is a low-level interface. The CUDA SDK is a collection of example programs illustrating various aspects of CUDA and GPGPU usage. It includes some utilities that are intended primarily for use with the examples. Use of the SDK in production code is not recommended. Users who want to use the SDK should install it in their home directories; see instructions below. Documentation for CUDA can be found here.
    Batch requests: Batch requests can be made in both interactive and batch sessions. To request a node with GPU capabilities, add the following option to your node options in your PBS submission: -l nodes=N:ppn=P:gpu, where N is the number of nodes and P is the number of processors (P MUST be 8). Here is what an example PBS file will look like to request an entire node:
        #PBS -l walltime=40:00:00
        #PBS -l nodes=1:ppn=8:gpu
        #PBS -N compute
        #PBS -j oe
        #PBS -S /bin/csh
        module load cuda
        cd $HOME/cuda
        cp mycudaApp $TMPDIR
        cd $TMPDIR
        ./mycudaApp
    For an interactive session, this is the command to use to request an entire node:
        qsub -I -l nodes=1:ppn=8:gpu -l walltime=12:00:00
    Programming environment: Glenn currently supports CUDA for GPGPU computation. Please see the CUDA section below for more information on how to load the module.
    CUDA: The newest version of the CUDA toolkit is currently supported. Use the command "module load cuda" to load CUDA v3.1 into your path.
    SDKs: Users can download the CUDA SDK themselves or use the version currently available under the module. To set up the SDK from the module, simply execute:
        sh /usr/local/cuda-3.1/gpucomputingsdk_3.1_linux.run
    Press enter for both options; the module has added the correct path to find the toolkit. To build the examples:
        cd NVIDIA_GPU_Computing_SDK/C
        make
    Binaries will be placed in bin/linux/release. Most of the demos will not work because there is no X display running. Examples that will work: deviceQuery, matrixMul. Examples that will not work: oceanFFT, simpleGL.
    Coding in CUDA: To include CUDA directories, you can use a direct path to the include and lib directories under /usr/local/cuda-3.1/cuda, or you can use the environment variable $CUDA_INSTALL_PATH. Here is a sample Makefile fragment to help (it is not recommended to use other compilers; once proper support is provided through the compiler, more options will be available, for example PGI's compiler):
        CUDA_HOME  = $(CUDA_INSTALL_PATH)
        CUDA_INC   = -I$(CUDA_HOME)/include
        CUDA_LIB   = -L$(CUDA_HOME)/lib64 -lcudart
        CUDA_CC    = nvcc
        CUDA_FLAGS =
        $(CUDA_CC) $(CUDA_FLAGS) $(CUDA_INC) -o cuda_cu.obj cuda.cu
    Important: The devices are configured... (An illustrative build-and-submit sequence is sketched after this entry.)

    Original URL path: http://archive.osc.edu/supercomputing/computing/gpgpu/index.shtml (2013-06-13)
    Open archived version from archive
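
    To connect the pieces above, here is an illustrative (not from the archived page) sequence for building a CUDA program and submitting it as a GPU batch job; the file names mycudaApp.cu and job.pbs are placeholders.

        module load cuda                                   # put nvcc and the CUDA libraries in the environment
        nvcc -I$CUDA_INSTALL_PATH/include \
             -L$CUDA_INSTALL_PATH/lib64 -lcudart \
             -o mycudaApp mycudaApp.cu                     # compile a hypothetical CUDA source file
        qsub job.pbs                                       # job.pbs requests -l nodes=1:ppn=8:gpu as shown above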

  • Using the Parallel File System
    ...National Laboratory. Contents: Getting Started; Using the Parallel File System for Serial Jobs; Using the Parallel File System for MPI Parallel Jobs; Caveats for Using the Parallel File System; Links to More Information.
    Getting Started: The parallel file system is currently accessible from selected nodes on the following systems: the Glenn IBM 1350 Opteron cluster (glenn.osc.edu), on 965 nodes; the Pentium 4 cluster (oscbw.osc.edu), on 112 nodes; the Itanium 2 cluster (ia64.osc.edu), on 110 nodes; and the BALE cluster (bale-login.ovl.osc.edu), on 70 nodes. On nodes where the parallel file system is accessible, it will be mounted at /fs/pvfs. These nodes will be identified to the PBS batch system by having the node attribute "pvfs". Files and directories on the parallel file system can be manipulated as on any other UNIX-style file system, so commands like cd, mkdir, cp, ls, and so on will work on the parallel file system. To access the parallel file system from a batch job, you'll need to tell the batch system you intend to use it by adding a pvfs attribute to your job's nodes request:
        #PBS -l nodes=2:ppn=2:pvfs
    In a batch job which requests the pvfs node attribute, there will be an additional environment variable set called $PFSDIR. This is similar to $TMPDIR in that it is a directory that only exists for the duration of the job, but it resides on the parallel file system and is accessible by all the nodes in your job (as opposed to $TMPDIR, which is private to each node).
    Using the Parallel File System for Serial Jobs: For serial jobs requiring large (>50GB) amounts of scratch space, the parallel file system should be used in place of locally attached temporary space. In these cases, the job should use $PFSDIR instead of $TMPDIR as its working directory. Here is an example:
        #PBS -N bigfile
        #PBS -j oe
        #PBS -l nodes=1:ppn=2:pvfs
        #PBS -l walltime=10:00:00
        cd $HOME/myscience
        cp input.dat $PFSDIR
        cd $PFSDIR
        $HOME/myscience/bigfileapp
        cp output.dat $HOME/myscience
    For serial programs doing block (binary or unformatted) I/O to the parallel file system, transfer rates of up to 60 MB/s have been observed. For character I/O (e.g. Fortran formatted I/O or C printf), transfer rates should be approximately 10-15 MB/s.
    Using the Parallel File System for MPI Parallel Jobs: The MPI-2 specification includes a section on parallel I/O, and most MPI implementations, including the MPICH ch_gm implementation used on OSC's clusters, implement that interface. As a result, MPI programs on OSC's clusters can use the MPI parallel I/O interface (MPI_File) to achieve higher I/O performance. The parallel file system is specifically tuned for this type of use. Here is an example of a parallel job using the parallel file system (the excerpt is cut off here; an illustrative completion is sketched after this entry):
        #PBS -N mpi-io
        #PBS -j oe
        #PBS -l nodes=8:ppn=2:pvfs
        #PBS ...

    Original URL path: http://archive.osc.edu/supercomputing/computing/pvfs/index.shtml (2013-06-13)
    Open archived version from archive
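
    The archived excerpt ends in the middle of the MPI example. Purely as an illustration (not the original page's script), a parallel job using $PFSDIR might be laid out as follows; the walltime, the executable mympiprog, and the mpiexec invocation are placeholders that depend on the local MPI installation.

        #!/bin/sh
        # Illustrative sketch of an MPI job using the parallel file system; values are placeholders.
        #PBS -N mpi-io
        #PBS -j oe
        #PBS -l nodes=8:ppn=2:pvfs
        #PBS -l walltime=2:00:00
        cd $PBS_O_WORKDIR
        cp input.dat $PFSDIR               # stage input onto the parallel file system
        mpiexec ./mympiprog                # ranks on every node can read and write under $PFSDIR
        cp $PFSDIR/output.dat .            # copy results back before the job (and $PFSDIR) ends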

  • Basic Optimization Strategies
    ...do the work (adding, multiplying, etc.). Such CPUs typically have two levels of cache and a main local memory. As one heads up the hierarchy, from primary to secondary data cache to main memory, the memory segments become slower but larger. That is, the primary cache is a small but fast (and expensive) piece of memory, while the main memory is large but slow. As a general rule of thumb, each step up in the hierarchy costs about a factor of ten in overall memory latency. The idea behind this sort of architecture is to balance cost and performance, and the key to the performance lies in efficient cache use. That is, we want to have data that is needed by the CPU in the cache as often as possible, and to limit the need for accessing data in the main memory. Getting data into the cache, and using it once it is there, is the principal key to single-processor performance on these platforms. Specific strategies for optimizing code on the various platforms may be found in various sets of OSC workshop materials; in particular, see Software Development Tools, Using the BALE Cluster at OSC, and Performance Tuning Techniques. Some general themes include:
    Get the Right Answers. If you don't have to get the right answer, you can make a program run arbitrarily fast, so the first rule of optimization is that the program must always generate the correct results. Although this may seem obvious, it is easy to forget to check the answers along the way as you make performance improvements. A lot of frustration can be avoided by making only one change at a time and verifying that the program produces correct results after each change.
    Use Existing Tuned Code. The quickest and easiest way to improve a program's performance is to link it with libraries already tuned for the target hardware. The standard libraries on all OSC systems are highly optimized for the platform in question. In addition, there are often optional libraries that can provide substantial performance benefits.
    Find Out Where to Tune. When confronted with a program composed of hundreds of modules and thousands of lines of code, it would require a heroic and inefficient effort to tune the entire program. Optimization should be concentrated on those parts of the code where the work will pay off with the biggest gains in performance. These sections of code may be identified with the help of a profiler.
    Let the Compiler Do the Work. The most important single tool for optimization is the compiler. Modern optimizing compilers, such as those found on all OSC platforms, are highly sophisticated and capable of generating extremely efficient machine code. The main mechanism for controlling compiler optimizations is the set of options passed to the compiler and linker. Typically there are many such options (over 100 of them for the Origin 2000 Fortran 90 and C compilers); consult the man pages for complete details. To achieve a high level of general... (An illustrative profiling workflow is sketched after this entry.)

    Original URL path: http://archive.osc.edu/supercomputing/computing/opt-strategies.shtml (2013-06-13)
    Open archived version from archive
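
    As a concrete illustration of the "Find Out Where to Tune" and "Let the Compiler Do the Work" themes (this sketch is not from the archived page; the source file myprog.f90 and the specific compiler and flags are placeholders, and the right options vary by platform):

        # Build with profiling instrumentation, run once, and inspect the profile.
        gfortran -O2 -pg -o myprog myprog.f90    # -pg enables gprof-style profiling
        ./myprog                                 # writes gmon.out alongside the normal results
        gprof ./myprog gmon.out | head -40       # the hottest routines appear at the top of the report

        # Then rebuild the hot code with more aggressive optimization and verify the
        # answers are still correct before keeping the change.
        gfortran -O3 -o myprog myprog.f90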

  • High Performance Computing Systems
    OSC Starter Kit. Overview: The OSC Starter Kit provides a suite of programs to help you access the OSC High Performance Computing (HPC) systems securely and easily from your Windows computer. Features: drag-and-drop file transfers to and from the HPC shared filesystem; one-click terminal sessions, with limited support for graphical applications; links to helpful OSC websites, including our account management and social networking portal, ARMSTRONG. Download: Download the OSC Starter Kit (37 MB); User's Manual. System Requirements: The OSC Starter Kit requires a computer running Windows XP or later with 50 MB of free disk space. Usage Notes: When starting a terminal session, you will be prompted first for your username, then for your password. Both of these boxes will obscure your input with dots, so be sure to type carefully. Some menu options in WinSCP, the bundled file...

    Original URL path: http://archive.osc.edu/supercomputing/get_started/ (2013-06-13)
    Open archived version from archive

  • TotalView Technologies TotalView Debugger
    TotalView Technologies TotalView Debugger. Introduction: TotalView is a symbolic debugger which supports threads, MPI, OpenMP, C, C++, and Fortran, plus mixed-language codes. Advanced features include on-demand memory leak detection, other heap allocation debugging features, and the Standard Template Library Viewer (STLView). Features like dive, a wide variety of breakpoints, the Message Queue Graph Visualizer, powerful data analysis, and control at the thread level give you the power you need to solve tough problems. Version: Version 8.0.2-1 is currently available on the Glenn Cluster. Availability: TotalView is available on the Glenn... (An illustrative usage sketch follows this entry.)

    Original URL path: http://archive.osc.edu/supercomputing/software/apps/totalview.shtml (2013-06-13)
    Open archived version from archive
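
    The excerpt is cut off before any usage details. As an illustrative sketch only (the module name and the program are assumptions, not taken from the archived page), a typical way to start a debugger of this kind on a cluster would be:

        module load totalview            # assumed module name; check "module avail" for the real one
        gcc -g -O0 -o myprog myprog.c    # compile a hypothetical program with debugging symbols
        totalview ./myprog               # launch the TotalView GUI on the executable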

  • OSC Research Report
    Advanced Materials. Researchers and scientists in Ohio are developing exciting new classes of materials with unusual properties. Their groundbreaking studies are based on the study of atomic and molecular physics and chemistry, and involve the processing of polymers, metals, ceramics, and composite materials. For example, a physicist delves into the interaction of electrons, superconductors, and microchips. A chemist determines how NMR experiments can be used to learn about the bonds between hydrogen atoms. And a computational experimentalist develops and uses high-performance software to study supersonic and hypersonic airflow phenomena of military jets. World-class materials manufacturing industries have long driven the state's economy, with just under 105,000 workers across 1,184 establishments, according to a recent report by Battelle. The creation and testing of computational models through the Ohio Supercomputer Center continues to set the bar...

    Original URL path: http://archive.osc.edu/research/report/materials.shtml (2013-06-13)
    Open archived version from archive