
Total: 329

  • High Performance Computing Systems
    Using Oakley, the HP Intel Xeon Cluster: 5x the performance of current systems at just 60 percent of current power consumption.

    Glenn, the Ohio Supercomputer Center IBM Cluster 1350: Phase One of Glenn was decommissioned December 14, 2011. The Ohio Supercomputer Center's IBM Cluster 1350, named Glenn, includes AMD Opteron multi-core technologies and the new IBM Cell processors. The system offers a peak performance of more than 75 trillion floating-point operations per second and a variety of memory and processor configurations. OSC's new supercomputer also includes blade systems based on the Cell Broadband Engine processor. This will allow Ohio researchers and industries to easily use this new hybrid HPC. The current hardware configuration consists of the following:
    - 877 System x3455 compute nodes (decommissioned December 14, 2011)
    - 650 System x3455 compute nodes: dual-socket quad-core 2.5 GHz Opterons, 24 GB RAM, 393 GB local disk space in /tmp
    - 88 System x3755 compute nodes (decommissioned December 14, 2011)
    - 8 System x3755 compute nodes: quad-socket quad-core 2.4 GHz Opterons, 64 GB RAM, 188 GB local disk space in /tmp, Voltaire 10 Gbps PCI Express adapter
    - 4 System x3755 login nodes: quad-socket dual-core 2.6 GHz Opterons, 8 GB RAM
    All connected together by 10 Gbps or 20 Gbps InfiniBand.

    There are 36 GPU-capable nodes on Glenn, connected to 18 Quadro Plex S4s for a total of 72 CUDA-enabled graphics devices. Each node has access to two Quadro FX 5800-level graphics cards. Each Quadro Plex S4 has these specs: 4 Quadro FX 5800 GPUs, 240 cores per GPU, 4 GB memory per card. The 36 GPU compute nodes in Glenn contain: dual-socket quad-core 2.5 GHz Opterons, 24 GB RAM, 393 GB local disk space in /tmp, and a 20 Gb/s InfiniBand ConnectX host channel adapter (HCA). (Using Glenn, the IBM 1350 Opteron Cluster at OSC.)

    BALE Cluster: The OSC BALE Cluster is a distributed/shared-memory hybrid system constructed from commodity PC components running the Linux operating system. The current hardware configuration consists of the following. The BALE Theater Cluster consists of 55 workstation nodes and one login node, configured with:
    - Motherboard: ASUS M2NPV-VM
    - Graphics: NVIDIA GeForce 6150 / nForce 430
    - Ethernet: GigE on board (the login node has an extra GigE card)
    - Processors: AMD Athlon 64 X2 Dual-Core EE 4200+, 2.2 GHz, AM2 socket, 512 KB L2
    - RAM: 4 x 1 GB PC5300 ECC DDR2, 667 MHz
    - InfiniBand: PCI Express 8x single-port 4X InfiniBand HCA card (MemFree)
    - Hard drive: 250 GB SATA II, 7200 RPM, 16 MB cache (the login node has 2 x 500 GB SATA II drives, 7200 RPM, 16 MB cache)
    The GPGPU/Visualization cluster consists of 18 nodes; each contains two AMD 2.6 GHz dual-core Opteron CPUs, 8 GB of RAM, a 750 GB SATA hard disk, two NVIDIA Quadro FX 5600 graphics cards, and a dual-port InfiniBand HCA card. Each Quadro FX 5600 card has 1.5 GB of onboard high-speed memory.

    Original URL path: http://archive.osc.edu/supercomputing/hardware/ (2013-06-13)
    Open archived version from archive


  • Software at OSC
    Software at OSC. Software by Field: Biosciences, Chemistry, Structural Mechanics, Fluid Dynamics, Programming, Visualization. Software by System: BALE, Glenn. Related Links: Supercomputing Support, Get a New OSC Account, Available Hardware, Training Accounts, Statewide Software, Manuals, Consult Notices.

    Original URL path: http://archive.osc.edu/supercomputing/software/ (2013-06-13)

  • Statewide Software Licensing
    Statewide Software Licensing: OSC has received funding from the Ohio Board of Regents to acquire statewide licenses for software tools that will facilitate research. We are pleased to announce the availability of the above products, effective Summer 2000. There are a limited number of licenses for each product. The licensing program initially will be available to current OSC users and then will be expanded to a wider audience. Please remember that this is an experiment for Ohio and the vendors; as such, we will need to hear about any concerns or problems you encounter. Send any suggestions to oschelp@osc.edu. Software available through OSC's Statewide Software License Distribution: Altair HyperWorks, high-performance

    Original URL path: http://archive.osc.edu/supercomputing/statewide/ (2013-06-13)

  • Supercomputing Environments
    these files, use the ls -a command. The files are:
    - .forward — File containing your local e-mail address. Note: the system and the center often rely on communication via e-mail. If your local e-mail address changes, please change the contents of the .forward file.
    - .profile — Start-up shell script for Korn, POSIX, and Bourne shell users. Users may modify this file to add or override any environment variables or conventions that were established by the system. For a list of current environment variables on a given system, enter the env command.
    - .login — Start-up shell script for C shell users. Users may modify this file to add or override any environment variables or conventions that were established by the system. For a list of current environment variables on a given system, enter the env command.
    - .cshrc — Start-up shell script for C shell users that is executed each time a new C shell is invoked. Users may modify this file to establish variables and aliases. Note: a similar file for Korn shell users is identified with the ENV environment variable set in the .profile script.

    Do not redefine your PATH environment variable without including $PATH. If you hard-code your PATH, it will break the modules software, which all of the OSC systems use to make software packages available; as a result, you may not be able to compile or submit batch jobs. The following is a better way to modify your PATH variable:
    - Korn shell (.profile): PATH=$PATH:$HOME/bin; export PATH
    - C shell (.cshrc): setenv PATH ${PATH}:${HOME}/bin
    For most systems, the default shell command processor is the Korn shell. To change the default shell, contact oschelp@osc.edu.

    Compiling Systems: PGI, GNU, and Intel compilers are available on all OSC systems.
    Table 2, Compiling Systems and Commands (System / Default Compilers): Oakley — Intel; Glenn — PGI; BALE Cluster — PGI.

    Parallel Environments: Table 3 provides a summary of the parallel environments and types of memory available on the high-performance computers at OSC.
    Table 3, Parallel Environments (System / Programming Models / Memory):
    - Oakley: automatic (Portland Group -Mconcur, Intel -parallel), MPI, OpenMP; distributed between nodes, shared between two processors in a node
    - Glenn: automatic (Portland Group -Mconcur, Intel -parallel), MPI, OpenMP; distributed between nodes, shared between two processors in a node
    - BALE Cluster: automatic (Portland Group -Mconcur, Intel -parallel), MPI, OpenMP; distributed between nodes, shared between two processors in a node

    Scheduling Policies: Scheduling of the cluster's computing resources is handled by software called Moab, which is configured with a number of scheduling policies to keep in mind. Limits: By default, an individual user can have up to 128 concurrently running jobs and/or up to 2048 processor cores in use, and all the users in a particular group/project can between them have up to 192 concurrently running jobs and/or up to 2048 processor cores in use. Serial jobs (that is, jobs which request only one node) can run for up to 168 hours, while
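    As a concrete illustration of the PATH guidance above, here is a minimal, runnable sketch of the recommended Korn/Bourne-style approach (the $HOME/bin directory is only an example; any personal directory is handled the same way):

```shell
# Append a personal directory to PATH rather than overwriting PATH,
# so entries added by the modules software remain intact.
# Suitable for a .profile on a Korn/Bourne-style shell.
PATH=$PATH:$HOME/bin
export PATH

# The original entries are preserved; $HOME/bin is simply appended.
echo "$PATH"
```

    The key point is the leading $PATH on the right-hand side: dropping it replaces the search path instead of extending it, which is what breaks module-provided compilers and batch tools.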

    Original URL path: http://archive.osc.edu/supercomputing/computing/ (2013-06-13)

  • Supercomputing Policies
    users to access and use the data.
    OSC-3, OSC Information Security Framework: This policy and its supporting sub-policies provide a foundation for the security of OSC information technology systems. The requirements put forth in this policy and its supporting sub-policies are designed to ensure that due diligence is exercised in the protection of information systems and services. This policy describes fundamental practices of information security that are to be applied by OSC to ensure that protective measures are implemented and maintained.
    OSC-4, OSC Malicious Code Security: This policy is to implement and operate a malicious code security program. The program should help ensure that adequate protective measures are in place against the introduction of malicious code into OSC information systems, and that computer system users are able to maintain a high degree of malicious code awareness.
    OSC-5, OSC Remote Access Security: This policy is to establish practices wherever a remote-access capability is provided to OSC systems, so that inherent vulnerabilities in such services may be compensated for.
    OSC-6, OSC Security Education and Awareness: This policy requires OSC to provide information technology security education and awareness to employees, contractors, temporary personnel, and other agents of OSC who use and administer computer and telecommunications systems.
    OSC-7, OSC Security Incident Response: This policy defines adequate security response for identified security incidents.
    OSC-8, OSC Password/PIN Security: This policy establishes minimum requirements regarding the proper selection, use, and management of passwords and personal identification numbers (PINs). References in this policy to passwords also apply to PINs, except where explicitly noted.
    OSC-9, OSC Portable Computing Security: This policy addresses information technology (IT) security concerns with portable computing devices and provides direction for their use, management, and control. This policy includes security concerns with the physical device

    Original URL path: http://archive.osc.edu/supercomputing/policies/ (2013-06-13)

  • Supercomputing FAQ
    program to log into another computer over a network, to execute commands on a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over insecure channels. SSH provides secure X connections and secure forwarding of arbitrary TCP connections.

    How can I upload or download files? Since FTP is no longer supported at OSC, you must use a utility that uses the SSH protocol. Current options include Secure CoPy (scp) and SSH File Transfer Protocol (sftp). These utilities should be provided on most Linux/Unix platforms, but they can also be found at the links given in the next section.

    Where can I find SSH clients? For Windows users, a popular version of scp and SFTP exists, called WinSCP. This open-source application can be obtained at no charge from http://winscp.sourceforge.net/eng/.

    Where can I find SSH clients? Windows users should consider using our OSC Starter Kit. This kit requires no configuration and includes shortcuts for initiating SSH connections with the HPC clusters, transferring files to and from the HPC file system, and links to helpful OSC websites. To find out more, click here. For users interested in setting up their own environments, we recommend the following options:
    - Windows: Secure Shell, by SSH Communications Security (free for non-commercial users); PuTTY, by Simon Tatham (open source)
    - Linux and Mac OS X: Mac OS X and most Linux distributions come with the OpenSSH suite preinstalled, including the ssh and sftp command-line tools. For more information, open a terminal window and type man ssh or man sftp.
    - Graphical SFTP clients: WinSCP (Windows, open source); Cyberduck (Mac OS X, open source); FileZilla (Windows, Mac OS X, Linux; open source)

    How does SSH work? SSH works by the exchange and verification of information, using public and private keys, to identify hosts and users. The ssh-keygen command creates a directory, ~/.ssh, and files that contain your authentication information. The public key is stored in ~/.ssh/identity.pub and the private key is stored in ~/.ssh/identity. Share only your public key; never share your private key. To further protect your private key, you should enter a passphrase to encrypt the key when it is stored in the file system. This will prevent people from using it even if they gain access to your files. One other important file is ~/.ssh/authorized_keys. Append your public keys to the authorized_keys file and keep the same copy of it on each system where you will make ssh connections.

    How do I run a graphical application in an ssh session? To do this, you need to be running an X display server. On most Unix and Linux systems you will probably be running this already. If you are using a Mac running OS X, you will need to install and run either the Apple X11 server or XDarwin. On Windows systems there are numerous choices available, including X-Win32 and Cygwin/X. Most ssh clients can be configured to automatically set up a remote X display as part of the login session. This may happen without any action on your part: if you log into an OSC machine and the command echo $DISPLAY gives you more than a blank line, you need do nothing else. To configure this automated behavior with OpenSSH on a Unix/Linux/OS X system, you will need to either log in with the -X option or add the following to ~/.ssh/config on your local machine:

        Host *
            ForwardX11 yes
            ForwardX11Trusted yes

    Note that some visualization programs, such as AVS/Express, require an X server which supports the GLX extension for OpenGL compatibility. To check if your X server supports that extension, run the command xdpyinfo | grep GLX in your login session to an OSC system. You should get one or more lines of output if your X server supports this extension.

    Batch Processing Questions. What is a batch request? On all OSC production systems, batch processing is managed by the Portable Batch System (PBS). PBS batch requests (jobs) are shell scripts that contain the same set of commands that you enter interactively. These requests may also include options for the batch system that provide timing, memory, and processor information. For example requests, see Batch Processing in the Supercomputing Environments section.

    How do I submit, check the status of, and/or delete a batch job? PBS uses qsub to submit, qstat -a to check the status of, and qdel to delete a batch request. For more information, see the online man pages.

    Why won't my job run? There are numerous reasons why a job might not run even though there appear to be processors and/or memory available. These include:
    - Your account may be at or near the job-count or processor-count limit for an individual user.
    - Your group/project may be at or near the job-count or processor-count limit for a group.
    - The scheduler may be trying to free enough processors to run a large parallel job.
    - Your job may need to run longer than the time left until the start of a scheduled downtime.

    How can I retrieve files from unexpectedly terminated jobs? A batch job that terminates before the script is completed can still copy files from $TMPDIR to the user's home directory via the trap command. In the batch script, the trap command needs to precede the command causing the TERMination. It could be placed immediately after the PBS header lines. Here is a generic form:

        trap "cd $PBS_O_WORKDIR; mkdir $PBS_JOBID; cp -R $TMPDIR $PBS_JOBID" TERM

    Compiling System Questions. What languages are available? Fortran, C, and C++ are available on all OSC systems. The commands used to invoke the compilers and/or loaders vary from system to system. For more information, see Using the System under Supercomputing Environments. When compiling an MPI-2 C++ program, I get the following error: /usr/local/mpi/mvapich2-1.2p1/intel/include/mpicxx.h:37
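    The trap technique for recovering files can be exercised outside PBS with stand-in values for the variables the batch system normally provides. In this sketch, the assignments to PBS_O_WORKDIR, PBS_JOBID, and TMPDIR are illustrative scaffolding (in a real job these are set by PBS); the trap line is the part that belongs in a job script:

```shell
#!/bin/sh
# Sketch of the $TMPDIR-recovery trap, runnable outside PBS.
PBS_O_WORKDIR=$(mktemp -d)   # stand-in for the job's submission directory
PBS_JOBID=12345.demo         # stand-in for the scheduler-assigned job id
TMPDIR=$(mktemp -d)          # stand-in for node-local scratch space

# The trap must be set before any command that might be TERMinated;
# in a real job script it goes right after the #PBS header lines.
trap "cd $PBS_O_WORKDIR; mkdir $PBS_JOBID; cp -R $TMPDIR $PBS_JOBID" TERM

# Simulated work: produce partial output, then receive TERM, as a
# killed job would when the scheduler terminates it.
echo "partial results" > "$TMPDIR/output.dat"
kill -TERM $$

# The trap handler has copied the scratch directory into
# $PBS_O_WORKDIR/$PBS_JOBID, so the partial output survives.
ls "$PBS_O_WORKDIR/$PBS_JOBID"
```

    Because the trap string is double-quoted, the variables are expanded when the trap is set; the handler therefore uses the paths that were current at that point, which is exactly what a job script wants.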

    Original URL path: http://archive.osc.edu/supercomputing/faq.shtml (2013-06-13)

  • Analytics Research
    Analytics Research: Learn more about how Ohio researchers are successfully applying high-performance computing to their research. With the increasing ability to measure variables, the volume of data generated as a part of both research and the practice of engineering and medicine is overwhelming. Resolving this problem requires a combination of innovative data storage and annotation systems, file systems, advanced I/O and management systems, as well as analytical software and the computational power to process the data. At the Ohio Supercomputer Center, analytics researchers are adapting the latest analytics, signal, and image processing techniques from the Department of Defense or other sources to non-traditional domains such as electron microscopy or NMR. This work complements the Center's Blue Collar Computing effort by supporting the commercialization of SIP processing algorithms and promoting the use of analytics by industry. For example, many DoD high-performance computing researchers, particularly those in the SIP area, develop and run programming codes with MATLAB or related development tools such as MatlabMPI, Star-P, and pMatlab. These development tools are convenient because they are completely self-contained on a desktop computer. To facilitate connecting to and interacting with supercomputers, OSC experts developed an SSHToolbox, which allows MATLAB researchers to access high-performance computers without leaving their comfortable desktop MATLAB environment. SSH stands for Secure Shell and is the most widely used protocol and tool for connecting to remote high-performance computing resources. The

    Original URL path: http://archive.osc.edu/research/analytics/ (2013-06-13)

  • Networking Research
    network-dependent. These end applications use network protocols that are complex and resource-intensive. This requires the best-effort structure of today's Internet to support massive flows of voice, video, and data traffic while still maintaining consistent end-application performance. End-application performance over the Internet is directly impacted by the end-to-end performance bottlenecks present at the end hosts and at the intermediate network paths. The bottleneck factors include cross-traffic congestion, cyber attacks, and optical link failures. Understanding the limitations and demands of advanced end applications and developing suitable adaptation technologies is vital for supporting existing and emerging end applications on the Internet. The Networking Research group at OSC/OARnet is engaged in evaluating and developing novel and innovative network-based end applications. The group also develops techniques and open-source tools for the network awareness required for end users to identify and isolate bottleneck scenarios. The end applications targeted are (a) Voice and Video over IP (VVoIP), for multi-point videoconferencing and video streaming (IPTV); (b) Remote Instrumentation, for remote access of expensive scientific instrument resources; (c) Thin Client Cloud Computing, for virtualization of user desktops along with their applications and data; and (d) Large-scale Data Transfers, for grid computing and disaster data recovery. In addition, there are projects underway in the network security arena to enable Secure Videoconferencing, Detection of Active Worm Propagation, and Network Forensics Training. The group has set up several Network Monitoring Test Beds to collect human perceptual measures, throughputs for large-scale data transfers, and both active and passive network measurements. Research findings from the above research projects are being implemented as human-aware and network-aware end applications, such as the Remote Instrumentation Collaboration Environment (RICE), and as open-source network measurement tools, such as the H.323 Beacon and ActiveMon. These open

    Original URL path: http://archive.osc.edu/research/networking/ (2013-06-13)


