  • start [UC Berkeley Clustered Computing]
    beta test usage for the next few days until September 1, 2013, so that users have a chance to check things out and shake out any remaining bugs before billing starts. After September 1, usage on both head nodes will be billed, and more nodes will gradually be migrated from the old psi head node to the new zen head node as time goes on. If there are no significant problems with migrating to the new setup, I would like to switch over most of the compute nodes to the new zen head node queues before the end of this billing quarter (Sept 30). The few remaining 16GB 8-core Dell 1950 nodes are likely to remain on the old psi head node and zen queue for the time being and may soon be retired. Please report any concerns, problems, or gratitude to support@millennium.berkeley.edu. Thanks in advance for your help in checking out the new setup. -- Cluster Support (mailto:support@millennium.berkeley.edu)
    Tue 27 Aug 2013: Free beta test of queues on new zen headnode.
    Tue 6 Aug 2013: Reboot S132 at 10:30am, back at 11:15am. NFS was hung, rendering /work, /usr/mill, and /usr/sww inaccessible.
    Wed 24 Jul 2013: S131 crashed, /work4 inaccessible; rebooting at 9:45am.
    Fri 26 Apr 2013: Failure of primary NIS and DNS server caused temporary access issues.
    Mon 18 Mar 2013: Network failure partitioning the research net has been patched.
    Wed 06 Mar 2013: Ganglia (http://monitor.millennium.berkeley.edu) is down.
    Sat 15 Dec 2012: A DNS server (tangelo) went down this morning, resulting in slow, unreliable logins to many systems; rebooted, service restored.
    Fri 30 Nov 2012: work server hung (/usr/mill and /usr/sww as well); rebooted and remounted on the cluster.
    Thu 29 Nov 2012: Power

    Original URL path: https://www.millennium.berkeley.edu/ (2014-09-22)


  • start [UC Berkeley Clustered Computing]
    is a list of pages that seem to link back to the current page. Nothing was found. start.txt · Last modified: 2011/09/03 16:53 by jkuroda. Except where otherwise noted, content on this wiki is licensed under the

    Original URL path: https://www.millennium.berkeley.edu/wiki/start?do=backlink (2014-09-22)

  • psi [UC Berkeley Clustered Computing]
    disk, one available for special needs; Ubuntu Server 9.10 Linux (Debian-compatible, 64-bit em64t). 20 Dell PowerEdge 1950 (zen queue): two quad-core Intel(R) Xeon(R) E5345 CPUs at 2.33GHz (4MB L2 cache), 16GB of RAM, one 300GB 10K-rpm SAS disk, Ubuntu Server 10.04 LTS 64-bit Linux. 64 Dell PowerEdge 1850 (zen queue): two Intel(R) Xeon(TM) 3.00GHz CPUs (1MB L2 cache), 3GB of RAM, two 147GB 10K-rpm SCSI disks, Myricom Myrinet 2000 M3S-PCI64B, Ubuntu Server 10.04 LTS 64-bit Linux.
    The nodes are available for batch jobs and interactive sessions through the Torque/Maui queuing system; the batch head node is called psi.millennium.berkeley.edu. There are two batch queues, psi and zen. The default batch queue name is psi, although a routing queue named batch is maintained for compatibility with existing scripts. A secondary queue named zen is available to access the nodes formerly available via zen.millennium.berkeley.edu. Job submission to the Torque/Maui batch system is through qsub, as before; for interactive reservations, qsub -I also grants access to a node (a minimal submission sketch follows below). Users who run simple, self-contained batch jobs may not notice very many differences. This acceptance-testing period is for finding the potential bumps in the road for users who use mpirun or gexec, or who use version-specific software and specialized packages.
    Most of the software installed on the old batch system that is accessible via zen is also available on the new cluster through psi. Many of the packages will be newer versions than those running on the old cluster, which may lead to subtle differences for some jobs. Some of the lesser-used or more esoteric packages have been dropped, but most of these can be brought back by request as long as they are supported under Ubuntu 9.10.
    There are many aspects of the cluster that may be tuned for better performance, and we would like to work with you, the end-user community, to help determine how this cluster can best support your needs. For example, the new nodes have 48GB of memory, allowing for larger jobs, but at a slightly slower memory bus clock rate. The new nodes are also running with hyper-threading enabled, allowing up to 16 simultaneous tasks per node. If either of these is deemed problematic, we can turn off hyper-threading on some or all of the nodes to run just 8 physical cores per node, or reduce the memory size on some nodes for faster response on smaller jobs. Concurrently, we are working on making a new, faster work space available to the PSI cluster: the Dell Fast Storage Cluster provides high-speed shared file systems to this compute cluster.
    Cluster Accounts: Access to the EECS Department's PSI Compute Cluster is available to all EECS research account holders who have a sponsor willing to cover the recharge costs for the resource time used. If

    Original URL path: https://www.millennium.berkeley.edu/wiki/psi (2014-09-22)
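    As a minimal sketch of the qsub usage described above (the script name, queue choices, and resource requests here are illustrative assumptions, not taken from the archived page):

        # Submit a batch script to the default psi queue
        qsub myjob.sh

        # Explicitly target the zen queue and request 1 node / 8 cores for 2 hours
        qsub -q zen -l nodes=1:ppn=8,walltime=02:00:00 myjob.sh

        # Interactive reservation on a compute node, as mentioned in the excerpt
        qsub -I -q psi -l nodes=1:ppn=8

        # Check your jobs in the queue
        qstat -u $USER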

  • i3 [UC Berkeley Clustered Computing]
    Storage Cluster provides high-speed shared file systems to this compute cluster.
    Using the Cluster: This is an unscheduled, shared resource. You can view the current use of the machines by running gstat or viewing the Ganglia graphs; gstat provides an ordered list of available machines, and machines with a load of 2 or more are fully loaded and should be avoided until current jobs have completed. Run gexec to launch jobs, for example:
        cd /work/<user>
        gexec -n 10 hostname     (returns the hostname of 10 machines)
        cp /home/<user>/foo /work/<user>
        gexec -n 20 foo          (runs foo on 20 machines)
    For more information on using gexec, please see the documentation. A consolidated sketch of this workflow follows below.
    Filesystems: While you should be able to execute jobs from your EECS department home directory, we strongly suggest that you launch all jobs from a /work/<user> or /scratch/<user> directory. Before executing a program, copy all binaries and data files into your work or scratch directory, cd into that directory, and execute from there. This avoids putting unnecessary load on EECS department fileservers, which are sometimes unable to handle many simultaneous mount requests. Note: /work has a 30-day deletion policy; files in /work untouched for 30 days will be purged without warning. /work is

    Original URL path: https://www.millennium.berkeley.edu/wiki/i3 (2014-09-22)
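    A consolidated sketch of the workflow recommended above (the program and data file names, and the use of $USER for the per-user directory, are illustrative assumptions):

        # Check current machine load first; avoid machines that gstat reports as loaded
        gstat

        # Stage binaries and data into the shared work area and run from there,
        # rather than from the EECS home directory
        mkdir -p /work/$USER
        cp ~/myprog ~/input.dat /work/$USER/
        cd /work/$USER

        # Launch on 10 machines with gexec (remember the 30-day purge policy on /work)
        gexec -n 10 ./myprog input.dat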

  • nano [UC Berkeley Clustered Computing]
    gigabit Ethernet, and six 300GB 10,000 RPM Ultra320 SCSI hard disk drives.
    Using the Cluster: Access to the Nano cluster is limited to affiliates of the 6 project PIs. To apply for an account, visit the Account Request Form. The frontend node, nano.millennium.berkeley.edu, is accessible via SSHv2.
    Support Community: The nano-users mailing list is a forum for community support of the Nano cluster. All users are added to this list by default but have the option of unsubscribing.
    Torque Batch Queue (PBS): To submit a script to Torque, run qsub script.sh. Torque defines environment variables and runs your script on the first assigned node; it does not launch processes on multiple CPUs. You may use gexec or mpirun within your script to launch processes on multiple assigned nodes (a sketch of such a script follows below). The default queue is batch, which contains all nodes. View the output of qstat -a or qstat -q to check the queue status. See the Example Torque Script and the Torque v2.0 Admin Manual.
    MPI: If you are unfamiliar with MPI, please visit the MPI Tutorial. MPI jobs can be run over the gigabit Ethernet.
    Compilers: IBM Fortran Compiler, IBM C/C++ Compilers.
    Filesystems: /home/nano/<group>/<user> is your

    Original URL path: https://www.millennium.berkeley.edu/wiki/nano (2014-09-22)
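    A minimal sketch of a Torque script of the kind described above (this is not the archived "Example Torque Script"; the job name, node counts, walltime, and program name are illustrative assumptions):

        #!/bin/bash
        #PBS -N nano_mpi_job          # job name
        #PBS -q batch                 # default queue containing all nodes
        #PBS -l nodes=4:ppn=2         # adjust node/core counts to the actual hardware
        #PBS -l walltime=01:00:00

        # Torque runs this script only on the first assigned node,
        # so start from the submission directory and fan out with mpirun.
        cd $PBS_O_WORKDIR
        mpirun -np 8 -machinefile $PBS_NODEFILE ./my_mpi_program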

  • nlp [UC Berkeley Clustered Computing]
    to this cluster is limited to members of Professor Klein's NLP research group and graduate-level NLP classes. This is an unscheduled, shared resource. You can view the current use of the machines by running gstat or viewing the Ganglia graphs; gstat shows an ordered list of available machines, and machines with a load of 2 or more are fully loaded and should be avoided until current jobs have completed. Run gexec to launch parallel jobs; for more information on using gexec, please see the documentation.
    NLP-specific filesystem details: /work on the NLP cluster is a 250GB NFS filesystem mounted on all the cluster nodes; /work has no auto-deletion policy, so please clean up after yourself. /scratch is high-speed RAID0 storage local to each machine; /scratch can be cross-automounted on NLP nodes at /net/HOSTNAME (a short sketch follows below). Data left anywhere on compute nodes or on the /work filesystem is never backed up. Data in home directories for CS 294-5 class accounts is never backed up. If you have an EECS research account, your EECS home directory is backed up as per the IRIS policy/fee schedule. We strongly suggest that you launch all jobs from a /work/<user> directory. Copy all binaries

    Original URL path: https://www.millennium.berkeley.edu/wiki/nlp (2014-09-22)
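    A short sketch of using the local /scratch space and the cross-automount mentioned above (the hostname nlp5 and the exact /net/HOSTNAME/scratch layout are assumptions for illustration):

        # Stage large data on this node's fast local scratch (never backed up)
        mkdir -p /scratch/$USER
        cp /work/$USER/model.bin /scratch/$USER/

        # From another NLP node, reach that node's scratch via the automount
        ls /net/nlp5/scratch/$USER/

        # /work has no auto-deletion policy here, so remove finished experiments yourself
        rm -rf /work/$USER/old-experiment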

  • radlab [UC Berkeley Clustered Computing]
    the Ganglia graphs. gstat provides an ordered list of available machines; machines with a load of 4 or more are fully loaded and should be avoided until current jobs have completed (a sketch of steering jobs to specific machines follows below). Reservations: see the Cluster Reservation System. Filesystems: While you should be able to execute jobs from your EECS department home directory, we strongly suggest that you launch all jobs from a /work/<user> directory. Before executing a program, copy all binaries and data files into

    Original URL path: https://www.millennium.berkeley.edu/wiki/radlab (2014-09-22)
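    Once gstat has identified lightly loaded machines, standard gexec allows the target hosts to be pinned via the GEXEC_SVRS environment variable. This behavior comes from the general gexec documentation rather than the archived page, and the hostnames and program name below are placeholders:

        # Restrict gexec to two specific machines chosen from the gstat listing
        export GEXEC_SVRS="node01.millennium.berkeley.edu node02.millennium.berkeley.edu"
        gexec -n 2 ./myjob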

  • sensornets [UC Berkeley Clustered Computing]
    Omega testbed consists of 28 Telos (rev B) motes, each with an 8MHz Texas Instruments MSP430 microcontroller, 48k Flash, 10k RAM, and a 250kbps 2.4GHz IEEE 802.15.4 Chipcon wireless transceiver. Active map of the Omega testbed. The Telos motes are connected via USB for power, programming, and debugging.
    Trio (Obsolete): The Trio testbed was a large-scale experiment of 500 Trio motes deployed in the wild at UCB's Richmond Field Station.
    sMote (Obsolete): The sMote testbed consisted of 78 Mica2DOT sensor motes, each with an Atmel ATmega128L processor running at 7.3MHz, 128KB of read-only program memory, 4KB of RAM, and a Chipcon CC1000 radio operating at 433MHz with an indoor range of approximately 100 meters. Each mote was powered by Power over Ethernet (PoE) rather than batteries or wall power and was connected to a private Ethernet. This facilitated direct capture of data and uploading of new programs; the Ethernet connection was used as a debugging and reprogramming feature only, as nodes generally communicate via radio. The sMote testbed was replaced by the Motescope testbed in April 2007.
    Roulette (Obsolete): The primary objective of this testbed, located on the 4th floor

    Original URL path: https://www.millennium.berkeley.edu/wiki/sensornets (2014-09-22)