archive-edu.com » EDU » O » OSC.EDU

  • Applicability of Object-Based Storage Devices in Parallel File Systems
    Delegating responsibility for some operations from the host processor to intelligent peripherals can improve application performance. Traditional storage technology is based on simple, fixed-size accesses with little assistance from disk drives, but an emerging standard for object-based storage devices (OSDs) is being adopted. These devices will offer improvements in performance, scalability, and management, and are expected to be available as commodity items soon. When assembled as a parallel file system for use in high-performance computing, object-based storage devices offer the potential to improve scalability and throughput by permitting clients to securely and directly access storage. However, while the feature set offered by OSD is richer than that of traditional block-based devices, it does not provide all the functionality needed by a parallel file system.
    We will examine multiple aspects of the mismatch between the needs of a parallel file system, in particular PVFS2, and the capabilities of OSD. Topic areas include mapping data to objects, metadata, transport, caching, and reliability. Trade-offs arise from the mapping of files to objects and how to stripe files across multiple objects and disks in order to obtain good performance (a simple striping calculation of this kind is sketched after this entry). A distributed file system needs to track metadata that describes and connects data; OSDs offer automatic management of some critical metadata components that can be used by the file system. There are transport issues related to flow control and multicast operations that must be solved. Implementing client caching schemes and maintaining data consistency also requires proper application of OSD capabilities. Our work will examine the feasibility of OSDs for use in parallel file systems, discovering techniques to accommodate this high-performance usage model. We will also suggest extensions to the current OSD standard as needed.
    Milestones achieved:
    - Design and implementation of an OSD target. It implements almost all mandatory commands in the OSD2v3 specification and some interesting optional commands, such as multi-object commands.
    - A lightweight OSD initiator library to enable clients to speak with the OSD target.
    - Extensions to enable usage over RDMA networks, also known as iSER.
    - An intelligent OSD capable of executing offloaded commands, and related infrastructure.
    Project deliverables (peer-reviewed publications):
    - Nawab Ali, Ananth Devulapalli, Dennis Dalessandro, Pete Wyckoff, and P. Sadayappan. "Revisiting the Metadata Architecture of Parallel File Systems." 3rd International Petascale Data Storage Workshop (PDSW). (paper, talk)
    - Ananth Devulapalli, Dennis Dalessandro, and Pete Wyckoff. "Data Structure Consistency Using Atomic Operations in Storage Devices." 5th International Workshop on Storage Network Architecture and Parallel I/Os (SNAPI 2008). (paper, talk)
    - Nawab Ali, Ananth Devulapalli, Dennis Dalessandro, Pete Wyckoff, and P. Sadayappan. "An OSD-based Approach to Managing Directory Operations in Parallel File Systems." IEEE Cluster 2008. (paper, talk)
    - Dennis Dalessandro, Ananth Devulapalli, and Pete Wyckoff. "Non-Contiguous I/O Support for Object-Based Storage." International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2 2008). (paper, talk)
    - Ananth Devulapalli, Dennis Dalessandro, Pete Wyckoff, Nawab Ali, and P. Sadayappan. "Integrating Parallel File Systems with Object-Based Storage Devices." Supercomputing (SC 2007). (paper)

    Original URL path: http://archive.osc.edu/research/network_file/projects/object/index.shtml (2013-06-13)
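    The striping trade-off described above can be made concrete with a small, hypothetical sketch (not taken from PVFS2 or this project's OSD code; the stripe size, object count, and function names are assumptions): mapping a logical file offset to an object index and an offset within that object under simple round-robin striping.

    /* Hypothetical sketch: round-robin striping of a file across OSD objects.
     * Stripe size and object count are assumptions, not project parameters. */
    #include <stdint.h>
    #include <stdio.h>

    struct stripe_loc {
        uint32_t object_index;   /* which object (and hence disk) holds this byte */
        uint64_t object_offset;  /* offset within that object */
    };

    /* Map a logical file offset to an object and an offset inside it,
     * assuming round-robin striping with a fixed stripe size. */
    static struct stripe_loc map_offset(uint64_t file_offset,
                                        uint64_t stripe_size,
                                        uint32_t num_objects)
    {
        uint64_t stripe_number = file_offset / stripe_size;
        struct stripe_loc loc;
        loc.object_index  = (uint32_t)(stripe_number % num_objects);
        loc.object_offset = (stripe_number / num_objects) * stripe_size
                            + file_offset % stripe_size;
        return loc;
    }

    int main(void)
    {
        /* Example: 64 KB stripes spread over 4 objects. */
        for (uint64_t off = 0; off < 4 * 65536ULL; off += 65536) {
            struct stripe_loc loc = map_offset(off, 65536, 4);
            printf("file offset %8llu -> object %u, offset %llu\n",
                   (unsigned long long)off, loc.object_index,
                   (unsigned long long)loc.object_offset);
        }
        return 0;
    }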


  • RDMA Enabled Apache
    just too big a burden to bear. The reason for this is the way TCP/IP is handled. Today's NICs simply put bits on and off of the wire, so to speak; all protocol processing is handled by the CPU. TCP Offload Engines, or TOE cards, have been introduced in recent years to lackluster success. These devices aim to move the network processing away from the CPU and onto the NIC. This is a good idea, albeit not a full solution. The limiting factor of TOE cards is that they do not address the problem of accessing memory: the CPU must still move data to and from the NIC, and to do this it must make a copy of the data to pass through the OS kernel, further confounding the problem.
    Due to the shortcomings of TOE cards, a technique known as Remote Direct Memory Access (RDMA) has been developed. RDMA not only completely offloads all protocol processing to the NIC, but also frees the CPU of the burden of dealing with memory. The network adapter, or RDMA NIC (RNIC), handles movement of data directly from user space. No copy is made, because data is not passed through the OS kernel; RDMA achieves what is known as zero copy (a minimal sketch of the memory-registration step that makes this possible appears after this entry). In the past, RDMA has proved to be a great solution to networking problems; however, it was limited to data-center use in technologies such as InfiniBand. With the technology known as iWARP, it is now possible to utilize RDMA over ordinary TCP networks; in other words, RDMA can work over the Internet. One of the limiting factors for RDMA deployment is that it requires special hardware. While a NIC upgrade on a web server is perfectly reasonable, requiring all web clients to upgrade their NICs to an RNIC is not rational. Fortunately, using what is known as software iWARP, web clients can use a downloadable plug-in or direct software support to emulate the iWARP protocols and let the web server take advantage of its RNIC.
    Our work on mod_rdma is to create a module for the popular Apache web server that enables it to use RDMA. We outfit a web server with 10-Gigabit iWARP hardware and use clients running software-iWARP-modified wget and Apache bench programs. The result is much improved web server performance, enabling a single server to fully saturate a 10 Gigabit per second network. This web page serves as the distribution site for the mod_rdma source code as well as any documentation and support, and will be the main source of public dissemination of information on RDMA Enabled Apache.
    Technical Details:
    1. Overview
    2. Modifications to Apache server
    3. Apache Hooks
    4. Transport Modules
    5. Supported Transport Types
    6. Included Client Applications
    7. Licensing
    8. Funding
    9. Author and Contact Information
    10. Support
    11. Building / Installation
    Download Release: Download a tarball containing mod_rdma, wget, and Apache bench (ab). Please send an email to Dennis

    Original URL path: http://archive.osc.edu/research/network_file/projects/rdma/index.shtml (2013-06-13)
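    As a minimal sketch of the zero-copy mechanism described above, the following hypothetical example registers a user-space buffer with the OpenFabrics verbs API so an RNIC could move data to and from it directly. It is not mod_rdma code; the buffer size and access flags are arbitrary choices, and a complete program would also set up queue pairs and exchange memory keys with its peer.

    /* Hypothetical sketch of the memory-registration step behind zero-copy RDMA,
     * using the OpenFabrics verbs API. This is not mod_rdma code; the buffer
     * size and variable names are assumptions, and a real program would also
     * create queue pairs and exchange keys with its peer before transferring. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable device found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register (pin) a user-space buffer so the RNIC can DMA into and out
         * of it directly, without the kernel copying data on each transfer. */
        size_t len = 1 << 20;                 /* 1 MB, arbitrary for illustration */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ);
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }
        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }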

  • Software implementation and testing of iWarp protocol
    for protocol testing and for future protocol research. The work presented here allows a server with an iWarp network card to utilize it fully by implementing the iWarp protocol in software on the non-accelerated clients. While throughput does not improve, the true benefit of reduced load on the server machine is realized. Experiments show that sender system load is reduced from 35% to 5%, and receiver load is reduced from 90% to under 5% (a small sketch of how such sender-side load can be measured appears after this entry). These gains allow a server to scale to handle many more simultaneous client connections.
    Networking is one of the key aspects of high-performance computing. As systems grow larger and more complex, network technologies continue to evolve, driven by the need to move larger amounts of data in smaller amounts of time. From the dawn of computing until recently, the bottleneck has been the network infrastructure. Today, as network speeds reach 10 Gbps and beyond, the bottleneck has moved to the CPU: it spends so much time handling communication-related processing that little time is left to do computational work. It is for this reason that we turn to advanced interconnect technologies like InfiniBand and iWarp.
    The High Performance Networks and File Systems Research Group is active at the forefront of network technology. From investigating the performance of a new technology to pushing the boundaries of what a new technology can do, the group aims to play a key role in how high-performance networks are used. Ongoing work is centered around a technology known as RDMA (Remote Direct Memory Access). From vast experience with InfiniBand to being an early adopter of iWarp, the group is active on all fronts. Current work includes ongoing performance studies with InfiniBand as well as new ways of utilizing InfiniBand for the benefit of the

    Original URL path: http://archive.osc.edu/research/network_file/projects/iwarp/index.shtml (2013-06-13)
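    The load reductions above come from measuring how much CPU time the host spends driving the network. The following hypothetical sketch (not the group's benchmark; the host, port, and transfer size are assumptions, and a matching receiver must already be listening) shows one simple way to quantify sender-side CPU cost for plain TCP/IP, the kind of overhead that software iWarp on the client and an RNIC on the server are meant to remove from the server.

    /* Hypothetical sketch of measuring sender-side CPU cost for plain TCP/IP,
     * the kind of load the experiments above compare against software iWarp.
     * The host, port, and transfer size are assumptions; this is not the
     * group's benchmark code. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5001) };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        static char buf[64 * 1024];          /* 64 KB per send() call */
        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);

        /* Push roughly 1 GB through the socket; every byte is copied through
         * the kernel, which is exactly the overhead RDMA/iWarp avoids. */
        for (long sent = 0; sent < 1L << 30; ) {
            ssize_t n = send(fd, buf, sizeof(buf), 0);
            if (n < 0) { perror("send"); break; }
            sent += n;
        }

        getrusage(RUSAGE_SELF, &after);
        double user = (after.ru_utime.tv_sec - before.ru_utime.tv_sec)
                    + (after.ru_utime.tv_usec - before.ru_utime.tv_usec) / 1e6;
        double sys  = (after.ru_stime.tv_sec - before.ru_stime.tv_sec)
                    + (after.ru_stime.tv_usec - before.ru_stime.tv_usec) / 1e6;
        printf("CPU time spent sending: %.2f s user, %.2f s system\n", user, sys);
        close(fd);
        return 0;
    }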

  • Low-latency gigabit ethernet message passing
    Principal Investigators: P. Wyckoff and D. K. Panda
    Funding Source: Sandia National Labs, contract 12652
    Duration: 9/1/2000 - 8/31/2002
    Description: Although ethernet is the networking hardware of choice in the commodity computer market, its use in high-performance cluster computing platforms has not met wide acceptance due to its relatively poor performance in terms of latency and bandwidth. This is not all inherent in the medium itself, but rather in the software which has traditionally been used to access the hardware. We plan to significantly

    Original URL path: http://archive.osc.edu/research/network_file/projects/ethernet/index.shtml (2013-06-13)

  • High Performance and Scalable MPI Implementation on InfiniBand
    Principal Investigators: D. K. Panda and P. Wyckoff
    Funding Source: Sandia National Labs, CA
    Duration: 5/2/2002 - 3/30/2003
    Description: The recently proposed InfiniBand (IBA) standard provides many new features and mechanisms which are not available in other contemporary and popular cluster interconnect technologies such as Myrinet and Gigabit Ethernet. These new features raise interesting challenges regarding how to take advantage of them to design a high-performance and scalable MPI implementation for large clusters. We are investigating aspects associated with implementing both point-to-point and collective communication primitives of the MPI standard (both primitive classes are illustrated in the sketch after this entry). Specifically, we focus on the following topics: optimal usage of IBA transport

    Original URL path: http://archive.osc.edu/research/network_file/projects/hp_mpi/index.shtml (2013-06-13)
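    As a minimal, hypothetical illustration of the two MPI primitive classes mentioned above (not code from this project), the sketch below runs a point-to-point ping-pong between two ranks to estimate latency, then a collective broadcast across all ranks; the message size and repetition count are arbitrary.

    /* Hypothetical sketch of the two MPI primitive classes the project targets:
     * a point-to-point ping-pong between ranks 0 and 1, and a collective
     * broadcast. Message size and repetition count are arbitrary choices. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Point-to-point: ranks 0 and 1 bounce a small message back and
         * forth; half the measured round-trip time approximates latency. */
        if (size >= 2 && rank < 2) {
            char msg[8] = "ping";
            double t0 = MPI_Wtime();
            for (int i = 0; i < 1000; i++) {
                if (rank == 0) {
                    MPI_Send(msg, sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(msg, sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else {
                    MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            if (rank == 0)
                printf("avg one-way latency: %.2f us\n",
                       (MPI_Wtime() - t0) / 1000 / 2 * 1e6);
        }

        /* Collective: every rank participates in a broadcast from rank 0. */
        int data = (rank == 0) ? 7 : 0;
        MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }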

  • Collective Communication and Connection Management Issues in InfiniBand-Based Clusters
    Principal Investigators: D. K. Panda and P. Wyckoff
    Funding Source: Los Alamos National Laboratory
    Duration: 05/01/03 - 09/30/04
    Description: The emerging InfiniBand architecture [1] is providing a new way to design next-generation high-performance clusters. In addition to the higher network speed, this architecture provides several new mechanisms (RDMA, multicast, atomics, and service levels) to build high-performance and efficient communication subsystems for clusters (one common approach to the connection-management issue named in the project title is sketched after this entry). At OSU, a high-performance MPI implementation, MVAPICH [2, 3], with a focus toward point-to-point

    Original URL path: http://archive.osc.edu/research/network_file/projects/infiniband/index.shtml (2013-06-13)
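    The description is cut off before the connection-management details, but one common approach to the issue named in the project title is to establish connections lazily, only when a process first communicates with a peer, rather than pre-connecting every pair at startup. The sketch below is a hypothetical, plain-C illustration of that idea; no actual InfiniBand queue pairs are created, and it is not MVAPICH code.

    /* Hypothetical illustration of on-demand (lazy) connection management:
     * a connection to a peer is established only the first time a message is
     * sent to it, rather than connecting every pair of processes at startup.
     * Plain C stand-ins only; no real InfiniBand resources are created. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PEERS 8

    static bool connected[NUM_PEERS];   /* connection table, all false initially */
    static int  connections_made;

    static void ensure_connection(int peer)
    {
        if (!connected[peer]) {
            /* In a real implementation this is where a queue pair would be
             * created and connection state exchanged with the peer. */
            connected[peer] = true;
            connections_made++;
            printf("established connection to peer %d\n", peer);
        }
    }

    static void send_to(int peer, int value)
    {
        ensure_connection(peer);
        printf("send %d to peer %d\n", value, peer);
    }

    int main(void)
    {
        /* A nearest-neighbor pattern touches only a few peers, so only those
         * connections are ever set up instead of all NUM_PEERS - 1. */
        send_to(1, 10);
        send_to(7, 20);
        send_to(1, 30);                  /* reuses the existing connection */
        printf("connections made: %d of %d possible\n",
               connections_made, NUM_PEERS - 1);
        return 0;
    }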

  • Analysis of message passing environments on large cluster performance
    Principal Investigators: D. K. Panda, P. Sadayappan, and P. Wyckoff
    Funding Source: Sandia National Labs
    Duration: 1/8/2001 - 2/28/2002
    Description: Analysis of reliability/scalability/performance tradeoffs in Myrinet. Understand and analyze the reliability mechanism in GM by listing all the different fault scenarios which GM handles, and use that to build state transition diagrams showing how GM handles the faults. Derive a cost model for the state transition diagram in terms of how much overhead (instruction count, interactions with the host) GM requires to handle recovery for each fault (a toy version of such a cost model is sketched after this entry). Analyze the impact of different types of faults and their frequencies on the overall performance. We can

    Original URL path: http://archive.osc.edu/research/network_file/projects/message/index.shtml (2013-06-13)
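    A toy version of the cost model described above might weight the recovery cost of each fault-handling path by how often that fault occurs. The fault types, costs, and frequencies in the following sketch are made up for illustration and are not measurements of Myrinet or GM.

    /* Hypothetical toy cost model over a fault-handling state transition diagram.
     * The fault types, per-fault recovery costs, and frequencies below are
     * made-up illustrations, not measurements of Myrinet/GM. */
    #include <stdio.h>

    enum fault { FAULT_CRC_ERROR, FAULT_TIMEOUT, FAULT_ROUTE_DEAD, NUM_FAULTS };

    static const char *fault_name[NUM_FAULTS] = {
        "CRC error", "ACK timeout", "dead route"
    };

    /* Assumed recovery cost per fault: instructions executed on the NIC,
     * plus the number of host interactions the recovery path needs. */
    static const double cost_instructions[NUM_FAULTS] = { 400, 2500, 12000 };
    static const double host_interactions[NUM_FAULTS] = { 0, 1, 3 };

    /* Assumed fault frequencies, in faults per million messages. */
    static const double faults_per_million[NUM_FAULTS] = { 50, 5, 0.1 };

    int main(void)
    {
        double total_instr = 0, total_host = 0;
        for (int f = 0; f < NUM_FAULTS; f++) {
            double rate = faults_per_million[f] / 1e6;   /* faults per message */
            total_instr += rate * cost_instructions[f];
            total_host  += rate * host_interactions[f];
            printf("%-12s: %.0f instr/fault, %.0f host interactions/fault\n",
                   fault_name[f], cost_instructions[f], host_interactions[f]);
        }
        printf("expected overhead per message: %.4f instructions, %.6f host interactions\n",
               total_instr, total_host);
        return 0;
    }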

  • Supporting MPI collective communication operations with application bypass
    Principal Investigators: D. K. Panda, P. Sadayappan, and P. Wyckoff
    Duration: 5/6/03 - 12/31/2003
    Description: For large-scale parallel systems supporting MPI, it is desirable that the MPI implementation ensures progress in order to achieve good performance and scalability [1]. Currently, collective communication operations in MPI are implemented by explicit send/recv calls by the processes. However, if a single node gets delayed (say, an intermediate node of a broadcast, reduction, or barrier operation), the whole operation gets delayed; the sketch after this entry illustrates this dependency for a broadcast tree built from explicit sends and receives. This leads to increased execution time for applications and limited scalability. Modern interconnects are supporting new communication mechanisms such as remote memory operations (RDMA Read and RDMA Write). Similarly, modern NICs are providing a programmable interface and memory to support collective communication operations with minimal interaction from processors [2, 3].

    Original URL path: http://archive.osc.edu/research/network_file/projects/mpi/index.shtml (2013-06-13)
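    The following hypothetical sketch (not MVAPICH or this project's code) implements a broadcast with explicit MPI_Send/MPI_Recv calls over a binomial tree, the style of collective described above. Each non-root process blocks in MPI_Recv until its parent forwards the value, so a delay at any intermediate node stalls its entire subtree.

    /* Hypothetical sketch: a broadcast built from explicit MPI_Send/MPI_Recv
     * calls over a binomial tree. Each process blocks in MPI_Recv until its
     * parent has forwarded the data, so a delayed intermediate node stalls
     * its whole subtree, which is the progress problem described above. */
    #include <mpi.h>
    #include <stdio.h>

    static void tree_bcast(int *value, int root, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        int rel = (rank - root + size) % size;   /* rank relative to the root */

        /* Walk up: a non-root process blocks here until its parent forwards
         * the value; any delay at the parent delays this whole subtree. */
        int mask = 1;
        while (mask < size) {
            if (rel & mask) {
                int parent = (rel - mask + root) % size;
                MPI_Recv(value, 1, MPI_INT, parent, 0, comm, MPI_STATUS_IGNORE);
                break;
            }
            mask <<= 1;
        }

        /* Walk down: forward the value to each child in the binomial tree. */
        mask >>= 1;
        while (mask > 0) {
            if (rel + mask < size) {
                int child = (rel + mask + root) % size;
                MPI_Send(value, 1, MPI_INT, child, 0, comm);
            }
            mask >>= 1;
        }
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, value = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            value = 42;                          /* payload broadcast from rank 0 */
        tree_bcast(&value, 0, MPI_COMM_WORLD);
        printf("rank %d received %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }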