
- RPI SCOREC - Multiscale Systems Engineering

the pervasive influence they will have on product and process development. Consider the development process for a cardiovascular stent or a tissue-engineered replacement artery. The designer's goal is to provide a product that restores vascular function, remains compatible with the rest of the vasculature, and minimizes sites for clotting or cell proliferation that would result in a failure of the therapeutic procedure. There currently are no effective modeling and simulation tools to support the optimum design of the artery repair or replacement for long-term performance by simultaneously considering the vascular system, local flow features, vessel wall response, and cell response. Today's tools do not allow the designer to model the interactions between the device, blood flow, and vessel wall on system and cellular levels. Furthermore, there are no methods to determine what nano-scale surface coatings might inhibit cell proliferation or clot formation.

Figure 2: Multiscale systems engineering to accelerate the solid-state lighting development roadmap.

Solid-state lighting provides another example of the important role that multiscale systems engineering will play in the development of future products. Currently, white light can be produced using blue LEDs through phosphor conversion, at a lumen-for-lumen cost that is about 100 times that of incandescent lamps. However, it is projected that in 20 years cost-effective white LEDs will be 16 times as efficient and will last 100 times longer (Figure 2), making them the main source of white light. Multiscale systems engineering techniques should be able to cut this predicted 20-year timeframe in half, as indicated at the bottom of Figure 2. Recent research shows it is possible to mix different-color LEDs to produce white light, which is more efficient because of the absence of Stokes losses. Also, improvements in efficiency can be achieved by eliminating dislocations in the LEDs. Current research to produce higher-quality blue LEDs is almost exclusively done by trial and error, varying the composition, substrate material, and growth geometry. Multiscale systems engineering will make it possible to much more quickly determine the optimal composition, substrate material, growth geometry, and wavelengths for optimum spectral power distribution.

Similar issues hold true when considering such diverse applications as arrays of microfluidic devices for investigating enzymes and metabolic pathways in drug discovery, or the design of heterogeneous materials that optimally satisfy a set of performance requirements. A recent project involving SCOREC faculty illustrates the importance of multiscale modeling on product and process design. These faculty members collaborated with an industrial sponsor to apply coupled multiscale mechanical and processing simulations, which revealed that the processing method being used caused local material buckling. Examination of unsatisfactory parts and experimental runs confirmed the predictions. Based on the knowledge gained from multiscale simulation, the processing procedures were successfully modified.

The challenge of multiscale systems lies in the development of design and optimization tools that fully span the spectrum of scales. This paradigm will carry physical principles across many scales and propagate functional characteristics to adjust parameters of the system. The

Original URL path: http://www.scorec.rpi.edu/research_multiscale.php (2015-07-15)

- RPI SCOREC - Computational Nanomechanics

provides a framework in which the constitutive laws defined at larger scales are based on atomic-scale processes. The National Science Foundation (NSF) funds the project.

Metamaterials: Mechanical behavior of polymer-based nanocomposites. Polymers filled with nanoparticles have mechanical, optical, dielectric, and transport properties which are significantly different from those of the equivalent system filled with micron-sized particles at the same volume fraction. The goal of this project is to investigate the physical origins of these improved properties and to provide guidelines for material processing optimization. The project combines experimental, theoretical, and simulation approaches. The Office of Naval Research funds the project.

Metallic alloys: Unsteady deformation of Al-Mg alloys. The objective of this research project is to develop a model to predict phenomena leading to poor formability in Al-Mg alloys based on the underlying deformation mechanisms. Magnesium is added to aluminum to improve strength properties, but is found to also have a detrimental effect on formability. This phenomenon is rooted in the interaction of solute atoms with dislocations, an interaction that leads to unsteady collective motion of dislocations within grains and across grain boundaries, which in turn results in unstable material flow. The work investigates the details of

Original URL path: http://www.scorec.rpi.edu/research_nanomechanical.php (2015-07-15)

- RPI SCOREC - Automated Modeling

partners on the development of simulation-based design systems for specific application areas. The figure below indicates the overall structure of SEED. Within the dotted box are the five key components needed for the effective integration of simulation into industrial design and manufacturing processes. The basic functions of these five components are:

- Simulation Model Management: Responsible for interactions between the product design, as defined in the product management system, and the simulation technologies. The structures and methods must support simulation processes to the lowest levels while reflecting simulation-dictated design changes to the highest level of the product representation.
- Simulation Data Management: The simulation procedures use various discretizations of the product domain and produce information defined at that level. Structures and methods are needed to define this information and properly relate it to the product representation.
- Simulation Model Generation Tools: Performs the geometric operations to construct models and the appropriate domain discretization for a simulation from the product definition.
- Adaptive Control Tools: Responsible for determining the appropriate mathematical models, selecting discretization technologies, evaluating the accuracy of the predictions, and determining the improvements needed to obtain the desired accuracy.
- Geometry-Based Simulation Engine: Responsible for executing the numerical aspects of the simulation procedures. It must effectively support adaptive procedures and the ability to link with existing CAE tools.

Automatic Generation, Modification, and Control of Meshes

SCOREC has been involved with the development of automatic mesh generation and modification technologies for a number of years. The tools that have been developed support the automatic creation and modification of meshes directly from non-manifold solid models. The tools make use of the flexible object-oriented structures representing the geometric model, attribute information associated with the model, and the mesh itself. (Figures: boundary layer meshes, crack propagation, coarse h-mesh, curved p-mesh.) The developed procedures have been used by a number of sponsors over the years and have formed the basis of commercial automatic mesh generation products. Most current SCOREC efforts in these areas are focused on particular capabilities that remain to be fully developed, including:

- Ensuring mesh refinement procedures are able to place node points on the curved boundary of the model domain. For more information, see the linked PowerPoint presentation.
- Generation and control of proper curved meshes for use when high-order discretization methods (e.g., p-version finite elements) are used to analyze a problem on curved domains. For more information, see the linked PowerPoint presentation.
- Performing intelligent mesh modification to account for the needs for mesh size changes (refinement and coarsening) and control of mesh entity shape in evolving-geometry problems. Efforts are underway to have these procedures operate to satisfy general anisotropic mesh metric fields. Since these procedures are often applied to simulations where solution fields must be transferred between meshes during mesh modification, a set of incremental solution transfer procedures is also being developed. For more information on the basic mesh modification procedures, see the linked PowerPoint presentation.

Interoperable Mesh and Discretization Technologies: SCOREC
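The incremental solution transfer mentioned above can be illustrated in one dimension: when elements are split or merged, nodal fields from the old mesh are re-evaluated at the new node locations. The sketch below uses simple piecewise-linear interpolation with hypothetical names; it is not the SCOREC procedure, which handles 3D meshes and higher-order fields.

```python
import bisect

def transfer_linear(old_nodes, old_values, new_nodes):
    """Transfer a piecewise-linear nodal field onto new node locations.

    old_nodes must be sorted. Each new node is located in its containing
    old element, and the value is linearly interpolated from that
    element's end nodes.
    """
    out = []
    for x in new_nodes:
        # Index of the old element [old_nodes[i-1], old_nodes[i]] containing x
        i = max(1, min(len(old_nodes) - 1, bisect.bisect_left(old_nodes, x)))
        x0, x1 = old_nodes[i - 1], old_nodes[i]
        v0, v1 = old_values[i - 1], old_values[i]
        t = (x - x0) / (x1 - x0)
        out.append(v0 + t * (v1 - v0))
    return out
```

For a uniformly refined mesh, `transfer_linear([0.0, 1.0, 2.0], [0.0, 2.0, 4.0], [0.0, 0.5, 1.0, 1.5, 2.0])` reproduces the linear field exactly at the new mid-edge nodes.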

Original URL path: http://www.scorec.rpi.edu/research_automated.php (2015-07-15)

- RPI SCOREC - Adaptive Methods for Partial Differential Equations

new mesh must be designed such that the resulting discretization is an efficient one; that is, the discretization produces the least amount of error with the fewest degrees of freedom. In the case of h-version adaptivity, the determination of the appropriate element sizes in the new discretization is based on the premise that the most efficient mesh for a given problem is the one that equally distributes the error among the elements. The theory utilizes information about the existing discretization geometry, element-level (i.e., local) and global error data evaluated in the energy norm, and a prescribed value specifying the amount of error which is acceptable to the user for the given problem.

Unfortunately, the theory of basing new element sizes on local errors evaluated in norms comprised of integrals over a given element and its boundary contains an inherent limitation. Specifically, this theory utilizes a single scalar measure of the approximate error in an element to calculate a single scalar value for the appropriate size of the elements which will replace the given element in the new discretization. Without further embellishment, this provides information for the distribution of elements within the new discretization only in terms of discrete patches. This theory gives no direct information regarding a continuous distribution of new element sizes if the new sizes are significantly smaller than the existing elements. Note that, in general, a continuous distribution of new element sizes in the problem domain necessitates a distribution of new element sizes within the elements existing in the current discretization, and that this information cannot be supplied by a single scalar value per element.

It is possible to use model topology, mesh topology, and element-level error information to infer a continuous distribution of new element sizes within the elements comprising the existing discretization. A continuous distribution of element sizes is designed by first determining appropriate new element sizes at specific locations in the problem domain. Interpolation functions of suitable continuity are then defined to describe the variation of element sizes throughout the problem domain. The topology of the geometric model is then employed to tailor the discretization for specific problem classes.

Bracket example (click on the picture to enlarge): bracket geometry, loads, and boundary conditions; initial mesh; adaptively refined mesh; detail of corner area with overlay of original mesh.

Adaptive analysis environments: h, hp, and hpr/s techniques.

Mesh enrichment: The application of adaptive finite element or finite volume techniques on unstructured 3D meshes requires the ability to locally alter the size of the elements as dictated by the error indication procedures. One approach to obtain the desired distribution of element sizes is to regenerate the entire mesh using an automatic mesh generator controlled by the given mesh size distributions over the domain. This approach is computationally expensive and introduces the complexities of mapping solution fields from one mesh to another. An alternative approach to obtaining the desired distribution of elements is to locally refine and/or coarsen the
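The error-equidistribution premise described above can be sketched in a common textbook form: if the local error behaves like h^p, scaling each element size by the ratio of its target error to its estimated error, raised to the power 1/p, drives all elements toward an equal share of the allowable error. The function below is an illustration under those assumptions, not the exact SCOREC size-prediction theory.

```python
import math

def new_element_size(h_old, eta_elem, eta_allow, n_elem, p=1):
    """Predict a new element size from error equidistribution.

    h_old:     current element size
    eta_elem:  estimated error contribution of this element (energy norm)
    eta_allow: total error acceptable to the user for the whole problem
    n_elem:    number of elements over which error is equidistributed
    p:         assumed convergence rate of the local error with h
    """
    # Equidistribution: each element should carry eta_allow / sqrt(n_elem),
    # since element error contributions combine in a root-sum-square sense.
    eta_target = eta_allow / math.sqrt(n_elem)
    return h_old * (eta_target / eta_elem) ** (1.0 / p)
```

An element whose estimated error already equals its target keeps its size; one carrying twice the target error is halved (for p = 1).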

Original URL path: http://www.scorec.rpi.edu/research_adaptive.php (2015-07-15)

- RPI SCOREC - Parallel Scientific Computation

support the dynamic re-partitioning of a mesh, parallel generation of the mesh, and linkage to discrete models. The constantly evolving nature of the computations in automated adaptive analysis requires the ability to dynamically re-partition the underlying structures to maintain load balance. Alternative algorithms for this process have been developed and found to have specific advantages in specific situations. The entity weightings have been used with the dynamic re-partitioning procedures to properly account for computational cost per entity in an adaptive time-marching algorithm, and in a predictive load balancing procedure where the weights were based on refinement level and the mesh was re-balanced before the refinement was performed.

The procedures to perform automated adaptive analysis have been built on the base parallel structures and techniques. The specific components considered include parallel automatic mesh generation, parallel mesh adaptation, parallel solvers, and complete parallel solution procedures. Our most recent efforts on the development of these structures and methods have focused on:

- Definition and implementation of a configurable mesh database that operates in serial and parallel.
- Implementation of effective parallel control mechanisms building on RPM.
- Linkage with a generalized object-oriented dynamic load balancing capability that can be used to maintain load balance for adaptive multiscale computations.

The configurable mesh data structure we are developing is the recently proposed Parallel Algorithm Oriented Mesh Data structure (PAOMD), in which the application can dictate the adjacencies for AOMD to construct at run time. PAOMD is available as open source from SCOREC. Given a set of minimal entities and adjacencies, AOMD can construct any required adjacency. Current extensions allow AOMD to support conforming and non-conforming grids defined by the general subdivision of an initially conforming mesh, and octree-based spatial discretizations. The Rensselaer partition model (RPM) has a hierarchical structure with a
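The predictive load balancing idea above can be sketched as follows: before refining, each element marked for refinement is assigned a weight reflecting how many elements it will become, and the partitions are compared on predicted rather than current load. The names, the weighting rule, and the factor of 8 children per 3D split are illustrative assumptions, not the SCOREC implementation.

```python
def predicted_part_loads(parts, children=8):
    """Predicted per-partition element counts after refinement.

    parts: list of partitions; each partition is a list of per-element
           refinement levels (0 = element will not be refined).
    An element refined L times is assumed to become children**L elements.
    """
    return [sum(children ** lvl for lvl in part) for part in parts]

def imbalance(loads):
    """Ratio of the heaviest partition to the average load; 1.0 is perfect."""
    return max(loads) / (sum(loads) / len(loads))
```

With two 4-element partitions where only the second has elements marked for refinement, the predicted loads diverge sharply, signaling that re-balancing should happen before the refinement is performed.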

Original URL path: http://www.scorec.rpi.edu/research_parallel.php (2015-07-15)

- RPI SCOREC - Computational Fluid Dynamics

levels of modeling have been developed with compressible and incompressible flow solvers. Large eddy simulations underway include a study of a high-lift airfoil where 10 million nonlinear equations were solved at each time step for 150 thousand time steps, making this one of the largest fully unstructured grid simulations undertaken to date (1.5 trillion nonlinear equations solved). While it will be some time before design engineers can bring this level of computational power to bear on these problems, it is expected that these simulations can provide information that will improve the existing RANS models in use today. Other simulations include flow over a notch, flow within an IC engine, flow near the exit of a jet engine, and more basic, well-understood flows such as decay of isotropic turbulence and channel flow. The above simulations were the first applications of the dynamic Smagorinsky model to unstructured grid calculations.

To enable these models to be applied to real-world flows, our research is also focused on improvements to the numerical methods used to solve the various forms of the Navier-Stokes equations. In particular, research is underway in parallel adaptive approaches that utilize varying grid size and varying-order interpolation functions to improve the approximation of the numerical method (a stabilized finite element method). Further research is underway in the development of the error estimators and error indicators that are necessary to drive this h-p adaptivity. This aspect of the problem is particularly difficult for unsteady flows, where previously proposed error indicators may suggest adaptivity too often or too late (i.e., fine-scale structures are constantly moving). We have recently proposed an applied-statistics-based error adaptivity that seems to hold great promise for unsteady flows. We are also undertaking a significant effort to develop new models for large eddy

Original URL path: http://www.scorec.rpi.edu/research_computational.php (2015-07-15)

- RPI SCOREC - Virtual Nanofabrication

devices.

Strain-induced defect production in strained semiconductors (Principal Investigator: Catalin Picu). By inducing strain in silicon and other semiconductors, carrier mobility can be increased. However, such strains can give rise to dislocations and other defects. We are able to characterize how and where such systems can produce device-killing defects. Figure 1: (left) Continuum FEM models are employed on larger scales, automatically creating (right) atomistic sub-problems for detailed analysis in areas of high strain.

Super-resolution holographic lithography (Principal Investigator: Assad Oberai). By using electromagnetic interference patterns rather than projective techniques, lithographic resolution can be more than doubled for a given wavelength. However, the biasing corrections made for OPC no longer apply, and new techniques are needed to design and analyze holographic masks. We use adjoint solutions to Maxwell's equations to accelerate the inner loop of a mask optimizer, allowing us to quickly create highly effective holographic masks giving near-optimal dose contrast. Figure 2: (top) Schematic of the construction process of a holographic mask; (bottom) dose seen by the photoresist as holographic mask parameters are optimized, starting at left and proceeding to the optimal mask at right.

Plasma etch modeling (Principal Investigator: Max Bloomfield). Reactive ion etching (RIE) is the industry-standard method for transferring patterns to the wafer. We use physically based models of transport and reaction to model the etching process on the sub-micron feature scale, driving these simulations from equipment-scale simulations. These multiscale analyses cross six orders of magnitude in size scale and seven orders of magnitude in time scale. Figure 3: Schematic of ion and neutral species arriving from a plasma, reaching the substrate through an opening in the photomask. The ions move directionally in response to the electric field, and the neutral species move with the normal Maxwellian distribution. Figure 4: An L

Original URL path: http://www.scorec.rpi.edu/research_virtualnanofab.php (2015-07-15)

- RPI SCOREC - Plasma Etch Modeling

response to the electric field, and the neutral species move with the normal Maxwellian distribution. To model such a system, we approach the problem on the scale of a single feature being etched. On this scale, which is significantly smaller than the mean free path of any of the gas or plasma species, all species can be considered to move in straight lines, interacting only with the surface. This ballistic approximation allows us to compute the fluxes of species arriving at any point on the surface, both from the source plasma and from emissions by other points on the surface. If we know, or can guess, how reactions at the surface depend on incoming fluxes, we then have enough information to compute the distribution of fluxes and the resulting etch rates everywhere on the surface.

DQR: A feature-scale code

Feature-scale simulations that use the above ballistic transport and reaction model (BTRM) [1] must solve a set of integral equations representing the integrated flow of species to and from control volumes along the evolving surface. As with many problems with integral equations at their centers, solving this problem involves a large amount of bookkeeping. To keep track of all of the reaction, transport, and discretization tasks involved, we have created a trio of codes, DQR:

- D: a level-set-based 3D surface discretization and evolution code
- Q: a deterministic 3D fractional transport computation code
- R: a pseudosteady reaction state computation code

This trio of codes works together to represent and track a model of the evolving system as it is etched, accounting for transport between the source and the surface, and between different parts of the surface, and for the complex resulting pseudo-equilibrium. DQR is part of the CCNI Commons, a collection of intellectual property developed in conjunction with the CCNI facilities that allows sponsors and developers to appropriately share in the rights to research advancements.

Q computes the view factor matrix Q, the set of differential transmission probabilities between pairs of oriented triangular elements in a surface mesh, accounting for shadowing and occlusion by the rest of the surface. This view factor contains all the information about the geometry of the surface required to compute the transport of material throughout the structure. Figure 2: The view factor q_er answers the question: what fraction of material leaving surface e arrives at surface r? Q is able to do this quickly, using an octree-based preprocessor to efficiently find potential occlusions while ignoring elements away from the line of sight. The figure below shows an example of the complicated path that a ray might take in a space with reflective boundary conditions; as the ray proceeds through the space, the cells in the octree hashing table that it passes through are shown. Figure 3: An example of the complicated path a ray leaving a triangle on the surface can take through a reflective space before it collides with another surface element. The cells of
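The integral-equation structure described above can be illustrated with a toy discretized balance: the flux arriving at each surface element is its direct flux from the source plasma plus re-emitted flux from every other element, weighted by the view factors. The sketch below solves that balance by fixed-point iteration; the names and the single re-emission fraction are illustrative assumptions, not the DQR implementation.

```python
def solve_fluxes(direct, view, reemit, tol=1e-10, max_iter=1000):
    """Fixed-point solve of f_i = direct_i + reemit * sum_j view[i][j] * f_j.

    direct: flux reaching each element straight from the source plasma
    view:   view[i][j] = fraction of material leaving element j arriving at i
    reemit: fraction of incoming flux an element re-emits (1 - sticking)
    """
    n = len(direct)
    f = list(direct)
    for _ in range(max_iter):
        f_new = [direct[i] + reemit * sum(view[i][j] * f[j] for j in range(n))
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(f_new, f)) < tol:
            return f_new
        f = f_new
    return f
```

For two fully facing elements where only the first receives direct flux (`direct = [1, 0]`, `view = [[0, 1], [1, 0]]`, `reemit = 0.5`), the balance f0 = 1 + 0.5 f1, f1 = 0.5 f0 has the solution f0 = 4/3, f1 = 2/3, which the iteration recovers.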

Original URL path: http://www.scorec.rpi.edu/research_plasmaetchmodeling.php (2015-07-15)
