A High-Performance Framework for Sun-to-Earth Space Weather Modeling

Ovsei Volberg, Tech-X Corporation, Boulder, CO 80303, volov@txcorp.com
Gabor Toth, The University of Michigan, Ann Arbor, MI 48109-2143, gtoth@umich.edu
Tamas I. Gombosi, The University of Michigan, Ann Arbor, MI 48109-2143, tamas@umich.edu
Quentin F. Stout, The University of Michigan, Ann Arbor, MI 48109-2143, qsout@eecs.umich.edu
Kenneth G. Powell, The University of Michigan, Ann Arbor, MI 48109-2143, powell@umich.edu
Darren De Zeeuw, The University of Michigan, Ann Arbor, MI 48109-2143, darrens@umich.edu
Aaron J. Ridley, The University of Michigan, Ann Arbor, MI 48109-2143, ridley@umich.edu
Kevin Kane, The University of Michigan, Ann Arbor, MI 48109-2143, kanekt@umich.edu
Kenneth C. Hansen, The University of Michigan, Ann Arbor, MI 48109-2143, kenhan@umich.edu
David R. Chesney, The University of Michigan, Ann Arbor, MI 48109-2143, chesneyd@eecs.umich.edu
Robert Oehmke, The University of Michigan, Ann Arbor, MI 48109-2143, oehmke@engin.umich.edu

Abstract

The Space Weather Modeling Framework (SWMF) aims at providing a software architecture for the integrated modeling of the different domains of the Sun-Earth system and for high-performance physics-based space weather simulation. The SWMF component architecture promotes collaboration between developers of individual physics models and empowers them by providing a coupling context and parallel and distributed computing support. The framework design places minimal requirements on components. A web-based Graphical User Interface facilitates remote access to the framework, even for scientific groups that do not otherwise have access to supercomputers or clusters.

1. Introduction

The Sun-Earth system is a complex natural system of many different interconnecting elements. The solar wind transfers significant mass, momentum, and energy to the magnetosphere, ionosphere, and upper atmosphere, and dramatically affects the physical processes in each of these physical domains.
The various domains of the Sun-Earth system can be simulated with stand-alone models if simplifying assumptions are made about the interaction of the particular domain with the rest of the system. These models can be combined into a complex system of mutually interacting models, each associated with one or more physics domains, to address a wider range of physical phenomena. For the prediction of extreme space weather events these models must be run and coupled in an efficient manner, so that the simulation can run faster than real time. The ability to simulate and predict space weather phenomena is important for many applications [1], for instance the success of spacecraft missions and the reliability of satellite communication equipment. In extreme cases, magnetic storms may have significant effects on the power grids used by millions of households.

Traditionally, researchers developed monolithic space physics applications, which were able to model several domains of the Sun-Earth system, but it was rather difficult to select an arbitrary subset of the various models, to replace one model with another, to change the coupling schedules of the interacting models, and to run these models concurrently on parallel computers. As an illustrative example of modeling multiple domains of the Sun-Earth system with a monolithic numerical code, we describe the evolution of the space plasma simulation program BATSRUS developed at the University of Michigan. Originally BATSRUS was designed as a very efficient, massively parallel MHD code for space physics applications [2, 3]. It is based on a block-adaptive Cartesian grid with block-based domain decomposition, and it employs the Message Passing Interface (MPI) standard for parallel execution. Later the code was coupled to an ionosphere model [4], various upper atmosphere models [5], and the inner magnetosphere model [6]. These couplings were done in a highly integrated manner, resulting in a monolithic code with the limitations mentioned above.
Thus, although BATSRUS is successfully used for the global MHD simulation of space weather [7], monolithic programs like it are not able to support the community effort to share models and to combine these models into flexible, extensible, and efficient high-performance applications.

The Center for Space Environment Modeling at the University of Michigan and its collaborators are building the Space Weather Modeling Framework (SWMF) to provide NASA and the modeling community with a high-performance computational tool with "plug-and-play" capabilities to model the physics from the surface of the Sun to the upper atmosphere of the Earth. The SWMF is designed to couple the models of the various physics domains in a flexible yet efficient manner, which makes the prediction of space weather feasible on massively parallel computers. Each model has its own dependent variables, a mathematical model in the form of evolution equations, and a numerical scheme with an appropriate grid structure and temporal discretization. The physics domains may overlap each other, or they may interact through a boundary surface. The SWMF should be able to incorporate models from the community and couple them with minimal changes in the software of an individual model.

The SWMF architecture is based on two contemporary ideas: the framework-based approach combined with the component-based approach to software development (CBSD) [8]. Gamma et al. [9] define a framework as a set of cooperating classes that make up a reusable design for a specific class of software. The framework captures the design decisions that are common for its application domain: it emphasizes design reuse over module reuse. The main contribution of the framework is the architecture it defines. From a user's perspective, using a framework means an inversion of control in comparison to the traditional use of a library or a toolkit: one reuses the main body and some kernel, and writes or adds the modules that the main body calls.
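This inversion of control can be made concrete with a minimal sketch. The snippet below is illustrative Python, not the SWMF's Fortran 90, and the class and method names are hypothetical; it only shows the pattern in which the framework's main body drives user-supplied modules rather than the other way around.

```python
# Illustrative sketch of inversion of control: the framework owns the main
# loop and calls into registered user modules. All names are hypothetical.

class Framework:
    def __init__(self):
        self._components = []

    def register(self, component):
        self._components.append(component)

    def run(self, n_steps):
        # The framework's "main body" calls the user code, not vice versa.
        for comp in self._components:
            comp.init()
        for step in range(n_steps):
            for comp in self._components:
                comp.run_step(step)
        for comp in self._components:
            comp.finalize()

class MyModel:
    """A user-written module; the framework decides when its hooks run."""
    def __init__(self):
        self.steps_run = 0
    def init(self): ...
    def run_step(self, step): self.steps_run += 1
    def finalize(self): ...

fw = Framework()
model = MyModel()
fw.register(model)
fw.run(3)
```

With a traditional library the user's main program would call library routines; here the user only supplies the hooks (`init`, `run_step`, `finalize`) and the framework decides when they execute.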
The CBSD approach, i.e., the composition of systems from existing or newly developed components, was chosen to address the heterogeneous nature of the computational models of the Sun-Earth system and to satisfy the design goals. CBSD promises higher quality, lower cost, and shorter development time. At the same time, CBSD is a paradigm change in the way the Sun-Earth system is modeled.

In this paper we present the design and implementation of the first working prototype of the SWMF.

2. Architecture overview

The SWMF aims at providing a flexible and extensible software architecture for multi-component physics-based space weather simulations, as well as for various space physics applications [10, 11]. One of the most important features of the SWMF is that it can incorporate different computational physics modules to model different domains of the Sun-Earth system. Each module for a particular domain can be replaced with alternatives, and one can build an application instance with only a subset of the modules if desired.

The first working prototype of the SWMF included components for five physics domains: Global Magnetosphere (GM), Inner Heliosphere (IH), Ionosphere Electrodynamics (IE), Upper Atmosphere (UA), and Inner Magnetosphere (IM):

•   The GM and IH components are based on the University of Michigan's BATSRUS MHD module. The highly parallel BATSRUS code uses a 3D block-adaptive Cartesian grid.
•   The IM component is the Rice Convection Model (RCM) developed at Rice University. This serial module uses a 2D non-uniform spherical grid.
•   The IE component is a 2D spherical electric potential solver developed at the University of Michigan. It can run on 1 or 2 processors, since the northern and southern hemispheres can be solved in parallel.
•   The UA component is the Global Ionosphere Thermosphere Model (GITM), recently developed at the University of Michigan as a fully parallel 3D spherical model.

Two additional components for new physics domains were added to the SWMF for its second release: the Radiation Belt model developed at Rice University by A. Chan, D. Wolf, and Bin Yu, and the Solar Energetic Particle model developed at the University of Arizona by J. Kota. With the addition of these two components the SWMF will be able to simulate space effects of great importance for the radiation protection of spacecraft equipment and for the protection of the lives and health of astronauts.

The main SWMF design goals were defined in [12] as: (1) incorporate computational physics modules with only modest modification, (2) achieve good parallel performance in the coupling of the physics components, and (3) allow physics components to interact with the SWMF as efficiently as possible.

The efficient coupling of an arbitrary pair of parallel applications, each having its own grid structure and data decomposition, is not easily achieved. There are several a priori known problems that need to be solved so that the heterogeneous computational models of the different domains of the Sun-Earth system can properly inter-operate. An incomplete list of these problems is:

1.   There are serial and parallel models.
2.   An individual model is usually developed for stand-alone execution.
3.   Input/output operations do not take into account potential conflicts with other models.
4.   Models often do not have checkpoint and restart capabilities.
5.   The majority of models are written in non-object-oriented languages (e.g., Fortran 77 and Fortran 90).

We employed the framework-based approach combined with the CBSD approach to develop a multi-purpose, easy-to-use software architecture, which facilitates efficient communication between exchangeable numerical physics models describing the Sun-Earth system.
CBSD emphasizes the principle of separation between a component implementation and its interfaces. Major characteristics of CBSD that are important for the SWMF include: (1) components are the atomic software units of modeling, design, and implementation; (2) components interact through interfaces, which are at the center of the architectural design, so loose coupling is essential; (3) an interface is separated from the component implementation; and (4) an interface can be implemented by multiple components.

There are several potential solutions that provide the necessary interoperability mechanism between parallel modules [13]. The most promising approach is to define a standard set of interface functions that every physics component must provide. In the SWMF a component is created from a physics module, for example BATSRUS, by making some minimal required changes in the module and by adding two relatively small units of code: (1) a wrapper, which provides the standard interface to control the physics module; and (2) a coupling interface, to perform the data exchange with other components.

Each application instance created by the SWMF contains a single Control Module (CON), which controls the initialization and execution of the components and is responsible for component registration, the processor layout for each component, and coupling schedules.

From a component software technology perspective, both the wrapper and the coupling interface are component interfaces: the wrapper is an interface with CON, and the coupling interface is an interface with another component. As shown in Figure 1, the wrapper interface functions have standard names, which makes swapping between various versions of a component possible. Both the wrapper and the coupling interface are constructed from the building blocks provided by the framework. The structure of a component and its interaction with CON and another component are illustrated in Figure 1.

Figure 1. Integration of physics modules into the SWMF architecture

Requirements play a pivotal role in the initial construction of a component-based software system. The SWMF is not an exception: each of the SWMF design goals is reflected in the interoperability policy [12]. Specifically, the requirements necessary to ensure interoperability are formulated as the SWMF compliance definition, which has a dual character: a compliance definition for physics modules and a compliance definition for components. The physics modules must comply with a minimum set of requirements before they are transformed into a component [12]:

1.   The parallelization mechanism (if any) should employ the MPI standard.
2.   The module needs to be compiled as a library that can be linked to an executable.
3.   The module should be portable to a specific combination of platforms and compilers, which include Linux workstations and NASA supercomputers [12]. The stand-alone module must successfully run a model test suite, provided by the model developer, on all the required platform/compiler combinations.
4.   The module should read input from and write output to files in a subdirectory unique to the component.
5.   A module should be implemented in Fortran 77 and/or Fortran 90.
6.   A module should be supplied with appropriate documentation.

The SWMF requirements for a component are defined in terms of a set of methods to be implemented in the component wrapper shown in Figure 1. The methods enable the component to perform the following tasks:

1.   Component registration and parallel setup;
2.   Input and verification of the component parameters;
3.   Provide a grid description to CON;
4.   Initialization for session execution;
5.   One time step execution (which cannot exceed a specified simulation time);
6.   Data exchange with another component via calls to an appropriate coupler;
7.   Component state recording into a restart file on request;
8.   
Finalize the component state at the end of the execution.

The registration provides information about the component to CON, such as the name and the version of the component. In return, CON assigns an MPI communication group to the component based on the processor layout defined in the LAYOUT.in file. An example of this file is shown in Figure 2. The first column identifies the components by their abbreviated names, while the remaining columns define the first and last processor element (PE) ranks and the PE stride. In the example shown in Figure 2, the GM component runs on every even processor, the IE component runs on the first two processors, the IH component runs on every odd processor, the IM component runs on processor 11, and the UA component runs on 4 processors, from rank 12 to rank 15. Only registered components can be used in the run.

    Name first last stride
    #COMPONENTMAP
    GM   0  999  2
    IE   0    1  1
    IH   1  999  2
    IM  11   11  1
    UA  12   15  1
    #END

Figure 2. An example of the LAYOUT.in file

The execution is completed in sessions. In each session the parameters of the framework and the components can be changed. The parameters are read from the PARAM.in file, which may contain further included parameter files. These parameters are read and broadcast by CON, and the component-specific parameters are sent to the components for reading and checking. The CON-related parameters define the initial time, coupling schedules, the frequency of saving restart files, the final simulation time of the session, and other information that is not restricted to a single component. At the beginning of each session the components are initialized, and the interacting components are coupled together for the first time. The SWMF application can operate in two different session execution regimes.

The sequential execution regime is backward compatible with BATSRUS. In this regime the components are synchronized at every time step, so typically only one component is executing (possibly on many processors) at any given time.
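The layout description above can be sketched in code. The following Python fragment (illustrative only; the SWMF itself reads this file in Fortran, and details such as clipping the last rank to the actual communicator size are assumptions) expands a #COMPONENTMAP block into explicit PE rank lists:

```python
# Sketch (assumed details) of expanding a LAYOUT.in #COMPONENTMAP block.
# Each line gives a component name and the first rank, last rank, and
# stride of its processor elements; ranks beyond the actual number of
# processors are clipped, which is how "GM 0 999 2" can mean "every even
# processor" on a 16-processor run.

def parse_layout(text, n_procs):
    ranks = {}
    in_map = False
    for line in text.splitlines():
        line = line.strip()
        if line == "#COMPONENTMAP":
            in_map = True
            continue
        if line == "#END":
            break
        if in_map and line:
            name, first, last, stride = line.split()
            first, last, stride = int(first), int(last), int(stride)
            last = min(last, n_procs - 1)   # clip to available PEs
            ranks[name] = list(range(first, last + 1, stride))
    return ranks

layout = """#COMPONENTMAP
GM  0 999 2
IE  0   1 1
IH  1 999 2
IM 11  11 1
UA 12  15 1
#END"""

pes = parse_layout(layout, n_procs=16)
# With 16 processors, GM lands on every even rank and IH on every odd one,
# so the two components can execute concurrently on disjoint PE sets.
```

Note how the GM/IH stride-2 interleaving and the dedicated IM and UA ranks partition the machine, which is what makes the concurrent execution regime described below possible.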
The coupling patterns and schedules are mostly predetermined.

In the concurrent execution regime, the components communicate only when necessary. This is possible because the coupling times are known in advance. The components advance to the coupling time, and only the processors involved in the coupling need to communicate with each other. In this execution model all components are 'equal': any physically meaningful subset can be selected for the run, and their coupling schedules are determined by the parameters given in the PARAM.in file. The possibility of deadlocks is carefully avoided.

Based on parameters specified in the PARAM.in file, CON may instruct the components to save their current state into restart files periodically. This makes possible the seamless continuation of a run from a given point of the simulation. Checkpoint restart is an essential feature of a robust, user-friendly, and fault-tolerant software design. At the end of the last session each component finalizes its state. This involves writing out final plot files, closing log files, and printing performance and error reports. After the components have finalized, CON also finalizes and stops the execution.

The framework building blocks are implemented by emulating Object-Oriented Programming concepts in Fortran 90 [14], mostly as singleton classes [9], i.e., each of them has exactly one instance and provides a global point of access to it.

The coupling of the components is realized either with plain MPI calls, which are specifically designed for each pair of interacting components, or via the general SWMF coupling toolkit. The toolkit can couple components based on the following types of distributed grids:

•   2D or 3D block-adaptive grids
•   2D or 3D structured grids

Structured grids include uniform and non-uniform spherical and Cartesian grids. The toolkit obtains the grid descriptors from the components at the beginning of the run.
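The essence of grid-to-grid coupling via the toolkit is interpolating provider-grid data to locations requested by the receiver, with the interpolation pattern computed once and reused. The sketch below is a deliberately simplified 1D Python illustration of that idea (the real toolkit handles 2D/3D distributed grids and MPI communication patterns; the `Router` class and its methods are hypothetical names):

```python
# Simplified sketch of the 'router' idea: interpolation weights from the
# provider grid to the requested receiver locations are computed once and
# reused at every coupling, being rebuilt only if a grid or the mapping
# between the grids changes. 1D linear interpolation, serial, illustrative.

import bisect

class Router:
    def __init__(self, provider_x, receiver_x):
        # Precompute, for each receiver point, the bracketing provider
        # indices and the linear interpolation weight.
        self.entries = []
        for x in receiver_x:
            i = bisect.bisect_right(provider_x, x) - 1
            i = max(0, min(i, len(provider_x) - 2))
            w = (x - provider_x[i]) / (provider_x[i + 1] - provider_x[i])
            self.entries.append((i, w))

    def route(self, provider_values):
        # Apply the stored pattern; no geometric search is repeated here.
        return [(1 - w) * provider_values[i] + w * provider_values[i + 1]
                for i, w in self.entries]

grid = [0.0, 1.0, 2.0, 3.0]          # provider grid coordinates
router = Router(grid, receiver_x=[0.5, 2.25])
values = [10.0, 20.0, 30.0, 40.0]    # provider data at the grid points
result = router.route(values)
```

Amortizing the geometric search and weight computation across many couplings is the design choice that keeps repeated data exchange cheap; only a grid adaptation or a relative rotation of the grids forces the router to be rebuilt.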
The grid descriptor defines the geometry and parallel distribution of the grid. At the time of coupling, the receiving component requests a number of data values at specified locations of the receiving grid (for example, all grid points at one of the boundaries). The geometric locations are transformed, and sometimes mapped, to the grid of the provider component. Based on the grid descriptor of the provider component, the data values are interpolated to the requested locations and sent back to the requesting component. The interpolation weights and the MPI communication patterns are calculated in advance and saved into a 'router' for the sake of efficiency. The routers are updated only if one of the grids has changed (e.g., due to grid adaptation) or when the mapping between the two components has changed (e.g., due to the rotation of one grid relative to the other). In certain cases the coupling is achieved via an intermediate grid, which is stored by the receiving component but whose structure is based on the providing component. The intermediate grid can be described to CON in the same way as the base grid of the receiving component.

The framework's layered architecture is shown in Figure 3. The Framework Services consist of software units (classes) that implement component registration, session control, and the input/output of initial parameters. The Infrastructure consists of utilities that define physical constants and different coordinate systems, time and data conversion routines, time profiling routines, and other lower-level routines. The Superstructure Layer, Physics Module Layer, and Infrastructure Layer constitute a "sandwich-like" architecture similar to that of the Earth System Modeling Framework (ESMF) [15].

Figure 3. The layered architecture of the SWMF

The SWMF will also contain a web-based Graphical User Interface, which is not part of the ESMF design.

3. Some preliminary results

We present preliminary simulations, which involve all five components of the SWMF prototype. Two