Additive Schwarz-based Fully Coupled Implicit Methods for Resistive Hall Magnetohydrodynamic Problems - S. Ovtchinnikov, F. Dobrian, X.-C. Cai, and D. E. Keyes http://www.cs.colorado.edu/%7Ecai/papers/odck.pdf

This paper came out of Serguei Ovtchinnikov's dissertation work, and reading it is part of an independent study I am doing with my advisor, Xiao-Chuan Cai. The general area is the solution of nonlinear time-dependent problems in mathematical physics, performed in massively parallel computing environments. The specific focus is on modeling magnetic reconnection in plasma using iterative methods with preconditioning. These techniques are common in modeling and simulation applications, and the problem domain for this paper is plasma physics.

Plasma flow is difficult to model accurately because while it obeys the Navier-Stokes equations for fluid flow, it also has electromagnetic properties governed by Maxwell's equations. Because of this coupling, a phenomenon called magnetic reconnection occurs, in which the magnetic field lines in the plasma realign rather suddenly, converting magnetic potential energy into kinetic energy and heat. Reconnection plays a role in solar flares as well as in the auroras seen in the polar regions of Earth and other planets (such as Saturn). It is also one of the complicating factors in attempting to magnetically confine a fusion reaction, and so hampers efforts to build such a nuclear reactor.

Sparse linear systems (systems whose matrices have very few nonzero entries) are fairly common in applications such as solving partial differential equations numerically. Sparse matrices, however, usually have a structure that can be exploited to solve the problem more accurately or quickly than brute-force methods allow. These matrices can have millions of entries, almost all of which are zero and contribute nothing to the solution.
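The scale argument can be made concrete with a short sketch. Everything here is illustrative rather than from the paper: I assume Python with NumPy/SciPy, and use a 1-D Poisson (second-difference) matrix as a tiny stand-in for the much larger MHD discretizations. It shows how little of such a matrix is actually nonzero, and how the condition number worsens as the mesh is refined.

```python
import numpy as np
import scipy.sparse as sp

def poisson1d(n):
    # Tridiagonal second-difference matrix: ~3n nonzeros out of n*n entries.
    return sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                    shape=(n, n), format="csr")

A = poisson1d(100_000)
print(A.nnz)                    # 299998 stored entries
print(A.nnz / (100_000 ** 2))   # density ~3e-5: dense storage would be pure waste

# Refining the mesh worsens conditioning: for this operator the condition
# number grows roughly like n^2, so doubling the resolution roughly
# quadruples it.
ratio = (np.linalg.cond(poisson1d(100).toarray())
         / np.linalg.cond(poisson1d(50).toarray()))
print(ratio)   # close to 4
```

This growth is exactly why the mesh-refinement/conditioning tradeoff discussed below matters at scale.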
Iterative methods are a very powerful tool for solving these kinds of systems because they take advantage of the sparse structure of the problem. In this case the problem is actually nonlinear, but by applying certain discretization techniques we can reduce it to the time-stepped solution of individual linear systems. A major issue in solving these types of systems is that as the mesh over the domain becomes finer (i.e., the number of points used to approximate the solution increases), the condition number of the system becomes worse. The condition number is a measure of how numerically well-posed the problem is; a large condition number implies that the accuracy of the resulting solution will be strongly affected by the rounding errors inherent in solving the system on a finite-precision machine (just about every computer made). Preconditioning attempts (among other things) to alleviate this concern somewhat, by solving a slightly modified system that has a better condition number but is close to the original problem in some sense. Preconditioners can take many forms, some of which rest heavily on an understanding of the underlying problem being solved. A portion of the paper is concerned with choosing an appropriate preconditioner; in this particular case a right-sided restricted additive Schwarz preconditioner with LU factorizations on the local subdomains is used. The specifics are difficult to explain in the scope of this review, but the basic idea is that this type of preconditioner (restricted additive Schwarz) is very well suited to solving systems in parallel, and a right-sided preconditioner is an advantage when multiple solution steps are to be performed; here, a linear system must be solved at each timestep. One of the design choices in the technique was to use GMRES (the Generalized Minimal Residual method) as the solver for the preconditioned system.
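To make the preconditioning idea concrete, here is a minimal single-process sketch of a one-level restricted additive Schwarz (RAS) preconditioner driving GMRES. This is not the paper's implementation: the paper runs in parallel through PETSc on multi-dimensional meshes, whereas this sketch uses SciPy, a toy nonsymmetric tridiagonal matrix, a simple 1-D index partition, and SciPy's gmres (which does not expose the left/right-preconditioning choice the paper makes). The function name ras_preconditioner and all of its parameters are made up for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ras_preconditioner(A, n_sub=4, overlap=5):
    # One-level restricted additive Schwarz: LU solves on overlapping
    # 1-D index blocks, each block contributing only its owned part.
    n = A.shape[0]
    bounds = np.linspace(0, n, n_sub + 1, dtype=int)
    pieces = []
    for i in range(n_sub):
        lo, hi = bounds[i], bounds[i + 1]                      # owned range
        olo, ohi = max(0, lo - overlap), min(n, hi + overlap)  # overlapping range
        lu = spla.splu(sp.csc_matrix(A[olo:ohi, olo:ohi]))     # local LU factorization
        pieces.append((lo, hi, olo, ohi, lu))

    def apply(v):
        z = np.zeros_like(v)
        for lo, hi, olo, ohi, lu in pieces:
            w = lu.solve(v[olo:ohi])            # solve on the overlapping block
            z[lo:hi] += w[lo - olo : hi - olo]  # "restricted": keep owned part only
        return z

    return spla.LinearOperator(A.shape, matvec=apply)

# Toy nonsymmetric convection-diffusion-like matrix.
n = 400
A = sp.diags([-1.0, 2.1, -0.9], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

M = ras_preconditioner(A)
x, info = spla.gmres(A, b, M=M, restart=30)
print(info)                       # 0 means converged
print(np.linalg.norm(A @ x - b))  # residual
```

The "restricted" part is the line that adds back only the owned (non-overlapping) piece of each local solve; classical additive Schwarz would add the whole overlapping correction, which costs extra communication in a parallel setting.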
This is one of a number of iterative methods that fall under the category of Krylov subspace methods. The key advantage of GMRES is that it is more general-purpose than other methods in this category, such as Conjugate Gradient (CG). CG only works on systems that are symmetric and positive definite, and because the problem involves both fluid-flow and electromagnetic terms at the same time, the resulting matrix is not symmetric. GMRES works well in this case because it applies to almost any system, and it is in fact the default Krylov subspace method in PETSc (Portable, Extensible Toolkit for Scientific Computation), the suite used to build the technique described in the paper. The method described in the paper has several advantages over a direct (non-iterative) method for the same problem. A key advantage of this approach is that the method remains accurate at much larger time steps than other methods, and it has good scalability characteristics when applied to large numbers of processors. This aspect is a critical factor in many such techniques, because scalability and speed allow more accurate approximations over larger domains. As far as criticisms go, I wasn't sure whether the main focus of the paper was solving a problem that had not previously been solved with any method, or demonstrating that an iterative method with a well-chosen preconditioner provides superior performance with comparable accuracy. The paper did present a direct method as well as an iterative one, but I was unsure whether the intent was a head-to-head comparison of the two or simply to use the direct method as a baseline. Another issue relates to the various tradeoffs and design decisions behind the paper. The paper does a very good job of describing the strengths and weaknesses of different approaches, as well as why certain methods were chosen over others.
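The symmetry point can be checked directly. A small sketch (again an illustrative assumption: SciPy, with a toy matrix whose unequal off-diagonals mimic an upwinded convection term, standing in for the coupled flow/electromagnetic system):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Unequal off-diagonals make A != A^T, violating the assumption CG depends on.
A = sp.diags([-1.3, 2.6, -0.7], offsets=[-1, 0, 1], shape=(n, n), format="csr")
print((A - A.T).nnz > 0)          # True: the matrix is nonsymmetric

b = np.ones(n)
x, info = spla.gmres(A, b, restart=30)
print(info)                       # 0: GMRES converges regardless of symmetry
print(np.linalg.norm(A @ x - b))  # residual
```

On the restart value discussed below: GMRES(m) must keep m Krylov basis vectors in memory, so restart=30 on, say, 10^7 double-precision unknowns already means roughly 30 * 8 * 10^7 bytes, about 2.2 GiB, just for the basis, and each inner iteration's orthogonalization cost also grows with the basis size. That memory/convergence tradeoff is one plausible reason for capping the restart.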
However, the parameters and techniques chosen for the particular implementation were not really explored. For example, the GMRES restart value was set to 30, but the paper does not go into detail as to why this value was chosen. Was it determined empirically to maximize convergence given the available memory, or did the method simply converge well before the iteration count reached this value? All things considered, I think this was a very solid paper that demonstrated an advance in a topic very important to scientific computing. This is a very active area of research, and the technique demonstrates an approach that helps to solve a problem of concern both for energy production and for advancing our overall knowledge of the universe. I think it is important to keep (as this paper does) the practical concerns of scalability and error propagation among the foremost concerns in developing these techniques.

Last modified 2 December 2007 at 2:11 pm by cobb