MOX Reports
The preprint collection of the Laboratory for Modeling and Scientific Computation MOX. It mainly contains works on numerical analysis and mathematical modeling applied to engineering problems. The MOX web site is mox.polimi.it
Found 1237 products
-
20/2013 - 05/01/2013
Azzimonti, L.; Nobile, F.; Sangalli, L.M.; Secchi, P.
Mixed Finite Elements for spatial regression with PDE penalization
Abstract: We study a class of models at the interface between statistics and numerical analysis. Specifically, we consider non-parametric regression models for the estimation of spatial fields from pointwise and noisy observations, which account for problem-specific prior information described in terms of a PDE governing the phenomenon under study. The prior information is incorporated in the model via a roughness term, using a penalized regression framework. We prove the well-posedness of the estimation problem and resort to a mixed, equal-order Finite Element method for its discretization, for which we prove well-posedness and an optimal convergence rate. Finally, the smoothing technique is extended to the case of areal data, which is of particular interest in many applications.
Keywords: mixed Finite Element method, fourth order problems, non-parametric regression, smoothing.
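The characteristic form of such estimators can be illustrated with a toy example. The sketch below is illustrative only, not the paper's method: it is 1D rather than 2D, uses a squared second-difference penalty in place of the PDE-based roughness term, and involves no Finite Elements. The estimate solves a penalized least-squares system in which the smoothing parameter trades data fidelity against roughness.

```python
import numpy as np

# Hypothetical 1D stand-in for PDE-penalized spatial regression:
# minimize  sum_i (z_i - f(x_i))^2 + lam * (roughness of f),
# with roughness discretized by squared second differences on a grid.
n = 200
x = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(0)
obs = np.sort(rng.choice(n, size=60, replace=False))    # observation sites
z = np.sin(2 * np.pi * x[obs]) + 0.1 * rng.standard_normal(60)

# Incidence matrix mapping grid values to the observation locations.
Psi = np.zeros((60, n))
Psi[np.arange(60), obs] = 1.0

# Second-difference operator: a crude discrete analogue of the penalty.
L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))[1:-1]

lam = 1.0                                               # smoothing parameter
fhat = np.linalg.solve(Psi.T @ Psi + lam * L.T @ L, Psi.T @ z)
```

The normal equations are well posed because the penalty's null space (linear functions) is pinned down by the data term, mirroring the well-posedness argument sketched in the abstract.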
-
19/2013 - 04/30/2013
Azzimonti, L.; Sangalli, L.M.; Secchi, P.; Domanin, M.; Nobile, F.
Blood flow velocity field estimation via spatial regression with PDE penalization
Abstract: We propose an innovative method for the accurate estimation of surfaces and spatial fields when prior knowledge of the phenomenon under study is available. The prior knowledge included in the model derives from the physics, physiology or mechanics of the problem at hand, and is formalized as a partial differential equation governing the behavior of the phenomenon, together with conditions that the phenomenon has to satisfy at the boundary of the problem domain. The proposed models exploit advanced scientific computing techniques, and specifically make use of the Finite Element method. The estimators have a typical penalized regression form and the usual inferential tools are derived. Both the pointwise and the areal data frameworks are considered. The driving application concerns the estimation of the blood flow velocity field in a section of a carotid artery, using data provided by echo-color Doppler; this applied problem arises within a research project that aims at studying atherosclerosis pathogenesis.
Keywords: functional data analysis, spatial data analysis, object-oriented data analysis, penalized regression, Finite Elements.
-
18/2013 - 04/29/2013
Discacciati, M.; Gervasio, P.; Quarteroni, A.
Interface Control Domain Decomposition (ICDD) Methods for Coupled Diffusion and Advection-Diffusion Problems
Abstract: This paper is concerned with the ICDD (Interface Control Domain Decomposition) method, a strategy introduced for the solution of partial differential equations (PDEs) in computational domains partitioned into overlapping subdomains. The original boundary value problem is reformulated by introducing new control variables, namely the unknown traces of the solution at the internal subdomain interfaces; the latter are then determined by requiring that the (a priori) independent solutions in each subdomain minimize a suitable cost functional. We illustrate the method on two kinds of boundary value problems, one homogeneous (an elliptic PDE), the other heterogeneous (a coupling between a second order advection-diffusion equation and a first order advection equation). We derive the associated optimality system, analyze its well-posedness, and illustrate efficient algorithms based on the solution of the Schur-complement system restricted solely to the interface control variables. Finally, we validate our method numerically through a family of tests and investigate the excellent convergence properties of our iterative solution algorithm.
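As a loose illustration of the overlapping-subdomain setting, not of ICDD itself, the interface values below are exchanged by a classical alternating Schwarz sweep rather than determined by minimizing a cost functional over interface controls. The model problem, subdomain split and iteration count are all hypothetical choices.

```python
import numpy as np

# Alternating Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0, on two
# overlapping subdomains (0, 0.6) and (0.4, 1).  Each sweep solves a local
# Dirichlet problem using the latest interface values from the other side.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u = np.zeros(n)                       # current global iterate

def solve_sub(a, b, left, right):
    """Solve -u'' = 1 on grid indices [a, b] with Dirichlet data left/right."""
    m = b - a + 1
    A = (np.diag(np.full(m - 2, 2.0)) - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / h**2
    rhs = np.ones(m - 2)
    rhs[0] += left / h**2             # fold boundary data into the RHS
    rhs[-1] += right / h**2
    interior = np.linalg.solve(A, rhs)
    return np.concatenate(([left], interior, [right]))

for _ in range(50):                   # Schwarz sweeps
    u[0:61] = solve_sub(0, 60, 0.0, u[60])      # subdomain (0, 0.6)
    u[40:101] = solve_sub(40, 100, u[40], 0.0)  # subdomain (0.4, 1)

exact = 0.5 * x * (1.0 - x)           # -u'' = 1 has a quadratic solution
err = np.max(np.abs(u - exact))
```

With this overlap the interface error contracts geometrically per sweep, which is the convergence behavior the overlapping framework is designed to exploit.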
-
17/2013 - 04/15/2013
Chen, P.; Quarteroni, A.
Accurate and efficient evaluation of failure probability for partial differential equations with random input data
Abstract: Several computational challenges arise when evaluating the failure probability of a given system in the context of risk prediction or reliability analysis. When the dimension of the uncertainties becomes high, well-established direct numerical methods cannot be employed because of the "curse of dimensionality". Many surrogate models have been proposed with the aim of reducing the computational effort; however, most of them fail to compute an accurate failure probability when the limit state surface defined by the failure event in the probability space lacks smoothness. Moreover, for a stochastic system modeled by partial differential equations (PDEs) with random input, only a limited number of solves of the underlying PDEs (on the order of a few tens) are affordable in practice, which prevents the application of many numerical methods, especially for high dimensional random inputs. In this work we develop hybrid and goal-oriented reduced basis methods that tackle these challenges by accurately and efficiently computing the failure probability of a stochastic PDE. The curse of dimensionality is significantly alleviated by a reduced basis approximation whose bases are constructed by goal-oriented adaptation. Moreover, an accurate evaluation of the failure probability for PDE systems with solutions of low regularity in probability space is guaranteed by the certified a posteriori error bound for the output approximation error. At the end of the paper we extend the proposed method to deal with more general PDE models, and we perform several numerical experiments to illustrate its computational accuracy and efficiency.
Keywords: failure probability evaluation, model order reduction, reduced basis method, goal-oriented adaptation, partial differential equations, random input data
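The hybrid idea of pairing a cheap surrogate with sparing calls to the full model can be sketched in a toy setting. Everything below is hypothetical: the limit-state functions, the fixed surrogate bias standing in for the paper's reduced basis surrogate, and the hand-chosen trust band standing in for its certified a posteriori error bound.

```python
import numpy as np

# Hybrid failure-probability sketch: P_f = P(g(X) < 0).  The surrogate
# classifies samples whose limit-state value is safely away from zero;
# only samples near the limit state are re-evaluated with the full model.
rng = np.random.default_rng(1)

def g_full(x):          # "expensive" limit state: failure when x1^2 + x2^2 > 2
    return 2.0 - x[..., 0] ** 2 - x[..., 1] ** 2

def g_surr(x):          # cheap surrogate with a small systematic error
    return g_full(x) + 0.05

N = 100_000
X = rng.standard_normal((N, 2))
gs = g_surr(X)
delta = 0.2                          # trust the surrogate outside this band
near = np.abs(gs) < delta            # samples that need the full model
g = gs.copy()
g[near] = g_full(X[near])            # re-evaluate only near the limit state
p_fail = float(np.mean(g < 0.0))
full_calls = int(near.sum())
```

Since the surrogate bias (0.05) is smaller than the band width, every sample outside the band is classified correctly, so the hybrid estimate coincides with a full-model Monte Carlo estimate while calling the full model on only a small fraction of samples. (Here the exact failure probability is exp(-1), since x1^2 + x2^2 is chi-squared with 2 degrees of freedom.)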
-
16/2013 - 04/09/2013
Faggiano, E. ; Lorenzi, T. ; Quarteroni, A.
Metal Artifact Reduction in Computed Tomography Images by Variational Inpainting Methods
Abstract: Permanent metallic implants, such as dental fillings, hip prostheses and cardiac devices, generate streak-like artifacts in computed tomography images. In this paper, two methods based on partial differential equations (PDEs), the Cahn-Hilliard equation and the TV-H-1 inpainting equation, are proposed to reduce metal artifacts. Although already profitably employed in other branches of image processing, these two fourth-order variational methods have never been used to perform metal artifact reduction. A systematic evaluation of the performance of the two methods is carried out, with comparisons against classical linear interpolation and two other PDE-based approaches using, respectively, the Fourier heat equation and a nonlinear version of the heat equation relying on total variation flow. Visual inspection of both synthetic and real computed tomography images, as well as the computation of similarity indexes, suggests that the Cahn-Hilliard method performs comparably to the more classical approaches, whereas the TV-H-1 method outperforms the others: it provides the best image restoration and the highest similarity indexes, and it is the only one able to recover hidden structures, a task of primary importance in the medical field.
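The simplest member of this family of PDE inpainting methods, second-order harmonic inpainting (a stationary version of the heat-equation approach the paper compares against), fits in a few lines; the fourth-order Cahn-Hilliard and TV-H-1 flows are well beyond a snippet. The toy image and mask below are illustrative only.

```python
import numpy as np

# Harmonic inpainting: fill a masked region by repeatedly averaging the four
# neighbours (Jacobi iteration for a discrete Laplace equation), keeping the
# known pixels fixed as Dirichlet boundary data.
def harmonic_inpaint(img, mask, n_iter=2000):
    """mask is True where pixels are missing (e.g. under a metal artifact)."""
    u = img.copy()
    u[mask] = img[~mask].mean()          # neutral initial guess in the hole
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]              # relax only the missing pixels
    return u

# Toy "image": a smooth ramp with a square hole punched in it.
y, xg = np.mgrid[0:32, 0:32]
img = (xg + y) / 62.0
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True
restored = harmonic_inpaint(np.where(mask, 0.0, img), mask)
err = np.max(np.abs(restored[mask] - img[mask]))
```

Because the ramp is harmonic, this baseline recovers the hole essentially exactly; on images with edges it blurs, which is precisely the shortcoming that motivates the higher-order and total-variation methods studied in the paper.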
-
15/2013 - 04/08/2013
Antonietti, P.F.; Giani, S.; Houston, P.
Domain Decomposition Preconditioners for Discontinuous Galerkin Methods for Elliptic Problems on Complicated Domains
Abstract: In this article we consider the application of Schwarz-type domain decomposition preconditioners to discontinuous Galerkin finite element approximations of elliptic partial differential equations posed on complicated domains, characterized by small details or microstructures in the computational domain. In this setting, it is necessary to define a suitable coarse-level solver in order to guarantee the scalability of the preconditioner under mesh refinement. To this end, we exploit recent ideas developed in the so-called composite finite element framework, which allows for the definition of finite element methods on general meshes consisting of agglomerated elements. Numerical experiments highlighting the practical performance of the proposed preconditioner are presented.
-
14/2013 - 03/23/2013
Gianni Arioli, Filippo Gazzola
A new mathematical explanation of the Tacoma Narrows Bridge collapse
Abstract: The spectacular collapse of the Tacoma Narrows Bridge, which occurred in 1940, has attracted the attention of engineers, physicists, and mathematicians over the last 70 years. There have been many attempts to explain this amazing event; nevertheless, none of them gives a satisfactory and universally accepted explanation of the phenomena visible on the day of the collapse. The purpose of the present paper is to suggest a new mathematical model for the study of the dynamical behavior of suspension bridges which provides a realistic explanation of the Tacoma collapse.
-
13/2013 - 03/14/2013
Pini, A.; Vantini, S.
The Interval Testing Procedure: Inference for Functional Data Controlling the Family Wise Error Rate on Intervals
Abstract: We propose a novel inferential technique, based on permutation tests, that enables the statistical comparison of two functional populations. The procedure (the Interval Testing Procedure) involves three steps: (i) representing the functional data on a suitable high-dimensional ordered functional basis; (ii) jointly performing univariate permutation tests on the coefficients of the expansion; (iii) combining the results to obtain a suitable family of multivariate tests and a p-value heat-map used to correct the univariate p-values. The procedure provides an interval-wise control of the Family Wise Error Rate: this control, which lies between the weak and the strong control of the Family Wise Error Rate, implies that, for any interval of the domain on which there is no difference between the two functional populations, the probability that any part of it is wrongly detected as significant is controlled. Moreover, we prove that the statistical power of the Interval Testing Procedure is always higher than that of the Closed Testing Procedure (which provides a strong control of the Family Wise Error Rate but is computationally unfeasible in the functional framework). On the other hand, we prove that the power of the Interval Testing Procedure is always lower than that of the Global Testing Procedure (which, however, provides only a weak control of the Family Wise Error Rate and offers no guide to the interpretation of the test result). The Interval Testing Procedure is also extended to the comparison of several functional populations and to the estimation of the central function of a symmetric functional population. Finally, we apply the Interval Testing Procedure to two case studies: Fourier-based inference for the mean function of yearly recorded daily temperature profiles in Milan, Italy; and B-spline-based inference for the difference between curvature, radius and wall shear stress profiles along the Internal Carotid Artery of two pathologically different groups of subjects. In the supplementary materials we report the results of a simulation study comparing the novel procedure with other possible approaches. An R package implementing the Interval Testing Procedure is available as supplementary material.
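Step (ii), the coefficient-wise permutation tests, can be sketched in isolation; the toy data, sample sizes and statistic below are illustrative assumptions, and the interval-wise combination and p-value heat-map of step (iii) are omitted.

```python
import numpy as np

# Two-sample permutation tests, one per basis coefficient: under the null,
# group labels are exchangeable, so the observed |mean difference| is ranked
# against the same statistic recomputed over random relabelings.
rng = np.random.default_rng(2)
n1, n2, p = 20, 20, 8
A = rng.standard_normal((n1, p))          # coefficients, population 1
B = rng.standard_normal((n2, p))          # coefficients, population 2
B[:, :3] += 1.5                           # populations differ in coeffs 0-2

def perm_pvalues(A, B, n_perm=999):
    """One permutation p-value per coefficient, using |mean difference|."""
    data = np.vstack([A, B])
    obs = np.abs(A.mean(0) - B.mean(0))
    count = np.ones_like(obs)             # include the observed statistic
    for _ in range(n_perm):
        idx = rng.permutation(len(data))
        Ap, Bp = data[idx[:len(A)]], data[idx[len(A):]]
        count += np.abs(Ap.mean(0) - Bp.mean(0)) >= obs
    return count / (n_perm + 1)

pvals = perm_pvalues(A, B)
```

The resulting vector of univariate p-values is the raw material that step (iii) would then combine over all intervals of coefficients to obtain the interval-wise-adjusted p-values.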