3.4 Regularized least squares

The RLS approach to the inversion problem is to find (essentially through a least-squares fit) the model profile that best fits the data, subject to a smoothness penalty term, or regularization. More regularization – a larger weighting for the penalty term – results in poorer spatial resolution (and potentially more systematic error) but smaller uncertainties. In one such implementation (Schou et al., 1994), we minimize
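The resolution-versus-uncertainty trade-off can be made concrete with a small numerical sketch. In a discretized RLS problem the solution is a linear combination of the data, x = C d with C = (AᵀA + μ²LᵀL)⁻¹Aᵀ, so the data errors propagate into the solution as Var(x_j) = Σ_i C[j,i]² σ_i². The forward operator A and smoothing matrix L below are invented toy quantities, not real helioseismic mode kernels; the sketch only illustrates that a larger μ shrinks the propagated uncertainty.

```python
import numpy as np

# Toy illustration of the RLS trade-off: increasing the regularization
# parameter mu reduces the formal uncertainty of the solution (at the
# cost of spatial resolution).  A and L are invented stand-ins, not
# real mode kernels.

rng = np.random.default_rng(1)
n = 40
A = rng.normal(size=(60, n))            # toy forward operator: 60 "modes"
# Second-difference smoothing matrix (discrete d^2/dx^2, up to scale)
L = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

def solution_variance(mu, sigma=1.0):
    """Mean variance of the RLS solution for unit-variance data errors."""
    C = np.linalg.solve(A.T @ A + mu**2 * (L.T @ L), A.T)
    return sigma**2 * np.sum(C**2, axis=1)  # Var(x_j) = sum_i C[j,i]^2 sigma^2

v_small = solution_variance(0.1).mean()   # weak regularization
v_large = solution_variance(10.0).mean()  # strong regularization
print(v_small, v_large)                   # v_large should be much smaller
```

The same coefficient matrix C also determines the averaging kernels, which broaden as μ grows; that is the systematic-error side of the trade-off.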
$$\sum_i \frac{\left[\, d_i - \int_0^R \!\int_0^\pi \bar{\Omega}(r,\theta)\, K_i(r,\theta)\, dr\, d\theta \,\right]^2}{(\sigma_i/\bar{\sigma})^2} \;+\; \mu_r^2 \int_0^R \!\int_0^\pi \left( \frac{d^2 \bar{\Omega}}{dr^2} \right)^{\!2} dr\, d\theta \;+\; \mu_\theta^2 \int_0^R \!\int_0^\pi \left( \frac{d^2 \bar{\Omega}}{d\theta^2} \right)^{\!2} dr\, d\theta \qquad (21)$$
with μr and μ𝜃 being the radial and latitudinal tradeoff parameters. The RLS inversion has the advantages of being computationally inexpensive and of always providing some kind of estimate of the quantity of interest, even at locations that are not, strictly speaking, resolved by the data; the second-derivative regularization amounts to an a priori assumption of smoothness. In this method the averaging kernels 𝒦 can (but need not) be calculated from the inversion coefficients in a separate step. They are not guaranteed to be well localized, though they are forced to have their center of mass at the specified target location (r0, 𝜃0). Figure 11 illustrates typical averaging kernels for a 2D RLS inversion of an MDI data set.
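A minimal sketch of how the minimization in Eq. (21) is solved in practice, reduced to one dimension (radius only) with invented Gaussian "mode kernels" standing in for the real rotational kernels K_i(r); nothing here uses real MDI data. Discretizing Ω(r) on a grid turns the minimization into the normal equations (AᵀWA + μ²LᵀL) x = AᵀW d, where L approximates d²/dr², and the averaging kernels follow from the coefficient matrix in a separate step, as described above.

```python
import numpy as np

# 1-D RLS sketch (hypothetical synthetic problem, not a real inversion).
nr, nk = 100, 30
r = np.linspace(0.0, 1.0, nr)
dr = r[1] - r[0]

# Invented Gaussian kernels K_i(r); rows of A include the dr weight so
# that A @ x approximates integral K_i(r) Omega(r) dr.
centers = np.linspace(0.1, 0.95, nk)
A = np.exp(-0.5 * ((r[None, :] - centers[:, None]) / 0.05) ** 2) * dr

omega_true = 435.0 + 20.0 * np.sin(3.0 * r)   # invented smooth profile (nHz)
sigma = np.full(nk, 0.5)                      # assumed observational errors
d = A @ omega_true                            # noiseless synthetic data

# Second-difference operator approximating d^2/dr^2 (the penalty in Eq. 21)
L = (np.diag(np.full(nr, -2.0)) + np.diag(np.ones(nr - 1), 1)
     + np.diag(np.ones(nr - 1), -1)) / dr**2

W = np.diag(1.0 / sigma**2)
mu = 1e-5                                     # radial trade-off parameter
M = A.T @ W @ A + mu**2 * (L.T @ L)

# Coefficient matrix: x = C d, so row j of C gives the weights c_i(r_j)
C = np.linalg.solve(M, A.T @ W)
omega_est = C @ d

# Averaging kernels, computed from the coefficients in a separate step:
# kappa_j(r) = sum_i c_i(r_j) K_i(r).  They are not guaranteed to be
# well localized, especially where kernel coverage is poor.
avg_kernels = C @ (A / dr)                    # strip dr to get K_i(r) samples

j = 50                                        # target point r = 0.5
print(omega_est[j], omega_true[j])            # should agree closely
print(r[np.argmax(avg_kernels[j])])           # kernel should peak near r = 0.5
```

With weak regularization the estimate at well-covered radii tracks the input profile closely and the averaging kernel peaks near the target; raising μ would smooth the estimate and broaden the kernels, as discussed above.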

Figure 11: Averaging kernels for a typical RLS inversion of MDI data, for target latitudes 0 (a), 15 (b), 30 (c), 45 (d), 60 (e), and 75 (f) degrees, marked by the dashed radial lines, and target radii 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, and 0.99 R⊙, indicated by colors from blue to red and by the dashed concentric circles. Contour intervals are 5% of the local maximum value closest to the target location, with dashed contours indicating negative values.
