### 3.4 Regularized least squares

The RLS approach to the inversion problem is to find (essentially through a least-squares fit) the model
profile that best fits the data, subject to a smoothness penalty term, or regularization. More regularization
– a larger weighting for the penalty term – results in poorer spatial resolution (and potentially more
systematic error) but smaller uncertainties. In one such implementation (Schou et al., 1994), we minimize

$$
\sum_{i}\left(\frac{d_i - \int\!\!\int K_i(r,\theta)\,\Omega(r,\theta)\,dr\,d\theta}{\sigma_i}\right)^2
+ \mu_r \int\!\!\int \left(\frac{\partial^2 \Omega}{\partial r^2}\right)^2 dr\,d\theta
+ \mu_\theta \int\!\!\int \left(\frac{\partial^2 \Omega}{\partial \theta^2}\right)^2 dr\,d\theta,
$$

with $\mu_r$ and $\mu_\theta$ being the radial and latitudinal tradeoff parameters. The RLS inversion has the
advantages of being computationally inexpensive and always (thanks to the second-derivative regularization,
which amounts to an a priori assumption of smoothness) providing some kind of estimate of the quantity of
interest even in locations that are not, strictly speaking, resolved by the data. In this method, the averaging
kernels can be (but need not be) calculated from the inversion coefficients in a separate step. They are not
guaranteed to be well localized, though they are forced to have a center of mass at the specified target
location. Figure 11 illustrates typical averaging kernels for a 2dRLS inversion of an MDI data
set.
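The tradeoff described above can be illustrated with a minimal one-dimensional sketch of Tikhonov-regularized least squares with a second-derivative penalty. This is not the actual 2dRLS code of Schou et al. (1994); the mode kernels, grid sizes, and tradeoff parameter value below are illustrative assumptions. It shows how the estimate is a linear combination of the data, so that the same coefficient matrix yields both the solution and the averaging kernels in a separate step:

```python
import numpy as np

rng = np.random.default_rng(0)

n_modes, n_grid = 40, 60
x_grid = np.linspace(0.0, 1.0, n_grid)

# Toy mode kernels K_i(x): smooth bumps peaking at different depths,
# normalized so each row sums to 1 (a stand-in for real mode kernels).
centers = np.linspace(0.1, 0.9, n_modes)
K = np.exp(-((x_grid[None, :] - centers[:, None]) / 0.1) ** 2)
K /= K.sum(axis=1, keepdims=True)

# Synthetic "true" profile and noisy data d_i = (K f)_i + noise.
f_true = 1.0 + 0.3 * np.sin(3 * np.pi * x_grid)
sigma = 0.01
d = K @ f_true + sigma * rng.normal(size=n_modes)

# Second-difference operator L, a discrete stand-in for d^2 f / dx^2.
L = np.diff(np.eye(n_grid), n=2, axis=0)

def rls_solve(mu):
    """Minimize ||(d - K f)/sigma||^2 + mu * ||L f||^2 over f.

    Returns the estimate f_hat and the averaging-kernel matrix:
    f_hat = C @ d, so row j of C @ K is the averaging kernel at x_j.
    """
    A = K.T @ K / sigma**2 + mu * L.T @ L
    C = np.linalg.solve(A, K.T / sigma**2)
    return C @ d, C @ K

f_hat, avg_kernels = rls_solve(mu=1.0)

# Because the kernels are normalized and constants lie in the penalty's
# null space, each averaging kernel here sums to 1: a constant profile
# is recovered exactly, whatever the regularization weight.
```

Raising `mu` smooths the estimate (smaller `||L f_hat||`) at the cost of broader averaging kernels, which is the resolution-versus-uncertainty tradeoff described in the text; the 2dRLS case simply carries two such weights, one radial and one latitudinal.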