Dan Maljovec

Fixed Examples

The goal in this exercise will be to fit a scalar dependent value, y, to a scalar independent value, x. In these visual examples, we will take the horizontal dimension to be our independent variable, x, and the vertical dimension to be our dependent variable, y. Let us begin by exploring a pre-defined, fixed set of points. (Note: there is some random noise in the y values, so reloads of this page may look different.)

We will first fit a noisy parabola using Ordinary Least Squares. Next, we will apply a set of pre-defined, variable weights to the data and fit it using Weighted Least Squares. Lastly, we will take this same set of points and allow the weights to be defined locally around the query location using a Gaussian weighting function, in a process known as Moving Least Squares.

After this experiment, I offer an interactive example that allows you to specify your own training data set and explore all three fits on a single graph.

Ordinary Least Squares

For the ordinary least squares case, all of the data has the same weight. Thus, the outlier tends to pull the right side of the fit upward, but its effect is limited since more of the data shows a downward trend. Note that without weighting, the majority trend prevails: since there are more points on the first half of the dataset, the fit reports a downward slope.
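
To make this concrete, here is a minimal numpy sketch of an ordinary least squares fit. The data below (a noisy parabola with a single outlier) and all names are illustrative, not the exact set used in the demo.

    import numpy as np

    # Noisy parabola with an outlier, standing in for the demo's fixed data.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = -4.0 * (x - 0.4) ** 2 + 1.0 + rng.normal(0.0, 0.05, x.size)
    y[-1] += 0.8  # a single outlier pulling the right side of the fit upward

    # Design matrix for a global linear fit: columns [1, x].
    X = np.column_stack([np.ones_like(x), x])

    # Ordinary least squares: solve the normal equations X^T X b = X^T y.
    b = np.linalg.solve(X.T @ X, X.T @ y)
    print("intercept, slope:", b)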

Weighted Least Squares

Now that the weights are accounted for, we can see that the large weight of the outlier is actually able to change the sign of the slope of the fitted line. In this example, we are still restricted to a linear fit, as we are still solving a global problem with one set of coefficients.
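
A sketch of the weighted version, using the same illustrative data as above; the only change from the ordinary case is the diagonal weight matrix W in the normal equations, and the weight value chosen for the outlier here is made up.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = -4.0 * (x - 0.4) ** 2 + 1.0 + rng.normal(0.0, 0.05, x.size)
    y[-1] += 0.8  # outlier
    X = np.column_stack([np.ones_like(x), x])

    # Give the outlier a much larger weight than the rest (illustrative value).
    w = np.ones_like(x)
    w[-1] = 25.0

    # Weighted least squares: solve X^T W X b = X^T W y, with W = diag(w).
    W = np.diag(w)
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print("intercept, slope:", b)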

Moving Least Squares

Note that in this case we can actually fit a non-linear curve to the data, since we solve a local linear problem at each query location. Because these local linear problems depend on a weighting function, θ(x̄, σ), that varies smoothly as we smoothly vary the query location x̄, we can ensure a C1 non-linear curve. The intuition here is that we are smoothly blending neighboring local linear models.

Also, note that as σ grows, we approach the ordinary least squares solution, since we are in essence flattening our Gaussian function such that all of the weights approach 1 (that is, points farther from x̄ have weights that become indistinguishable from those of points close to x̄).
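
Here is a minimal sketch of moving least squares under two assumptions that may differ from the demo's source: theta is a Gaussian of the distance between each sample and the query point x̄, and a local linear model is re-solved at every query location.

    import numpy as np

    def theta(d, sigma):
        # Gaussian weight as a function of distance d from the query point.
        return np.exp(-d ** 2 / (2.0 * sigma ** 2))

    def mls(x, y, queries, sigma):
        # Solve a weighted linear problem at each query point, evaluate it there.
        X = np.column_stack([np.ones_like(x), x])
        fit = np.empty_like(queries)
        for i, q in enumerate(queries):
            W = np.diag(theta(np.abs(x - q), sigma))
            b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            fit[i] = b[0] + b[1] * q
        return fit

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = -4.0 * (x - 0.4) ** 2 + 1.0 + rng.normal(0.0, 0.05, x.size)
    queries = np.linspace(0.0, 1.0, 100)
    curve = mls(x, y, queries, sigma=0.1)         # small sigma: local, non-linear
    nearly_ols = mls(x, y, queries, sigma=100.0)  # large sigma: weights flatten toward 1

With sigma=100.0, every weight in this sketch is essentially 1, so the last fit agrees with the ordinary least squares line to within numerical noise, matching the flattening behavior described above.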


Custom Example

Now you can build your own point set and play with the weights and the scale parameter to see how these methods work in a 1D case. (Note: the vertical dimension is the dependent variable and the horizontal dimension is the independent variable.)

For more advanced manipulation, you can open the source code and modify the theta function to use a different weighting function. You can even extend the least squares input array, X, to include higher-order values such as x² and x³ to try fitting nonlinear functions, or you could even fit polar coordinates: given a radius and an angle, compute an x and a y. In that last case, you would be fitting multiple dependent variables.

I have not yet attempted these myself, but I tried to make the "fit" functions generic enough to handle multidimensional inputs. Multidimensional outputs may be a bit more work.
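
As a rough illustration of the higher-order suggestion above (again numpy with made-up data, not the demo's own code), extending X with x² and x³ columns is the only change needed to turn the same global solve into a cubic fit:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = -4.0 * (x - 0.4) ** 2 + 1.0 + rng.normal(0.0, 0.05, x.size)

    # Append x^2 and x^3 columns to X; the normal equations are unchanged.
    X = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
    b = np.linalg.solve(X.T @ X, X.T @ y)
    print("coefficients of 1, x, x^2, x^3:", b)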

Instructions

Left click on the graph below to add up to a maximum number of training points for fitting. Left clicking and dragging will adjust the weight of a training point, and the size of the point will reflect its relative weight. You can delete existing data points by right clicking on the point you wish to remove. Also, you can adjust the scale parameter, σ, which determines how quickly the weighting function falls off in the moving least squares solution.

[Interactive graph. Key: Ordinary Least Squares, Weighted Least Squares, Moving Least Squares. Slider for σ.]