Wednesday, May 4, 2016

Two-pass forward-backward approximation systems

A number of approximation methods work in a two-stage cycle: a forward pass to compute the system's output on a test input, and then a backward pass to update the system using the error of that output with respect to the supervised true answer.

In the backward pass, an update function moves in the opposite direction of the edges, updating weights as it goes along.

Neural Networks / Deep Learning - a directed (usually acyclic) graph with weighted edges used to compute a function. In the forward pass, node values are computed in topological sort order, starting from the input nodes (no in-edges): the value at a node is the dot product of the values at its source nodes and the edge weights, passed through a simple threshold function. This computes the values at the output nodes (no out-edges). Traditional neural nets have a layer of input nodes, a single layer of hidden (interior) nodes, and a layer of output nodes, with no edges directly from input to output. Deep neural nets have more layers. Arbitrary undesigned graphs are usually not very successful.
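A minimal sketch of the two passes for a one-hidden-layer network (my own toy example, not from any particular library), using a sigmoid instead of a hard threshold so the backward pass can propagate gradients:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden edge weights
W2 = rng.normal(size=(3, 1))   # hidden -> output edge weights

x = np.array([[0.5, -1.0]])    # one training example
y = np.array([[1.0]])          # supervised true answer

for step in range(100):
    # forward pass: dot products plus a smooth threshold, in graph order
    h = sigmoid(x @ W1)
    y_hat = sigmoid(h @ W2)

    # backward pass: push the error back along the edges,
    # updating weights as it goes
    err = y_hat - y
    d_out = err * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * x.T @ d_hid
```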

Expectation-Maximization - approximation of the parameters of a statistical model. First the expected value of the likelihood function is calculated under the current parameter estimates (taking the expectation over the hidden variables), then the model parameters are recalculated to maximize that expected likelihood.
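A rough illustration of the two alternating steps (again just a sketch, with made-up data): a two-component 1D Gaussian mixture with unknown means and fixed unit variances.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic data drawn from two Gaussians with unknown means
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])

mu = np.array([0.0, 1.0])   # initial guesses for the two means
pi = np.array([0.5, 0.5])   # initial mixing weights

for _ in range(50):
    # E-step: expected component memberships under the current parameters
    dens = pi * np.exp(-0.5 * (data[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate the parameters to maximize the expected likelihood
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    pi = nk / len(data)

print(mu, pi)   # the means should end up near -2 and 3
```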

Primal-dual linear programming - maximizing a target function subject to a set of linear constraints. The difficulty is dealing with a possibly large set of constraints and a large number of dimensions.
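To make the primal/dual pairing concrete, here is a small hypothetical example solved both ways with scipy.optimize.linprog; at the optimum the two objective values coincide (strong duality). This only illustrates the pair itself, not any particular two-pass algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: maximize c.x subject to A x <= b, x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate c to maximize
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b.y subject to A^T y >= c, y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)   # both should be 36 for this example
```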

Kalman filter - successive measurement refinement. This is usually applied to position measurement with slightly fallible sensors. From an initial (fuzzy) position and direction of an object at time t, the position/direction at time t+1 is predicted by combining the predicted movement from time t with a fuzzy sensing at time t+1. Combined, the variance is lower than either source alone.
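A bare-bones 1D position example (my own sketch, with invented noise levels): each cycle predicts forward from the motion model, then corrects with a noisy measurement, and the posterior variance shrinks relative to the prediction alone.

```python
import numpy as np

rng = np.random.default_rng(2)
true_velocity = 1.0
q, r = 0.05, 1.0          # process and measurement noise variances

x_est, p = 0.0, 1.0       # initial (fuzzy) position estimate and its variance
true_pos = 0.0

for t in range(20):
    # the object actually moves; the sensor sees it imperfectly
    true_pos += true_velocity
    z = true_pos + rng.normal(0, np.sqrt(r))

    # predict: push the estimate forward one time step (variance grows)
    x_pred = x_est + true_velocity
    p_pred = p + q

    # update: blend the prediction with the fuzzy measurement (variance shrinks)
    k = p_pred / (p_pred + r)          # Kalman gain
    x_est = x_pred + k * (z - x_pred)
    p = (1 - k) * p_pred

print(x_est, true_pos, p)
```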

Are any of these even more alike than just the forward-backward pattern? Are there any other algorithms that are superficially similar, with a two-step iterative process?

Update (5/24/2016): this blog post seems to give pointers to how backprop, primal-dual LP, and Kalman filters are interderivable.
