If the input pattern is taught in association with a different output pattern, then the presentation of this input will cause the corresponding pattern to appear on the output. This memory is called hetero-associative.
1. Auto-associative: X = Y
Recognizes noisy versions of a stored pattern (see the sketch after this list)
Example: Hopfield Memory Network
2. Hetero-associative, bidirectional: X ≠ Y
Iterative correction of both input and output
Example: BAM (Bidirectional Associative Memory)
3. Hetero-associative, input correcting: X ≠ Y
The input clique (circle) is auto-associative => it repairs input patterns
4. Hetero-associative, output correcting: X ≠ Y
The output clique (circle) is auto-associative => it repairs output patterns
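As a concrete illustration of case 1, here is a minimal sketch of Hopfield-style auto-associative recall in Python/NumPy. The toy patterns, the outer-product (Hebbian) weights, and the synchronous sign update are assumptions made for brevity; a real Hopfield net typically updates one unit at a time.

```python
import numpy as np

# Two bipolar (+1/-1) toy patterns to store (hypothetical data).
X = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1],
])

# Hebbian (outer-product) auto-associative weights: W = X^T X,
# with self-connections zeroed as in the standard Hopfield net.
W = X.T @ X
np.fill_diagonal(W, 0)

# Corrupt the first pattern by flipping one bit.
x = X[0].copy()
x[2] = -x[2]

# Iterate x <- sign(W x) until the state stops changing.
for _ in range(10):
    h = W @ x
    x_new = np.where(h > 0, 1, np.where(h < 0, -1, x))  # ties keep the old value
    if np.array_equal(x_new, x):
        break
    x = x_new

print(x)  # recovers X[0], the stored pattern closest to the noisy input
```

With these two (orthogonal) patterns the corrupted input settles back onto the stored pattern in a single step, which is exactly the "recognize noisy versions" behaviour described above.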
The Hebbian learning rule states that
"Two neurons which are simultaneously active should develop a degree of interaction higher than that between neurons whose activities are uncorrelated."
The Hopfield and BAM nets have two major limitations:
- The number of patterns that can be stored and accurately recalled is a function of the number of nodes in the net.
- There is a problem recalling the correct pattern when two input patterns share many identical features (e.g. too many pixel values in common).
Weight matrix representation
Hetero-association: W = X^T Y (the sum of the outer products x_p^T y_p over all pattern pairs p)
Auto-association: W = X^T X (the sum of the outer products x_p^T x_p over all patterns p)
where X = matrix of input patterns, each row being one pattern,
Y = matrix of output patterns, each row being one pattern.
Ideally, the weights in the weight matrix W will record the average correlations across all patterns.
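Putting the formulas above into code, here is a small sketch of hetero-associative storage and one-shot recall; the bipolar toy patterns and the sign threshold used for recall are assumptions made for the example:

```python
import numpy as np

# Each row of X is an input pattern, each row of Y its associated output
# pattern (bipolar +1/-1 toy data).
X = np.array([
    [ 1, -1,  1, -1],
    [-1,  1,  1, -1],
])
Y = np.array([
    [ 1,  1, -1],
    [-1,  1,  1],
])

# Hetero-associative weights: W = X^T Y (sum of outer products x_p^T y_p).
W = X.T @ Y

# The auto-associative variant would simply be W = X.T @ X.

# Recall: present an input pattern and threshold the weighted sums.
y_hat = np.sign(X[0] @ W)
print(y_hat)  # matches Y[0] because the two input patterns are orthogonal
```

Because the two input rows are orthogonal here, the cross-talk term vanishes and recall is exact; with strongly correlated inputs the recalled pattern degrades, which is the second limitation mentioned earlier.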