Compression tutorial

This tutorial walks through a simple example: compressing a small vector by expressing it in a precomputed orthogonal basis and keeping only its largest coefficients. Along the way, it demonstrates the capabilities of Cogent Lab.

We start with a data vector, x:

We can plot x:
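For readers following along outside Cogent Lab, a rough NumPy sketch of this step (not Cogent Lab's own code) might look like the following. The eight values of x are copied from the x8 output near the end of the tutorial, which matches x up to round-off:

```python
# NumPy sketch (not Cogent Lab code): define x and plot its components.
# Values copied from the x8 output below, which equals x up to round-off.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([0.53766714, 1.83388501, -2.25884686, 0.86217332,
              0.31876524, -1.3076883, -0.43359202, 0.34262447])

plt.stem(x)
plt.title("x")
plt.show()
```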

We now bring in our precomputed orthogonal matrix, U, which will serve as the new basis for x.

We can plot each column vector of U:
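The precomputed U itself is not listed here, so the sketch below substitutes an arbitrary 8×8 orthogonal matrix built from a QR decomposition; any orthogonal U behaves the same way in the steps that follow. It then plots the eight column (basis) vectors:

```python
# Stand-in for the precomputed U: an arbitrary 8x8 orthogonal matrix from QR.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))

fig, axes = plt.subplots(2, 4, sharey=True, figsize=(10, 4))
for i, ax in enumerate(axes.flat):
    ax.stem(U[:, i])              # i-th basis vector (column of U)
    ax.set_title(f"U[:, {i}]")
plt.tight_layout()
plt.show()
```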

Next, we can compute the vector a, which represents x in terms of the U basis (such that \(x = Ua\)). Since U is orthogonal, \(U^{-1} = U^\top\), so this is just \(a = U^{-1}x = U^\top x\), as computed below:

[8] -0.03712734872 0.8271057473 0.8114806423 0.175395094 1.494637182 -0.9246413458 -2.351901711 -1.272758945
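In the NumPy sketch, the same computation is a single matrix product (with the stand-in U, the numbers will of course differ from the output above):

```python
# Orthogonality means U^{-1} = U^T, so no explicit inverse is needed.
a = U.T @ x        # equivalently: np.linalg.solve(U, x)
print(a)
```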

To compress the data, we will define a function that zeroes all but the n elements of a with the highest absolute values:
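One way to write such a function in the NumPy sketch is shown below; unlike the tutorial's compress, it takes the vector explicitly as an argument:

```python
def compress(a, n):
    """Zero all but the n elements of a with the largest absolute values."""
    out = np.zeros_like(a)
    keep = np.argsort(np.abs(a))[-n:]   # indices of the n largest |a[i]|
    out[keep] = a[keep]
    return out
```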

Then, we will make a2, the result of compress(2) (so the two most important elements are included):

[8] 0 0 0 0 1.494637182 0 -2.351901711 0
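With the sketch's compress, this step is:

```python
a2 = compress(a, 2)   # keep only the two largest-magnitude coefficients
print(a2)
```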

From a2, we can compute and plot x2, the approximation of x based on the two most important elements of a:

[8] 0.07841712375 0.5580028932 -1.614870978 0.9784509612 0.9784509612 -1.614870978 0.5580028932 0.07841712375
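Since a2 holds coefficients in the U basis, the approximation is recovered by multiplying back by U:

```python
x2 = U @ a2           # reconstruct the approximation from the two kept coefficients
plt.stem(x)                                      # original x
plt.stem(x2, linefmt="C1-", markerfmt="C1o")     # two-term approximation x2
plt.title("x (blue) vs. x2 (orange)")
plt.show()
```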

We can do the same thing for a4 and x4, with the four most important elements of a:

[8] 0 0 0 0 1.494637182 -0.9246413458 -2.351901711 -1.272758945

[8] -0.3025859583 1.3649937 -2.234195464 1.218196986 0.7387049365 -0.9955464921 -0.2489879136 0.4594202058
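In the sketch, the same two lines with n = 4:

```python
a4 = compress(a, 4)   # keep the four largest-magnitude coefficients
x4 = U @ a4
print(a4)
print(x4)
```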

We can also compute x8, which just uses a without anything removed:

[8] 0.53766714 1.83388501 -2.25884686 0.86217332 0.31876524 -1.3076883 -0.43359202 0.34262447
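Keeping all eight coefficients reconstructs x exactly (up to round-off):

```python
x8 = U @ a            # nothing removed, so x8 should reproduce x
print(x8)
```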

We can find the error of x8 relative to x; it is essentially zero (just floating-point round-off), since nothing was removed:

x8 error: 4.612726499e-16
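The reported numbers are consistent with the relative Euclidean error \(\|x - x_n\| / \|x\|\), where \(x_n\) is x2, x4, or x8. A sketch of that measure:

```python
# Relative Euclidean (2-norm) error, consistent with the values reported above.
def rel_error(approx, exact):
    return np.linalg.norm(exact - approx) / np.linalg.norm(exact)

print("x8 error:", rel_error(x8, x))   # round-off only, on the order of 1e-16
```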

We can do the same for x4 and x2:

x4 error: 0.3440342032

x2 error: 0.5757042318
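And for the compressed reconstructions (with the stand-in U the exact numbers differ, but the pattern is the same: keeping more coefficients gives a smaller error):

```python
print("x4 error:", rel_error(x4, x))
print("x2 error:", rel_error(x2, x))
```

Because U is orthogonal, it preserves lengths, so each error is exactly the norm of the discarded coefficients of a divided by \(\|a\|\) (which equals \(\|x\|\)).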