Theo Verelst Diary Page

Latest: March 6 2001

I've decided, after a good example, to write some diary pages with thoughts and events.

Oh, in case anybody fails to understand, I'd like to remind them that these pages are copyrighted, and that everything found here may not be redistributed in any other way than over this direct link without my prior consent. That includes family, Christianity, and other cheats. The simple reason is that it may well be that some people have been ill informed because others have spread illegal 'copies' of my materials, even with modifications. Apart from my moral judgement, that is illegal, and will be treated as such by me. Make as many references to these pages as you like, make hardcopies, but only of the whole page, including the html-references, and without changing an iota or tittle...

And if not? I won't hesitate to use legal means to correct wrong that may be done otherwise. And I am serious. I usually am. I'm not sure I could get 'attempt at grave emotional assault' out of it, but infringement of copyright rules is serious enough. And Jesus called upon us to respect the authorities of state, so Christians would of course never do such a thing. Lying, imagine that.
 

Previous Diary Entries
 

March 6, 2001

The rest of this page I'd prepared a few days ago, so here it is.

I did some homework, and found out what any normal person may expect: professional work with unprofessional tools is not a good idea if you can help it.

Will we actually go into the source code of inverting matrices? Why not.

Homework: straight matrix inversion in C

For those not knowledgeable about, let's say, the basic math that underlies much of what are seen as hard-core engineering studies: a matrix is a square (or rectangular) arrangement of numbers in rows and columns. It represents a system of equations, in which there are unknowns, such as x and y, expressions, such as x+3*y and 2x-5y, and right hand side outcomes, such as x+3y = 17 (short for x plus three times y equals 17). The variables, x and y in this example, can be filled in with any number, such as 1 or 2 or 1/6 or 5.8462 or PI, and the point of the equations together with the right hand sides is that they usually represent some kind of problem that can be solved.

It may be that we have some experiment that can be formulated in a few equations, such as the number of double CD's compared to the total number of CD's in a rack being 1:4, which may be written y/(x+y)=0.25 when y is the number of double CD's, and x the number of single CD's. As most know from high school math, this may be rewritten as y = 0.25 * (x + y), or 0.25*x + 0.25*y - 1*y = 0, or 0.25x - 0.75y = 0 .

Suppose the total number of CD's in the rack is 100; then we would write x+y=100. Now the whole system of equations looks like:

 0.25 x  -  0.75 y  =    0
 1    x  +  1    y  =  100

Most will be able to solve this 'system of equations', basically because it is not very hard to isolate for instance y as a function of x in the lower equation (y=100-x), which can be filled in for the upper equation's y, so that 0.25x - 0.75*(100-x) = 0. This can be solved by multiplying out and rearranging terms, gathering the x terms: 0.25x - 75 + 0.75x = 0, so x = 75, which can then be filled into the intermediate equation to obtain y = 25. High school stuff, but relevant, and generally applicable.

The idea is that the same principle can be applied to other problems as well, with more than two variables. We may add a ratio of single CD's x to CD singles, called z, so that for instance for every 10 normal CD's we have 1 CD single, which could be written as z=(1/10)x. Think over what that means for the distribution of CD singles as compared to the total number of CD's, including the double ones; that sort of thing we aren't getting into now. The only important mathematical observation is that the extra equation and the extra unknown (z, for the number of CD singles) can without any problem be included in a new set of equations, which can be written in matrix form:

 a11x + a12y + a13z = b1
 a21x + a22y + a23z = b2
 a31x + a32y + a33z = b3

In mathematical language, this can be written in shorter form: Ax=b, where A is the two-dimensional table of a(i,j) elements, with i,j from 1 to 3, x the vector of unknowns, with elements x,y,... , and b the vector (vertical list) of right hand side constant values b(k), with k = 1,2,... .

This is a convenient way of stating the problem for any number of unknowns and equations, and if those numbers are the same, the whole idea works with a square matrix, with i and j both running from 1 to n, n being any integer: 2 in the first case, 3 in the second example, and in principle as high as one thinks is needed.
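In C, such a system can be stored directly as a two-dimensional array and two one-dimensional ones. A minimal sketch of the data layout, using the same global names (m, b, x, di, dj) as the sweep function further down; the fixed size is just an assumption for the example:

#define N 5               /* number of equations and unknowns */

float m[N][N];            /* the coefficient matrix A */
float b[N];               /* the right hand side vector */
float x[N];               /* the solution vector, to be computed */
int   di = N, dj = N;     /* row and column counts, as the sweeps use them */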

The essential and interesting question, in both the mathematical and the practical sense, is: can we solve such a system of equations, and if so, how? The answer is that indeed in general such a problem can be solved by reliable mathematical means, and that for any real life problem, stated in sensible matrix form, there are likely to be practical solutions without too much effort to write the problem down in some special form. That is, the recipe for computing the solution vector x for a system of equations Ax=b can be applied even directly by computers, and will in most cases give a good answer.

Before going into this subject a bit: the idea is to clarify the use of this kind of math, and of computer programs that implement it. So I'm not going into proofs or a buildup as in a course; this is a diary page making clear what may be relevant to get a good idea about certain subjects, not a course combining general knowledge from high school to high level mathematics into a solid whole.

I'm qualified enough to write about this stuff, of course, because of my studies and work, and maybe for some because I'll make clear I can get this stuff to actually work, which is always a good way to show what one is capable of.

The idea of solving the system of equations I'll first focus on is one I wasn't aware of until university. It's centuries old and maybe not of utter brilliance, but certainly of interest, and still worth implementing in a computer program for practical applications. I think it may well act as proof that it in principle is possible to solve any system of equations fulfilling certain mathematical or practical conditions, though I'll not go into that subject explicitly here.

If we write equations under each other with the matching variables aligned, we may subtract one equation a number of times from another without changing the meaning or solution of the system of equations; that too is proven in high school math at various levels. Applying this principle, we may start by doing so cunningly enough to make zeros appear, to start with, in the first column, for every equation except the top one. Once we did that, we can use the second equation, which now has a zero in its first column, to make zeros in the second column for the third and following equations, and so forth for the remaining columns. In the end, we obtain what is called an upper triangular matrix, where the last equation can be solved easily, because only one unknown and the right hand side are left.

Since this is not a course, I'll only exemplify the principle. The following are the steps that lead to the upper triangular matrix, which is then made into what is called a diagonal matrix, which leads to the solution of the system and finding the unknowns.

read:
  2.022095  1.221924  6.993713  8.166504 -7.301331    3.98
  8.863831  3.313904  5.107117 -1.496887  7.209167   -2.37
  8.088074  2.564697 -7.567139 -5.213928  5.592957   -0.78
 -6.256409 -0.046997 -5.173035 -5.554810 -5.171814    7.75
 -0.202637  3.271790  9.818420  3.035889 -6.228333    7.06

Starting invert... 
  
 2.022095 1.221924 6.993713 8.166504 -7.301331   3.98
 0.000000 -2.042386 -25.549749 -37.294670 39.214470   -19.80
 -0.000000 -2.322814 -35.540936 -37.878712 34.797176   -16.69
 -0.000000 3.733664 16.465679 19.712545 -27.762302   20.05
 0.000001 3.394240 10.519270 3.854264 -6.960008   7.46
  
 2.022095 1.221924 6.993713 8.166504 -7.301331   3.98
 0.000000 -2.042386 -25.549749 -37.294670 39.214470   -19.80
 -0.000000 -0.000000 -6.483105 4.536664 -9.801595   5.83
 -0.000000 0.000001 -30.241543 -48.465439 43.925247   -16.14
 0.000001 -0.000000 -31.941845 -58.125729 58.210499   -25.44
  
 2.022095 1.221924 6.993713 8.166504 -7.301331   3.98
 0.000000 -2.042386 -25.549749 -37.294670 39.214470   -19.80
 -0.000000 -0.000000 -6.483105 4.536664 -9.801595   5.83
 -0.000000 0.000001 0.000001 -69.627480 89.646454   -43.33
 0.000001 -0.000000 0.000001 -80.477585 106.502335   -54.16
  
 2.022095 1.221924 6.993713 8.166504 -7.301331   3.98
 0.000000 -2.042386 -25.549749 -37.294670 39.214470   -19.80
 -0.000000 -0.000000 -6.483105 4.536664 -9.801595   5.83
 -0.000000 0.000001 0.000001 -69.627480 89.646454   -43.33
 0.000001 -0.000000 0.000001 -0.000004 2.886207   -4.07
  
 2.022095 1.221924 6.993713 8.166504 -0.000000   -6.33
 0.000000 -2.042386 -25.549749 -37.294670 0.000000   35.54
 -0.000000 -0.000000 -6.483105 4.536664 0.000000   -8.00
 -0.000000 0.000001 0.000001 -69.627480 0.000002   83.17
 0.000001 -0.000000 0.000001 -0.000004 2.886207   -4.07
  
 2.022095 1.221924 6.993713 0.000000 -0.000000   3.43
 0.000000 -2.042386 -25.549749 -0.000000 0.000000   -9.01
 -0.000000 -0.000000 -6.483105 0.000000 0.000000   -2.58
 -0.000000 0.000001 0.000001 -69.627480 0.000002   83.17
 0.000001 -0.000000 0.000001 -0.000004 2.886207   -4.07
  
 2.022095 1.221924 -0.000001 0.000000 -0.000000   0.64
 0.000000 -2.042386 -0.000001 -0.000000 0.000000   1.17
 -0.000000 -0.000000 -6.483105 0.000000 0.000000   -2.58
 -0.000000 0.000001 0.000001 -69.627480 0.000002   83.17
 0.000001 -0.000000 0.000001 -0.000004 2.886207   -4.07
  
  2.022095 -0.000000 -0.000001   0.000000 -0.000000    1.34
  0.000000 -2.042386 -0.000001  -0.000000  0.000000    1.17
 -0.000000 -0.000000 -6.483105   0.000000  0.000000   -2.58
 -0.000000  0.000001  0.000001 -69.627480  0.000002   83.17
  0.000001 -0.000000  0.000001  -0.000004  2.886207   -4.07

The principle applied to create a zero in an equation is to multiply another equation by exactly the amount that makes its coefficient above or below the target zero equal to that coefficient, and then subtract it. This works as long as the coefficients involved are not zero (otherwise rows would first have to be exchanged).
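For instance, in the first step above, the factor for the second row is 8.863831 / 2.022095, about 4.3835, and indeed 3.313904 - 4.3835 * 1.221924 gives about -2.042386, the second element of the second row after sweeping.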

The above example is a randomly generated matrix and right hand side, and the steps to solve this system of equations are shown, without explicitly computing the inverse, with every step shown on a column basis. The last matrix clearly can be inverted easily: every non-zero entry associates one variable with one right hand side value, so one can easily find for instance that x5 (the fifth component of the solution vector x) equals -4.07 / 2.886207 .

The printf formatting isn't perfect, just like the whole programming environment is far from bug free, but the results are clear enough. The stray minus signs are because the sweeping clean isn't perfect, and I don't use too many digits of accuracy: just single precision floating point numbers, which run fast enough and don't take too much storage.

The question is: is the result correct? I could of course use one of the simple examples that can be analytically worked out and compare the outcomes, but it is also possible to check whether the solution vector x is correct by simply filling it in. Let's see whether that works.

The solution vector x is:

x:
  0.663505
 -0.573577
  0.398530
 -1.194533
 -1.411151

Filling it into the 5 equations should give the correct right hand side, like in the original set of equations above. To see whether our solution is correct, we can subtract the right hand side we found from the original one, and see if there is much difference:

  0.000000
 -0.000004
 -0.000001
 -0.000001
  0.000000
the largest difference is less than one hundred-thousandth, so we did a good enough job.

The time it takes a bit of a computer to come up with this solution is incredibly small compared to human beings doing their math; on any reasonable processor this work takes a fraction of a second, usually a small fraction.

Since all this is done automatically in a program, the making of a random set of equations as well as the solution computation and the verify computation, it is possible to get an idea of how big a problem we can handle.

If we take 100 equations with 100 unknowns, the inversion computations take about 1 second on a not new machine. Keep in mind that 100x100 is 10.000 elements in the array, and quite some computations to be done. The complexity, or better put the number of computations needed to solve the system, grows faster than linearly: if we take twice as many equations, there are more than twice the number of computations. This is true in experiment: for a 200x200 system, inversion computer time is about 8 seconds, and 8 is exactly 2 to the third power, consistent with the order-three complexity mentioned further down.
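Measuring such timings needs nothing special; a minimal sketch using the standard clock() call from <time.h>, assuming the invertmats() function shown further down and a matrix setup of one's own:

   clock_t t0, t1;

   t0 = clock();
   invertmats();                    /* the sweep function shown further down */
   t1 = clock();
   printf("inversion took %.2f seconds\n",
          (double)(t1 - t0) / CLOCKS_PER_SEC);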

As an example of the accuracy that can be achieved for larger systems of equations without any precautions, such as more accurate intermediate computations, more accurate storage of the matrix and vectors, making sure the equations are such that they can for certain be inverted easily, and making sure the best ordering of computations is used, let's look at the accuracies for a larger system:

  1.698608  9.399109  0.552368  3.351440  6.129456 -8.223267  1.904297  0.273132  2.895813 -0.288086    3.84
 -2.085266 -1.605530 -7.650146 -6.210022 -3.510742 -0.033875 -9.981995 -3.999329  8.822937 -8.469543   -6.63
  6.819153  4.443665 -5.567627  8.793640  7.898254  4.538269 -9.102783  0.970154 -6.493225 -1.075439   -8.49
  4.243469 -0.712891 -9.624634  3.243713  0.910339 -0.664063  7.008972 -0.015869  7.236633 -8.038635   -2.29
 -8.652344  7.581482  7.403259  2.370300 -5.772400  6.961670 -4.228516  2.152405  7.078552 -7.290344   -7.18
  4.552612  3.898621  7.300110  1.058350 -7.253113 -6.896973  9.749756 -8.121643  7.289124 -2.960205    7.25
  7.779846  6.860352 -7.658691 -4.701843 -7.732849 -8.922119  7.663269  1.666260  2.064209 -9.887390   -0.94
  2.006531 -7.949524  4.315491 -0.061951 -7.536926 -1.883850  8.075562 -8.258667  5.709229 -8.661804   -2.01
 -0.337219 -1.124573  2.081604  7.152710 -6.809082 -1.871948  8.422546  3.888855 -5.285645  2.492981   -9.73
 -9.366760 -3.822937  3.421326  7.633972  1.869812  7.516174 -1.680908 -0.772705  7.322388 -8.312073    2.23

For those willing to check, the solution vector is:

  -8.246738
   5.698690
  -5.315927
  -1.912415
   1.494450
   2.869143
   5.187057
 -11.070212
  -6.525686
  -0.378249
which, when filled into the left side equations above and subtracting the original right hand side, gives the left column below. The right column is the more accurate version of the above right hand side, to compare the deviation or error percentage-wise:
  0.000004  3.841248
  0.000003 -6.628723
 -0.000026 -8.485107
 -0.000014 -2.293396
 -0.000529 -7.178040
  0.000251  7.247925
  0.000729 -0.940857
  0.000266 -2.007141
 -0.000237 -9.731445
 -0.000958  2.228394
The largest error is about 1/1000, which is not the worst, though it depends on the application whether such an error can be tolerated. The accuracy would probably be better by not assuming that a near zero is a zero during sweeping, and it is completely dependent on the structure or properties of the matrix system. It is quite possible that the random coefficients form a system that can in principle not even be solved, which depends on nothing else than mathematical considerations. For instance, when there is a zero on the diagonal in the final solution, and the right hand side is not zero, no solution can be given.

For natural problems, it can usually be proven that mathematically speaking there always is a solution, except that it may be a very error sensitive matrix equation, in which case the accuracy of the intermediate computations may have to be higher to not make too big an error appear in the end result.

For 100x100 systems and higher, the error sometimes goes up to a few percent with this experiment, which is not the worst outcome, but of course may not be good enough for certain computations. Again, that depends not primarily on the computations, but on the system's conditioning, in terms of the equations having a good solution at all, considering that they were made by randomizing coefficients.

Softwarewise it is reassuring of course that repeated runs of the program always give vectors that, when filled in, are not at all far off from a good solution, which is proof that there are no real errors in the computations, assuming of course the check is a reliable program, which isn't that complicated to oversee: a matrix times vector computation is just two loops and some array accessing and multiplications, and that sort of computation can be overviewed.
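To be precise about what that check is, here is a sketch of it, assuming the original matrix and right hand side were saved in a0[][] and b0[] before the sweeps overwrite them (those two names are assumptions of mine; x, di and dj are the globals of the function below):

   for (i = 0; i < di; i++) {
      float s = 0.0;
      for (j = 0; j < dj; j++)
         s += a0[i][j] * x[j];       /* row i of the original A times x */
      printf(" %f\n", s - b0[i]);    /* difference with the original rhs */
   }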

It took some more effort than needed to make the program work with the compiler I can use, because it seems to have a major problem with assignments combined with two-dimensional arrays; even though the C code was definitely normal, it sometimes even completely restarted the system. Which is bad, of course. Anyhow, I worked around various such errors, and at least reasonable C is the result, with working results, and sub-ten-second compile and link times, which is fine enough.

The sweep function is as follows:

void invertmats() {
   int n,i,j;
   float *y, p1, d1;

   for (n=0; n<di-1; n++) {      /* With all diagonal elements. */
      for (i=di-1; i>n; i--) {   /* Make zeros starting at the lowest row. */
         y = &m[n][n];           /* The pointer stuff is because of a definite */
         p1 = *y;                /* compiler bug: it actually hangs my whole */
         y = &m[i][n];           /* machine when doing a 2d array assignment */
         d1 = *y;                /* to a variable. Bad, bad. This works, though */
         d1 /= p1;               /* This is the factor to multiply the eq with */
         for (j=n; j<dj; j++) {  /* Now multiply all relevant row elements */
            y = &m[n][j];
            p1=*y;
            m[i][j] -= d1*p1;    /* subtract the higher equation */
         }
         /*   m[i][j] = m[i][j] - p * m[n][j]; */

         b[i] = b[i] - d1 * b[n]; /* and don't forget the right hand side */
      }
   if (0) printmats("  ");              /* do a matrix print, for small mats */
   }
                                 /* Now the sys is upper tri */
   for (n=di-1; n>0; n--) {      /* sweep clear the upper tri */
      for (i=n-1; i>=0; i--) {   /* vertical stripes of zeros */
         y = &m[n][n];
         p1 = *y;
         y = &m[i][n];
         d1 = *y;
         d1 /= p1;
         m[i][n] -= d1*p1;       /* make use of the existing zeros */
         b[i] = b[i] - d1 * b[n]; /* and the right hand side of course */
      }
   if (0) printmats("  ");
   }
                                 /* Now we have a diagonal matrix */
   for (i=0; i<di; i++) {
      y = &m[i][i];
      p1 = *y;
      x[i] = b[i] / p1;       /* write solution vector elements */
   }
   return;
}
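
The function operates on file-scope globals that are not shown; a minimal sketch of a driver it could run with, using the declarations sketched earlier on this page (the fill routine is left open, it could be random values as in the examples):

int main(void) {
   int i;
   /* fill m[][] and b[] here, for instance with random coefficients */
   invertmats();
   for (i = 0; i < di; i++)
      printf(" %f\n", x[i]);        /* print the solution vector */
   return 0;
}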

It might not speak for itself, but for experienced enough programmers it should be quite readable. It may be noted that I made the sweeping work as I remember from when I did exercises in linear algebra or electrical networks that required this kind of solution of a system of equations: in humanly logical order, which is just a matter of the indexes running in the right direction. It could make for a good graphical demonstration this way, for whoever would have an interest.

The three main sections are the upper triangularisation sweep, the diagonalisation sweep, and the solution vector computation.

What about those ^#% matrices on my *&^$% diary pages ?

In fact, to make some things clear that I have been and am into, though for the latter it is only the fundamentals, not so much the real content. It makes clear what is important thinking in certain areas of engineering and science that have more than a little validity: there are quite some problems that can be written and solved as a system of equations, and it therefore is of more than a little value to be able to deal with them. The fact that a general enough solution method exists makes things a lot more applicable, even though this solution method is not generally the preferred one. It has a complexity of order three: for every unknown, two sweeps are made over many equations, which quickly runs one into a lot of computations. Also, there are many problems that yield matrices that are not completely filled, that is they have a certain structure with a lot of zero elements over the whole, which makes it wise to consider that fact and adjust the type of solution for such problems.

Also, inverting a system of equations is not the only thing engineers and scientists have an interest in; for instance computing eigenvalues is another interesting task, which is concerned with the way a vector x is mapped onto a multiple of itself by the system of equations.

Also, it may be that a problem is so big that, even though essentially what the solution requires is the solution of a large set of equations, we would like to use a method that doesn't require us to make the whole matrix, and still get a good estimate of the solution. Mind that the vectors x and b are n elements; let's say a million elements would fit on a computer, storing 4 or 8 bytes per element, which on modern machines is possible. The accompanying matrix would be a million squared elements, which at 4 bytes each would be 4 thousand gigabytes, the size of a not small hard disc.

And considering every element gets accessed a number of times related to the matrix size, it would take forever to invert this matrix even over a 160 megabyte per second ultra wide fast SCSI interface: just reading those 4 thousand gigabytes once already takes about 7 hours at that rate. Undoable.

In this case, and when an accurate solution is not as needed as a fast estimate, approximation methods can be applied, which are usually iterative. The graphics I've occupied myself with are a good example; in particular one of the research areas in the field of what is called global illumination.

The problem is a room with surfaces that reflect energy arriving at them from a light source. Assuming we can somehow compute how much of the light power that arrives at a surface ends up, after reflection, on all other surfaces, which indeed is possible, though not trivial, we would like to know what the illumination of every surface in a scene in computer memory is after we compute what happens when the light is put on.

In short, we can start with computing the matrix that relates the amount of energy coming from every surface in a certain amount of time to the new amount of energy at the next time step. The surfaces in the space, represented by the elements of a vector, will be reflecting an amount of light that becomes clear when this transfer of energy is repeated many times, which is the equivalent of a repeated matrix multiplication with, let us call it, the light propagation matrix. The eventual light distribution can then be found by repeatedly applying the reflection operator, written as a matrix, to the initial light vector, in which the light sources are recorded as energy emitters.

The idea is that using non-trivial mathematics, Neumann series inversion, known from physics, can be used to turn this problem of the repeated reflection into a system of equations with one matrix: by taking the limit of the repeated reflection operator to infinity, making observations about the terms, which can mathematically be computed in their infinite sum, and rewriting the reflection operator equation by reordering terms, observing various properties of the systems of equations in it. The result is the more than famous enough radiosity equation, which is a formulation of the form Ax=b, where the elements of matrix A can be computed using form factors, and where b is the emitted light vector. Then inversion is needed to compute the illumination or brightness of every surface in a virtual graphics scene.
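In the usual notation, with symbols that are the standard ones rather than from my program, the reasoning is:

   B = E + R F B                                 (light = emitted + reflected)
   (I - R F) B = E                               (gather the B terms)
   B = (I - R F)^-1 E
     = (I + R F + (R F)^2 + (R F)^3 + ...) E     (Neumann series)

where B is the vector of surface brightnesses, E the emission vector, R the diagonal matrix of reflection coefficients and F the form factor matrix; every term in the series corresponds to one more bounce of the light.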

That idea is fine, and definitely interesting, but a large scene easily has 10.000 or more surfaces, up to a million without really pushing things. That would mean the system of equations would be a million by a million, which as we have just seen is far too big to solve or mostly even store. There are workstations costing major money on which such a thing can be tried, but even then, this is far from practical.

In this case the solution is that the reflection matrix can be computed for a certain reflection step, and can be limited by making more elements zero than there would in fact be in an accurate computation, and that partial reflection matrix, in an extreme case just one column of it, and even then not complete, is applied to the data in the right part of the light vector. When done right, these quite remarkable simplifications can lead to solutions within percents for major parts of the lighting solution in quite doable computer time, even for quite big light vectors.
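To give an idea of what such an iterative method looks like in code, here is a sketch of a Jacobi-type step for Ax=b, which converges when the matrix is diagonally dominant, as the radiosity matrix is; the names are assumptions of mine, and this is only the simplest iterative relative of the progressive methods just described:

#define MAXN 1000

/* One Jacobi iteration pass for a system a x = b of size n: solve each
   equation for its own unknown, using the previous estimate xold for
   all the others. Repeating this makes xnew settle towards the solution. */
void jacobi_step(int n, float a[][MAXN], float b[], float xold[], float xnew[])
{
   int i, j;
   for (i = 0; i < n; i++) {
      float s = b[i];
      for (j = 0; j < n; j++)
         if (j != i)
            s -= a[i][j] * xold[j];  /* move the known terms to the right */
      xnew[i] = s / a[i][i];         /* solve equation i for unknown i */
   }
}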

The subject is still an interesting one; I may find some time to deal with it a bit more elaborately, since it is relevant also for other subjects than graphics, and scientifically quite valid.

Real men ?

No, nothing much to do with it, is the upfront answer. The association is because of a certain appeal I've regularly felt for the kind of solution methods where mathematics is applied in a way that is not pitiful, and of course that is successful.

Modern physics, let's say starting with the nuclear physics of the beginning of the twentieth century, is an example of such a way of going about things. In terms of mathematical formulation, the starting point is to assume that we can make vectors that are as big as we want, even infinitely big, containing whatever we want, for instance even a complete function per element. That would mean we have a symbol that represents an infinitely long list of function descriptions. Then the same idea can be used with matrices: we can make them 10x10, 1000x1000, and even bigger, as long as we specify them somehow in a general way that doesn't require us to actually fill in all possible elements.

The next step is to take such thoughts as seriously as we take the computations on little matrices: matrix multiplications, inversion, types of matrices, such as projection or rotation matrices, and types of operations represented in these matrix forms. For instance, there is nothing fundamentally against saying that the principle made clear above could be applied to a matrix of infinite dimensions; it would just take forever to actually do it.

The reason for doing so is that it makes it possible to reason mathematically with symbols that have meaning, but cannot easily be filled in. Assuming such reasoning makes sense, and it does, we may reason about relevant problems stated in these terms, and make mathematically founded derivations. Thinking about the light propagation matrix, we may say that there are infinitely many little surfaces in a graphics scene, and then either give up because then there is nothing to compute, or try to find out what we still can say in the face of such complexity.

For instance, we may know that no matter what, the amount of light never increases over a reflection operator, and when there are no perfect mirror surfaces, the total amount of light, which we may calculate by summation or integration, will always decrease with every reflection step. Also, reflection coefficients will not be negative. Furthermore, the structure of the reflection matrix can be related to the possible sources and receivers of a certain light energy transport path; for instance surface elements that cannot 'see' each other will be related by zero entries in the light propagation operator.

There are a lot more mathematical properties that are considered basic material for all kinds of computations, which may be applied in general, that is without regarding too much what the operators and formulas actually stand for. When they have enough power in the mathematical sense, or happen to apply to certain problem solutions, engineers or scientists become more happy or effective.

The reflection matrix concept can also be used as a model for sound reflecting through a space, where the reflections are the amount of sound travelling from one surface to another, by waves progressing through the air, intermixed with the reflections on various surfaces being scattered back into air motions in various directions.

In a matrix in normal form, we should record both where each sound is going to from every starting direction, as well as the time it takes to get there, which makes it harder to use ordinary matrices. Before using the idea electrical engineers are quite used to, complex numbers as matrix and vector elements, or more advanced ideas still, let's think about what this problem is about.

The air in a space could be seen as a 3 dimensional field of continuously changing pressure, represented by a scalar 'pressure' at every point (x,y,z) in the space considered, measured in pascals or something. The pressure could be considered to change as the result of pressure gradients: different pressures at different points in space will cause air to flow in the directions perpendicular to the equi-pressure lines, from high to low pressure.
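In one dimension, the simplest discretisation of that idea is a pair of update loops: pressure differences push the air, and differences in air flow change the pressure. A sketch with made-up names and constants, not a finished acoustic simulator (c1 and c2 would follow from the grid spacing, time step, air density and compressibility):

#define NP 1000

float p[NP];                         /* pressure at each grid point */
float v[NP];                         /* air velocity between neighbouring points */

void timestep(float c1, float c2)
{
   int i;
   for (i = 0; i < NP - 1; i++)
      v[i] += c1 * (p[i] - p[i + 1]);    /* flow from high to low pressure */
   for (i = 1; i < NP - 1; i++)
      p[i] += c2 * (v[i - 1] - v[i]);    /* net inflow raises the pressure */
}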

Now suppose we have certain types of walls, and a speaker emitting some kind of sound, which of course at the point of the speaker will cause pressure changes that will propagate through space; what will be the resulting sound at some point in that space?

That's a problem that can be solved by making a good enough model and doing a lot of computations, but like in the graphics problem, doing all computations just like in the physical model, without simplifications or model sophistications, will lead to massive amounts of computations for a good idea of the sound somewhere in the potential listening space.

For instance, the highest frequencies of the audio spectrum are in the range of 300 (m/s) / 15.000 (cycles/s), which is a wavelength of about 2 centimeters per cycle, so if we have a space of the order of 10 by 10 by 10 meters, we need a number of pressure points of at least (10 x 100 / 2) to the power 3, which is about 125 million. Times at least a few bytes for each pressure, roughly half a gigabyte, this would maybe fit in the memory of a large enough computer.