I got this link in a lecture; maybe it will be useful for someone:
Basically, compressed sensing means you take a huge matrix and look for a huge sparse vector that, multiplied with it, reproduces the readings your sensors have received. It's an optimization problem: among all vectors that fit, you want the one with the fewest non-zero entries. (A sparse vector is one where most entries are zero, which also calls for a different way of storing it.) Done right, you can compress a big input into a sparse vector using relatively little data.

For example, in face recognition the input is a big pile of pixels from a photo, and the output is a short list of reference photos, drawn from a huge set covering different people, angles, and lighting, that can be combined to reconstruct that input. Keep in mind this is not just a system of linear equations with one unique answer: the system is underdetermined, so there are many possible solutions, and you are looking for the one with the smallest number of non-zero entries in a huge vector. So when you have 20 photos of your face stored, you hope the solution for a new photo of you draws only on those 20 photos and no others; then you can be fairly confident it's your face.

Solving that optimization problem does involve some sort of random number generator (typically for choosing the sensing matrix), but at least the whole process can be parallelized.
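To make the optimization concrete, here is a minimal sketch in NumPy. It uses orthogonal matching pursuit, a greedy stand-in for the exact "fewest non-zero entries" minimization described above; the matrix sizes, the sparsity level, and the random seed are all arbitrary choices for illustration, not anything from the link.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A
    that best explain y, then least-squares fit on those columns."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares solve restricted to the chosen columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3                          # far fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x_true = np.zeros(n)                          # the huge sparse vector we want back
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # the "sensor readings"

x_hat = omp(A, y, k)
print(np.max(np.abs(x_hat - x_true)))         # recovery error
```

Note how the recovery works from only 40 measurements of a 100-entry vector: the random matrix plus the sparsity assumption is what makes the underdetermined system solvable. Each greedy iteration is independent linear algebra on columns of `A`, which is the part that parallelizes well.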