It's not that hard to understand.
Just take a simple example: we examine people leaving a pub and note, for everyone, the amount of beer ingested and the amount of pee left in front of the pub. Clearly these two factors have some sort of connection. To get a grasp of it, one can build a special 2-by-2 matrix from the gathered data, a Covariance Matrix. Real-world examples involve a lot more than two factors, so the Covariance Matrix gets very big. Now, this Matrix can also be seen as a function that, when applied to some data, yields some results. As it happens, people tend to have a Covariance Matrix and a lot of those results, and they want to know what the input was. So they want the inverse Covariance Matrix (which corresponds to the inverse function).
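If you want to poke at it yourself, here's a tiny Python/numpy sketch, with made-up beer/pee numbers, that builds that 2-by-2 Covariance Matrix and its inverse:

    import numpy as np

    # Made-up measurements per person: litres of beer in, litres of pee out
    beer = np.array([1.0, 2.5, 3.0, 4.5, 5.0])
    pee  = np.array([0.5, 1.0, 1.5, 1.8, 2.4])

    data = np.vstack([beer, pee])   # 2 x N, one row per factor
    cov = np.cov(data)              # the 2-by-2 Covariance Matrix
    cov_inv = np.linalg.inv(cov)    # the inverse (a.k.a. precision) matrix

    print(cov)
    print(cov_inv)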
Up to now the time for calculating this inverse Matrix grew (roughly) with the cube of the size of the original matrix.[1]
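You can see that cubic growth with a quick-and-dirty timing loop (assuming numpy; the exact numbers depend on your BLAS build and core count, but doubling n should cost roughly 8x):

    import time
    import numpy as np

    for n in (500, 1000, 2000):
        a = np.random.rand(n, n) + n * np.eye(n)   # well-conditioned test matrix
        t0 = time.perf_counter()
        np.linalg.inv(a)
        print(f"n={n}: {time.perf_counter() - t0:.3f} s")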
The IBM boffins found a way to calculate the inverse for which the required time corresponds to the square of the input size (by estimating a matrix property and using it for an easier calculation). And as a nice plus, the new algorithm scales pretty well on massively parallel machines.
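I haven't read the paper, so the following is NOT their algorithm. It's just the classic Newton-Schulz iteration, to illustrate the general idea of trading an exact inverse for an iterative approximation built out of matrix multiplications, which is exactly the kind of workload that spreads nicely over parallel hardware:

    import numpy as np

    def approx_inverse(a, iterations=30):
        """Newton-Schulz iteration: X <- X (2I - A X), converges to inv(A)."""
        n = a.shape[0]
        # Scaled transpose as starting guess guarantees convergence
        x = a.T / (np.linalg.norm(a, 1) * np.linalg.norm(a, np.inf))
        two_eye = 2.0 * np.eye(n)
        for _ in range(iterations):
            x = x @ (two_eye - a @ x)
        return x

    cov = np.array([[2.6, 1.1],
                    [1.1, 0.5]])
    print(approx_inverse(cov) @ cov)   # should be close to the identity

Each step is just matrix multiplications, so a big parallel machine chews through it happily; the clever part in the paper is presumably the "estimating a matrix property" bit, which this sketch doesn't even attempt.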
(At least that's what I read from the abstract; don't have time to look into the paper right now.)
[1] That's why the abstract says "Cubic complexity", not "Qubit ..." (It doesn't involve Quantum Computing. Yet.)