According to the researchers' site, Apple has patched it. I haven't read the paper in detail yet, but it's a nice idea: "Fig. 1 (a) presents the raw gyroscope measurements collected by the two devices. From the figure, we can clearly observe the quantization. This is because the outputs of the gyroscope ADC are integers. Taking the difference between two sensor readings directly reveals the gain of the sensor. According to Equation 2, the difference between two measurements, can be calculated as".

Then goes on to:

~ΔA = round(G_0^-1 ΔO)

Where ~ΔA is their estimate of the change in the raw sensor readout, G_0 is an initial guess at the gain matrix, and ΔO is the change in the reported gyroscope output. From there you can recursively estimate the actual calibration matrix G.
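To make the idea concrete, here's a minimal sketch of the attack in one dimension (the paper works with a full gain matrix; the scalar gain, sample counts, and the ~0.3% gain offset here are all illustrative assumptions of mine, not the paper's numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model: raw ADC counts A are integers, and a
# malicious app only sees the calibrated output O = G * A.
G_true = 0.0174533 * 1.003              # assumed per-device gain, ~0.3% off nominal
steps = rng.integers(-50, 51, size=2000)
A = np.cumsum(steps)                    # integer ADC counts over time
O = G_true * A                          # what the app can read

G0 = 0.0174533                          # nominal gain: the initial guess G_0
dO = np.diff(O)                         # differences between successive outputs

# Because A is integer-valued, rounding dO / G0 recovers the exact
# integer ADC differences, as long as G0 is close enough to G_true
# that the rounding error stays under half a step.
dA_hat = np.round(dO / G0)

# One least-squares refinement: fit dO = G * dA to estimate the gain
G_hat = np.dot(dO, dA_hat) / np.dot(dA_hat, dA_hat)
```

With the guess within a fraction of a percent of the true gain, the rounding step recovers the ADC differences exactly, and a single least-squares fit lands right on the device-specific gain.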

You may not need to add much fuzz to defeat this version of the attack, since it relies on the quantisation of the output data (effectively, measuring the smallest distance between non-identical outputs). It does take a least-squares approach to its repeated fitting, though, and a more statistical variation of the same idea might overcome simple fuzzing. Apple's fix apparently follows the authors' suggestion to apply random noise uniformly distributed over the discrete step width. That defeats the straightforward rounding step that lets you into ~ΔA, and from there to an estimate of the gyroscope calibration G. But clever use of something like a Kalman filter to track the device motion might let you start to average out that noise, since the noise is at the sampling frequency and real motion changes are unlikely to be.
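Extending the same toy scalar model (again my own illustrative numbers, not Apple's), adding uniform dither one ADC step wide before calibration breaks the rounding step: the difference of two independent dithers spans a full ±1 step, so a large fraction of the rounded differences come out wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical scalar model as before, now with the suggested
# mitigation: uniform dither over one ADC step, added before the gain.
G_true = 0.0174533 * 1.003
G0 = 0.0174533
A = np.cumsum(rng.integers(-50, 51, size=5000))
dither = rng.uniform(-0.5, 0.5, size=A.shape)   # one quantization step wide
O = G_true * (A + dither)                       # dithered, calibrated output

dO = np.diff(O)
dA_hat = np.round(dO / G0)                      # the attack's rounding step
dA_true = np.diff(A)

# The dither difference is triangular on [-1, 1] steps, so roughly a
# quarter of the rounded differences are now mis-rounded.
err_frac = np.mean(dA_hat != dA_true)
```

With the rounding step poisoned like this, the recovered "integer" differences no longer satisfy dO = G * dA for any single G, which is what kills the straightforward recursive fit.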

Edit: adding uniform noise at the level of the readout steps shouldn't hurt accuracy much, though it will a little, since it raises the output noise floor. But if statistical techniques can beat the added noise, this may turn into a trade-off between obfuscation and calibration accuracy.
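A back-of-the-envelope sketch of why averaging could claw back precision when motion is slow relative to the sampling rate (the step size and sample counts are illustrative, not Apple's actual values): uniform dither over one step has standard deviation step/sqrt(12), and averaging N independent samples shrinks that by a further factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

step = 0.0174533                 # assumed size of one output quantum
n_runs, n_avg = 10000, 64

# Uniform dither over one step: std = step / sqrt(12) ~ 0.29 * step.
dither = rng.uniform(-step / 2, step / 2, size=(n_runs, n_avg))
averaged = dither.mean(axis=1)   # average 64 samples per estimate

single_std = dither.std()        # noise on one reading
avg_std = averaged.std()         # noise after averaging: ~8x smaller
```

So at a few hundred hertz of sampling, an attacker who can assume near-constant angular rate over short windows gets an 8x noise reduction from 64-sample averages, which is the sense in which the obfuscation-versus-accuracy trade-off could reappear.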