How does this compare to LDPC?
Memoscale is a 6-person Norwegian startup, based in Trondheim, that has developed its own erasure coding (EC) technology. It says it's more efficient than classic erasure coding because it needs fewer hardware resources to run and enables higher storage capacity utilization. All six staff members founded the company in 2015, …
LDPC detects and corrects errors within a block caused by noise in the channel. It helps to have an upper estimate of the actual noise.
Erasure codes recover missing blocks, generally lost to sector errors in storage components. They are measured by the number of failures they can tolerate, their storage overhead, and their reconstruction cost/speed. There are trade-offs between the three, but generally you choose your mean time to data loss (MTTDL), which gives you the number of failures to tolerate, and then you trade off storage overhead against reconstruction time.
A storage overhead of 1.2 would appear to push up reconstruction cost and decrease MTTDL, but it's a valid point in the 3D trade-off space.
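As a rough illustration of that trade-off (my own back-of-envelope numbers, assuming a plain Reed-Solomon-style MDS layout, nothing vendor-specific): with k = 10 data fragments and m = 2 parity fragments you get exactly the 1.2 overhead, you tolerate any 2 failures, and repairing one lost fragment means reading k fragments' worth of data.

    # Back-of-envelope trade-off numbers for a (k data + m parity) erasure code.
    # Illustrative only; assumes a plain Reed-Solomon-style MDS layout.
    def ec_tradeoff(k, m, fragment_gib=1.0):
        n = k + m
        overhead = n / k                     # raw storage used per unit of user data
        failures_tolerated = m               # MDS: any m lost fragments are recoverable
        repair_read_gib = k * fragment_gib   # classic RS repair reads k surviving fragments
        return overhead, failures_tolerated, repair_read_gib

    for k, m in [(10, 2), (10, 4), (6, 3)]:
        o, f, r = ec_tradeoff(k, m)
        print(f"k={k} m={m}: overhead={o:.2f}, tolerates {f} failures, "
              f"repair reads {r:.0f} GiB per lost 1 GiB fragment")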
Both LDPC and erasure codes (at least pyramid ones) can be described with Tanner graphs.
"Classic" Reed-Solomon is MDS, i.e. mathematically proven to have the lowest possible storage overhead. So if they claim to do better, they'd better back it up with hard facts, because it would imply a new kind of maths.
Sounds more like they have built a "locally repairable" code, i.e. a code with a slightly higher storage overhead than MDS but which requires less data to reconstruct. This is not necessarily new (Microsoft, Facebook and Dropbox have been quite open about the fact that they use such codes), but it can be interesting anyway.
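For anyone unfamiliar with the idea, here is a minimal sketch of local repair in Python. The grouping and parameters are made up for illustration, loosely in the spirit of the published Azure/Facebook LRC designs, and say nothing about what Memoscale actually does; the point is only that a local parity lets a single lost data fragment be rebuilt from its small group instead of reading k fragments as plain Reed-Solomon would.

    # Toy locally-repairable-code sketch: k data fragments are split into groups
    # and each group gets one XOR "local" parity. A single lost data fragment is
    # then rebuilt from its own group instead of reading k fragments as plain
    # Reed-Solomon would. Illustrative only (no global parities, no GF maths).
    from functools import reduce

    def xor(fragments):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*fragments))

    def make_local_parities(data, group_size):
        groups = [data[i:i + group_size] for i in range(0, len(data), group_size)]
        return [xor(g) for g in groups]

    def repair_one(data, parities, lost_idx, group_size):
        g = lost_idx // group_size
        survivors = [f for i, f in enumerate(data)
                     if i // group_size == g and i != lost_idx]
        return xor(survivors + [parities[g]])  # reads group_size fragments, not k

    # 12 data fragments in groups of 6: repairing one fragment reads 6, not 12.
    data = [bytes([i]) * 4 for i in range(12)]
    parities = make_local_parities(data, 6)
    assert repair_one(data, parities, lost_idx=3, group_size=6) == data[3]

The price is the extra local parities (slightly higher overhead than a pure MDS layout), and multiple failures inside one group still need global parities, which the toy above omits.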
The values given for recovery speed also do not make sense. Modern implementations of Reed-Solomon do multiple GB/s.
(and LDPCs become mainly interesting in the error-correcting case, while this seems to be mostly about erasures, i.e. data that is lost but not corrupted)
The code is MDS, like Reed-Solomon, so any redundancy fragment can always stand in for any lost data fragment during recovery. On average the code needs around 1/3 of the recovery traffic of Reed-Solomon.
The library itself can do over 10 GB/s of encode and about 60 GB/s of decode on a single core (Intel processor). The numbers above are with a 1 GB/s network bottleneck, which is very low; with a faster network you will see improved recovery speeds.
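For a rough sense of what that 1 GB/s bottleneck means (my own arithmetic, with an assumed 4 GB fragment and k = 10, not Memoscale's figures): if repair is purely network-bound, then reading a third of the data should cut repair time by roughly a factor of three.

    # Rough, network-bound repair-time estimate. Assumes repair is limited by the
    # 1 GB/s link mentioned above and that the only difference between the codes
    # is how much data they read: classic RS reads k fragments, a code with ~1/3
    # the recovery traffic reads about k/3. Fragment size and k are made up.
    def repair_seconds(fragment_gb, k, traffic_fraction, link_gb_per_s=1.0):
        return k * fragment_gb * traffic_fraction / link_gb_per_s

    frag_gb, k = 4.0, 10
    print("RS-style repair:   %.0f s" % repair_seconds(frag_gb, k, 1.0))   # ~40 s
    print("1/3-traffic code:  %.0f s" % repair_seconds(frag_gb, k, 1/3))   # ~13 s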