Reading this: "After spending years in college studying to be an electrical engineer, he started his career as a lab tech at Rayovac. From there, he worked his way up through manufacturing companies." As far as I can see, it implies that he never actually graduated from university with an engineering degree, so the best I'd say is that he is a highly skilled technician, given his resume. We'd never regard someone as a doctor who never graduated from medical school, so why should we consider him an engineer, no matter how much work experience he has?
24 posts • joined 22 Sep 2013
When is an electrical engineer not an engineer? When Arizona's state regulators decide to play word games
BSD? That implies two things: (1) availability of ZFS, which I assume you can use on a boot drive as well by now, and (2) OS installations are usually fairly small, several tens of GB, i.e. a small portion of the disk they're installed on. The combination of those two things means that if you can somehow specify "copies=2" when you do the OS install using ZFS, you get redundancy and automatic repair of all OS files, so a corrupt file should no longer occur.
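For what it's worth, on a root-on-ZFS system that's just a dataset property. A sketch of the idea (the pool and dataset names here are made-up examples, and exactly where the installer lets you set it varies by BSD):

```shell
# copies=2 stores every block twice, so "zpool scrub" can repair
# single-block corruption even on a single-disk boot drive.
zpool create -O copies=2 rpool ada0p3     # set at pool creation, or...
zfs set copies=2 rpool/ROOT/default       # ...on the existing root dataset
zfs get copies rpool/ROOT/default         # verify
```

It costs double the space for anything written to that dataset, but for a few tens of GB of OS files that's usually an acceptable trade.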
Distillation not an option?
I would have thought that distillation in a specialized multi-stage flash unit, like those used in the Gulf to make fresh water from sea water, would be the best way to go. You can boil off whatever water remains, and all you're left with is solid residue, potentially a large mix of radioactive elements, which you can then simply bury somewhere like other contaminated nuclear waste. I believe you only need a high single-digit number of kWh to make one kilolitre of fresh water this way, so even a swimming pool's worth of contaminated water would need less energy than is required to make one tonne of aluminium, which is about 14,000 kWh. These robots would probably only be useful for a contaminated pond or lake with a huge amount of water.
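To sanity-check the energy claim (the per-kilolitre figure and the pool volume below are my assumptions, not established numbers):

```shell
# Assumed: ~9 kWh per kilolitre for multi-stage flash distillation,
# a ~50 kL backyard pool, and ~14,000 kWh to smelt one tonne of aluminium.
awk 'BEGIN {
    kwh_per_kl = 9
    pool_kl    = 50
    alu_kwh    = 14000
    printf "distilling the pool: %d kWh (vs %d kWh per tonne of Al)\n",
           kwh_per_kl * pool_kl, alu_kwh
}'
```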
Re: Microsoft part 2
Regarding "Neither filesystem takes very kindly to being physically ejected without being unmounted first, I would have thought a fairly basic practical requirement for any portable filesystem."
Perhaps instead we should use some open, pre-v29 version of ZFS. It's probably even worse at "doesn't take very kindly to being physically ejected without being unmounted first", but thanks to its superior features, like checksumming and birth-time stamping of each and every block, it could repair itself quite quickly once it's mounted and brought back online.
Several years ago researchers got to 220TB for BaFe tape https://www.fujifilmusa.com/products/tape_data_storage/innovations/barium_ferrite/index.html and then IBM got to 330TB with sputtered media https://arstechnica.com/information-technology/2017/08/ibm-and-sony-cram-up-to-330tb-into-tiny-tape-cartridge/ and I've also read of strontium ferrite as a future replacement for barium ferrite, so who knows where the limit is.
Re: So how come ...
Actually it was the change from GMR heads to TMR heads, which only IBM makes as they have several years' experience with TMR technologies http://www.insic.org/news/2012Roadmap/PDF/24_Roadmap%20-%20Heads%20-%20FormattedV5.0.pdf Up to LTO-7 the drive heads were GMR and could read either type of tape media, whereas LTO-8+ heads are TMR, and apparently MP tapes trash TMR heads in short order. LTO drives can't tell what type of tape is inserted, and since LTO-6 cartridges can use either MP or BaFe media, the LTO consortium decided LTO-8 drives won't accept LTO-6 tapes at all: a drive can't distinguish an LTO-6 MP tape from an LTO-6 BaFe tape, so the only safe course of action was to deny backwards compatibility and program LTO-8 drives to reject all LTO-6 cartridges.
Re: Off-site backups
If you look at this document https://www.lto.org/wp-content/uploads/2018/03/LTO_Media-Shipment-Report__CY17.pdf you'll see on page 4 that tape capacity shipped is going up every year, while on page 6 the number of cartridges sold is slowly declining; I presume that's because individual tape capacity doubles every generation. You might be right about the "guy on a skateboard" if they're carrying 1 PB of data sometime in 2026 stored in five 192TB LTO-12 cartridges. https://en.wikipedia.org/wiki/Linear_Tape-Open
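Quick check on that skateboard payload (192TB per LTO-12 cartridge is the consortium's roadmap figure, not a shipping product):

```shell
awk 'BEGIN {
    cart_tb = 192                                      # roadmap LTO-12 native capacity
    printf "%d TB in five cartridges\n", 5 * cart_tb   # just shy of 1 PB
}'
```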
Re: Sure this will be great on the long term
"The report showed a record 108,457 petabytes (PB) of total tape capacity (compressed) shipped in 2017" https://www.lto.org/wp-content/uploads/2018/03/LTO-Shipment-Report-Release_2017-FINAL.pdf so even with 10TB hard drives you would need 10,845,700 of them, plus the servers and data centres to hold them. It does say "compressed", so roughly halving that number it's still about 5 million drives for that year alone, and total tape capacity shipped is increasing each and every year. LTO-9, with 24TB uncompressed cartridges, may come out at the end of this year.
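The drive-count arithmetic, for anyone who wants to check it (the 2:1 halving is my rough assumption for the compression factor):

```shell
awk 'BEGIN {
    shipped_pb = 108457                 # 2017 compressed capacity shipped
    drive_tb   = 10                     # assumed hard drive size
    drives = shipped_pb * 1000 / drive_tb
    printf "%d drives at face value, ~%d assuming ~2:1 compression\n",
           drives, drives / 2
}'
```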
Re: 51st state
My understanding is that anyone born on US soil automatically becomes a US citizen regardless of their parents' status. If born elsewhere to at least one US citizen parent, all that parent has to do is apply for citizenship status for their child by supplying documentation of birth and custody, and it's then granted.
One thing I hate: to destroy a ZFS snapshot the command is "zfs destroy [email protected]", whereas to destroy the pool itself it's "zpool destroy tank", so it would be really easy to make a very bad mistake, given the pool one is immediate and issues no warnings. I don't know whether including the snapshot name in the zpool command would still work; probably not, since the syntax wouldn't be correct, but you can just imagine some bad cut-and-paste job being executed by accident. I have no idea why the original ZFS authors didn't choose a distinct word for one of them instead; I'd probably have preferred changing the pool one to something like "obliterate" given the level of damage it can do. Surely the American ZFS designers have access to the same full range of English words as anyone else.
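Lacking a rename, one workaround is a wrapper that forces you to retype the pool name before the destructive command runs. This is a hypothetical sketch of my own, not anything shipped with ZFS (it only echoes the command it would run):

```shell
# Hypothetical guard: refuse to destroy a pool unless its name is retyped.
confirm_pool_destroy() {
    pool="$1"
    printf 'Retype the pool name to destroy "%s": ' "$pool" >&2
    read -r answer
    if [ "$answer" = "$pool" ]; then
        echo "zpool destroy $pool"   # a real wrapper would execute this
    else
        echo "aborted"
    fi
}
```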
Can you quantify that "extra strain"? I can only see two potential problems: head load/unload cycles, and thermal shock from going from cold to hot and back again. The first is not a problem, as even consumer PC drives are rated for at least 400K to 600K head unload cycles, so a daily shutdown only accounts for 365 of these a year. The exception is aggressive APM spinning down idle drives, in some cases after just 8 seconds, which has let some people rack up hundreds of these cycles in a matter of hours, especially on external hard drives; enterprise drives are usually rated for more unload cycles than consumer drives even though they need them less. Secondly, I haven't seen any statistics from tests where drives are fired up for maybe half an hour to warm to working temperature, then turned off for half an hour to cool, with the cycle repeated until failure, so I presume that's a non-issue too, unless you can point me to some research somewhere.
Re: why didn't WD/HGST put a jumper in
I tend to agree. Back when Seagate announced they were ceasing production of their 5400 RPM green drives, the two reasons I remember them quoting were that the faster drives they had available only used something like 0.5 watts more power, which they considered insignificant, and that there were too many SKUs. Naturally, after a month or two's absence, they suddenly and magically had 5400 RPM "NAS" drives available for sale in quantity, and of course at a much higher price.
It depends on how they increase the storage size, whether by raising the track density or the linear bit density. If you assume the drive gets 4 times bigger by doubling both, it will still only take twice as long to access all of it, as you get double the data per rotation of the platter, so ZFS RAID-Z3 will probably suffice. It doesn't even matter if the drives are shingled, because even Solaris introduced sequential resilver, I think in version 11.2 in 2014. If you don't use the array while it's resilvering, you're hardly working the actuator heads, as they'll be gently clicking over from one track to the next, the fluid bearings won't wear out, and being enterprise drives they'll be adequately cooled, so no problems there either. I've sequentially resilvered an idle ten-drive RAID-Z2 array of 3TB Toshiba DT01ACA300's in 7 hours with an i7-4820K CPU processing 1 GB/s, and my other array in another PC with an E5-2670 v1 CPU does 1.3 GB/s; enterprise systems with more grunt should go even faster. So yes, it could take days, but so what? With ZFS the procedure should eventually complete successfully no matter how long it takes.
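For what it's worth, those times line up roughly with a full-pool scan at that rate (the figures below are just the ones I quoted above; a real resilver only scans allocated data, so 7 hours on a partly full pool is consistent):

```shell
awk 'BEGIN {
    pool_tb = 10 * 3        # ten 3TB drives
    rate_gb = 1             # ~1 GB/s sequential resilver rate
    printf "%.1f hours to scan the whole pool\n",
           pool_tb * 1000 / rate_gb / 3600
}'
```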
Yes, cassette tapes are good for this reason: firstly they're analogue, and the special machines that should be used also record special tone(s) alongside the audio track, so cutting and splicing the tape or editing out sections is detectable. But to answer your question: if you use the ZFS filesystem you can snapshot the data on the hard drive, so ransomware thinks it's encrypting your data when all it's really doing is writing fresh data, while the source data stays read-only and immutable. Also, if you record to LTO tapes, there's a special WORM (write once, read many) cartridge variant that could conceivably satisfy this requirement.
No, because "uses", as you put it, means 200 complete fills of the tape, and for LTO-5, according to Wikipedia https://en.wikipedia.org/wiki/Linear_Tape-Open, one fill requires "80 end to end passes". So your ACTUAL TOTAL "Expected tape durability, end-to-end passes" is 16,000 (80 times 200), not the 200 you quoted.
However, since the total size is 1.5TB, this implies about 18.75 GB per pass (1.5 TB / 80). Say the entire data access operation is reading just one (and only one) file, e.g. a movie: even at that 18.75 GB size it could conceivably be contained on one pass, though realistically it would most likely be split over two, so you could quite comfortably repeat the procedure about 8,000 times. Anything much smaller would most likely be readable in a single pass. Of course I'm presuming the data on the tape is contiguous, but that's a reasonable assumption, as the backup software would most likely be writing files sequentially. I'll make a further assumption that each of the 4 bands and the 20 wraps per band get roughly the same amount of access, because otherwise, yes, each individual wrap might only be good for 200 passes before the tape wears out in that spot and the drive/software offlines the cartridge permanently due to "too many hardware errors" or whatever.
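The pass arithmetic above, spelled out (LTO-5 figures from the Wikipedia page; the 8,000 figure assumes each read spans two passes):

```shell
awk 'BEGIN {
    passes_per_fill = 80        # end-to-end passes per full tape write
    rated_fills     = 200       # rated full-tape durability
    native_tb       = 1.5       # LTO-5 native capacity
    printf "total end-to-end passes: %d\n", passes_per_fill * rated_fills
    printf "data per pass: %.2f GB\n", native_tb * 1000 / passes_per_fill
    printf "two-pass reads before wear-out: %d\n",
           passes_per_fill * rated_fills / 2
}'
```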
Tape drives themselves have an MTBF of around 250K-1M hours, and their rated tape load/unload cycles are also huge, so I don't see them failing early for this reason either. Who knows, the tape end holding the leader pin might snap off after, say, 1,000 load/unload cycles, but since it's reinforced around that area, 8,000-16,000 cycles of single-file accesses <= 18.75GB in size might still be quite reasonable for one tape cartridge.
That's my 2 cents worth but someone more knowledgeable who actually uses LTO extensively may have a different opinion.
Yes, but modern hard drives that use 5-10 watts need around 20-25 watts to spin up, if only for a short period, and most of that extra is on the 12 volt feed, so some reasonably large capacitors on that line could assist. That only applies to consumer drives and setups, though, whereas server-grade gear such as SAS drives and RAID cards/HBAs has things like PUIS (power-up in standby), where the RAID card avoids such power surges by spinning the drives up one at a time. Also, if they start up a batch of say 10-20 drives in PUIS mode, you may only need a couple in that batch to actually spin up and work; the others can stay in powered standby, mechanically idle, until you shut the batch down, which saves on head load/unload cycles for the unused drives.
Re: Thanks to the recording industry...
Perhaps you might want to consider storing your data on LTO tapes and accessing it using LTFS, which appears to be completely free of the tax you mentioned, according to how I read this document http://www.copiefrance.fr/files/Tariffs_ENG_2017.pdf.
Re: Poor tape, gets no respect
I agree. From what I've read, datacentres with LTO-X tape libraries wait until LTO-(X+2) comes out about 4-5 years later, then spend a couple of months moving thousands of tapes over to the new ones, needing on average only about a fifth as many cartridges, since the LTO-(X+2) drives can still read the LTO-X tapes. They can then finally get rid of the old, perfectly good drives, which still have reasonable value to other people, and the main advantage of the new hardware is that if they actually do need to recover data, they have much faster drives with which to do so. I expect to see a lot of this happen again when LTO-8 comes out, presumably somewhere in the expected October-January 2018 timeframe. I'm not sure what they do with the old tapes: either they sell them off or they just keep them as they are, since they remain a perfectly good backup of the original data for another 4-5 years, until LTO-(X+4) comes out and the process repeats.
Re: real problem - they don't want to pay telstra etc
I doubt that's the case. If you read their explanation here http://www.msn.com/en-au/news/techandscience/foxtel-reveals-source-and-scale-of-the-glitches-that-crashed-game-of-thrones/ar-BBEEZkR?li=AA4Zor&ocid=iehp they say 'The company claimed the issue resided in its identity management (IDM) system' and also 'Ordinarily, the IDM handles around 5000 requests a day, the company said. But on Monday, it "was hit with 70,000 transactions in just a few hours".' How much bandwidth does authenticating credentials use? I presume something similar to online bank logins, several tens to a couple of hundred kilobytes perhaps. If you take 'a few hours' as, say, three hours (10,800 seconds), dividing that into 70,000 transactions gives about 6.5 transactions per second, so hardly bandwidth-intensive. I'd say it's probably more a case of a slow database server implementation, likely running on some RAID 5 array of spinning rust with consequently low IOPS. I presume that once people successfully logged on and got authenticated they had no problems with the actual video stream downloads, which would be several orders of magnitude higher in bandwidth, because there weren't any complaints in that area.
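The rate I'm assuming, for anyone checking (three hours is my guess at 'a few hours'):

```shell
awk 'BEGIN {
    requests = 70000
    seconds  = 3 * 3600         # assumed three-hour window
    printf "%.1f requests per second\n", requests / seconds
}'
```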
Re: I have a new 1PB+ storage option.
No, not more compressible, but don't forget that even though the algorithms are fairly simple, like the one used for NTFS compression introduced in Windows NT 3.51, they can easily reduce the size by around 50% for easily compressible data (where WinZip or WinRAR would shrink it to, say, an eighth of the original size).
Newer tape drives will have better algorithms, and even if they don't, they'll most likely have larger memory buffers and work on larger chunks of data. Even back then, something like LZW compression would have gotten better results with a larger workspace, going from a 128 KB buffer to, say, a 1 MB one, and the last time I checked these drives may have something like a GB of on-board RAM, so it's not inconceivable that they can get this sort of compression. Obviously if they get fed MPEG-2 data or random numbers then the compression gain will be zero.
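You can see the effect with any general-purpose compressor; this isn't the drive's own algorithm, just an illustration of why redundant data compresses enormously and random data not at all:

```shell
# Compress 1 MiB of repetitive text vs 1 MiB of random bytes with gzip.
yes "a highly repetitive log line" | head -c 1048576 | gzip -c | wc -c
head -c 1048576 /dev/urandom | gzip -c | wc -c
# The first count is a few KB; the second is slightly LARGER than 1 MiB,
# because already-random input gains nothing and pays the format overhead.
```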
I know some professional people who add the four hours of driving to the customer's bill on top of the work they do, and when customers complain about the quote they're bluntly told, up front, "take it or leave it". Since they're highly skilled and very good at their job, they still invariably get the work order, because it's still cheaper to get it done right compared to previous contractors who may have right royally stuffed things up and cost a lot of money to rectify.
"drone operators work 12-hour shifts five or six days a week" As if that isn't bad enough regarding total hours worked (60/72 per week), the linked article also stated that they were "sapped by alternating day and night shifts", which would really do wonders for your health, as you could never establish a consistent sleep cycle. I'm actually more surprised that people would even take such a job under those conditions in the first place.
Re: First sort out frame rates
I agree; interlacing was a brilliant idea, as still images showed full resolution while moving images updated 50 times a second (analogue PAL). I'm not familiar at all with the broadcasting method for digital TV, but if you consider MPEG-2 P-frames, I don't see a conceptual problem with replacing each full frame (assuming each frame was already a P-frame) with two half frames, giving 100 Hz of updates, or even 4 quarter frames, giving 200 Hz. Other options are conceptually possible too, like simulcasting intervening frames on another available channel, e.g. frames 1,3,5,7... on the main channel and frames 2,4,6,8... on another, and if your TV is sophisticated enough it combines the two to double the rate. From a practical perspective, though, nothing like this will probably ever eventuate.
As far as I understand it, 100Hz and 200Hz TVs simply interpolate between each of the 50p frames and display intermediate results, which as far as I can tell works reasonably well, so that's another reason nobody will probably bother to do anything about this issue. We might have better luck if more films are created at double the normal picture rate (e.g. The Hobbit, Avatar 2+), so when 4K broadcasting takes off they may well consider this and build in the higher refresh capability up front. We can only hope!