Re: Honestly I'm bilingual
((Particle ÷ cm²) ÷ parsec) is equivalent to (Particle ÷ (cm × cm × parsec)) so that's... weird but well-formed - it's particles per volume, which makes sense physically.
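Written out as a unit reduction (nothing here beyond the arithmetic above):

\[
\frac{\text{particle}/\text{cm}^2}{\text{parsec}}
  = \frac{\text{particle}}{\text{cm}^2\cdot\text{parsec}}
  = \frac{\text{particle}}{\text{cm}\cdot\text{cm}\cdot\text{parsec}}
\]

i.e. particles per (length)³ - a number density, just expressed in a rather mixed set of length units.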
Hm. I think it's plausible that the errant "rm" deleted the directory entry pointing to the kernel file but not the actual file itself, which would explain why it *didn't* crash next time part of the kernel needed to be paged in. It might not have been just luck. Obvs you'd need the actual directory entry back, pointing at a usable copy of the kernel, for the machine to ever boot again successfully.
It's been part of unix semantics forever that you can open() a file, unlink() it and any hard links pointing to it (so it no longer has a directory entry anywhere in the filesystem) and then carry on using the open file descriptor. read(), write(), lseek() et al will continue to work, and the disk space for the file is reclaimed only when the file descriptor is closed for whatever reason. The underlying file has a reference count, and both hard links and open file descriptors increment that refcount.
That all applies to files being accessed by ordinary userspace programs through the normal unix file API. I don't know for sure that it would apply to your kernel image though.
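A minimal sketch of that userspace behaviour, assuming a unix-like OS and a throwaway file name (this just exercises the API described above, nothing kernel-specific):

    import os

    # Create a file and keep a file descriptor open on it.
    fd = os.open("scratch.txt", os.O_RDWR | os.O_CREAT, 0o600)
    os.write(fd, b"still readable after unlink\n")

    # Remove the only directory entry. The inode's link count drops to zero,
    # but the open descriptor still holds a reference, so the data survives.
    os.unlink("scratch.txt")

    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 100))   # b'still readable after unlink\n'

    # Only when the last reference goes away is the disk space reclaimed.
    os.close(fd)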
Not for this specific use case.
I would use the term "Merkle tree" rather than "blockchain" here because what the parent poster is thinking of is tamper-evident logging.
When you use a Merkle tree to make a tamper-evident log, each entry in the log contains a signed hash of the previous entry. This is fast. You also don't use proof of work, which is where Bitcoin's ever-increasing power waste comes from. Instead you have multiple organisations (that you trust not to all collude with each other) sign log entries regularly. Certificate transparency logs work roughly this way.
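Leaving the signing step out, the chaining part looks roughly like this - a sketch of a linear hash chain with made-up log messages, not any particular certificate transparency implementation:

    import hashlib, json

    def append_entry(log, message):
        # Each entry records the previous entry's hash, so altering any earlier
        # entry changes every hash after it. In a real system the hashes would
        # also be signed regularly by several independent organisations.
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = {"message": message, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append({**body, "hash": digest})

    def verify(log):
        prev_hash = "0" * 64
        for entry in log:
            body = {"message": entry["message"], "prev_hash": entry["prev_hash"]}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, "evidence item 1 received")
    append_entry(log, "evidence item 1 transferred to lab")
    print(verify(log))              # True
    log[0]["message"] = "tampered"
    print(verify(log))              # False - the chain no longer checks out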
Tamper evident logging is IMHO a perfectly good and reasonable idea for use in establishing evidence of chains of custody, though hardly a complete solution by itself.
Much more, even. The RPi5 has 4 cores at that clock speed, and each of its cores can execute multiple instructions per clock cycle, whereas the 8086 takes (I think) a minimum of 3 cycles to execute even its fastest instruction. And any data that would fit in the amount of space an 8086 can address could fit entirely within an RPi's caches.
This sounds like the accounting is just a bit off. Intel made a profit last year on a GAAP basis. If the fabs weren't there, Intel wouldn't have been able to make and sell the ~70% of its products whose fabrication isn't outsourced. The rest of the business would not respond well if that went away.
Maybe they should be outsourcing more, but the external fab capacity does not actually exist for them to buy. I bet they are also not pricing in the advantages of an internal foundry, such as being able to give Intel first priority at all times.
> I'm not sure the GDPR has a problem with shit.
I think it can. The GDPR isn't only about security: it also requires you to try to ensure that the information you hold is accurate. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/data-protection-principles/a-guide-to-the-data-protection-principles/the-principles/accuracy/
I believe broken schemas can interfere with this. As a hypothetical example, imagine that a lack of proper primary keys causes records about two different people to be accidentally merged, and one of them is then denied a job offer because of a red flag that was raised on the *other* person's records. That would be unfairly prejudicial to them, and the organisation would be failing its obligation to hold accurate data.
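A contrived sketch of how that kind of merge happens, using an in-memory sqlite table and made-up names (the schema and the "merge" query are purely hypothetical):

    import sqlite3

    # No primary key: nothing distinguishes two different people who share a name.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE candidates (name TEXT, red_flag INTEGER)")
    db.execute("INSERT INTO candidates VALUES ('J. Smith', 0)")  # the applicant
    db.execute("INSERT INTO candidates VALUES ('J. Smith', 1)")  # someone else entirely

    # A "merge" keyed only on the name rolls both people into one record,
    # and the innocent applicant inherits the other person's red flag.
    merged = db.execute(
        "SELECT name, MAX(red_flag) FROM candidates GROUP BY name").fetchall()
    print(merged)   # [('J. Smith', 1)]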
If you're currently reading this for the nuclear engineering rather than the geopolitical horror, I strongly recommend the website "Beyond NERVA". The web design is a little bit timecube so I have no idea if the author is a crackpot, but it's fun reading anyway. https://beyondnerva.com/fission-power-systems/fission-power-plant-reactor-cores/ describes a bunch of nuclear reactors that have been flown in space, mostly by the USSR to power radar systems.
The latency on a thermal printer is really good. The only thing I can think of that could possibly keep up is a line dot-matrix printer - they go very fast and I believe have negligible start-up time. https://youtu.be/KnPBWru2Ecg
I think thermal printers are almost certainly going to be the winner here because they are mass-manufactured and hence cheap. If you had trouble keeping up with the number of sheets you need to print, you could just get several thermal printers and send different sheets to each one in parallel.
In 2021, a snapshot became publicly available of what the giveaways cost up to September 2019. Epic published this document for some reason related to the court cases against Apple. https://kotaku.com/heres-what-epic-paid-to-give-away-all-those-free-games-1846815064
Mobile games built on Unity will have to update to the newest version sooner or later because the Google Play Store and Apple App Store both introduce changes to requirements every couple of years. Complying with these usually requires you to update whatever framework you are building on and increase the target SDK version so that your app opts into newly changed defaults. This isn't unique to Unity: it also happens with React Native, Cordova, Flutter, Java/Kotlin and ObjC/Swift.
> GPUs - where you get far more processing units and associated RAM than you'll get onto the CPU
Speed yes, capacity no. Video RAM (GDDR) has higher throughput than normal RAM (DDR), but it's sold at fairly eye-watering prices per byte of capacity and you don't get very much of it even on the biggest, most expensive GPUs. nVidia's current datacenter GPUs top out at 180GB.
It is cheaper to buy a server with multiple TB of RAM in it than to buy a GPU with 80GB of RAM on it. An H100 with 80GB of RAM costs north of £30k.
TLS 1.0 isn't as well designed as 1.2. I think we should expect more protocol vulns to be found in TLS 1.0 in future, and when they are we will all have to turn TLS 1.0 off everywhere in a massive hurry - much as the POODLE protocol vuln in SSL 3.0 forced everyone to turn that off in a hurry a few years ago.
In light of this, it's a good idea to turn off TLS 1.0 now, while we can all do it at a leisurely pace, rather than suddenly having to scramble if (or more likely when) the next big TLS 1.0 protocol vuln is found.
(As has been noted by other commenters, TLS 1.1 can be ignored because just about everyone who implemented TLS 1.1 also implemented TLS 1.2.)
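For what it's worth, turning the old protocol versions off is usually a one-line setting; in Python's stdlib ssl module, for example, it looks something like this (a sketch - the certificate file names are placeholders):

    import ssl

    # Server-side context that refuses TLS 1.0 and 1.1 handshakes outright.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths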
> I don't really have any time for employers who simply employ people based on their having exactly the right skills for their requirements at that particular moment
I'll argue that if your organisation still has COBOL in it in 2023 then you should expect to still have COBOL in it in the year 2223. No contractors are going to live that long. Plan accordingly and set up training that can create the skills you need instead of just praying that you'll be able to find them outside.
For what it's worth, Linden Labs is apparently profitable and employs a couple of hundred people. What seems to have happened after the initial round of hype is that it grew a user base who like it and spend money on it for recreation. I assume this is because they have innovative features such as "legs", no "real names policy", and they don't kick people out for being weird. Also it works on cheap computers. You can't beat "it runs on cheap computers" - it's practically a super-power.
I'm not at all disagreeing with you that it's way short of the "this is going to change everything!!!!!" hype that surrounded it in the early days, of course. I certainly haven't heard of anyone using it for business for real.
Ehhhh it's not that bad when the new law sets a standard that is easier to judge, and especially when it creates a simple bright line where there used to be ambiguity.
Rather than going through a long-winded argument to demonstrate that trading in company scrip is done only with the intent to defraud people (which it is, obviously), you just make trading in scrip illegal and skip all the hassle.
There's this thing called the Jevons paradox. If AWS teaches you to use AWS in a more cost-effective way, then the effective per-unit cost of doing a thing in AWS goes down for you. At a lower per-unit cost, it becomes economical for you to do more things in AWS. A lot more. So your total spending goes up and you are happier about the results.
It's mainly known for being the reason why you can't fix road congestion by building more roads.
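Toy numbers (entirely made up) showing how total spend can rise even as the unit price falls:

    # Hypothetical: optimisation halves the per-unit cost, but the lower price
    # makes three times as much work worth doing in AWS.
    old_unit_cost, old_usage = 1.00, 100
    new_unit_cost, new_usage = 0.50, 300
    print(old_unit_cost * old_usage)   # 100.0  (spend before)
    print(new_unit_cost * new_usage)   # 150.0  (spend after)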