Re: The last time I heard a loud noise and things were restarting...
The next big use for nanotechnology will be to put thousands of microscopic DRM chips into every millilitre of liquid ink.
The latency on a thermal printer is really good. The only other thing I can think of that could possibly keep up is a line dot matrix printer? They go very fast and I believe have negligible start-up time. https://youtu.be/KnPBWru2Ecg
I think thermal printers are almost certainly going to be the winner here because they are mass manufactured and hence cheap. If you had throughput problems keeping up with the number of sheets you need to print, you could just get several thermal printers and send different sheets to each one in parallel.
In 2021, a snapshot became publicly available of what the giveaways cost up to September 2019. Epic published this document for some reason related to the court cases against Apple. https://kotaku.com/heres-what-epic-paid-to-give-away-all-those-free-games-1846815064
Mobile games built on Unity will have to update to the newest version sooner or later because the Google Play Store and Apple App Store both introduce changes to requirements every couple of years. Complying with these usually requires you to update whatever framework you are building on and increase the target SDK version so that your app opts into newly changed defaults. This isn't unique to Unity: it also happens with React Native, Cordova, Flutter, Java/Kotlin and ObjC/Swift.
> GPUs - where you get far more processing units and associated RAM than you'll get onto the CPU
Speed yes, capacity no. Video RAM (GDDR) has higher throughput than normal RAM (DDR), but it's sold at fairly eye-watering prices per byte of capacity, and you don't get very much of it even on the biggest, most expensive GPUs. Nvidia's current datacentre GPUs top out at 180GB.
It is cheaper to buy a server with multiple terabytes of RAM in it than to buy a GPU with 80GB of RAM on it. An H100 with 80GB of RAM costs north of £30k.
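To put rough numbers on it (all prices here are illustrative guesses, not quotes, and market prices swing around a lot):

```python
# Rough per-gigabyte cost comparison. Assumptions: an H100 with 80 GB
# at ~£30,000, and server DDR5 at roughly £5 per GB. Both figures are
# illustrative and vary with the market.
h100_price_gbp = 30_000
h100_capacity_gb = 80

ddr5_price_per_gb = 5
server_capacity_gb = 2 * 1024  # a server with 2 TB of RAM

gpu_cost_per_gb = h100_price_gbp / h100_capacity_gb  # £375 per GB
ram_cost_per_gb = ddr5_price_per_gb                  # £5 per GB

print(f"GPU memory: £{gpu_cost_per_gb:.0f}/GB")
print(f"Server RAM: £{ram_cost_per_gb:.0f}/GB")
print(f"2 TB of server RAM: £{ddr5_price_per_gb * server_capacity_gb:,}")
```

Even with generous assumptions about RAM pricing, the GPU memory comes out around two orders of magnitude more expensive per byte.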
TLS 1.0 isn't as well designed as 1.2 is. I think we should expect protocol vulns to be found in TLS 1.0 in future, and when they are found we will all have to turn TLS 1.0 off everywhere in a massive hurry. Similar to how SSL 3.0 had protocol vulns (the "POODLE" issue) that forced everyone to turn it off in a massive hurry a few years ago.
In light of this, it's a good idea to turn off TLS 1.0 now, while we can all do it at a leisurely pace, rather than suddenly having to do it in a massive hurry if (or more likely when) the next big TLS 1.0 protocol vuln is found.
(As has been noted by other commenters, TLS 1.1 can be ignored because just about everyone who implemented TLS 1.1 also implemented TLS 1.2.)
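If you're on a Python stack, one way to set that floor is the standard `ssl` module's `minimum_version` knob (a minimal sketch; most web servers and TLS libraries have an equivalent setting):

```python
import ssl

# Create a client context and refuse anything older than TLS 1.2.
# Setting the floor at TLSv1_2 also excludes TLS 1.1 and 1.0,
# which is the point.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake through ctx will now fail against a peer that
# only speaks TLS 1.0/1.1.
```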
> I don't really have any time for employers who simply employ people based on their having exactly the right skills for their requirements at that particular moment
I'll argue that if your organisation still has COBOL in it in 2023 then you should expect to still have COBOL in it in the year 2223. No contractors are going to live that long. Plan accordingly and set up training that can create the skills you need instead of just praying that you'll be able to find them outside.
For what it's worth, Linden Lab is apparently profitable and employs a couple of hundred people. What appears to have happened after the initial round of hype is that it grew a user base who like it and spend money on it for recreation. I assume this is because they have innovative features such as "legs", no "real names policy", and they don't kick people out for being weird. Also it works on cheap computers. You can't beat "it runs on cheap computers"; it's practically a super-power.
I'm not at all disagreeing with you that it's way short of the "this is going to change everything!!!!!" hype that surrounded it in the early days, of course. I certainly haven't heard of anyone using it for business for real.
Ehhhh it's not that bad when the new law sets a standard that is easier to judge, and especially when it creates a simple bright line where there used to be ambiguity.
Rather than going through a long-winded argument to demonstrate that trading in company scrip is done only with the intent to defraud people (which it is, obviously), you just make trading in scrip illegal and skip all the hassle.
There's this thing called Jevons Paradox. If AWS teaches you to use AWS in a more cost-effective way, then the effective per-unit cost of doing a thing in AWS goes down for you. At a lower per-unit cost, it becomes economical for you to do more things in AWS. A lot more. So your total spending goes up and you are happier about the results.
It's mainly known for being the reason why you can't fix road congestion by building more roads
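A toy calculation with made-up numbers shows how the paradox plays out:

```python
# Toy illustration of Jevons Paradox. All figures are invented.
# Before: each job costs £1.00 and you run 1,000 jobs a month.
# After optimisation: each job costs £0.50, but at that price you
# find it worthwhile to run 3,000 jobs a month.
before_unit_cost, before_jobs = 1.00, 1_000
after_unit_cost, after_jobs = 0.50, 3_000

before_total = before_unit_cost * before_jobs  # £1,000
after_total = after_unit_cost * after_jobs     # £1,500

# The unit cost halved, yet total spend went up 50%.
print(before_total, after_total)
```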
The cooling system itself uses electricity to move the heat away from where you don't want it. Usually it does this by pushing the coolant (air, water) around. More efficient cooling systems use less electricity to move a given amount of heat. That's where the touted energy savings are coming from.
The stuff OVHCloud are talking about doing here is passively cooled, so they're able to get very close to zero energy being used by the cooling system. Hence the claimed PUE almost equal to 1. It's really impressive because normally it's hard to get passive cooling to scale up to moving lots of heat.
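For reference, PUE is just total facility energy divided by IT equipment energy, so the claim can be sketched with illustrative numbers (these are not OVHcloud's figures):

```python
# PUE = total facility energy / IT equipment energy.
# Illustrative figures only: a conventional data centre where cooling
# and other overheads add 50% on top of the IT load, versus a
# passively cooled one adding about 3%.
def pue(it_kw, overhead_kw):
    return (it_kw + overhead_kw) / it_kw

conventional = pue(1000, 500)  # 1.5
passive = pue(1000, 30)        # 1.03, i.e. almost all power goes to IT

print(conventional, passive)
```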
In a PC, in theory a liquid cooling system may use less electricity than fans alone because the liquid loop moves heat from the CPU to the radiator very efficiently, and moving heat from the radiator to the environment is much cheaper than moving it directly from the CPU to the environment would have been. The radiator is easier to cool than the CPU because it has a much bigger surface area, and it tends to maintain a fairly uniform temperature because it's made of metal and full of circulating water. If you want to bring the electricity cost of cooling down even further, I've heard of DIYers using radiators so big they can be passively cooled. I remember reading once about someone going to a scrapyard and picking up a car radiator.
I worked in a company that changed ticketing systems somewhere between 4 and 6 times while I was there. I think it was only 4 but I lost count.
Switching isn't impossible. You either a) migrate your tickets over or b) lose them all, call it a temporary ticket amnesty, and then invite everyone to re-file the tickets they care about most.
They'll be amortising this stuff over decades. It sounds like a big number but a 20% cost overrun is nothing in an infrastructure project.
I am very doubtful of the implicit assumption that the project wouldn't have had this overrun if the procurement process had been followed to the letter.
I guess it would be surprising if theropod pooping didn't work similarly to bird pooping, but there were a huge number of other extinct dinosaurs that aren't ancestors to birds, so who knows if any of them evolved peeing? That anatomy is mostly squishy and doesn't fossilise very well.
Also the maps are very optimistic. I found out the hard way that in the middle of cities there are places marked as wired for FTTP on the coverage maps where you can't actually get service. If you try to buy it, a small man with a large OpenReach van comes to your home a couple of months later and sadly tells you there's no way he can get a fibre all the way from the nearest run to where you are. :(
I assume the root cause of this is that OpenReach get some subsidies or something based on coverage percentages, so they lie about which properties are covered in order to get them without having to pay to actually do the installations.
DH (Diffie-Hellman) is an online protocol: both parties send messages back and forth in order to agree on a shared temporary secret.
When you encrypt backups, you are sending an encrypted message to your future self. There is no way for future-you to send messages back to current-you, so the communication only goes one way and you can't implement protocols like DH.
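For illustration, here's a toy DH exchange with classroom-sized numbers (real DH uses large primes or elliptic curves; 23 and 5 are just for show). It makes clear why both sides have to be online: each party's message depends on the other's.

```python
# Toy Diffie-Hellman key agreement. Numbers are deliberately tiny
# and insecure, purely to show the back-and-forth structure.
p, g = 23, 5            # public parameters both parties agree on

a = 6                   # Alice's private value (never sent)
b = 15                  # Bob's private value (never sent)

A = pow(g, a, p)        # Alice computes A and sends it to Bob
B = pow(g, b, p)        # Bob computes B and sends it to Alice

# Each side combines its own secret with the other's public value.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)

assert alice_shared == bob_shared
print(alice_shared)     # → 2: both arrive at the same shared secret
```

With a backup there is nobody on the other end to compute and send back B, which is why you end up using a long-term key (or a public-key scheme) instead.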