The BBC reports:
"He [billionaire Jared Isaacman] had paid an undisclosed sum - estimated by Time magazine to be about $200 million (£145 million) - to fellow billionaire Elon Musk for all four seats aboard the Crew Dragon."
In a computing magazine from the early 1980s I saw an advert for "The Last One" (TLO) - supposedly the last programming language which would ever need to be written, because anyone could program in plain English.
Now it's just a footnote to history in the briefest of Wikipedia pages, although the example shown is very illuminating as to how rubbish it was.
Or it doesn't exist at all, except to con money out of investors.
If they're really about to release the FPGA emulator to customers, then at least someone will get a look at what the instruction set looks like, and how novel it really is.
Have they patented anything? If so, the patents will be published.
> If you look back at the history of open source you'll generally see that people understood from the word go that it would never be the sole basis for a commercial product.
Not necessarily. The BSD licence is so permissive that it allows code to end up in proprietary, commercial products. This is by design: the people who work on BSD understand this very well.
By "too big to fail", do you mean "so important to society that governments will be forced to bail them out" (like banks)? Or "so cash-rich that they can always buy up any upstarts that try to displace them?"
I don't think Salesforce/Slack or even Microsoft fall into the first category. If they fail, it will be because they cease to innovate and their customers move to something which meets their needs better. The only requirement to stop this being catastrophic is to ensure that all customers have the right to export their data in full.
Twenty years ago, everyone had a Nokia phone. It could have been argued at the time that they were "too big to fail". They failed - and the world moved on.
As it happens, they got to 53 miles, not 17 miles.
However, the height is almost irrelevant. Only a small fraction of the energy required to get into orbit goes into gaining altitude; the great majority is the kinetic energy required to travel laterally at orbital speed.
Increasing their rocket power to get to 100 miles or 200 miles would not get them into orbit; it would just increase the time slightly before falling back to Earth. That's why what SpaceX is doing is a *much* bigger deal.
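
As a back-of-the-envelope check (a Python sketch with ballpark figures I've assumed: 200 km altitude, ~7.8 km/s orbital speed):

g = 9.81         # m/s^2, near-surface gravity (constant-g approximation)
h = 200_000      # m, assumed low-Earth-orbit altitude
v = 7_800        # m/s, assumed orbital speed at that altitude

pe = g * h       # J/kg spent gaining altitude
ke = 0.5 * v**2  # J/kg spent gaining orbital speed

print(f"altitude: {pe/1e6:.1f} MJ/kg, speed: {ke/1e6:.1f} MJ/kg")
print(f"altitude share of total: {pe/(pe+ke):.0%}")   # roughly 6%

The exact split depends on the orbit, but the kinetic term dominates by far.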
No, this is nothing like "the old probes that pulled server metrics such as CPU and RAM when doing weighted load balancing".
To understand what this is about, imagine a distributed object storage system. There are M servers ("bins"). Each incoming object ("ball") is written to one of those servers. You want to distribute them evenly, so that you don't run out of capacity on one server while others have free space.
When writing, it would be easy to pick the one with the most free space. That's what you're describing.
The problem is, how do you *read back* a given object? Either you have to search for it across every server, or you have to maintain a huge database of every object and its location.
To avoid the huge database, you want to locate an object from just a hash of its name. Such an algorithm will necessarily require objects to be moved when you add or remove a storage server, and you want to minimise the number of movements in that case.
This is what storage systems like Ceph and Swift do. They use variations on "consistent hashing", which is mentioned in the paper as the previous state of the art. However, such algorithms give effectively 'random' distribution. If all the objects are the same size then this gives pretty good results, but if you have a handful of huge objects mixed in, you can get badly out of balance.
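
For illustration, here's a minimal consistent-hashing ring in Python - a sketch of that prior-art approach, not the paper's algorithm; the names and virtual-node count are my own:

import bisect, hashlib

def point(key: str) -> int:
    # A stable hash, so every client maps the same name to the same point.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, vnodes=100):
        # Give each server many points on the ring ("virtual nodes")
        # to smooth out the distribution.
        self.points = sorted(
            (point(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self.keys = [p for p, _ in self.points]

    def server_for(self, obj_name: str) -> str:
        # Walk clockwise to the first server point at or after the object.
        i = bisect.bisect(self.keys, point(obj_name)) % len(self.points)
        return self.points[i][1]

ring = Ring(["srv1", "srv2", "srv3"])
print(ring.server_for("photos/cat.jpg"))   # same answer on every client

Both reads and writes call server_for, so there's no central database; adding or removing a server only remaps the objects nearest its points.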
The paper describes a better algorithm which avoids having a central database, maintains a good balance within some constraint factor, and minimises the number of servers you have to look on to find a particular object.
This is a big deal. You should give them credit for it, rather than dismissing something you don't understand as being nonsense.
> Ask for an advisory date for FTTP (or even how far we are down the rollout) and an answer there is none. Which complicates planning a little.
That's because switch-off of the PSTN is *not* linked to the availability of FTTP.
Come 2025, if you have no FTTP, you'll still be getting Internet via FTTC (or, heaven forbid, ADSL). However, your voice service will be delivered over that, as VoIP, rather than as analogue narrowband signals injected directly over the copper.
You can start switching to VoIP today. There are many providers of IP-based telephony services. If you have a PABX in the basement linked to an E1 trunk, now is the time to get rid of it.
In terms of alarm companies, you'll need them to provide a service which works over IP. Again, you can move to that as soon as the alarm company has a workable IP-based solution.
A while ago, UniFi APs started calling home to the vendor to report stats.
Now my UniFi APs have their management addresses on a separate VLAN, from which *all* outgoing traffic is blocked. The controller sits on this VLAN too, and I only open up outbound access from the controller when fetching a new firmware version.
It's sad that we can't trust vendors not to spy on us these days.
All modern hard drives do this.
You have a queue of outstanding requests on the bus (SATA/SAS etc). The drive optimises its seek path across the platters, using its knowledge of the rotational positioning of sectors as well as elevator seeking.
The more parallel I/O you're doing - i.e. the deeper the queue - the more opportunity it has to improve the total throughput.
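
As a rough illustration of the elevator part (a simplified Python sketch; real firmware also folds in rotational position, which this ignores):

def elevator_order(head: int, requests: list[int]) -> list[int]:
    # Serve everything ahead of the head in the current direction,
    # then sweep back - far fewer direction reversals than FIFO order.
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind

queue = [95, 3, 40, 72, 11]        # pending requests, by track number
print(elevator_order(50, queue))   # [72, 95, 40, 11, 3]

A deeper queue gives the drive more requests to fold into each sweep, hence the throughput gain.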
The first paragraph of the article got it wrong in almost every important detail. There is no copper stop-sell in Mildenhall. They are not "also" withdrawing analogue voice; they are *only* withdrawing analogue voice.
The stop-sell in Mildenhall is much more limited than Salisbury. Copper will remain for ADSL/FTTC connections.
However, it is certainly true that you will need a UPS to make landline phone calls during a blackout (if you can't receive a mobile phone signal).
El Reg has indeed got it wrong here.
Unlike Salisbury, where they've gone full-on to FTTP, the stop-sell in Mildenhall is *not* on copper; it is on analogue voice services only. Copper remains for providing xDSL services, and voice will be carried digitally over that.
The phasing out of the PSTN (analogue voice) will be complete by 2025, whereas the phasing out of copper has no date set. Openreach's public goal is about 70% coverage by the "mid to late 2020s", so there will still be substantial amounts of copper remaining well into the 2030s.
Vendors don't choose to provide cloud services because of licences. Ask yourself why Microsoft pushes Office 365 so heavily over the on-prem products, when they own the licences anyway.
Vendors provide cloud services:
- to get a recurring revenue stream
- to control upgrades and bugfix releases
- to simplify support processes
- because many customers don't want to manage servers any more
- ... etc
AWS have Graviton ARM VMs you can rent, up to 64-core bare metal, at about half the cost of equivalent Intel ones.
Lots of people run Raspberry Pi as desktop and/or server. Plenty of other ARM-based boards out there too, e.g. NASes and routers.
> Not one of the oldest? THE oldest.
I don't think so. The first Linux I ran was Soft Landing Systems (SLS), and according to Wikipedia at least, Slackware was a fork of that.
It certainly was buggy: I remember that the permissions on the /var/spool/mail directory were set wrongly out-of-the-box, so that you couldn't even send mail between two local users. But fixing that sort of problem was a great way to learn how Linux/Unix actually worked :-)
> So if any of these servers go down, or the companies hosting them go out of business or just plain decide not to host them any more... your NFT is gone and there's nothing you can do about it.
The NFT should just contain a SHA-256 hash of the image or whatever. It doesn't need to provide a way to retrieve it. Ownership of a URI is useless, because the content at that URI can change anyway.
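
A minimal sketch of what I mean (Python; the filename is invented):

import hashlib

with open("artwork.png", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# The token would record this digest, not a URL. Anyone holding any
# copy of the file can recompute the hash and check that it matches;
# the content behind a URL can change, the hash of the bytes cannot.
print(digest)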
> And something the tweet doesn't cover: if I go to that URI, then I can simply download your NFT.
That's missing the point. You're not downloading the NFT, you're downloading the image that's linked to it. You have a copy of the image, but you don't "own" the image. The NFT asserts ownership, and allows you to transfer that ownership. It's not a copy-protection mechanism. It's more like a certificate of authenticity.
Why do you think that blockchain would be at all applicable to this situation?
A vaccination record ("person X has been vaccinated") has no value in the blockchain if just anyone can put it there. But if it's signed by a trusted third party - the NHS, say - then it's the signature that matters, not the fact that it's in the blockchain. Vaccinations aren't transferrable, so a record of "ownership" of the vaccination being transferred from one person to another doesn't mean anything.
For the same reason: "The holder of this token is the only legitimate owner of the original artwork Pig In A Frock Playing Poker With A Chimp, Bill Gates 2021, pixel-in-DRAM" has no value, unless you can prove that the person who put it there in the first place was Bill Gates (or someone with delegated authority from Bill Gates)
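
To make that concrete, here's a sketch of why the signature is what carries the trust (Python with the 'cryptography' package; the key and record are invented for illustration):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the trusted third party's key pair (the NHS, say).
issuer_key = Ed25519PrivateKey.generate()
record = b"person X vaccinated 2021-03-01"
signature = issuer_key.sign(record)

# Anyone can verify against the issuer's published public key;
# this raises InvalidSignature if the record was tampered with.
issuer_key.public_key().verify(signature, record)

Nothing in that needs a blockchain: an unsigned record on a chain proves only that *someone* wrote it, while the signature proves *who*.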
It always struck me as incredibly wasteful to tie up a 40-gigabit Thunderbolt 3 / USB-C port just for charging the laptop (especially if you have only one or two).
A separate dedicated power port makes much more sense.
I've been sticking with my 2015 MBP, which has two TB2 ports, two USB-A ports, HDMI, SD card, MagSafe 2 charging, and 3.5mm audio: they all get used at various times.
[this is macOS 10.14.6 with security update 2020-007]
MacBook-Pro-4:~ $ ln -s /usr/bin/sudo /tmp/sudoedit
MacBook-Pro-4:~ $ /tmp/sudoedit -s /
sudoedit: /: not a regular file
MacBook-Pro-4:~ $ /tmp/sudoedit -s '\' `perl -e 'print "A" x 65536'`
Segmentation fault: 11
So in short, macOS is apparently vulnerable, but it's partially mitigated because it checks the password earlier in the process (so you need to know the local account password).
Java SE is not free - it has a commercial licence that "permits personal use, development, testing, prototyping, demonstrating and some other uses at no cost."
This article is muddying the water.
If GraalVM requires Java SE, then it's not free. But if GraalVM is GPL and runs independently of Java SE, then saying that "might just give those users and developers an on-ramp that keeps Java SE relevant" makes no sense.
Even when fetching over HTTPS, I doubt your browser refetches *all* the assets for a page for every page view. For many sites that would be multiple megabytes per view; you'd certainly notice it.
The problem described in the article is when two separate websites (site1 and site2) both reference an asset at the same URL, e.g. <img src="https://example.com/foo.png">. Colluding sites could generate an image (or stylesheet etc) dynamically, and then check its content. The solution in Firefox is to have separate caches when browsing site1 and site2.
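
In cache-key terms the fix looks something like this (a Python sketch of the idea, not Firefox's actual code):

cache = {}

def fetch(top_level_site: str, url: str) -> str:
    # The key includes the site you're visiting, not just the URL, so a
    # cache hit on site1 reveals nothing about what site2 has loaded.
    key = (top_level_site, url)
    if key not in cache:
        cache[key] = f"<bytes of {url}>"   # stand-in for a real download
    return cache[key]

fetch("site1.example", "https://example.com/foo.png")   # network fetch
fetch("site1.example", "https://example.com/foo.png")   # cache hit
fetch("site2.example", "https://example.com/foo.png")   # fetched afresh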
Not "technically": AWS genuinely has done the right thing here.
AWS runs (and sells) Elasticsearch as a service. It also contributes its improvements back upstream. Via OpenDistro it also contributed further functionality, some of which was only otherwise available as paid-for add-ons (e.g. alerting).
Elastic.co didn't want AWS's software contributions. It wanted their money.
AWS won't be held over a barrel by Elastic.co. Open source software without license fees is what allows AWS to scale up and up and up. This applies to their whole stack, from the Linux kernel upwards. At their scale, it's cheaper for them to employ their own software engineers rather than pay licences, in the same way that it's cheaper to build their own servers than buy from Dell or HP.
In itself this makes no difference to Elastic.co, since they were never going to see any money from AWS anyway. What they now risk is that all the other unpaid community developers will jump ship to the AWS fork - and that their future potential customers may pick it too. Elastic will have to make its value proposition the support services that come from buying from a commercial partner - which they could have done all along.
IMO, the open source community has *gained* a new product champion for Elasticsearch in AWS. We all know AWS's business model doesn't include selling software licences, so there's no risk that AWS will pull the same trick as Elastic.co. There is a potential risk that they will steer it in the direction of their own interests, but since their own interests include deploying Elasticsearch at massive scale with high reliability, that's likely to coincide with the community's interests too.
Aside: Elastic.co no doubt noticed that AWS sells services with built-in licence fees for Windows and SQL Server and VMware and Oracle, and hoped AWS would do the same for them. That ain't going to happen. AWS resell those particular pieces of software, not because they can't write their own database or virtualization layer (they have!!), but because a subset of customers insist on running those exact pieces of software. And crucially, those pieces of software have been closed-source and commercial from day one, so there's nothing to fork.
EDIT: it's also worth noting that in keeping Elasticsearch development fully open source, AWS is helping *its own competitors* - both the other big cloud providers, and the more specialised Elasticsearch service providers. AWS don't mind. They continue to compete on reliability, breadth, and price.
But how is this different to all the things running underneath Elasticsearch: the Linux kernel, GNU utilities, the OpenJDK JVM, and libraries?
You can build a cloud service around all of these things without open-sourcing your special sauce. What makes Elasticsearch think it deserves to be treated differently?
AWS here are the good guys. They took the Apache 2.0 distribution of Elasticsearch, enhanced it with components which normally you'd have to pay elastic.co for (e.g. alerting), and then released it back to the world under Apache 2.0. It seems to me that elastic.co cares less about people "contributing back" code than about getting their cash.
On the other side, from AWS' point of view, I can understand that it makes no sense to buy a licence at their scale. Even if they negotiated favourable terms for now, it would leave them exposed to huge price hikes in the future.
The Elastic Licence is highly restrictive. It says you can use (but not distribute) the binaries, and only for certain purposes. You can look at the source code, but you can't use it: even if you build your own binaries from source, you can "use the resulting object code only for reasonable testing purposes". Essentially it's a full-blown commercial licence, where the cost for some uses of the software is zero.
The SSPL is less restrictive, allowing you to distribute and modify binaries and source. However if you provide Elasticsearch as part of a cloud offering under this licence, you must then release the source to your *entire* cloud environment, such that someone else could replicate the entire cloud. As such, it becomes a huge risk to any SaaS operator to use Elasticsearch anywhere in their infrastructure under these terms.
Of course, you can buy your way out of the risk by paying a licence to elastic.co. But why do that for Elasticsearch, and not for all the other myriad open source components you rely on in your infrastructure?
I think it's disingenuous for elastic.co to argue that cloud users haven't been contributing back source code patches. Nobody wants to maintain their own private fork - everyone wants their enhancements to be merged upstream to avoid the burden of carrying them forward. Look at Red Hat's contributions to Linux.
The reality is, elastic.co is more interested in getting your cash than your patches.
Memory safety is one class of security problems, and there are tools to deal with them. However, many security issues stem from higher layers - SQL injection and CSRF are just two examples.
Put it another way: PHP is a memory-safe environment, but I think you'd be hard pressed to claim that everything written in PHP is secure :-)
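
For instance, SQL injection is entirely a matter of how the query is built, nothing to do with memory (a Python sketch; the table and input are invented):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "x' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text itself,
# so the attacker's quote characters become part of the query.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safe: a parameterised query keeps data out of the SQL grammar.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))

# Both versions run in a perfectly memory-safe language;
# only one of them is secure.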
The programmer's mindset and approach are far more important. Treat software engineering as a branch of *engineering*.
I think you are making contradictory points there. C is, to all intents and purposes, portable assembly language. It exposes all the fundamental underlying architectural things: pointers, integer overflow etc. Issues around memory management and buffer overruns are the same whether you write in C or assembly.
The only real difference is that if you write in C, then your code gets to run on more processors. A modern compiler can pick a more efficient set of assembly instructions than a human can.
Aside: my first computer had 1KiB of EPROM and 128 bytes of RAM, later expanded to 4KiB. Having an *assembler* was a huge upgrade from writing directly in hex :-)
There was an official Apple MagSafe airline adapter - I know because I got one. However, the only place I came across a compatible airline power socket was in older BA 747s in Premium Economy. I did get to use it a few times: IIRC, it was able to power the laptop, but for some reason wouldn't charge it.