Will CPython be renamed?
I'm running CPython on Alpha. It's sad that people want to change for change's sake without articulating meaningful advantages.
Google doesn't care if they don't make sense. Heck - I think they flaunt the fact that you can't talk with people there, because they don't want to waste their time with anyone who's not paying them a good bit of money.
They're stupid, too - if you frequently forward spam that comes from Google to abuse@google.com / abuse@gmail.com, they'll consider you a spammer.
Even huge multimillion subscriber YouTubers can't communicate with Google / YouTube without getting form responses that show that Google / YouTube just has someone mashing form response buttons, like the receptionist at the hospital in Idiocracy.
I think the evaluation that you shouldn't use Google for critical services is spot on, and it's nice to be able to point to stories like this as examples for the less technical folks.
It's interesting that this is coming from Cloudflare.
They are the most prominent example of rate limiting, blocking based on reputation and geography - as they put it, "socioeconomic bias" - and so on.
They also want to protect scammers by mixing their web site endpoints in with their legitimate customers, while at the same time they want to deanonymize people behind CG-NAT by surreptitiously monitoring DNS using DNS-over-https, so this is a little more than hypocritical.
While the piece is written as though Cloudflare is talking about Cloudflare's own decisions which lead to "socioeconomic bias" and other issues, they never actually say it's about them. They really should. They're the primary cause of the problems, just as they're the ones who protect the DDoS-for-hire gangs that make Cloudflare's products seem much more necessary than they really are.
Evaporative coolers are NOT necessary, ever. Does your house have evaporative coolers? Does your house need to waste water to cool it?
If you were building a datacenter that you anticipate being around for decades, you'd build geothermal loops and use the ground to cool your servers. You'd pipe extra heat to nearby municipalities, like datacenters in Sweden do.
Datacenters use water and evaporative coolers because people don't like investments that don't pay off in the short term, and municipalities are dumb enough to sell water to them for low prices. This is clearly not sustainable.
The media, including The Register, are tricked into writing about water waste as though it's inevitable. It's not.
When a company prioritizes money over security, they should not be trusted with security.
They have a long history of security issues, and we all know they don't provide updates to anyone who isn't paying for support. But what do you get even when you do pay for support? You get to be their beta tester, because many of their "features" and many of the bugs that they're finally getting around to trying to fix haven't really been tested. You get their own staff being unable to make their own "features" work. You get told that you get nothing for not being able to use those "features" for months while they try to fix them.
Even when you report issues while having support, and they finally claim to have fixed them, you still need to have active support to get those fixes. A company that I worked with had taken SonicWalls out of service because of broken "features" and wasn't going to pay until assured that those "features" were fixed, and SonicWall would neither offer the fixes for the problems that caused the company to stop using their products, nor assure that the "features" were fixed, nor offer any additional support if they weren't.
Much of their support don't even understand basic networking. It's like calling Comcast or AT&T - they know terms, but the first half hour of any call is dealing with someone who doesn't know what a NAT state table is, but pretends that the thing they've condescendingly read out of a script disproves everything you've said. One of their higher tiered support people told me, completely seriously, that NAT timeouts HAD to happen and said it's impossible (his word) to keep a NAT state open indefinitely (I said I didn't want ridiculous amounts of time - a year is fine - but apparently it's not possible in SonicWalls to turn off the timeouts).
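For anyone fighting middlebox idle timeouts like this, the usual application-side workaround is TCP keepalives tuned well below the NAT's timeout. A minimal sketch in Python - the TCP_KEEP* option names are Linux-specific, and the numbers are just example values:

```python
# Keep a long-lived TCP connection from being dropped by a NAT/firewall
# idle timeout by sending periodic keepalive probes (Linux option names).
import socket

def enable_keepalive(sock: socket.socket,
                     idle: int = 60,       # seconds idle before first probe
                     interval: int = 30,   # seconds between probes
                     count: int = 4) -> None:  # failed probes before giving up
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
assert s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) == 1
s.close()
```

Of course, this only papers over the middlebox's behaviour - it doesn't fix a firewall that refuses to let you disable the timeout in the first place.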
If that weren't bad enough, they will tell you that a device is "obsolete", then sell you a new device that has LITERALLY THE EXACT SAME HARDWARE INSIDE.
They're a shady company that should never be trusted with anything related to security.
Thank you for attending my rant.
I'm confused. The story talks about tiny11 and nano11. The link for tiny11 goes to a page that doesn't have the word, "nano" anywhere on it. The Github page for tiny11builder doesn't have the word, "nano" anywhere on it.
If I go to Github, then to the developer's project page, I finally see a link to nano11:
https://github.com/ntdevlabs/nano11
Wouldn't it be good to include a link to that in the story?
The problem is your (admittedly indirect) use of Linode. Simply choose another provider that doesn't allow mostly unchecked abuse, and you'll be fine. Or smarthost through a better provider, if you don't want to move.
The problem with DDoS is that network operators are just shit. We've known the solution forever: egress filtering. Never, ever allow traffic to leave your network that doesn't clearly have your network as the source.
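The check itself is trivial, which makes the lack of egress filtering all the more inexcusable. A sketch of the BCP 38 logic using Python's standard ipaddress module - the prefixes here are made-up documentation examples, not anyone's real allocation:

```python
# BCP 38-style egress filter logic: a packet may only leave the network
# if its source address belongs to one of our own prefixes.
import ipaddress

OUR_PREFIXES = [ipaddress.ip_network(p) for p in
                ("198.51.100.0/24", "2001:db8::/32")]  # example prefixes

def may_egress(src: str) -> bool:
    """Return True if a packet with this source address is allowed out."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in OUR_PREFIXES)

assert may_egress("198.51.100.7")       # our address: allowed out
assert not may_egress("203.0.113.9")    # spoofed source: dropped
```

In practice this lives in router ACLs or unicast RPF checks rather than code, but the decision being made is exactly this one line.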
The secondary problem is that we've lost the ability to communicate with network operators and we have no effective sanctions. In the old days, if your network was getting attacked and you wrote to the admin of the source network, the admins there would remove the attacker or take other action immediately.
Imagine if you're at home and you get an interstitial that says a device on your network is being used for malicious things, and now it's on you to figure out what. Is that REALLY worse than insecure machines being able to just do whatever the heck the botnet owners want them to do?
There's a nice Calvin and Hobbes (not original) cartoon making the rounds that says:
"DEI initiatives were not put in place to ensure lower-qualified minorities could get hired instead of more highly qualified white people. They were put in place to ensure lower-qualified white people were not hired instead of more highly-qualified minorities."
Completely reasonable and fair.
When people nowadays talk about how certain things won't or can't be successful unless they please some corporate entity, a wonderful example to consider is TCP/IP. Imagine if instead of TCP/IP, we had standardized on what corporations had to offer.
Imagine if Novell tried to make Netware work planet-wide. What would that look like? Who would pay for it? How many people would have full time jobs doing nothing but babysitting it? How much would humankind suffer as a result?
Then I remind people that if corporations had their way, we'd be locked into their services and would need to pay them for those services in perpetuity. It's neither fun to imagine, nor is it practical. Look at how many services Google stopped supporting because of "profit", even when tons of people still wanted to use them. Profit is a very wasteful determinant for whether something is worth running.
It's also a good reminder that we need to be vigilant about companies that are trying to lock us in now. Cloudflare, for instance, wants to recentralize the Internet around them, a for-profit corporation in the United States, so we'll be "protected" from the very scammers and DDoS actors that they protect.
Let's learn from the successes of the past and not get tied into proprietary things and corporate lock-in!
There are two common ways to deal with the lack of floating point hardware. One is to compile the OS and software with software floating point routines. The other is to use traps which are called when floating point instructions are run which emulate the floating point instructions as though you have a real FPU.
The first is faster and more efficient for systems without FPUs, but even if those routines automatically use floating point hardware if it exists, the routines create a lot of overhead.
The second has a lot of overhead, though - for every single floating point instruction, you need to do a context switch, which is expensive.
Even so, since a vast majority of the computers used in the world have floating point hardware, the second is usually the default. There was even a recent discussion about this for NetBSD running on m68k.
There's a third way, which is to patch binaries as they're loaded and/or run to do an exception when a floating point instruction is run, but then also replace that instruction with a call to a software floating point routine. This is what can be done on AmigaOS with instructions that aren't available on the m68060 and are emulated. It might be worth considering something like that for NetBSD, although modifying running code in a protected memory OS is hardly trivial.
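To make the first approach concrete, here's roughly what a software floating point routine does: pure integer arithmetic on the IEEE 754 bit fields, which is all an FPU-less CPU can offer. A minimal sketch, handling only normal single-precision numbers and truncating instead of rounding:

```python
# Minimal soft-float sketch: multiply two IEEE 754 single-precision
# values using only integer operations, the way a software floating
# point library must on a CPU without an FPU. Normal numbers only;
# no NaN/infinity/subnormal handling, and truncation instead of rounding.
import struct

def f32_bits(x: float) -> int:
    """Raw 32-bit pattern of a float."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_f32(b: int) -> float:
    """Float from a raw 32-bit pattern."""
    return struct.unpack("<f", struct.pack("<I", b))[0]

def soft_mul(a_bits: int, b_bits: int) -> int:
    sign = (a_bits ^ b_bits) & 0x80000000
    ea = (a_bits >> 23) & 0xFF
    eb = (b_bits >> 23) & 0xFF
    # Restore the implicit leading 1 of normal numbers
    ma = (a_bits & 0x7FFFFF) | 0x800000
    mb = (b_bits & 0x7FFFFF) | 0x800000
    prod = ma * mb            # 24x24 -> up to 48-bit product
    exp = ea + eb - 127       # exponents are biased by 127
    if prod & (1 << 47):      # product in [2,4): renormalize
        mant = prod >> 24
        exp += 1
    else:                     # product in [1,2)
        mant = prod >> 23
    return sign | (exp << 23) | (mant & 0x7FFFFF)

assert bits_f32(soft_mul(f32_bits(1.5), f32_bits(2.5))) == 3.75
```

Every add, multiply, and compare in a soft-float build expands into a routine like this, which is exactly the overhead being traded off against the context switch per instruction of the trap approach.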
Microsoft is SO GOOD at reinventing a thing, poorly, with long term security issues. I don't know that professional programmers with that goal could make those kinds of problems on purpose.
It's good to see that they're capable, finally, of admitting when something is shitty. For how many decades did people have to endure macro viruses in documents JUST IN CASE someone might use macros and wouldn't want others to have to answer a prompt or something?
I have a stock Amiga 3000: 25 MHz m68030 & m68882, 16 megs of memory, built in ECS and SCSI, plus a Zorro ethernet card. I love that I can still run a modern OS on it, with working TLS, and I can browse web sites, even this one, at least as well as some Internet appliances. I can work on common document types, ssh, write shell scripts, print (!), find modern programs and play games, all on hardware that is functionally the same as what someone could buy in 1990.
Another Amiga I have is an Amiga 4000 with a 66 MHz m68060 and a fast SSD on UW-SCSI. It's so responsive that the only contemporary machines that feel so quick are Arm-based Macs.
This kind of OS, this kind of culture, this kind of ecosystem has a reason to exist. Many people haven't had a chance to play with these machines, so I really think more people would get it if they had a chance to sit down and do real things.
So updates for m68k AmigaOS in 2025? It doesn't surprise me, because it's so damned good. It does make me happy, though :)
We, at least readers of El Reg, aren't going to be fooled into thinking that Cloudflare is this wonderfully clever company that can do all the things they're doing yet are somehow clueless and inept when it comes to this issue. They know exactly what they're doing.
They keep going down the same path with no signs of changing course. For instance, they don't want people to report abuse, so they stopped processing abuse complaints sent to abuse at cloudflare.com and send an auto-reply that says to use their web interface. Their abuse reporting web page has gotten worse and worse, and will likely continue to degrade. Do they really not have the technical acumen to fix things? Really?
For instance, the fields on the abuse reporting page only allow a certain number of characters. If you paste too many, you're stopped from adding more, but you can't submit the form and you're not told why. You have to know to remove 100 characters or so from the abuse evidence field, as if that's supposed to be obvious. We wouldn't want to overload their poor servers!
They've added a CAPTCHA to the abuse site. Apparently poor Cloudflare's web server that handles abuse complaints is just too fragile to work without it? Or do they simply not want to hear from people they decide are undesirables?
They added a time limit that's shorter than the amount of time a reasonable human needs to copy and paste a second abuse report. There's no reason in the world to do this aside from wanting to make reporting abuse as arduous as possible.
Their abuse staff are either playing stupid or are actually incompetent. Some phishing sites show an error when you use certain browsers, but not others. But try to tell Cloudflare this in response to their "we see no evidence", and they just keep replying with the same form response.
Are they REALLY this incompetent? Or are they evil, and want to recentralize the Internet around them, and want to protect the scammers they host? You decide.
I simply don't believe you and think you've believed the marketing hype from Cloudflare. Let's look at what you wrote:
"stopping the multitude of DDOS bot attacks"
Bots are ubiquitous on the Internet. Stopping bots, though, isn't something that should be an afterthought - that is, you shouldn't need Cloudflare to do it, even though it's nice to have fewer bots actually connecting to your servers. A server that can't stand up to bots on the Internet does not deserve to be on the Internet.
Does "DDoS" actually mean what you think it means? That's where the disconnect is. You're almost certainly not getting a proper DDoS attack. You're just getting "attacked" by lots of bots. That's not the same as a DDoS attack. Please look it up if you're still unsure.
Please don't be an apologist for Cloudflare on a technical site by implying that your site wouldn't be online if you didn't use Cloudflare. It's disingenuous.
With their comically bad approach to security - security through stupidity, I call it - I'm surprised that anyone still buys SonicWall.
From EOL'ing devices that are literally identical hardware to new devices, to charging for security updates even when still in warranty, to having tech support that tells us nonsense such as how we can't have arbitrarily long TCP connection timeouts because all TCP connections must expire quickly (let's assume we're not talking about past 2038 and just talking about connections that could live multiple days or weeks), how ALL computers connected directly to the Internet will be compromised without firewalls, how forging RSTs between machines communicating across local subnets is a FEATURE, is good and "improves security", and how VPNs are "complex" and SonicWall-to-SonicWall connections that don't work consistently are a "normal and expected" problem...
I have quotes saved from SonicWall support about all these things. They must be really good at marketing.
The good thing is that this didn't affect people. First, a brand new capacitor is likely to form at least OK, if not well, from a reversed polarity, as opposed to a properly used capacitor that's then introduced to the opposite polarity. Second, this just acts to clean the voltage from the power supply, so it's not strictly needed. Third, Macs of that age don't need negative voltage for anything important, so long as the expansion card doesn't need it. Sound works, as does RS-422 and LocalTalk, although it's possible there are machines / devices to which serial can be connected that wouldn't be happy with the lack of proper negative voltage.
I have quite a few 68030 and 68040 Macs that I've built into various cases, and none need negative voltage, although it's really not hard to add in most instances.
The people who claim the sky is falling are usually the ones who erroneously claim that the intention of IPv6 is to replace IPv4. No reasonable person ever said that IPv4 needs to be or will be replaced. IPv4 isn't going to be turned off. Rather, IPv6 is obviously needed by, for instance, cellular carriers that might have tens of millions of customers and perhaps hundreds of thousands of IPv4 addresses.
The idea of IPv6 is that it makes connectivity better. Connecting to the Internet via NAT means extra work and complexity, because each NAT session has to be tracked for its entire lifetime. When you have NAT routers, whether home devices or fancy, expensive CG-NAT devices, that have too many sessions active at one time, the oldest NAT sessions (usually) get dropped before the session has come to its natural end. We can see this with, for example, AT&T fiber routers that have a NAT & firewall state table that's 8192 entries large. This is in 2024! This is how many NAT states you can get even if you get 10 gigabit service and have a hundred devices behind it. It's ridiculous.
Fancy, high end, ISP scale CG-NAT has limitations, too. Sure, devices have enough memory to keep track of millions of NAT state entries, but you can only have 2^16 (65536) possible active NAT sessions per IP address. Large CG-NAT deployments also have artificially low state timeouts, as anyone who uses Starlink can tell you.
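The effect of an undersized state table is easy to model: once the table is full, every new session silently evicts the oldest one, finished or not. A toy sketch, with an 8-entry table standing in for that 8192:

```python
# Toy NAT state table with oldest-first eviction: once the table is
# full, each new flow silently kills the oldest tracked session,
# breaking that connection mid-life.
from collections import OrderedDict

class NatTable:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.sessions = OrderedDict()  # (src, dst) -> translated port
        self.evicted = 0

    def open(self, flow, port):
        if flow in self.sessions:
            self.sessions.move_to_end(flow)    # refresh on traffic
            return
        if len(self.sessions) >= self.capacity:
            self.sessions.popitem(last=False)  # oldest session dies
            self.evicted += 1
        self.sessions[flow] = port

nat = NatTable(capacity=8)
for i in range(20):  # 20 distinct flows through an 8-entry table
    nat.open((f"10.0.0.{i}", "example.com:443"), 40000 + i)
assert len(nat.sessions) == 8
assert nat.evicted == 12  # every flow past capacity killed another
```

Scale the capacity to 8192 and the flow count to a house full of devices, and you get exactly the mystery disconnections people blame on "the Internet being flaky".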
The point is that if IPv6 were ubiquitously available, your cell phone would connect via IPv6, and everything would be golden. Older devices and connections to legacy sites / services that aren't yet on IPv6 would still work, and we would simply be using NAT only when necessary, and certainly not for a majority of traffic.
That's it. The sky isn't falling. Nobody is taking IPv4 away. Thank you for coming to my TED talk.
You're not wrong, but that's not the point I was making.
gcc 10 can compile gcc 11, gcc 12, gcc 13. It's quite normal to have a toolchain in use from several years ago that's still relevant and useful now.
rust from a year ago can't compile Firefox. rust from a year ago can't be used to bootstrap rust from now. So if you have a modern system that was 100% up to date a year ago, you have to - no exaggeration - compile a newer but not current rust, which you then use to compile an even newer rust, which you can then use to compile Firefox.
Sure, everyone downloads binaries, but that's not the point.
It's not sustainable to imagine finding a specific version of rust from a specific part of a specific year if you want to compile a certain version of a package like Firefox. It reminds me too much of how every IT person used to have an old laptop around to run a specific version of Java, along with some older Internet Explorer, because Java breaks too many things, and our "write once, run anywhere" promise was more like, "if you want to configure your fibre channel switch, you'll never try to use a JRE newer than X".
Are they implying that the people who packaged the OS are at fault, since tmpfiles.d shouldn't have entries for things that aren't, you know, temporary files, like one's home? Or are they reminding us that life is fleeting and that nothing is really ever permanent?
By their logic, it wouldn't be unexpected if running "rm" with no options removes everything in the current directory... Not sure I like that thinking.
Microsoft can't stop a network of spammers / scammers from sending phishing spam from Outlook.com claiming it's from MAILER-DAEMON. Does anyone seriously think they have the wherewithal to identify malware when they're deathly afraid to do the slightest thing that might affect the status quo?
One thing SonicWall is known for (besides horrible devices, insane defaults and employees that don't know anything about networking) is that EVERYTHING costs money. If you're not paying for constant support for your devices, you don't get updates. Even if you do pay, you don't always get updates if your equipment is "too old", even when the hardware is literally the exact same guts as the "new" device - you're told you must buy the new device.
SonicWall is a bad, scammy company. Anyone who runs into issues with their SonicWall devices should be encouraged to get better devices.
Damn, this is telling. Consider the fact that the costs would obviously be much, much less than what any other non-Microsoft owned company would pay.
It makes me think of when Google cloud "showed off" their cloud's prowess by calculating 100 trillion digits of π using their "high performance" cloud offerings. They, conveniently, never mentioned price once. I roughly calculated that I could buy all the hardware I'd need (high end Epyc hardware, 1/2 petabyte of storage), pay for an expensive hotel for three months, run the calculations, pay myself handsomely, then keep the hardware when I'm done, and it'd still have been significantly cheaper than if a customer had to pay Google for what they ran.
If Linkedin can't use Azure at steeply discounted / possibly free pricing, then what does that say to anyone else considering using Azure?
They want to recentralize the Internet around them.
They want to host and say they don't host, so they don't have to handle abuse, by redefining the word, "host".
They want to host known spammers and scammers because "free speech".
They want people all over the world to send their DNS queries to them via DoH.
They want to marginalize most of the non-western world by having CAPTCHAs on every web site.
And so on.
They try to distract from their nefarious activities using tons of seemingly positive things, like cheerful participation on Hacker News and by offering free services (which do little more than begin the process of addiction and dependency).
I'm glad they're dumb enough to have outage after outage showing how the Internet is worse for using Cloudflare, because if their services worked perfectly, many people would never know.
WTAF? The driver takes off, finds a cat in the back, and just ejects it in some random spot? That driver should be criminally charged.
The fact that we humans treat animals as property and not as things that have a right to exist and not suffer says something about our value system, and it's not good at all.
Notice the phrasing: "SIGINT enabled CPU". This doesn't necessarily mean that Cavium directly participated. It could just as easily be explained by Cavium implementing something incorrectly, or implementing the wrong thing (Dual_EC_DRBG), and the NSA had confirmed that anything using those built-in CPU features is exploitable by them.
I created a new email account when I stayed at an MGM-owned hotel. I started getting spam at that email address, including some that had personal information that could only have come from MGM's room booking servers. I tried contacting them about it. Did they care? No. Did they send inane copied-and-pasted paragraphs of irrelevant distractions - like suggestions on how to remove malware - that only showed their ignorance? Yes.
Did I report them to my state's Attorney General's office for not disclosing a breach? You bet I did.
They want to help people move from an old, solid and established language to something that'll likely get stuck on a specific version of a JVM that can't be updated any more five years from now and will require a dedicated machine for fear of toppling the whole fragile edifice... so they can get more hands on the code.
That seems like lighting the house on fire to get rid of the rodents.
I can't imagine a meeting where people who work at a motherboard company discuss doing something this dumb, then decide it's a good idea and plan to do it, then a team of programmers get tasked to write the code, all the while not a single person points out how ridiculously insecure this whole thing will be.
Are people really that dumb? Spiteful? Evil? I really don't know any more!
For ages they've benefitted by people not making any sort of distinction, and therefore assuming Intel. For instance, since the late '80s, books and courses teaching assembly language never specified the architecture, because OF COURSE it was x86.
An example is that "x64" isn't a real thing - the "x" is supposed to mean it's a placeholder, and there are no 80164 / 80264 / 80364 / 80464, et cetera, processors. But if Microsoft / Intel can make most people who see "64 bit" in relation to a processor assume it's referring to amd64 / x86_64, then that'll make them happy.
Intel knows ARM is a serious, real threat, so if they can co-opt the word "processor", they will. They want people to be confused when they hear "ARM processor".
The "campaign to have the company deny service to the forum" certainly wasn't why Cloudflare stopped hosting Kiwi Farms, or at least not directly. They stopped because of how many people in tech decided that Cloudflare's inaction was unacceptable and decided it wasn't a good look to be a Cloudflare customer. The campaign helped make more people aware, though.
The number of people and companies switching to other services probably scared the poop out of CEO Matthew Prince and VP Alissa Starzak. That's the reason why Cloudflare dropped Kiwi Farms.
Now Cloudflare acts like they're the victims, using words like "censorship" to get people riled up. Really, they should stop pretending that free speech includes illegal stuff, and that "illegal" is only defined as that activity which is so bad that a jury must be convened and an indictment filed. Anything less than that, according to Cloudflare, should be not only allowed, but protected (and made into profit, of course).
Web developers and end users typically don't know much about security. They add plugins without having any way to know how to compare long term maintenance with short term convenience.
As a systems administrator who hosts quite a number of Wordpress instances, I have to say it's a HUGE problem that plugins can't easily be disabled from the perspective of the server without taking the chance of breaking the whole site. This is rather stupid and makes any long-term Wordpress site on the Internet very problematic.
What really needs to happen is that sites need to be able to run even if a plugin is removed / disabled / deleted, and systems administrators need to be able to do this when bad plugins are being exploited. This is because we can't expect web developers and end users to know to log in and disable them themselves.
Because this isn't the case, I've somewhat often had to disable (chmod 0) plugin files that cause a site to stop working, then let the client figure it out. Emailing them to tell them to fix it ASAP doesn't work (although I do this all the time, anyway), because security is an abstract thing they don't understand until it's already too late. So I email them, then break their web sites, then let them scramble to fix it.
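For what it's worth, the chmod-0 trick can be scripted so it's at least easily reversible. A hedged sketch - it assumes the standard wp-content/plugins layout, so verify the paths on your own install before using anything like it:

```python
# Disable a WordPress plugin from the server side by stripping all
# permissions from its directory, recording the old mode so the
# change can be undone once the client has dealt with the problem.
import os
import stat
import json

def disable_plugin(plugins_dir: str, name: str, record: str) -> None:
    """chmod 0 a plugin directory, saving its previous mode to a record file."""
    path = os.path.join(plugins_dir, name)
    old_mode = stat.S_IMODE(os.stat(path).st_mode)
    with open(record, "w") as f:
        json.dump({"path": path, "mode": old_mode}, f)
    os.chmod(path, 0)  # the "chmod 0" mentioned above

def restore_plugin(record: str) -> None:
    """Undo disable_plugin() using the saved record."""
    with open(record) as f:
        saved = json.load(f)
    os.chmod(saved["path"], saved["mode"])
```

It doesn't solve the underlying problem - WordPress still shouldn't fall over when a plugin vanishes - but it makes the break-glass step auditable and undoable.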
It's really, really stupid, and in my mind it's yet another example of how Wordpress wants to coerce everyone to move to wordpress.com because almost all other installations will either break or become insecure if unmaintained.
OVH is a veritable cesspool. They calculatingly intermix legitimate clients with spammers, scammers and scanners, and they ignore abuse complaints. They know exactly what they're doing.
Paulin wants "open" because he doesn't want Google and Amazon to have a monopoly on hosting scammers.
To paraphrase Jim Samuels (fortune), Intel is like the guy at the party who gives cocaine to everybody and still nobody likes him.
They have their one hit wonder, the x86, and they've shown time and time again that even after throwing money at everything, they're really not innovators. They're scared now because their cash cow has become sickly because of neglect, so we see stuff like this.
It wasn't very long ago that Samsung inadvertently bricked huge numbers of Blu-Ray players due to the most basic of bugs in an XML parser:
https://www.theregister.com/2020/07/18/samsung_bluray_mass_dieoff_explained/
Samsung's ineptitude is why I tell people who buy Samsung TVs to simply use them as displays. Get a Roku or Apple TV, and don't connect the Samsung at all. Problem solved!
I've been telling people about this for years - once our address books are out in the open, then we're going to start seeing robocalls with spoofed caller ID which uses the numbers of people we know and expect to hear from.
The shitstorm has already begun.
Updated IPMI and BIOS on a Supermicro system. IPMI took more than 45 minutes. BIOS had to be updated twice because system wasn't in "manufacturing mode". All BIOS settings had to be manually reset.
Supermicro didn't announce updates, nor did they say whether these updates correct the known Intel ME problems, but considering that there are many BIOS updates for many models of Supermicro motherboards, all dated sometime in October, I wouldn't be surprised if they do a "fix first, announce later" kind of thing.
This was a test to see how long updates for other Supermicro systems will take, and the results are pitiful.
Let's hope this was the official fix and I don't have to spend another hour or two to upgrade later.