* Posts by doublelayer

10378 publicly visible posts • joined 22 Feb 2018

Boffins: cloud computing's on-demand biz model is failing us

doublelayer Silver badge

Re: Remember

Your point being that you can buy boxes with GPUs in them? Because if they can do their research on one of those, great, although depending on how long they need it, renting a similar cloud machine would probably still be cheaper. The problem is that most of the kinds of things they're running can't be run on one of those. Depending on the work, they'll either need a very large number of those boxes or they'll need something completely different, for example something with a lot more CPU power, since that box is optimized only for GPU work.

Or, if they're using GPUs, they might want faster performance from them. That box is as cheap as it is because it doesn't use typical VRAM with its GPUs. In order to fit large models in RAM, they've gone for LPDDR5 shared with the CPU, which is cheaper but much slower than what you'd find on a normal GPU or on their other accelerators. The cluster I had access to had a lot of real GPUs and was no doubt very expensive to create and operate. It only made sense because the cost could be shared among the biology, physics, and astronomy departments as well as little extra users like me, and people still had to queue and negotiate for sufficient access. A $4000 box whose GPU performance numbers only look nice if you use Nvidia's marketing figures for FP4 performance, which most research cannot use, is really not the same thing.

doublelayer Silver badge

Re: Sounds like a good use for spot instances ..

The article covered that. They can't do their computing in little chunks that can get interrupted. They need a bunch of instances all running at the same time that stay running until they're done. With some modification of the software they're running, they could probably make it more fault-tolerant so it can restore itself when instances become unavailable, but that won't fix the biggest problem of needing a lot of capacity at once. It just means it will be stalled rather than broken.
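The fault-tolerance modification usually comes down to checkpointing: periodically saving enough state that a restarted instance can pick up where the interrupted one left off. A minimal sketch in Python, assuming the work splits into an ordered list of items (the filename and functions here are invented for illustration):

```python
import json
import os

CHECKPOINT = "progress.json"  # hypothetical checkpoint file

def load_checkpoint():
    # Resume from the last saved index, or start from zero.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_index"]
    return 0

def save_checkpoint(next_index):
    # Write to a temp file and rename, so being killed mid-write
    # can't leave a corrupt checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"next_index": next_index}, f)
    os.replace(tmp, CHECKPOINT)

def run(work_items, process):
    # If an instance disappears, rerunning this resumes after the
    # last item whose checkpoint was saved.
    start = load_checkpoint()
    for i in range(start, len(work_items)):
        process(work_items[i])
        save_checkpoint(i + 1)
```

As described above, this only converts "broken" into "stalled": the job survives losing instances, but it still needs the full capacity to finish.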

Unfortunately, no matter how you go about getting that capacity, it is expensive. You can wait until the cloud provider you're using has that much free, or you can buy the hardware for the few times you'll use it, but any way you manage it, it comes at a high price.

doublelayer Silver badge

Re: Remember

Except for the types of projects covered in the article, it's not cheaper to use your own hardware in the long term, because it would be idle for a lot of the time. Buying your own means obtaining a lot of expensive hardware and facilities to install it in, just to have it powered down most of the time. Bigger universities often build a computing resource which is shared between project teams in some way, where they can express the amount of compute they need and get put in a queue*, but individual teams are not going to have the funding to do that, so if they can't pool resources, they have fewer options.

The problems they are having come from very spiky use cases, which are a strong point of the cloud but, as the article points out, not infinitely so. The cloud providers aren't too eager to have lots of idle expensive kit either, so they don't overprovision enough for lots of instances to be set up simultaneously. Nobody wants idle expensive kit, which is why this is hard no matter who you get your equipment from. Running your own hardware is not a magic solution unless you started with the magic massive grant, and if you did, the cloud bill might still be the cheaper option, leaving more of the grant for other expensive things.

* The shared computer has its costs as well. When I was a student, I had access to that and the ability to run jobs. You had to schedule those well in advance and occasionally coordinate with other researchers to make sure you weren't going to cause problems for one another by running too long or interrupting something before it was done.

Rideshare giant moves 200 Macs out of the cloud, saves $2.4 million

doublelayer Silver badge

Re: Yet another case..

Maybe be a little less general, though, unless you want to be wrong sometimes, which doesn't help. Cloud can be more expensive a lot of the time, but if you want a way to make it a lot more comparatively expensive than it otherwise would be, try to run Macs in it. Macs clearly weren't designed for this: you can't run Mac OS in a VM on commodity or custom hardware, and there is no DC-scale hardware you can buy from Apple. Combine that with the requirement to rent them by the day, which the article and Grab's statement suggest is Apple's doing although I thought that was AWS's choice, and you've got a recipe for something that is even less connected to hardware prices than normal cloud hosting would be.

If you use this example to predict what other price comparisons would look like, you're going to be making overestimates which won't convince many people who check your calculations. Comparing prices requires creating detailed plans of what infrastructure you're going to use and what you need to set it up. People with an agenda, usually a sales commission, who decide that it can all be done from the cloud, it's definitely less expensive, and let's not bother with those boring plans, are often wrong and always annoying. Unfortunately, the people who have an agenda and say that running in the cloud is always less functional and more expensive, and you don't need to consider those boring plans to know that, are equally so.

'Vibe coding' named Word of the Year. Developers everywhere faceplant

doublelayer Silver badge

Re: Garbage

I had not read about that before, and I spent far too long trying to parse the 8-stage example in that article. If that is anything like what people were actually supposed to enter, I'm surprised they bothered.

doublelayer Silver badge

Re: Vibe

In the sense that the invoice gets chopped up into little bits that you get one by one, yes. In the total expenses sense, though, they are more similar.

You'll never guess what the most common passwords are. Oh, wait, yes you will

doublelayer Silver badge

Re: What about username?

The username is going to be stored in cleartext because they use that to identify you. If you want to have more random data, stick it onto your password to make it longer*, because that's the part that gets hashed if they're doing it correctly. Admittedly, I do kind of do this by using separate emails for different sites, but that's for spam detection and prevention, not account security.

* If they have a maximum password length, then you have a bit better of an argument and a reason to worry about what they're doing with the password you give them.

doublelayer Silver badge

Re: leave their door key "hidden"

"I wonder how many of the passwords covered in the article were setup as a honeypot ?"

Given that the point of a honeypot is to know when someone got in and what they did, that would be none of them. Honeypots are a little more complex than having a bad password on someone else's site. In addition to being useless, deliberately trying to do that is likely a terms of service violation on that site.

doublelayer Silver badge

Re: Where are they getting the passwords from?

Some of those will be cleartext or unsalted hashes, both of which make frequency analysis easier. Others will be salted hashes which they tried some old favorites against to make sure they're still in use. Unfortunately, good password storage is another thing we've known how to do for some time, and yet we will probably never stop seeing yet another system store passwords badly.

UK judge delivers a 'damp squib' in Getty AI training case, no clear precedent set

doublelayer Silver badge

Re: Stored or not stored?

"Always relying on the law to determine right or wrong is trite logic that fails hard - as there's a lot of laws that are immoral - but magically those laws are moral because the law says so."

Read the comments again until you understand that I did not say that. There's a difference between legal and moral, and my point is that, no matter whether you think something is moral, if the law says it's a crime, you have to change the law to make it not one. If you keep reasoning that because you think it's moral it must be allowed, you're going to be very surprised when you lose in court. The rest of us will either try to get the law changed or will work within it.

The arguments so far that it is moral have been...well, they haven't really been arguments. You just say it. The closest things to arguments we got were that it's easy, so you might as well make it not an offense, which is a really stupid attempt. Your new one is about whether certain things count as an offense in the UK, which is mostly unimportant because you have clearly stated that commercial violation of copyright is one, and the people the article is talking about are very commercial indeed. This thread has clearly outlived its usefulness, but I suggest thinking about your arguments next time you want to convince people; I'm much closer to your view than you recognize, but you've used and defended a lot of clearly bad arguments which don't respond to any of my points and appear to have been picked out of a bag of cliches.

doublelayer Silver badge

Re: Stored or not stored?

"It is an error to equate unauthorized copies to theft - as making a copy of digital data doesn't change the original by one bit, while robbing someone takes something physical from them."

1. I didn't. The robbery mentioned in the post referred to physical robbery and was an analogy pointing out the fact that the ease with which a crime can be committed is no argument at all for whether it should continue to be a crime.

2. We have lots of things equated with theft which work in exactly the same way. For example, wage theft is a crime where I do work for someone who agreed to pay me but criminally withholds the money owed. By your logic, they did not take anything I had and keep it away from me, yet it is still theft because I am owed money and they are deliberately not giving it to me. As the law stands, someone owning the copyright has the right to place terms on the distribution and use of their work, and you commit a crime if you violate those unless you and your use comply with explicitly written exceptions. Most such crimes do take the form of refusing to pay money which the owner has the right to require if you obtain the covered work. Others, for example violating an open source license with specific terms, are less financial but are still criminal and can result in financial penalties.

As long as you continue to use flawed logic like this to either claim that these aren't crimes or to distract from the fact that they are crimes by quibbling with language you understand as well as anyone else, you are and will be wrong. You can argue that they should not be crimes and thus that we should consider changing laws. We would likely agree on some of that. For example, the specific exceptions where you can use copyrighted work for free without permission, I don't think there are enough of those. I also think copyrights last too long. You have not made any such argument, however.

doublelayer Silver badge

Re: Stored or not stored?

Even if your claims were correct, which they aren't, that crimes aren't being punished does not make any case about whether they should be crimes. You can try to make a case that the law should be changed*, but that it's easy to get away with isn't any argument for that point of view. Otherwise, I could say that, because I could probably get away with robbing you, it's totally justified for me to do so.

* I wouldn't suggest you bother though. I've seen your case. It's a very wordy version of "But I want it".

doublelayer Silver badge

Re: Stored or not stored?

Well yes, in order to have consequences for your crime, you have to be caught first. Not really a useful point here, is it?

And in my opinion, copyright supports cultural progression and violating it obstructs it by making it much harder for those who progress culture by making creative works. You want to use those works, meaning you recognize that they have value, but you want to find a way where you get them without having any benefit accrue to the person who did all the work. If I had total control over copyright law, I would weaken it, but it would remain because none of your ill-considered replacements have or will work.

doublelayer Silver badge

Re: Stored or not stored?

This rubbish again? If your employee needs to learn from some materials and they steal copies, then yes, they can be punished for doing so and if you told them to do it, so can you. The AI companies did not obtain licenses to the stuff they trained on because they were not willing to pay for the things that, if we used them, we would have to pay for or commit a crime.

doublelayer Silver badge

Re: Stored or not stored?

Without permission, yes it does. Try pirating some software from a litigious company and find out. There are terms on copyrighted works which apply to your use or copying of it. If you're a for-profit company, non-profit company, or individual, you're equally culpable and can be punished for doing so. Evidently, if you are an AI company, laws don't apply to you. I wonder if any other ones do? I think I need to found an AI company.

UK agri dept spent hundreds of millions upgrading to Windows 10 – just in time for end of support

doublelayer Silver badge

Re: Solution for low salaries in gov IT

And that life cycle is still a little shorter than Windows 10's. It was released in June 2014, a month before Windows 10 was. It entered extended support in July 2024, 15 months before Windows 10 did. It runs out of extended support in June 2028, anywhere from a year and a half to three years before Windows 10 depending on what version you're prepared to use. Everything needs either somewhat regular attention to updates*, a rigorous plan for how you're going to deal with very old stuff, or a willingness to take the risk of very old stuff. Unfortunately, a lot of people decide to go with option 3 but their reasons are bad and selfish**.

* The benefit of many open source operating systems and distros is that the attention to updates is often somewhat easy and free, but you still have to do it. Linux being free doesn't help you if you never tried running on anything after a 2.6 kernel, because 2.6 isn't supported anymore.

** Often, the reasons include things like finance not wanting to spend money on maintenance or IT not wanting to do the work to update it. Whether the users, customers, business, or whatever this thing makes possible are at risk is often treated as a much smaller factor than it should be. And yes, I do think IT is sometimes to blame. Certainly not always, as I have met many admins who know exactly how important a system is, what will go wrong if it fails, and fight a lopsided battle to make that happen. Unfortunately, I've found lots of people who don't like change or work and happily ignore it until something breaks and they don't know what to do.

doublelayer Silver badge

Re: Ha, ha, ha

It's fun to blame Microsoft, but let's consider this. The migration we're talking about happened from 2023 to 2025. In other words, it happened well after Microsoft announced Windows 11 and announced the date for Windows 10 security updates to end. Unless the department went to the storehouse of used hardware, the machines they bought not only support Windows 11, but almost certainly shipped with it and had to be overwritten to return them to 10. All of this would have been well known from the start, since the necessary information was available from the end of 2021, with most of it announced months earlier. This is not Microsoft's fault.

Trump turnabout sees him re-nominate amateur astronaut Jared Isaacman to run NASA

doublelayer Silver badge

Those are all true without necessarily having any effect on his qualifications for the post. So he has a lot of hours in the air, great; we will assume that this means he is a well-qualified pilot, although theoretically there are ways you could get lots of hours without being one. Being a pilot is not sufficient, or even necessary, for administering NASA. If someone else were chosen on the sole basis of being a seasoned air force pilot, that would not be very helpful. The astronaut part is even less convincing, because he did not have any of the things that distinguish actual astronauts. He paid for an interesting experience. That's no recommendation at all.

NASA has a lot of complicated responsibilities. Even if we ignore the building space stuff part on the theory that private companies, and given his friendships I think we know which, will be doing all of that, they still have lots of oversight and management of that technology and equipment to do. No, managing a business of building someone else's payment systems into someone else's point of sale hardware, no matter how successfully, is not automatically experience in managing extremely safety-critical equipment with extremely specific and rigorous certification procedures. If we assume that the private companies will handle all of that, then we're left with an organization whose primary mission is scientific, whatever their funders decide to say, since they're in the business of deciding which expensive hardware to send and exactly what they hope to get for doing it. If they want to have a village on the moon (good luck), there's a lot of knowledge gathering required unless they want to have a dead village and a contaminated moon.

None of this is an indictment of Isaacman's capabilities, but nor is any of it support for them. His statements are somewhat encouraging, because I would rather have more space research than a doomed attempt to get a human somewhere for more bragging rights (we stopped manned lunar missions, not because we had to, but because we could accomplish more with less). That may be as good as we can get. But no matter how many hours he has in the air, it's not going to make a difference to the important part of the job because never will his responsibilities require him to personally pilot a fighter to obtain a goal.

Amazon complains that Perplexity's agentic shopping bot is a terrible customer

doublelayer Silver badge

Re: Agree with nobody

There are a lot of problems with AI that could be fixed by having the makers of the AI pay for the costs. They are very good at making sure they never have to, and as things go, this is one of the easiest ones for them to get out of. When a user decides to give Perplexity permission to spend their money, they're choosing to run software and take the risk of getting bad purchases. They take the financial cost before the return is processed. Therefore, the cost is on them. If we compare to stealing copyrighted data or even burdening servers with floods of requests, the AI company is much less culpable.

Since they have never been punished for doing illegal things in the first case or legal but harmful things in the second, there's almost no chance they'll get anything for not being great purchasing bots. Instead, the costs will likely be put on all Amazon customers, eventually all customers of anything Perplexity can buy from, as they change their return policies. Free returns was one of those things that was nice as long as most people weren't unreasonably overusing it, but like anything with a large shared potential cost, it won't last forever when it gets too expensive.

doublelayer Silver badge

Re: Agree with nobody

I don't know that I'd count out the option that the bot orders something wrong and the user ends up having to return it. In my experience, you have to be pretty thorough at checking product descriptions, specs, and reviews whenever you have any special requirements, and AI bots tend not to be great at that. I have not allowed the bot to purchase things on my behalf and am not willing to, but I did just try making Perplexity find a specific item for me, and it did not give me good results. I think both possibilities are likely, and possibly even that both are happening. I would not jump to the conclusion you appear to have reached, that this is all Amazon's doing; after all, I have stated that I support nobody, and if Perplexity were actually doing a great job and avoiding Amazon's unilateral bad behavior, I would support them.

doublelayer Silver badge

Agree with nobody

I can't agree with Amazon because they're effectively saying that users are supposed to get Amazon's permission before using software of their choice to spend their own money. Neither of those things should be Amazon's business. However, the two reasons Amazon might care about it are either that Perplexity messes up so often that Amazon's returns costs are noticeably increasing, which would indicate that Perplexity's software is crap and Amazon is paying for it, or that Amazon is trying to steer people toward purchases it makes more money on, which would mean Amazon is abusing its market power and expecting the right to enforce it. Nobody comes out of this looking good.

Tesla board wants to grant Musk $1T in stock, Norway wealth fund says nope

doublelayer Silver badge

"The argument that it doesn't cost Tesla anything to hand Elon stock isn't correct."

Just checking, who was making that argument? Because my argument was that A) 11% is hideously expensive just for that, and B) the overhead of trying to do that would be so much greater that it would be disastrous to attempt.

doublelayer Silver badge

Some of those milestones are really hard. Some of them are not. For example, getting 20 million Teslas sounds next to impossible except they're already starting with eight of those millions in the bag. The reward amounts are also huge. The headline figure of a trillion in stock does require a bunch of unlikely things, but for context, it means granting an award of about 11% of the company if he succeeds in addition to the amount he already has, and the company and investors will pay the inevitable price of obtaining that 11% to hand to him, making it substantially more expensive than it sounds (and it sounds ridiculous). The combination is that he can get part of that award for completing the easy milestones and still get far more shares than most knowledgeable board members or investors would be at all happy with.

Deploying to Amazon's cloud is a pain in the AWS younger devs won't tolerate

doublelayer Silver badge

Re: Gone

And for good reason because those files have to live in an environment which is a lot more maintainable if it's also specified that way. I've seen plenty of things that take the form of a repo of application code. FTP that up and run it and...it doesn't work because you need a database. Where's the database? That wasn't in the code. Is there a copy of it somewhere? Can I make a blank one with a script or something? Often, the answers were yes, but you don't know where, and no. Oh, and did I mention that you need a specific config for the HTTP server? It's just 120 lines, nothing complicated. Have fun reverse-engineering that because it wasn't stored alongside this because the HTTP server config doesn't live with the application code.

The complexity of cloud systems is indeed as painful as the article describes, but some of it has a real and good reason behind it. A lot of things are nicer when they can be expressed as deterministic code than when they take the form of vague, possibly incorrect or missing documentation or snapshots and images without context*. That is painful too, just not to the person who initially developed it.

* For example, I have an application from a place I've volunteered. I've got a big archive full of code and I have a backup of a database. The organization doesn't want to run it, which is great, because I would have to guess what to do with this stuff if they did. There's no documentation of what this does, how to install it, what's in the database, and for all I know, there may be components written by the original contractors which aren't in my big archive. I have no way of knowing whether this can run, even though theoretically I do have copies of the files concerned.

Canonical CEO says no to IPO in current volatile market

doublelayer Silver badge

Because, by doing an IPO, they and/or the company get more money. They can use that either to fund projects they didn't have the budget for or to have more money of their own. You can keep a company private if you don't need it to grow, or if you have a plan for how you're going to grow it while staying private, but sometimes an IPO is the easiest way of getting money quickly, and sometimes there are things you can't do unless you get money quickly. I don't know what Canonical would want to do with that, but several things I think we'd appreciate would probably not be possible without it*.

Also, a lot of people who found companies aren't as fortunate as Shuttleworth and intend to make some money from doing so. This may not apply to Canonical, but it's not that unreasonable for someone to found a business with the hope that they will make money at the end of it, and an IPO is a technique for getting that money without simply liquidating the business. Before adding another IPO = destruction comment, know that the alternative for people wanting to leave their company and get paid for doing so is selling the entire thing, and I'm not sure you would like any of the companies that would bid for it.

* To pick a single example, and almost certainly not a thing they're ever going to do, I'd quite like a working open smartphone. Canonical tried that. It failed. They never got the software fully working and they didn't have hardware. But, if they felt it was worth it, raising a lot of capital would allow them to hire people to write that software fully and get hardware they can make sure will run it. We can see what happens if you try it the private company way; we have existing Linux smartphones which have old, underpowered hardware, take years to become available, and run software that can't do basic things and would be unsuitable for all but ardent fans of the concept. Unless a million people suddenly get interested enough in the idea and pledge preorders, the resources to fix that problem would not show up with another private effort.

Italian tech company promises to make America Online great again

doublelayer Silver badge

Re: What are the spoon benders actually buying ?

From what I can see, there are a lot of little media pieces that were attached to AOL, and I'm not sure whether they still are or if Yahoo is keeping them. Several blogs like TechCrunch, an advertising platform, and the like. I suppose there's always a bit of strip mining you can do to those if you had them.

doublelayer Silver badge

Re: AOL addresses now mean older people

Assuming that I'm correctly interpreting IFA as independent financial advisor, old tech might not be the best depending on exactly what it is. I don't think there's any known security problem with having an AOL address, but there's plenty of other old tech and people who insist on using it nonetheless which I wouldn't want around my financial information. Depending on what they are doing and what data they have access to, I'd want more confidence that they know how to be secure with digital information and that they are using that knowledge, which means even using new tech isn't good enough for my comfort. Of course, I expect that many places will be holding my data insecurely, but if I'm hiring one, I'm in a position to make sure that one is not among them. Trusting the person is different than trusting that they won't do something dangerous by accident; there are many people who I am absolutely certain want only the best for me who could still make a problem purely by accident.

doublelayer Silver badge

Re: This is slightly worrying for me....

Fortunately, they do.

AI layoffs to backfire: Half quietly rehired at lower pay

doublelayer Silver badge

Re: where's the loyalty

And, although I can agree with your sentiment, I do not agree with your ransomware response, for reasons I have a feeling you already understand. By your own statement, you're still in normal contracted hours for thirty minutes; you have a chance to start solving the problem, and even if you decided you were definitely going to go home, you could raise the alarm. Yet you instead propose to commit a massive act of misconduct by failing to do your job during contracted hours and knowingly making the problem worse.

I could accept the logic that you would inform people, and when they asked you to stay later refuse to do so. That's not what you proposed. In a reasonable company, I would stay later to fix the problem and I would expect them to show at least some short-term gratitude for that, as measured in actual rewards I can have. So far, my employers have mostly done that much. Even assuming that your employers are the unreasonable ones here, giving you the benefit of the doubt that you have valid reasons for your displeasure, I would do my job and inform them that after-hours stuff is for the on call, and if they don't have an on call then they'd enjoy the results much better if they found and convinced someone to start being one tonight. You're intentionally refusing to do yours during the hours they pay for. If that's the level you have, you should resign as unwilling to work for these people and they could consider your ignoring a problem during work time when you knew it was happening willful damage to their systems, so you wouldn't enjoy the results much either.

'Keep Android Open' movement fights back against Google sideloading restrictions

doublelayer Silver badge

Re: Yet another nail in the coffin for amateur developers

You know Microsoft isn't responsible for third-party antimalware software's excessive caution, right? Unless Defender is also doing it*, that's not Microsoft. It doesn't help if you blame the companies who are not doing this, and Microsoft has had plenty of opportunities they haven't taken. When they announce harmful changes, we should all take them to task for that. But when they haven't, jumping to the conclusion that they'll definitely do something they have shown no interest in doing, while plenty of others (hence the article) have announced in no uncertain terms that they will be doing it, is misplaced priorities at best.

* And if Defender is doing it, then you have a problem somewhere, because it doesn't do it to anyone else.

Ubuntu Unity hanging by a thread as wunderkind maintainer gets busy with life

doublelayer Silver badge

Re: Time for the big EvilCorp users to chip in

How about the complaints from users who did not like the Windows 8-style attempt to merge mobile and desktop views? Canonical had the same reason for trying it that Microsoft did, since they were building a phone OS at the time, but they discontinued that too. Microsoft had to change their UI as well. If it had been popular, more people would have run it and maintained it. A few people liked it enough to continue development, but not very many, because there were quite a few people arguing for Ubuntu to drop it even earlier than they did.

doublelayer Silver badge

Re: Time for the big EvilCorp users to chip in

Those Linux people are working on Linux, the part they use and have the skills with, not keeping an unpopular and not very supported desktop environment compatible with a specific distro. I doubt there is a single company that relies on Unity as a UI layer; while a few people might run it out of preference, they can easily migrate to any of several maintained desktop environments. There is a reason that Canonical stopped developing Unity in 2017 and why the team that picked it up was so small. There's another reason why the most active version of Unity these days is a fork optimized for mobile. We have a hard enough time convincing companies to support the things they do use. You're not going to have any success getting them to support a component they don't use and wouldn't notice if it disappeared.

doublelayer Silver badge

Re: How did Canonical do this?

Because the child concerned put it up for a vote in 2022 and the vote won. When that happened, they listed multiple contributors, and they still have multiple contributors, just not ones who all know enough to completely maintain it. Canonical is not responsible for ensuring that people who work on external projects are making good decisions, whether they link to them or not. However, given the current status, that link might get taken down soon enough if 25.10 can't get released, because 25.04 will go out of support in three months.

The Chinese Box and Turing Test: AI has no intelligence at all

doublelayer Silver badge

Re: What happens when the drumming stops?

Not if you ask them, without context, "What happens when the drumming stops?" as the original post stated was done. They wouldn't know that is a joke, and it would be interpreted as a request for information. I think the most common response you would get is "What drumming?" with the next most likely response being some form of "Huh?". Now if you tell them it's a joke, then they would try to remember the joke.

doublelayer Silver badge

Re: What happens when the drumming stops?

I don't think a joke has to be funny to everyone. I didn't find it very funny either, but there were two things that made it clearly a joke. It was supposed to be funny, and it followed a very predictable template. The premise makes you believe that the meaning of the drumming ending is something seriously problematic, but it turns out to be something very benign. That's a very common structure of jokes, the creation of suspense and a disproportionately small outcome of the suspense. It's used in humor all the time, some much funnier than this.

Of course, nothing makes an attempted joke less funny than breaking it down and discussing the tropes and structures of comedy, but I don't think something stops being a joke just because you don't find it funny. In fact, I'd go as far as to say that, even if nobody at all found it funny, as long as the teller intended it to be funny, it can still be termed a joke. It would just be a joke that failed utterly.

doublelayer Silver badge

Re: "Is AI really intelligent?"

You might recognize that your quotes are about things never happening, whereas the one you replied to is about things we can see today. They never said that there will never be an actually intelligent program; they said the ones we have today are not intelligent. If we modified your quotes accordingly, they would go from amusing failures of prediction to obvious truths, for example:

"we don't know what stars are made out of, that's one thing that science does not understand", which they didn't; incorrect estimation of star age based on incorrect assumptions of what they are made of was a problem at the time.

The only exception is the New York Times article, which would have needed, and probably did have, extra context, because people already had working flying machines in the form of hot air balloons, which did indeed allow for some navigation and flight. I don't mean that, with the context, the quote would be right. I'm pretty sure the author of that article would see a modern airliner as a pretty functional machine. But there's a big difference between commenting on the present and making predictions for the next million years.

doublelayer Silver badge

Re: no reason that AGI cannot eventually evolve

You may say you don't think that technology evolves as biology does, but you keep making that comparison. Technology "evolves" in a very different way than organisms do, a way that's only evolution by metaphor, so it's very hard to use anything one did to predict what another will, or even can, do. And yet you're repeatedly trying it.

Even if I accepted the comparison, it doesn't go in the direction you claim. Humans didn't evolve from flatworms, although both organisms did evolve from a common and even less impressive ancestor. Of the things that evolved from that ancestor, a lot are not intelligent, even accepting that many animals might be; some of them, for example, turned out to be flatworms. That is despite intelligence having a lot of evolutionary benefits. We have less reason to build completely functional artificial life, especially as it now turns out to be really hard. Before we tried it, people wondered whether it might turn out to be super easy and thus happen by accident. It wasn't. Now we see not only that it is hard to evolve anything from Gen AI, so hard that companies spend lots of employee time trying and failing to introduce simple guardrails, but also that there is little need to actually make a functioning artificial intelligence, because people are willing to accept flawed simulations as that anyway.

If you're not suggesting it, it reduces your comment to an "it's not impossible". If that's all you've got, then fine, granted, it's not impossible. We can comfortably ignore it until it gets meaningfully closer.

doublelayer Silver badge

Re: So much hype

You are sort of right for broadcast video, but everything else is on demand. Even live video effectively is if you can rewind it. People want that, which means broadcast signals are getting less use. There's also a lot more that the internet does than live television, so finding one thing that is more efficient over broadcast, provided you forgo some features, doesn't negate the fact that demand for bandwidth has increased and will likely continue to increase for all the many other things we do with it. As an example, video calling is a lot more popular now than it once was. Remote working is too. Both of those increase bandwidth use significantly.

doublelayer Silver badge

Re: So much hype

Most of that is either based on faulty evidence or just unrelated. For example, diseases increasing. Yes, they did, and that's probably the main reason for increased mortality. How did the industrial revolution do that? It probably contributed, by moving a lot of people into cities where epidemics can spread faster, but you write as if that was something done deliberately, or at least directly caused by the machines themselves. It was not. Manufacturing more things did not cause health problems, and without it, we wouldn't have had many of the resources we used to discover how those diseases work and cure them. Epidemics that continued at a slower pace but never ended would not have been better.

Poverty, not so much. Conditions for the poor in industrial cities were absolutely terrible, and notably so, but conditions in pre-industrial farmland were also absolutely terrible; people just didn't need to note them, because that was simply what peasants lived like. People in poverty could have a lot more if they lived during the industrial revolution. Yes, it still wasn't anything like what we have today, but while there was more death from disease, there was less death from starvation, and that moderate improvement, continued over time, is how we have gotten to the relative prosperity we have today. I wouldn't want to live then, and perhaps different things could have been done at the time that would have produced a better result, but if you use faulty information, you'll get faulty results. You're painting an inaccurate picture of societal decline.

doublelayer Silver badge

Re: Chinese Room counterargument

I think the most important part of the program is not language parsing or emission but the content. LLMs can generate sentences that are mostly or entirely grammatically correct, but they don't work if they contain obvious falsehoods or express an opinion that doesn't make any sense. It's a smaller-scale problem of the "Colorless green ideas sleep furiously" problem (for those who are not aware, that follows all the rules of English grammar while meaning less than nothing because, in addition to having no logical connection, it contradicts itself).

A program that can simulate intelligent conversation in Chinese is a lot more than a perfect linguistics engine. It would need to have a very detailed model of everything it was discussing, especially if observers considered it human which would mean that many of the LLM's favorite "I'm a computer" escape hatches wouldn't be available.

Signal president Meredith Whittaker says they had no choice but to use AWS, and that's a problem

doublelayer Silver badge

Re: Self-Serving Bulls**t????

How does that make any point relevant to the bad email comparison? The cloud viability discussion is about the ability to scale a global service with fewer resources, which has been covered in plenty of threads you have likely already read. Even if I totally agreed with your latest argument, which I don't, because nobody, including Whittaker, said cloud never failed*, it wouldn't do anything to improve the previous one, which is still incorrect.

* In fact, the fact that it does sometimes fail is why she made the statement in the first place and isn't pleased with her limited options. Pointing out that she correctly recognized that cloud isn't perfection incarnate isn't very helpful at finding an alternative, especially if you start assuming that something other than cloud is.

doublelayer Silver badge

Re: Self-Serving Bulls**t????

I don't? Do clarify. I clarified the problems with their post. If I'm misunderstanding something important, for example the part where they made an inaccurate comparison between Signal and email, then you could explain to me and anyone unfortunate enough to share my misconception, what the problem is.

doublelayer Silver badge

Re: Self-Serving Bulls**t????

No, it's not self-serving, because Signal is not email. Signal has the voice and video call system, in fact they started with the voice call part, but even when it is text messaging which is a lot of the traffic, it's in that category known as "instant messaging". That sounds like kind of an old term; wasn't IM a thing we used to say a lot more in the 1990s and early 2000s? Yes, it was, to distinguish it from very not instantaneous email. Signal wants message delivery latency to be very low, and it is, whereas email latency is still higher, and they want to do it without having user-run mailservers, which they do. That means different infrastructure requirements, and if you don't understand why, you don't understand what communication apps do.

doublelayer Silver badge

Interesting you should ask. An article gives an interesting summary of how they used to do it, but a little blurb at the top gives us the important information:

UPDATE: Unfortunately, this post is no longer accurate with regard to Skype’s infrastructure. After the massive Skype outage in December 2010, it was expected that Skype was exploring ways to make their system more stable and resilient. In early 2012, Skype (at that point now owned by Microsoft) was reported to have replaced much of the P2P supernode infrastructure with supernodes hosted in Microsoft data centers.

So your answer is that they moved it to the cloud because their previous self-run infrastructure proved insufficient. Of course, they probably could have gotten a bunch of money and decided to build their own, and if they had built enough of that, they might have decided to rent out some of that and become a cloud provider in their own right.

Global distributed systems take a lot of work that a lot of people choose not to think about. Me too, when I can get away with it, because a lot of it is boring. I have watched people think they've done it when they really haven't, though, and they tend not to like the results. If you host in a single facility, you're not distributed. If you think that a few colos on different continents does it, you're probably not as interrelated as global communication services are.

Starlink tells the world it has over 150 sextillion IPv6 addresses

doublelayer Silver badge

"is not talking about "entries in a file listing locations", it is general comment referring to the total IPv4 address space that Starlink have available to use."

That's probably what it should have been, but that is not what it is. They were still using the entries directly from Starlink's geolocation feed. Since you want to know the details, I've just retrieved the same feed (keep in mind that this is now the October feed rather than the July one), and they have gained some more /24s, now listing 1437 of those. And they list several smaller blocks, specifically 470 /25s, 131 /26s, 152 /27s, and 6 /28s.

You are correct that they won't be announcing these as /25s and smaller in BGP. That is how they list them in the file, and that is where the numbers in the article, even the general ones, are coming from. In a slight defense of the article, this represents the IPs Starlink claims to operate, so it is more representative of the number of addresses in use than a count of the addresses they own would be. If we analyzed this further, we could determine how they are announcing these smaller blocks and whether they completely own the large blocks from which they cut them, but I don't care. If you want to make more sense of the article, you have the background.
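For anyone who wants to turn those block counts into an address total, the arithmetic is short enough to sketch. This is only an illustration using the counts quoted above from the October feed; the feed itself remains the authoritative source:

```python
# Address counts implied by the prefix lengths quoted above:
# an IPv4 /n prefix contains 2**(32 - n) addresses.
blocks = {24: 1437, 25: 470, 26: 131, 27: 152, 28: 6}

total = sum(count * 2 ** (32 - prefix) for prefix, count in blocks.items())
print(total)  # -> 441376 addresses across all listed blocks
```

Note how the sub-/24 blocks barely move the total: the 1437 /24s alone account for 367,872 of those addresses.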

doublelayer Silver badge

They're talking about entries in a file listing locations, and the entries there (that file is public) are smaller, including many /25s and /27s. I have not bothered to check whether those all fit together into neat /24s and are just divided here for geographic tracking or whether they might be splitting blocks with someone else and routing internally to handle that, but when the article refers to smaller blocks, they are talking about smaller blocks.

Android malware types like your gran to steal banking creds

doublelayer Silver badge

Re: So it doesn't type anything?

You felt you had to post that twice? Also, there have been typewriters for quite some time. Just because your gran presumably wasn't working as a typist, can you be certain that she never chose or had to write something with one of those? And if, for even thirty seconds in her life, she did, then it probably does type like that because the long and variable times between characters are typical of inexperienced typists. The grans it's unlikely to type like are those who typed a lot and did so fluently, speedily, and accurately, and based on your assumption, that doesn't sound like it described yours.

New boss took charge of project code and sent two billion unwanted emails

doublelayer Silver badge

What problem would that fix, exactly? We can't solve this one because a year is not an even multiple of seven days. Even if we changed the length of a week, we still couldn't do it, because on leap years our new calendar of 73 five-day weeks would break too.

Unless we simply refuse to do anything on the intermission days, we'll need some timekeeping concept for them, and people needing medical care aren't going to like the do-nothing idea. Thus, calendar changes won't help and there's no benefit in making them, even though equal-length months would have been nice if they had been the design from the start.
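The arithmetic behind that is quick to check. A minimal sketch of why neither the current seven-day week nor a hypothetical five-day week divides evenly into every year:

```python
# A common year has 365 days and a leap year 366; neither length is
# divisible by 7, so dates drift through the days of the week.
print(365 % 7, 366 % 7)  # -> 1 2

# A five-day week fits a common year exactly (73 weeks of 5 days),
# but a leap year still leaves one day over, so that calendar breaks too.
print(365 % 5, 366 % 5)  # -> 0 1
```

Any fixed-length week hits the same wall: with leap years, there is no week length greater than one day that divides both 365 and 366.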

Ex-CISA head thinks AI might fix code so fast we won't need security teams

doublelayer Silver badge

Re: Bring back flowcharts

Yes, that's the lots of little flowcharts option I already mentioned. The problem being that you now have to deal with all the various ways control flow can be modified. I write software that has scheduled jobs, event-activated jobs where something notifies us to start one, pull event jobs where we need to search for triggers to start one, user-called jobs, and job pipelines. These all run in parallel with mechanisms to prevent them from stepping on each other and to keep them in the proper order of data flow. Flowcharts to define that have lots of problems representing the external sources of information that can cause the jobs, and the flowchart describing how each function operates would be very complicated if you need to get all the concurrency data in there. If you don't get all the concurrency data in there, the system is guaranteed to fail within minutes: hopefully by crashing, but unfortunately quite likely by delivering false results, which will deliver an angry person to your desk. This system is not functionally described in a form you can easily break into little self-contained units.

doublelayer Silver badge

Re: Bring back flowcharts

Even if I accept that logic, how do you propose compiling a flowchart to a deterministic program? Programming languages have the advantage that you can run and model with the same thing which helps a lot because, as experience has shown me, any two attempts to write the same thing will have weird differences if they're big enough.

The other problem with flowcharts is that they only kind of work for a simple type of program which takes one input, runs for a while, and produces some output. If it runs multiple things in parallel, collecting some information while running other things, handling failures and potential problems without direct user interaction, a diagram of what it's doing gets a lot more complicated. You have two choices, neither of which is good. You can split it into lots of little flowcharts including arrows that come in from nowhere, or you can build a massive flowchart which covers an area the size of Wales (although I propose you use a flatter place) and still has arrows coming out of the Irish Sea to describe new events or data that weren't present when the program started.