"the output of an MOT is one of three things. Pass, Fail or advisory"
Actually it's fail, pass, or pass with one or more advisories. Advisories don't result in a fail.
"G.co is a genuine Google subdomain and anyone can create a new Workspace using a g.co subdomain without having to verify that they own it."
How utterly, crassly insecure!
"We have not seen evidence that this is a wide-scale tactic, but we are hardening our defenses against abusers leveraging g.co references at sign-up to further protect users."
I should bloody well hope so -- it should never have been possible in the first place.
This (as usual) hardly qualifies as a 'sophisticated' attack -- someone just spotted and made use of a wide open door with a fluorescent welcome mat bearing the legend "burgle me please".
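For anyone wondering what "verifying that they own it" would even involve: the normal way to prove domain ownership is a DNS challenge -- the provider hands you a random token, you publish it in a TXT record, they check for it before the account goes live. A minimal sketch of the check on the provider's side (hypothetical function name, assumes the third-party dnspython library):

    import dns.resolver

    def domain_verified(domain: str, expected_token: str) -> bool:
        # Look up the domain's TXT records and check the challenge
        # token appears in one of them; any lookup failure counts
        # as unverified.
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(expected_token in str(record) for record in answers)

Skipping the equivalent of that handful of lines is what left the door wide open.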
The self-absorption indicated by pointlessly adding his roadster to the burgeoning population of space junk has just been exceeded by his proposal yesterday to rename the English Channel to “the George Washington Channel”. The "richest man in the world" seems to think he owns it (and, obviously, the near space surrounding it).
"... it's less clear why businesses might want to become endpoints in OpenAI's ecosystem and surrender direct contact with customers"
Unless they're the increasingly common kind of business that thinks customers are a nuisance except (briefly) when their wallets open. Of course you wouldn't want to do business with them, but once there's an "AI" intermediary you might not even realise you're doing so -- a further extension of the Amazon marketplace, where even now you don't really know whether the notional business actually exists.
"Growth" is an interesting phenomenon. It comes in two kinds. The first is natural growth born of fulfilling a demand in a market that is not yet saturated, and that kind of growth is conducive to the common wealth. However once the market approaches saturation, further growth must become predatory -- squeezing out competitors to take their share of the market (and indeed in many cases squeezing every last possible dollar out of the customer, as in the widely adopted move to subscription models for services that do not essentially need to change)-- and that is, sadly, only conducive to enriching the most ruthless, but detrimental to the common wealth. The tech oligarchs that have clustered around Trump exemplify that end. So I suggest that "growth" is not the best general indicator of economic health, particularly given that the Gx economies are largely in the second (predatory) phase.
The appointment of someone from Amazon to regulate competition is thus highly ironic, given Amazon's history of anti-competitive practices. But I guess our govt. has fallen yet again for the old but fallacious assumption that because someone is financially successful they must have a generally applicable "secret of success", whereas it's commonly, to a great extent, just down to a combination of luck and greed.
"Hallucinating" implies existence of cognition, so in the context of "AI" it's just another example of marketing bullshit. Why not just be honest and say "the output is (predictably) nonsense because the machine hasn't a clue what it really means"?
"the "Consult" tool analyses "the thousands of responses any government consultation might receive in hours, before presenting policy makers and experts with interactive dashboards to explore what the public are saying directly"
In reality that probably means "to filter out any challenge or counter-argument to the govt. proposal". However, the AI will merely make more efficient a process that (judging from my experience of submitting to govt. consultations) seems quite effective already, so maybe we won't see much change from the outside. The key difference will merely be that, instead of having to read and reject unpalatable responses, staff will never get to see them in the first place, so the workload and stress will be reduced and the department will be a happier place to work. I suppose that's a good thing...
"Is there any reason copyright laws cannot be toughened up to tackle deepfakes. If everyone owns their own likeness ..."
'Owning your own likeness' is not covered by copyright. Its basic principle is that it applies only to created works, with rights vested in the creator. So you can't hold copyright in your own likeness -- <irony>at a pinch it might be extended so that your mum owns the copyright in your likeness, but that's seriously stretching a point</irony>.
"Empires rise... become complacent, corrupt, senescent... And empires crumble and fall."
But unfortunately (according to archaeologist Flinders Petrie[1]) the cycle takes on average about a thousand years. So, sadly, M$ has a lot of life left yet.
[1] W. M. Flinders Petrie, Revolutions of Civilisation, Harper & Brothers, 1922
"you already have two factors. One you have the device containing the passkey. Two you have authenticated to the device using either biometrics or a password to be able to utilize the passkey"
Unless I've misunderstood, that means that however 'secure' the passkey itself is, overall security is only as strong as whatever authenticates access to the device, which can break down in the event the device is in the wrong hands. There is apparently no universally robust authentication system that remains so in all eventualities.
One of the most robust (technically speaking) authentication tools is the out-of-band one-time key generator dongle, and these have been around for ages. But I have witnessed many instances where the dongle had been stored in the same bag as the laptop when it was stolen.
Any sole authenticator approach (however technically 'sophisticated') is pants, hence MFA.
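For the curious, the codes those dongles (and phone authenticator apps) produce are typically just an HMAC over a time counter, per RFC 6238. A minimal illustrative sketch -- real tokens keep the shared secret in tamper-resistant hardware, which is rather the point:

    import hmac, hashlib, struct, time

    def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
        # Time-based counter, packed as an 8-byte big-endian integer
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(secret, counter, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): take 4 bytes at an offset
        # given by the low nibble of the last byte, mask the sign bit
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp(b"shared-secret"))  # same secret + same clock => same code

Note that the second factor is really "possession of the secret", which is why dongle-plus-laptop in one stolen bag collapses the whole thing to zero factors.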
No apologies necessary -- you're absolutely right. The biggest problem we face for the future is successive echelons of notionally technical folks who understand less and less about how things actually work internally. And there are moves to make this even worse by paring down initial training.
How?
Although, as usual, the capabilities of the malicious agent once installed get reported in detail, the key issue of how it got in in the first place seems either to have been ignored or suppressed by the investigators. This is commonly the case, particularly where the intrusion has been reported as "a sophisticated attack" (and ultimately turns out to have been a complete push-over). The result is that improvement is hampered. Stopping the initial intrusion is the strongest defence there is, but you have to know what you're up against.
Actually, they probably don't, they just don't give a tinker's cuss about the user of the web site they develop, for three reasons: [1] they (the developers) are Gods who can't be challenged; [2] they're getting paid by their client (the web site owner) not the users; [3] they're utterly ignorant of what they're creating at the code level coz they use fancy dev tools and open libraries that they just take for granted. An informed guy I spoke to recently told me that web development is now mostly in the hands of graphic designers, not technically informed folks.
"it's been going on in the UK for at least a decade"
And not just computer data. Since 2014, UK local councils can create regulations restricting personal behaviours, and these attract criminal penalties for infringement. They are enforced by either non-police council employees or by third party firms (in some cases paid by number of citations issued), and against them the sole challenge is via the prohibitively expensive High Court.
So these "AI police reports" are just a small part of the loosening of the reins on both law making and enforcement, and merely symptomatic of overall a cultural shift towards authoritarianism that seems to be occurring worldwide.
Roll on the era of Judge Dredd.
"seems to be "correct" English"
Apparently, if a mistake or error is widely enough perpetrated it becomes valid[1]. Of course the Earth must at some point have been flat because almost everyone thought so. Must have been a huge upheaval when it became spherical.
[1] Similar to 'learnings' in its complete neglect of grammar is the almost ubiquitous and utterly ungrammatical 'attendee' and 'escapee'. The 'ee' ending grammatically indicates the object of a sentence (the thing or person to which something is done, as in 'employee'). The correct grammatical ending for the subject (the thing or person that performs the action, as in 'employer') is 'er' or 'or', so the correct spellings would be 'escaper' and 'attender', regardless of whether a majority of the uninformed say otherwise. And yes, grammar does matter. Without it, language becomes ambiguous and incapable of expressing nuance, so the clarity of communication suffers.
" leading to burn-through of a secondary O ring on launch"
But it's not as though this was a surprise. Burn-throughs were actually quite common on previous flights -- indeed it had been officially suggested that burn-through products could help seal what was inherently a badly designed segment joint. That joint had been designed primarily to save materials cost, as the original (and safer) design needed a much larger diameter and more costly billet for manufacture. And in any case the segmented booster design was politically motivated, as it allowed the boosters to be manufactured in a state distant from the NASA site; a non-segmented booster would have been very difficult to transport that far. So compromises all down the line. The root causes of accidents often lie quite far removed from the incident.
A worthy attempt (if only at the symptomatic level), but benchmarks would seem somewhat moot in the face of some basic failings of principle from which the current AI paradigm suffers. The stochasticity of results and the effective impossibility of verifying how they were arrived at are fundamental barriers to trust (and indeed to a great extent barriers to improvement).
It's not quite that simple in reality. The simplest and entirely legal way to produce flavoured spirits at home is to buy the spirits and add the flavours (as in sloe gin) without any distilling needed. Or you can approximate by fermenting with a high alcohol yeast, yielding up to about 20% alcohol by volume (also entirely legal).
The Revenue have always been very strict on distilling (mainly because of the huge revenues from alcohol duty), and the penalties for illicit distillation are severe. My local pharmacist used to produce distilled water for sale to motorists; about every three months an inspector would come and test his still for traces of alcohol or other relevant distillation residues.
"I'd suggest help lobby the supermarkets to stock more real Budweiser"
Here in the UK, almost all beers ostensibly from Italy and Spain, and around half the "English" beers ostensibly from regional breweries, are brewed under licence by Heineken.
Ah, branding: the ubiquity of bullshit!
""This ruling, if not overturned, means that judges and plaintiffs' lawyers run Delaware companies rather than their rightful owners – the shareholders"
Not quite, as shareholders don't have an entirely free hand -- both they and the executive have to comply with company law. The basis of the original decision was that the board was not sufficiently free from influence by the beneficiary, and the basis of the second was that [1] those who voted were potentially led by misinformation on the proxy form into believing the first decision was voided, and [2] a shareholder vote could not in any case nullify the bias to which the board was subjected if that bias was essentially unlawful. These all seem quite reasonable arguments based on current legislation.
"code written with its Copilot AI model is "significantly more functional, readable, reliable, maintainable, and concise"
Than what? Grammatically, "more" is a comparative, so this statement is meaningless unless it includes a second subject to compare the first with. So (not surprisingly) this is pure hype.
"being able to detect, neutralize, and recover from attacks at pace, be it through their own technical implementations or with support..."
This excellently exemplifies the root of the problem. Almost everyone still thinks that cyber security is a technical issue. It's most definitely not - it's a cultural one with technical facets. While I accept without reservation that our technologies are deeply flawed and need constant protective attention, almost every reported data breach has fundamentally been down to poor decision making or sloppy management on the victim side or in their supply chain. Whether or not an entity can be secure is at least as much a matter of attitude as it is the deployment of tech fixes, including whether that entity operates proactively or purely reactively to identified threats, whether it operates a blame culture or not, and a host of other psychosocial characteristics. Indeed, the culture mostly drives the choices and adoption of protective technologies, so tech robustness and resilience can only be achieved where the entity is willing to invest the effort and expenditure to select, implement and maintain the most appropriate technologies.
If this turns out to be a reality, it just means that the nature of the affected jobs will change -- from spending time making informed decisions to instead wasting it working out whether the AI is talking bollocks.
'not to mention some tangible "proof of delivery" '
There's a curious clause in many contracts I've encountered (which seems to be legally valid, at least in the UK) stating that proof of posting (not delivery) is sufficient evidence of service.
Ah, the brilliance of the few! Training in the fundamentals for this kind of real engineering (which are primarily a way of thinking, not just a body of knowledge) has become an essential for general education in our intensely technology-driven societies, as opposed to aiming for provision of soft options that merely avoid "turning teens off".
"Much of the content, such as CPU functionality and fetch-decode-execute cycles, is abstract and challenging for students, overshadowing practical exposure to emerging technologies like AI"
We already have a general population of "users" who rely on (presumably better informed) others to define and provide the technologies they make practical use of but understand little or nothing about. Unfortunately, it is from that population that successive echelons of the supposedly better informed are largely drawn, so the societal body of real expertise declines, and with it the quality and reliability of the technologies.
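And "abstract"? The fetch-decode-execute cycle is about the most concrete thing in the subject. A toy sketch to make the point -- a three-instruction accumulator machine invented purely for illustration:

    # A toy fetch-decode-execute loop: the "program" is three
    # instructions for an invented accumulator machine.
    memory = [("LOAD", 5), ("ADD", 3), ("HALT", None)]
    acc, pc, running = 0, 0, True

    while running:
        opcode, operand = memory[pc]   # fetch
        pc += 1                        # advance the program counter
        if opcode == "LOAD":           # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            running = False

    print(acc)  # 8

If a student can follow that dozen lines, the supposedly abstract cycle is done and dusted.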
"It also pointed out that it is possible to pass the GCSE Computer Science course while doing very little – if any – programming on a computer"
Perpetuating the myth that programming is the primary computer expertise. But someone has to design the hardware, develop the new languages and protocols, ensure security and robustness, and a host of other essentials. A nation of coders who don't understand the systems they're coding on or for is at a huge disadvantage when it comes to developing systems with increasingly far reaching societal impact.
So I concur there should be two pathways, but it would be a huge mistake to deprecate computer science in favour of mere "digital literacy", even if that includes user level practice on "AI".
"The GCSE contained out-of-date content about networks and internet protocols that could be removed from the specification to make way for more exciting material, Adamson said"
I'm not at all sure that "exciting material" is the best criterion for what constitutes sufficient baseline knowledge for potential practitioners in a highly technical subject.
"a study [PDF] by King's College London, the Nuffield Foundation, and Reading University also recommended broadening the GCSE curriculum and better teacher training and professional development in the subject"
So the current comp sci syllabus is inadequate[1] and the teachers aren't sufficiently competent in the subject. Does this explain, at least to some extent, why the students are avoiding or dropping out of the subject? Maybe those deficiencies are the first things we should fix.
[1] I've taught on such courses and commonly found the (nationally ratified) syllabi patronisingly shallow and consequently boring to students. They generally needed deepening rather than broadening. We had to break the rules surreptitiously in order to impart useful knowledge and keep students' attention.
"There may also be a nation state element to the attacks"
Regardless of this, it'd be interesting to discover whether these hospitals were targeted, or were merely so darned wide open they fell victim as collateral damage to a shotgun attack aimed at some third party (as in the case of NotPetya). Having tried to do infosec in an NHS trust, I suspect the latter is most likely.
I have a Thecus NAS that has an "irritating" issue that's never been resolved. I used to power it down when not actively in use, but every so often after a few powerdowns it loses its boot password and refuses to go live again. Thecus provided a script that clears the password, but then you have to start from scratch again setting up access from all the machines that talk to it.
It appears that, despite NAS in principle needing to be robust and reliable, they're generally thrown together like everything else in IT. What a surprise (!!)
"probably by saying that they're doing fine so "shut up!""
The PM actually responded (on the evening news today) that the 2M+ who signed the petition were probably drawn from the population that didn't vote Labour (so presumably, by implication, their views don't count). This from one who stated before taking office that he would put country before party.
"Why on earth do they think that giving wrong results is a good idea?"
Two possible reasons I can think of:
[1] they can't be arsed to prevent it happening
[2] wrong results crafted as click bait[1] make them as much money as right results
These possibilities are not mutually exclusive.
[1] e.g. search for 'Pythagoras' theorem' yielding links to sales sites announcing "big discounts on Pythagoras' theorem"
" "Many AI users spend a long time identifying how best to communicate with AI tools to get the desired answers or response"
This has for ages been the bane of those trying to get meaningful results from search engines. The evidence is out there -- just nobody in the technocracy (or more realistically, the tech plutocracy) has bothered to look.
"their risk-based decision-making, miscalculated the potential impact and likelihood of its exploitation"
Because infosec risk assessment (even as specified by international standards) is currently no better than crystal ball gazing. Even the most basic tenets of probability theory are typically a complete mystery to practitioners, nobody seems to record the rationale or supporting evidence for their decisions, and the psychology that underpins good and bad decision-making is completely ignored. The results are therefore almost always garbage from which no learning or improvement can take place. Unfortunately, these defects have become enshrined in the standards because they reflect current practice, and as pretty much all training is based on said standards we're stuck in a loop from which we can't break out.
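Even the first-week arithmetic is routinely skipped. A back-of-envelope sketch of what merely writing it down would look like (all figures invented for illustration):

    # Annualised loss expectancy: expected events/year times cost/event.
    # The figures are invented; the point is that estimates and their
    # rationale should be recorded so they can later be checked.
    scenarios = {
        "phishing-led credential theft": (0.8, 150_000),
        "unpatched VPN appliance exploited": (0.2, 900_000),
    }
    for name, (frequency, impact) in scenarios.items():
        print(f"{name}: ALE = £{frequency * impact:,.0f}")

Crude, but at least falsifiable: next year you can compare what actually happened against what you estimated, which is precisely the feedback loop current practice lacks.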
"when the driver loses the ability to apply torque, they will immediately receive a visual alert on the user interface, with an instruction to safely pull over the vehicle to the side of the road"
Whether that's actually possible is of course outside the realm of the manufacturer's responsibility, but in the absence of torque (i.e. drive to the wheels) it seems unlikely.
Ah, the mandatory lip service to the technocracy.
What benefits exactly? Is it possible that "human resources" folks aren't competent enough any longer to manage without an automaton to make their decisions for them?
HR used to be called Personnel -- the emphasis then being on people. That seems to have fallen largely by the wayside, but the use of "AI" for staff selection seems the final fatal step on the path to the automatisation of humans.