
Always relevant article:
https://medium.com/@antweiss/learned-helplessness-in-software-engineering-648527b32e27
The more cybersecurity news you read, the more often you seem to see a familiar phrase: Software supply chain (SSC) vulnerabilities. Varun Badhwar, founder and CEO at security firm Endor Labs, doesn't believe that's by coincidence. "The numbers are going to go from 80 to 90 percent to maybe 95, 98, 99 percent of your code in …
We need plenty of discussion about cybersecurity prevention, but just talking about "solutions" is only a discussion ... so many discussions everywhere sound wonderful, and then we discover the problems when we actually implement our "solutions" ... it's a bit like talking about how to get to the top of Mount Everest: do we ride a bike, ski up there, drive a Tesla, or maybe even walk?
I'm not complaining; this is just the software environment we've all lived with for years now. We describe a problem, wait for a solution, install an upgrade and then see a few new problems, so we buy a new computer, but then start seeing problems again that may have evolved from the original infections. We all want easy access on the Internet, but thinking about how nice that is for us usually means we don't realize that easy access has become the wider problem. For years now I've been thinking that we need an icon for our security discussions - a pair of wire cutters. LOL El Reg, this is me laughing at the icons every time; this icon choice is so much better!
The primary cause of the security problem is the leadership and management. It's a terrible combination of greedy arseholes who don't care and will sell out everyone for another dollar, and basically none of them have any idea what the problem is, so they always fail with their uninformed, dumb decisions.
Anyone who says "the primary cause of the security problem" doesn't understand security and doesn't understand IT.
There is no single primary problem. There will not be any single primary solution. Complex problems do not have simple descriptions and rarely have simple solutions.
As usual, CHF's obsession with corporate leadership has just resulted in an adolescent reduction that offers no actual insight and has no explanatory power. Try thinking critically.
Good points! And I guess the way AI comes into this is by stochastically moving the target, in all directions, at random times, substantially compounding the SSC trustworthiness assessment issue (if I understood right). Apparently, when AI systems are trained to imitate humans, they might end up acquiring goals of agency that, for the more devious of them, could lead to the "willful" production of malicious code, infested with viruses, worms, and backdoors, that are superhumanly clever in their designs and cranked out at rates faster than can be achieved by the most chubby of cybercrims. It seems then that Badhwar and the Endor Labs gang will have their work cut out for them (for decades, indeed) addressing that vast automation in genAI's self-motivated cyberthreat code manufacture (including those ever-evolving serpentine genAI-produced strategies for evading detection).
A great business to be in going forward!
does this mean these systems will repeat what it has [sic] learned?
Not in any useful sense, no, though plenty of Reg commentators who can't be bothered to learn anything about LLMs will tell you that.
The problems with using large, deep autoregressive models trained on large, dirty corpora, with no meaningful interpretability or explicability, for generating anything used for any serious purpose, are many and complex. They certainly don't boil down to "it will repeat what it has learned" (even if sometimes you can find a starting vector and gradient that will do a significant amount of that). They're much, much worse than that.
Broadly, the problems with using uninterpretable/inexplicable models fall into two categories: the output of the model, and what happens to the people using it. It's a mistake to focus exclusively on either category, much less on a single problem or class of problem within one category.
That may be much of the reason why Microsoft and others try to push AI copilots into nearly everything. They have not only the supposedly better paid versions, but also the hard-to-turn-off "free" version littered across the operating system and other software (Office and more).
This combination, plus their "classic" telemetry, lets them gather massive amounts of data on how users, including people using their software to do their jobs, actually work. In other words, they have access to mountains of information about how to organize work professionally and efficiently. In many companies, the workflow - how individuals and the workplace as a whole organize their work - is a big part of their success and competitive advantage. I'd go as far as to say it is often more important than the exact technical expertise available, as long as the latter is adequate. It's the classic theoretical scientist versus pragmatic scientist or engineer dilemma: the first has better knowledge of the technology but often fails to make commercial, let alone successful, products based on it; the latter may lack some in-depth knowledge of the technology but has plenty of knowledge and skill in simply making it work and making it profitable.
Microsoft's and others' AI tools may be of questionable use to actual users, but as long as users accept them (or fail to turn them off), the data is slurped and sent to the mothership. Even if Microsoft and OpenAI and Meta and Alphabet and... can't turn it into really usable "co-workers" yet, that data sits in their data warehouses waiting for AI and data-mining technologies to improve.
In short, it very much resembles the biggest industrial espionage effort of all time, even if no proprietary technical documents are slurped. Even if all that stolen information - and stealing is what happens during industrial espionage - can't be put to good use yet, the act and the intent are present. As such, it should be treated legally for what it is: an industrial espionage effort unlike anything ever seen in human history, and dealt with very strongly, sparing no one including top execs, in a way comparable to an employee of a high-tech company like ASML copying gigabytes of data onto a stick to sell to his next employer or his government. To be more precise: that employee only stole one stick's worth of information on a single project in a single company; the CEO of a slurping AI firm needs a sentence more appropriate to the amount of theft ordered on his behalf.
If the goal is to produce competitive products, I'm sure that can be a viable strategy. Maybe I'm not thinking hard enough, and the goal is really to not need competitive products - let's call it the Boeing strategy.
I'm reminded of my uncle (now dead) who over 60 years ago was working as a cowboy on a Texas ranch where a cow got into a large sack of dried beans and ate them all. Over the next 24 hours the beans digested/fermented and the gas expanded but could not all escape -- they were with the cow all night and the vet even cut it open to try to relieve the pressure, but in the end it died.
Thinking that feeding more and more data to the system will improve it proportionately - where is the evidence? Even when fed John Grisham books, all it can extract is the superficial prose. It can't write a best seller (although it can clog Amazon with junk books). In fact, it can't even write interesting fiction.
Same with coding. It has utility - it knows a lot of prose, so if you have to write a script file in a language you rarely use, it is helpful. But it is useless at writing the plot - and that won't change with more data.
But that makes your AI an untrusted, unvetted source, created by a previous generation of untrusted, unvetted sources. Quis custodiet ipsos custodes?
Which is the safer and more trustworthy? Proprietary code, unvetted and unvettable by independent parties, AI that has written/trained itself unto the seventh generation and counting fast, or F/LOSS well visible to the thousand eyes (should they be arsed)? If you were a black hat, which would you drop your evilware into? Just askin' ;-)
Stallman's concept of free software arose out of a desire/need to see the source, to fix the bugs and adapt it to his needs.
Modern business practice is just fill your boots. Hardly anyone actually looks at the source, let alone audits it. It's free of charge, what a bargain!
-A.
Yes, small question: Who is going to do the latter, and who is going to pay for the people doing it?
"Bill of Materials" has a nice ring to it, and sure, it certainly is nice to know whats in the pie, but knowing the ingredients is worth pretty much squat when there are no resources to vet them.
Just another attack on open source. It is much easier to use legal means to force big companies to place undisclosed back doors in closed-source software than it is to sneak one into open source code that is constantly being looked at by thousands of developers who are specifically looking for such things.
Notice how the uptick in these attacks on open source coincides with the new spying laws in the US.
You should all go and see what those laws say; they directly impact this discussion and the security of our systems and data.