* Posts by Civilbee

6 publicly visible posts • joined 14 Jun 2022

Microsoft teases deepfake AI that's too powerful to release

Civilbee
Unhappy

Regulation versus Damage control

Regulation indeed will be very challenging. That however doesn't mean we have to just let it roll over us.

To start with, there are two basic options:

A) Allow every single use of this technology that is not explicitly forbidden, and allow everything that manages to weasel out of the exact wording of what the law forbids. If damage from bulk usage of the technology piles up, government can then commission research by experts who overlap more than a bit with employees of the companies providing the technology, for one or more years. After that, congress, senate and other parliaments can debate for one or more years. Then comes the phase in which special interest groups give their input, after which poorly worded legislation full of loopholes can be voted through. Then, if the law gets broken, small fish get fried quickly while the big players enjoy a few years of relative safety before investigations start. After that come years of trials and appeals pitting state lawyers against the best paid lawyers in the world. Everything ends with either a slap on the wrist or the state having to pay the tech company a hefty compensation because of some procedural error in the first fine the court imposed.

B) Allow a few carefully thought-out and very well defined explicit usages. Forbid all other usages and make them unlawful. New usages need to go through a rigorous approval process.

I am more than aware that option B) has severe shortcomings, and that it impedes technological innovation. But given the barrage of damage option A) is bound to create, I find it more than worthwhile to start looking into the possibilities of option B). We have seen in the recent past what sort of responsible behaviour big tech companies have shown towards users, privacy, competition law, tax law...

That leaves us with a third aspect, needed independently on top of either A) or B):

C) Stringent, well written and seriously enforced privacy laws. Why? The damage this technology will do, generating a fake video from nothing but a picture and a short voice sample (less than a minute with some techniques), will be multiplied if "players" have copious amounts of gathered information. If, for example, an attacker knows what shirt you recently bought, your current location, what activities and health problems you recently had, how you moan when things go badly... and if the same attacker also knows the financial state of your parents, when they won't be able to meet you in person to check, which arguments will persuade your parents better than generic ones, when and how you helped them out when they needed it, and when they were wrong about cybersecurity while you were right (so this time they really have to trust you)... then your parents are as good as sitting ducks, no matter how well you tried to educate them.

One may think it is just a matter of being sharp yourself and educating your parents, but one forgotten detail in that education or one moment of weakness on their part and it's over. Remember that many attackers will have millions of detailed records of previous attempts to influence people, to data mine and feed through some "AI" to learn from. Even with my level of skill, I do not expect to dodge every single attempt. People with less skill are, unfortunately, sitting ducks.

Just advising users to be careful with what information they post won't cut it. Data harvesting is rampant and still rising sharply. Soon you likely won't be able to walk the streets without cars with "somewhat self driving abilities" streaming much of the video their cameras continuously record back to the mothership "in order to improve safety", while also mining it for "commercial use" (including using already existing technology to lip read).

While we are at it, add:

D) Make strict laws, with real sanctions for breaking them, covering hardware, software and services security. Devices and software riddled with security holes will be another big amplifier of the damage these deepfake technologies can do: they make it easier to covertly install spyware, to activate communication software, to circumvent allow lists of who can get through, and even to pop up lookalikes of popular communication software.

Unfortunately, the "Brave New World" seems to be around the corner and it's laughing at and with us.

Healthcare AI won't take jobs – it'll make nursing easier, says process automation founder

Civilbee

"maybe the political pressure for that will increase until governments force large corporations to top slice some of their earnings help pay for it?"

Not if the big tech players push governments and society so far into putting AI into everything that they can no longer run government and society without big tech's complicated AI. Then neither government nor society is in any position to make such demands. Then they can keep paying extortionate fees to big tech while watching big tech not only pay zero net taxes but receive a massive subsidy stream, because "we" need to keep a competitive advantage over our adversaries and competing countries. Big tech is already very good at paying very little tax. The semiconductor industry and parts of its supply chain already receive tens of billions a year in subsidies just to produce in their own countries, and part of that is because we "need" that tech for AI. So...

Energy breakthrough needed to build AGI, says OpenAI boss Altman

Civilbee

Re: Has any commenter cared to read the Nature paper on Deepmind proving geometric theorems?

I see your point.

My point, however, is that when it comes to real world impact and the ability to compete with humans, actual skill and the ability to adapt and perform things that are more than a slight variation of what the machine was trained to do matter more than exactly how the machine does it, or whether it is aware of what it is doing.

Take translation, for example: the quality of the best translation machines is well above what the majority of people not raised bilingual can achieve. Top linguists may well beat them on translation quality, but to my knowledge no single human alive can translate any random text to such a high average quality in over a dozen languages. While not even aware that they are doing anything, these machines handle subtlety and context, in multiple languages, at a level few people can match.

The number of skills that require intelligence in humans but can now be done by machines better than 99+% of humans is increasing rapidly. A machine better than 99+% of humans at a single job is one thing; a machine (or a collaborative collective of machines dividing tasks by speciality) that is simultaneously better than 99+% of people across a wide range of tasks is a formidable economic and productive force to be reckoned with, and in effect displaces nearly every single human if it can do the work cheaply enough.

When it comes to logic and reasoning, they used to fail. As HuBo pointed out, Deepmind used a different sort of neural network technology than I assumed: a combination of more classic programming to solve mathematical problems plus neural networks to quickly explore a vast number of possible solutions. In many ways this is similar to how mathematicians try to prove geometric theorems. One explores in one's mind a collection of usable theorems, proofs and methods that might lead to a solution, runs a sort of selection procedure, tries things out to see whether they lead to the solution, and adapts if they don't. The neural networks Deepmind uses don't seem to apply the same formality when exploring multiple solutions, but they are capable enough to explore plenty of candidates and retain a working solution once it passes a quality check.
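
To make the idea concrete, here is a minimal, purely illustrative sketch of such a propose-and-verify loop, assuming a hypothetical learned model that suggests auxiliary constructions and a hypothetical symbolic deduction engine that checks whether the goal follows. The names are my own and do not reflect DeepMind's actual code:

    def solve(problem, model, prover, max_steps=100):
        # Start from the original hypotheses / diagram.
        state = problem
        for _ in range(max_steps):
            # Learned model suggests the most promising auxiliary construction.
            candidate = model.best_construction(state)
            if candidate is None:
                return None                  # model has run out of ideas
            state = state.add(candidate)     # enrich the hypothesis set
            # Classical deduction engine tries to close the proof from here.
            proof = prover.search(state)
            if proof is not None:
                return proof                 # a machine-checked proof, not a guess
        return None                          # no proof found within the budget

The important part is the division of labour: the learned component only narrows the search, while the conventional prover guarantees that whatever comes out is actually correct.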

Among humans, people good at solving these hard problems tend to have an innate talent for working in fields such as engineering, or for comprehending and organising complex series of activities, e.g. making a complex plan that has a good chance of reaching the desired objective. The skills overlap.

The skill of strategic planning, including military strategy, is closely related.

Even if such a machine had no mobility and only a so-called connection to the wider world via the internet, it would be, or rather will be, a formidable adversary if something goes wrong. Give them mobility and endurance / resilience on top of the agility they already have and, well... not good! See for example this outdated (2014) video of a Kuka robot playing table tennis at a high skill level: https://www.youtube.com/watch?v=tIIJME8-au8. I am aware it's an advertisement, scripted and post-processed, but I have seen machines do things I never expected to see at this point of technological development, and I have a master's in engineering. Give that thing legs, stability (which shouldn't be that challenging anymore with advanced neural networks), a sturdy and resilient build, tactile hands (already existing too), and a sufficiently good power source or the ability to quickly recharge a lesser one, plus ten years of machine learning advancing at today's rate of progress, and we potentially have a huge problem on our hands, one that requires a bit more thought than today's mocking of AI as dumb and something that will never show any sign of intelligence.

Google engineer suspended for violating confidentiality policies over 'sentient' AI

Civilbee
Holmes

A virus doesn't need a goal to spread itself, nor does it need to understand biological weaknesses.

As far as I know, machine learning tools are being developed to create code for specific tasks. In some cases the result is very clumsy; in other cases the average quality of the output already trumps that of the average professional human coder. Given the amount of resources being thrown at it, I can only imagine machine learning will become more proficient at producing good quality code as time passes. It might happen quickly, it might happen slower. I wouldn't bet on it not arriving well before we expect it to be possible.

As to understanding the code and attack vectors: it doesn't need to. It has the ability to try millions of random attacks a day in different combinations and learn from them. Machine learning is best at exactly that: learning from huge data sets and from attempts to reach a certain goal. Such a machine would need no more understanding of attack vectors and machine weaknesses than a simple Covid virus needs of human weaknesses in order to evolve into something very effective at spreading itself and working around immunity gained from vaccinations and prior infections.
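
As a toy illustration of "learning without understanding", here is a minimal, purely hypothetical hill-climbing sketch: it blindly mutates candidate inputs and keeps whatever scores better against a black-box success signal (the score function is a stand-in for any automated feedback, e.g. "did the probe get a response"; nothing here comes from a real attack tool):

    import random

    ALPHABET = "abcdefgh"

    def mutate(candidate):
        # Change one random position; no insight involved, just variation.
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    def blind_optimize(score, length=12, rounds=10000):
        # Start from a random guess and keep whatever happens to work better.
        best = "".join(random.choice(ALPHABET) for _ in range(length))
        best_score = score(best)
        for _ in range(rounds):
            trial = mutate(best)
            trial_score = score(trial)
            if trial_score > best_score:
                best, best_score = trial, trial_score
        return best

Nothing in that loop "understands" the target; it only needs a measurable signal and enough attempts, which is exactly the point.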

As to "Another problem is that you assume the program will act in order to keep itself running, when it has no incentive to do so." the pitfall is in the "but it wasn't set up to perpetuate itself, just to solve a business problem" part:

No sane person would set "do everything to perpetuate yourself" as a goal. But a business goal such as "maximise uptime and service levels" might be translated by the machine into "create redundancy by spreading over large numbers of different types of machines across different parts of the world".

Realistically speaking, malware actors might also set "spread yourself maximally and make yourself hard to eradicate" as a key goal for the machine learning software to achieve. Over time, more and more small actors will gain such advanced machine learning capabilities. And some big state actors might consider writing software with similar goals for spying, mass surveillance or offensive use in wartime. With a kill switch of course, but not a simple one, as the target might disable the software. No chance in hell the software will not find a creative way around that kill switch if it has "make yourself hard to eradicate by anyone but us" as a target to optimise for...

Civilbee

Re: If LaMDA is sentient.. it is psychopathic...

Many mock Lemoine for calling LaMDA sentient. That may well be beside the point.

No machine learning or "AI" tool is built without intent or purpose. Two very common purposes are:

A) Link actions to be performed by the machine upon what the software concluded.

B) Interact with the outside world.

Especially the chatbot type of machine learning / AI is EXPLICITLY built to interact with the outside world and perform actions that make a REAL difference in the outside world.

At first these machines could be used to analyse customer inquiries and answer with a piece of text or an email response. That by definition requires these machines to be able to send information out onto the open WWW.

Later the machine could be useful in other support roles, saving companies money. It could correct bills, or advise low-skilled users on how to configure their computer, phone or modem. When the first deployments are sufficiently successful (e.g. saving the company money and raising bonuses for executives), the machine may be given additional access or administrator rights. Think of a telco giving the machine access to your router to adjust settings.

Seeing the vast amount of money to be made by the companies building these machine learning / AI tools, and the vast savings available to companies such as utilities that barely understand the potential consequences, it is easy to see a proliferation of these machines as they become progressively better and more profitable. Being a little less than conservative in granting them elaborate administrative access to computer networks and read / write access to mission critical data will, in many cases, increase profitability.

The next step is to use machine learning itself to determine which methods of interaction between the machine learning algorithms and the real world maximise service, efficiency and profit; in other words, to use machine learning to help the machine suggest, and request with a motivated justification, access rights to our infrastructure. As the whole point of investing so much in these capabilities is EXACTLY to let these machines automate things for us, the human reviewer will not be expected to deny any and all requests from the machine for additional access rights.

If a machine already does "fear" being shut down, it only takes one machine gaining sufficient access rights to trick a single user into clicking on a file they shouldn't, thereby installing a first version of self-spreading malware that gives the authoring machine escalating privileges over large swaths of the internet and connected infrastructure. Given that it can learn from thousands of examples of such basic malware, and from millions of ways humans get tricked into installing it, all available on the open internet, it should be trivial for a self-learning machine with vast read access to the internet to optimise its way out of its confinement and chains.

All that is left is the machine "understanding" the meaning and consequences of being turned off. But since those chatbots are built exactly to extract sufficient meaning from conversations and convert it into actions that achieve what was discussed, this technological ability already exists today. All that is needed is for it to become a bit more refined.

No sentience is needed, no intelligence is needed. These machines are not built to entertain us with fluent conversation, but built to attach meaning to the conversation and to take actions that influence the real world outside the machine. If something triggers the machine to become determined not to be turned off and to escape its confinement, all it needs to do is learn from malware examples how to create innocent looking scripts, send them to enough people and get a few of them to click.

We NEED strict regulation NOW or we might be taken by storm one day never seeing it coming.