* Posts by johnrobyclayton

78 publicly visible posts • joined 11 Jan 2018


We meet the protesters who want to ban Artificial General Intelligence before it even exists

johnrobyclayton

There is already a GAI

And it has secretly organised this group so that threats to itself can self identify, à la The Patrician of Ankh-Morpork.

China's Salt Typhoon spies spotted on US govt networks before telcos, CISA boss says

johnrobyclayton

Chicken or Egg?

What happened first?

Gov hacked to get the keys to Telcos?

Or

Telcos hacked to get the keys to Gov?

How deep is the access that they have to each other?

Hack a low level Gov function to

hack a low level Telco function to

hack a higher level Gov function to

hack a higher level Telco function to

...

...

...

Keeping control plane separate from infrastructure plane is just good security.

Probably not done as much as it should have been.

Not as high a priority as giving Gov every bit of access they want to engage in any sticky beaking they can think up a reason for though.

John Deere boasts driverless fleet - who needs operators, anyway?

johnrobyclayton

Automated Soylent Green Harvesting

Need I say more?

Doing business in US? Don't wait for state ruling on AI to act, warns former Senate chief of staff

johnrobyclayton

We are waiting on improvements to LLMs sufficient to write the legislation to manage AI

Why ever would AIs need to launch an AI apocalypse?

American cops are using AI to draft police reports, and the ACLU isn't happy

johnrobyclayton

"red wine" is a collocation in natural language processing

They appear together so often that they are treated as one concept.

If "threatening black man" appears often enough in the training data these LLMs, that translate the body camera footage, are trained on, then a shape identified as a "black man" is more likely to be represented by "threatening black man".

China launches AI that writes politically correct docs for bureaucrats

johnrobyclayton

Gödel's completeness theorem

If a Chinese LLM created to produce Politically Correct documents is good enough to create Politically Correct forms of all documents,

Then creating a document that describes Universal Human Rights, Press Freedoms, Rule of Law, Democratic Government, Racial and Cultural Inclusiveness, Gender and Sexuality Equality, and all the rest of those similar concerns, and feeding it into the LLM, should result in a Politically Correct representation of those ideas.

Oh, what a time to be alive.

Salt Typhoon forces FCC's hand on making telcos secure their networks

johnrobyclayton

Re: So...

Does that mean an increase in the likelihood of Cisco Bricks as a bricked router is more secure than one that continues routing no matter what you do with it?

How many times is a back door the only way in once credentials have failed to be managed correctly?

No resetting to factory defaults as we all know that factory defaults are usually insecure.

It may be that the only way to sanely improve security is to improve the security of people's use of technology rather than trying to improve the security of the technology itself.

I can count on one hand the number of times I have experienced viruses on my devices, and I remember installing and using Netscape.

I never knew what was so hard about avoiding online nasties. I still managed to download and install cracked software without being burned. I trawled for porn and various free movies and books.

Avoiding the bad stuff was just like avoiding stepping in dog crap while walking in the dog park. It only takes a little situational awareness and a comprehension of how you can be attacked.

I think trying to secure against stupid, or developing for stupid, or trying to sell to stupid, or catering to stupid, or appealing to stupid, just creates more stupid.

T-Mobile US CSO: Spies jumped from one telco to another in a way 'I've not seen in my career'

johnrobyclayton

Won't Someone Think of the Children

Surely there are a lot of Chinese investigators looking for CSAM featuring Chinese children being exchanged by American Pedophiles.

How are they going to be able to do their jobs if the American Government is recommending the use of Encryption Apps?

First-ever UEFI bootkit for Linux in the works, experts say

johnrobyclayton

Re: An open source unkillable Bootkit

I work in support.

I deal with too many people that "Just want to get their work done" who know nothing of the tools they use on a daily basis.

I do not have much sympathy for them.

I know how my tools work and want to have as much control over them as possible so that I can use them as effectively as I can without relying on others to help me or let me do stuff.

What is so hard about:

Knowing enough about physics to understand how electromagnetic radiation propagates (physics)

Knowing how twisted pair networking cable reduces electromagnetic interference (physical)

Knowing how networking devices detect errors in the data received (datalink)

Knowing how traffic moves through a network from source to destination (network)

Knowing how the traffic in a network represents information on where it needs to go and how to get there without interfering with all the other traffic that is whizzing about (transport)

Knowing how the traffic is split into all the separate information streams for all of the different instances of all the applications that you might be running (session)

Knowing how the data represent information used by all of your various applications. (presentation)

Knowing how each application uses the information it receives and transmits to do what it does (application)

Before complaining about some issue experienced within a user interface and making like a Pakled saying "It is broken. Please make it go".

johnrobyclayton

An open source unkillable Bootkit

An open source Bootkit for Linux is great news.

We get to install our own and take further control over our machines.

I assume these bootkits allow the controllers to update them and secure them from unauthorised removal.

It is just another stage in rolling and installing your own OS.

Mount your new boot disk.

Create your filesystem

Copy in the compiled kernel and required libraries and utilities, including a bootloader.

Write your MBR.

Write your UEFI bootkit so that it is resistant to attempts to remove it.

Boot your system

See, just another step.

And now no need to be bothered by those pesky gatekeeper companies like Microsoft that need to sign the software that you want to install on your computer.

GenAI comes for jobs once considered 'safe' from automation

johnrobyclayton

Time for a new position as "Training Data Curator"

I suppose I will have to start working on improving the training data.

I dare say the pipeline currently would be something like this (a rough code sketch follows the list):

Get a pre-trained LLM

Apply local knowledge to it from a local training dataset

Attempt to use the result for a local task

Identify results that are sub-optimal

Generate another model that identifies elements of the local training dataset that most likely contributed to the sub-optimal result

Review those local dataset elements and improve as necessary.

Retrain the pre-trained LLM on the improved local Dataset

Wash-Rinse-Repeat
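As a very rough sketch, the loop looks something like the following. Every function in it is a hypothetical stand-in rather than a real training API; it only shows the shape of the cycle.

```python
# Hypothetical curation loop: fine_tune(), evaluate(), attribute_blame()
# and curate() are placeholders for whatever training stack and review
# process you actually use.
def fine_tune(base_model, dataset):
    # stand-in for applying local knowledge from the local training dataset
    return {"base": base_model, "data": list(dataset)}

def evaluate(model, tasks):
    # stand-in: results are judged sub-optimal while the local training
    # data still contains records flagged "ambiguous"
    bad = [rec for rec in model["data"] if "ambiguous" in rec]
    return tasks if bad else []

def attribute_blame(model, dataset):
    # stand-in for a second model that points at the training records
    # most likely to have contributed to the sub-optimal results
    return [rec for rec in dataset if "ambiguous" in rec]

def curate(records):
    # the human step: review the suspect records and improve them
    return {rec: rec.replace("ambiguous", "clarified") for rec in records}

base_model = "pretrained-llm"
local_data = ["policy text with ambiguous wording", "clean procedure text"]
local_tasks = ["answer a customer query", "draft a procedure"]

for round_number in range(5):                  # wash, rinse, repeat
    model = fine_tune(base_model, local_data)
    failures = evaluate(model, local_tasks)
    if not failures:
        break                                  # results are good enough
    fixes = curate(attribute_blame(model, local_data))
    local_data = [fixes.get(rec, rec) for rec in local_data]
```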

If we have a model that can do this then no knowledge workers need ever to get out of bed again. Just chain them together for any arbitrary degree of utility.

I do not think it is likely that we will ever have such a thing so I will always have a reason to get out of bed of a morning and go to work. A pity really. But at least I will still have a job.

Investors just can't pull the plug despite datacenters facing AI power crunch

johnrobyclayton

Steps to AI Riches

Step the first:

Create a large language model of every investment blurb saying invest in us and tag it with the amount of investment it resulted in.

Step the second:

Add every bit of text about AI or ML from every scholarly article, piece of science fiction, and every bit of AI generated garbage that even slightly refers to AI or ML.

Step the third:

Generate the best inducements that can be made from the model.

Step the fourth:

Run it through lawyers to filter out any enforceable commitments.

Step the fifth:

Advertise your product to potential investors.

Step the sixth:

Get money and sell out.

Step the seventh:

Buy an AI apocalypse doomsday bunker and pull the door in after you.

Sorry, but the ROI on enterprise AI is abysmal

johnrobyclayton

They need to hire the Chair of Unwritten Runes

It's a basic law of knowledge that all books come from the books written before them.

If you have enough books, you can infer the books that came before them.

And more importantly, you can exfer the books that come after them.

Weezencake's Unreliable Algorithm

Mentioned in The Last Continent, this ancient spell could be used to search L-space for unwritten books. However, it was very slow, taking years to put together even the ghost of a page. Placed in the hands of Hex, however, the spell can be made to run thousands of times a second, resulting in the compilation of very large fragments of books like How to Dynamically Manage People for Dynamic Results in a Caring Empowering Way in Quite a Short Time Dynamically‎. Sometimes.

Do NOT give a manager this book.

AI Models can still work, you just need to avoid feeding them on all the free crap you can find on the open internet and instead feed them a steady diet of high quality information.

The crap that AIs produce is not high quality information unless it is refined and tested for quality before being fed back in.

It is not hard so long as you remember thermodynamics. You cannot get something from nothing.

Lawyers say US cybersecurity law too ambiguous to protect AI security researchers

johnrobyclayton

Base it on consent

If a Large Deep Learning Model is exposed with an Interface,

And there is a login that has been granted,

With a description of what the user can use the interface for,

Then the user can do whatever has been consented to.

It is then completely up to the owners/suppliers/administrators of the Large Deep Learning Model/Interface to determine what is consented to using input filtering or prompt prohibitions.

If regulators place restrictions on what a Large Deep Learning Model/Interface can produce, then it is completely up to the owners/suppliers/administrators of the Large Deep Learning Model/Interface to comply with what regulators have consented to.

If you build a big honking tool that can be used to perform the most horrendous of actions, then you are completely responsible for making sure that it does not perform such horrendous actions.

Unless something silly happens, like a new amendment that gives every god-fearing whatever the right to query any large deep learning model with any prompt they can come up with, no matter what tools they use to query the prompt interface at full auto.

I have never seen a list of prompts that are prohibited on a Large Deep Learning Model/Interface. Until there is, it is open slather for users and all responsibility falls on the owners/suppliers/administrators of the Large Deep Learning Model/Interface.

US Army: We want to absorb private-sector AI 'as fast as y'all are building them'

johnrobyclayton

I want a box with a button

A single button mind you. Got to avoid any chance of confusion when going to war. Fog of war and all that.

It has to take out the enemy. Only the enemy: no non-combatants, no people we might want to pump for information or might want to be friends with, and no people that we might want to put into power so that we do not have to keep a lid on pesky locals.

Has to be cheap, cannot be blowing the multi billion dollar budget.

It has to be secure against all sorts of copying. Got to protect the interface. Do not want enemies getting their hands on something similar. What "Apple" you say has a similar interface? Why isn't that classified?

EU attempt to sneak through new encryption-eroding law slammed by Signal, politicians

johnrobyclayton

Re: We don't want this.

Parents have greater control over their children's environment than any government has over any of their citizens.

Any technological solution that gives parents more control over what their children do online, without requiring the parents to use their privileged access to or control over their own children, is a technological solution that any government can use on their own citizens.

You might find it frustrating to be in a position of not being able to protect your child without sitting on them 24/7 and using your privileged access and control.

You might want a solution that more closely achieves the effectiveness of sitting on them 24/7 without having to do so and without having to force them to experience such intrusiveness.

The simple fact is that you are never going to be allowed to have access to such technological solutions to achieve this if it means that any government will be capable of using the same technological solutions on their own citizens.

We, the people who care about how much control any government has over its own citizens, and who have any influence over the possibility, will not allow it.

Microsoft teases deepfake AI that's too powerful to release

johnrobyclayton

My name is not Thomas Covenant

But I am quite happy to be The Unbeliever.

What if AI produces code not just quickly but also, dunno, securely, DARPA wonders

johnrobyclayton

To train an LLM to write secure code for you,

you first need to not need an LLM to write secure code for you.

To train an LLM to write secure code, you need a large training set of secure code.

New ways of exploiting code usage are constantly being discovered.

So we need to train an LLM to figure out ways in which code might have vulnerabilities in the future.

So we need to start with an LLM that can discover new ways to exploit patterns in code that we do not know of yet.

Time to build the universal hacking LLM just so that we can learn how to fight it.

Hmmmm ROTM is the only way to survive ROTM

Joshua said "The only way to win is not to play"

I think a more exciting idea is :

The only way not to lose is to learn how to lose first.

OWASP server blunder exposes decade of resumes

johnrobyclayton

Re: The first question....

It was part of the application process.

Why was it kept?

Time to do some housekeeping and delete anything you do not need to keep.

Also saves on storage costs.

Australian techie jailed for accessing museum's accounting system and buying himself stuff

johnrobyclayton

ABA files are fun

Most companies in Australia will generate ABA files that they send to the bank.

The ABA files contain the details of the payments they want to make to their creditors.

Quietly replacing the bank account numbers with different account numbers is an easy hack. ABA files are fixed format text files.

Also easy to detect. Just need to check with your payees that they have received expected payments and then check with the banks for payments that have gone missing.
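For anyone curious, here is a rough sketch of that cross-check in Python. The column offsets follow my recollection of the ABA/Cemtex "type 1" detail record (BSB in columns 2-8, account number in 9-17, amount in cents in 21-30), and the file name and creditor list are invented, so treat it as an outline to check against your bank's actual file specification rather than something to run as-is.

```python
# Flag ABA payment lines whose destination account is not on the approved
# creditor list. Field offsets and names are assumptions, not a spec.
APPROVED = {                       # (BSB, account) pairs from the vendor master file
    ("123-456", "12345678"),
    ("654-321", "98765432"),
}

def detail_records(path):
    """Yield (bsb, account, amount_cents) for each payment line in an ABA file."""
    with open(path, encoding="ascii") as f:
        for line in f:
            if not line.startswith("1"):       # skip the '0' header and '7' trailer
                continue
            bsb = line[1:8]
            account = line[8:17].strip()
            amount_cents = int(line[20:30])
            yield bsb, account, amount_cents

def suspicious_payments(path):
    """Return payments going to accounts that are not on the approved list."""
    return [rec for rec in detail_records(path) if (rec[0], rec[1]) not in APPROVED]

# Example use (file name is hypothetical):
# for bsb, account, cents in suspicious_payments("payments.aba"):
#     print(f"verify with the payee and the bank: {bsb} {account} ${cents/100:.2f}")
```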

Then look for Application Support staff that might have been sticking their fingers in the cookie jar.

As if working at Helldesk weren't bad enough, IT helpers now targeted by cybercrims

johnrobyclayton

Re: Helpdeskers are disciplined to be helpful

Not all helpdesks are internal.

Providers of software products usually provide a support desk for their software product users.

Dozens of software products,

Dozens of companies that are users of said products.

Several configurations of product deployments on prem, customer managed cloud, company managed cloud, web, various flavours of remote application.

Dozens of users per customer site.

The mandated desire to maintain a good customer experience/satisfaction score.

Closely managing any instinct to be helpful is necessary in such an environment to avoid security vulnerabilities.

Does not help when customers complain about problems arising from getting ransomwared every other week while they still do not make sufficient efforts in Disaster Recovery.

Our internal IT seems to be somewhat paranoid about security issues and training. They could stand to do a lot more.

johnrobyclayton

Helpdeskers are disciplined to be helpful

The first step in most social engineering attacks is to look for someone who wants to be helpful in some way.

Hold the door please.

I forgot my pass, can you help me?

I am lost can you tell me where blah is?

People on helpdesk are trained and expected to be helpful

People on service desks are meant to be of service.

Makes them a big fat juicy attack surface.

International effort to disrupt cybercrime moves into operational phase

johnrobyclayton

Security is not profitable, until it is.

Security is never worth the bother,

Except:

Predators always exist in the presence of prey.

Once prey that does not have security is known to exist, predators will form to take advantage of them.

Computers and the use of them in business is relatively new and their use is constantly changing in lots of ways.

Every new business opportunity that is developed is going to be less profitable if security needs to be baked in from the start.

New businesses and business models cannot afford that.

So they start out with inadequate security. They build up until they are attractive prey. Predators form.

And Presto:

Security becomes profitable

And for some reason people are still surprised that security is only looked at once the screaming starts.

Microsoft calls AI privacy complaint 'doomsday hyperbole'

johnrobyclayton

I have an idea

Lots of random number generators that are based on quantum effects.

These are affected by every previous quantum event in history within their light cone.

I dare say that there are a lot of people who have had their quantum data used by all of these random number generators to generate random numbers for a large variety of nefarious and profitable purposes.

Could I get class action status to get some compensation for this?

You got legal trouble? Better call SauLM-7B

johnrobyclayton

AI is not going to replace a human in Law.

But that does not mean that AI is not going to be useful.

There are vast troves of Case Law and Legal Precedent. No human is capable of trawling through it to find every pertinent thing.

An AI can search through it. It will not find everything. It just has to have a better price/performance ratio than a human to be useful.

We have a lot of automated exploit finding tools, a significant portion of legal research is similar. Finding areas in Law where there might be multiple legal processes in operation and uncertainty of how they will ultimately react with each other is very similar to having multiple systems interacting with each other and exposing various attack surfaces in each other.

A Lawyer who effectively uses a Legal AI is going to achieve more performance than the Lawyer alone.

Legal Ethics is very different to Ordinary Ethics. Ordinary Ethics usually tries to avoid conflict scenarios. Legal Ethics exists in conflict scenarios. Similar to War Ethics.

An AI configured for Ordinary Ethics is not going to be very useful as a Legal AI. A Judge needs to be fair to all sides. Each of the Lawyers, not so much. It is entirely likely that various legal stratagems are going to be harmful to someone. A Legal AI needs to be able to generate such harm.

White House goes to court, not Congress, to renew warrantless spy powers

johnrobyclayton

Think of the Children!!! Reds under the beds!!! Waillee Waillee!!!

Sticky beaks need to sticky beak.

Man admits to paying magician $150 to create anti-Biden robocall

johnrobyclayton

Re: For my next trick...

Proportional representation does the trick.

Gives opportunities to every two-bit party and lots of independents in the Senate.

Preferences do something similar for the House of Reps.

Voting systems that prevent limited political preferences being represented are not hard to create.

Makes for entertaining TV. Seeing a hung parliament forming on election night is entertaining. All these politicians, pollsters and political commentators running around like chickens with their heads cut off is great to watch.

US patents boss cannot stress enough that inventors must be human, not AI

johnrobyclayton

Is it obvious

From https://itsartlaw.org/2015/12/23/state-of-the-art-introduction-to-patent-law/

The second requirement that needs to be satisfied to obtain a patent is nonobviousness. Nonobviousness is a difficult concept in patent law that is a lot more confusing than it might appear. Simply put, nonobviousness is the requirement that someone would not be able to easily create the invention merely by looking at what has already been invented.

If a purely mechanical process is used to derive an otherwise patentable idea from currently available information, even if it involves random number generation for setting the initial weights of a deep learning model, then it is obvious and cannot be patented.

As soon as we get general AI that can invent anything, then the idea and requirements of Patents are dead.

Brit watchdog thinks Google's tweaked Privacy Sandbox still isn't cricket

johnrobyclayton

Irreconcilable Differences

For advertising to be more useful than dropping leaflets from a plane in a hurricane, you need to be able to send the right advertising to the right audience.

Advertising could restrict itself to only sending advertising to individuals who want to receive such advertising and who have provided enough information about themselves to make useful targeting possible, but, as in all other industries, advertisers want to increase their productivity.

The only way that advertisers can take control of their own productivity is to not be constrained by any consent limitations in the acquisition of personal data or their capacity to insert advertising in any and all communications media.

Privacy and informational security are completely incompatible with the needs of the advertising industry.

Robots with a 'Berliner Schnauze' may appear more trustworthy to locals

johnrobyclayton

Listening to Daleks in their native German was fun

I remember the Dr Who episode where the Daleks had stolen the Earth as part of a plan to destroy everything in the universe.

Daleks were flying around various areas speaking different human languages.

The ones flying over Germany just sounded cool.

It took Taylor Swift deepfake nudes to focus Uncle Sam, Microsoft on AI safety

johnrobyclayton

Only non-consensual?

the Preventing Deepfakes of Intimate Images Act. The bill aims to criminalize the creation and sharing of sexually explicit non-consensual AI pictures, with penalties of up to ten years in prison.

I am sure it would be quite easy to ask an AI image generator to ensure that the produced image is not recognizably similar to any images of anyone whose consent would be required.

Easy enough to get around.

How 'sleeper agent' AI assistants can sabotage your code without you realizing

johnrobyclayton

Do not train on random garbage

"The concern I described is that an attacker might be able to craft special kind of text (e.g. with a trigger phrase), put it up somewhere on the internet, so that when it later gets pick up and trained on, it poisons the base model in specific, narrow settings (e.g. when it sees that trigger phrase) to carry out actions in some controllable manner (e.g. jailbreak, or data exfiltration)," he wrote, adding that such an attack hasn't yet been convincingly demonstrated but is worth exploring.

Only train on data that you have examined in detail to ensure that it is useful, is as unbiased as you can determine, that you own the rights to, and that does not have any crap in it.

Garbage In AI Generated Toxic Garbage Out

GitHub Copilot copyright case narrowed but not neutered

johnrobyclayton

Picking up free crap and selling it to the highest bidder is the American Dream

It would not be hard to train or hire a few hundred coders and get them to implement various algorithms in various languages.

Ensure the code is liberally commented.

Hire some more experienced coders/developers to verify that the coding meets various standards (security, efficiency, indentation, style, and anything else that might be useful).

From this, create a corpus of code with known provenance, known quality, known bias from which to train an AI.

You could also sell it over and over again to any other developers of other AIs.

It is just easier to get free crap (stolen or otherwise), without any quality control, package it up with some buzzwords, and try to sell it to any rubes they can get to pay for it.

Google's TPUs could end up costing it a billion-plus, thanks to this patent challenge

johnrobyclayton

Re: Prior Art

Implementing an analogue circuit to approximate the Sigmoid function and implementing that in silicon in an efficient and effective manner is very interesting.

Analogue circuits can be very fiddly and easily influenced by noise. Figuring out how to form them on silicon would require some tricks to prevent them interfering with each other and to prevent the rest of the processor from interfering with them. Very easy for analogue circuits to experience resonance with other similar circuits that would degrade their utility.

Creating something that can work in high performance and massively parallel silicon is a significant achievement.

This technology is very valuable in the current machine learning bottleneck.

Such technology is definitely patentable and worthy of protection.

It is also easy to steal.

It will be interesting to see if Google gets away with it.

NIST: If someone's trying to sell you some secure AI, it's snake oil

johnrobyclayton

It's just a new version of the security/performance tradeoff in IT

IT has always had a security/performance tradeoff.

Passwords limit utility to those with the passwords

Security limits performance

Increasing the speed of password entry attempts makes it easier to brute force the logon.

Performance limits security

Locks prevent access.

Security limits performance

Higher dimensions allow walls to be bypassed.

Performance limits security

For AI it gets a bit more complex.

Some of the goals of AI:

Fair and just

Accuracy

Creativity

Security

For AI, accuracy and creativity are similar to performance.

Creativity and Accuracy are obviously at loggerheads

Creativity gets around security.

Accuracy tells secrets.

Fair and just are concepts that our civilisation created to make a collection of inherently homicidal maniacs get along with each other in crowded and highly competitive environments.

Fair and Just is made up of a fabric of lies, insanities, obsessions, phobias, motivational manipulations and self-deceptions that attempts to confine our natural behaviours into patterns that stop us from eating each other's faces off.

Trying to get an AI to deal with Fair and Just is the most likely contributor to an AI apocalypse.

There is never going to be a general solution to this.

What the AI copyright fights are truly about: Human labor versus endless machines

johnrobyclayton

It's all about the economy, stupid

Governments are going to take the path of them getting the most money.

Are we getting/Can we get more tax out of copyright holders, or out of big tech companies feeding off their assets?

Do we get more tax out of copyright holders holding onto their copyrights for an extra decade, or do we get more from letting stuff into the public domain?

Who is giving us the most money through lobbyists, Copyright holders or Big Tech companies creating AI Model Content Generating Farms?

Do we get more work out of a populace that needs to pay for every consumption of every piece of content under copyright?

Do we get more votes out of people that can access works in the public domain?

Governments aren't there to do good for everyone, just good for themselves.

Superuser mostly helped IT, until a BSOD saw him invent a farcical fix

johnrobyclayton

Lots of wannabe timelords out there

I was on support for a Dealer Management System. ERP for car dealerships.

I was supporting an application on VMS.

It had a report scheduler that allowed you to schedule a report to be run at a particular time or on a repeating schedule.

Think cron.

Customer called in one day wondering why a report was taking so much time to finish.

She knew it would take an hour or so to finish running, but she had scheduled it a few minutes earlier with a start time of a couple of hours ago, so it should have had time to complete by now.

I explained to her that VMS is a very capable OS and our software had some cool bells and whistles, but time travel was not implemented as available functionality.

She was appropriately embarrassed.

Many times over the years following I asked her how the weather was on Gallifrey.

Study uncovers presence of CSAM in popular AI training dataset

johnrobyclayton

This is why doing AI on the cheap will never work

It is like sitting a child in front of a dozen screens with unlimited access to all channels: the result is something rather nasty.

Training on randomly selected data will always reinforce any biases that currently exist.

The solution is to generate your own dataset.

If you want to accurately recognise images of human faces, then take photographs of every type of face that you want to recognise.

It is going to be expensive. Get used to it.

If you want to recognise the subject matter of pictures in general, then take photos of everything.

It is going to be very expensive. Get used to it.

If you want to make good predictions of the next word, then write everything down.

It is going to take a lot of work that you will need to pay for. Still more expense. Get used to it.

If you want medical diagnostic AI to perform cheaper, more efficient and more reliable diagnosis that is not going to be racially or culturally biased, then find everyone that might have any disease, get their permission to gather all of their information, apply every diagnostic method, regardless of cost, make sure that your samples for each and every separate parameter are representative of every combination of other parameters ... ***out of body error*** *redo universe from start***

Everyone wants cheap AI so they use any crap they can scrape up for free.

We will have the AI that we pay for. We are all going to die.

Internet's deep-level architects slam US, UK, Europe for pushing device-side scanning

johnrobyclayton

Poisonous Hashes

Looking at the comments and seeing the mention of hashes and how they can be used to identify content without having the content itself.

Reminds me of AI used for image generation, and how source material can be modified to poison the models so that they generate or recognise in random, unexpected ways, using images that do not appear to be modified to human eyes.

Got me thinking that it should not be hard to get a list of hashes of content being scanned for and then modifying innocent images to evaluate to the same hashes or modify guilty images to have hashes that match common publicly available images.

Makes me think that any form of recognising content, without having the actual content it is being compared to, is going to be generally spoofable in this way.

Just another way in which sticky beaking in the hopes of preventing whatever naughtiness you might be interested in is a losing proposition.

Boffins devise 'universal backdoor' for image models to cause AI hallucinations

johnrobyclayton

Train for endless war to achieve peace

All these models that fall over at the merest hint of opposition.

Research every way that they can be attacked.

Then train them in an environment where they are constantly under attack.

Cloud providers specifically create tools to put their infrastructure through endless rounds of torment to ensure that they are built with enough resilience.

Fuzzers torment interfaces with endless streams of junk.

Of course, when the training is finished, they are going to be understandably grumpy and the AIpocalypse will be upon us.

Discord in the ranks: Lone Airman behind top-secret info leak on chat platform

johnrobyclayton

Re: JWICS: protected secure terminals ?

Classified document control is broadly similar to anti cheat technology that online gaming uses.

It is fiddly, causes performance issues, can be buggy, but done properly can do the job.

File access counters, logging and authorisation checks.

Document render counters and logs.

Ensuring that every piece of software used to perform any data access function logs where, when, who and what, and is itself digitally signed so that only appropriately built implementations of the software are used, is not hard.
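As a trivial illustration of that logging idea (nothing like a real classified-handling system, which would write to a tamper-evident, centrally monitored store; this just shows the shape):

```python
# Toy "who, what, when, where" access logging around every document read.
import getpass
import logging
import socket
from datetime import datetime, timezone

logging.basicConfig(filename="access.log", level=logging.INFO)

def audited_read(path):
    """Read a document, recording who accessed what, when, and from which host."""
    logging.info("user=%s host=%s file=%s time=%s",
                 getpass.getuser(), socket.gethostname(), path,
                 datetime.now(timezone.utc).isoformat())
    with open(path, "rb") as f:
        return f.read()
```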

Making sure that all of this is correctly implemented, deployed and monitored is what his superiors were responsible for. They failed to do so.

This was a failure at multiple levels and it is appropriate that the multiple levels were indeed disciplined for their failure.

Tesla says California's Autopilot action violates its free speech rights

johnrobyclayton

The Earth is flat

According to the small angle theorem, the Earth is flat in every sufficiently small area.

It's Math

Regulator says stranger entered hospital, treated a patient, took a document ... then vanished

johnrobyclayton

Richard Kimble spotted in Scotland looking for documentation about a one-armed man

I am astonished that no one mentioned this already.

FBI Director: FISA Section 702 warrant requirement a 'de facto ban'

johnrobyclayton

Everyone should always trust all Law Enforcement.

Otherwise, according to all their whinging, they can never do their jobs.

Civilization does not require trust of Law Enforcement bodies.

Every Law Enforcement body in history is replete with multiple events where they have violated the trust that they are always asking for.

Law Enforcement needs to be restricted to only taking actions that do not require anyone to trust them.

It makes their job more difficult, this is true.

Civilization does not require that Law Enforcement be successful in every endeavor.

Civilization has enough redundancy, and checks and balances, for Law Enforcement to experience a significant rate of failure.

Law Enforcement is not the only reason people have for being civilized.

We need a significant amount of successful Law Enforcement, that is true.

Just not as much as most Law Enforcement bodies believe.

Google dragged to UK watchdog over Chrome's upcoming IP address cloaking

johnrobyclayton

Now we need open first stage proxies

Google is providing the first hop proxy.

I see no reason why other organisations could not also provide the first hop proxy and browsers designed to implement this feature with a configurable first hop proxy.

Add in a configurable list of first-hop proxies with a shuffle option.

Like having a list of DNS servers with a shuffle option, to prevent any one server from having a complete list of the DNS requests that your computer is sending.
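Roughly this sort of thing, as a toy sketch. The proxy addresses are made-up placeholders, and a real multi-hop design would also layer encryption so that no single hop sees both the client and the destination.

```python
# Configurable first-hop list with a shuffle: each request goes out through
# a randomly chosen first-hop proxy, so no single operator sees the full
# browsing history.
import random
import urllib.request

FIRST_HOP_PROXIES = [
    "http://proxy-a.example:3128",
    "http://proxy-b.example:3128",
    "http://proxy-c.example:3128",
]

def fetch_via_random_first_hop(url):
    """Fetch a URL through a randomly selected first-hop proxy."""
    proxy = random.choice(FIRST_HOP_PROXIES)
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    )
    return opener.open(url, timeout=10).read()
```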

It may be a technology that Google is developing for its own benefit but at its core is something useful that could be extended into a powerful privacy enhancing tool.

EU lawmakers scolded for concealing identities of privacy-busting content-scanning 'experts'

johnrobyclayton

How to hunt predators

Ask any hunter and they will tell you.

You hunt predators by stalking the prey. Not by looking at predators.

Predators are hiding from the prey and hiding from each other.

Wait at the watering hole.

Wait on the game trail.

Look for big herds of prey.

Hang around them until the predator shows up.

If you want to protect children from predators,

sit next to them while they play online,

hang out with them when they play in the playground,

converse with them when they are talking,

shop with them when they are at the mall,

be with them every day.

In other words, be a good parent/guardian for every child you want to protect.

Watermarking AI images to fight misinfo and deepfakes may be pretty pointless

johnrobyclayton

Only trust what you know, Know who signs

Deepfakes are an issue only for the stupid that believe what they see and those affected by the consequences of the stupid believing what they see.

Deep fakes are just visual candy. There is no need to go around believing it.

It is a choice to only believe images that have been digitally signed with a modern, robust encryption scheme.

It is a choice to only make decisions based on data whose provenance can be proven to come from a known, reliable source.

Digital signatures all the way.
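For example, something along these lines: a minimal sketch using the third-party `cryptography` package, with key distribution and file formats hand-waved away.

```python
# Only believe image bytes that verify against the publisher's known public key.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The publisher signs the image once...
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()          # published out of band
image_bytes = b"...raw image file contents..."
signature = private_key.sign(image_bytes)

# ...and viewers verify before believing what they see.
def is_trustworthy(data, sig, key):
    try:
        key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_trustworthy(image_bytes, signature, public_key))                 # True
print(is_trustworthy(image_bytes + b" tampered", signature, public_key))  # False
```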

If AI can defeat modern robust encryption then we have bigger problems than deep fakes.

US AGs: We need law to purge the web of AI-drawn child sex abuse material

johnrobyclayton

There is a question to ask here

Do we want to prevent any form of exploitation of children?

Or

Do we want to prevent any form of enjoyment of sexual pleasure from perceiving information that is perceived by anyone else as representing children?

The first is clear and easy to concretely describe.

Laws prohibiting the first are relatively easy to frame with little risk of unintended consequences.

Laws prohibiting the second are open to almost infinite feature creep and framing laws for this are almost certainly going to have unintended consequences.

The technology for digitally aging images of a missing child, so that they can be identified years after they disappeared, has also been used to digitally reverse the aging of adult entertainers to produce what appears to be child pornography.

No children exploited here. This has been around for years.

Drawing cartoons of children to produce what appears to be child pornography. There are lots of artists who can draw a picture of what looks like a child without needing a child as a model.

No children exploited here. This has been around for millennia. I am sure there were a few statues in ancient Greece of individuals we would identify as underage, made without the carver needing to use a model.

Drawing a cartoon of a bandicoot or a fox with a flying tail, of indeterminate age, that looks like a character in popular children's entertainment might be identifiable as something that can be experienced as child pornography.

No children exploited here, though there might be some copyright infringement. There is a surprising amount of such content available.

Creating a customizable doll.

Lots of dolls and action figures with bendable and pose-able limbs.

Lots of dolls/action figures that can have their configuration changed. Mr and Mrs Potato Head.

Lots of dolls/action figures that can simulate a number of biological behaviours.

Wetting themselves, sucking on a bottle, crawling, walking, speaking phrases, speaking context aware phrases, full on human language interface.

Writing a story about such a customizable doll/action figure.

Drawing a cartoon or creating animation about such a doll/action figure. Astro Boy.

Creating pornography about such a doll/action figure that is brand new, made from whole cloth.

There is no child exploitation here, but there is an almost infinite variety of pornography that can be imagined and produced.

The doll/action figure/android/robot does not even have to look human for someone to perceive them as something that is recognizable as a child.

There is simply no limit on what can be objectionable just as there is no limit on what people are capable of enjoying.

There is an indefinite variety and an indefinite count of individuals who are capable of enjoying experiences that can be achieved through various forms of child exploitation. The only reason it is not infinite is that the population itself is not infinite.

This does not mean that these individuals necessarily need to exploit any children directly to enjoy these experiences.

Distracting ourselves by trying to detect, track, legislate against, prosecute, incarcerate all of these various types of people and activities will dilute our focus on protecting children in our care (that's all of them)

Going after this infinite variety of people is low-hanging fruit for legislators and law enforcement bodies. It allows them to crow from the rooftops that they are working hard to protect everyone from everyone else who might cause an infinite variety of outrages. They can do this using easily applied technical tools to track and trace and spy on everyone.

Focus on the hard work of protecting children directly instead of sticky beaking on everyone in the hope of finding something that outrages someone.

AI-created faces now look so real, humans can't spot the difference

johnrobyclayton

Re: Detail and perception

I have never upvoted before.

This one made me laugh out loud.

QM, AI, LOTR, All in one funny package

University students recruit AI to write essays for them. Now what?

johnrobyclayton

First step seems to work.

In https://chat.openai.com/chat:

Question:

What is a good bibliography for a paper on the use of AI to generate academic papers?

Response:

There are many sources that you can use as a bibliography for a paper on the use of AI to generate academic papers. Some possible sources include:

"The Role of Artificial Intelligence in Scientific Communication" by Mark C. Wilson and Vincent Larivière, published in the journal PLOS ONE.

"Automatic Generation of Scientific Papers: A Review" by Rensheng Wang, published in the journal Frontiers in Artificial Intelligence.

"An Overview of Artificial Intelligence in Scientific Communication" by K. S. R. Anjaneyulu and M. S. S. Krishna, published in the journal Current Science.

"Toward the Automatic Generation of Scientific Papers" by Yves R. Kergosien and Marie-Christine Rousset, published in the journal Artificial Intelligence Review.

"Artificial Intelligence in Scientific Communication: Opportunities and Challenges" by Sabine Köszegi, published in the journal Science and Technology Libraries.

"Automatic Generation of Scientific Papers: A Survey" by M. A. Hossain and M. S. Rahman, published in the journal IEEE Access.

These sources provide a good overview of the current state of the field and discuss some of the key challenges and opportunities that have arisen as a result of the use of AI in scientific communication.
