Re: Complete Microsoft Products?
Only before they're released, never after...
He probably meant turning off the whole 'wifi router', not just the wifi. Smaller malware often doesn't write itself to disk, but stays in RAM to avoid detection, so yes, power-cycling the machine can wipe it. But that only really helps if you also fix the security hole, or it'll just re-infect from the same source. Or you keep your router off forever after.
This is likely 'Retrieval-Augmented Generation' (RAG), where it essentially compares words in your question to words in a 'table of answers', adds the most likely answers to the 'context' of the question, and then generates text based on that. Basically, it reads the top three Google results into memory and writes down the citations. There's still probability involved in how it presents the answer, but usually little enough that facts won't be distorted.
The core LLM tech is still probabilistic (random on some level), but RAG is a decent way to constrain hallucination by aligning it to an external 'source of truth'. It's pretty interesting. The 'intelligence' of the AI is still directly proportional to the parameter count and architecture of the model, but the dumber the model, the closer it gets to just being a fancy Google search filter.
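Here's a toy sketch of the retrieve-then-generate shape in Python (the document list and the word-overlap 'retriever' are made up for illustration; real systems use vector search, but the idea is the same):

    documents = [
        "Power-cycling a router wipes malware that lives only in RAM.",
        "Retrieval-augmented generation grounds an LLM in external text.",
        "KoboldCPP runs LLMs locally on a desktop GPU.",
    ]

    def retrieve(question, docs, k=2):
        # Rank documents by crude word overlap with the question.
        q = set(question.lower().split())
        return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

    def build_prompt(question):
        # The model then generates from question + retrieved context,
        # which is what pins its answer to the 'source of truth'.
        context = "\n".join(retrieve(question, documents))
        return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

    print(build_prompt("How does retrieval-augmented generation work?"))

The generation step still samples tokens probabilistically, but now it's sampling over text that includes the retrieved passages rather than pulling from thin air.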
If you're interested in LLM tech, I'd highly suggest trying to run one locally; it will show the limits of the tech so much better than a polished product like ChatGPT ever could. A decent desktop GPU is more than enough for a small one to tinker around with. Look up KoboldCPP and Oobabooga.
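For instance, once KoboldCPP is serving a model, you can poke it from Python (port 5001 and the KoboldAI-style /api/v1/generate route are its defaults as far as I know, but check your build's docs):

    import requests

    # Assumes KoboldCPP is already running locally with a model loaded.
    resp = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={"prompt": "Explain RAG in one sentence:", "max_length": 80},
        timeout=120,
    )
    print(resp.json()["results"][0]["text"])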
The real threat of 'targeted advertising' is that there's no promise that such targeting will stay in the hands of 'the marketing departments of actual companies', as you so blithely suggest.
Think of advertising as a gun, with your info as the targeting data. Google is happy to hand that gun to anyone who pays. In the hands of the marketing departments of actual companies, it's mildly annoying or distracting. But anyone - literally anyone - can buy or sell ad space; the only criterion is money.
And if they refuse to even pay, they can just hijack an ad that someone else paid for. Like, if I don't want to pay Google for a gun, I can just punch someone else and take theirs, as long as it's pointed at a person whose demographics I like.
So, say I think up a scam targeting your demographic in particular; if your information is well-known, that scam will hit you precisely, as will every other fraud attempt by a bad actor. If your info isn't well-known, you'll instead get a random assortment of scams and fraud attempts.
I don't believe myself smart enough to outsmart everyone; better to just keep my attack surface low, so only scattered attacks will hit me.
It's a bit paranoid - but that's how I think.
If the physics of the brain is algorithmic, AI is at least theoretically possible on our current hardware.
Turing, Church, and Gödel proved this something like eighty years ago. If a computation engine is Turing complete, it can emulate any other Turing-complete computation engine. Therefore, if the laws underlying the mechanisms of consciousness can be computed, they can be computed digitally.
This is why, I think, people get so excited about neural networks. If the equations of a neural network entirely describe the parts of the brain that actually allow us to think, then we've gotten our foot in the door; the rest of it is just a matter of scale and finding the right bits to focus on.
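That universality claim is easy to see in miniature. Here's a toy Turing machine simulator in Python (my own illustration, nothing to do with real brains): any rule table you can write down, a digital program can step through.

    from collections import defaultdict

    def run(rules, tape, state="start"):
        # Tape cells default to the blank symbol '_'.
        tape = defaultdict(lambda: "_", enumerate(tape))
        pos = 0
        while state != "halt":
            # Rules: (state, symbol read) -> (symbol to write, head move, next state)
            write, move, state = rules[(state, tape[pos])]
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # A machine that inverts a bit string, halting on the first blank.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run(invert, "10110"))  # -> 01001_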
There are definitely formulas that can't be computed, though; anything involving the halting problem is provably impossible on a Turing machine. If consciousness relies on something like that, AI is definitely impossible on digital computers.
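The halting argument itself sketches nicely as code (this is the standard diagonal contradiction; the body of halts() can't actually be written, which is the whole point):

    def halts(program, its_input):
        ...  # suppose, for contradiction, this always answers correctly

    def paradox(program):
        if halts(program, program):
            while True:  # the oracle said "halts", so loop forever
                pass
        return           # the oracle said "loops forever", so halt

    # Feed paradox to itself: whatever halts(paradox, paradox) answers
    # is wrong, so no such halts() can exist.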
Moreover, it hasn't been positively demonstrated that consciousness is algorithmic. If it were practically demonstrated, we'd have AI by now, and to demonstrate it theoretically, we'd need a complete description of the physical laws governing the brain. 'A few more years' of research won't even begin to scratch that.
Personally, I doubt that consciousness is algorithmic. I believe in free will because I experience it; therefore I reject the idea that I'm deterministic - therefore I reject the idea of computable consciousness, and I don't believe digital computers will ever manage to host AI.
It may be a slightly solipsistic argument, but that's my view.
"Until now, when a purchaser seeks a new image 'in the style' of a given artist, they must pay to commission or license an original image from that artist,"
Really?
Because a lot of webcomic artists, such as the aforementioned Sarah Andersen, have incredibly simplistic styles, and hiring a different artist to draw in that style wouldn't be hard at all.
Moreover, if it's possible to 'own' a style, then it becomes necessary to ask what defines that style, and whether other people can use those design elements without infringing on the style.
Does Randall Munroe now own stick figures?
Does Yves Klein own the color blue?
In fact, we already have a body of law about visual elements that designate the origin of a thing: that's what trademarks are for. If someone creates an AI image with Mickey Mouse in it, they'll be unable to use it commercially. If these artists really want to claim they're entitled to protection automatically, they'll need to convince me that trademarks are automatic and non-specific, and I don't buy it. If it were possible to trademark Walt Disney's style, the House of Mouse would have done it years ago.
There might be an argument to be made about the use of the images; if they've licensed an image for use by humans and not computers, then scraping and training on it could be prohibited. But that's input, not output; as for the companies owing them for images made using their 'style'? That way lies a slippery slope, and I don't think the judge is insane enough to try it.