* Posts by FeepingCreature

404 publicly visible posts • joined 31 Oct 2018


Checkmate? AI's pawn-pushing prowess proves partly pitiful, partly promising

FeepingCreature Bronze badge

Just reply with the move

This is of course equivalent to telling a human "Don't think about the problem, just blurt out the first move that comes to mind."

Talking about things is literally how these systems think.

The standing theory is that GPT-4 is better because it has been explicitly trained to think about chess internally, without speaking out loud. Without that advantage, you *have* to allow the system to reason. If you don't, well, it's not surprising that the result is unreasonable.

Starlink offers 'unusually hostile environment' to TCP

FeepingCreature Bronze badge

Re: TCP options & other transports

If it's random uncorrelated drop, might it be genuinely viable to simply send every packet twice (with a slight delay)?
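
(Back-of-envelope: if each packet is dropped independently with probability p, sending it twice cuts the effective loss to roughly p², so a 1% drop rate becomes about 0.01%, at the cost of doubling the bandwidth.)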

I stumbled upon LLM Kryptonite – and no one wants to fix this model-breaking bug

FeepingCreature Bronze badge

You should just make an arxiv writeup

Just publish a paper about it, give it a punny name, the usual.

FeepingCreature Bronze badge

Re: So,

There used to be a bug where you could just spam the same letter a thousand times and it'd make models go weird, but at least 3.5 and Opus have fixed that already. This sort of bug is definitely possible. It would be interesting to see whether the writer has a short prompt that breaks the bot.

Neuralink keeps losing the thread on brain implant wiring

FeepingCreature Bronze badge

Re: I suspect there is a reason nature doesn't use wires

Nature uses wires with anchors: ie. axons and synapses. And if you pull them out, you do get brain damage. So that fits.

OpenAI says natively multimodal GPT-4o eats text, visuals, sound – and emits the same

FeepingCreature Bronze badge

> The main, if not the -only- use-case for so-called Gen-AI is in deceiving or manipulating people in one way or another.

Weird, I guess it's successfully deceived me into thinking it's writing code for me.

Ex-Space Shuttle boss corrects the record on Hubble upgrade mission

FeepingCreature Bronze badge

> According to Hale, John Shannon was the Ascent Flight Director for the mission. Hale quoted a recent comment from Shannon: "This would be a good case for why you have a flight control team instead of just programming the flight rules into a computer."

Honestly this case seems like a good argument for computer flight rules over human judgment to me.

Jensen Huang and Sam Altman among tech chiefs invited to federal AI Safety Board

FeepingCreature Bronze badge

Re: Committee of protecting incomme streams

To be fair, there's a case to be made for "keep 'em where we can see 'em."

Forget the AI doom and hype, let's make computers useful

FeepingCreature Bronze badge

Re: That is a quote I will keep

Yep, and that's wrong: the specified syntax creates a delegate closure, not a lambda. You can tell because the parameter types are fully specified.

An example of a lambda would be alias add = (a, b) => a + b;.

The reason is that lambdas are actually purely compile-time constants. They don't capture scope (how would they? they aren't even values); they merely act as though they capture scope, because the effect of passing a lambda (at compile time!) to a function is that the called function (not the lambda!) is turned into a child of the current function.

Note that the lambda is not assigned to a variable, because lambdas are symbols, not values - alias add captures by name, not by value. That's why lambdas are passed to functions as a symbol template parameter: array.map!(a => a + 2); note the !() that marks template arguments in D. map gets the frame pointer of the caller to pass to the lambda because the compiler turns the instantiation into a child function.
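
A minimal sketch of how this plays out, assuming stock Phobos (std.algorithm.map); the point is that the lambda travels in the template argument list and reaches the local through the caller's frame, not through an allocated closure:

```d
import std.algorithm : map;
import std.array : array;

void main()
{
    int offset = 2;
    // `a => a + offset` is passed via `!()` as a template alias argument, not
    // as a runtime value; `map` is instantiated as (effectively) a child of
    // `main`, so the lambda reaches `offset` through main's frame pointer.
    auto result = [1, 2, 3].map!(a => a + offset).array;
    assert(result == [3, 4, 5]);
}
```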

(Is this a good way to implement lambdas? Not really! But we're stuck with them.)

IMO what this demonstrates is that LLMs are just as prone to jumping to ready but wrong conclusions as humans are.

FeepingCreature Bronze badge

Re: Tools for the job

It is very funny to me though that with Copilot, you can *genuinely* solve problems sometimes by just writing a really detailed comment. All we need is a version of gcc that calls out to GPT-4 for corrections.

FeepingCreature Bronze badge

Re: That is a quote I will keep

I have in fact made a new language that the LLM has never seen (check it out!), and the LLM has been pretty good at putting together programs in it. However, ironically, it is very close to D (the actual language D, i.e. DigitalMars D), so the model may have been cribbing off its D skill.

(If you really want to stump it, ask it to explain in detail how lambdas work in D. This is a bit unfair though because they work very differently from other languages, and also the way they work isn't really documented anywhere. But you can brutally confuse the poor thing by asking it how exactly a lambda call in D gets the runtime frame pointer, despite the lambda being clearly passed in a compiletime parameter...)

That said, your question is already answered: "Not well, but yes." One of the OpenAI papers is about this; I don't remember which offhand. The models "learn to learn": one of the patterns they pick up is gradient-descent-like in-context learning, i.e. they can complete patterns even if the first time they have ever seen the pattern is in their context window during evaluation. The one thing they can't do (yet!) that humans can is transfer those run-time "virtual" weights to more permanent memory. However, it's unclear how much this matters, considering context window sizes are getting bigger and bigger. And this seems like an engineering issue: for instance, if we can identify the circuits that implement this "runtime learning" pattern, we may be able to track which tokens the network attends to, and store them for the next "proper" learning phase.

FeepingCreature Bronze badge
Unhappy

Re: That is a quote I will keep

I don't know what to say. This just isn't true. People say this because they want it to be the case, but it really just isn't. There isn't even any evidence for this! There was this one paper that said you could phrase LLMs as a linear operator, but it used a *really* tortured operation and very obviously had little to do with how those networks actually work. Meanwhile, all the actual papers by OpenAI were "we built this network and it managed to generalize in interesting ways" and all the research in the field is iterations on this theme. Also if you actually use a current LLM, it is very obvious that it can do broadly novel work using generalized knowledge.

I frequently get an LLM to write programs that have simply never existed, and I can literally watch it iteratively build the code up using its understanding of what the code is doing and what I am asking of it - which, just to emphasize again, are novel requests on novel code that fall into established, generalized patterns of understanding.

Any claim that LLMs just copypaste from the corpus is simply not compatible with either the scientific publications of the field or observed reality. This is the machine learning equivalent of climate change denial.

FeepingCreature Bronze badge

Re: That is a quote I will keep

It's just false though. Like, do actually read the papers. The whole point is that it generalizes.

How to coax ChatGPT into making better predictions: Get it to tell tales from the future

FeepingCreature Bronze badge
Go

Good?

It's hard to see how politics could be made worse by superhuman predictions.

AMD to open source Micro Engine Scheduler firmware for Radeon GPUs

FeepingCreature Bronze badge
Thumb Up

Hell yes! Finally!

Maybe now somebody can actually figure out why it crashes all the time under load! Since AMD are clearly not up to it.

This card has been out for a year and a half, I should not be seeing MES crashes in my kernel log.

Tiny Corp launches Nvidia-powered AI computer because 'it just works'

FeepingCreature Bronze badge

As somebody who's having crashes with exactly that specific pile of shit in exactly that card regularly:

Expect? No. Hope? Yes.

At this point AMD kinda have to take drastic steps to regain-- wait, that's incorrect. *Gain* customer confidence. The only thing I have confidence in them for is fumbling otherwise-good technology through a terminal lack of polish.

Sidenote: anyone know where this mysterious repo is? edit: Aah, https://gitlab.freedesktop.org/tomstdenis/umr cool stuff.

Oh, and: George Hotz is completely right about AMD.

Third time is almost the charm for SpaceX's Starship

FeepingCreature Bronze badge

Re: maximising

Eh. In the exact phrasing, they're maximizing public safety as a secondary concern after the primary objective, which is attempting new techniques in space. Kinda hard to do that without launching a rocket.

FeepingCreature Bronze badge

Re: Capabilities.

Probably confused it with the 1100m³ payload volume.

EU-turn! Now Apple says it won't banish Home Screen web apps in Europe

FeepingCreature Bronze badge

Re: Illegal under the DMA

That's what Windows said too. Of course, there's actually no technical reason not to give non-Safari browsers access to the ability to present apps with native styling and special privileges.

BEAST AI needs just a minute of GPU time to make an LLM fly off the rails

FeepingCreature Bronze badge

All that's needed is token probabilities

Next up: "using just an EEG, we jailbreak an enemy combatant in less than a minute..."

How to weaponize LLMs to auto-hijack websites

FeepingCreature Bronze badge

Broadly correct, but a slight correction: the primary difference between the GPT generations is the size of the network, not just each generation's dataset. As things stand, GPT-2 was 1.5B weights, GPT-3 was 175B weights, and GPT-4 is suspected (leaked) to be 1.8T weights split across 16 units, of which only two (dynamically chosen) are active at once.

Someone had to say it: Scientists propose AI apocalypse kill switches

FeepingCreature Bronze badge

Seems a bit convenient to say "it isn't worth serious discussion", but unlikely to convince anyone. Personally I find that scenario very worrying - as in, almost the only one worth worrying about.

Though I do agree that by the time you hit autonomous action in the physical world, a killswitch won't fix things either. Then again, a national actor probably has an easier time being convinced to remotely disable a datacenter than to bomb it, and that can matter when an AI in mid-takeoff starts messing with your communication.

Tesla's Cybertruck may not be so stainless after all

FeepingCreature Bronze badge

Re: Musk? Who trusts this guy?

Staggering to think there isn't anyone better!

I mean, it's not like NASA relies on Musk by choice.

Musk claims that venting liquid oxygen caused Starship explosion

FeepingCreature Bronze badge

Wow, read the room. We don't need that kind of hostility here.

What the AI copyright fights are truly about: Human labor versus endless machines

FeepingCreature Bronze badge

Re: Blood in the Machine

Outlawing giving away your products, even for free? Bit of an overreach, seems to me.

Your concept would literally make Santa Claus the biggest criminal in the world.

Science fiction writers imagine a future in which AI doesn’t abuse copyright – or their generosity

FeepingCreature Bronze badge

Re: Regulate the prompts?

The problem with this scenario is not at all that the data was obtained illegally, but that you're committing a weird kind of slander by creating the impression that I'm the one doing or saying these things. If you're using, say, my mannerisms to generate a character that would never be confused for me, I have no problems with it. That's just acting.

What, would you ban celebrity impersonators?

FeepingCreature Bronze badge

Re: Regulate the prompts?

I don't understand why it's fine if you do it manually but it becomes immoral if you do it with a tool.

FeepingCreature Bronze badge

> If the LLMs could produce their receipts, authors would be a lot happier

I cannot imagine that would be the case. All the same economic worries would still apply if there were one more sale of each of your books. If Microsoft could solve this issue by building a library with a thorough purchase trail, don't you think they would?

The economic case for bookwriting rests critically on scarcity of authorial labor, not scarcity of paper and ink.

FeepingCreature Bronze badge

Re: Regulate the prompts?

And then, of course the reverse: use the model to estimate the originality of human-written published novels.

The Tolkien estate has a lot of royalties coming their way...

'Wobbly spacetime' is latest stab at unifying physics

FeepingCreature Bronze badge

Tegmarkism represent!

All my homies exist as logical implications of formal systems.

The thing that makes Tegmark IV appealing to me is that I've always thought: "no matter what actually exists or doesn't exist, the fact that the laws of physics, if computed, will contain me is immutable and inescapable." So if that's true, and it's hard to see how it couldn't be, why would existence need anything else?

"But mere mathematics doesn't give you the passage of time, or consciousness, or the phenomenal feeling of selfhood." "Yeah, I calculated the laws of physics and turns out that's what you said in that mathematical formalism as well."

Google teases AlphaCode 2 – a code-generating AI revamped with Gemini

FeepingCreature Bronze badge

If you can do it once in a million, you can train on it.

As long as the probability of success is even finitely greater than zero, it can usually be engineered to approach one.
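
(Concretely: if each independent attempt succeeds with probability p > 0, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n, which tends to 1 as n grows - so "once in a million" just means you need a lot of samples and a way to pick the winners to train on.)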

AI agents can copy humans to get closer to artificial general intelligence, DeepMind finds

FeepingCreature Bronze badge

Programming AI used to be a pipedream too. Now I write half my scripts with GPT-3, by which I mean I tell it what I want and it does the work.

FeepingCreature Bronze badge

Sure, and if you show me a landmine that can potentially learn to build more landmines I'll be just as worried.

SpaceX celebrates Starship launch as a success – even with the explosion

FeepingCreature Bronze badge

Re: Self destruct

Generally, if a rocket leaves the launch corridor, and you make the launch corridor more lenient, it'll leave that one as well - but this time with more speed. It's not like KSP where you can "save it" with deft maneuvering.

FeepingCreature Bronze badge

Re: I can't help but feel....

On the other hand, the satellites themselves are pretty flat, so they can "just" dump them out of a horizontal slit (Google "Starlink Pez dispenser"). Not to downplay the complexity of course.

FeepingCreature Bronze badge

Re: Conrgatulations due

I don't think those are leaks. Don't have a source, but I remember reading it's intentional. Not sure why though - indicators?

FeepingCreature Bronze badge

Re: I can't help but feel....

Yeah, there's no point in having insurance if you can cover the costs of a loss out of pocket. It's important to remember that insurance has negative expected cash value - otherwise, no insurance company would offer it. It's only worth it (in expected utility, not expected cash) if losses risk bankrupting you.
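
(Toy numbers: a £500 premium against a 1-in-200 chance of a £50,000 loss has an expected payout of £250; the other £250 is the insurer's costs and margin. You pay that gap to cap the tail risk you couldn't absorb.)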

SpaceX's Starship on the roster for Texas takeoff

FeepingCreature Bronze badge

(Don't downvote: it was true at the time.)

Making the problem go away is not the same thing as fixing it

FeepingCreature Bronze badge

Is it just me, or is it stupid, counterproductive and a borderline health hazard to do weekly tests of an alarm system while people are working? Sure, if you're testing evacuation procedures, that makes sense (though I'd never test those *weekly* - that just trains people to treat every alarm as a test), but if you're just testing the alarm itself, surely you'd do it outside of working hours?

Forget "final notice", if it was me I'd assume I was being hazed and start looking for another job.

Web Summit CEO's comments on Israeli conflict 'war crimes' sparks boycott

FeepingCreature Bronze badge

Freedom = Freedom from consequences

Free speech has of course always meant freedom from certain consequences. The term "freedom" is worthless if it doesn't include freedom from consequences - by that metric, China and Russia have free speech: you just have to deal with the consequences, which may include torture and murder.

Elon Musk's ambitions for Starship soar high while reality waits on launchpad

FeepingCreature Bronze badge

417

Once again, the launch was initially scheduled for 417. Like what, did Musk do a cheeky little weed-based rain dance?

Lost your luggage? That's nothing – we just lost your whole flight!

FeepingCreature Bronze badge

Re: This one command you must not enter

That's why when I want to mass-delete things, I always start with a `#`. `# rm -rf ~/.local/...`

Well, if I remember.

The SQL equivalent would be beginning every console session with `BEGIN TRANSACTION`.

Hope for nerds! ChatGPT's still a below-average math student

FeepingCreature Bronze badge

Re: Just wondering

I mean, it's not like humans usually remember text either - or events, or reality.

FeepingCreature Bronze badge

Re: " throw in a few trick questions"

If you hook an LLM up to an action harness - where certain outputs are interpreted as actions on reality, and the result, as measured by a sensor, is injected back into the token stream - then the LLM is in fact capable of picking up quite involved correlations in that loop.

That's the whole point of the plugin architecture.

FeepingCreature Bronze badge

"The details get lost" is also known as "abstraction", a key component of intelligence.

FeepingCreature Bronze badge

Re: " throw in a few trick questions"

Humans also only analyze correlations between tokens: you think your sensory neurons feed you direct access to unvarnished truth? LLMs are just one step further removed than humans are; neither of us experiences unmediated reality.

If you have any causal account of meaning arising from correlations in reality and sensory perception, that account would also justify meaning in LLMs.

ChatGPT will soon accept speech and images in its prompts, and be able to talk back to you

FeepingCreature Bronze badge

Re: Reliance

> - We could be deceived by AI systems. For example, an AI chatbot could be programmed to impersonate a real person, such as a friend or family member. This could be used to manipulate people into giving away sensitive information or money.

This is already happening.

California governor vetoes bill requiring human drivers in robo trucks

FeepingCreature Bronze badge

Human drivers can already do that - and are already doing that. It turns out the main defense is that most people aren't murderers.

Neuralink's looking for participants willing to be part of human trials

FeepingCreature Bronze badge

I for one...

I think Neuralink's approach is cool, and I'm excited to see how it works in practice.

Hey, maybe if it turns out well, I can get one in a decade or two!

FAA wants rocket jockeys to clean up after their space launch parties

FeepingCreature Bronze badge

Re: 25 YEARS sounds a bit long

Well, orbits are extremely predictable, so this just comes down to not allowing operators to even launch satellites that won't naturally decay within 25 years.
