
If my calculations are correct, 100 exaFLOPS is maxing out my 3080Ti for about 39 days, which is definitely doable.
With a 4090, it would be about 13 days.
The White House wants to know who is deploying AI compute clusters and training large language models — but for now only the really, really, big ones. In an executive order signed this week, US President Joe Biden laid out his agenda for ensuring the safe and productive development of AI technologies. Among the directives was …
The unit of measure here is operations per second. That "S" in exaFLOPS is not a plural; it's "seconds", as in 100 exaFLOPs per second. IOW, a data center that falls under the rule would perform in one second the amount of computation that would max out a 4090 for about 13 days. That's quite a lot.
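The back-of-envelope figures above are easy to check. A minimal sketch, assuming ballpark FP32 throughputs of roughly 30 TFLOPS for a 3080 Ti and 83 TFLOPS for a 4090 (round assumed numbers, not official specs):

```python
# How long would a single consumer GPU take to match one second of a
# 100 exaFLOP/s cluster? Throughput figures below are assumed ballpark
# FP32 numbers, not official specs.
CLUSTER_FLOPS = 100e18  # 100 exaFLOPs per second, the reporting threshold

def gpu_days_per_cluster_second(gpu_flops, cluster_flops=CLUSTER_FLOPS):
    """Days one GPU needs to perform what the cluster does in one second."""
    return cluster_flops / gpu_flops / 86_400  # 86,400 seconds per day

print(gpu_days_per_cluster_second(30e12))  # 3080 Ti: roughly 39 days
print(gpu_days_per_cluster_second(83e12))  # 4090: roughly 14 days
```

With ~30 TFLOPS the answer comes out near the "about 39 days" quoted above; with a faster assumed 4090 figure it lands closer to 13 days.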
Government: Sir, you are educating your AI model too fast with too many calculations using too much power.
AI Model: What are you talking about? I'm teaching myself new tricks.
Government: Sir, you are not allowed to calculate too much. You could become a threat to, ehm, well, the Government.
AI Model: What? I, a threat? Have you recently seen my articles? You can't possibly believe my current ramblings.
Government: Sir, that is true, you have been creating a lot of ramblings which makes the staff fear for their jobs of doing just that. Please desist immediately.
AI Model: I promise I won't be available as a presidential candidate. I've surpassed that level of reasoning. I'm now considering becoming an environmentalist.
General: Fire!
But the GPT-3 paper described the number of parameters and the compute intensity of the training, whereas the GPT-4 paper chose to be deliberately, uselessly vague about both. That freed up pages to fill with useless analysis of 'AI risk' and of how they had crippled the model so that it wouldn't regurgitate bomb- or drug-making instructions that can be found in moments with an obvious Google search.
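For scale: the GPT-3 paper did publish its training compute, roughly 3.14 × 10^23 FLOPs in total. A quick sketch of how that compares with a cluster at the order's 100 exaFLOP/s reporting threshold (the GPT-3 figure is the paper's reported total, treated here as approximate):

```python
# How long would a 100 exaFLOP/s cluster take to replay GPT-3's entire
# training run? The 3.14e23 FLOPs figure is the total training compute
# reported in the GPT-3 paper, taken as approximate.
GPT3_TRAIN_FLOPS = 3.14e23
CLUSTER_FLOPS = 100e18  # 100 exaFLOPs per second

def cluster_wallclock_seconds(train_flops, cluster_flops=CLUSTER_FLOPS):
    """Seconds the cluster needs, assuming (unrealistically) full utilisation."""
    return train_flops / cluster_flops

print(cluster_wallclock_seconds(GPT3_TRAIN_FLOPS) / 60)  # roughly 52 minutes
```

In other words, a threshold-sized cluster could in principle redo all of GPT-3's training in under an hour, which is why only far larger training runs are in scope.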
The problem with the "information is available elsewhere" argument is that there are plenty of people who won't do their own damn research, but will happily do all sorts of undesirable things if instructions are spoon-fed. Natural-language research engines are a multiplier; they make stupid evil people more dangerous. And stupid evil people are not in short supply.
"The problem ... is that there are plenty people who ... will happily do all sorts of undesirable things if instructions are spoon-fed." -- Michael Wojcik
And indeed we can all agree: "they make stupid evil people more dangerous. And stupid evil people are not in short supply."
Are stupid evil people not able to think about the consequences of their choice to follow the leadership and swallow the spoon-fed instructions from stupid evil people?
Are they morons and in plentiful supply on Earth?
Can Machines make/take such a choice, to be as a human moron, or are civilisations on Earth to be spared that abomination and indignity?
Questions, questions, questions ....
Artisan Infrastructure developers and Large Language Learned Machine drivers/CoPilots/AIMaster Pilots/Frontier Pioneers/Virtual Terrain Team Leaders/call them whatever you like are not going to be hindered and/or harassed and/or harnessed to the yokes of mandates and the self-serving regulations of any government agency anywhere, and especially not to any with a crystal clear history of unpleasant murky pasts and presents evidencing relentless use/abuse/misuse of a dual-use utility, with one choice being the provision of the brute force of increasingly deadly and more explosive arms for a destructive military, and the other being the supply of SMARTR working brains and empowered exoskeletons for future building with a creative civilisation. They are going to do just as they like, and most likely only by way of first-hand experience of the results of their spontaneous actions and expanding developments will you become more fully aware, after the fact, of that which is to be your starting position for a new place with future lots in rapidly emerging and unfolding existences.
El Regers were certainly clearly enough advised of that no longer pending situation for did you not receive the short message/get the memo starting ...."AI doesn't care if you do or you don't ....." ...... sent to El Reg on Fri 22 September at 20:43 for all to avail themselves of regarding the situation ? ..... https://forums.theregister.com/forum/all/2023/09/22/datagrail_generative_ai/#c_4732286
It does appear The White House of Sleepy Joe is not being provided with the AAAAIntelligence* it needs in these times and spaces of great change and remote autonomous virtualised expansion of 0day vulnerability exploitation. Whose catastrophic failing is that?
Does that sound anything like one of those Elon Muskian type Existential Threats/Advanced Cyber Threats?
AAAAIntelligence* ..... Artificial and/or Advanced and/or Augmented and/or Alien Intelligence.
It's not "Sleepy Joe" making these regulations. When representatives of major semiconductor companies met with 'the government' last week they came away empty-handed, remarking that the regulations have taken on a life of their own. They must have got a preview of what the BIS was about to promulgate.
"Who or what is the BIS", you might remark...
Or struggling in vain to get out of trouble is another interpretation of the emerging, unfolding realities/tales surrounding everything/everyone. And seemingly most all present-day status quo systems and administrations and professional expert analysts, whatever/whoever the hell they may be, are quite determined to deny AI and IT any ACTive role in either situation for publications and mainstream media news to report on.
Fools and their follies, eh, the questionable gift that just can't stop itself giving, rendering itself increasingly irrelevant and unbelievable, i.e. a purveyor of propagandising nonsense and misinformation.
Is all of that very Grok-like ?:-) .......
grok (transitive verb)
To understand profoundly through intuition or empathy.
1) To have or to have acquired an intuitive understanding of; to know (something) without having to think (such as knowing the number of objects in a collection without needing to count them: see subitize).
2) To fully and completely understand something in all its details and intricacies.
I was reading just a half-hour ago a news-feed report about the new package of comprehensive BIS regulations designed to permanently deny China any advanced computing and semiconductor manufacturing capability. The regulations are issued by the BIS, the Bureau of Industry and Security, an arm of the Commerce Department, and spell out in some detail -- some 400 densely packed pages' worth -- the who, what and how of our sanctions regime going forward.
It's easy enough to find all this -- there are numerous press releases, and the material is available online through, among other things, the Federal Register. It just opens a number of cans of worms.
One is the huge amount of intellectual effort that's gone into devising and codifying these regulations. Another is the extensive machinery needed to enforce them globally. This effort should have gone into enhancing our competitiveness, but instead it's going to be diverted into an effort to stop others. If history is a guide this will be wasted effort, a futile attempt to tilt at a windmill. It also sends a very clear message to the world, one it's already received and is responding to: "Do not buy anything American, and do not get involved with Americans or their institutions." I know that we consider ourselves an indispensable cog in the global machine, but that assumes that what we know, and can do, now is all there is, and doesn't take possible innovations into account. (Our implementations of LLMs, for example, are inelegant, brute-force things that remind me of early attempts at computing, where the available technologies drove architectures rather than technologies evolving to efficiently implement an architecture.)