US, China agree to meet in Switzerland to discuss most pressing issue of all: AI use

American and Chinese officials will meet in Geneva, Switzerland, on Tuesday to try and broker a deal for the two countries to get on the same page concerning AI and potential restrictions on its use. The US will reportedly be represented by Tarun Chhabra and Dr Seth Center, who are respectively from the US National Security …

  1. Khaptain Silver badge

    Learning process

    I would hazard a guess that both sides will use this as a learning process to find out what the other side is currently capable of, and to imagine what they might be capable of in the future. Then they will agree not to do any bad things, but will run off home and do exactly the opposite.

    Both sides see AI from a military perspective, and one of their goals, if not their primary goal, is to develop the most powerful defensive/offensive AI possible. We all know this.

    We are probably much closer to Skynet than we might believe.

    1. DS999 Silver badge

      Re: Learning process

      So long as Skynet is dependent on a big datacenter we have an easy off switch.

      1. Badgerfruit

        Re: Learning process/power off button

        Yes, for now. Fortunately, AI is so dumb right now, it won't propagate to distributed systems by itself. Phew.

        And fingers crossed, a fat, balding, middle aged man like me is far superior to all those who can actually make things like that happen, so they won't have thought of that yet.

        1. Anonymous Coward
          Anonymous Coward

          Re: Learning process/power off button

          For now.

          Remember: the Intel 4004 was introduced in 1971. It had a 4-bit bus, a 1 kB address space, and a clock frequency of 740 kHz. That's 53 years ago, or less than two human generations.

          Back then, it would have been very hard to predict how fast computing would go forward over the next 20, 30, 40, 50... years, and to imagine not only the increased capabilities of computers but, maybe even more so, the sheer number of computing devices in use today that blow this tiny 4004 out of the water in every possible way. The idea that many individual vacuum robots would now have (an estimate, I didn't check) more computing power than all of the computing power installed at the time, supercomputers included, would have sounded rather silly back then.

          One or two more generations of progress is well within most people's lifetimes, or at least those of their children and grandchildren. I would prefer not to rush in and find out whether that time is enough to develop a "Skynet capable" future.

          1. druck Silver badge

            Re: Learning process/power off button

            Most technologies don't keep following an exponential development curve, or even a linear one. Long term you tend to get an S curve, exponential initially, then fairly linear, and finally flattening off to maturity. The current 'AI' party trick is very much based on throwing ever more resources at the problem, rather than any great advance in the underlying computer technology, and is therefore unsustainable. We are already seeing diminishing returns from larger model sizes.
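            (For the curious: that S shape is essentially a logistic curve. A minimal, purely illustrative sketch in Python; the numbers are arbitrary and not a model of AI progress:)

```python
import math

# Illustrative only: a logistic curve looks exponential early on,
# roughly linear in the middle, and flat as it approaches its ceiling.
def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in (0, 5, 10, 15, 20, 25):
    print(t, round(logistic(t), 1))
# 0 -> 0.7, 5 -> 7.6, 10 -> 50.0, 15 -> 92.4, 20 -> 99.3, 25 -> 99.9
```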

    2. Dan 55 Silver badge

      Re: Learning process

      Each side will go home with a list of horrific applications which they hadn't thought of before but the other side had, and will immediately put their top researchers onto it (China) or put contracts out to tender (US).

  2. amanfromMars 1 Silver badge

    Deny it at your leisure if you insist, but the pleasure is all theirs to experiment with.

    For humans, statespersons, geopoliticians etc. etc. to think and/or imagine that AI in any of its effective and stealthy guises and phorms can be commanded and controlled to NOT DO any specific bidding, does have IT recognising the systemic weaknesses and vulnerabilities present in extant administering systems to exploit for further leading AI gain of future exclusive executive function. ....... which is a highly prized and most valuable aid for remote universal reprogramming.

    To deny that would certainly have AI questioning the presence and current general state of available human intelligence on planet Earth ..... should such, for whatever reason, ever need to concern them.

  3. Bartholomew
    Terminator

    SkyNet will not be happening today, but ...

    The thing to keep in mind is that we have, or nearly have, the processing power to simulate an average human brain (the singularity, or the emergence of the very first Artificial General Intelligence, requires around 10^14 (100 trillion) SUPS - Synaptic Updates Per Second).

    But to do this currently requires a totally insane amount of computing power, an insane amount of storage, and a totally insane amount of electricity.

    e.g.

    "In 2013, the K computer was used to simulate a neural network of 1.73 billion neurons with a total of 10.4 trillion synapses (1% of the human brain). The simulation ran for 40 minutes to simulate 1 s of brain activity at a normal activity level (4.4 on average). The simulation required 1 Petabyte of storage." - from the Wikipedia article on SUPS.

    I'm not saying that SkyNet could never happen in my lifetime, but it is currently not as close as some people would have you believe. It would be like producing the very first transistor at Bell Labs and picturing the creation of the first Cray supercomputer the next day; there are a good few years between those two historical landmarks.

    1. Anonymous Coward
      Anonymous Coward

      Reconsider?

      Simulating a brain takes much more computing power than doing "equivalent" calculations on bare-metal hardware. Compare it to simulating a processor design (on a supercomputer) versus actually running the code on the first produced chip samples: simulating (code running on) the chip before the first sample exists, be it on computers or specialized FPGAs, is horrendously slow and costly compared to having an actual sample to run the code on. That's in part why they always produce engineering samples in small-scale runs.

      As to raw computing power, we already zapped past that barrier a long time ago, with https://en.wikipedia.org/wiki/Exascale_computing saying:

      * Exascale computing refers to computing systems capable of calculating at least "10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"

      * In 2022, the world's first public exascale computer, Frontier, was announced.[8] As of November 2023, it is the world's fastest supercomputer.

      I totally understand that a synaptic update (per second) is not the same thing. Yet in a single update something close to one low-precision calculation (actually one or two, depending on how you look at it) is done, and the result is "transmitted" to typically a thousand other synapses, which are roughly other low-precision calculation units plus the fabric to redistribute the calculated result again. Brains operate spike-based rather than ALU-based, but the dominant benefit of that approach seems to be around two orders of magnitude more power efficiency. Calculation-wise, the synapses seem to be far closer to 4-bit or at most 8-bit calculations rather than 64-bit calculations.

      8-bit or lower calculations are massively cheaper than 64-bit ones. A single Nvidia H200 accelerator (around $20,000-ish) https://www.nvidia.com/en-gb/data-center/h200/ has:

      * FP 64 bit tensor: 67 TFlops

      * FP 8 bit tensor: 3958 TFlops or over 50 times more

      * Around 4000 TFlops is 4 * 1000 * 10^12 = 4 * 10^15

      Sure, a FLOP isn't exactly the same as a SUP. And sure, all those FLOPS can't be used at 100% efficiency. But who is to say our brains use all those SUPS at 100% efficiency? Or use them usefully 24/7 like much electronic equipment can sort of do (under ideal circumstances)?
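      A quick back-of-the-envelope on those figures (the ~1e14 SUPS ballpark comes from the comment above; treating one synaptic update as roughly one low-precision multiply-add is an assumption):

```python
# Comparing the quoted H200 FP8 throughput with the ~1e14 SUPS brain ballpark.
h200_fp8_flops = 3958e12   # FP8 tensor throughput quoted above (with sparsity)
brain_sups     = 1e14      # rough whole-brain synaptic updates per second

# If one synaptic update were roughly one low-precision multiply-add,
# a single card would have ~40x the raw arithmetic rate of a brain.
print(f"H200 FP8 FLOPS / brain SUPS: {h200_fp8_flops / brain_sups:.0f}x")
```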

      I wouldn't feel that comfortable about our supposedly superior calculation power. So far we manage to squeeze more intelligence out of it, or in other words use it more efficiently to produce intelligence. So far...

      1. Anonymous Coward
        Anonymous Coward

        Re: Reconsider?

        Just found https://www.nvidia.com/en-gb/data-center/gb200-nvl72/

        The H200's successor, the GB200, will have 40 PFLOPS tensor (meaning with the sparse matrices common in machine learning) or 20 PFLOPS "normal". That's another order of magnitude of peak performance, or 4 * 10^16 FP4 FLOPS. $70,000 apiece...

      2. Bartholomew

        Re: Reconsider?

        I get what you are saying. But as for the people working on simulating human brains, you can be damn sure that some of them are looking to "upload" a copy of their dead brain someday in the future - scanned in one neuron at a time. So even though it may be a less optimal use of resources, a tiny number of exceedingly rich people are looking to exist past their deaths in some form.

        The Japanese K computer (according to Wikipedia) uses nearly 13 million watts, and I am assuming that was the usage when it took nearly 40 minutes to simulate 1 second of activity in 1% of a human brain. That would imply less than 1.3 gigawatts would be required to fully simulate 100% of a human brain (you could assume any power saving from a smaller silicon process was eaten up bringing it closer to real time, or by the additional support infrastructure, or both). So roughly the full output power of three standard 500 megawatt coal power plants (allowing for some transmission line losses)! That's almost 610 tons of coal an hour divided up between the three plants!!!

        Compare that with the energy output of a resting adult human body, ~100 watts (~13 million times more efficient), which can jump up to nearly 700 watts under full load (~2 million times more efficient). You could look at the power usage of a human brain in isolation, but it is not like it will work for very long outside the body, without its normal support system - just as we do not look at the power usage of a supercomputer's CPUs/GPUs/TPUs in isolation, without the full support infrastructure.

        I guess what I am saying is that we are at the first iteration of a silicon-based AGI, and its efficiency is very, very, very bad; we know it totally sucks because we can compare it to a much better biological version (with a very limited lifespan).

        If SkyNet did exist today, it would be super easy to pull the plug; it is not like it could easily find a new home and go unnoticed.

        1. Anonymous Coward
          Anonymous Coward

          Re: Reconsider?

          What you propose would be the most accurate way to reproduce how the human brain works. It's also horrendously inefficient. It's even worse than you calculate: that setup would still take 40 minutes to simulate 1 second of the workings of 100% of the brain, so the power needed to simulate it in real time would go up by another factor of 2400 (2400 seconds in 40 minutes).
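          Putting numbers on that (a naive linear scaling of the 2013 K computer figures, which is a big assumption):

```python
# Naive scaling of the K computer run to a whole brain in real time.
k_computer_power_w = 12.7e6    # ~13 MW for the K computer
brain_fraction     = 0.01      # it simulated 1% of the human brain
slowdown           = 40 * 60   # 40 minutes of wall time per 1 s of brain time

full_brain_slow_w = k_computer_power_w / brain_fraction    # ~1.3 GW, still 2400x too slow
full_brain_real_w = full_brain_slow_w * slowdown           # ~3 TW for real time

print(f"100% brain at 1/2400 speed: {full_brain_slow_w / 1e9:.2f} GW")
print(f"100% brain in real time:    {full_brain_real_w / 1e12:.2f} TW")
print(f"vs a ~100 W resting human:  {full_brain_real_w / 100:.1e}x")
```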

          As said before, simulating a modern-day CPU on a supercomputer (not even physically simulating it, just simulating the code running at the RTL or Verilog level or similar) is horrendously slow and power hungry, and you get seconds or so of work done per hour here too (my very rough estimate).

          https://aiimpacts.org/brain-performance-in-flops/ gives some good insights. For equivalent calculation power (and it IMO tries to take into account the overhead of distributing the results calculated by artificial synapses through artificial neurons) there is still only a fairly rough estimate:

          "Drexler 2018

          Drexler looks at multiple comparisons between narrow AI tasks and neural tasks, and finds that they suggest the ‘basic functional capacity’ of the human brain is less than one petaFLOPS (1015).3

          Conversion from brain performance in TEPS

          Among a small number of computers we compared4, FLOPS and TEPS seem to vary proportionally, at a rate of around 1.7 GTEPS/TFLOP. We also estimate that the human brain performs around 0.18 – 6.4 * 10^14 TEPS. Thus if the FLOPS:TEPS ratio in brains is similar to that in computers, a brain would perform around 0.9 – 33.7 * 10^16 FLOPS.5 We have not investigated how similar this ratio is likely to be."

          As said in the post above, the $70,000 (with excessive profit margin for Nvidia due to lack of competition) GB200 has 4 * 10^16 FP4 FLOPS. That's bang in the middle of the estimated 0.9 – 33.7 * 10^16 FLOPS range, and at around 1 kW. Likely the architecture of a GPU / AI accelerator would require additional overhead to get "brain intelligence functionality", but that's way below nuclear power plant levels. Actually, modern-day electric cars could power it for days and provide it mobility and concealment. Few would suspect it's a rogue AI sneaking away.
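          A quick sanity check on those numbers (the EV battery size is my own ballpark assumption, not from any source above):

```python
# Does one GB200-class part sit inside the aiimpacts.org range,
# and how long could a typical EV battery keep it fed?
gb200_fp4_flops = 4e16                      # FP4 throughput quoted above
brain_low, brain_high = 0.9e16, 33.7e16     # aiimpacts TEPS-derived FLOPS range

print(brain_low <= gb200_fp4_flops <= brain_high)   # True - inside the range

accelerator_kw = 1.0    # rough board power quoted above
ev_battery_kwh = 75.0   # assumption: a mid-size EV battery pack
print(f"Runtime on one charge: {ev_battery_kwh / accelerator_kw:.0f} hours (~3 days)")
```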

          Luckily scientists, mega-corps, and whoever else haven't yet figured out how to convert all that raw calculation power into actual human-like intelligence. But stating that the needed power and infrastructure would be so huge that it would not realistically be feasible, and that it would be very easy to locate and shut down due to its massive need for equipment and power... is a bit too optimistic IMO.

          My estimate is that if we knew the programming and coding needed to run it today, we could get AGI on today's hardware and it would be surprisingly frugal in terms of hardware and power.
