New Google AI model maps world in 10-meter squares for machines to read

Google has released a new AI model that maps the world in 10-meter squares for machines to read. The company's AlphaEarth Foundations model has been trained on vast amounts of Earth observation data from satellites and other sources to produce embeddings for computer programs, rather than visual imagery or some other form of …

  1. Rafael #872397
    Boffin

    ...an error rate 24 percent lower than other models...

    We need an El Reg standard for approximate comparisons between vague amounts of something or else. The Tukey? The So-So?

    1. Anonymous Coward
      Anonymous Coward

      Re: ...an error rate 24 percent lower than other models...

      As it is AI, perhaps a unit related to shit?

      ".. an error rate 24 percent lower than the current bog blocker!"

      1. Ken Shabby Silver badge
        Alert

        Re: ...an error rate 24 percent lower than other models...

        A Fatberg. Turdberg is apparently a valid surname in some parts of the world.

      2. Anonymous Coward
        Anonymous Coward

        Re: ...an error rate 24 percent lower than other models...

        "As it is AI, perhaps a unit related to shit?"

        "Churchill Cod" was once a popular euphemism amongst the lower orders.

    2. LionelB Silver badge

      Re: ...an error rate 24 percent lower than other models...

      I propose The Drake. Or perhaps the POOMA (Pulled Out Of My Arse).

    3. Doctor Syntax Silver badge

      Re: ...an error rate 24 percent lower than other models...

      "We need an El Reg standard for approximate comparisons between vague amounts of something or else."

      Obviously it must be the firkin as in "two firkin small" or "two firkin big".

    4. Brave Coward Bronze badge

      Re: a standard for approximate comparisons between vague amounts of something or else

      As always, the French solved this a long time ago.

      It's called 'Grandeurs et unités - Systèmes d'unités pifométriques' ('Größen und Einheiten - Nasimetrischeinheitensystem' in German, 'Quantities and units - Nosemetrical system of units' for the rest of the world).

      So, if you're blessed enough to read some (pretty vernacular) French, here it is: https://asqualab.webmo.fr/documents/qualite/metrologie/19-unites_pifometriques.pdf

      You're welcome.

  2. HuBo Silver badge
    Windows

    Not convinced

    Looking at Table S1 (pdf p.20) of the paper linked through "according" in TFA, it seems to me that they start from a 40-D spatial data vector (with maybe another 24-D of time-dependent Sentinel-2 & Landsat features), then run some magic ML through that to get a 64-D embedding on a sphere. The compression of source data features seems negligible (e.g. 1:1), and a lot of the source data was already professionally preprocessed before its use here anyway (DEM, GEDI, ERA5, GRACE, NLCD, Wikipedia, GBIF). Plus, 10m is really the highest resolution of this dataset (available only for some optical bands of Sentinel-2 and the ice-monitoring SAR C-band of Sentinel-1) ... most of the dataset is coarser (down to 100m).

    Also, they appear to rely on a method of Dosovitskiy et al., 2020 for their analysis, which has never been peer-reviewed and published (it is a conference paper). So the whole approach and argumentation appear weak to me, and I don't really see anything that qualifies as results in there either (except for a feeling of "the whole earth at 10m, wow!" -- which this clearly isn't, given the source data).

    The description of what this analysis(?) software(?) database(?) is could probably do with a rewrite to enhance clarity, imho. At present it appears to add zero value to the source datasets ... or maybe I misunderstood(?)
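    The back-of-envelope dimensionality argument above can be sketched in a few lines of Python. The figures (40-D spatial, 24-D temporal, 64-D embedding) are the commenter's reading of Table S1, not values verified against the paper:

    ```python
    import math

    # Per-pixel source features, as read off Table S1 by the commenter (assumption):
    spatial_dims = 40        # static spatial data vector
    temporal_dims = 24       # time-dependent Sentinel-2 & Landsat features
    source_dims = spatial_dims + temporal_dims   # 64

    embedding_dims = 64      # output embedding length, per the comment

    # Roughly 64-D in, 64-D out: essentially no compression of feature dimensions.
    compression_ratio = source_dims / embedding_dims
    print(f"source {source_dims}-D -> embedding {embedding_dims}-D, "
          f"ratio {compression_ratio:g}:1")

    # "Embedding on a sphere" just means each 64-D vector is L2-normalised,
    # so it lies on the unit hypersphere:
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    v = normalize([1.0] * embedding_dims)
    print(round(sum(x * x for x in v), 6))  # 1.0 -> unit-length vector
    ```

    Note this only compares raw feature counts; whatever value the model adds (or doesn't) would be in how the 64 output dimensions reorganise the information, not in shrinking it.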

    1. that one in the corner Silver badge

      Re: Not convinced

      Ah, you forget the almost mystical ability of current AI to fill in the gaps. If we can "enhance" a blurry old TV programme to fully watchable modern standards Doom Watch style[1] then we can easily interpolate the missing 10m resolution data.

      > most of the dataset is coarser (down to 100m)

      After all, 10m from 100m, why, that is only a factor of ten, grab the polyfilla. What was that? Volume, not linear scaling? Ok, just a factor of 1000, still doable. Huh? 40 dimensions? Um, ah, - look, squirrel! (door slams)

      > 10m is really the highest resolution of this dataset ... ice-monitoring SAR C-band of sentinel-1

      "Gentlemen, Google Gemini assures us there are enough ice deposits below the Gobi Desert to defeat Global Warming. Acting on Gemini's information, we have started issuing shovels to the Bedouin."

      [1] /s wrt the "enhancements", btw; cracking episode otherwise, especially wrt politicians.

      1. Doctor Syntax Silver badge

        Re: Not convinced

        "Ah, you forget the almost mystical ability of current AI to fill in the gaps."

        Absolutely. The model shows you what ought to be there.

      2. LionelB Silver badge

        Re: Not convinced

        "Acting on Gemini's information, we have started issuing shovels to the Bedouin."

        I presume you are also relocating them from their usual habitats to the Gobi Desert, or that may be a complete waste of time ;-)

        1. that one in the corner Silver badge

          Re: Not convinced

          Gemini assures us that this is correct. Although we did push back on the suggestion of using the Adirondack.

    2. harrys Bronze badge

      Re: Not convinced

      Nope, not misunderstood ...

      You're talking about architecture, whereas they are talking marchitecture :)

      And we all now know where that got Intel!

  3. Forget It
    Go

    ESA Copernicus Sentinel-2

    has been doing 10x10m resolution for a decade.

    See https://browser.dataspace.copernicus.eu
