At last - my topic ;)
Chair of CEN-CENELEC JTC21 here, the (admittedly arcanely named) standardisation committee that has the mandate from the European Commission to create the so-called "harmonised standards" for the AI Act.
Adding to the article:
- The AI Act was originally designed in the pre-ChatGPT era and mostly sets the rules for AI *products*. The most interesting category here is AI products (or systems) for high-risk applications. These incur the more onerous requirements laid down in Articles 8 to 15.
- Having said that, these articles are fairly high level, e.g. they might require human oversight without specifying what exactly that means or how to go about it. An officially accepted (but not compulsory!) implementation path is described in a set of "harmonised" standards. Whoever follows these standards is automatically presumed to be compliant with the corresponding AI Act requirements, so this will be the preferred route for most organisations. This is what we are developing in JTC21.
- Requirements specifically for foundation models (NOT products!) were bolted onto the draft AI Act only during the final rounds of negotiations in 2023. They work differently, and since there was no time to develop standards for these requirements, the Code of Practice was created instead, through a more tightly managed dialogue with the EU in the driving seat rather than a standardisation body. It is possible (but not yet decided) that the Code of Practice will be replaced by an additional set of harmonised standards a few years down the line.
Take this as the opening salvo for an AMA around European AI standardisation ;)