Draft EU AI Act regulations could have a chilling effect on open source software

New rules drafted by the European Union aimed at regulating AI could prevent developers from releasing open-source models, according to American think tank Brookings. The proposed EU AI Act, yet to be signed into law, states that open source developers have to ensure their AI software is accurate, secure, and transparent …

  1. Paul Crawford Silver badge

    As far as I can see from a very quick look, it says nothing of the sort about open source. Just that AI applied to anything still has to meet the rules on safety and discrimination that any other technical or business method would have to.

    If you use anything in your company you ought to do due diligence on it and/or have supervisory capacity to control it. AI is just another tool.

    1. Mishak Silver badge

      Indeed

      And anyone deploying a system where functional safety is a consideration is required to ensure that the safety level of any component that is used satisfies the project's safety objectives (SIL / ASIL, ...).

      The due diligence around that would include ensuring that documentation is appropriate, test coverage is complete, and that test artefacts are available to show that the claimed requirements are satisfied and that testing has been completed. This is unlikely to exist for an open source project, so this work is going to have to be done in house.

      To me, this act is really just saying that using open source doesn't mean all of the above can be ignored (which, unfortunately, is quite often the case when open source is used within safety-related systems). The only way there could be any comeback on the open source developers would be if they made a claim that the software "is safe", but that would be tricky: who, exactly, made that claim?

    2. UCAP Silver badge

      I think the scenario they are talking about is a company that uses an open-sourced AI component and subsequently falls foul of the regulations; the company might decide to try to pass the legal buck on to the open source developer(s).

      Pretty sure that there is a way to solve this by adding appropriate clauses to the license. Firstly you can put a condition in the license that states that use of the open source component in any commercial application is at the company's own risk, and that the authors of the component do not accept any responsibility for legal problems that this might cause the company. You could reinforce that with a clause that says that the AI component cannot be used for any commercial purposes within EU legal jurisdiction, and that the legal costs of the open source developers must be paid by the company regardless of the outcome of any legal action.

      Note: I am not a lawyer, this is simply my take on the subject.

      1. katrinab Silver badge

        “No commercial use” clauses don’t comply with most definitions of open source.

      2. Mishak Silver badge

        In the functional safety domain, the buck stops with the system integrator - unless there is an explicit contract in place stating that the code is suitable for use in the end product. Even then, the integrator will have to be able to prove* that they have carried out an appropriate "due diligence" check.

        *prove basically means they can show documentary evidence that a process was used to qualify the component, that test artefacts and the like are available, and that any test results are reproducible.

      3. Doctor Syntax Silver badge

        "Firstly you can put a condition in the license that states that use of the open source component in any commercial application is at the company's own risk, and that the authors of the component do not accept any responsibility for legal problems that this might cause the company."

        Something along these lines is fairly standard.

      4. heyrick Silver badge

        Don't most of the licences say it is supplied "as is", "no liability", etc etc?

        1. MacroRodent

          Yes, they disclaim

          Yes. For example, the old and widely used MIT License says in big friendly letters:

          THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

          That should do it.

    3. Anonymous Coward

      TLDR

      To me, from the summary of the legislation, it looks quite reasonable and basically says that a company can't use open source as an excuse for not meeting other requirements. I fail to see a direct impact on the open source community.

      1. John Brown (no body) Silver badge

        Re: TLDR

        Yes, there are already parallels in existing case law. Take the recent case of a woman who died after eating a "vegan" product from Pret which, it turned out, was "contaminated" with milk product, triggering her dairy allergy. The company producing the ingredient claimed the product was dairy-free. Pret had no evidence that they had checked those claims, and went on to repeat the claims on their end product. Both made claims, and so both are on the hook for it. If either had NOT made that claim, one or both of them would probably not be on the hook.

        If open source AI models and/or code are used, the developers have to be careful with what they claim, and anyone using them commercially has to equally make sure any pre-existing claims are valid and that any changes they make to the code, the model, the data or how it's deployed also meet any claims they may make.

        1. Jellied Eel Silver badge

          Re: TLDR

          Both made claims, and so both are on the hook for it. If either had NOT made that claim, one or both of them would probably not be on the hook.

          Wouldn't they be on different hooks, though? The victim's next of kin sue Pret for a contaminated product, then Pret sues their supplier. Would the victim be able to pursue claims against both? I'm also thinking contracts could attempt to indemnify suppliers, ie code provided as-is/as-seen, or caveated like the standard food labels warning that a product may contain traces of nuts.

          Something that used to be a pet hate of mine: sales insisting bids must be 'fully compliant', even though RFPs wanted us to take liability for things we had no control over. For AI, I'm kinda curious how far that may extend, eg would it include 'AI' routines used in video games?

  2. ThatOne Silver badge

    Job opportunities!

    > Supercomposite told El Reg random images of AI-generated people can often show up in negative prompts

    And that, mark my words, was the birth of the profession of "AI shrink"!

    "Let me explore the subconscious of your AI model: Does your AI dream of electric sheep, or are there lost, repressed childhood memories buried deep within, which could be triggered unexpectedly and cause it to misbehave, resulting in lawsuits, lost profit and all that?"

    1. John Brown (no body) Silver badge

      Re: Job opportunities!

      Maybe "AI" should be treated as an "employee" in terms of how it "represents" the company, ie it produces "work" for the company like any other employee, therefore the employer is 100% responsible for its actions and what it produces on company time. In other words, there are processes in place to make sure the employee is producing valid work up to spec. It doesn't matter what the "thought processes" are inside the employee's head; what comes out as a result of those thought processes is what counts and what needs to be up to spec.

      1. ThatOne Silver badge

        Re: Job opportunities!

        > there are processes in place to make sure the employee is producing valid work up to spec

        You mean bug-free? Come on, they don't bother to do this for simple conditional software; there is no chance they'll spend the money to test fuzzy AI stuff beyond the basic "seems to work as expected" part. Lawsuits are just a possibility (and covered by insurance), while testing expenses are a certainty.

        1. John Brown (no body) Silver badge

          Re: Job opportunities!

          I only said "valid work up to spec". I didn't say the spec had to be any good :-)

  3. amanfromMars 1 Silver badge

    Future Building via the Advanced IntelAIgent Virtual Machine Root Route

    There's a hell of a lot of ifs and ors, somehows and unforeseen or uncontrollable probabilities invented in that Brookings tale, Katyanna, and none here would expect them capable of spouting anything else for that is surely undeniably their usual bog standard modus operandi/vivendi.

    It would be more than a tad odd though for them to think AIdDevelopment be listening to take heed of their concerns rather than having extensive and expansive plans already rolling along to eliminate them and in so doing have IT and media paint a whole new book of novel panorama pictures for human realisation and virtual machine colonisation.

  4. EmilPer.

    "be transparent about risk and data use in clear technical documentation"

    what is wrong with that ?

    This should apply to all software, open source or not.

  5. claimed Silver badge

    Hindsight

    When we look back at the Victorian shysters who added 'Electric' or 'Magnetic' to their products to sell them, it strikes me that we're just seeing the same thing now with 'ML/AI'.

    People don't really know how it works, so there is just a lot of noise and products being created with the label slapped on.

    Looking forward to a future time when ML is just in the toolbox and people are not making a load of noise about the modern equivalent of magnetic soap!

    1. Martin an gof Silver badge

      Re: Hindsight

      Victorian shysters who added 'Electric' or 'Magnetic' to their products

      Or 'Radium', 'radioactive', 'X-ray' etc. slightly later.

      M.

    2. TimMaher Silver badge

      Re: Hindsight

      Back in my day, he said, while staring myopically into the distant past, we called this stuff “Expert Systems”.

      About the same time that 4GLs were going to change the programming world.

      Ah well.

      1. Nick Ryan

        Re: Hindsight

        Yep, I'm still waiting for anything to actually be AI. So much of what's chucked out and called AI is nothing more than either Logical Reasoning ("Expert Systems") or Machine Learning (pattern recognition).

        The stupid cases about whether or not a tool (application) owns the produced work don't help the insanity of the AI bullshittery either.

  6. steelpillow Silver badge

    Just call it AINAI

    "AINAI" is an acronym of "AINAI Is Not an Artificial Intelligence."

    This software is made available "as is". It has no defined function, such as Artificial Intelligence or ruling the world or befuddling EU regulations, but is made available in case others might find it useful.

  7. ChoHag Silver badge

    Every cloud has a silver lining

    This is of course awful, chilling effect on development and all that, HOWEVER ... documentation mandated by law? I could get behind that.

  8. karlkarl Silver badge

    The GPL has:

    15. Disclaimer of Warranty.

    THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

    16. Limitation of Liability.

    IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

    I don't see how any law can undermine this.

    1. John Brown (no body) Silver badge

      "TO THE EXTENT PERMITTED BY APPLICABLE LAW""

      "UNLESS REQUIRED BY APPLICABLE LAW"

      Licences and contracts cannot override national law, which the quoted licence makes clear. The concern is how the new EU law might affect the licence terms above, ie the applicable law. It's also the reason you almost NEVER hear about some product's T&Cs being disputed in a Court of Law; rather, the supplier/manufacturer will settle out of court.

  9. Anonymous Coward

    How have they defined AI?

    Because we don't have any artificial intelligence, just pretend neural nets and training sets.

    1. Anonymous Coward

      Re: How have they defined AI?

      Isn't that the modern definition of AI? The training set should preferably be racially and/or gender biased, otherwise it will not be representative of real-world Big Data.

  10. Justthefacts Silver badge

    Autonomous vehicles…..

    So, it’s fairly clear that the main target of this legislation is “high-risk AI”, which they define as safety-critical AI.

    In the context of current commercially relevant products, that's mostly autonomous vehicles. The EU Commission essentially want the legal ability to requisition the (proprietary) source code of Tesla, Waymo and Mobileye before it goes on the roads, plus the training data, which is nearly as valuable. Hand the whole lot to their pet R&D institutes for "certification": to take apart, analyse, and almost certainly decide it's "not safe"… before releasing their own rival autonomous code, which will be "Europeanised".

    I doubt any non-EU company will be willing to touch this process with a barge pole. But hey, give it a go, let us know how it all goes for you!

    1. heyrick Silver badge

      Re: Autonomous vehicles…..

      Or maybe they just want someone to lay blame on should it fail to spot a child on the road and thus run it over. Because nothing is going to change if the company blames the programmers and the programmers blame the data set and......

      They probably don't give a crap about the source code, only that it can be proven it was tested on a few hundred different children and not, say, tested on zero children because "they're just small-sized adults - it'll be fine".

      Or similar, for whatever other safety-critical use of "AI" is out there.

      1. Justthefacts Silver badge

        Re: Autonomous vehicles…..

        As you say, they don't need white-box visibility of the source code to ensure "don't kill the children". It's difficult to see how reading the source code would help *at all* in understanding the performance of a neural network. Certainly no researcher anywhere would say white-box analysis predicts performance in a particular scenario.

        If you want to know performance, you want results from some standardised tests. They could do that. That would be normal. But the law is quite explicit in not defining any standardised test regime.

        Yet the EU want source code and traceability *only*. It's quite clear what they want it for.

    2. John Brown (no body) Silver badge

      Re: Autonomous vehicles…..

      The EU is not only far too bureaucratic to actually do what you describe, but because it's so bureaucratic there would be whistleblowers galore if they tried. And anyway, there are governments around the world, with the US leading the way, which require commercial source code to be submitted to them under various circumstances, either for QA or "safekeeping". What's being proposed by the EU isn't new, other than being more specific about what it wants.

      And don't ever forget, the processes you accuse the EU of are waaaay harder to implement there than in the US. Shout "patriotism" in the US and its two parties, REP and DEM, will often come to agreement quite quickly on a national issue. Try that in the EU and you've got 27 national governments, each with AT LEAST two main parties, often many more, all trying to come to their own internal decisions before they even think about agreeing with all or most of the 26 others. And if a new EU "law" is introduced, it's then up to each nation to implement it as it sees fit. Usually more or less all the same, but often with variations.

  11. Howard Sway Silver badge

    I discovered this woman, who I call Loab

    .... which makes you sound very believable and not at all weird in any way.

    Have you seen her anywhere else, such as your bathroom mirror? Or perhaps seen too many low-budget horror films? On the other hand, people "see" faces in random places all the time; it appears we are wired that way. But most people don't generally name them. Especially like that.

    1. heyrick Silver badge

      Re: I discovered this woman, who I call Loab

      Dall-E is quite amusing when it gives a completely gonzo image in response to some input, and you then have to take apart what you said to try to work out how it arrived at that result. Turns out, even language that one thinks is pretty clear can be ambiguous.

      (plus, sometimes Dall-E is just amusing: I have a girl holding what looks like an Olympic torch because it clearly has no idea what a katana is; so it can also be fun to provide gibberish and see what comes out; it has absolutely got the idea of a giant flying warrior hamburger made of knitting)

  12. GrapeBunch

    "Draft EU AI Act regulations could have a chilling effect on open-source software"

    Only if the open-source is struck directly with the tube of frozen sperm.

  13. Sam Adams the Dog

    Sentience of AI

    Google's Pichai admitted that Google's AI voice assistant sometimes fails to understand and respond appropriately to requests. "The good news is that anyone who talks to Google Assistant—while I think it is the best assistant out there for conversational AI—you still see how broken it is in certain cases," he said.

    Wait... what if it's sentient but merely hard of hearing?

    1. ThatOne Silver badge

      Re: Sentience of AI

      > what if it's sentient but merely hard of hearing?

      What if it's sentient but not really interested in what you have to say?...

  14. Mike 137 Silver badge

    Oh no, not again...

    ""Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided 'as is'..."

    Yet another example of the confusion between open source and free (in the no-license-fee sense). They're not the same thing at all. They may co-exist, but they're entirely independent.

    Apart from which, there is also a reasonable argument that liability (in the broadest sense) for software safety should be commensurate with the potential for harm associated with its use, independent of its licensing regime.

  15. JimC

    It does sound

    Like another instance of "we're an innovative, disruptive Internet business, we shouldn't have to obey national law". It makes a change, I suppose, that this particular wail is not coming from a Silicon Valley startup paying its executives megabucks out of investors' stakes.
