JetBrains' unremovable AI assistant meets irresistible outcry

JetBrains introduced an AI assistant in December to help programmers write code. Now the biz is trying to figure out how to allow its customers to get rid of it. "JetBrains AI Assistant is similar to GitHub Copilot, but it’s deeply integrated into JetBrains development environments (IDEs), code editors, and other products," …

  1. Will Godfrey Silver badge
    Facepalm

    JetBrains is completely missing the point

    It's throwing developers into the position where they have no idea what is actually in the resulting code - never mind where it came from.

    1. Mike 137 Silver badge

      Re: JetBrains is completely missing the point

      "the position where they have no idea what is actually in the resulting code

      Bloated closed source libraries have been doing this for years, so this is only one step further down that road. The IP issues are perfectly valid though and a significant commercial hazard.

    2. 142

      It does nothing of the sort!

      You have to actively enable, subscribe, and pay them a fortune for the f'n thing before it does anything.

      These complaints are beyond deranged.

      It's akin to being afraid you're going to find yourself locked on a cruise ship just because the shopping mall you go to added a travel agent...

  2. Ayemooth

    Is it really a "plugin" if it can't be "unplugged"? At the risk of sounding old, don't words matter any more?

    See also when companies use the term "deprecated" when they mean "removed".

    1. alain williams Silver badge

      Not knowing what words mean

      My favourite hate of misused words is "regular", eg "we will do this more regularly" - where what they mean is "frequently". Often said by people who should know better.

      Maybe I am just showing my age.

      1. Michael Hoffmann Silver badge
        Coat

        Re: Not knowing what words mean

        Or the other way around?

        I'm happy to report that I'm quite regular, but thankfully not frequently. Unless I ate something bad.

      2. David 132 Silver badge

        Re: Not knowing what words mean

        Ha! You've reminded me that I got a metaphorical rap over the knuckles from my English teacher at school for mixing up the usage of "frequently" and "regularly". Bless Mr Philpott; it's a lesson that has stuck with me for nigh on 35 years now :)

      3. jmch Silver badge
        Thumb Up

        Re: Not knowing what words mean

        Hehe, me too....

        Right up there with accuracy vs precision

        1. keithpeter Silver badge
          Childcatcher

          Re: Not knowing what words mean

          "...accuracy vs precision"

          I used to teach that lesson to workshop based vocational students and lab technicians early in the term. Along with Precision's little friend Resolution. And the Error cousins Systematic and Random.

          I had a small selection of 'modified' verniers and micrometers. And a slightly distressed top-pan balance. Early in the term because reading scales exposes any perceptual/eyesight/dyslexia issues fairly promptly. And they used the ideas in later practical work.

          Went well that one. A lot of concepts in 3 hours.
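The distinction can be put in numbers - a quick sketch in Go, with made-up vernier readings: the offset of the mean from the true value is the systematic error (accuracy), while the spread about the mean is the random error (precision).

```go
package main

import (
	"fmt"
	"math"
)

// meanStd returns the mean and (population) standard deviation of xs.
func meanStd(xs []float64) (mean, std float64) {
	for _, x := range xs {
		mean += x
	}
	mean /= float64(len(xs))
	for _, x := range xs {
		std += (x - mean) * (x - mean)
	}
	std = math.Sqrt(std / float64(len(xs)))
	return mean, std
}

func main() {
	const trueValue = 10.00 // known length, mm

	// Hypothetical readings: tightly clustered (precise) but offset
	// from the true value (inaccurate) - a systematic error.
	readings := []float64{10.31, 10.29, 10.30, 10.32, 10.28}

	mean, std := meanStd(readings)
	fmt.Printf("systematic error (accuracy): %+.2f mm\n", mean-trueValue)
	fmt.Printf("random error (precision):    %.3f mm\n", std)
}
```

A 'modified' micrometer shifts the first number; a shaky hand or coarse scale widens the second.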

          1. Doctor Syntax Silver badge

            Re: Not knowing what words mean

            My analogy would be an old and old-type fabric surveyors tape, stretched in places.

      4. sorry, what?

        Re: Not knowing what words mean

My pet hate: "revert" in Indian English to mean "respond". This is not the same as returning to a previous state... d'oh!

    2. Someone Else Silver badge

      Is it really a "plugin" if it can't be "unplugged"?

      cf. Micros~1 and Insecure Exposer Internet Explorer.

      1. cookieMonster Silver badge
        Joke

        You spelt “Exploder” wrong

      2. sev.monster Silver badge

Ah, 8.3 short names. I used them when filenames were too long and inefficient globbing wasn't good enough.

Nowadays I'm happy with Zsh's curly-bracket filename expansion and other features, and I've never looked back.

    3. CowHorseFrog Silver badge

That's your definition of plugin. JB's definition is probably circular: it's a plugin because it's called a plugin.

    4. CowHorseFrog Silver badge

      Kind of like Democratic North Korea, where you can only vote for one guy.

  3. 45RPM Silver badge

    Yeah. I’m clearly just old and grumpy. But I don’t like this and I’m going to stay in my retirement home by the C.

I know the arguments against using C, yadda yadda, but 30 years of C experience says my C is pretty efficient now - I'll never get that good at Java, Rust or Swift. But you know what? I'm not convinced that AI C will ever get as good as I am either - even if an AI is trained to write C (one probably already has been).

    Which, I suppose, is just an old dude yelling at clouds. And what I’m yelling is “stop the world, I want to get off.” This tech is a tech too far (well, for me anyway)

    1. CowHorseFrog Silver badge

In most cases, efficiency doesn't matter; what really matters is correctness.

      1. Richard 12 Silver badge
        Terminator

        That is of course why LLMs cannot be used for code generation:

        It will not be correct.

        On the bright side - it often doesn't compile/translate either, so the human does have to rewrite it anyway.

      2. 45RPM Silver badge

Define efficiency. Efficiency of execution (and then what? Efficiency of compute? Memory usage? Storage? Has its algorithmic efficiency been calculated?)? Efficiency of maintenance? When I say efficiency, I abuse the term horribly - including, but not limited to, does it follow the SOLID principles? And, by the way, I'm also fine with breaking those principles - but you need to understand why they exist and why you need to break them before you do. Breaking them because you didn't understand them is as cardinal a sin as a developer releasing code that they don't understand.

        1. CowHorseFrog Silver badge

I'm talking about efficiency suggestions from AI tools like this. They're going to be extremely simple, and basically no different from any other non-AI tool: stuff like using a builder when creating a message rather than concatenating strings, etc.
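For what it's worth, that kind of suggestion is easy to illustrate - a quick sketch in Go, whose strings.Builder plays the same role as Java's StringBuilder (the function names here are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// joinConcat builds a message with repeated concatenation: each +=
// allocates a new string and copies the old contents across.
func joinConcat(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// joinBuilder does the same with strings.Builder, which grows one
// internal buffer instead of reallocating on every append.
func joinBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"user=", "alice", " action=", "login"}
	// Same result either way; the builder just does fewer allocations.
	fmt.Println(joinConcat(parts) == joinBuilder(parts))
}
```

Exactly the sort of mechanical rewrite a non-AI linter has suggested for years.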

      3. Anonymous Coward
        Anonymous Coward

> In most cases, efficiency doesn't matter; what really matters is correctness.

        Nope. A lot of inexperienced programmers misunderstand this.

        What matters is maintainability. Correctness is worth nothing if the requirements change and the code is too difficult to understand and change.

Whereas maintainable code that is incorrect can easily be corrected.

        1. CowHorseFrog Silver badge

Maintainability comes with correctness.

Most code is absolute crap; when you get the most basic things wrong, you can be sure plenty else is poor and wrong too.

Code that does exactly what it's supposed to do is correct, and fails like it's supposed to. Stuff that does unknown things when things go bad is not correct or acceptable. The fact is, most code barely works these days, and when it's given bad inputs, for example, the message it gives when it fails (if it gives one at all) is hardly correct. Throwing an NPE or some other random exception instead of a proper message is not correct.
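A sketch in Go of the failure behaviour being described - the parseAge helper is hypothetical, but it shows the point: validate the input up front and fail with a message that names the problem, rather than letting an NPE-style failure surface somewhere random.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseAge validates its input and fails with a message that says
// exactly what was wrong, instead of blowing up later with a
// random panic or exception.
func parseAge(s string) (int, error) {
	if s == "" {
		return 0, fmt.Errorf("age is required but was empty")
	}
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("age %q is not a number", s)
	}
	if n < 0 || n > 150 {
		return 0, fmt.Errorf("age %d is out of range 0-150", n)
	}
	return n, nil
}

func main() {
	for _, input := range []string{"42", "", "abc", "999"} {
		if age, err := parseAge(input); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", age)
		}
	}
}
```

Bad inputs get a proper message; nothing "does unknown things when things go bad".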

    2. David 132 Silver badge

      These days, many of us yell at clouds.

      On-prem is far superior. Fight me. :-P

      1. sev.monster Silver badge

I don't think there are many (from enterprise environments) who would disagree. Unless they're stuck in the MSP/SaaS trap because the upper echelon refuses to give them budget for anything else.

      2. 45RPM Silver badge

        Nope - I won't fight you. I'll fight with you on that.

        I think that clouds have their uses. Small startups will be able to afford infrastructure that would otherwise be outside their reach. But, once you reach a certain size, you should bring it inside. It's like not owning your own home or renting your electricals. At some point, you should be able to afford to buy your own.

        1. bigtimehustler

OK, it's been really bad for Netflix, but clearly you know better. What have you been managing?

          1. CowHorseFrog Silver badge

What makes you think managers know anything?

Most managers can't code or even set up a home router, and yet you think they can fly a plane and a rocket at the same time?

      3. Doctor Syntax Silver badge

        "On-prem is far superior."

        I've also yelled at on-prem on occasion.

    3. Anonymous Coward
      Anonymous Coward

      Fellow C programmer here, though my C is getting er.. "Rusty". Anyway while I do use AI, the point is never to accept the code it generates verbatim but to save some substantial amount of typing and review/modify the code before accepting it.

  4. Michael Hoffmann Silver badge
    Unhappy

So even smaller shops aren't safe from the AI/LLM enshittification - and worse, they think it's OK to shove it in without a proper means to get rid of it.

NB: many years of JetBrains subscription, and I did knowingly try their trial. It was slow as, so I turned it off without bothering to try and remove it. The fact that I couldn't have removed it even if I'd wanted to is shocking.

  5. elDog

    As many comments in the Jetbrains discussion said - this smells like a marketing ploy

    Of course the leakage of very critical development data into the www is horrible. But those fine MBAs who have taken a CompSci course and think they understand enough about software and security and client relations - they get to tell the C-suite to jump onto the AI bandwagon.

    I'm a frequent user (past 8 years) of several of Jetbrains' products. I don't really want any more clippy-style prompts jumping onto what I'm typing. I know how to make my own mistakes, thank you - and learn by correcting them.

    1. Someone Else Silver badge

      Re: As many comments in the Jetbrains discussion said - this smells like a marketing ploy

      But those fine MBAs who have taken a CompSci course and think they understand enough about software and security and client relations [...]

      Wait...What? A B-schooler taking a CS course?!? They'd get laughed out of the course by all the real nerds faster than they could trip over their own shoelaces. And B-school refugees do not like getting laughed at (plenty of examples IRL nowadays).

      1. Doctor Syntax Silver badge

        Re: As many comments in the Jetbrains discussion said - this smells like a marketing ploy

        It's the old story - what they'd like and what they need are two different things.

  6. claimed Silver badge

    It’s not just C-Suite

That is utterly removed from reality these days. Now you have a whole generation of developers, raised and trained on Google et al.'s "click to accept" pattern, who can't even see what's wrong with this.

    1. matjaggard

      Re: It’s not just C-Suite

I don't really agree with this sentiment in this case, but I do think that the EU's cookie policy has meant that, even more than before, people will click on these things without thinking. And clicking decline without thinking is likely equally bad; we have to stop asking stupid questions all the time.

      1. Dan 55 Silver badge

        Re: It’s not just C-Suite

        All EU cookie banners did was make tracking more visible. Some people find clicking no (or yes) annoying, others find the tracking annoying. Personally I'm in the latter camp. There's no need for them if the website just uses first-party session cookies, but it seems sharing or selling data is too difficult for most websites to give up.

        Surely the blame has to be placed at the door of the shrinkwrapped EULA which apparently allows any and all consumer rights to be overridden in the US and has a long ignoble tradition stretching back decades. Your only recourse is to not use the software.

        Fast forward to today and now you find your software updates, your EULA is changed, and even if you were to click no your codebase is probably already uploaded and is now training data for an LLM under the expectation that you would have clicked yes anyway.

        JetBrains of all companies should know about developer resistance to this bullshit, but it seems they're all drinking the same kool-aid. It's not the questions which are the problem, it's the entire industry collectively jumping on the same fucking bandwagon.

        1. TheMeerkat Silver badge

          Re: It’s not just C-Suite

          EU cookie rule is dangerous for security as it teaches the masses to click “Yes” without reading.

          It is a stupid rule that achieved nothing other than annoy people.

          1. Richard 12 Silver badge

            Re: It’s not just C-Suite

            The intent and purpose was to make tracking visible and easy to decline by requiring explicit permission to follow you around with a clipboard.

            Many advertising sites then decided to deliberately find the very most annoying way to pretend to follow it. Probably in the hope of getting it repealed.

            Most of these cookie popups are actually unlawful, but the law has not been enforced - you can probably guess why.

            1. Mike 137 Silver badge

              Re: It’s not just C-Suite

              "The intent and purpose was to make tracking visible and easy to decline by requiring explicit permission to follow you around with a clipboard"

The intent was actually no more than to make tracking visible and easy to decline by requiring explicit permission. The legislation is silent on how this was to be implemented, so, as Richard 12 rightly says, site owners used the most annoying way they could think up in the hope we'd lose patience and click through without due attention. Even El Reg fell for this -- the cookie banner (which only goes away if scripting is enabled, otherwise permanently blocking the bottom of the content screen) is at least twice the size it needs to be to convey the message and include the interactive controls. But it's probably one of the least intrusive around -- many sites overlay the entire content with obstructions until the enforced "decision" has been made.

Having spent a lot of time on the GDPR from when it was first conceived, I'm convinced that the framers envisaged a default of no tracking, with options to allow it if acceptable and clearly explained consequences. What we got was exactly the reverse, plus an exclusive concentration on 'cookies', with consent to scripted trackers assumed without the option to refuse.

              1. Doctor Syntax Silver badge

                Re: It’s not just C-Suite

                "I'm convinced that the framers envisaged a default of no tracking with options to allow it if acceptable, with clearly explained consequences."

This should have been how it was framed. No doubt there was a lot of lobbying to make it as it is now. It's time the EU held a review with the specific objective of deciding whether to make no-consent the default.

            2. Crypto Monad Silver badge

              Re: It’s not just C-Suite

              > Many advertising sites then decided to deliberately find the very most annoying way to pretend to follow it. Probably in the hope of getting it repealed.

              Especially sites which require 10 to 20 "Legitimate Interest" areas of data processing to be unchecked individually, in order to opt out of tracking. Or worse, sites that list 200 or more advertising partners, and having to uncheck every one individually.

              I always uncheck the "legitimate interest" boxes (except when it's 200+ individual advertising partners), although I don't really believe that unchecking them will actually make any difference to the processing that goes on behind the scenes. We have no way of knowing whether they honour those selections or not.

              1. Doctor Syntax Silver badge

                Re: It’s not just C-Suite

                "Especially sites which require 10 to 20 "Legitimate Interest" areas of data processing to be unchecked individually, in order to opt out of tracking. Or worse, sites that list 200 or more advertising partners, and having to uncheck every one individually."

                Looking at it another way, it's good of these sites to make it clear that you will want to leave now and never go near them again.

          2. Flocke Kroes Silver badge

            Re: click without reading

            If 'reject all' is not easy to find I try a different web site. That and 'clear cookies on exit' combined with frequent exits.

          3. CowHorseFrog Silver badge

            Re: It’s not just C-Suite

It's unfortunate the EU and UN haven't made third-party tracking technologies like cookies a violation of human rights and a criminal offence.

  7. Anonymous Coward
    Anonymous Coward

    I'm a Pycharm user. I've not signed up or logged into any AI service as part of using Pycharm so I'm guessing I'm not using their AI. The standard code autocomplete/autosuggest is pretty darn good; Not perfect, but it can save time.

    1. druck Silver badge

      The moment that changes, and I see any AI, I'm back to vim.

  8. GoneFission

    >Any data sent to the AI service is not used for training.

It's just stored eternally, with no option to delete it, until the company eventually finds a profitable non-AI-training avenue to utilize it.

    1. sarusa Silver badge
      Devil

      This! So they /are/ sending your stuff to Teh Claude. Where I'm sure it will be perrrrfectly safe and they would never misuse it.

      1. sev.monster Silver badge

        Don't bring Claude into this, he's just a poor delivery boy trying to make money for his mother's medicine!

        1. tiggity Silver badge

          I like my Claude funky

      2. Someone Else Silver badge
        Headmaster

        So they /are/ sending your stuff to Teh Claude.

        Along with all the admonitions by other commentards to leave Claude out of this, I suspect you may simply have misspelled "Clowd".

        You're welcome.

  9. Doctor Syntax Silver badge

    "Any data sent to the AI service is not used for training."

    So they're saying data is sent to the AI service.

  10. The Central Scrutinizer

    Is AI writing The Register articles now?

    Checking grammar before publishing would be handy.

    1. stiine Silver badge

      They haven't done that very well since whatshername left.

    2. Paul Smith

      Bad grammer is usually an indication that something was written by an actual human.

      1. Someone Else Silver badge

        ...or someone phishing...

      2. Doctor Syntax Silver badge

        Bad grammer is married to bad granddad.

  11. sarusa Silver badge
    Devil

    Bad code

    I haven't used this JetBrains abomination, but my big complaint with trying Github copilot was how often the code was wrong. Sometimes subtly, sometimes not. Because the 'AI' has no idea what you are actually trying to accomplish and what all the constraints on things are. But I guess that doesn't matter when you're just writing Microsoft code, because nobody expects that crap to work.

    1. CowHorseFrog Silver badge

      Re: Bad code

Of course it's wrong. How can any intelligence, even an AI, possibly know your intent just by seeing one or two lines of your code?

How can it possibly know your end goals?

A real human just off the street wouldn't know, so how can AI know without any background and context on the problem you are trying to fix or solve?

      1. breakfast

        Re: Bad code

        This is the message that, unfortunately, CEOs and CTOs are not going to get at all, resulting in a lot of us losing jobs and a lot of catastrophically bug-ridden AI-generated code that they imagine will solve their problems. Somehow it will turn out not to be their fault (and indeed, it will transpire that they have earned significant bonuses) when this turns out to be catastrophically incorrect.

    2. deltics2

      Re: Bad expectations (was: Bad code)

Yep, expecting CoPilot to write your code is doomed to fail. BUT CoPilot is an absolute God-send for writing tests and documentation. Its ability to generate a description of a function, including example usage, is perfectly usable. It's not always perfect, and it sometimes gets stuck in a rut where it trots out the same text it generated for a previous, similar function, but if you correct its mistake it is less likely to make the same mistake again. It can also work wonders generating README.md content; in one instance that blew me away, it suggested a 90% complete and 100% accurate "Feature List" for an open-source module (GoLang) I was working on!

      Similarly for writing tests, I have shown my colleagues the seemingly magical way that CoPilot can generate test cases for me. After using a vs code snippet to stamp out a boiler-plate data-driven test, declaring a struct to be used for each test case (which includes a "name" field which describes the test case) and writing the first test, for subsequent test cases I typically only need to provide the test case name and CoPilot will generate the rest of the test case for me. Often it will then start suggesting entire tests cases, complete with name.
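For anyone who hasn't seen the pattern being described, this is the standard Go table-driven shape: a slice of structs with a name field, one entry per case. A minimal sketch (clamp and the cases are made up; in a real _test.go file the loop body would call t.Run(tc.name, ...) instead of printing):

```go
package main

import "fmt"

// clamp limits v to the range [lo, hi]; the function under test.
func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	// One struct per test case, with a "name" field describing it.
	// This is the boilerplate a snippet stamps out; an assistant then
	// tends to fill in whole cases from the name alone.
	testCases := []struct {
		name      string
		v, lo, hi int
		want      int
	}{
		{name: "inside range is unchanged", v: 5, lo: 0, hi: 10, want: 5},
		{name: "below range clamps to lo", v: -3, lo: 0, hi: 10, want: 0},
		{name: "above range clamps to hi", v: 42, lo: 0, hi: 10, want: 10},
	}

	for _, tc := range testCases {
		got := clamp(tc.v, tc.lo, tc.hi)
		fmt.Printf("%-26s got=%d want=%d pass=%v\n", tc.name, got, tc.want, got == tc.want)
	}
}
```

The regular structure is precisely why an assistant can complete new cases from just the name.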

The idea that CoPilot and similar AIs will put good engineers out of a job is idiotic and serves only one purpose: to provide clickbait headlines. What it CAN do is become that junior assistant programmer all good engineers sometimes wish they had, to hand the boring jobs to.

      I for one am happy to spend less time writing tests and documentation when my AI assistant can do that for me, leaving me with much more time to spend on the fun stuff that AI sucks at: actually solving problems.

(I haven't tried the JetBrains AI, so I have no idea if it is any good for these use cases. I should also add that I use GoLang predominantly, and there may be aspects of that language, being incredibly simple and highly idiomatic, that make it particularly well suited for AI assistance. With other languages and other AIs, your mileage may vary.)

      1. sev.monster Silver badge

        Re: Bad expectations (was: Bad code)

This is the reason why I am not at all worried about our "AI" overlords. Neural networks, especially as they're trained on complex human stuff like language, are ultimately simplistic (i.e. reactive) and often wrong. But their uses as supplements and for targeted automation can't be ignored.

It's no different from, say, assembly line robots, which were also claimed to be out to take our jobs. Sure, many industries now use them, but in how many of those cases will you still see line workers transferring product to other belts, checking product quality, aligning and setting product, and what have you? And there are still people needed to develop, test, and maintain those devices. Yes, we might have lost simple "put this dangly bit onto this other dangly bit" jobs, but the types of people hired for those jobs (cf. junior assistant programmer) are entry-level blue collar, and they eventually found plenty of other jobs in the same vein, after much grumbling. And it's not like all industries have the money or technical need for such high-dollar-value machines; plenty of people still hire hands when necessary. It's just not as widespread as it once was.

We will still have juniors and interns; LLMs will not replace them. But now those who don't have one can get a similar edge anyway. You'll have to check the output whether it comes from human or metal; letting the intern push production code is not something any self-respecting overseer would allow. (Though the number of people using Copilot and other generative tools to write their code without ever checking it - possibly because they don't even understand it - is particularly hilarious/disastrous/terrifying.)

      2. TheMeerkat Silver badge

        Re: Bad expectations (was: Bad code)

        If your test is written based on the code, you just test that your bugs were not fixed by the next developer.

        You should use writing tests as an opportunity to actually test your code, not to generate “code coverage” that your management insist on.

        1. Richard 12 Silver badge

          Re: Bad expectations (was: Bad code)

          Yes and no.

          In an existing code base, most of the time you want the unit tests to prove you haven't changed anything unexpectedly.

The real problem with autogenerated tests is that they're often fragile, failing for spurious reasons, or don't actually test anything at all.

          1. Will Godfrey Silver badge
            Meh

            Re: Bad expectations (was: Bad code)

            I have a somewhat jaundiced view of unit tests. They are only of any use if you are testing the right things - and discovering what the right things are, is the hard part.

            This is particularly relevant with sound generation software i.e. soft-synths. One that I was using some years back fell foul of this. Extensive unit tests were set up prior to a major upgrade. After the upgrade the sound of just one of the patches I'd designed changed noticeably. They never did discover what had actually changed, and I had to rework the patch to fit.

Another issue the tests didn't catch they did fix, but with the comment "We didn't think anyone would do that." If it can be done, sooner or later it will be done!

      3. Doctor Syntax Silver badge

        Re: Bad expectations (was: Bad code)

        "CoPilot is an absolute God-send for writing tests and documentation"

        And while it's doing that what is it doing with the code you're asking it to document?

      4. druck Silver badge

        Re: Bad expectations (was: Bad code)

If you admit it writes crap code, are you sure the tests it writes aren't crap too? Or is it just a tick-box exercise to satiate management?

        1. deltics2

          Re: Bad expectations (was: Bad code)

          I *am* management, when it comes to my own code. But if you mean is it just to hit an arbitrary coverage target... NO.

          I set myself a specific and reasoned coverage target of ... 100% (after allowance for one exception). Is that unreasonable? Is it meaningless? Neither.

          I am fortunate to be working in a green-field context; this isn't about retrofitting coverage onto a legacy code base where considerations are different. And you would, of course, be right to point out that 100% coverage tells me nothing about the quality of my tests. What it DOES tell me however, undeniably, is that 100% of my code can be REACHED by tests. So, if I find a problem in my code or my tests, the solution will be test-ABLE.

          Before using GoLang I too subscribed to the "arbitrary test coverage targets are meaningless" school of thought, largely as a pragmatic allowance for the fact that writing testable code in some languages and particularly in legacy codebases resulted in a disproportionate diversion of effort into refactoring/creating difficult to understand code in order to make it testable. With GoLang, writing testable code is trivial, to the point that the argument against coverage targets itself becomes arbitrary.

          i.e. If you CAN (reasonably and practically) achieve 100% coverage, why would you choose NOT to? As long as you don't lose sight of what the metric measures: i.e. testABILITY not correctness.

          Correctness of code is determined by the quality of your tests. Coverage is a metric only of the ability to provide tests that could/should deliver that correctness.

          Now, as I mentioned, my 100% coverage target is not always actually 100%. For modules (re-usable packages in GoLang), 100% does in fact mean 100%.

But for microservices, 100% means 100% of code outside of the service bootstrap function (main()): a well-defined, tightly scoped exclusion.

          With the patterns I have adopted for microservices in GoLang, the bootstrap is reduced to triviality; it falls firmly in the category of "not worth the effort" (of trying to figure out a way to make it testable, because it is the hardest part to make meaningfully testable).

The same effect could be achieved by using a coverage target of some percentage less than 100, except that it is impossible to ascribe a fixed percentage to account for main.go in any and all microservices, as the proportion of the total code it represents varies. Whatever number you choose leaves the possibility that some small percentage of non-bootstrap code might also be untested (and therefore potentially untestable).

Better, easier and more accurate (and less arbitrary) to exclude it explicitly and cover 100% of what is left.
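One common way to make that bootstrap exclusion trivial - a sketch, with a hypothetical run() - is to have main() do nothing but call a function that takes its inputs as parameters and returns its output, so everything below main is reachable from tests:

```go
package main

import (
	"fmt"
	"os"
)

// run holds all the testable logic: it takes its inputs as parameters
// and returns its output, rather than touching os.Args or os.Stdout.
func run(args []string) (string, error) {
	if len(args) < 1 {
		return "", fmt.Errorf("usage: greet NAME")
	}
	return fmt.Sprintf("hello, %s\n", args[0]), nil
}

// main is the untestable bootstrap, reduced to triviality and
// excluded from the coverage target.
func main() {
	msg, err := run(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(msg)
}
```

With this shape, `go test -cover` can reach everything except the few lines of main itself.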

          However, I should also say that in my day job I am also a "manager" (strictly speaking a Chapter Lead, which is not the same thing, though I do have people leadership responsibilities).

          Again I am fortunate to work in a similar context there - green-field GoLang microservices. The two squads I have touch points with have come to the same conclusions (this influenced my philosophy in personal projects; the teams aren't just doing what their manager/management tells them... quite the reverse in fact).

  12. biddibiddibiddibiddi

I know how to "remove" it from the code, and I can't even program. You go back to the version of the code that didn't include it, add back all the OTHER changes that didn't involve the AI plug-in (bug fixes, security, unrelated features, etc.), and then debug that. Then you can create an actual plug-in that fits into the plug-in system like any other plug-in, for people who want to use AI features for whatever idiotic reason. Yes, it will take a good bit of work, but far less than trying to unbundle it, I bet. And you know what? That's the cost of doing stupid things without talking to your users because you think you need to force stuff on them that isn't to their benefit.

  13. davestewart

    I wouldn't worry right now

    I really wouldn't worry; the new AI is FAR from deeply integrated with your codebase; it can't tell me how many colours I've used, cannot refactor module entry points, or give me any meaningful information about my codebase as a whole. JetBrains AI is currently very, very far from being the all-seeing, all-knowing entity they advertise it as.

  14. This post has been deleted by its author

  15. Anonymous Coward
    Anonymous Coward

    An even bigger mistake than ...

... their previous monkeying around with their perpetual Licensing Terms on version updates during the term of your licence.

  16. Sceptic Tank Silver badge
    Big Brother

    Sh!t4Brains

    I drive a manual shift vehicle – I enjoy having that sort of control. I'm unlikely to use tools that come and plonk somebody else's prewritten rubbish code* into my project. Soon I'll be coding in Notepad++ (or worse) if they're going to force this AI rubbish on us.

    * Possibly their code is actually better than mine. Who knows?

    1. Peter Mount
      Facepalm

      Re: Sh!t4Brains

Going back to using Emacs - the first "IDE" of sorts I used, on an Amiga, when I started learning C - would be a pain, but doable.

That said, when I first started programming 40+ years ago there was no such thing as an IDE, just the command line - think BBC Basic ;-)

  17. Ben 56

    It seems deliberate. Pattern seen in other companies.

    I went to renew a subscription for my copy of Webstorm a fortnight ago, looked for any discount codes or promotions in the forum, came across those massive threads about privacy leak, accidentally consenting (like just clicking update on the IDE) and so forth.

    I'm specifically not renewing until the baked-in AI is removed by default, not just disabled, including the shitty annoying trial dialogs that pop up in the code (would you believe that bug was a thing and not intentional?!)

    Almost sounds like that shitty Microsoft Edge bug that sucks all the tabs from Chrome even when you tell it not to? https://www.theregister.com/2024/01/30/microsoft_edge_tabs/

    Anybody seeing a pattern here on what companies are doing?

    They put legally dubious code out for a few months and call it a bug which is then fixed a few months later after a marketing push usually, because of the risk of legal issues. The most famous of these is the Microsoft Windows 7/8 forced upgrades.

    Of course none of this can be proven, but it does look very convenient, and a benefit of having "bugs".

  18. fpx
    Devil

    Not Without Your Consent

    Ah, the old excuse. Of course your consent is buried on page 42 of a 100+ page document in legalese that you have to click through. The statement is just as nonsensical as "we take your privacy seriously."

    1. lukewarmdog
      Joke

      Re: Not Without Your Consent

      We take your privacy. Seriously! No, we do! We seriously take your privacy! Where? Oh there's a bin over there, we just pop it in and immediately stop worrying about it!

      Now how about you just sign here on page 100..

  19. Anonymous Coward
    Anonymous Coward

    Mostly moaning morons

    The plugin does sod all by default. You do need to pay them (or start a trial) to have it work at all.

    The plugin can be deactivated. Deactivated plugins don't even run.

    As far as I know you can't enforce plugins to be disabled upon installation, nor can you blacklist or whitelist specific plugins. Therefore, which plugins (first or third party) you can use comes down to business policy rather than technological enforcement.

    Which brings me to the next point. The claim that someone's business policy precludes anything with "AI"-capable something pre-installed even if it's not used and can be explicitly turned off sounds completely made up or overplayed. Such a policy would preclude any modern mobile OS, Windows 11, Edge, Teams, Zoom, GitHub, etc. That's dubious. Even worse, such a policy would allow you to use anything "AI"-related as long as it is a plugin that was not pre-installed, which is silly. Obviously, the policy is there to prevent sending information out. Which is great, because it doesn't by default, and it can be disabled completely. So, what's the fuss all about?

    To clarify, I am not saying that the JetBrains AI plugin is any good. It's more like having a very eager, hypercaffeinated intern with the IQ of a butchered pig blabbing about non-stop. Its only real use is making more money for JetBrains. I'd rather it didn't exist, but not for any of the unhinged reasons moaning morons bleat about.

    1. Doctor Syntax Silver badge

      Re: Mostly moaning morons

      "Deactivated plugins don't even run."

      You know that for sure?

    2. StrangerHereMyself Silver badge

      Re: Mostly moaning morons

      Let's wait and see how your employer is going to respond when that AI plugin you installed leaks the company's secrets. I'm pretty sure you'll be looking for another job.

      As long as the IP questions remain unanswered I think it's foolish to allow some AI thingie to generate code that becomes part of your codebase. Also, many vendors secretly train their models on YOUR code and attempt to disguise this by hiding this in their TOS.

    3. Cav Bronze badge

      Re: Mostly moaning morons

      "The claim that someone's business policy precludes anything with "AI"-capable something pre-installed even if it's not used and can be explicitly turned off sounds completely made up or overplayed. "

      You didn't read or comprehend the article.

      "I just want to make it clear that I cannot use this product at the company I work for because security will not allow for a by-default AI implementation to be a part of the product"

      It has nothing to do with "any modern mobile OS, Windows 11, Edge, Teams, Zoom, GitHub, etc." at all. Security would not allow the company's product to contain code generated, or potentially exfiltrated, by a pre-installed AI. I.e. security are protecting their product and intellectual property, specifically from an AI development tool.

  20. bigtimehustler

    If one corporate user has ever logged in to their own AI enabled account, it's game over anyway. There is no way to avoid this, it's all already happened.

  21. StrangerHereMyself Silver badge

    Not listening

    Listening after the fact isn't listening per se. If they'd had some apathy for their customer's concerns they would've asked them beforehand bundling this code-stealing plug-in.

    This is just another company that's trying to hop onto the AI bandwagon in the hope it will increase their stock price. These kinds of moves are disconcerting and make me doubt the motives of their management. They're probably more concerned with their $100 million bonus than customer satisfaction.

    1. Mimsey

      Re: Not listening

      "Listening after the fact isn't listening per se. If they'd had some apathy for their customer's concerns they would've asked them beforehand bundling this code-stealing plug-in."

      They actually do have apathy towards their customers' concerns - you may have been thinking of empathy, which they show no signs of having.

  22. PRR Silver badge
    Trollface

    Baggage

    > allow its customers to get rid of it. .........they come with baggage...

    Buy an airplane ticket. They will lose your baggage.

    aint got time to take a fast train
