JetBrains is completely missing the point
It's throwing developers into the position where they have no idea what is actually in the resulting code - never mind where it came from.
JetBrains introduced an AI assistant in December to help programmers write code. Now the biz is trying to figure out how to allow its customers to get rid of it. "JetBrains AI Assistant is similar to GitHub Copilot, but it’s deeply integrated into JetBrains development environments (IDEs), code editors, and other products," …
"the position where they have no idea what is actually in the resulting code
Bloated closed source libraries have been doing this for years, so this is only one step further down that road. The IP issues are perfectly valid though and a significant commercial hazard.
You have to actively enable, subscribe, and pay them a fortune for the f'n thing before it does anything.
These complaints are beyond deranged.
It's akin to being afraid you're going to find yourself locked on a cruise ship just because the shopping mall you go to added a travel agent...
"...accuracy vs precision"
I used to teach that lesson to workshop-based vocational students and lab technicians early in the term. Along with Precision's little friend, Resolution. And the Error cousins, Systematic and Random.
I had a small selection of 'modified' verniers and micrometers. And a slightly distressed top-pan balance. Early in the term because reading scales exposes any perceptual/eyesight/dyslexia issues fairly promptly. And they used the ideas in later practical work.
Went well that one. A lot of concepts in 3 hours.
Yeah. I’m clearly just old and grumpy. But I don’t like this and I’m going to stay in my retirement home by the C.
I know the arguments against using C, yadda yadda, but 30 years of C experience says my C is pretty efficient now - I’ll never get that good at Java, Rust or Swift. But you know what? I’m not convinced that AI C will get to be as good as I am either - even if an AI is trained to write C (probably has been already?)
Which, I suppose, is just an old dude yelling at clouds. And what I’m yelling is “stop the world, I want to get off.” This tech is a tech too far (well, for me anyway)
Define efficiency. Efficiency of execution (and then what? Efficiency of compute? Memory usage? Storage? Has its algorithmic efficiency been calculated?)? Efficiency of maintenance? When I say efficiency, I abuse the term horribly - including, but not limited to, does it follow the SOLID principles? And, by the way, I'm also fine with breaking those principles - but you need to understand why they exist and why you need to break them before you do. Breaking them because you didn't understand them is as cardinal a sin as a developer releasing code that they don't understand.
> In most cases, efficiency doesn't matter, what really matters is correctness.
Nope. A lot of inexperienced programmers misunderstand this.
What matters is maintainability. Correctness is worth nothing if the requirements change and the code is too difficult to understand and change.
Whereas maintainable code that is incorrect, can be easily corrected.
Maintainability comes with correctness.
Most code is absolute crap; when the most basic of things is wrong, you can be sure many other things are also poor and wrong.
Code that does exactly what it's supposed to do is correct, and fails like it's supposed to. Stuff that does unknown things when things go bad is not correct or acceptable. The fact is, most code barely works these days, and when it's given bad inputs, for example, the messages it produces when it fails are hardly correct. Throwing an NPE or some other random exception instead of a proper message is not correct.
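To make that concrete, here's a minimal Go sketch of the difference (the Config type and function names are mine, purely for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// Config is a stand-in type, invented for this example.
type Config struct {
	Endpoint string
}

// Incorrect: blindly dereferencing cfg panics with a nil pointer
// (Go's equivalent of an NPE) when given bad input.
func endpointUnsafe(cfg *Config) string {
	return cfg.Endpoint
}

// Correct: bad input produces a proper, descriptive error that the
// caller can report or act on.
func endpoint(cfg *Config) (string, error) {
	if cfg == nil {
		return "", errors.New("config must not be nil")
	}
	if cfg.Endpoint == "" {
		return "", errors.New("config is missing an endpoint")
	}
	return cfg.Endpoint, nil
}

func main() {
	if _, err := endpoint(nil); err != nil {
		fmt.Println("failed:", err) // failed: config must not be nil
	}
}
```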
Nope - I won't fight you. I'll fight with you on that.
I think that clouds have their uses. Small startups will be able to afford infrastructure that would otherwise be outside their reach. But, once you reach a certain size, you should bring it inside. It's like not owning your own home or renting your electricals. At some point, you should be able to afford to buy your own.
So, even smaller shops aren't safe from the AI/LLM enshittification - and worse, think it's OK to shove it in without proper means to get rid of it.
NB: many years of JetBrains subscription, and I did use their trial knowingly. It was slow as, so I turned it off, without bothering to try and remove it. The fact that I couldn't remove it if I wanted to is shocking.
Of course the leakage of very critical development data into the www is horrible. But those fine MBAs who have taken a CompSci course and think they understand enough about software and security and client relations - they get to tell the C-suite to jump onto the AI bandwagon.
I'm a frequent user (past 8 years) of several of JetBrains' products. I don't really want any more Clippy-style prompts jumping onto what I'm typing. I know how to make my own mistakes, thank you - and learn by correcting them.
But those fine MBAs who have taken a CompSci course and think they understand enough about software and security and client relations [...]
Wait...What? A B-schooler taking a CS course?!? They'd get laughed out of the course by all the real nerds faster than they could trip over their own shoelaces. And B-school refugees do not like getting laughed at (plenty of examples IRL nowadays).
I don't really agree with this sentiment in this case, but I do think that the EU's cookie policy has meant that, even more than before, people will click on these things without thinking. And clicking decline without thinking is likely equally bad; we have to stop asking stupid questions all the time.
All EU cookie banners did was make tracking more visible. Some people find clicking no (or yes) annoying, others find the tracking annoying. Personally I'm in the latter camp. There's no need for them if the website just uses first-party session cookies, but it seems sharing or selling data is too difficult for most websites to give up.
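For what it's worth, a first-party session cookie is nearly a one-liner in Go's standard library - this sketch (the handler and cookie names are my own invention) is all a login flow needs, with no third parties involved and nothing to consent to:

```go
package main

import "net/http"

func login(w http.ResponseWriter, r *http.Request) {
	// A first-party session cookie: no Expires or MaxAge means it
	// lives only for the browser session, and the browser only sends
	// it back to this site - no cross-site tracking involved.
	http.SetCookie(w, &http.Cookie{
		Name:     "session_id",
		Value:    "opaque-server-side-token", // session state stays server-side
		Path:     "/",
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteLaxMode,
	})
	w.Write([]byte("logged in"))
}

func main() {
	http.HandleFunc("/login", login)
	http.ListenAndServe(":8080", nil)
}
```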
Surely the blame has to be placed at the door of the shrink-wrapped EULA, which apparently allows any and all consumer rights to be overridden in the US and has a long, ignoble tradition stretching back decades. Your only recourse is to not use the software.
Fast forward to today and now you find your software updates, your EULA is changed, and even if you were to click no your codebase is probably already uploaded and is now training data for an LLM under the expectation that you would have clicked yes anyway.
JetBrains of all companies should know about developer resistance to this bullshit, but it seems they're all drinking the same kool-aid. It's not the questions which are the problem, it's the entire industry collectively jumping on the same fucking bandwagon.
The intent and purpose was to make tracking visible and easy to decline by requiring explicit permission to follow you around with a clipboard.
Many advertising sites then decided to deliberately find the very most annoying way to pretend to follow it. Probably in the hope of getting it repealed.
Most of these cookie popups are actually unlawful, but the law has not been enforced - you can probably guess why.
"The intent and purpose was to make tracking visible and easy to decline by requiring explicit permission to follow you around with a clipboard"
The intent was actually no more than to make tracking visible and easy to decline by requiring explicit permission. The legislation is silent on how it was to be implemented, so, as Richard12 rightly says, site owners used the most annoying way they could think up in the hope we'd lose patience and click through without due attention. Even El Reg fell for this -- the cookie banner (which only goes away if scripting is enabled, otherwise permanently blocking the bottom of the content screen) is at least twice the size it needs to be to convey the message and include the interactive controls. But this is probably one of the least intrusive around -- many sites overlay the entire content with obstructions until the enforced "decision" has been made.
Having spent a lot of time on the GDPR from when it was first conceived, I'm convinced that the framers envisaged a default of no tracking, with options to allow it if acceptable and with clearly explained consequences. What we got was exactly the reverse, plus an exclusive concentration on 'cookies', with consent to scripted trackers being assumed without the option.
"I'm convinced that the framers envisaged a default of no tracking with options to allow it if acceptable, with clearly explained consequences."
This should have been how it was framed. No doubt there was a lot of lobbying to make it as it is now. It's time the EU held a review with the specific objective of deciding whether to make no consent the default.
> Many advertising sites then decided to deliberately find the very most annoying way to pretend to follow it. Probably in the hope of getting it repealed.
Especially sites which require 10 to 20 "Legitimate Interest" areas of data processing to be unchecked individually, in order to opt out of tracking. Or worse, sites that list 200 or more advertising partners, and having to uncheck every one individually.
I always uncheck the "legitimate interest" boxes (except when it's 200+ individual advertising partners), although I don't really believe that unchecking them will actually make any difference to the processing that goes on behind the scenes. We have no way of knowing whether they honour those selections or not.
"Especially sites which require 10 to 20 "Legitimate Interest" areas of data processing to be unchecked individually, in order to opt out of tracking. Or worse, sites that list 200 or more advertising partners, and having to uncheck every one individually."
Looking at it another way, it's good of these sites to make it clear that you will want to leave now and never go near them again.
I haven't used this JetBrains abomination, but my big complaint with trying Github copilot was how often the code was wrong. Sometimes subtly, sometimes not. Because the 'AI' has no idea what you are actually trying to accomplish and what all the constraints on things are. But I guess that doesn't matter when you're just writing Microsoft code, because nobody expects that crap to work.
Of course it's wrong - how can any intelligence, even an AI, possibly know your intent just by seeing one or two lines of your code?
How can it possibly know your end goals?
A real human just off the street wouldn't know, so how can an AI know without any background and context of the problem you are trying to fix or solve?
This is the message that, unfortunately, CEOs and CTOs are not going to get at all, resulting in a lot of us losing jobs and a lot of catastrophically bug-ridden AI-generated code that they imagine will solve their problems. Somehow it will turn out not to be their fault (and indeed, it will transpire that they have earned significant bonuses) when this turns out to be catastrophically incorrect.
Yep, expecting CoPilot to write your code is doomed to fail. BUT, CoPilot is an absolute God-send for writing tests and documentation. Its ability to generate a description of a function including example usage is perfectly usable. Not always perfect, and it sometimes gets stuck in a rut where it trots out the same text for a function that it generated for a previous, similar function, but if you correct its mistake it is less likely to make the same mistake again. It can also work wonders in generating README.md content; in one instance that blew me away, it suggested a 90% complete and 100% accurate "Feature List" for an open-source module (GoLang) I was working on!
Similarly for writing tests, I have shown my colleagues the seemingly magical way that CoPilot can generate test cases for me. After using a VS Code snippet to stamp out a boilerplate data-driven test, declaring a struct to be used for each test case (which includes a "name" field describing the test case) and writing the first test, for subsequent test cases I typically only need to provide the test case name and CoPilot will generate the rest of the test case for me. Often it will then start suggesting entire test cases, complete with name.
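For anyone who hasn't seen the pattern, this is roughly the shape of the boilerplate involved (the Add function and its cases are invented for illustration) - once the first case exists, the assistant tends to fill in the rest from the name alone:

```go
package maths

import "testing"

// Add is the trivial function under test.
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	// Table-driven test: one struct per case, each with a name
	// field describing it.
	testcases := []struct {
		name string
		a, b int
		want int
	}{
		{name: "two positives", a: 1, b: 2, want: 3},
		{name: "negative and positive", a: -1, b: 2, want: 1},
		{name: "both zero", a: 0, b: 0, want: 0},
	}
	for _, tc := range testcases {
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
		})
	}
}
```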
The idea that CoPilot and similar AI will put good engineers out of a job is idiotic and serves only one purpose: to provide clickbait headlines. What it CAN do is become that junior assistant programmer all good engineers sometimes wish they had - someone to give the boring jobs to.
I for one am happy to spend less time writing tests and documentation when my AI assistant can do that for me, leaving me with much more time to spend on the fun stuff that AI sucks at: actually solving problems.
(I haven't tried the JetBrains AI so I have no idea if it is any good for these use cases. And I should also add that I use GoLang predominantly and there may be aspects of that language, being incredibly simple and highly idiomatic, that make it particularly well suited for AI to be helpful. With other languages and other AIs your mileage may vary).
This is the reason I am not at all worried about our "AI" overlords. Neural networks, especially as they are being trained on complex human stuff like language, are ultimately simplistic (i.e. reactive) and often wrong. But their uses as supplements and for targeted automation can't be ignored.
It's no different than say assembly line robots, which were also claimed to be out to take our jobs: Sure, many industries now use them, but in how many of those cases will you still see line workers, transferring product to other belts, checking product quality, aligning and setting product, and what have you? And there are still people needed to develop, test, and maintain those devices. Yes, we might have lost simple "put this dangly bit onto this other dangly bit" jobs, but the types of people hired for those jobs (c.f. junior assistant programmer) are entry level blue collar and eventually found plenty of other jobs in the same vein after much grumbling. And it's not like all industries have the money or technical need for such high dollar value machines, plenty of people still hire hands when necessary. It's just not as widespread as it once was.
We will still have juniors and interns; LLMs will not replace them. But now those who can't get one can have a similar edge even without one. You'll have to check the output whether it's human or metal anyway; letting the intern push production code is not something any self-respecting overseer would allow. (Though the number of people using Copilot and other generative tools to write their code without ever checking it (possibly because they don't even understand it) is particularly hilarious/disastrous/terrifying.)
Yes and no.
In an existing code base, most of the time you want the unit tests to prove you haven't changed anything unexpectedly.
The real problem with autogenerated tests is that they're often fragile, failing for spurious reasons, or don't actually test anything at all.
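Both failure modes are easy to sketch in Go (the Greeting function here is made up for illustration):

```go
package greet

import (
	"strings"
	"testing"
)

// Greeting is an invented function to test against.
func Greeting(name string) string {
	return "Hello, " + name + "! Welcome aboard."
}

// Fragile: pins the entire output string, so any harmless copy tweak
// makes it fail for spurious reasons.
func TestGreetingFragile(t *testing.T) {
	if Greeting("Ada") != "Hello, Ada! Welcome aboard." {
		t.Fatal("unexpected greeting")
	}
}

// Vacuous: executes the code but asserts nothing - it cannot fail,
// so it tests nothing at all (while still boosting coverage).
func TestGreetingVacuous(t *testing.T) {
	_ = Greeting("Ada")
}

// Useful: asserts only the behaviour that actually matters.
func TestGreetingContainsName(t *testing.T) {
	if got := Greeting("Ada"); !strings.Contains(got, "Ada") {
		t.Errorf("greeting %q does not mention the name", got)
	}
}
```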
I have a somewhat jaundiced view of unit tests. They are only of any use if you are testing the right things - and discovering what the right things are, is the hard part.
This is particularly relevant with sound generation software i.e. soft-synths. One that I was using some years back fell foul of this. Extensive unit tests were set up prior to a major upgrade. After the upgrade the sound of just one of the patches I'd designed changed noticeably. They never did discover what had actually changed, and I had to rework the patch to fit.
Another one that didn't show up, they did fix but with the comment "We didn't think anyone would do that." If it can be done, sooner or later it will be done!
I *am* management, when it comes to my own code. But if you mean is it just to hit an arbitrary coverage target... NO.
I set myself a specific and reasoned coverage target of ... 100% (after allowance for one exception). Is that unreasonable? Is it meaningless? Neither.
I am fortunate to be working in a green-field context; this isn't about retrofitting coverage onto a legacy code base where considerations are different. And you would, of course, be right to point out that 100% coverage tells me nothing about the quality of my tests. What it DOES tell me however, undeniably, is that 100% of my code can be REACHED by tests. So, if I find a problem in my code or my tests, the solution will be test-ABLE.
Before using GoLang I too subscribed to the "arbitrary test coverage targets are meaningless" school of thought, largely as a pragmatic allowance for the fact that writing testable code in some languages, and particularly in legacy codebases, resulted in a disproportionate diversion of effort into refactoring or creating difficult-to-understand code in order to make it testable. With GoLang, writing testable code is trivial, to the point that the argument against coverage targets itself becomes arbitrary.
i.e. If you CAN (reasonably and practically) achieve 100% coverage, why would you choose NOT to? As long as you don't lose sight of what the metric measures: i.e. testABILITY not correctness.
Correctness of code is determined by the quality of your tests. Coverage is a metric only of the ability to provide tests that could/should deliver that correctness.
Now, as I mentioned, my 100% coverage target is not always actually 100%. For modules (re-usable packages in GoLang), 100% does in fact mean 100%.
But for microservices, 100% means 100% of code outside of the service bootstrap function (main()) - a well-defined, tightly scoped exclusion.
With the patterns I have adopted for microservices in GoLang, the bootstrap is reduced to triviality; it falls firmly in the category of "not worth the effort" (of trying to figure out a way to make it testable, because it is the hardest part to make meaningfully testable).
The same effect could be achieved by using a coverage target of some percentage less than 100, except that it is impossible to ascribe a fixed percentage to account for main.go in any and all microservices, as the proportion of the total code that it represents can vary. Whatever number you choose leaves the possibility that some small percentage of non-bootstrap code might also be untested (and therefore potentially untestable).
Better, easier and more accurate (and less arbitrary) to exclude it explicitly and cover 100% of what is left.
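To give a flavour of what "trivial bootstrap" means in practice, here's a minimal sketch (the newMux name and /healthz route are my own, not a prescription): everything worth testing lives behind a function, and main() only reads config and hands off.

```go
package main

import (
	"log"
	"net/http"
	"os"
)

// newMux builds the service's routing - the part worth testing.
// Tests can exercise it directly with httptest; no server needed.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return mux
}

// main is the trivial bootstrap excluded from the 100% target:
// read config, wire up, listen. Nothing here is worth the effort
// of making testable.
func main() {
	addr := os.Getenv("ADDR")
	if addr == "" {
		addr = ":8080"
	}
	log.Fatal(http.ListenAndServe(addr, newMux()))
}
```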
However, I should also say that in my day job I am also a "manager" (strictly speaking a Chapter Lead, which is not the same thing, though I do have people leadership responsibilities).
Again I am fortunate to work in a similar context there - green-field GoLang microservices. The two squads I have touch points with have come to the same conclusions (this influenced my philosophy in personal projects; the teams aren't just doing what their manager/management tells them... quite the reverse in fact).
I know how to "remove" it from the code, and I can't even program. You go back to the version of the code that didn't include it, and then add in all the OTHER changes that didn't involve the AI plug-in (bug-fixes, security, unrelated features, etc.) and then debug that. Then you can create an actual plug-in that fits into the plug-in system like any other plug-in, for people that want to use AI features for whatever idiotic reason. Yes, it will take a good bit of work, but far less than trying to unbundle it I bet, and you know what, that's the cost of doing stupid things without talking to your users because you think you need to force stuff on them that isn't to their benefit.
I really wouldn't worry; the new AI is FAR from deeply integrated with your codebase; it can't tell me how many colours I've used, cannot refactor module entry points, or give me any meaningful information about my codebase as a whole. JetBrains AI is currently very, very far from being the all-seeing, all-knowing entity they advertise it as.
I drive a manual shift vehicle – I enjoy having that sort of control. I'm unlikely to use tools that come and plonk somebody else's prewritten rubbish code* into my project. Soon I'll be coding in Notepad++ (or worse) if they're going to force this AI rubbish on us.
* Possibly their code is actually better than mine. Who knows?
I went to renew a subscription for my copy of WebStorm a fortnight ago, looked for any discount codes or promotions in the forum, and came across those massive threads about privacy leaks, accidental consent (like just clicking update on the IDE) and so forth.
I'm specifically not renewing until the baked-in AI is removed by default, not just disabled, including the shitty annoying trial dialogs that pop up in the code (would you believe that bug was a thing and not intentional?)!
Almost sounds like that shitty Microsoft Edge bug that sucks all the tabs from Chrome even when you tell it not to? https://www.theregister.com/2024/01/30/microsoft_edge_tabs/
Anybody seeing a pattern here on what companies are doing?
They put legally dubious code out for a few months and call it a bug which is then fixed a few months later after a marketing push usually, because of the risk of legal issues. The most famous of these is the Microsoft Windows 7/8 forced upgrades.
Of course none of this can be proven, but it does look very convenient - a benefit of having "bugs".
The plugin does sod all by default. You do need to pay them (or start a trial) to have it work at all.
The plugin can be deactivated. Deactivated plugins don't even run.
As far as I know you can't enforce plugins to be disabled upon installation, nor can you blacklist or whitelist specific plugins. Therefore, which plugins (first or third party) you can use comes down to business policy rather than technological enforcement.
Which brings me to the next point. The claim that someone's business policy precludes anything with "AI"-capable something pre-installed even if it's not used and can be explicitly turned off sounds completely made up or overplayed. Such a policy would preclude any modern mobile OS, Windows 11, Edge, Teams, Zoom, GitHub, etc. That's dubious. Even worse, such a policy would allow you to use anything "AI"-related as long as it is a plugin that was not pre-installed, which is silly. Obviously, the policy is there to prevent sending information out. Which is great, because it doesn't by default, and it can be disabled completely. So, what's the fuss all about?
To clarify, I am not saying that the JetBrains AI plugin is any good. It's more like having a very eager, hypercaffeinated intern with the IQ of a butchered pig blabbing about non-stop. Its only real use is making more money for JetBrains. I'd rather it didn't exist, but not for any of the unhinged reasons moaning morons bleat about.
Let's wait and see how your employer is going to respond when that AI plugin you installed leaks the company's secrets. I'm pretty sure you'll be looking for another job.
As long as the IP questions remain unanswered I think it's foolish to allow some AI thingie to generate code that becomes part of your codebase. Also, many vendors secretly train their models on YOUR code and attempt to disguise this by hiding this in their TOS.
"The claim that someone's business policy precludes anything with "AI"-capable something pre-installed even if it's not used and can be explicitly turned off sounds completely made up or overplayed. "
You didn't read or comprehend the article.
"I just want to make it clear that I cannot use this product at the company I work for because security will not allow for a by-default AI implementation to be a part of the product"
It has nothing to do with "any modern mobile OS, Windows 11, Edge, Teams, Zoom, GitHub, etc." at all. Security would not allow a product of the company making said product to contain code generated, or potentially exfiltrated, by a pre-installed AI. I.e. security are protecting their product and intellectual property, specifically from an AI development tool.
Listening after the fact isn't listening per se. If they'd had some apathy for their customers' concerns they would've asked them before bundling this code-stealing plug-in.
This is just another company that's trying to hop onto the AI bandwagon in the hope it will increase their stock price. These kinds of moves are disconcerting and make me doubt the motives of their management. They're probably more concerned with their $100 million bonus than customer satisfaction.
Listening after the fact isn't listening per se. If they'd had some apathy for their customers' concerns they would've asked them before bundling this code-stealing plug-in.
They actually do have apathy towards their customers' concerns - you may have been thinking of empathy, which they show no signs of having.
> allow its customers to get rid of it. ... they come with baggage ...
Buy an airplane ticket. They will lose your baggage.