It seems like a good idea
but too many people have a "that would never happen to me" mindset
Visual Studio Code program manager Chris Dias has defended an intrusive new "Workspace Trust" dialog, saying it is to "raise awareness that there are many attack opportunities when you download code from the internet." The feature, introduced last month in version 1.57, was initially described as "extra security against code …
The fundamental problem, of course, is that random code should simply not be capable of causing havoc - though it's been a "feature" of the traditional "security" model for so long that hardly anyone even acknowledges it isn't inevitable.
And that problem is so entrenched that "trust" is no real defence - in fact it's just another route to undermine such shoddy protection as there is.
A better way to approach this - especially in a development environment - would be to run all the code in a sandbox of some kind unless otherwise directed. It might help catch unintended wayward code as well as deliberately malicious code.
No, the fundamental problem is that the bloody editor wants to run arbitrary bits of code when you didn't ask it to.
Fix that entirely unnecessary misfeature by deleting it and you don't have to run rings around yourself trying to stop it in all the cases where that's a bad idea.
Well, indeed, but that mindset is a product of simple arrogance. Keep on asking whether that code they downloaded from the internet does what they think it does and, maybe, they will start to think. If they don't, at least you tried. That's the way I approach all these problems: I ask if you are really serious, and if you are, then that is your judgement. When I let go of my end of the ladder and you fall off the building, well, you said you could hold it. I've had too many people telling me they know what they are doing not to trust them; many of them were right.
They just want to train people into clicking boxes without reading, as fast as possible.
Next version will show "Do you relinquish your soul to Microsoft?" or something more sinister like "Do you give us irrevocable license to do whatever we want with anything you write?"
Thanks to GDPR and the Cookie Law I am well trained in clicking any popup window away; I don't even notice them.
We run a Python application at my workplace which we maintain internally. Whenever we are asked to 'put this validation in' or 'ensure the user knows this' it ends up being done by adding a box to click through. I don't think there's a meaningful action which can be performed at this point without 6 box-clicks afterwards. Nobody is interested in the end result being 'user awareness' when building this but are instead interested in 'closed out jira'. The end result is always the same... users learn either how to click fast or which combination of tab + space will get them to the end. Nobody knows what the boxes say, not even those who made them.
Is this what you get when you trust VS Code, or is there more? Is there any guarantee that VS Code is not inserting bad code into your projects when you make a build? Are you sure? Have you checked the entire chain of software to make sure? From BIOS to OS, from compiler to editor and more?
If not, you should read Ken Thompson's Reflections on Trusting Trust. All these dialogs are treating symptoms. The problem is elsewhere, and the complexity does not help. Maybe we should start by reducing the self-inflicted complexity and move forward from there.
"The trust onus is on the toolchain."
Trust starts there? VS Code is an executable which, BTW, governs every toolchain.
Also, I like the humor behind the article calling VS Code "lightweight", which raises the question of just WTF VS Code is really doing behind the scenes with all that bloat. Seriously, the bigger the field, the more places to hide eggs.
… but for years it's supported a quarantine flag that gets attached to downloaded stuff, and to files generated from that stuff (unzipped archives, mounted disk images etc.). If you try to do something that could allow a quarantined item to do damage, you're asked if you want to allow the action. If you do, the flag is cleared, and you're not asked again for that item. It can be a pita, but I find it a lot less rebarbative than Clippy.
"Redmond, start your photocopiers."
Actually, Windows has been doing this just as long as Mac OS if not longer.
Files are tagged with an alternate data stream that identifies whether the file came from the local system, trusted network, internet, etc. When you try to open an executable that is flagged as coming from the internet, you're warned that the file is blocked and you have to unblock it or change security settings to allow it to run.
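To make that concrete: on NTFS the "mark of the web" lives in an alternate data stream named `Zone.Identifier`, whose body is a small INI fragment. Here's a minimal Python sketch of parsing it; the sample stream body and the file path in the comment are made up for illustration, but the stream name and zone numbering follow the Windows convention:

```python
import configparser

# Zone ids used by Windows: 0 = local machine, 1 = local intranet,
# 2 = trusted sites, 3 = internet, 4 = restricted sites.

def parse_zone_id(stream_text):
    """Extract ZoneId from the INI-style body of a Zone.Identifier stream."""
    parser = configparser.ConfigParser()
    parser.read_string(stream_text)
    try:
        return parser.getint("ZoneTransfer", "ZoneId")
    except (configparser.Error, ValueError):
        return None

# On Windows the stream is opened by appending ":Zone.Identifier" to the
# file name, e.g. open(r"C:\Downloads\tool.exe:Zone.Identifier").
# A typical stream body written by a browser looks like this:
sample = "[ZoneTransfer]\nZoneId=3\n"
print(parse_zone_id(sample))  # 3 -> the file "came from the internet"
```

Deleting that stream (or clicking "Unblock" in file properties) is exactly what the Windows warning dialog does for you.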
The problem with restricted mode in Excel is that it is essentially useless. You can't even apply an existing filter or expand a field in a pivot table. So even performing the bare minimum of manipulation with a sheet you've been sent is impossible. Unfortunately, what this means is that most people I've seen just hit the "open in normal mode" routinely for all sheets they open without even thinking.
It's a bit like the Vista "UAC" issue again. I understand why MS implemented it, but it was so intrusive and popped up so frequently that people either turned UAC off altogether, or just got into the habit of hitting "allow" without even thinking. Either way, the security and safety gains were nullified.
That's the problem here. Make security too intrusive and people will get used to bypassing it by default. Either way, all you've achieved is to make your product seem more annoying. What MS need to do is to find the right balance so that people actually stop and think when they get such a notification.
Of course it does! This is the same company that happily extended email to be executable code instead of a simple messaging platform so spammers could pwn your computer, added "features" to the web browser so anonymous adverts on "trusted" websites could pwn your computer, and made it much easier to run everything as administrator (root) rather than encouraging least-privilege software to limit the damage their new standards caused.
The real question is, "Do you trust the authors of Visual Studio?"
1) ESLint is an open source project that has nothing to do with Microsoft. People commonly call it from their node build scripts, which can be kicked off from inside vscode, so if you want to know why a static analyzer runs code, talk to them (I'll disappoint you by adding that there's probably a good reason)
2) Jupyter Notebooks also has nothing to do with MS, but with the right plugin to integrate them you can run arbitrary code from inside the notebook (by design)
3) I'm pretty certain that before you add plugins vscode can't execute any code at all; it's a text editor that becomes an IDE as you customize it
4) vscode and pretty much all of its plugins are open source so there's no issue of trust, feel free to audit it yourself
(edit love that the MS hater brigade is already out to downvote a post that consists of four easily verifiable facts lol)
...in your code with a result checksum for each library file being called (as well as checksumming its own footprint). If the checksums match, then you've made some effort to verify your external sources. However, if your external sources in turn call libraries outside of their domain (a risk being discussed a lot these days), then you can't guarantee trust. Many coders are oblivious to the fact that this could happen.
If all libraries had checksum checking built into them in this way, then you could have some protection against a root-of-hierarchy change. If this were a coding "standard" whereby you called a library only if it met the structural checksum standard, and each library it called committed to doing the same, then security problems would start to decrease.
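A minimal sketch of the idea in Python, using hashlib to pin a vetted file digest and refuse to proceed on a mismatch; the path and digest in the usage comment are hypothetical:

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pinned(path, expected_hex):
    """Refuse to proceed if the file on disk no longer matches the digest
    recorded when the dependency was originally vetted."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise RuntimeError("checksum mismatch for %s: got %s" % (path, actual))
    return True

# Usage sketch -- path and digest below are hypothetical placeholders:
# verify_pinned("vendored/leftpad.py",
#               "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
```

This catches accidental or malicious modification of a file you already vetted, but, as the replies below note, it can't tell you whether the vetted version was trustworthy in the first place.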
The type of trust you're talking about here is different, where you want to check that a library you're calling into isn't altered at runtime. The type of trust the article is talking about is during development where a dev has obtained code and wants to open it locally, since there are many ways for code to execute out of your IDE you want to distinguish between code you trust and code you're looking at but don't trust to execute on your dev machine.
If you're curious though, checksums aren't robust enough to be the basis of your library trust system. The usually proposed and sometimes implemented solution to the problem you're discussing is to sign code cryptographically where you use a private key and the code/executable itself to make a digest which is appended to the package, then people can verify that the code is unchanged by using your public key and their copy of the package at any time. Googlable keyword is code signing.
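To get a feel for the sign-then-verify round trip, here's a toy Python sketch. One big caveat: real code signing uses asymmetric key pairs (e.g. Ed25519 or RSA) so that verifiers only ever hold the public key; Python's stdlib has no public-key primitives, so this sketch substitutes HMAC as a symmetric stand-in. It demonstrates the digest/verify flow, not the public/private split:

```python
import hmac
import hashlib

def sign_package(package_bytes, key):
    """Append a 32-byte digest over the package contents ("signing")."""
    tag = hmac.new(key, package_bytes, hashlib.sha256).digest()
    return package_bytes + tag

def verify_package(signed_bytes, key):
    """Split off the trailing tag, recompute it, and compare in constant time."""
    body, tag = signed_bytes[:-32], signed_bytes[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b"demo-shared-secret"   # a real scheme would use a key pair, not a shared secret
pkg = sign_package(b"library code...", key)
print(verify_package(pkg, key))                          # True
tampered = bytes([pkg[0] ^ 0xFF]) + pkg[1:]              # flip one bit of the body
print(verify_package(tampered, key))                     # False
```

With real asymmetric signing the verifier never holds anything that could forge a signature, which is the whole point for distributing libraries.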
This isn't useful in this or most cases. The problem this is intended to solve is the use of untrusted libraries by a developer who doesn't audit the library for security or is dealing with a library of such complexity that it's infeasible to do so. If the dev inserts the checksum of an insecure library, your system won't catch it.
If you're really afraid that something will modify a library which your main application imports, you're going to have to do more. If all you've done is insert checksums into your main module, the person substituting libraries can just edit those checksums so they match again. You can do this better, namely by signing your code and not running code from unsigned random files.
No, it probably wouldn't. Some filesystems keep track of that, but not all do. What if something other than a browser downloads the file? What if it was in an archive that was downloaded? What if it was copied from a drive using something that doesn't have such a metadata field? What if it's currently on something without such a field? If you're doing this at all, you can't treat a file not known to have been downloaded recently as safe.
So finding something called readme.txt is a better test? Anyone up to no good will just look through the source and avoid the filenames which pop up this dialog.
A poster above had the same idea and expressed it better than I did. It's slowly getting more comprehensive on Windows.
I'm going to be a horrible person and say that this sounds reasonable. The motives make sense, the implementation is mildly annoying but it's basically SUPPOSED to be, or it wouldn't work (as they demonstrated). I will complain about many Microsoft-related things, but this is not one of them.
OK, thanks for sharing that, but it's a tired refrain, like moaning that the gas lamps in the streets aren't bright enough.
MS are a big nasty corp; they're not going to change, and one person moaning about it won't change that. However, there are plenty of alternatives out there that don't involve MS; you can avoid them if you try hard enough. So stop wasting time moaning and start a movement to help others who feel the same. Maybe invent your own O/S, it's been done before and I believe it's used quite a bit now. I think there are even some orgs out there that promote openness in software and software development, now what's it called? Hmm....
I always try to make the code maintainable. After all, *I* might be the one who has to maintain it [years from now when I, too, ask similar questions about the author's intelligence and ability to code - whoops, it was me!]
Seriously, though, if I can't maintain my own code years later, something's wrong with the author.
as for a popup dialog asking me to trust the author of the code, pluma and a bash or csh shell are looking MUCH BETTER these days in lieu of any kind of "helpful" IDE...
These days I think it's generally easier to write a paragraph at the top describing in detail what the code does. It saves a lot of time looking at the code, because you can probably then just write a new function to replace it rather than maintaining your own crappy old code! It also means you can skip the commenting throughout: during the month you originally write it you'll know everything about it, and 10 years later you won't care, because the blurb at the top should be sufficient to start from scratch and produce something better in less time.
Sooner or later you run out of people to teach you and have to figure it out yourself. Being taught is nothing more than a leg up. And people who are taught don't always understand why it should be done like that. If you've tried it, you know, and know when you can cheat. And I certainly don't think people who are taught produce better code. These days they often produce worse code because they haven't grown up with the machines and don't understand that a CPU is actually going to have to execute what's been written.
I can sympathise with finding old code brilliant and atrocious, or an amalgam of both; the impossible made possible before your eyes. Let's face it: for most of us, the code is a prototype that should be thrown away and rewritten (and then thrown away and rewritten again because you succumbed to second-system syndrome). But that's just not possible. And the rest of the time the code is a quick bodge on a prototype that should have been discarded.
Within those limits, it's often pretty good. The bad habits acquired on 8 bit micros are rarely visible. It's well factorised. It's almost like looking at a codebase written by adults. And there are inspired flashes I think I'd struggle to match. No wait, I've just had an idea...
have saved my bacon more than once.
Sadly, in many places, these are now frowned upon. On the last Agile project I worked on, I was hauled over the coals for commenting on almost every line of my code. I was told that there was no time in this or any sprint for commenting.
I walked as soon as I could. The project foundered badly with unmaintainable code that didn't work.
A total waste of money.
Well intentioned, but ultimately it will, as others have said, lead to "button blindness": people will see it for the 27th time and just click "OK". I use VSCode and it's come up 4-5 times when I opened JSON files off a network share; after the 3rd or 4th time I simply clicked "OK", with a nagging sensation that I should really be paying more attention but I'm too busy.
Nice idea, it will work the first couple of times but then it'll simply be annoying and people will find a way to disable it.
It would be OK if they didn't raise an obviously false alarm so often. Windows requires me to click to confirm it's OK every time I copy a jpg from my local NAS to my PC (although I can use a browser to copy a jpg from the real internet without any warning). Etc., etc. It's a bit like car and burglar alarms: everyone knows it's highly unlikely to be an actual thief and takes no notice.
What a non-issue, really; an article about it? I expected an opinion supporting it. With multi-vector attacks, which have already hit IDEs in the past (see the XcodeGhost attack), I'd expect programmers to pay attention like hell to what they're running. This popup should not be ignored, people. Those saying "I'll automatically click OK" should be fired.
"raise awareness that there are many attack opportunities when you download code from the internet."
If MS work on the basis that you won't examine and verify code you download before using it, the gross bug-ridden state of their products is perfectly explained.
The real problem here is that this question is pointless. It has no useful answer.
If the user trusts the code, it doesn't mean it is trustworthy. (Even well-intentioned code might have bugs.) It just means the user doesn't want to be pestered by an algorithm that is (inevitably) too dumb to answer the question by itself.
If the user doesn't trust the code, they presumably still want to read it, so they will click on the annoying popup to make it go away.
Either way, the user has been annoyed and Microsoft have learned nothing that they can act on. (I *assume* that MS don't do dangerous things on random pieces of code just because the end-user happens to be reading it. That would be like ActiveX on steroids.) On the other hand, the end-user has learned that they are using an IDE created by people who think this a security feature. Oh dear.
VS Code "is capable of running code from the workspace on your behalf to provide a richer development experience,"
Compiling and building code is one risk with external sources, but automatically running shite just because it happens to be in a directory is typically negligent and stupid.
By the same organisation that created auto-infect/auto-run, executables in emails and ActiveX in browsers (hell, ActiveX has just always been awful)