"Anything less than a 60fps cursor is garbage! Go code on an Xbox, peasant!" -- PCMasterRace
Blinking cursor devours CPU cycles in Visual Studio Code editor
Microsoft describes Visual Studio Code as a source code editor that's "optimized for building and debugging modern web and cloud applications." In fact, VSC turns out to be rather inefficient when it comes to CPU resources. Developer Jo Liss has found that the software, when in focus and idle, uses 13 per cent of CPU capacity …
COMMENTS
-
Friday 24th March 2017 12:50 GMT Anonymous Coward
Ah yes, I should have read closer: "Chrome is doing the full rendering lifecycle (style, paint, layers) every 16ms when it should be only doing that work at a 500ms interval."
Still, how much of that is necessary? And even at 60 times per second, I wouldn't have thought a text editor screen update would use 13% of the CPU time.
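The distinction in the quoted bug report can be sketched in a few lines of JavaScript (illustrative only, not VS Code's actual implementation): the cursor's visible/hidden state is a function of time that only changes every 500 ms, so repainting it every 16 ms is almost entirely wasted work.

```javascript
// Illustrative sketch, not VS Code's actual code: the blink state only
// flips every 500 ms, so driving the repaint at frame rate (16 ms) does
// ~30x more style/paint work than driving it at the blink interval.
function blinkVisible(elapsedMs, intervalMs = 500) {
  // visible for one interval, hidden for the next
  return Math.floor(elapsedMs / intervalMs) % 2 === 0;
}

// How many style/paint passes per second a given tick rate forces.
function repaintsPerSecond(tickMs) {
  return 1000 / tickMs;
}
// At a 16 ms tick that is 62.5 passes/s for only 2 state changes/s;
// at a 500 ms tick it is 2 passes/s with the same visible result.
```

In a browser the cheap fix is to touch the DOM only when `blinkVisible` actually changes, or to hand the blink to a CSS animation so the compositor does it off the main thread.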
-
Friday 24th March 2017 15:51 GMT gnasher729
Here's a possibility: At some point the screen was updated twice a second (or when you typed), so that used 0.4% CPU time, and nobody cared how efficient or inefficient the actual drawing was. It was inefficient, but who cares if it's 0.4% CPU time inefficient?
Then someone was careless and introduced 60 updates per second. Now the inefficiency hurts. (Of course nobody will work to make the screen updates twice as fast, if you can just set the updates back to twice a second and save a lot more CPU time).
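gnasher729's back-of-the-envelope scaling checks out against the article's figure; as a two-line sketch (figures from the comment and article, the function name is mine):

```javascript
// If 2 redraws/s cost 0.4% CPU, what does the same per-redraw cost
// come to at 60 redraws/s? (Figures from the comment above.)
function scaledCpuPercent(baseRatePerSec, basePercent, newRatePerSec) {
  const percentPerRedraw = basePercent / baseRatePerSec;
  return percentPerRedraw * newRatePerSec;
}
// scaledCpuPercent(2, 0.4, 60) comes to about 12 - in the right
// ballpark of the 13% the article measured.
```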
-
Friday 24th March 2017 18:06 GMT bombastic bob
Re: but but but
"It's slow and not very useful. I've tried it, but I wouldn't consider switching from Geany."
I just use pluma (or gedit if gnome 2 is installed). Works well enough, highlights code elements, few irritating features (other than having to occasionally strip ending white space off of code lines, but a lot of editors cluelessly let that happen these days...) and it DOES NOT HAVE PERFORMANCE ISSUES that I can tell (even in a VM).
-
Friday 24th March 2017 07:55 GMT stephanh
Re: The solution -
One of the many things I like about Vim is that it is so fast. "vim foo.c", bam, the file is there, no loading screens or other nonsense. You search for something, bam it is there. Syntax highlighting never takes a coffee break (et tu, Emacs?).
Non-bloated apps like Vim are great. With them, you actually get to profit from faster CPUs.
I tried Visual Studio Code once for kicks and it was like typing in treacle.
-
Friday 24th March 2017 08:45 GMT GrumpenKraut
Re: The solution -
> (et tu, Emacs?).
The memory footprints of Emacs and Vim are virtually identical (try it!). Things like the splash screen can (and should) be disabled in Emacs.
The one situation where I find Emacs lacking in performance is very large text files (hundreds of megabytes). And even that can be (mostly) fixed by disabling line-number-mode and syntax highlighting if the file is larger than some threshold of your choice.
-
Friday 24th March 2017 09:56 GMT stephanh
Re: The solution -
@GrumpenKraut
I have no qualms with Emacs memory usage (and a quick "top" shows you are indeed right about comparable memory usage.)
I switched from Emacs to Vim around 2000; at that time, I recall, Emacs tended to be sluggish in syntax highlighting, especially on larger files. Perhaps faster CPUs or performance improvements in Emacs have made the issue moot by now.
-
Friday 24th March 2017 07:00 GMT JoeF
Re: The solution -
I am wondering if Munroe got the inspiration from the first "Real Programmers" rant: Real Programmers don't use Pascal
-
Friday 24th March 2017 14:52 GMT stanimir
Re: The solution -
>>I am wondering if Munroe got the inspiration from the first "Real Programmers" rant: Real Programmers don't use Pascal
Obviously it's a reference. I remember reading this paper as a kid (about 10)... good stuff -- I still recall quotes, and some of them are partly true (like the bits about arrays and data structures).
-
Friday 24th March 2017 15:35 GMT Zippy's Sausage Factory
Re: The solution -
I am wondering if Munroe got the inspiration from the first "Real Programmers" rant: Real Programmers don't use Pascal
Or The Story of Mel?
-
Friday 24th March 2017 09:40 GMT Anonymous Coward
Re: The solution -
In our company they actually use vim - but don't tell them about the flapping-butterfly method or someone will propose it as a cost saving.
Why would anyone want to use an IDE? That takes time to set up, time that could be better spent eking out code in vim. Hell, not even ctags works, thanks to crazy includes. But why would anyone want ctags when we've got grep?
-
Friday 24th March 2017 18:14 GMT bombastic bob
Re: The solution -
"Because a good IDE makes you much more productive"
Which is why I've been working on one, off and on, for several years [using native X11, meaning a simplified C-language toolkit to manage basic UI elements]. Money would make it go faster.
Until then, there's pluma (or gedit on gnome 2) for the code, gimp for graphics images, and 'whatever tool' (including hand-coding) for HTML and dialog layouts. When you look at the older Visual Studio versions, where hot-keys quickly got you to the thing you needed to change something on a dialog box (or add a variable, let's say), the IDE _WAS_ more productive. Since 2000-something, though, it's gotten all "property sheets" and "mousie-clickie-mousie-clickie" where it JUST! GETS! IN! THE! WAY!!!
in short: if you have to remove one hand from the keyboard to operate a mouse more than a few times per hour, there's something wrong with the IDE.
And DevStudio is one of the _WORST_ at that (post DevStudio '98 anyway)
-
Friday 24th March 2017 14:55 GMT Dave 15
Re: "13 per cent of CPU"
ONLY, ONLY, ONLY... Only 3% is precisely 3% more than should be wasted on such crap.
Really, I don't care if it is 13, 3, or even 0.0000003: it is extra crud that adds nothing useful and should not be present. It also adds to loading time, causes more memory page faults, uses more cache, etc. etc.
-
Friday 24th March 2017 05:40 GMT Anonymous Coward
Rule #1 for the user-facing components development
We used to have a rule saying that developers working on user-facing components (such as editors, data input, visualization tools, and other data presentation) should be allocated the slowest systems their users might be running. This tends to result in code which works well for all users.
Obviously, the developers in question tend not to like this approach very much...
-
Friday 24th March 2017 09:11 GMT Patrick Moody
Re: Rule #1 for the user-facing components development
I think the developer should have access to all the computing horsepower he/she desires (within the employer's budget) but that testing/debugging should be done in a deliberately crippled virtual machine. This way the actual development tools will perform well on the high spec workstation, but the application produced can be seen to work well enough on a low spec end-user's machine if it does so in the low-end virtual machine instance.
-
Friday 24th March 2017 12:56 GMT swampdog
Re: Rule #1 for the user-facing components development
I couldn't agree more. I was once tasked with making an AIX system "linuxy". The first issue was to build a reliable gcc, because the freeware gcc would crap out on large builds, spuriously producing object files with no code inside. Now, they could have temporarily given me a (then new) unprovisioned POWER6 box (destined to become a new Oracle server), but instead some wag with the ear of those above trotted out that "give the devs crap" twaddle. I got instead a standalone dual-core POWER5 box with 1G RAM, and IIRC it would take upward of 40 minutes to build gcc 4.0.2 - except it didn't, because of the afore-mentioned freeware gcc bug. There was about a week's worth of debugging to get an initial, reliable gcc built, but the actual time spent was more than two months.
-
Saturday 25th March 2017 08:40 GMT Anonymous Coward
Re: Rule #1 for the user-facing components development
I got instead, a standalone dual core power 5 box with 1G ram and iirc it would take upward of 40 minutes to build gcc 4.0.2 - except it didn't because of the afore-mentioned freeware gcc bug. There was about a weeks worth of debugging to get an initial, reliable gcc...
You should have invested a couple of hours in reading the gcc installation manual. The SOP in cases where the native cc was missing or broken was to first build gcc as a cross-compiler on a different system, then use the cross-compiler to build stage1 for the target. This was a bit more work than the all-native installation (you also need to cross-build binutils and a few other bits), but it works very well. I used this technique to build gcc on several POWER3 and POWER4 AIX boxes far less capable than the one you describe; I do not remember it ever taking more than a day (with an overnight final testsuite run judiciously thrown in - but that does not count as my time, does it?).
-
Friday 24th March 2017 13:40 GMT PM from Hell
Re: Rule #1 for the user-facing components development
From my own experience as a Tech Support / Ops Manager and then PM: giving devs top-end machines and then testing on lower-powered devices just delays tuning until UAT, or even worse, production.
There is a strong argument that devs need a higher-powered machine than end users, but they need to feel the hit if they are developing inefficient code; otherwise something which runs a 'bit slow' at test transaction volumes is unusable in production.
-
Friday 24th March 2017 15:37 GMT Zippy's Sausage Factory
Re: Rule #1 for the user-facing components development
Which is why I've had bosses ask why I don't want an upgrade to my development machine.
"But this is one of the oldest machines in the company? Why?"
"Because if it doesn't work well on here, it's not fast enough. And I don't like writing slow code..."
-
Friday 24th March 2017 23:31 GMT swm
Re: Rule #1 for the user-facing components development
At Xerox, for the STAR program, programmers needed to build an environment for building and testing the STAR applications so they built the Xerox Development Environment. It ran like greased lightning. The actual product code was very slow because the developers didn't use it and it used structured doodads everywhere.
Using slow hardware for development means that the programmers will make the software fast.
-
Friday 24th March 2017 09:10 GMT Douchus McBagg
why the fudging flipperdy flop, is the CPU giving two patooties about rendering the futher monking cursor?
this is why you never give the dev team decent kit. makes 'em lazy. if it runs good on a thing from ten+ years ago, it'll hit light speed on a reasonably current machine. and that equals "good user experience"
how much CPU does a serial-link WYSE terminal have to render its cursor? sweet gravy Winston, if I had 13% of the CPU power of a current thing back then, I'd have been playing Quake in 1978 instead of having been eaten by a grue!
our current dev team have dual-socket six-core Xeons with gobs of RAM and CUDA cores. Makes me want to puke. The business wonders why all their in-house apps run like chewing gum on a sun-baked sidewalk.
/Cave Johnson
-
Friday 24th March 2017 14:49 GMT Dave 15
yup crappy code from crappy design from crappy requirement
Who the hell needs a blinking cursor, and who needs one with an option to switch it to solid and a timer for a refresh rate so high....
For heaven's sake, I buy a more powerful processor with more RAM so I can do more USEFUL things (like designing good porn-finding programs) and NOT to waste it on pointless screen decoration.
I mean... from what I can see the code needs...
setup()
    create a timer()

on timer()
    go to the options storage area, find the option - probably including parsing some xml these days()
        // yes, done every time in case someone has changed the option!
    check whether the cursor is currently on()
    if (solid)
        if cursor not on
            set cursor on
    else
        if cursor not on
            set cursor on
        else
            set cursor off
    retrigger timer if needed by the os design
ffks sake really, all this for something I don't need, and all the potential security holes as well (i.e. make sure the option code is good etc etc etc)
And then there's writing the code for setting and clearing the option.
No wonder modern computer code is slow, shit, bloated and requires gigabytes to print hello world on the screen
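For what it's worth, the pseudocode above is easy enough to make runnable; here is a minimal sketch (all names hypothetical, not any real editor's API) which at least caches the option instead of re-parsing it on every tick:

```javascript
// Hypothetical sketch of the blink loop described above. Unlike the
// pseudocode, the "solid" option is cached and refreshed only when a
// change is signalled, instead of re-read (and re-parsed) every tick.
function makeCursorBlinker(readOption, setCursorVisible, intervalMs = 500) {
  let solid = readOption() === "solid";
  let visible = false;

  function tick() {
    if (solid) {
      if (!visible) {        // solid cursor: just make sure it is shown
        visible = true;
        setCursorVisible(true);
      }
    } else {                 // blinking cursor: toggle on every tick
      visible = !visible;
      setCursorVisible(visible);
    }
  }

  const id = setInterval(tick, intervalMs);
  return {
    tick,                                                // exposed for testing
    optionChanged: () => { solid = readOption() === "solid"; },
    stop: () => clearInterval(id),
  };
}
```

Even this toy version only touches the display when the state actually changes, which is the whole complaint about the 16 ms render loop.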
-
Wednesday 14th June 2017 08:52 GMT Stuart Castle
In the past, when I've done development work for work (which, admittedly, didn't go beyond utilities we needed for given tasks, so was never anything massive), I've always used two machines. I used a relatively fast one for development, as I usually code in C++ and most C++ compilers (especially Visual Studio, although I prefer not to use that) really do benefit from a fast machine with a lot of memory (although it seems to be the memory that generally provides the most benefit). For a test machine, I used the oldest, slowest machine I could find.
Why? The users that actually used the little apps and utilities I wrote were not likely to have had the latest and greatest CPUs and Graphics Cards. They would not necessarily have masses of memory in their machines. I needed to see what the users were seeing.
Reminds me of a story I heard about George Lucas. Apparently, when he had finished Return of the Jedi, he went to a local cinema to see it. When asked why, he replied that he had only seen it in Hollywood screening rooms, which always have the best projection and sound systems, and screens. He wanted to see what the man on the street would see when he watched it. Apparently he was appalled, and that led to the formation of THX.