
If you want to do Low-Latency properly ...
... you don't use GC-ed languages. You just use C++.
The Go programming language tops the list of skills that software developers say they'll learn next, according to a survey of 116,000 programmers conducted by hiring biz HackerRank. Some 36.2 per cent of respondents expressed interest in the Google-backed language, which is suitable for systems programming for those reconciled …
No, but I do write GUIs in perl, they're quick even on Raspberry Pis. They use system calls (to things written in C) like pretty much all other GUI things do. GTK3 and its support libraries in my case.
I use C when speed and control are the prime requirements - sometimes standalone, sometimes inline in perl (there's a module for that...).
I think cause and effect are switched in the article. Because perl was hated, no one learned it - the hate was based on hearsay - crowd effect*, but real enough.
Therefore, there aren't many people who really get perl, and it's the old supply and demand thing.
*From Men in Black -
Edwards: People are smart. They can handle it.
Kay: A person is smart. People are dumb, panicky dangerous animals and you know it.
https://www.youtube.com/watch?v=WPMMNvYTEyI
Because perl was hated, no one learned it - the hate was based on hearsay - crowd effect*, but real enough.
Shrug. I didn't hate Perl until I started learning it (from the O'Reilly book and articles in DDJ by Tim Kientzle and others). The more I learned, the more I hated it. This would have been in the mid-1990s, so Perl 5.
I have a strong UNIX background and had been writing sh and awk scripts for years before I first looked at Perl. And I knew Wall's work from patch and rn, both of which I'd used pretty extensively.
I wrote a few non-trivial utilities in Perl, but never warmed to it. The syntax made too much use of arbitrary punctuation. There were too many ways to accomplish basic tasks, with subtle differences. CPAN modules were far too inconsistent (a problem that plagues open-source component systems to this day, of course) and too many Perl developers relied on them for trivial things. Perl encouraged poor coding habits, particularly the production of unreadable and stovepiped code.
Some of these things are true of other languages I continue to use - the aforementioned sh (though these days I can assume bash on all the machines I use, which helps somewhat) and awk, C, etc. But Perl's no better, so there's no incentive to switch.
I think it's possible to produce good Perl code, but most of what I've seen is lousy, and enforcing that discipline on other maintainers is impossible. I already have that problem with C; I don't need it in Yet Another language.
I would add Forth to that list due to its ability to extend the language using itself.
For example:
* Add local variables to the language in 808 bytes, or an even smaller version that takes 188 bytes!
* Add your own array implementation
But yeah, your point still stands. Forth is so niche it's not on anyone's radar.
Erlang has GC but it's still used extensively in telephony systems and other (soft) realtime applications. I think it's more the case that most garbage collected languages don't give programmers the control / guarantees necessary to absolutely assure performance for certain tasks. In Java, Javascript, .NET or Golang you are at the mercy of the runtime.
I definitely agree about Rust. While I would never rewrite a C++ program for the sake of it, Rust would be my de facto choice over C++ unless there were reasons it couldn't be.
Properly? Like this? https://www.theregister.co.uk/2020/02/05/sudo_bug_allows_privilege_escalation/
Doing it properly means no buffer overflows. You'd rather repeat the same mistake over and over again than realise the statistics show C++ devs can't be trusted to do it "properly".
It's the Dunning-Kruger effect
> Doing it properly means no buffer overflows
All a side-effect of C not having counted strings. It's the biggest cause of buffer overflows in the history of the language. In order to know if my buffer is big enough to accommodate a concatenation of two strings I first have to walk the entire length of those two strings looking for a zero. Ridiculous.
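To illustrate, here's a minimal C-flavoured sketch (concat_checked is an invented name, not a standard function): with no length field, knowing whether the result fits means walking both strings first just to find their terminating zeros.

#include <cstdlib>
#include <cstring>

char *concat_checked(const char *a, const char *b)
{
    std::size_t la = std::strlen(a);    /* O(n) scan for the '\0' */
    std::size_t lb = std::strlen(b);    /* another O(n) scan */
    char *out = static_cast<char *>(std::malloc(la + lb + 1));
    if (out == nullptr)
        return nullptr;                 /* caller handles allocation failure */
    std::memcpy(out, a, la);
    std::memcpy(out + la, b, lb + 1);   /* +1 copies b's terminator too */
    return out;
}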
"In order to know if my buffer is big enough to accomodate a concatenation of two strings I first have to walk the entire length of those two strings looking for a zero. Ridiculous."
WOW!! Are you even in computer science at all? You can't even be trusted to maintain an index... wow.
In-band signalling is a problem with C, not just with null-terminated[1] strings but with formatted I/O, another common source of vulnerabilities.
Of course as with most problems in computing, this was a trade-off. It arguably makes sense for the language's original use case, system programming on a machine with rather limited resources.
In C++ there's rarely any good reason to use C strings, except string literals to initialize C++ strings and other objects, and transient use of the value returned by the c_str method and similar when calling C functions. Of course much C++ code is just a mishmash of poor C++ and poor C compiled as C++, because many of the people who write C++ can't be bothered to learn the language. (In part that's the fault of the language; it's too damn big.)
In C, non-trivial programs should refactor string handling into higher-level abstractions that employ appropriate safeguards and memoize intermediate results. Inline sequences of strcat() and the like aren't just dangerous; they're a sign that the programmer couldn't be bothered to abstract and refactor properly. The same can be said of the use of "safer" string functions like strncpy (which has broken semantics) and the Annex K string functions (strcpy_s, etc, or nonstandard predecessors like strlcpy). As Richard Heathfield used to point out, a well-behaved program should know whether the result will fit in the target before attempting the operation, so that it can handle the error case correctly.
But from what I've seen (and I've seen a lot of C), very few C programmers have the discipline to do that.
[1] An unfortunate aspect of the C standard (ISO/IEC 9899) is the overloaded use of the term "null", which can refer to a null pointer (a special value, not necessarily all-bits-zero, for a pointer type which indicates it does not refer to any object); a null pointer constant (an integer type with value 0, or the same cast to void*, when used in a pointer context); and the char object with all-bits-zero. For the last the committee would have done better to use "nul", the ASCII name for that code point.
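A minimal sketch of that check-before-you-write discipline, in C-style C++ (the buf struct and buf_append are invented for illustration):

#include <cstring>

struct buf {
    char *data;
    std::size_t cap;    /* total capacity, including room for '\0' */
    std::size_t len;    /* current length, excluding '\0'; invariant len < cap */
};

/* Decide whether the result fits *before* writing anything: returns 0 on
   success, -1 if the append would overflow, leaving the buffer untouched. */
int buf_append(buf *b, const char *src)
{
    std::size_t n = std::strlen(src);
    if (n >= b->cap - b->len)           /* need n + 1 free bytes */
        return -1;
    std::memcpy(b->data + b->len, src, n + 1);
    b->len += n;
    return 0;
}

The particular API doesn't matter; the point is that the error case is detected, and reportable, before any bytes move.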
C doesn't have strings AT ALL. That's the problem. There is no intrinsic string type of any kind. Instead you have a pointer to a char and the "string" is all the chars before a nul char.
As you say it has caused all kinds of overflow issues. A trivial one is a buffer too short to copy the string into, or off by one byte for the nul char. And even though C has "safe" string functions, it never bothered to remove the unsafe ones. So the likes of strcpy are still there as an attractive nuisance.
The situation is somewhat better in C++ but it doesn't have a string intrinsic either. Instead it has a std::string template class. This maintains the length of string as well as the string itself so it doesn't rely on a nul char but it sticks a \0 on the end anyway for c_str().
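For instance, a trivial sketch:

#include <cstdio>
#include <string>

int main()
{
    std::string a = "Hello, ";
    std::string b = "world";
    std::string c = a + b;                  // length-aware: no scan for '\0'
    std::printf("len=%zu\n", c.size());     // O(1): the length is stored
    std::printf("%s\n", c.c_str());         // '\0' still appended for C APIs
    return 0;
}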
Doing it properly means no buffer overflows. You'd rather repeat the same mistake over and over again than realise the statistics show C++ devs can't be trusted to do it "properly".
We're talking about sudo here. The source for sudo is C, not C++. They're different languages.
C requires developers to manage buffer allocations and lengths themselves while C++ provides handy and well-tested libraries to do these things so that the developer doesn't have to. Buffer overflows are common bugs in C, but very rare in C++.
Technology has moved on.
It depends what you mean by low-latency. E.g. JDK + ZGC has a max pause time of 10 ms, and in real world usage the max pause time might be only 5 ms, with the average pause time less than 1 ms. ZGC is optimized for very large GB/TB heaps. Shenandoah is another low-pause JVM garbage collector that works well on small heap sizes (e.g. 1-2 GB). Again, it has very low pause times.
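For what it's worth, enabling them is a single flag (a sketch, with MyApp as a placeholder; ZGC is production-ready from JDK 15 and additionally needs -XX:+UnlockExperimentalVMOptions on JDK 11-14, while Shenandoah ships in most OpenJDK builds but not Oracle's):

java -XX:+UseZGC -Xmx16g MyApp
java -XX:+UseShenandoahGC -Xmx2g MyApp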
Or C or Rust or some other language with predictable behaviour from one second to the next. Even languages like Erlang that have GC are still suitable for many soft real-time uses.
It's also important to mention that if you DON'T need low latency, or where something like disk / network latency is the biggest speed impediment, then using a language like C++ can be an invitation to a world of unnecessary pain.
There are of course still real software engineers. But I agree, or maybe it's an obvious fact, that when Javascript is the #1 sought-after skill, it's developers and not engineers you're looking for. It still takes the know-how, but an engineer won't be stopped by unavailable modules.
But to most, any of it is magical and scientific, so maybe take it up a notch and have them refer to you as Mr./Ms. Tesla or Walter Bishop.
It still takes the know-how, but an engineer won't be stopped by unavailable modules.
This has really started to annoy me recently. I'm increasingly meeting 'programmers' who seem to not actually want to program.
"So what library do we use?"
"Well, you could just write it??"
"Hmm, but then we'd have to maintain it...."
"YOU'RE A PROGRAMMER!!???"
(I'm not talking about anything particularly complex either, don't re-invent the wheel but sometimes you do still have to actually write code....)
My last job would've been IT-pizza delivery boy, except that the PCs had already been delivered by somebody else the previous day. All I did was take it out of the box, put it on the desk, and plug it into the power and network. So... I arrive at your house after your pizza has been delivered, and take it out of the box and put it on a plate for you. And my name badge claimed "IT Engineer".
Every software developer should be familiar with Moore's law, which says (in layman's terms) that computers get twice the processing speed every two years. In reality it's more complicated than that, since you have overhead (e.g. for multi-threading) which makes things slower, but also specialized processing units (e.g. graphics, video, cryptography) which can speed up bottlenecks.
But let's assume you get twice the speed every two years. If you compare C++ with Perl, you'll find that C++ is about 20 times faster than Perl. Again, it's way more complicated than that, depending on the use case, which libraries you use, or whether you use foreign function interfaces in Perl... yadda yadda yadda.
But let's assume in the year 2000 you wrote a C++ program that has a runtime of exactly 1000 seconds. If we apply Moore's law here we get the following runtime of the same program written in Perl (as a function of time, assuming you have contemporary hardware):
in 2000: 20,000 seconds
in 2002: 10,000 s
in 2004: 5,000 s
in 2006: 2,500 s
in 2008: 1,250 s
in 2010: 625 s
So if we interpolate here, we'll find that a Perl program has about the same runtime as a C++ program running on hardware from roughly nine years earlier (2 × log2(20) ≈ 8.6 years of doublings). If we do the same for Java (which is about 2-10 times slower than C++), we see the same result after only about 2-7 years!
Of course these are only theoretical shenanigans. But it's important to realize that it can be misleading to talk about "fast" and "slow" languages without considering the implementation details, the underlying hardware or the actual bottlenecks of the task (what about I/O or network?). There are a few select fields of programming that actually need the power of C++ (embedded, real-time, graphics...). But most of the time developer time is way more important than milking every last drop of optimization out of a given task.
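A quick sanity check of that arithmetic (the constants are the assumptions above, not measurements):

#include <cmath>
#include <cstdio>

int main()
{
    const double slowdown = 20.0;         // assumed Perl-vs-C++ factor
    const double doubling_years = 2.0;    // "Moore's law" as stated above
    // Solve slowdown / 2^(t / doubling_years) = 1 for t:
    double years = doubling_years * std::log2(slowdown);
    std::printf("crossover after %.1f years\n", years);   // ~8.6
    return 0;
}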
You have misstated Moore's law. Moore's law only says that the number of transistors doubles approx every 2 years. It says nothing about performance.
Single threaded performance is probably increasing by about 5-10%/year. Most of the recent performance gains have come from adding more cores, and that is only useful if you have programming languages that can make use of the multiple cores well, and most programming languages don't.
So, your python program running on a machine today won't be significantly faster than running on a machine that is 10 years old.
I'd add "and the mind to take proper advantage of multiple cores/threads." It's not easy to change the patterns you use in the specifications, design and actual code involved as I well know having done so right down to using assembly for the execution. Same grade of shift as when doing FPGA programming.
It's not just about performance. Turning 20 times as much battery power into heat is also an issue. If you are running in the cloud on a rented computer, you may be paying 20 times as much for that, too.
You may also find your program is expected to process twenty times as much data after 10 years, too.
It all depends on the task in hand: what language to use and how speed-critical it is.
I used to write perl scripts for doing heavily string processing intensive work on large datasets for the support team.
They wanted to know why, when most tools were written in C#, they had to use Perl for this...
So I wrote a C# app to do the equivalent and they then saw it was orders of magnitude slower (crucially, slow enough that support would say it was no use for their needs), as C# was just not a good language for the task whereas Perl was.
...and there's nothing to stop you wrapping a C library in a DLL and calling it from C# either. For some problems, that might even be the right solution. i.e. write the non-performance critical parts of your code in a high-level language with all its useful features like type safety, and write the critical parts in C / C++ and call them from the managed code.
If you're using a high-level language that doesn't have the ability to call external functions in low-level languages, well then you're either using a language that isn't feature complete, or deliberately doesn't allow that (for security reasons, probably).
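As a rough sketch of the native side (fast_scan and its signature are invented for illustration), it's just a function exported with C linkage:

#include <cstddef>

extern "C"
#ifdef _WIN32
__declspec(dllexport)
#endif
int fast_scan(const char *data, std::size_t len)
{
    /* the performance-critical inner loop lives in native code */
    int hits = 0;
    for (std::size_t i = 0; i < len; ++i)
        if (data[i] == '\n')
            ++hits;
    return hits;
}

On the C# side a single [DllImport] declaration binds it. Each managed-to-native call carries marshalling overhead, so it pays to have the native function do a meaningful chunk of work per call.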
One time I drove a couple of screws using a manual screwdriver, and it only took a couple of minutes.
Another time I used my impact driver, but I had to charge the battery for 20 minutes first, so it took an order of magnitude longer than the manual screwdriver.
This proves that manual screwdrivers are superior to impact drivers, regardless of the use case or how the tool is employed.
Your last comments get to the point. C++ and C typically only offer one advantage over higher level languages - performance.
If your software spends most of its life waiting for the click of a button, or network IO, or disk IO, or some event that happens twice a day, then maybe C++ is not a good language to write the code in. The same goes if the software needs to be reliable, portable or maintainable.
I think this is the very obvious reason that C++ finds itself shoved out to such things as graphics, games, systems and some embedded stuff. Even in those places, I suspect people resent the bullshit C++ puts them through - null pointer dereferences, overflows etc. This would explain why Rust and Golang are increasingly seen as alternatives.
The numbers will be skewed by a few things. For one, certain fields tend to pay more, and also tend to use certain languages. In these cases there's a lot more to getting more pay than just learning a new programming language. You also need the knowledge of how things are done in that field of endeavour.
There are also geographic factors. Again, some languages are more commonly used in certain geographic areas, and those geographic areas may have higher pay, along with higher costs of living (so the people there aren't necessarily better off so far as quality of life goes).
The "intend to learn language 'x' next" statistic also needs to be taken with a grain of salt. Lots of those people will also "intend" to lose weight and exercise more, but not get around to those either. What often prompts learning a new language is that it is needed for something they need to do, which may have not much relationship with whatever their current aspiration is. It is though an indicator of a language that they don't currently know, but have heard of. Languages which are both high on the list of ones people would like to learn and high on the list of what skills hiring managers want will probably give a better idea of what languages people will actually put the effort into learning to a useful degree.
I can believe that Perl pays fairly well, so long as we're talking about large Perl applications and not some script somebody hacked together to comb through log files. Not many people are learning Perl these days, while there's still a lot of large commercial web sites that were built on Perl in bygone years and still need maintaining. Perl is like mainframe COBOL programmers in that respect.
@monty - and attitude. Being able to work well in an environment with all the usual constraints - (good, bad, indifferent, enough, inadequate) (people, tools, platforms, requirements, budget, processes, coffee) was always high on my list when recommending pay rises.
I started to learn Go, but then I found out that it forces you to put the "curly-bracket" on the *same line* as the function name (due to not needing as many semicolons supposedly).
As a victim of IBM's "Lines of code, per engineer, per month" metric (as a junior engineer), I cannot put the "{" on the same line as the function name without PTSD-like issues, lest someone line-counts my code and finds me under that magical 10,000-line bar.
So that, and the recent antics of google (yes I know Go is open source), prompted me to go learn something else.
Oh God! I'd forgotten about code rates. We had Software and Deliverable LOC measures. They were the bane of my life when I managed development projects. The company had mythical Line of Code rates. The engineers estimated how many Lines of Code a project would need and then these two very unreliable numbers were compared every month and I had to explain to the board why we were behind (we were never, ever, in front). The correct answer - "because the rate and the estimate were miles out" - would not have been an acceptable explanation. The LoC rates were set in stone and weren't allowed to change even when no project ever hit them.
Even worse was when the question was posed early on in the project. The answer "because we haven't finished design yet, so we haven't started coding yet" resulted in an even dumber measure. Someone decided that we could track the design stage by counting all the parsed customer requirements and then measuring how many we had "designed" each month based on a straight line vs time "development rate". The answer to why we were behind - "because we spend more time counting requirements than designing for them" was also not acceptable.
I'm so glad I don't have to do that any more, but I come out in a cold sweat just thinking about it.
I was interested in Go, took a class, wrote a web service with it, and swore to never use it again. There are too many things it outright refuses to do in an easy, clear, or safe way. Go is big at Google because they're using a dumbed-down subset of C++ that's arguably worse.
Any language that allows one to write expressively enough to create poetry, yet arcane enough that optimized code is indistinguishable from line noise, rocks.
Obligatory Black Perl reference:
https://en.m.wikipedia.org/wiki/Black_Perl
Sometimes I look at a Perl script I've written and have to say, "how the ....does this work? It's unreadable! What idiot, er, I did that? Damn.. " Perl really is a write-only language
Increasingly I use Python where once I'd have used Perl, chiefly because the young and energetic all seem to know Python, and they may inherit my scripts in a few years.
Once, though, I was looking over a lot of PHP scripts for a system we were bringing in house, and not liking the quality of the code or the quantity of the documentation. Then I found a file that actually had decent structure and documentation. It turned out to be a Perl script that I had written to demonstrate something to these folks. It turns out that you can put all the Perl you want in a file with the ".php" extension, as long as the first line is
#!/usr/bin/perl
Also, self-taught programmers may be heartened to learn that 32 per cent of developers who work for companies of 49 people or less do not have a college degree.
I've been programming for 38 years now (started with a Sinclair Spectrum which is why it's 38 years). I don't have any programming qualifications and have done well for myself forging out a career I love that has left me - at age 53 - contemplating retirement at a time of my choosing.
I do have an OND in engineering but it's the 1970s version and didn't have programming on the curriculum.
How many of you have logged your CPD assignments in order to maintain your skills and professional (chartered) status, in the past year?
How many of you would go to a doctor, dentist, pharmacist, accountant or even lawyer who had not?
Until Software Engineers are required to have Chartered (or local equivalent) status to practice, then wages will be suppressed by the mass of (relatively) unproven workforce.
My Pharmacist wife earns more in 3 days than I earn in 5, despite me earning more than the UK average according to this survey.
I'm reaching for the flameproof coat now!
It's a nice idea but with the ongoing world shortage of software developers and the 'need' for rapid development I don't think it will ever fly. And truth is it's probably not necessary. The careers you mention all carry serious health and legal implications if things go wrong. Lives can be lost or irrevocably damaged. Whilst the same is true for some software development most of us don't do anything that important or risky.
I've been developing software for nearly 40 years and throughout that time if I screwed up and my code was wrong it mostly just meant restarting the application and/or avoiding a particular feature or using it in a specific way. Annoying, yes, but no real harm done. For better or for worse most software only has to be 'adequate'.
Software development is also slightly different in that issues can be fixed quicker and for less cost than most disciplines. Some can be fixed on the fly and most can be fixed by a simple exit application, apply an update, restart the application cycle which might only mean ten minutes of downtime.
Would it be a different world if all software was developed to the same rigorous standards as medicine or law (ha ha) is practised to? For sure. Software would be developing at a far slower pace, probably to the accompaniment of hordes of lawyers. I'm not convinced it would be a better world.
I'd also argue that most software developers get paid quite well for their work simply because they have scarcity value and are in high demand. I've only felt underpaid once in my career, and that was in my first job, so to be expected. At age 53 I've earned enough to be able to retire any day now, when the need for more golf becomes unbearable. I'd say I've had a lucrative career.
CPD's a good idea in theory, but in over 30 years of being involved in recruiting HW, SW and FW engineers - from advising the boss, through writing job requirements, filtering applications and interviewing people who would be working for me - I have never required, specified, nor seen specified by colleagues, the need to be a member of one of the institutions or to be a Chartered Engineer.
The IET's annual salary survey will always claim a correlation between high wages and membership, but if it exists then, in my experience, it's not a causal one. The CEngs might be the pushiest, but they're not necessarily the best.
I gave up on the IET's CPD when it let me claim an hour's CPD credit for attending a "lecture" by an "engineer" who had worked for the British Antarctic Survey in Antarctica. The whole hour was, effectively, him showing a bunch of holiday snaps with zero technical content. He had two technical questions at the end of the lecture about specific uses of some of the measuring equipment he was responsible for and his answer, in both cases, was "I don't really know how they worked, I just did readings as per the schedule and if I had a problem I emailed the UK." I went to a lecture about building a steam engine, which also got me an hour's credit. It was of absolutely no relevance to my career development but at least it was interesting and the bloke really knew his stuff.
That the IET considers lectures like this to constitute professional development demonstrates the complete lack of quality control in the process and therefore my lack of confidence in any CEng presenting themselves as being better than a non-CEng merely on the basis of their membership. Work experience and subject knowledge are much more important and MIET/CEng are not necessarily guarantees of either.