O/T but a brilliant quote:
I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone.
Bjarne Stroustrup (1950- )
Bjarne Stroustrup, creator of C++, has issued a call for the C++ community to defend the programming language, which has been shunned by cybersecurity agencies and technical experts in recent years for its memory safety shortcomings. C and C++ are built around manual memory management, which can result in memory safety errors …
Getting that speed right does indeed matter.
Rush too soon to Rust and the world shortage of C/C++ programmers becomes total; we've still got billions of lines of code to look after for a long time to come, and no one to maintain them. Draw it out too long, or take too long to add proper memory safety to C++, and we'll be paying the price of not transitioning quicker.
It's almost as if there's the need for a plan. What could such a plan be?
Plan
I think it'd have to involve educators, projects, standards bodies and governments. Some sort of "date" would have to be set. Standards bodies would have to agree whether the future is Rust, or some sort of modified C++. Educators would have to start teaching C, C++ and Rust or modified C++ to degrees appropriate to where that generation of students are with respect to the "date". That is, keep teaching C/C++ before the date, start mixing in Rust or modified C++ as the date approaches and for a period afterwards (to make sure there are people comfortable with both), and then finally drop (unmodified) C/C++ when the bulk of the work is done. Government would need to make standards bodies and the educators actually conform to the agreed plan and timescale, and nudge projects by procurement policy (i.e. an unimproved legacy OS gets banned from government services). Projects would have to commit to being ready-ish for "the date", or face becoming irrelevant.
An alternative plan - and one that a lot of C++ people seem to hope would pay off - is to somehow modify C++ to retain source code compatibility, whilst bringing in the safety benefits that we know other languages can readily achieve.
Reality
What actually seems to be happening is that Rust is unexpectedly good and hard to fault, is generating its own momentum (through the enthusiasm of the people using it), and everyone else is being left behind by the fast pace being set. Government is already pushing ahead with some heavyweight nudging. Which means, the C++ standards bodies and educators need to act now or be faced with a de facto Rusty future.
So yes, Bjarne Stroustrup's call for urgency is appropriate.
Thing is, it's going to be very hard to make C++ "competitive" with Rust in the "safety" stakes whilst also not totally breaking C++ source code and forcing a major re-write of source code anyway. That's a pretty narrow technical window to aim for, and a narrow window of opportunity time-wise. The odds are not in C++'s favour, I'd say.
C++'s Best Bet?
I think the thing that's probably best for C++ is to go all-in on the "safety" aspects of changes; add syntax to make it semantically equivalent to Rust, remove old C++ syntax and all the trouble it causes, and force projects to do a lot of work if they want to use it.
That would make new C++ kind of a C++ homage to Rust. However, this could be a good thing. Rust's one weakness is that it is further removed from C/C++ syntax than is usual. Having all of Rust's tricks, whilst still resembling old fashioned C/C++ might make it a few friends.
And the nice thing is, such a language could be rapidly built as nothing more than a language translator that emitted Rust source code. This new C++ would be to Rust a bit like what TypeScript is to JavaScript; something a bit nicer, friendlier (and familiar), but totally fitting in with what's underneath.
C Dies Off
C's demise seems pretty hard to avoid, long term.
C++ guarantees cross module boundaries. An entire dynamically linked application can all have the same set of guarantees, right down to the OS.
Rust guarantees stop at module boundaries. Everything is lost when it calls to any dynamically linked module.
So even if you do rewrite those modules in Rust, you get nothing. A safe module is impossible, because you have to treat the Rust module as-if it's C.
Worse, Rust treats all breached invariants as immediately fatal. Your application may not corrupt memory, but it dies with no possibility of recovery.
C++ exceptions mean a breached invariant can be detected and recovered from before a memory corruption would have occurred. Even if you choose not to recover for domain-specific reasons, the application can take the opportunity to save user data or roll back a partial change.
In the latter case, Rust will just corrupt your on-disk data instead - you call the module and it kills the application.
> C++ guarantees cross module boundaries. An entire dynamically linked application can all have the same set of guarantees, right down to the OS.
You're overselling how much real-world dynamic loaders ensure about ABIs matching up. There's a reason that Microsoft is especially adamant that you deallocate memory in the same compilation unit as the one you allocated it in, and it's down to how they reserve the right to have different DLLs link to different instances/revisions of the MSVC memory allocator.
> Rust guarantees stop at module boundaries. Everything is lost when it calls to any dynamically linked module.
First, they're working on a stable ABI you can opt into (the working name is CrABI if you want to look up the discussions).
Second, the reason their default ABI is unstable is tied into the same limitations that affect C++. Give these posts a read:
Third, there's the abi_stable crate if you want to build something like a plugin system. It uses Rust-to-C-to-Rust marshalling, similar to what something like Windows's COM APIs do. (Yes, Windows doesn't rely on C++ to preserve invariants... it uses an IDL and marshalling.)
> Worse, Rust treats all breached invariants as immediately fatal. Your application may not corrupt memory, but it dies with no possibility of recovery.
I don't know where you're getting this, but it's wrong.
You're referring to panic! (the unconditional operation that underlies Rust's assert! and other similar calls like .unwrap()) and, unless the project (not a dependency, the top-level package) specifies panic=abort in its build definition, you get stack unwinding, which will call destructors as it goes (though Rust calls them Drop implementations).
Rust unwinding has always been limited to the thread it occurs on (because Rust's approach to shared XOR mutable ownership, paired with poisonable mutexes, makes it very difficult to observe inconsistent state from an adjacent thread) and has had std::panic::catch_unwind since v1.9, which is intended to allow you to translate Rust panics into C return codes when writing libraries, but CAN be used to build a hacky exceptions system if you're that kind of psycho.
Also, most of the things which would raise exceptions in C++ instead show up as Result<T, E> (Rust's error-handling cousin to std::optional) in the return value.
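To make that concrete, here's a minimal sketch of the catch-at-the-boundary pattern just described, translating a panic into a C-style return code (do_work and work_entry_point are hypothetical names, not from any real library):

use std::panic;

// Hypothetical worker: out-of-bounds indexing panics; it does not corrupt memory.
fn do_work() -> i32 {
    let v = vec![1, 2, 3];
    v[99]
}

// Stop the unwind at the FFI boundary and hand a plain error code to the C caller.
#[no_mangle]
pub extern "C" fn work_entry_point() -> i32 {
    match panic::catch_unwind(do_work) {
        Ok(result) => result,
        Err(_) => -1, // the panic became an ordinary return code
    }
}

fn main() {
    // Called from Rust here for demonstration; a C caller would see -1 too.
    println!("entry point returned {}", work_entry_point());
}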
> In the latter case, Rust will just corrupt your on-disk data instead - you call the module and it kills the application.
Rust's unwinding cleanup is just as able to prevent corruption as C++'s. They just had a scare early on in Rust's development, when they realized that multi-threading invariants they wanted to uphold at the type-system level, for an API that almost made it into v1.0, would have broken if you used Rc or Arc to create a reference cycle (they called it the leakpocalypse). So they focus on getting people to understand that no destructor exists which will run on OOM Kill or someone tripping over the power cord.
"So even if you do rewrite those modules in Rust, you get nothing. A safe module is impossible, because you have to treat the Rust module as-if it's C."
If you are calling Rust from another language (e.g. C) then you build as cdylib. Even if the entry points to a cdylib are unsafe, the innards can be checked against themselves as safe, i.e. you're already in a far better position than if the library was built with C/C++.
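A minimal sketch of what that looks like (sum_bytes is a hypothetical entry point; build with crate-type = ["cdylib"] in Cargo.toml):

use std::slice;

#[no_mangle]
pub extern "C" fn sum_bytes(ptr: *const u8, len: usize) -> u64 {
    // The unsafe part is confined to ingesting the raw C input...
    if ptr.is_null() {
        return 0;
    }
    let bytes = unsafe { slice::from_raw_parts(ptr, len) };
    // ...and everything from here on is ordinary safe Rust, checked by the compiler.
    bytes.iter().map(|&b| b as u64).sum()
}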
"Worse, Rust treats all breached invariants as immediately fatal. Your application may not corrupt memory, but it dies with no possibility of recovery."
Rust prefers errors to be defined in the function contract, i.e. a function should return a Result<success, error> and the caller should process the result. The language makes it easy to process results or propagate them up. This is also what Swift and Golang do in their own ways - Swift makes it look like a try/catch but it's the same under the covers; Golang has an ok, err return convention.
What you might be referring to are panics, which Rust considers to be untenable application errors, i.e. you did something dumb like unwrapping something without checking for an error, or exceeding the bounds of an array, and the code panics. Normally it is fatal (to the thread) but you can catch it with a catch_unwind() if you really wanted to. So yes, there are two possibilities of recovery - define results in the function contract and handle errors properly, or use catch_unwind(). Neither has to be "immediately fatal", although if I were a developer I would want to know about such failures, not hide them.
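As a rough sketch of the contract style (read_port and the file name are made up for illustration): the fallible outcome is part of the return type, and the ? operator propagates it upward.

use std::fs;

// Hypothetical config reader: each fallible step either yields a value
// or propagates its error to the caller via `?`.
fn read_port(path: &str) -> Result<u16, String> {
    let text = fs::read_to_string(path).map_err(|e| e.to_string())?;
    let port = text.trim().parse::<u16>().map_err(|e| e.to_string())?;
    Ok(port)
}

fn main() {
    match read_port("port.conf") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("config error: {e}"), // handled, not fatal
    }
}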
"C++ exceptions mean a breached invariant can be detected and recovered from before a memory corruption would-have occurred. Even if you choose not to recover for domain-specific reasons, the application can take the opportunity to save user data or roll back a partial change."
Safe-by-default makes Rust extremely hard to corrupt, and if corruption did happen (e.g. via a call out to a C lib) it would crash. But a panic is an orderly unwinding of the stack, terminating either the thread or at the catch_unwind(). Secondly, C++ *doesn't* protect you from memory corruption - you will get undefined behaviour, a crash rather than a helpful exception. If you're lucky you might get some kind of stack trace that helps isolate where the crash occurred, but not necessarily where it was *caused*. e.g. I had to fix a crash-on-exit bug that was actually caused if someone hovered over the system tray, and it took days to find.
Underlying code has had a lifetime of bugs logged, investigated and fixed.
Nobody is suggesting that all C code is inherently unsafe, only that it can't be assumed to be safe. We have high confidence that long-lived legacy code is safe with good probability. The fact that we aren't ditching things that have been working well for years doesn't mean we should continue to code the way they were made. We got a long way with leaded petrol, and we didn't need to scrap the cars that relied on it as soon as unleaded petrol was introduced; we continued to use them, and eventually a formulation was found that let unleaded petrol be made with the right octane rating to work in many of the old leaded petrol vehicles, and classic cars still run... but while we're happy to run vintage food vans for trendy hipster lunches, we don't make vans that way any more, because it's the wrong way to do it.
There are better ways to make cars and we continue to use parts that are made the old way because it's just not necessary to change. But we don't tend to invent new things that use the old manufacturing techniques, because that's silly.
Same with programming:
Keep using the old things that have been working well up to now, but making new things using old techniques is inefficient and a waste of time.
Actually it's not, since they've changed the language by removing the union and goto keywords. A LOT of code uses those things.
Aside from that, there are some other considerations. This language doesn't make ancient code safe, it just makes it fail predictably, i.e. if someone abuses a pointer the code exits rather than being exploited. That's certainly a benefit, but there is a performance overhead, because C code walks buffers with indexes or pointer arithmetic, so something has to check each access at runtime.
And maybe you want the ancient code to actually handle the failure rather than have your program die (e.g. if there is an exploit you could recover from)? Well you're limited to the trap command which is very inelegant.
And what if your ancient code wants to call external APIs, system functions etc.? One side uses RTTI to protect data, the other doesn't. You can probably call from TrapC to C but not the other way around. So then you're going to have to implement some of the kludges suggested.
So no, it's not going to be easy to recompile ancient code. It will take work and it could be a pain in the ass especially if you are not familiar with this ancient code. I've had similar experiences porting libraries onto platforms they weren't designed for and it's not fun.
Read the TrapC whitepaper. "TrapC removes 2 keywords: ‘goto’ and ‘union’, as unsafe and having been widely deprecated from use" If you use some random C library then chances are it contains either or both those keywords and will not compile against this C variant unless you remove their uses.
For example, people frown upon goto but it is legitimately used to denest complex code and ensure it jumps to a cleanup or error state that happens at the bottom of the function. For example, OpenSSL uses goto a lot:
https://github.com/openssl/openssl/blob/c1cd6d89a32d08d171b359aba0219357acf0c5cb/crypto/pkcs12/p12_npas.c#L72
And union is arguably worse since it doesn't happen in the scope of a function. It could be pulled in from external headers, used internally between functions, or might form part of the API itself. If it's used then you're on the hook to refactor the code to not use it. Maybe it's as simple as replacing union with struct and taking the memory hit, or maybe it isn't such as when the union is imposed over a buffer to make sense of it.
Either way you're not going to be compiling that code with TrapC until you change the code and hope you did it properly.
OK, I see now, the article mixes apples and pears to come up with a lemon like this:
The C/C++ community has responded with numerous proposals to move toward memory safety, including TrapC, FilC, Mini-C, and Safe C++, to name a few.
TrapC is not the future of C++, it is an evolutionary offshoot of C which may or may not be incorporated into a C standard (and it probably won't precisely because it removes goto and union).
Once again, the tech press should know better. This is Bjarne Stroustrup talking about the future of C++ to the C++ community. The only proposal that should be listed in an article about C++ is Safe C++.
goto is best applied sparingly but I'd leave it in on principle as the alternative is worse.
https://stackoverflow.com/questions/744055/gcc-inline-assembly-jump-to-label-outside-block
#include <stdio.h>
int
main(void)
{
    asm("jmp label");
    puts("You should not see this.");
    asm("label:");
    return 0;
}
or
#include <stdio.h>
int main(void) {
    asm goto (
        "jmp %l[done]" // %l == lowercase L
        :
        :
        :
        : done // specify c label(s) here
    );
    printf("Should not see this\n");
done:
    printf("Exiting\n");
    return 0;
}
> At least the Fortran libraries are safe
Probably off-topic and borderline irrelevant but I'm reminded that CTSS (Cray Time Sharing System[1], as opposed to MIT's similarly acronymmed Compatible Time-Sharing System) for the Cray-1 and Cray X-MP (upon which I had personal experience[2]) was written in FORTRAN -- or, more accurately, Lawrence Radiation Laboratory's variant, LRLTRAN.
As someone said, FORTRAN programs can be written in any language.
And, if I might add, vice versa.
_____________________
[2] The seizures have long since abated, though my right eye twitches uncontrollably from time to time, but thanks for asking.
There is technical debt in the existing bodies of code, but also some gained value from the years of use and bug-elimination*. Re-writing established code is not without its risks, so the decision whether to re-write or not is not always a straightforward one.
* This is known to be far from perfect as bugs are still found, from time to time, in quite old libraries.
When writing new code the risks of repeating well known** errors is greater and so adopting languages (or as mentioned here - compiler enforced code profiles) that prevent those errors is more desirable.
** Despite some coding errors being well-known for decades they continue to re-occur in new code; people make mistakes. Based on the evidence of continuing repetition of well known coding errors, relying on people to know that they should not do something is not a reliable means of ensuring high quality code.
Having new code written in languages that prevent certain types of error helps prevent the addition of more problems to the code base and allows V&V activities to focus on other error types. Using well-established libraries, even if they are written in a less rigorous language, avoids making other types of error (such as handling corner-cases in the requirements) that might arise during a re-write of the functionality. The hybrid code might not be perfect but will often be the most cost-effective way of reducing overall system errors and provide the best opportunities of delivering effective V&V and high quality code in a constrained development budget.
First you treat them as warnings so you see them every build, then as errors for each subsection as it's updated.
Then it literally won't compile if someone does that.
C++ toolchains have a long history of providing warnings for things people did in legacy code that can be replaced by better methods.
I've done this in C++ codebases that predate the 1998 standard. Others have even done it to codebases that started in C before 1989, and slowly moved to C++.
It's not just being ugly that is the issue; the only way to protect it is at runtime. e.g. this TrapC is imbuing every pointer with RTTI data, so if you have code that walks a buffer an element at a time, you're going to incur a runtime bounds check for every single element of the array.
A modern language would provide and encourage iterators or streams APIs to avoid this problem, and also relieve the client code from having to do any bounds checking of their own. But in C (and some C++) you're stuck with the shitty concepts baked into the language.
So TrapC is offering some protection from ugly code but it comes with a cost.
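For comparison, a small sketch of the iterator point in Rust (scale_indexed and scale_iter are made-up names): the indexed loop nominally checks bounds on every access (though the optimizer can often elide it), while the iterator form establishes the bounds once.

fn scale_indexed(buf: &mut [u32]) {
    for i in 0..buf.len() {
        buf[i] *= 2; // indexing: each access is nominally bounds-checked
    }
}

fn scale_iter(buf: &mut [u32]) {
    for x in buf.iter_mut() {
        *x *= 2; // iterator: the slice bounds are checked once, up front
    }
}

fn main() {
    let mut data = vec![1u32, 2, 3, 4];
    scale_indexed(&mut data);
    scale_iter(&mut data);
    println!("{:?}", data); // [4, 8, 12, 16]
}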
That's definitely the problem with C++.
I once reviewed some shared_ptr code like this: "shared_ptr<Foo> p2(p1.get());". So instead of having two shared_ptrs with a refcount of 2, each had a refcount of 1, since p2 was constructed from the raw pointer out of p1. This crashed at some random point in the future and it took ages to debug. Another time I had an IO thread which crashed because someone passed a "this" into a timer, but by the time the timer fired, the "this" in question had already been deleted.
These sorts of issues are caused by lack of lifetime and ownership checks and the language / compiler doesn't care.
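For contrast, a minimal sketch of why the first bug can't be written in safe Rust: the only safe way to mint a second owner is Rc::clone, which bumps the refcount; conjuring an owner out of a raw pointer requires unsafe.

use std::rc::Rc;

fn main() {
    let p1 = Rc::new(String::from("Foo"));
    let p2 = Rc::clone(&p1); // refcount is now 2, as intended
    // let p3 = Rc::from_raw(Rc::as_ptr(&p1)); // refuses to compile outside an
    //                                         // `unsafe` block; would double-free
    println!("{} owners of {}", Rc::strong_count(&p1), p2);
}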
So again, lack of experience is the language's fault because...?
It seems like you have people using boost ASIO, who haven't troubled themself to encounter https://en.cppreference.com/w/cpp/memory/enable_shared_from_this
A Proactor approach to asynchronous applications requires taking care of this; it's in the very early tutorials for Boost Asio.
Boost Asio is a high performance async library.
If you want to use the most high performance / sophisticated tooling, you take on some responsibility, not least to read the documentation.
If you can't be bothered to work through the tutorial for complex libraries, you may have a poorer experience, if only there was some remedy available to you.
This specific issue is an architectural detail arising from using the proactor pattern - this would have the same issue (the async completion token needs to live long enough to be invoked), and the same fix (heap allocate it with a ref counted smart pointer) in any language.
Let's say we were doing this in some other language, like Pascal, because I hate my life, or Rust because I was down with the cool kids: how do you think that insulates me from the need to understand the concurrent lifetime requirements of my resources?
Why are you salty that running before you can walk is not the best way to proceed?
You could just use the exact same library in synchronous mode, which hides all the sharp edges.
boost::asio::io_context io_context;
// Get a list of endpoints corresponding to the server name.
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], "http");
// Try each endpoint until we successfully establish a connection.
tcp::socket socket(io_context);
boost::asio::connect(socket, endpoints);
// Form the request. We specify the "Connection: close" header so that the
// server will close the socket after transmitting the response. This will
// allow us to treat all data up until the EOF as the content.
boost::asio::streambuf request;
std::ostream request_stream(&request);
request_stream << "GET " << argv[2] << " HTTP/1.0\r\n";
request_stream << "Host: " << argv[1] << "\r\n";
request_stream << "Accept: */*\r\n";
request_stream << "Connection: close\r\n\r\n";
// Send the request.
boost::asio::write(socket, request);
https://www.boost.org/doc/libs/1_87_0/doc/html/boost_asio/example/cpp11/http/client/sync_client.cpp
> Or Rust because I was down with the cool kids, how do you think that insulates me from the need to understand the concurrent lifetime requirements of my resources?
Because Rust guarantees that you'll do it without introducing undefined behavior, even if you don't fully understand it, which carries a side effect of making it much easier to learn. The compiler also tells you exactly why what you're doing is wrong and even suggests how you might fix it. This may be hard for you to believe, but you literally cannot screw this up if you're using Rust. Really. The language honestly delivers on what it promises.
This is all managed not only through lifetime annotations (not even a thing in C++) but also with the Sync and Send marker traits. The latter also allows you to use Rust's implicit (and ALWAYS cheap) destructive move semantics (also not a thing in C++) to send data between threads - and did I mention that it's trivial to combine that WITH concurrency? And it's fast? As an added bonus, Rust's async/await syntax is both a lot easier to write, and a lot easier to read and understand, than everything you just did there, in addition to being a lot easier to optimize.
Essentially, spending years of learning how to avoid segfaults in C++ has conditioned you to believe that this is necessarily harder than it needs to be. Every long time C++ developer I've worked with on Rust projects has been absolutely mortified with the way I quickly whip up code that allows one thread to write to the same memory that another thread has two or more tasks that may be concurrently reading and writing from, without ever deadlocking or segfaulting. I have to remind them that...everything is ok...just do a little talk therapy, maybe a fidget spinner or two, and everything will work out.
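A minimal sketch of the sort of thing being described, using nothing but the standard library: four threads hammer one counter. Arc provides the shared ownership, Mutex the exclusive access, and the Send/Sync checks are done by the compiler - drop the Mutex and it simply refuses to compile rather than racing.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1_000 {
                *counter.lock().unwrap() += 1; // exclusive access, enforced at compile time
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000); // no torn writes, no data race
}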
I don't have undefined behaviour in C++.
You might want to try again; your code might not be very good, but that's not an argument for me to use Rust.
If you don't fully understand what you are doing, then you are less competent to accomplish a task, or to accurately assess whether it has been accomplished correctly.
For example, being less accomplished with C++ would suggest your critique of it is less valuable, but when a skilled C++ developer of my acquaintance advocates for Rust over C for particular reasons, his critique, being fact-based, carries some weight.
When a person who has a shallow grasp of C, slates C, that's less valuable than Linux Kernel devs of many years standing, saying I think I'd like to use Rust in the Kernel.
I might still disagree with them, but that's an argument that can be had. You don't really have one, other than that you don't really understand what it should do, so you want to write it in such a way that it's unclear what it does do, but you are sure it's "safe" - not correct, but safe.
If you use simple code and the STL, it is very easy in C++ to write correct, memory safe, thread safe, portable, clean code.
Lock-free, concurrent code doing fork-join parallelism, memory safely, is trivial in C++.
C++ gets better over time; your C code can become C++, and can slowly migrate to later standards by deleting code. This example has over the years seen me discard threads (C++11), wrappers (C++20) and smart pointers (C++11); the core of it is unchanged from C++98, but the undifferentiated heavy lifting just keeps leaving my code for the standard library, and I'm very happy with that.
What you should notice about the code is what is absent: where is the memory management, the traits, the annotations, the thread management, the other offerings to appease a jealous god you seem keen to share with the missionary zeal of all proselytisers?
https://godbolt.org/z/E8n54Tnva
#include <iostream>
#include <thread>
#include <numeric>
#include <vector>
#include <memory>
#include <algorithm> // for std::fill
using worker = std::jthread;
int main(){
constexpr int max_workers = 4;
constexpr int slice = 1024 * 64;
size_t v1[ slice * max_workers] = {0};
size_t v2[ slice * max_workers] = {0};
std::fill(v1,v1+sizeof(v1)/sizeof(*v1),1);
std::fill(v2,v2+sizeof(v2)/sizeof(*v2),2);
{
std::vector< std::shared_ptr<worker> > workers;
for(int i = max_workers+1; (--i); ){
workers.emplace_back(
std::make_shared<worker>([&v1,&v2,slice,i](){
int end = i * slice;
for(int begin = end - slice; begin != end; ++begin){
v1[begin] *= v2[begin];
}
})
);
}
// implicitly join all threads
}
std::size_t dot_product = std::accumulate(std::begin(v1),std::end(v1), 0);
std::cout << "Got dot product of v1.v2: " << dot_product << "\n";
return 0;
}
Turns out if you use std::array over a plain array, from GCC 12+ onwards, you get free SIMD and std::fill can become std::array::fill.
Please show me this in rust and tell me where my opportunities for performance, clarity, portability, memory or thread safety are being neglected and how the Rust code improves on this.
https://godbolt.org/z/PsMr1xdfd
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>
#include <array>
int main() {
constexpr const auto max_workers = 4;
constexpr const auto slice = 1024 * 64 * 4;
std::array<short, (slice * max_workers)> v1{}, v2{};
v1.fill(2);
v2.fill(4);
auto i = max_workers + 1;
for (std::vector<std::jthread> workers; (--i);) {
workers.emplace_back([&v1, &v2, slice, i]() {
auto end = i * slice;
for (auto begin = end - slice; (begin != end); ++begin) {
v1[begin] *= v2[begin];
}
});
}
// implicitly join all threads
std::size_t dot_product = std::accumulate(std::begin(v1), std::end(v1), 0);
std::cout << "Got dot product of v1.v2: " << dot_product << "\n";
return 0;
}
The proposal is to add compilation flags to disable memory-unsafe C and older memory-unsafe C++, to be left with only the newer, more memory-safe C++ constructs and classes.
It seems a much easier way to make existing software memory safe than throwing it out and rewriting it in another language.
C and C++ do not have to be memory unsafe, since languages that are memory safe are still written in C and C++.
You just have to use the memory safe checks and can do so in C or C++.
But since bad programmers are part of the problem, all you have to do is have the compiler do it for you.
All you have to invent are some new names for initializing and checking pointers.
You could actually use the checks built into the fine compiler at your disposal:
https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html
I understand you might not have seen GCC or Clang; they are fairly widely used in the community, especially by people able to write code without obvious howlers which speak to lacking experience in fundamentals rather than language deficiencies.
Circle (the minor dialect change to the C++ LLVM compiler), Sean Baxter's excellent research, claims to have proven that all the best bits of the borrow checker are possible with C++ today without substantially changing the whole language.
You can mark the security-critical parts to undergo this stricter checking with keywords that are in effect the opposite of Rust's unsafe. This would obviously allow a suitably configured compiler to only allow 'safer' code, or otherwise error.
So a path does exist that seems compatible with what exists, but the existing standards committee structure and viewpoint still has its head buried in the sand. So maybe this recent announcement by Bjarne is to force them to choose a credible path. At the moment it is unclear what real-world problem the profiles idea will be capable of solving, or if it is enough to cover everyone's needs. Is it still a talking point, vapourware, or is it implemented in at least one of the big three compiler suites, so we can take it for a spin?
It's just a shame that Sean's work appears to have been self-funded, and has failed to find a commercial backer or to be made available as open source to all under existing compiler licenses.
`C's demise seems pretty hard to avoid, long term.` -- put down the skull bong and have a re-think.
There is a need for a systems programming language:
- C has fulfilled that role, and endured while an embarrassing parade of replacements have disappeared into disuse.
- C has maintained that, despite the best efforts of the C Standards Group and Industry Nonsense like MISRA to neuter it.
- System software in C is the infrastructure of so much of the world's medical, automation, control, sensor, (add word here) systems that you can pretty much negate every other technology.
Review the state of deployment of system software.
Review the life cycle period of systems.
Consider when the injection of Rust, or much more robust forms like Haskell (*), could optimistically arise out of the `< 1%` status.
Then I'll take your fiver.
(*) Haskell has unsafe just like Rust; the entire infrastructure of Rust rests on unsafe...
C is here forever. It was designed as a “portable assembler” and in that role it is pure dead brilliant; as a general purpose language, not so much… but the compiler was free or already paid for.
C++ on the other hand has always been my most hated language. The number of “improvements” that come with each new version just shows how wrong Stroustrup got it first time round.
C's role as a "portable assembler" has caused issues though, especially coupled with the standard libraries one expects in a C ecosystem. Hardware has had to be good at running C, and it's become increasingly difficult to make an SMP hardware environment faster, safe, and power efficient.
We have dabbled with different hardware architectures. The Cell processor was a good example, whereby one had a PowerPC core (so far, so normal) sharing an internal DMA bus with 8 SPEs with their own dedicated memory (not normal). Whereas with C one expects to be able to share data between all execution cores by passing a pointer to it, that's not the case in the Cell processor. One has to explicitly copy data from one memory to another, e.g. from the PowerPC's main memory to an SPE's memory, if the programmer wants to process that data on the SPE. One could better think about it as a network of computers, rather than one single machine. The pay-off was that the memory subsystem was far simpler, and a whole lot faster. The internal DMA bus ran at a ridiculous rate, as did the external memory bus. The transistors that would normally have become cache memory were instead SPE core memory, and because it wasn't cache (and therefore not having to store lots of cache state information), there was a lot of it per SPE (256kbyte, as opposed to most CPU's L1 cache of 32kByte).
Once one mastered it as a developer, there was nothing else to touch it; it took Intel about 10 years to match it, but even then Intel's CPU was still more power hungry than the Cell's comparatively paltry 80W.
Ok, so this was all C or a C-like language, but it was a long way removed from the usual machine environment that C is normally written to expect. The Cell processor was ditched by IBM largely because of this inertia in the software world, and not because it was a bad performer, nor because it had no hardware development roadmap. Had IBM carried on with the Cell, it'd have been ideal as a vehicle to "own" the AI market (instead of GPUs).
Yet another good opportunity wasted by IBM in the interests of short term profits...
> Ok, so this was all C or a C-like language, but it was a long way removed from the usual machine environment that C is normally written to expect ... [my emphasis].
Isn't that the issue, though? Isn't this saying that the problem is not so much with C -- which has few "expectations", beyond which it's "undefined behaviour" -- and more with the programmer making assumptions, perhaps implicitly, perhaps unwittingly, quite likely unstated, and quite likely in difficult ways to detect, about the environment their code may be deployed in? And why would the same not apply to... just about any other programming language with pretensions to portability and cross-platform use?
There's plenty of Rust operating systems out there that say that C is not a necessary component, going forward.
Whilst it's true that one must use Rust's "unsafe" to do some of the things that an OS must do (like, drive hardware), that doesn't mean that the whole OS kernel source code is within an "unsafe" section. The benefit is that you get as much guarantee of safety as possible whilst still being able to muck around with things like hardware.
Ah, the dynamically linked problem rears its head! But Redox seems to have made progress in this regard, see this news which is fairly hot off the press.
Come to think about it, dynamic linking (or at least, the benefits of dynamic linking) seem to be dying off in OSes where containerisation such as Snap is becoming popular (or at least, imposed). Windows has also kinda abandoned it too; there's still DLLs, but Microsoft resolved DLL hell by making it so that each application had its own cache of the DLLs it needed. Code might as well have been statically linked from the get go...
That is not what MISRA does - it basically says "This feature can only be used if evidence is provided to show that it is being used safely".
There are some areas where it can be a bit of a pain, as the authors have had to "hit" some well-behaved code to ensure that "bad stuff" can be detected decidably, but MISRA do try to avoid that as much as possible and improve their guidelines as new techniques are discovered.
I've been using MISRA for over 20 years, and the only real pain I've had has been with checkers reporting false-positives (that then have to be managed through documentation).
BTW - if you want to improve MISRA, you could ask to join their working groups.
I’ve used MISRA before. It serves a purpose.
The tricky thing is getting a good implementation. Some I’ve seen have switches for the compiler, but their standard library sources are woefully below that par. Like, not even close. I always thought it poor form if the tools didn’t follow their own rules…
MISRA C, last time I asked on their mailing list (which was over a decade ago) was somebody's opinion on what good coding was. There were no studies showing that programs programmed with MISRA were better (for whatever value of better) than those not.
MISRA bloats source code. A simple `if' statement, for example, expands from 2 lines to 4 lines by virtue of the redundant compound statement MISRA imposes. There are worse examples: try to write a MISRA equivalent of a search through a linked list. It takes three lines in C (the first line being the assignment of the search variable to the list pointer). It would take many more in MISRA C. The only argument against that is that MISRA C is rarely used in applications with linked lists. Bloated source code is more difficult to maintain because one is scrolling the source code up and down more.
MISRA C is used in the motor industry like the bible is used in Christianity. It is taken for granted, and one is expected not to question it.
No, C can never have a "demise" because an OS has to do things only C can do.
And it is trivial to fix C++.
All you have to do is provide an OS function that everyone uses to range check pointers before dereferencing.
I have been doing that for over 50 years and never had a problem.
The OS knows the location and size of the heap and stack, so there should be no problem.
If buffer overruns are the concern, it seems to me sizeof() works to tell you the length of the buffer, so a validation call should be easy to make?
> The failure of that experiment
Cough, there are those who take exception to that idea. Multics didn't sell many copies, but the literal experiment that was Multics worked, and it lived a reasonable lifetime: "Bull canceled Multics development after 21 years. Multics continued to be used by customers for an additional 14 years". And, as Tanenbaum wrote, "[MULTICS] was designed to support hundreds of users on a machine only slightly more powerful than an Intel 386" and it showed that was possible for an interactive system.
> The failure of that experiment drove the development of unix + C
Thompson and Ritchie both worked on/with Multics, but Ritchie has said he was influenced by CTSS; Thompson just took the ideas he really liked (hierarchical file systems and a shell as a simple user process), then they created something that would fit onto the kit they could actually get hold of, starting with an assembler on a PDP-7. In 2007, Thompson described Multics as overbuilt and overengineered - and compared to Original Flavour Unix, it was.
As with Unix, which has given rise to Unixy variants, Multics' legacy lived on in Pr1mos and others.
> Multics, which by its name really is many, was written in PL/I.
Well, PL/1 - and assembler.
> Extraordinary claims require extraordinary evidence.
Is El Reg extraordinary enough for you? The wild world of non-C operating systems
Although, as C was only created in 1972 to '73 (by which time it had struct! yay), K&R was only published in 1978 and the PCC was released in 1979 - that means that *any* OS, other than Unix, being written in, oooh, the late 1970s and before was, by necessity, NOT written in C.
Whether or not those OSes are still running to the current day isn't really important to the argument - especially given that "the majority of computer users" are *not* using "the majority of the different hardware architectures" that are still in use, let alone those that are now used mainly by die-hards.
FWIW I've had hands-on experience with TRIPOS (written in BCPL), AmigaOS (BCPL with C on top), Perq (Pascal), Apple ][ (UCSD Pascal), various Forth boxes (Forth), Pr1mos (FORTRAN, PL/P; it also used Modula-2 but probably not the version I was using).
We were taught about (but sadly could not afford to get the hardware for) Lilith (Modula-2, Oberon), the variants of LISP machine (LISP), the Xerox machines such as the Alto (BCPL, Mesa, Smalltalk).
As these are mostly Old Geezer OSes, I'm sure others can fill in some more recent examples[1]
[1] hmm, TempleOS isn't written in C, although it is probably only its author who considers it to be written in The Language Most High!
Generally speaking Rust is not a hard language to learn. It's very easy to install, has some great plugins for VS Code, has an easy toolchain, is very clean and terse, has a C-like syntax, and it's easy to get up and running in no time. A lot of the skills and mindset are transferable from C/C++ and also back to C/C++. I certainly don't put learning it beyond the capabilities of most programmers and I expect most could become proficient in a couple of weeks.
That doesn't mean there aren't areas that are bound to confuse people, but could probably be overcome with a tutorial written specifically for people from that background. Things like how lifetimes work, borrow checks, what the equivalent of smart pointers are, how move assignment works, how to do polymorphic style functions and so on.
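To give a flavour of a few of those mappings, here's a rough sketch with made-up types (the comparisons are loose analogies, not exact equivalences):

trait Shape { // a trait used as a trait object is roughly an abstract base class
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
}

fn main() {
    let unique = Box::new(Circle { r: 1.0 });         // roughly std::unique_ptr
    let shared = std::rc::Rc::new(Circle { r: 2.0 }); // roughly std::shared_ptr
    let moved = unique; // moves are implicit and destructive;
                        // using `unique` again is a compile error
    let as_shape: &dyn Shape = &*moved; // dynamic dispatch, like a virtual call
    println!("{} {}", as_shape.area(), shared.area());
}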
If we're hoping to make progress with memory safety, we have to accept that at present the majority of compilation units will not be memory safe, and that in future it would be unduly constraining for all memory-safe compilation units to be written in the same language.
We really need to focus on the semantics: the syntax du jour is then a matter of detail.
Indeed. The best thing C/C++ guardians could do for the future of safe software is to work with Rust designers and build a suitably consistent, cross-language, safe memory model, based on Rust's but presumably tweaked to accommodate C/C++ and others. David Chisnall is right: even if all the world suddenly used memory safe languages, we won't all use the same one (why would we?), so the glue will still be the next weak point, and while hardware may have a role to play it isn't the general answer. C++ isn't going anywhere anyway, but it could make itself more relevant than ever by getting ahead of the curve rather than just scrambling to bolt on some optional memory safe features in desperation.
Sadly, the "we're under attack" bunker mentality, the superiority-complex C/C++ proponents have because so much of the world runs on those languages, and the typically uninformed dismissal of Rust as an "ooh shiny", makes that scenario seem far-fetched.
Although C++ promoted itself as "a better C than C" at one time. Were that mentality to have persisted, perhaps the languages wouldn't have diverged in such odd and NIH-syndrome ways.
I'm looking at you, _Bool
...
Stroustrup fails to see (or doesn't want to see) the simple fact that you can't retrofit C++ to become a memory safe language. A memory safe language requires a from-the-ground-up approach like Rust has been doing since day one.
He simply has to accept that C++ will one day be supplanted by Rust and other memory-safe languages but like most aging men he's trying to hold back the years and starting to look silly doing so.
I foresee the U.S. government mandating memory safe languages and microkernel operating systems for most of its critical computing tasks in the near future.
> Stroustrup fails to see (or doesn't want to see) the simple fact that you can't retrofit C++ to become a memory safe language. A memory safe language requires a from-the-ground-up approach like Rust has been doing since day one.
Well, Safe C++ (https://safecpp.org/draft.html) shows that you can have a C++ version of Rust, version that can be used as a migration path toward a memory safe language (IMHO this is a better alternative to Stroustrup's Profiles).
> I foresee the U.S. government mandating memory safe languages and microkernel operating systems for most of its critical computing tasks in the near future.
Uh huh. Just like the DoD mandated "ADA FOR EVERYTHING" - remember how well that worked out?
I saw all sorts of things that had Ada call C that never came back, including missile guidance software. It ticked the "written in Ada" checkbox though.
That is ridiculous, because obviously C can be memory safe, because Rust is written in C and has to be.
Whether or not any code in any language is safe or not depends on if calls are being made to initialize and check pointers.
Which obviously can also be done in C or C++.
The only trick is to automate it so that people do not have to be taught how to manually do it.
It's not really about pointers.
In many cases the compiler can prove a pointer check is not needed - if it's checked in all callers, the callee doesn't need to waste time doing it again.
In other cases some other thread could reallocate the memory after the check but before a usage. This used to be basically impossible because there was only a single CPU core, but that's not been true for a very long time.
The first requirement is really clear ownership.
The "borrow checker" is a great marketing name for what C++11 calls "smart pointers and references". The Core Guidelines came a few years later, and can be enforced in most toolchains.
The second requirement is recoverable failure from a false invariant - the application must be able to "undo" something that turned out to be impossible, or report its failure to the user and developer to fix it. Rust cannot do that by design - it just aborts, deleting user data.
> The "borrow checker" is a great marketing name for what C++11 calls "smart pointers and references". The Core Guidelines came a few years later, and can be enforced in most toolchains.
The borrow checker and smart pointers/references are not the same. In C++ you can use variables after they were moved, in Rust you can't (e.g. std::unique_ptr can be used after it was moved). Also in Rust the move operation is managed by the compiler, not by the user as in C++. The latter is problematic: the programmer can forget to move the pointer, can forget to zero the moved pointer, and last but not least, a zeroed pointer delays the detection of the problem to runtime.
And the case with the core guidelines is that their enforcement is completely optional (and yes, based on my long experience, this is a huge problem).
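As a tiny sketch of the difference (E0382 is the compiler's actual error class for this):

fn main() {
    let a = Box::new(42); // roughly a unique_ptr
    let b = a;            // ownership moves to b
    // println!("{}", a); // error[E0382]: borrow of moved value: `a`
    println!("{}", b);
}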
Rust's borrow checker and C++'s smart pointers or references, or the two combined, are in no way alike. Part of the hostility from C++ proponents is because Rust's concepts don't translate easily into C++ and so represent a blind spot for C++ coders. There's no better solution than first hand experience with Rust and its borrow checker before trying to opine on it.
> recoverable failure from a false invariant
This is just a much deeper pool than you're making out. Rust allows some invariant checking at compile time. It's working on other compile-time checks that probably won't fully replicate the full power of C++ templates (sadly) but should bring other more practical verification features (hopefully, eventually). It also encourages good error propagation at runtime, and if you don't explicitly handle an error because you declared that you weren't expecting it yea it'll panic. That's the safest thing to do. If you really want a transactional setup it's a design problem that I think is out-of-scope for any generalist language. Your transaction level is your application, unless you purposely build it otherwise.
> The borrow checker and smart pointers/references are not the same. In C++ you can use variables after they were moved ... The latter is problematic: the programmer can forget to move the pointer, can forget to zero the moved pointer, and last but not least, a zeroed pointer delays the detection of the problem to runtime.
Since he's talking about smart pointers, I think the poster is referring to moving things only via std::move and r-value refs, in which case it is a requirement that anything moved from still be in a valid state. You don't need to zero anything manually; if you std::move from a std::unique_ptr then the latter is guaranteed now to be set to nullptr by the language specification.
Of course other, more informal, meanings of 'move' remain possible — including anything you want to do with a raw pointer obtained via either .get() or .release() — and the compiler can't currently verify that your custom type with your custom low-level code is obeying the 'must be in a valid state after being moved from' rule. So those are problems, especially in legacy code.
> Since he's talking about smart pointers, I think the poster is referring to moving things only via std::move and r-value refs, in which case it is a requirement that anything moved from still be in a valid state.
It is up to the programmer to ensure that requirement; that's the problem! The programmer can code a wrong move ctor or move assignment operator. Also, the usage of std::unique_ptr remains optional.
> You don't need to zero anything manually; if you std::move from a std::unique_ptr then the latter is guaranteed now to be set to nullptr by the language specification.
I know how std::unique_ptr works (and how it is implemented), I was referring to the cases where programmers have to code their own movements.
> std::unique_ptr can be used after it was moved
No, it cannot. It's explicitly defined to be automatically nullptr after moving.
That is, as they say, the entire point(er).
Same with all the other "smart" pointers.
And yes, enforcing core guidelines is optional. This is an advantage, because you can start enforcing them in parts of an existing codebase without having to upgrade everything at once.
The std::unique_ptr, after being nulled because what it was holding has been moved, is in a valid state and can be set to point to something else if you like — whether from a raw pointer via .reset, by assignment from some other std::unique_ptr that you've moved from, or via std::make_unique.
There's nothing unsafe about that. And I'm unclear what you'd even mean by setting a std::unique_ptr to nullptr as distinct from setting it to point to nullptr.
(Though, for clarity: I'm only in this conversation because I think what C++ does has been misdescribed, giving a misleading impression. I'm insufficiently-experienced in the latter to have a meaningful opinion on what Rust does better or worse.)
> It was bootstrapped in OCaml.
Yeah, and what was your OCaml written in, eh, eh? Yeah, it all comes down to C in the end! Phhhffttt.
Which is an entirely pointless statement. As would be any claim that "Language X can't be safe because its compiler was written in C".
What the compiler is written in has no bearing whatsoever on the abilities (or lack thereof) of the language being compiled (or interpreted or any of the weird half-way houses we get with runtime VMs and JIT and...).
If your compiler is written in C and happens to have a buffer overflow, triggered when you use a variable name longer than 44 characters[1], then it won't generate any runnable code. Bugger. If you trim back to 43 characters then it'll generate code - and that generated code does not magically inherit the 44-character problem or any other *possible* problem that the C code might have buried inside it.
Unless you believe that compiled code has some weird Lamarckian relationship with the code of its progenitor compiler?
[1] totally random choice of possible problem, honest.
While not relevant to the point, as I understand it, OCaml currently has components written in C but that's just because that's what their authors wanted to write them in. I believe OCaml was originally written in Caml, which was written in LISP, which predates C.
Implementation language aside, Rust may have intentionally garbed itself in C++ syntax to make itself less off-putting to mainstream programmers, but it's the latest in a separate lineage tracing back to 1973's ML (one year after C and not influenced by it), which itself was inspired by ISWIM from 1966. (ML was designed with influence from ISWIM. ML begat Caml. Caml begat OCaml. OCaml begat Rust.)
(Seriously. The bits of Rust you recognize were intentionally taken from C++ to look more familiar, and the bits you don't are things C++ has no concept of, which were taken from OCaml syntax.)
To support your point, even if it weren't, the fact that the C++-based re-bootstrapping mrustc compiler can be used to get the regular rustc compiler to recompile itself and produce bit-for-bit-identical output demonstrates that the implementation language of a compiler has no relation to the machine code it outputs. Interpreters can be made unsafe by their implementation language, but compilers don't have to put bits of themselves into the final binaries.
Rust is descended from ML?
Eek.
I never did get a program to work using ML[1]; I'm probably doomed with Rust :-)
[1] *entirely* down to me, not ML. I'll leave with a feeble excuse about that being when I left the Computer Lab, and the ML environment, to go into normal employment and read K&R...
That's where the weird explicit lifetime syntax comes from. 'a in OCaml is equivalent to <T> in C++ and, on an abstract level, lifetimes are a special type of generic/template parameter.
It's also where un-C++-ish things like match, let, ->, Some, None, Option, and () as the void value come from... though OCaml doesn't capitalize `option` and Rust does for consistency.
Still, Rust is very C++-ish as long as you're willing to .to_owned()/.clone() or use Rc/Arc to accept less-than-perfect performance to dodge lifetime issues, so it's not that difficult to learn to the point of being productive as long as you don't try to run before you walk... it just has a known issue with making people want to engage in premature optimization by being too good at "make costs explicit".
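A tiny sketch of both points (first_word is a made-up name): the lifetime 'a is declared in the same angle brackets a type parameter would go in, and Option/match are the ML-flavoured bits.

// `'a` ties the returned slice's lifetime to the input's, the same way a
// generic parameter ties types together.
fn first_word<'a>(s: &'a str) -> Option<&'a str> {
    s.split_whitespace().next()
}

fn main() {
    match first_word("hello world") {
        Some(w) => println!("first word: {w}"),
        None => println!("empty input"),
    }
}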
Some maniac seems to be hitting the down vote button on a bunch of posts that make this point so: here's the Rust compiler's repository. Everything I looked at was Rust.
The default LLVM backend exists because of how many man-years it took to get LLVM's optimizers as close to GCC's as they are. rustc supports multiple backends.
If you want an all-Rust stack and don't mind a Go-like "prefer fast compilation over fast code execution" dynamic, try the rustc_codegen_cranelift backend. It doesn't have support for the `std::arch` subset of the SIMD intrinsics yet, or stack unwinding, but once those are implemented, they want to make it the default for debug builds to improve the developer experience.
Outside the scope of official projects, there's also a GCC backend in development, as well as one which compiles to .NET bytecode for easy portability when incorporating Rust-based libraries into projects written in CLR-based languages.
There is no Rust compiler beyond the Rust frontend to LLVM worthy of discussion; you might as well posit GNU Hurd as a Linux competitor - sure, it's technically an example, but it's wildly unsuitable for like-for-like comparison.
The LLVM backend exists because Rust is a toy with no significant software to speak of, and the C++ backend generates better code than the borrow checker has enabled, even with the magic of pattern matching and "safety".
I accept you can generate awful performing code using a backend in Rust, which still amounts to slightly less than one compiler.
I tried. However, you're clearly not arguing in good faith, so I see no need to continue responding.
It's well-established that you can write a compiler capable of generating a given output in any language... it'll just run faster or slower, and be easier or more difficult to debug and maintain. If you don't want to accept that the Rust devs chose to use a C++-based dependency because they have a "release a minimum viable product, then iterate" philosophy, that's not my problem.
> It's well-established that you can write a compiler capable of generating a given output in any language...
That is true but irrelevant to the actual existence of a Rust compiler used in anger by the Rust people, which even you admit is not credible for production.
Bluntly you claim all these improvements for basically a cheap suit on a C++ compiler backend. It's so wonderful that we use your C kernel, and your C++ backend.
Write your own foundations before claiming you've got a real compiler worth sneering at.
C++ already has nearly every safety feature of Rust.
Currently they are optional, but you can make using the legacy unsafe things warnings or errors as you go.
I would appreciate it if you could suggest specific safety features Rust has that C++ doesn't.
This continued "rust safe use rust" makes me both suspicious (because I know it is objectively false) and annoyed.
Last I heard, you can't reasonably implement the typestate pattern in C++ because there's no mechanism for guaranteeing at compile-time that references to old state objects won't be held and used. That role is filled by the borrow checker and #![forbid(unsafe_code)] in Rust.
(The typestate pattern allows enforcing correct traversal of finite state machines at compile time. It's used in things like the Hyper HTTP library to catch stuff like "Can't set header. Request/response body already streaming" at compile time.)
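For the unfamiliar, a minimal typestate sketch (the types here are made up): each state is its own type, and transitions consume self, so holding onto and using the old state is a compile-time error, with no unsafe code involved.

struct Draft {
    headers: Vec<(String, String)>,
}

struct Streaming;

impl Draft {
    fn new() -> Self {
        Draft { headers: Vec::new() }
    }
    fn header(mut self, k: &str, v: &str) -> Self {
        self.headers.push((k.to_string(), v.to_string()));
        self
    }
    // Consuming `self` moves the Draft away; it can never be touched again.
    fn start_streaming(self) -> Streaming {
        println!("{} headers set; body now streaming", self.headers.len());
        Streaming
    }
}

fn main() {
    let req = Draft::new().header("Host", "example.org");
    let _stream = req.start_streaming();
    // req.header("X", "Y"); // error[E0382]: use of moved value: `req`
}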
I'm saying that, unless I missed something, Boost Spirit has to make up for the lack of a borrow checker using runtime checks.
Hell, I've READ articles on implementing the typestate pattern in C++ and they stop at "This is as far as we can go. It's still possible for X to hold onto a reference to a stale state here and attempt to call a method on it".
(By contrast, the Rust stuff has no problem with that but is apologetic that Rust only has an affine type system, not a linear one, so ways to check at compile time that you're only calling the destructor from intended-to-be-terminal states are hacky.)
If you can't implement a state machine in C++, that might not be the fault of C++.
If you are constraining yourself to a particular implementation technique, I'm willing to accept there are things that might be unergonomic.
Boost has this
https://www.boost.org/doc/libs/1_87_0/libs/statechart/doc/index.html
My advice with C++ is try to write simple code, don't try to write monadic head scratchy stuff. Clean simple data models, be explicit, don't worry about performance.
It's far easier to write something which parses a textual description than to try to force everything into compile time - do it at build time, and use https://en.cppreference.com/w/cpp/preprocessor/embed
to pull it in, so you can have nice simple code.
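For what it's worth, a minimal sketch of that build-time approach - the file name fsm_table.bin is made up for illustration, and #embed needs a C23/C++26-era compiler, so treat this as a sketch rather than something every toolchain will accept today:

```cpp
#include <cstddef>

// A (hypothetical) build step has already compiled the textual description
// down to fsm_table.bin; #embed splices its bytes in at preprocessing time.
static const unsigned char fsm_table[] = {
#embed "fsm_table.bin"
};

static const std::size_t fsm_table_len = sizeof fsm_table;

// Runtime code then just indexes a plain array - nice simple code, no
// template machinery in sight.
```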
The purpose of the typestate pattern is to achieve certain guarantees at compile time.
Of course you can implement anything in any Turing-complete language. The question is whether you can get the compiler to prove, at compile time, that certain invariants will hold across all possible inputs.
As Dijkstra wrote in 1972 in The Humble Programmer when arguing for more ability to prove program correctness, "Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence."
If you insist on doing things in a particular manner, you might find that uncomfortable.
So are you trying to accomplish the desired goal, or is your goal to utilise the desired technique?
C++ is unashamedly biased towards the former, and away from the latter.
If you want to do compile-time computation in C++, you have been able to do so from the very beginning; it's not that pleasant or ergonomic, despite improvements over the years.
If you want compile-time selection based on type traits, https://en.cppreference.com/w/cpp/types/enable_if lets you do so.
It seems you want concepts https://en.cppreference.com/w/cpp/language/constraints
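A small sketch contrasting the two mechanisms linked above - the same "integers only" constraint in its old and new spellings (the function names are mine, purely illustrative):

```cpp
#include <concepts>
#include <type_traits>

// C++11 style: compile-time selection via std::enable_if (SFINAE)
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
T twice_legacy(T x) { return x + x; }

// C++20 style: the same constraint expressed as a concept
template <std::integral T>
T twice(T x) { return x + x; }

int main() {
    twice_legacy(21);
    twice(21);
    // twice(1.5); // rejected at compile time: double is not std::integral
}
```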
If you want something specialised to your use case in the most ergonomic fashion, write a DSL.
If you want something maintainable, portable, write it in the most simple straightforward fashion possible.
Why make your life more difficult and your builds slower? Do the work upfront, compile into a nice representation, and use that.
This, I think, is why the Rust and C/C++ communities are so far apart.
As a Humble programmer, I think you're perhaps in need of simpler code.
I'm sure you can pull off what you are requesting in C++, I'm also sure it will be somewhat unergonomic.
I'm reminded of an old vaudeville line.
Patient: "Doctor, doctor - it hurts when I do this."
Doctor: "Then don't do that."
Why do all Operating Systems have to be written in C?
There's no legal requirement for it as far as I know, it's just easier to do it in a high(er) level language than in Assembler.
As for Rust or any other compiler needing to be built in C?
Absolutely baloney of the finest grade.
It's probably possible to do it in REXX if you want...
I'm a bit rusty on REXX, so I won't attempt it.
I did write a compiler for a simple structured language in Ada once upon a time, though...
I am not a young bloke. I learned C in 1991 (and was a teacher's assistant too). Since then I've heard calls to increase memory safety both in C and C++. And I suspect those calls were even older than that.
I read time and time again in BYTE Magazine and Dr Dobbs proposals for memory safety and garbage collection for C and C++
The standards bodies did not pick up any of those. And no compiler vendor presented a proprietary solution (but plenty of proprietary solutions for other stuff).
And now, after 34 years of inaction (from my perspective; I am sure the total inaction time is even bigger), ¿NOW they feel threatened and want the standards bodies to pick up the pace?
Cry me a river. At least C is mildly useful; C++ can die in a (dumpster) fire.
¡Real Memory Safety from the ground up (instead of bolted-on memory safety after the fact) for the win!
> And no compiler vendor presented a proprietary solution (but plenty of proprietary solutions for other stuff).
Apple supported a garbage collector for C; you marked your pointers as __strong (i.e. owning) or __weak (i.e. automatically nilling) to opt in. It was primarily marketed as being for Objective-C but worked in regular C too.
It wasn't supported on iOS though, so it failed to be at all relevant during the sudden explosion of Objective-C programmers circa 2008, and was deprecated in favour of automatic reference counting, which, unlike garbage collection, is for Objective-C objects only.
All runtime support is long-since gone; it was removed about a decade ago.
Garbage collection is explicitly unsuitable for anything that needs time or lifetime guarantees, because the developer has no control over when an object actually gets destroyed or the memory recovered.
This makes some things needlessly complicated - e.g. in C#, sometimes I must explicitly destroy, sometimes I must not, and there's no visible indicator of the difference anywhere in the application. I have to look up the documentation and hope it's accurate.
It also means (parts of) the application just stop responding for a while, effectively at random.
So nobody used it, as ARC and RAII are provably better.
I sympathize with the C/C++ folks, I really do. Even though I've designed and implemented a couple of replacement languages (Draco and Zed). If someone is really good at something, they are likely happy with that. Any change is going to take away from that. Those at the very top of the game will be impacted the most. Those just learning may not much care what language they use. If you know the syntax, semantics and rules of several languages, you probably aren't *really* good with any of them. "Jack of all trades, master of none." The human brain can only do so much.
So, forcing C/C++ wizards to change to another language is going to cause them a lot of pain, and reduce their productivity. So, adding rules, etc. to the language they are a wizard in is perhaps less painful for them. BUT. It *must* be enforced. Expediency or a bit of efficiency cannot be used as an excuse to violate the new rules. And yes, management must be in agreement on this, because it will affect their plans and schedules, and *those* cannot be allowed to force rule violation.
If someone comes up with a new language that can solve the problems, I believe it should not attempt to look too much like C/C++. The reason is that the similarity can trick the experienced programmers into doing things that the new rules will prevent. => frustration. But, I believe the "style" of the new language syntax should be similar to what the programmers are used to. Changing the style of function headers, declarations, etc. shouldn't be done just because you can. There must be real advantages. For example, I really don't like the "." cascades that Rust seems to encourage - the code is ugly to my eye.
Most folks know that C/C++ declarations are awful. The "cdecl" program that translates between C syntax and English was written long ago, so the need isn't a new one. Swapping things like which side the "*" goes on makes a big difference - lots less parentheses needed.
If you know the syntax, semantics and rules of several languages, you probably aren't *really* good with any of them.
Personally speaking I've found the opposite. The more languages you learn (to a point) the greater appreciation you have of their actual design, because you see and think about the differences, and that in turn helps you understand the power of each language feature more completely and pick the right design pattern for your problem. Natural language multi-linguists will tell you much the same.
It is difficult to 'retool' the brain quickly from one language to another though, that's a barrier that probably keeps many coders monolingual.
>” Those just learning may not much care what language they use. If you know the syntax, semantics and rules of several languages, you probably aren't *really* good with any of them.”
Much depends on what you mean by “good”.
I know from my experience and comparison with friends - having previously studied several languages and used Algol-68 and Assembler (x86) in anger - that learning C was trivial. However, because of the environment I learnt C in (a small, fast-growing startup), within a year my understanding and proficiency in C far surpassed those of friends who worked in much larger and less demanding organisations.
However, today, several decades later, I know because I don’t use any of these languages to any great extent, my skill level has probably reverted to competent beginner.
From TFA: it's clear the issue is not just slow progress but the absence of a public narrative that can compete with the tech industry's adoration of Rust.
"Narrative" == modspeak for, "story" (which may be partial, or complete, bullshit).
We don't need PR wars adding noise to the issue.
C++ (and C, and other computer languages) are not things which need to be "defended". They're either fit for their purposes, or they are not.
The bounty of buffer overflow and unsanitized/unchecked input string vulnerabilities we're seeing these days is the fruit of decades of "economic race-to-the-bottom" managerial decisions which have reduced/eliminated proper instruction and training of programmers, demanded projects be completed in unreasonably-short time, outsourced programming to offshore companies, whose managers also have reduced/eliminated proper instruction and training of programmers, and demanded projects be completed in unreasonably-short time.
Adding poor working conditions and pay (YMMV, substantially) incentivises programmers to not give a flying fuck, so those who know they ought to put in various checks, do not, because they answer to managers whose primary concern is being able to tick the "done" box on some spreadsheet, come Friday afternoon.
We don't need PR wars adding noise to the issue.
Some PR is needed. Even the tech press who should know better tars C++ with C's brush and Rust proponents capitalise on that too because, well, it makes their advocacy easier. (Why is Rust allowed PR but not C++?)
I agree that in the wrong outsourced hands, no language is fit for purpose, including Rust.
C++98 certainly depended on manual memory management. Newer standards, that people may or may not be using, provide an ever-changing set of memory management options. Looking for problems in Rust requires primarily looking at unsafe areas; whole C++ code sets built by the most careful programmers are based on versions of C++ that didn't have these memory management tools. The advantage of C++ drops a lot when requiring rewriting of older code to modern standards.
If you rewrite, you're throwing away decades of corrections to the actual algorithms and business logic.
You will make mistakes when trying to do that. Worse, you will not detect the mistakes because you can't verify anything until large sections are complete.
If you refactor it in the same language to swap legacy unsafe calls for modern safe(r) ones, you are less likely to make mistakes. On top of that, any mistakes you do make are easier to detect because the thing still runs while you're doing it.
I've recently swapped out 8-bit integer maths for 32-bit floating point in a C++ application. I could be confident that I'd found everything because I could use strong typing to get the compiler to tell me everywhere the 8-bit type was used, and thus make the decision on the proper conversion.
The other 50k lines of code are untouched.
This means all the changes I have made are easy to find, and thus mistakes made can be corrected.
If I had instead rewritten the application in some other language, none of that would be true.
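For anyone wondering what that looks like in practice, here's a minimal sketch of the strong-typing trick described above (the Sample type and gain() function are invented for illustration, not taken from the actual application):

```cpp
// Wrap the converted quantity in a distinct type so the compiler lists
// every place the old 8-bit representation was used.
struct Sample {
    float value;  // was: std::int8_t
    explicit Sample(float v) : value(v) {}
};

float gain(Sample s) { return s.value * 1.5f; }

int main() {
    // gain(42);  // error: no implicit conversion - forces review of each call site
    Sample s{42.0f};
    return gain(s) > 0.0f ? 0 : 1;
}
```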
> C++98 certainly depended on manual memory management
It actually didn't; we just used to roll our own smart pointers (mainly a shared pointer, because we didn't have the correct move semantics required for unique_ptr).
The problem was that all of our smart pointers were different per project and it was a pain to marshal between them.
The legacy of C++ is what leads people to think it is much less safe than it is. 98% safe for C++ and Rust when dealing with raw C libraries is certainly workable.
The fact you had to write your own smart pointer proves C++98 didn't provide safe alternatives to traditional manual memory management. And I have enough C++ books on my shelves that describe the remarkable complexity of trying to write a decent smart pointer to suspect your implementation was as subtly broken as most were.
No other language seems to have so many books describing the subtle and not-so-subtle gotchas as C++ has. Add in the arcane delights of template meta-programming, along with the massive changes in each C++ standard, and it's no wonder so many C++ codebases are maintenance nightmares.
> And I have enough C++ books on my shelves that describe the remarkable complexity of trying to write a decent smart pointer to suspect your implementation was as subtly broken as most were.
Whilst it certainly can't have been any worse than C++98's attempt via auto_ptr, broken move semantics are not such an issue, so long as it was safe. And it was.
I find it strange that so many books did not cover smart pointers in that era. Again, this contributes to C++'s bad name for "manual memory management", when really there was just an extremely large number of weaker developers trying to use ANSI C with a C++ twist rather than embracing it properly (it didn't help that the language was complex, so fairly few compilers supported its features correctly). Again, C++'s popularity is its blessing and its curse when it comes to making a name for itself.
They can but they don't get away with it because there's no excuse when it comes to code review time. You don't need unsafety in Rust to get performance (outside the most obscure low-level situations). You also need the overhead of reference-counted smart pointers much less frequently than in C++.
> C++98 certainly depended on manual memory management.
`std::auto_ptr` existed in C++98 but has the ignominy of not only having been deprecated since but actually removed — removals are extremely rare in C++ world and more or less flag that something was an active hazard.
(In this case, it's because std::auto_ptr had std::unique_ptr-style single-owner semantics but, without std::move to flag programmer intent, it moved ownership upon assignment. So `=` was a _move_ operator. Which really screwed with generic code, since it's pretty normal to expect the assignment operator just to assign, i.e. to set the value of the thing on the left without mutating the thing on the right.)
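A tiny sketch of the hazard next to its modern replacement (build with -std=c++14 or earlier, since std::auto_ptr is gone from C++17 onwards):

```cpp
#include <memory>
#include <utility>

int main() {
    std::auto_ptr<int> a(new int(42));
    std::auto_ptr<int> b = a;  // looks like a copy, silently moves: a is now null

    std::unique_ptr<int> c(new int(42));
    // std::unique_ptr<int> d = c;          // error: copying is deleted
    std::unique_ptr<int> d = std::move(c);  // the transfer must be spelled out
}
```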
I look forward to one of these experts proving their argument by demonstrating an operating system written in Python and a Playstation game written in Javascript.
Rust is like the newly graduated whizkid arriving at its first job and telling all the old guard that everything they know is old hat and he's going to improve it all with the trendiest new shiny. C/C++ are the people who know that the whizkid hasn't yet had to deal with the real world as it is - imperfect and tricky - and that the new shiny isn't able to deal with those imperfections.
Show me a PS1 game written in C++23. Prove that C++ is better than BASIC by writing a game for Altair 8800 in C++.
If we're talking about PlayStation 5, there's no reason there can't be if someone cared to port it. There are commercial Switch games in Python, like Milk Inside A Bag of Milk Inside A Bag of Milk.
More importantly, prove that C++ is better than Python by teaching CS101 in it. Prove that C++ is better than JavaScript by writing a web frontend in C++. Python and JavaScript aren't competitors for C++ in the OS space.
Also importantly, people are doing things with JavaScript and Python. Most popular on GitHub in order: Python, JavaScript, TypeScript, Java, C#, C++. Most jobs (IEEE data) in order: Python, Java, TypeScript, JavaScript, C#, Shell, C++.
We could talk about the old people sitting around telling themselves that Fortran on mainframes is fine, OSes have to be written in assembly, PCs won't go anywhere, WWW is just another toy for Internet newbies, real programmers don't need source control, nobody is ever going to read a webpage on a mobile phone, games on phones aren't worth writing, nobody will ever write an OS in C++, nobody is going to use this cloud thing, etc. etc.
Or we could acknowledge that as time goes on, new tools become standard, even while many are flashes in the pan, and it's adapt or die. If someone can't recognize why JavaScript and Python are important now, they're about ready for the retirement homes with the Fortran programmers and their punch cards. It's not PR; those languages have held top spots for years because they do what people need done.
> More importantly, prove that C++ is better than Python by teaching CS101 in it.
CS101 courses change languages on a whim. Whatever is going to be (freely) available at the time (to those students) and will get the job done. Assembler, Fortran, Lisp, BCPL, Algol, Simula, Pascal, C, C++, Java, Python, Whatever-comes-next.
I was taught ANSI C/Posix in CS101.
Which imperative language is chosen doesn't really matter, as long as it has scalars, nullable and non-nullable reference types, recursive functions and some analogue of plain-old-data structs and preferably classes.
It's the concepts that matter.
For a time it was Java, but Oracle broke that.
I understand that C# and Python are both popular now, though Python suffers from its weak type system and significant whitespace, and C# suffers from its limited portability. Neither of those really matter at CS101 level.
Personally, I think Python is possibly best for that level of course because it's a very good "glue" language. Many people taking the course aren't going into actual software engineering, but will want "glue" to stick their "real" research together.
I'm not sure whether functional or declarative languages get taught at that level these days - those paradigms seem to be considered "hard".
> time goes on, new tools become standard, even while many are flashes in the pan, and it's adapt or die
New tools become an addition to the standards and you have decades (probably your entire working life) of codebases written in the extant tools to work on[1], if, for whatever reason, you don't abandon it all for the latest (but not necessarily the greatest) thing.
[1] and as time passes and you are one of the few who still can work on those projects you'll be in demand, more so than "pick any one of these replaceable programming units".
operating system written in Python and a Playstation game written in Javascript
An awful lot of games these days are written in memory-safe languages, just with tight, well-tested low-level APIs in C++ for graphics underneath. You should look at what Eve Online achieved with Python fully 22 years ago, pretty much just on the back of implementing light threading and safe concurrency very well. World Of Warcraft? Mostly Lua. Outside MMOs, C# and Java are very popular choices for high-performance games.
Lack of memory safety comes from only a handful of causes, such as abuse of dynamically allocated memory, overrun of buffers and misuse of pointers. The type of code that's prone to this is very common, but because of this it's easy to forget that this isn't all code, just the most common type of code you'd find on a PC or similar platform. It's very easy to fall into the trap of thinking that's the only kind of program structure that matters and then squeezing every paradigm into this familiar pattern. The result is that memory safety becomes a problem of overriding importance, especially if you're using C++ coding techniques that don't make it particularly easy to design in or inspect / verify the code's operation. It's laudable, but its narrow focus ignores the rest of us, and any attempt to say "but what about us?" gets drowned out because we're old, behind the times and so on.
I've met this mindset many times. Typically it's been about 'reliability' or 'security', and it tends to obsess more about the concept than the actual meaning (it's a variation on "if something's good then more of it is going to be better"). I've got no argument about enforced memory safety in situations where memory safety is a concern, but just throwing out the concept without really understanding the why, when and how is likely to meet pushback. Especially as a lot of modern software is not only extremely bulky for what it does but also not particularly reliable, suggesting that it's easy to point the finger at the toolset as an alternative to really getting to grips with the fundamentals of what we're doing.
A friend that I was lucky to have worked with described this solution as `blunting the scissors`.
Having worked with a chef friend who kept very sharp knives, this resonated with me -- even experts have to push harder on dull knives to make them work as required.
It is as if somehow the magic of *memory safety* will make incompetence a non issue.
"even experts have to push harder on dull knives to make them work as required."
I have only ever seriously cut myself with a blunt knife (usually not mine.) Unnecessary force strips threads and removes fingers. :)
Curiously really sharp metal edges cause a peculiar sensation when placed against the skin so you seem to get an "early warning."
I had the impression that at least recently the thrust of Rust safety was thread and lock safety which is probably a more practical concern not really directly addressed in other languages. Possibly memory safety and these concerns are formally equivalent in some sense.
Never having earnt a crust programming, I have always used the simplest available language for what were mostly simple tasks. The one thing about C that struck me years ago was the dependence on pointers for everything except scalars - even structures were passed by reference (address) in the very early days, and arrays/vectors always will be. I have to wonder whether this oddity has meant programmers preferring to work with references even when the operation on an object essentially requires a copy (such as storing an object in a container). When you are intentionally working with references - viz., there is a single object which must be simultaneously in more than one context - there is unavoidable complexity around object lifetimes, locking etc.
The really big fly in the ointment, I would think, is the human component. From my experience, programmers, coders, developers - whatever the title du jour - are a pretty mixed bag, of which between 1% and 5% are really clever and can actually grok this stuff, 50%-60% really don't and readily admit that, with the residue who, to various degrees, delude themselves that they are "all over it" ("the full bottle" even. But of what?). It's this latter group who are the most dangerous (a little knowledge....)
With the AI-inspired rounds of retrenchments in tech generally, I can foresee fewer talented candidates envisaging their future careers in software development, and those already in that industry - facing reduced remuneration and loss of employment security - contemplating alternative career options, creating a perfect storm that would be not so much a skills shortage as a devastating drought.
Unfortunately, in IT, we hand "guns" to the untrained all the time.
You just have to SAY you have been trained, throw a few buzzwords around at the interview, and you are given the title "senior whatever" and thought to be an expert. Or get someone else to interview for you and then you show up at the job (best for "remote" or "overseas" work, of course).
Programming is hard. Now, I'll be quick to admit that C is dangerous as heck, and yeah, some simple memory constraints and checking embedded in the language would probably be a good thing. But look at what the "safe" Java language did -- lowered the bar to entering "programming" so low that almost anyone can become a programmer, and it shows. And that "Java sandbox" they used to talk about seems to constrain your program within its boundaries about as well as a real sandbox does for a real toddler (not at all).
I'm not a "memory safe denier" by any stretch, but let's be realistic -- there aren't enough GOOD programmers to go around, and there aren't enough managers willing to pay them to keep them or let them spend the time needed to write code, refine code, document code and cross-train their coworkers to make the next generation of good programmers. Memory Safe languages will not change any of that. I suspect "memory safe" languages will do for programming what automatic transmissions did for driving -- enable a lot more less competent people to do more damage in other ways.
If we go back to Golden Age, everything will be better. People were just better then.
In reality, the Good Old Days weren't so great. We can look at the finger daemon that brought down the Internet in 1988 because it didn't check for buffer overflows. We can look at the old days of zero security, where even multiuser systems like ITS had little protection. Or C itself, where just using the standard function gets() meant a program could have its memory overwritten by too much input.
Nope, that is the current situation and demonstrably does not work. 99.99% of people writing code are low paid unskilled grunts who will never, ever be able to write safe code in C/C++.
This is why we have seatbelts in cars, because 99.99+% of drivers are complete dumbasses. And unless you're willing to take them off the road, or not let dumbasses code C/C++ (which I would be fine with but will never happen), they put everyone else in danger so you just have to assume everyone's a dumbass.
Of course you can wave your hands and say 'But imagine every programmer were responsible for their code!', which again, would be nice, but will never happen and can not possibly ever happen.
This is just ignoring the fundamental problem and wishing it away.
>” 99.99% of people writing code are low paid unskilled grunts who will never, ever be able to write safe code in C/C++.”
Trouble is I don’t see their use of Rust improving matters.
I remember in the mid 1990s contractors who claimed to be expert VB programmers, but with no formal computing qualifications, who struggled to understand the basic programme design concepts I was assuming they knew to solve the problem at hand.
I think, what is being lost is real high level languages: do you really need to use C or Rust for many business application programmes? Ie. “If the only tool you have is a hammer, you tend to see every problem as a nail.”
The idea is that the 00.01% who know what they're doing can maintain the modules that need `unsafe` and then, during code review, reject any attempt to remove the `#![forbid(unsafe_code)]` from the other modules.
It won't prevent logic errors, but it'll prevent them from accessing the constructs that are to programming as radioisotopes are to power generation and medicine.
> This is why we have seatbelts in cars, because 99.99+% of drivers are complete dumbasses
Bloody stupid statement. The best drivers in the world need a seatbelt because the 0.1% - not 99.99% - are total dipshits who will rear-end you, drive through a red light and so on. And a bloody stupid attempt at an analogy.
Gonna take a punt and assume that you believe yourself to be in the 0.01% of non-dumbass drivers.
"What about...... if programmers actually took responsibility for their work"
The problem is... good code takes time to write IN ANY LANGUAGE. If you do it right, you will be beaten to market by your competition. It really is a race to the bottom (or a race out the door). Why learn your tools in detail if you can grab a few pieces off the shelf and shove something out the door NOW, get the praise of your managers, a jump on the competition, and start raking in the money, grab a bonus, and then jump jobs before it all blows up?
And then there's the career problem. Employer/employee loyalty is a thing of the past, and employment cycles are shorter than major software revision cycles in some/most environments. I've seen too many cases where the main application was (poorly) developed by people no longer there, and new people trying to figure out how to bolt new features on old code and fix bugs.
I'm not going to pretend to know what the answer is.
Then there would be far fewer programmers in the world, and as there has been a shortage of programmers for the last fifty years - with the situation possibly getting worse - the IT industry would be in a worse state than it already is.
Computer programming is mentally difficult requiring considerable intelligence and diligence. Most humans don't have enough of either. Mistakes will be made and the earlier they can be detected the lower the cost of fixing them. Having a language catch them (ideally via the IDE but at least via the compiler) will catch a lot of inevitable mistakes and for relatively low cost.
Interesting how all these things started to grow when Rust became a bit mature and had an incredible amount of promotion. There were a lot of memory-safe languages before, and a lot of memory safety problems were fixed - why now? Maybe someone lobbying their investments through the White House? Hmm, nooo way.
Which version of “now” do you mean? For 8 years running, by far the most loved language (i.e. want to use it again) in the huge StackOverflow survey. 2½ years in Linux, more and more companies (MS, Google…) reporting sharp drop in vulnerabilities and speedup even vs. C (as the strong type system enables more optimisations.)
And it’s not just memory safety. One writer xor multiple readers, and variables being immutable by default, enhance that. Then there is the eradication of C.A.R. Hoare’s billion-dollar mistake, null pointers. As well as the elimination of data races. Even deadlocks can be avoided, if you wrap the locks into a typestate ladder; then the compiler can enforce the order of locking.
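C++ has nothing like the typestate ladder, but on the deadlock point specifically its nearest standard-library analogue is std::scoped_lock (C++17), which acquires multiple mutexes using a deadlock-avoidance algorithm - a runtime mechanism rather than a compile-time proof:

```cpp
#include <mutex>

std::mutex a, b;

void thread_one() {
    std::scoped_lock lk(a, b);  // locks both, in a deadlock-free order
}

void thread_two() {
    std::scoped_lock lk(b, a);  // opposite argument order, still safe
}

int main() {
    thread_one();
    thread_two();
}
```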
"FOSS" is tainted, as it's about trying to be neutral between freedom (free software) and licking the boots of corporates ("open source"); https://www.gnu.org/philosophy/floss-and-foss.en.html
Of course the GNU/Zealots wrote the best compiler and linker in the world and don't hesitate to discuss freedom.
Regardless, I don't see how you would attract anyone if you were to privately run GCC on your computer, as how would anyone know?
> why would you use such weakly licensed compiler
* Because it works
* Because you should always run your code through as many compilers as possible to catch oddities in your code (including GCC)
* Because it has allowed for things that GCC is historically bad at: the library-based approach and the ability to use chunks of LLVM in your own way, for new languages and new tools that parse languages. GCC first came around in 1987, but trying to make use of its parser to create any other tools has always been non-trivial: GCC-XML (2002 - 15 years!) was a valiant (!) attempt, but it has been superseded by CastXML, based upon LLVM/Clang, because it works better and isn't based upon a bodged copy of the code!
BTW I use GCC - a lot, as it is a basic toolchain for many processors - but dissing LLVM shows a - distinct lack somewhere.
A Golden Rule that I've lived by is that if I'm stretching the compiler to a point that it exposes bugs then I need to find another way to write that code. I have come across compiler bugs, typically code generators for more obscure processors, but I've never found a situation where the problem couldn't be solved by just rewriting the source.
This reminds me so much of the time I rented a Tesla at the airport. It had a few quirks, but nothing serious... until it suddenly started raining. I fumbled for the windshield wiper control, but there wasn't any. Apparently, I was supposed to enable them from a screen full of menus. WTF? I had to pull over to the shoulder (this almost caused an accident in itself, as I couldn't see much out of the windshield without my windshield wipers). I was able to enable the windshield wipers after studying the menus on the screen, but if I tried to do this while driving on the highway, I'd probably cause a 60 car pileup.
I've heard the EU will not give a 5-star safety rating to any new car that doesn't have a physical control for critical functions like the windshield wipers or headlights. What an excellent idea!
Look, C++ has had a great run for 40 years. And it will continue to be used for decades more. If you are a C++ programmer you will have no problem finding a job for your entire career, so stop being so reactively defensive about that. COBOL programmers are in hot demand!
But it was designed for times that no longer exist. For when you could have an innocent memory overflow without North Korea, Russia, China, Iran, the NSA, Israel, and every other bad guy on the planet being instantly all over it because their automated vulnerability detection is way better than any safety checks that exist (or can exist) in C++. Languages change when the environment changes.
Yes, I know, Bjarne, you always WANTED safety in C++ (and I want a pony), but you ALWAYS threw that under the bus in the name of performance and memory use. Which is fine - that's what people valued at the time. Well, now people value not having their PC or phone pwned by an overflow error in the code of an entirely unqualified coder a giant amoral corporation paid pennies to create their app. C++ is not 'under attack'; there are just better alternatives.
And, as someone who's coded hundreds of thousands of lines of C++, I can only imagine what bletcherous syntax vomit you'd manage to come up with for 'memory safe C++'. Even worse than templates? I'm horrified yet fascinated!
And that is the problem with C++. When it was just C-with-classes, or structs-with-methods, normal human beings could conceive how it works.
But operator-overloading, templates, generics, and all the rest of the clever syntax that makes declarations look like a fandango on the shifted numeric row just ensure that only the elite practitioners can actually cope.
C++ jumped the shark years ago.
Ah yes, the 'C++ is old, so it's obsolete' argument. By that logic, we should stop doing math since it's ancient too. C++ is still here because it works. Your OS, your games, your embedded systems. Guess what they run on? Not Rust.
'Memory overflows weren’t a big deal before hackers!' Yeah, because buffer overflows totally didn’t exist in the 80s. The problem isn’t C++, it’s incompetent devs who shouldn’t be writing low-level code. Security is about competence, not slapping 'memory safe' on a language and calling it a day.
'People value safety now, so C++ is outdated!' That’s why every performance-critical, security-sensitive system still runs on C and C++. Linux, Windows, MacOS, game engines, real-time systems. All C++. If Rust was so superior, why isn’t it replacing them?
'There are better alternatives!' For what? Rust compiles slower, has a smaller ecosystem, and forces devs to fight the borrow checker more than they fight actual problems.
'C++ isn’t under attack, but imagine memory-safe C++!' You mean modern C++ with smart pointers, RAII, and sanitizers? Rust fanboys ignore that safe C++ already exists, they just don’t want to learn it.
Rust isn’t replacing C++; it’s just a crutch for devs who can’t handle manual memory management. But sure, keep waiting for Rust to take over while your code takes twice as long to compile and still doesn’t outperform C++. We’ll be over here getting work done.
> why would they be ashamed of cheerleading for it?
They are just quietly getting on with coding and don't see any value in screaming and shouting, for or against anything. Because if anybody was REALLY interested in knowing about any language these days they can just download the implementation and try it themselves. It just costs a bit of time.
"Cheerleading" in an internet forum is a pointless activity.
I'm a fan, but honestly what's the point in arguing with some of this. It's the same set of complaints, without evidence.
You can't really have a decent argument, because it's not fact-based; there's no position offered that can itself be discussed.
Instead we get people who clearly don't use C++ or C in its modern incarnations complaining.
Because "someone is wrong on the Internet" is a waste of time when we could be spending that time either doing something more productive or touching grass?
When I happen to find myself here and I see someone being factually wrong, I'll correct them but, otherwise, why write a letter to the paper shredder?
> But it was designed for times that no longer exist
The youngsters probably don't realize this but barely anything has changed for 20 years. Same processors, same operating systems, same GUI libraries, same network stacks.
C++ was great decades ago and thus is provably great for today.
Maybe once quantum computers become mainstream things will change but, let's be honest, the first language for that will be qC (some C compiler with Quantum extensions).
The "designed for times that no longer exist" they're talking about is how, when C++98 was standardized, the overwhelming majority of devices still either never connected to a network or only connected intermittently over a dial-up modem, making for a very different threat profile.
I normally program in C++. I rarely use raw pointers. I use static analysis, Valgrind, compiler flags all on...
I rarely see any issues with memory except when interfacing to 3rd-party libraries. This is where Rust would be using unsafe...
I've written SIL-IV code in C. It's painfully time consuming and expensive.
The approach to writing safety critical code means that Rust is not ready yet.
It lacks a standardized specification that can be validated all the way to assembler code...
It lacks the tools for full execution path verification...
It may get there but it's not there at the moment.
Rust in peace.
Once again the mention of 'Rust' evolves into the usual quasi-religious pointing of fingers and allocation of 'blame'.
C & C++ are not perfect BUT the majority of coders in the world will know them better than Rust.
Q: Which is the bigger problem (in terms of costs & time needed):
Retrain all the coders to be *proficient* in Rust !!!
Rewrite *some/all* the existing code in Rust !!!
Define a usable memory-safe construct in C++ & teach coders to use it !!!
Modify *some/all* of the existing code to use it !!!
This problem is NOT about which is the 'better' language BUT what is the easier/quicker solution.
If it takes too long and/or costs too much it will NOT happen ... this will lead to a fudge ... yet again.
It is 100% assured that the coders of the next 20+ years will 'once again' look back and allocate 'blame' ... and this time it will be 'you' ... so get it 'right' :) !!!
:)
I'm sorry if I'm misunderstanding the entire issue here, but why would one need to "defend" a programming language from "attacks"? A language either serves a useful purpose or not, on the basis of how usable it is it's either popular with developers or not, and it's either the best available option for any given purpose or not. That's how languages rise and fall.
It's like evolution: languages that successfully compete with others on the basis of their capabilities survive; the rest don't. It's also like science: in individual cases it can be wrong, but as a whole it is self-correcting. If a wrong scientific idea is proposed, subsequent testing will disprove and discredit it. It's only a matter of time.
I agree that Rust may be over-hyped at present, but if it proves itself to be a better option than C and C++ in the long run it will eventually replace those languages through the IT equivalent of natural selection.
If you have to "defend" a language that means it's probably on the way out already. If you feel a language is subject to "attacks" you haven't understood how computer languages evolve in the field of software development.
I have a great respect for Bjarne strOuStrUp() but I strongly feel that in this he's wrong. I can understand his sentiment - C++ is his baby. But babies grow up, grow old, and eventually die, sometimes before their parents do. That's sad. But it's also life. Deal with it.
[quote]
If you have to "defend" a language that means it's probably on the way out already. If you feel a language is subject to "attacks" you haven't understood how computer languages evolve in the field of software development.
[/quote]
That's assuming that the attacks make technical sense. As others have pointed out, the urging to move to memory-safe languages offers false promises. Rust can be made to operate unsafely, and the risk of migrating legacy code seems to have been understated. I don't agree that this is an existential threat. C++ is not going away. But this is an annoying reputational one.
Whether Rust has an unsafe mode is not the issue. Languages should rise or fall based on their merits. A community should not have to "defend" a language from its competitors. The language's properties, advantages and usability should do that.
Right now C and C++ are losing out to memory safe languages because of... well, memory safety. There are cases when memory safety gets in the way, which is why Rust has an unsafe mode as well. C and C++ don't have a memory safe mode, nor can they easily be fitted with one without changing the language so much that they're not really C and C++ anymore.
Defending a language's reputation is fine; I'm all for it. But reputation alone is not enough. I love C; I wrote my first applications in C in the late 1980s. But I have to admit that if I had to choose a production language for major applications today, I'd be forced to consider the advantages of more modern, memory-safe languages. That's a pragmatic decision based on language capability. Reputation does not (and should not) feature in any of it.
I think you've misunderstood the thrust of my argument. C++ needs to be defended against technically unsound attacks because of the risk that these will lead to code migrations or replacements that are likely to introduce new bugs, and because reams of code are written in it. This is a risk that has been understated.
> Any chance for a dig at Trump.
I don't think it was a dig. They just stated a fact: most of the memory safety government staff were fired. It's a relevant point. It's possible to discuss it respectfully.
> The Register is becoming irrelevant...
This is laughable. Stop being a snowflake.
The rise of bootcamps, scripting-heavy roles, and 'learn-to-code-in-3-months' courses has flooded the industry with underqualified people who barely understand programming. These same people push for 'simpler, safer' tools like Rust; not because they're better, but because they mask incompetence.
The irony? The problems they complain about - unsafe C, bad C++ code, memory leaks, etc. - are self-created.
Who's writing bad C++? People who shouldn't be writing C++.
Who's failing to manage memory? People who shouldn’t be touching low-level code.
Who's making software slow and bloated? Overengineered abstractions and useless frameworks.
These people are the physical embodiment of the very problems they claim to be solving: overengineering everything, refusing to learn fundamentals, and bloating the industry with broken, inefficient software.
I don’t care what language you write - C, C++, Rust, Brainfuck, Assembly - as long as you actually know what you’re doing. The real problem isn’t the tools; it's an industry that is filled with people who don’t.
Rust isn’t the enemy, but the mentality behind it is: a band-aid for people who don’t want to actually learn programming. At this rate, we’ll soon need 64GB of RAM just to open a web browser, and that’s the real joke.
I'm interested in systems and games programming. I learned C but never used it commercially.
I am not keen to learn C++ given its size and complexity. Is there a major downside to using C over C++?
I assume with careful implementation (and maybe TrapC) many memory bugs can be avoided.
What is OOP in C like? Are structs enough?
You don't *have* to use all the features in C++ in one go, especially not if you are just getting into it. It is perfectly valid "just" to treat C++ like "a better C": start by renaming your dot-c to dot-cpp...
> What is OOP in C like? Are structs enough?
ESPECIALLY if you are interested in OOP - that is the oldest "paradigm"[1] that C++ added onto C, with cfront in 1983; the template metaprogramming (epitomised by the STL) and the like came years (almost a decade) after - and it was a long wait for compilers that could handle it all!
You can quite usefully - and cheaply - pick up older editions of C++ books, which just cover the basics. OK, doing that you'll not learn (from day 1) about some of the library bits that have been mentioned in these comments (e.g. the various flavours of smart pointers that the various libraries provide), but considering that you'll be using a compiler + RTL that does provide them, when you are ready you can pick them up and slap them on. If you feel the need - after all, you'll still be doing better writing OOP in C++ than in C.
> What is OOP in C like? Are structs enough?
But to actually answer that: you *can* do it with structs in C, but it gets messy really, really easily. There are various macro libraries that help, but these days, unless you *have* to, OOP in C is very hard to justify.
Use C++ - even if you just use the struct keyword and ignore class entirely, being able to inherit from a parent struct and then not have to add another layer of self->parent.grandparent.greatgamps.some_field every time is a blessing. But then using class and restricting access to implementation - mmmwaaah! Delicious.
[1] sorry
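To illustrate that member-access point (toy type names, obviously) - inheritance flattens the chain that nested structs would force on you:

```cpp
struct Grandparent { int some_field; };
struct Parent : Grandparent {};
struct Child : Parent {};

int main() {
    Child c{};
    c.some_field = 1;  // direct access: no c.parent.grandparent.some_field chain
}
```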
In C++, struct and class are the same thing.
The only difference is that struct members are public by default, and class members are private.
That's it.
In C, structs are merely data layout. They don't have any encapsulation or even member functions, so it becomes an exercise in "faking it". You absolutely can fake it - C++ started out as syntactic sugar to do the faking - but the compiler can't help you.
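A minimal illustration of that equivalence (toy types, nothing more):

```cpp
struct S {
    int x;  // public by default
private:
    int y;  // private members are still allowed
};

class C {
    int y;  // private by default
public:
    int x;
};

int main() {
    S s{}; s.x = 1;  // fine
    C c{}; c.x = 1;  // fine
    // s.y = 1;      // error: private, whichever keyword was used
}
```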
> In C++, struct and class are the same thing... The only difference is ..
Yes.
Precisely.
That is why I suggested, as a starting move for someone wondering about whether to try doing OOP in C, that he just take what he can see is common - 'struct' - and compare the results of "faking" it versus getting the real thing. The ease of member naming is step one.
Next week, step two is adding in some member functions, then I'd suggest switching to 'class' (less typing than writing 'private' everywhere!) to get into encapsulation. And then add some polymorphism.
If someone is wondering about how hard it is to OOP in C (answer: best not) then just take a nice easy route to showing how much simpler it'll be in C++ with only a very, very few changes to their code. When they are happy with that (aka sucked in, there is no going back for you now, we have you in our grasp...) you can move them on towards greater mastery.
Depends upon what you mean by "systems work".
C++ design goals start from "if you don't use a feature[1], you don't pay for it" and then "if you do use a feature, you can understand its cost - which is as low as we can make it, under the circumstances".
So if your "systems work" is able to use a C++ compiler, go for it: embedded systems, bare to the metal microcontrollers etc can all gain from appropriate use of C++. I wouldn't try using std strings on an Atmel with 2K of RAM, but would happily drop in a class or two to make i/o simpler - simpler to read when I get back to it in two years "for a quick tweak".
If the "systems" work is getting a userland program to run network traffic really fast - "a better C" is a good choice there as well. Just avoid using something if you can not easily determine what the resource costs are going to be.
[1] of the actual language, as opposed to a library - those can get opaque quickly, in any and every language!
> "If you mark your code to enforce a Profile, some features of the C/C++ language will stop working,"
So if I understand correctly: the proposal is in C++ you denote those sections of code where you do want protections enforced, whereas in Rust you denote those sections of code where you don’t want protections enforced.
It would be interesting to know (research opportunity?) whether the released code in these denoted sections is significantly better than the surrounding code. I.e. does the affirmative action of denoting these sections mean the programmer is more aware of what they are doing and so has greater clarity of thought and logic?
Exactly. If you're writing SIL-IV or MISRA C code, you have to check every line. (There are third-party tools for some requirements.) If you're writing Rust, you only need manually examine the lines marked `unsafe`. In my experience, that's often very little.
Combine that with design by contract and heavy use of assertions, and your code is nigh-bulletproof. It's like driving a large truck vs. a small car—one feels much safer than the other.
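In C++ terms, "design by contract and heavy use of assertions" can be as simple as this sketch (the divide() function is hypothetical, not from any real codebase) - pre- and postconditions spelled out so a debug build aborts at the faulty line instead of quietly corrupting state:

```cpp
#include <cassert>

double divide(double num, double den) {
    assert(den != 0.0 && "precondition: divisor must be non-zero");
    double result = num / den;
    assert(result == result && "postcondition: result must not be NaN");
    return result;
}

int main() {
    return divide(10.0, 2.0) == 5.0 ? 0 : 1;
}
```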
Back when I was a C++ programmer full time, you could always tell the junior programmers from the senior programmers by the way they handled memory management. It took a lot of experience not to create all the memory issues that people are complaining about today.
Fast forward to today: junior programmers create a bunch of buggy programs and then some techno-idiots complain that C++ isn't safe enough. Well, if companies hired senior programmers to do the programming, this would never have happened.
Stop blaming the tools for poor quality work.
If they didn't hire junior devs, all the C++ programmers would be 60 years old and about to retire/die of template-programming-induced heart arrhythmia. Companies always had to hire inexperienced devs, from the very beginning. What a nonsensical argument.
The solution is not to never hire junior devs. They will be the programmers of the future after all. The solution is to not let junior devs do the mission critical/safety critical programming. Too many times companies try to save money by hiring only junior devs and they just end up committing all the memory unsafe, resource unsafe, race conditions, deadlock code that we rail against.
C++ has grown so hideously complex, nuanced and unsafe that I don't see how it can be fixed. At least not without gutting the language or doing something similar to TrapC. But TrapC is hiding some of the issues of the language, not making them go away, and it comes with a runtime cost, since every pointer access has to be bounds-checked and possibly type-checked.
To me it sounds more like a figure of speech, used in the hope of conveying the seriousness of this issue to an audience which may be complacent or even comfortable with the status quo.
This gives me some hope that C++ has a chance to evolve and thrive. As long as Bjarne doesn't pull a Perl 6 in the heat of the moment, that is.
Y2K wasn't bad enough to learn from. Like the squaw in Custer's Last Stand, pushing a darning needle through Custer's ear - "perhaps listening in the afterlife will change your force de jour" - you got too many Indians.
And soon down the road a quantum computer, AI, power and grid struggles, mapping of the universe and mapping of the brain, not just the genome.
I would say, this is a way of diminishing and purposeful quamite of the ability to use open source. For without open source modern world would not exist with those who innovate new patterns yet unseen.
This knee-jerk reaction seems analogous to Y2K: worrying about removing modern-day practices that mitigate far greater problems than the lack of memory management or how we structure programming. What evidence, and what prevalence of cases, is there? Causing failures with less openness opens up more problems, not fewer.
Don't create more problems. It is bad enough.
Remember Y2K?
C in all its variants is an abomination and should never have lasted this long. That it has taken this long to evolve type safety just demonstrates the fact. It is something other, much-denigrated languages have had for decades! Let's be honest here - C was intended to be a bridging language from the days of Fortran to the 'new' languages that were coming. But C and its fanatical adherents went out of their way to destroy any language that threatened it. I would rather learn the appropriate assembler language than deal with C or C++ or JAVA or Rust - they are all just derivatives of C. (That will get the dogs howling)
> Lets be honest here - C was intended to be a bridging language from the days of Fortran to the 'new' languages that were coming
Citation?
> But C and it's fanatical adherents went out of their way to destroy any language that threatened it.
So, there must be lots of citations you can provide for that, in half a century of C. What is the list of languages "they" have "destroyed"?
> C or C++ or JAVA or Rust - they are all just derivatives of C.
C++ is a derivative of C? Good grief! Has anybody told Bjarne? Java and Rust just share a similarity in simple syntax - use of braces instead of begin/end, semicolon as a terminator rather than a separator, infix operators and a basic function declaration style.
> That it has taken this long to evolve type safe just demonstrates the fact
Hang on, hang on. The Big Discussion now is MEMORY safety, not TYPE safety! You are complaining about a lack of type safety and you diss C++, Java and Rust? Whoop, whoop, whoop!
> other, much denigrated languages
Presumably, these are the languages that were "destroyed"? Hmm, what languages have been denigrated? There is that famous comment by Dijkstra - ah ha! Got it!
You want us all to use BASIC, don't you! No curly braces, type set by the variable name, that all makes sense.
In the last 15 years I have worked at a few companies and encountered on multiple occasions memory leaks with Java and C#/.Net applications.
Mostly developer error, but a few runtime bugs. I have seen Java actually core-dump on Linux as well as a hard crash on .NET on Windows. In all these cases, the app developers had no clue how to debug it.
Also a shocking number of vulnerabilities in Java apps for enterprise. The Atlassian stack seems to have major security flaws every week. Last time I checked, it was Java.
It's the development process and I trust Linux more than I trust nearly every other application stack written in whatever language. Promises of memory safety don't cut it.
int main(enter the void)
...