Good luck...
...as they say, with that.
Nobody likes The Man. When a traffic cop tells you to straighten up and slow down or else, profound thanks are rarely the first words on your lips. Then you drive past a car embedded in a tree, surrounded by blue lights and cutting equipment. Perhaps Officer Dibble had a point. There's no perhaps about the FBI and CISA getting …
I assume you mean that it will underpin more of the world of finance, banking, transport and government than you imagine for many decades to come, and provide employment for many grey-haired code-slingers in the West and fresh-faced young programmers in India for as long as it does so.
For how many years did the UK banks still have a shim layer to convert from decimal to the LSD that the core code ran on (on an emulator of the original hardware), and then back again with the results?
I will give you a clue - decimalisation was 15 February 1971, and I know someone who was looking after one of the aforementioned systems this century.
The author really needs to explain what they mean by this, as it doesn’t make any rational sense.
I suspect the problem with COBOL is that it wasn’t designed by academic computer scientists and was focused on business applications, something the academics weren’t interested in. Plus given the social environment of the day, I would not be surprised if part of it was sexism given the leading role women played in its initial development.
COBOL is derided today, because the bias has been institutionalised by those who only pay attention to social media influencers.
> decent languages
Interesting choice of words. I would say they are decent in the same sense that the QWERTY keyboard and TCP/IPv4 are decent: they are long in the tooth, there are improvements, but they are well established and so difficult to displace (for example the IPv4 to IPv6 migration…)
So yes, I would defend C and COBOL as being decent languages, but am aware we really need to improve on them. The question is what is the improved, waiting-in-the-wings replacement for COBOL, as it isn't C or Rust, just as C and Rust aren't replacements for Fortran or LISP.
Almost anything is a good replacement for COBOL. Java or C#, or just Python? There's a big difference though between starting a brand new project and making small, incremental improvements to a legacy system which works well, will naturally be replaced eventually but may nevertheless be written in COBOL or involve IPv4. (I think your keyboard example is less clear cut in practice, for all kinds of reasons.)
The issue with C/C++ is that those legacy systems frequently don't work well when up against bad actors. Arguably C is the COBOL of last century rather than the other way around.
Can you explain why you think it'll use 20 times as much CPU when it'll boil down to very close to the same machine instructions?
In any case the reason to use a more modern language for a modern system is that the vast majority of your workload will not be actual financial calculations but all the associated IO and glue that'll be vastly easier in a different language. If you have a literal one-line calculation to do and a line of COBOL will do the job then that's great, but it'll be because you're fixing an existing system built in COBOL, not developing a brand new one.
(Just for fun I checked some tax software I use that's validated by HMRC and it happily uses .NET's decimal type. You might also be interested to peruse an actual tax collector's API and note the complete absence of any COBOL references or any complexities at all really: https://developer.service.hmrc.gov.uk/api-documentation/docs/reference-guide#common-data-types )
> Have you tried fixed point decimal arithmetic in any of the above mentioned languages?
Yes, it's easy. Do all your arithmetic in integers, in the smallest unit of your chosen currency. Your output routines can easily reinstate whatever layout you require.
(Yes, integer overflow may be something to consider on some machines. Or all machines. But that's even more easily solved.)
Using integer-only operations is a workable approach for addition and multiplication, but division is also necessary in financial calculations, the result of which cannot be guaranteed to be an integer value. Having the fractions after the decimal point disappear into the ether can lead very quickly to large errors if a division operation is carried out hundreds, or even millions of times, and especially if it is applied recursively. Fixed point arithmetic combined with an officially sanctioned rounding protocol has been the standard way of doing financial calculations. COBOL supported this, and it is a significant part of the technical debt that makes institutions reluctant to abandon COBOL.
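For anyone who hasn't had to do it, a minimal C++ sketch of what "integers in the smallest currency unit plus a sanctioned rounding rule" looks like; the half-up rounding here is purely illustrative, a real system applies whatever rounding protocol is officially mandated:

#include <cstdint>
#include <cstdio>

// All amounts held as integer pence; int64_t gives plenty of headroom.
using Pence = std::int64_t;

// Divide with half-up rounding (illustrative only; use the sanctioned rule).
// Valid for non-negative amounts and positive divisors.
Pence divide_rounded(Pence amount, std::int64_t divisor) {
    return (2 * amount + divisor) / (2 * divisor);
}

int main() {
    Pence invoice = 10000;                            // 100.00 represented as pence
    Pence vat = divide_rounded(invoice * 20, 100);    // 20% VAT, rounded in exactly one place
    std::printf("VAT: %lld.%02lld\n",
                static_cast<long long>(vat / 100),
                static_cast<long long>(vat % 100));
}

The important bit is that the rounding happens in exactly one agreed place, rather than being left to whatever the floating-point unit felt like doing.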
"what is the improved waiting in the wings replacement for COBOL"
I have no love for Cobol (I worked with it for too long), but I do not know of any modern language that does what Cobol does - and does well.
Yes, I know it's verbose, clunky, has over 1k reserved words, has no real standard ....
I suspect that most folks who want to replace COBOL with some "modern" language have never looked at a real COBOL program. I am not, never have been, and I'm quite sure would not want to be, a COBOL programmer. But I have had to peruse COBOL code a few times and found it to be remarkably readable -- more so than any other language I have encountered in the 65 or so years I've been dealing with computers. I can't say the same for C.
I really can't see much of a case for abandoning COBOL. I can see a case for replacing C (and assembler) with a memory safe language where security is an issue and where such memory safe language has adequate performance and will fit into the machine memory.
Perl is much worse than C for readability, unless you are a genius at parsing a regex thingy. It seems like a shell with terminal and file I/O to run a regex.
C can be readable, but smart alecs programming like it's 1981, with multiple statements per line and expressions on the left of an assignment, are idiots. We have decent compilers now; we don't need to print it out or save space in the source text file.
> Perl is much worse than C for readability
Almost certainly. But that's not saying much, and the unfortunate implication that C has to be compared to Perl- i.e. an intentionally-chosen poor example- to make it look better isn't doing it any favours.
Also, I don't think many people are using Perl nowadays anyway, at least not for anything new. (All indicators I've seen suggest its market share has declined to a small fraction of what it was during its heyday twenty-plus years ago and I suspect much remaining use is for "legacy" applications).
All that aside, I'm inclined to agree that C- along with most languages based upon its syntax- is quite capable of being written legibly. It's just that it doesn't *force* the user to do that via rigidly-defined and restrictive formatting and syntax.
Whether one would even consider that a "problem" of the language or one of crappy developers being incapable of writing legible code without being forced into a straitjacket is open to question.
C can be readable, but smart alecs programming like it's 1981
Exactly!!!!!!! Sorry for all the exclamation marks but this is so true. I have maintained legacy code where some idiot developer, FW, loved using l1 and I1, that is (lowercase L)1 and (uppercase i)1.
When he coded the font may have shown them as considerably different but in Courier New they are close enough to be a major pain!
Even today, perl has its place - because of the "regex thingy"
Perl is simply easier than almost anything else when it comes to sometimes-quite-complex regexes. I doubt that makes me a "genius" at parsing them, but I've never had a problem with script readability because of it.
Yes, a couple of years ago I wrote a Perl program (script?) to read an old Wordstar format where 8th bit is special and output a text file any current text editor understands, with soft carriage returns (for page formatting) replaced by space. Never written a Perl program before or since. It was the easiest way for me, though no doubt there is some Linux utility that does this easily and perfectly.
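Purely for illustration, and going only on the description above (8th bit is special, soft carriage returns become spaces) rather than the full WordStar spec, the same filter is only a handful of lines in C++ too:

#include <cstdio>

// Bytes with the high bit set are treated as WordStar "soft" formatting:
// a soft carriage return becomes a space, anything else just loses the high bit.
// (Assumption based on the comment above, not a full WordStar converter.)
int main() {
    int c;
    while ((c = std::getchar()) != EOF) {
        if (c & 0x80) {
            int low = c & 0x7F;
            std::putchar(low == '\r' ? ' ' : low);
        } else {
            std::putchar(c);
        }
    }
}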
C was written for single-processor architectures. I know you can multithread in C, but I've never actually done it, and I imagine it's a bit hacky. You certainly can't write for a SIMD architecture in C, and I suspect that it's C and its successors who have stopped the move away from x86 and ARM to SIMD processors for most tasks. I can imagine a world where CPU development stopped and only had to concern itself with GUIs and all the big stuff was data processing....
You can write SIMD code in C. I've done it extensively over many years for both AMD64 (x86-64) and ARM (both 32 and 64 bit).
The only problem is that it requires extensions to the C language and these extensions are not standardized. GCC has these extensions, and MS C does as well. Although I haven't used it with MS C, the extensions that I did look at in the documentation were very similar to GCC's except for the actual name of them (which I suppose could be papered over with a macro). You can't do it with Clang/LLVM however, or at least not the last time that I checked, as it didn't copy that feature from GCC.
Using SIMD effectively often requires using completely different algorithms than used for architecture independent code, so there's no way for the compiler to automate using SIMD instructions when working from architecture independent code except in the simplest and most trivial cases. It also means you often need different code for different chip architectures due to SIMD simply working differently with different chips.
Using the C extensions is still far, far simpler than writing the code in assembly language however, so they are very good to have. All they really need at this point is for the names to be standardized between compilers.
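For anyone curious what the extension looks like, here is a minimal sketch using GCC's vector_size attribute (deliberately the trivial case a compiler can also auto-vectorise; real SIMD work gets far less tidy, as noted above):

#include <cstdio>

// GCC extension: a 16-byte vector holding four floats. Arithmetic on this
// type is mapped onto SSE/NEON/etc. instructions where the target has them.
typedef float v4sf __attribute__((vector_size(16)));

int main() {
    v4sf a = {1.0f, 2.0f, 3.0f, 4.0f};
    v4sf b = {10.0f, 20.0f, 30.0f, 40.0f};
    v4sf c = a + b;                       // one SIMD add, not four scalar adds
    for (int i = 0; i < 4; ++i)
        std::printf("%g ", c[i]);
    std::printf("\n");
}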
Instead of using SIMD extensions to C directly it’s well worth using a standardised library, and have source portability across platforms that way. If one is using SIMD in the first place there’s half a chance you’re doing so to implement a standardised library algorithm, eg an FFT. Using libraries like VSIPL gets you a very optimised FFT implementation for a variety of platforms. The US department of defence favours VSIPL.
There are non-standard libraries too, e.g. Intel supply the MKL and IPP libraries. These are good only for Intel / AMD chips, with accusations that they'd deliberately knobble themselves running on AMD hardware.
You can multithread in C/C++ fairly easily, but not portably unless you use POSIX APIs for everything you want to do. Absolutely not hacky at all. There are libraries you can use that bring portability to the mix, but your binaries themselves will never be portable like pure JVM (Java) or CLR (C#) binaries can be.
You can also write SIMD code for general-purpose processors (x86, x64, ARM or MIPS) with SIMD extensions in C/C++. You can #pragma into ASM to use the SIMD opcodes directly or you can call libraries that do that for you. I did the #pragma thing with x86 SSE2 back in the late 90s. Nowadays you are more likely to use libraries than roll your own like I did. Rolling your own was fun though!
C was initially developed for Unix, which eventually became a multiprocessor (capable) operating system. The C language however does not have direct support for threads. This is what operating system runtime libraries were for, with POSIX as an attempt at a portable one (that the likes of IBM and Microsoft only supported halfheartedly unfortunately). This way the C language could remain pure. Not all operating systems supported threads (or even multiple processes) back in the 70s/80s. You can even write C code right down to the metal with no OS available. This is what makes C great for embedded applications and bootstrapping code. A C compiler can allow you to be small and tight with the ability to layer more complex stuff on later after you have the hardware functioning properly.
It's important to note that multi-threading != SIMD. Single instruction multiple data means a single "thread" of instructions that can process vectors in parallel with each instruction. For example, 8 additions in parallel rather than 8 adds back-to-back in serial. You can have multiple threads of serial instructions or multiple threads of SIMD instructions depending on what your hardware supports. Back when I was working with SSE2 you had to be careful to keep threads from messing with each other because there were not enough SSE registers to go around for all the (hyperthreaded) cores you could throw instruction streams at.
Operating systems generally support the serial or native SIMD instruction threads directly. Since GPUs and NPUs can operate in separate memory spaces from the OS itself, those instruction streams are handled by coordination between the OS and the external devices, through device-driver communication mechanisms that send data back and forth across memory busses. This type of work currently falls outside the capabilities of language compilers, which do not have enough context on how the communication happens. Instead, you compile code in special languages such as CUDA and then use a library/device driver to hand that code off from the CPU side through a vendor-supplied library that copies it over a memory bus to where the GPU/NPU executes it. Then the results get copied back into memory space that the serial code (and operating system) can see and manage.
Perhaps someday there will be enough standardization so that a language such as Rust, or even a modern C/C++ or Java or C# could handle both sides of the CPU/G(N)PU problem with a single language. And maybe someday operating systems will also deal with the low-level memory management in a way where a compiler could reason about things at compile time to protect against mistakes in interop between the CPU/GPU/NPU code that can lead to CPU/G(N)PU panics.
C11 standardised the memory model, so portable, C11-compliant code can be concurrent and thread safe; the threading API is essentially pthreads.
C++11 using the same memory model, also did the same thing.
Why do you think we don't take your Rust advocacy seriously - it's 2025 and you're suggesting C11 wasn't a thing, with the sign of true mastery, "C/C++".
lock-free concurrent thread safe code portable in C++11 - https://gist.github.com/jamal-fuma/eb3f46fd2da666bcf7c3
Today that's 30 lines shorter at least given that std::jthread was standardised.
SIMD code is already linked in for you when you call boring functions like memcpy, because your standard libraries are written by C programmers https://github.com/bminor/glibc/blob/master/sysdeps/aarch64/memcpy.S not "C/C++" rust advocates.
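To put a concrete (if trivial) example behind the memory-model point - portable standard C++ with no platform headers, using C++11 atomics and C++20's std::jthread:

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::atomic<long> total{0};              // well-defined across threads since C++11

    {
        std::vector<std::jthread> workers;   // std::jthread joins on scope exit (C++20)
        for (int t = 0; t < 4; ++t)
            workers.emplace_back([&total] {
                for (int i = 0; i < 100000; ++i)
                    total.fetch_add(1, std::memory_order_relaxed);
            });
    }                                        // all four workers joined here

    std::printf("%ld\n", total.load());      // always 400000, no locks involved
}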
You can multithread easily. Doing it safely is another matter. I'd also say that multithreading in C++ is awful unless you use Boost or something to hide platform differences and even then it's incredibly easy to screw up because the language itself has no concept of sync, async, guards, mutexes etc. So if you don't write your code properly the compiler won't save you and nor will the runtime since both are oblivious of the programmer's intent.
That is why a language like Rust would be far better for multithreading because it has proper rules of ownership, move semantics, sending data between threads, async / await, data access, guards etc. baked into the compiler and runtime. It makes it easier to parallelize code because the guarantees are so much stronger.
My knowledge is up to date. It doesn't mean code already written has the choice to move to the latest standard without being substantially rewritten. As a mitigation it is generally sensible to use Boost in preference to the STL, since it compiles down to the STL where it can but offers additional functionality like ASIO and richer types over a wider range of C++ standards. C++11 on its own only has the bare minimum of what we'd call multi-threading support. But let's look at what you might do in C++ to protect shared data through C++11 or Boost:
#include <mutex>

std::recursive_mutex mutex;
SharedResource sharedResource;

//...some code block
{
    std::lock_guard<std::recursive_mutex> lock(mutex); // What happens if I forget this line?
    sharedResource.doSomething();
}
What happens if you forget to lock? Nothing, at least not at compile time. At runtime your code might blow up immediately or only once in a blue moon. But it is unsafe.
Let's look at the equivalent Rust code:
let shared_resource = Arc::new(Mutex::new(SharedResource::new()));

//...some code block
{
    // lock() is the only way to reach the data inside the Mutex
    let shared_resource = shared_resource.lock().unwrap();
    shared_resource.do_something();
}
The programmer has no choice but to lock, because the shared resource is only accessible by explicitly obtaining the lock. The compiler will fail the build if you don't. It will also check that SharedResource is actually safe to share between threads, just in case there is something about the struct that would break that guarantee.
Multithreading support also extends to other multi-threading patterns, e.g. mpsc send & receive, move semantics being the default, compile-time checks on safe-to-send structs, old-school spawns, joins, barriers, condvars. It even supports async / await. C++ has some building blocks for futures, and C++20 adds co-routines (which are borderline incomprehensible), but neither futures nor co-routines are remotely simple to use.
So the point is Rust has your back and compels thread safety by design. It also supports modern patterns and terse code. While C++ throws more and more templates at you and doesn't give a damn if you use them properly or not. Maybe Super Programmer (tm) does everything correctly. Back in the real world real programmers make errors. I prefer my errors are found at compile time, not by the customer.
It's not even a fair comparison quite honestly. I might add that Go and Swift also wipe the floor with C++ for multithreading but Rust is the closest analogue in terms of portability and runtime performance.
Let us be generous to your point: Use of certain constructs can prevent a class of issues, and in Rust you claim these are obligatory.
Let us also stipulate that a trivial wrapper does exactly the same thing - sure, that's C++11, but you could do it in C++98 with more noise.
You are required to take some care when using C++ or C, and there is a lot of wisdom in using safety tooling, but if it matters that your code is safe from thread bugs then you design it so that it's possible to verify that it's correct - e.g. you use active objects and concurrent queues, no raw locks or threads. That's slightly more convenient in C++ than C but essentially the same code.
Futures are a little... but you can happily build WhenAll and WhenAny out of them trivially. Yes, composing futures is a little bit of a pain, but doable; it would be nice if they had chaining, but it's not exactly a difficult wrapper to write.
// https://godbolt.org/z/zP9a53hG4
#include <cstdio>
#include <format>
#include <iostream>
#include <mutex>
#include <utility>

template <typename T>
struct with_lock {
    with_lock(T && callable):
        callable_{std::move(callable)}
    {}

    template <class... Args>
    void apply(Args && ...args) {
        std::lock_guard<std::mutex> lock(m_);
        callable_(std::forward<Args>(args)...);
    }

    T callable_;
    std::mutex m_;
};

// guess I don't need rust anymore
int main() {
    with_lock([](){
        std::puts("what with lock");
    }).apply();

    with_lock([](int foo){
        auto sv = std::format("what with {} lock", foo);
        std::cout << sv;
    }).apply(42);
}
There's no need to be "generous" to my point. It IS the point.
Neither C++ nor the STL nor Boost ships with your code, and even if it did, it would still be optional because the compiler DOES NOT CARE. Rust forces you to implement thread safety whether you want to or not. C++ sucks because it doesn't enforce shit, and even if it has templates that might protect your code it will not check whether you use them properly, or at all.
I honestly find it perplexing that people leap to defend languages that are demonstrably less safe. Especially in an article about C being not fit for purpose, but the same applies for C++ especially in the modern world.
C++ doesn't ship with my domain logic either. Are you suggesting Rust does in order to buttress your point?
I was being generous because your point is feeble, as trivially shown. You structured your code poorly and then complained that the poor structure left you open to bugs absent from well-structured code.
You suggest Rust (the language) is what protected you, when it's obviously the use of a library template provided in the Rust stdlib which causes the "compiler to care". Which is a different problem from the one you originally posed. Simply put, it's easy to enforce that something is only used correctly in C++, provided you aren't incompetent and apply a modicum of restraint and good taste.
if you want to express compile time assertions in C++ you are looking for concepts https://en.cppreference.com/w/cpp/language/constraints - personally I'm still good with plain assert, but go for it.
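For example (names here are mine, nothing standard), a concept can turn "this wrapper only accepts lockable things" into a compile-time check rather than a runtime surprise:

#include <mutex>

// Illustrative concept: anything with lock()/unlock() counts as lockable.
template <typename M>
concept BasicLockable = requires(M m) {
    m.lock();
    m.unlock();
};

// Won't compile unless M satisfies the concept - the check is at compile time.
template <BasicLockable M, typename F>
void with_locked(M& m, F&& f) {
    std::scoped_lock guard(m);
    f();
}

int main() {
    std::mutex m;
    with_locked(m, [] { /* protected work */ });
    // with_locked(42, [] {});  // rejected by the compiler, not at 3am in production
}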
Modern C++ is trivially memory safe, trivially thread safe, and easily incrementally applied as seasoning to an existing C codebase.
Rust offers nothing, it's just solutions for problems I don't have. The computers are so powerful that a scripting language should be the default choice.
Where I want ultimate control over my codebase, I don't want a functional language with write only type calculus pushed on me by zealots.
The Indomitable Gall wrote: "You certainly can't write for a SIMD architecture in C"
Assertion failed!
We had vectorizing C compilers (along with FORTRAN) 40 years ago. SIMD was a thing when CPUs were at least the size of a fridge, long before it arrived on microprocessors.
One of the world's problems is that most multithreading is derived from C's way of doing things and C's expectation of running in a Symmetric Multi-Processing (SMP) environment. It was always a bad idea, but unfortunately this cheap and lazy approach to multiple-CPU architectures took off and became difficult to displace.
Now, we’d dearly like to get rid of SMP to help solve problems like Spectre and Meltdown, but there’s now far too much software that assumes SMP is what it’s running on.
> Now, we’d dearly like to get rid of SMP to help solve problems like Spectre and Meltdown
What?
Spectre and Meltdown are both problems concerned with speculative execution, e.g. when used to try to cope with a failed branch prediction.
This is an issue with the microarchitecture within a single CPU core - and it need not be a "hyperthreading" core either.
These problems have absolutely no connection to SMP.
> most multithreading is derived from C’s way of doing things and C’s expectation of running in a Symmetric Multi Processor
Care to elaborate on that? C the language grew up without any concept of SMP - it has no concept of either "thread" or "process" within the language.
If you so desire, you can use libraries to provide multiprocessing: e.g. use the POSIX fork() function[1] to create a new process (which you can leave running the same code, or use exec() if you so desire), OR, if you want to use threads for a lighter-weight approach, you can use the POSIX pthreads library calls.
If you find that those libraries do not give you the control you are after (which, in your case, is presumably some level of control over which type of CPU in your asymmetric machine each thread/process is bound to) then go and look for a different library (or write it yourself, of course). Either way, C is happy to do whatever you want.
[1] if you are lucky enough to be on a Unix, otherwise go and read up on the Windows calls.
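For completeness, the POSIX route described above looks roughly like this (Unix-only, as per the footnote; error handling trimmed for brevity):

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();                      // clone the current process
    if (pid == 0) {
        // Child: optionally replace ourselves with a different program.
        execlp("echo", "echo", "hello from the child", static_cast<char*>(nullptr));
        _exit(127);                          // only reached if exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);                // parent waits for the child to finish
    std::printf("child exited with status %d\n", WEXITSTATUS(status));
}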
Go read the explanation of how Spectre works. It's capable of accessing data in another process. Go look at section 2 of this paper from kernel.org. This functions only because there's more than one process sharing the hardware resources, such as the cache.
C doesn't, but the libraries that sprang up around C, like pthreads, shm and sem, are all about not radically altering C's model, which is that the process can see all of the memory used in the process; threads are not separated off into a separate address space. The later additions to languages like C++, e.g. std::thread, simply wrap a lot of this stuff up, but none of them break the "process can access all its memory" paradigm. The whole point of architectures such as CSP, the Transputer, OpenVPX and Cell SPEs is that the whole "process" (one's program) cannot directly address all of the memory used by the process; bits of it running in one core have to explicitly ask, or cooperate with, code running on another core at a software level for data currently held by that other core.
The problem caused in designing hardware that panders to C's "see all memory" model is that it requires there to be an SMP architecture, or a virtual one synthesised on top of NUMA (which is partly what we have today, especially if you have multiple CPUs in a machine), and speed requires shared caches or coherency between caches, which then create opportunities for side channels for exploits like Meltdown / Spectre to use.
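To make the contrast concrete, here is a sketch (my own toy example, not CSP proper) of the message-passing style: underneath it is still shared memory on an SMP machine, but the programming model is "send the data", not "read whatever address you like":

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// Toy single-producer/single-consumer channel: the consumer only ever sees
// what was explicitly sent to it, never the producer's memory directly.
template <typename T>
class Channel {
public:
    void send(T value) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(value)); }
        cv_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

int main() {
    Channel<int> ch;
    std::jthread consumer([&ch] {
        for (int i = 0; i < 3; ++i)
            std::printf("received %d\n", ch.receive());
    });
    for (int i = 0; i < 3; ++i)
        ch.send(i);
}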
> Go read the explanation of how Spectre works ... Go look at...
I have, ta. But any new refs are, of course, happily accepted.
> because there's more than one processes sharing the hardware resources
Well, yes, I was going to make that my immediate response: once you have a hole opened up by Spectre et al then it is potluck which bits of the shared resources are then also vulnerable. That is not specifically an issue with SMP:
(0) Spectre et al do *NOT* require SMP to operate - they happily affect single-core CPUs in a single-CPU system
(1) SMP does *NOT* imply all cores are on one die (and hence potentially vulnerable to Spectre) - see chiplets, multi-CPU mainboards, all the way up to multi-cabinet monsters
(2) Shared on-die resources does *NOT* imply SMP - there is no (logical) reason, at the programmer's-model level where SMP occurs, for separate processors (they need not even be CPUs) not to share the lowest-level hardware resources; there is nothing in the scheduling requirements that prevents that[3]
> threads are not separated off into a separate address space.
Well, no, that is the definition of what "threads" are. We use threads when we *WANT* to share resources between them. Curiously, we also have "processes" which *do* separate address spaces (and, where available, make use of the MMU to enforce that - until we use explicit mechanisms to create sharable resources).
> libraries that sprung up around C
And around every other language that provides an inviolable single-line-of-execution model to the programmer - you know, those poor fallible beings who have to actually *use* all this stuff - and we see enough problems moaned about around here because so many still seem to struggle with *that* model :-(
> The whole point of architectures such as CSP ...
True. And it is a shame they haven't taken off as much in the mainstream. Then again, the last Transputer project I was (peripherally) involved with, down in Brizzle, failed because the time spent in comms was vastly outweighing the time spent in actual compute. This may well have been a problem with the programming - *BUT* as they needed a result that actually worked better and they could do that without the Transputer...[3]
> The problem caused in designing hardware that panders to C's "see all memory"
Hmm, I really don't see why you have this idea that C has this "issue" that needs to be "pandered to": a C program is given a chunk of memory and works on that; at root, said program expects to be single-core[1]. But your program only sees the memory that it has been assigned - malloc() can fail before the entire hardware RAM has been exhausted, stack space can be constrained!
Really, at bottom, you just seem to have a big issue with on-chip multiprocessing that has tried (and, as Spectre etc shows, failed) to create optimisation algorithms for the hardware (yes, it is an algorithmic failure, it just happens to be implemented "in the hardware" - well, the microcode and the bits below that) but then want to blame that solely upon the use of SMP (see above - nope) and then blame the popularity of SMP upon C (again, nope).
We could all do without the ills of Spectre - and of RAM-smashing exploits and everything else - but trying to scapegoat C is getting daft. There are other languages around that could have become the low-level lingua franca, C just happened to get there first (and, IMO, not because it was C but because it was dragged along in the wake of popular - not even "the best" - software that happened to be written in C). The programmer's model (which does NOT have to match the reality beneath) of a single-line-of-execution is one that has proven highly productive. The provision of (various) multi-process models on top of that, which are not forced upon anyone, has - issues - but not anything specifically related to SMP (from the p.o.v. of the programmer, that just happens to be the next simplest model up, so, yeah, it'll be popular, go figure) and we have (or can have) very-close-to-C languages that provide sugar to alleviate them (but NOT totally remove; you can work around even the best planned language's restrictions the very second that you allow it to - tada - call into the operating system!).
[1] and? so what? many languages do - in fact, the coder's model for *most* languages do - some higher level ones are clever enough to "do better"[2] but we still want the C-level to build up to those tools.
[2] Prolog can be coded to do parallel searches - but beware, because they have to be resynchronised *and* if the code didn't need to backtrack after all then you can end up dropping the results of a lot of pre-emptive computation; where have I heard that before?
[3] Given the way that Spectre etc crept in, I wouldn't put it past hardware optimisers to be able to create a similar problem within the actual implementation of (some of) those designs, even the Transputer. "The programmer's model hasn't been changed", they'll say...
Backward compatibility gives the illusion they haven't changed much, but they've changed immensely. I almost don't know where to begin.. The memory latency bottlenecks are completely different. Pipe-lining has hidden all those cool little instruction-level optimisations we used to do. Expensive instructions are no longer expensive (except the few that still are). Really expensive workloads can be out-sourced to specialist cores, but only if your algorithm sufficiently batches them. Everything is multi-threaded and dealing with processor contention requires thinking and experimenting in high level abstractions. Overall, algorithm is king, not instruction efficiency, and the ability to radically change your code by swapping in a completely different or even just subtly different data structure or other generic approach with a one-line code change is what gets performance.
I feel for the dogged C programmers because I used to love that kind of tinkering where you understand every single thing your program is doing too but, for results, that way of working has gone the same way as world-beating computer games written by one man in a garage, and fundamental physical laws discovered in a home lab. The world has moved on.
The 8086 family was designed to match the requirements of c and pascal*, so in that sense c is a way of describing the 8086. But not a particularly good way. C describes the PDP-11 better, and in places where the 8086 architecture doesn't match the PDP-11, it's just annoying.
*the CISC contains stack operations for both pascal and c calling conventions
No, the original 8088/8086 was designed to make porting 8080 programs easy, hence the 64K segments and the 8080 assembler translator. The 8085 was an updated 8080, and the 8088 used a related 8-bit bus. The Z80 was an expanded 8080 with some extra features missing from the 8088/8086.
The original 8088/8086 design didn't much care about C or Pascal, or even "real" 16 bit computer architectures with flat addressing.
Well, that is certainly one way of looking at it. But the people who wrote the Intel 8086 documentation (which I still have on my shelf), thought different.
You don't have to get your information from random people on the internet: you can go read the 8086 programmers reference yourself.
the CISC contains stack operations for both pascal and c calling conventions
This is an interesting point, but it's the Pascal ones (ENTER/LEAVE I assume you mean primarily) that are specialist, not the other way around, and you can of course compile Pascal just fine without them. The more basic stack instructions - which allow virtually any calling convention you care to imagine - date from the 8080 which was designed essentially concurrently with C and so can't have been influenced by it.
Just to emphasise the point, the 8080 Assembly Language Programming Manual is awesomely quaint in its explanation of stack instructions:
Before understanding the purpose or effectiveness of the stack, it is necessary to understand the concept of a subroutine. Consider a frequently used operation such as multiplication. The 8080 provides instructions to add one byte of data to another byte of data, but what if you wish to multiply these numbers? This will require a number of instructions to be executed in sequence. It is quite possible that this routine may be required many times within one program; to repeat the identical code every time it is needed is possible, but very wasteful of memory. A more efficient means of accessing the routine would be to store it once, and find a way of accessing it when needed. A frequently accessed routine such as the above is called a subroutine, and the 8080 provides instructions that call and return from subroutines. When the "Call" instruction is executed, the address of the "next" instruction (that is, the address held in the program counter), is pushed onto the stack, and the subroutine is executed. The last executed instruction of a subroutine will usually be a "Return Instruction," which pops an address off the stack into the program counter, and thus causes program execution to continue at the "Next" instruction
It doesn't get any more basic than this. It's a processor designed for assembly.
(C compilers meanwhile quite happily support a myriad of calling conventions and the originally common 16-bit ones have little relevance to modern 64-bit architectures)
A stack is the easiest way to implement a procedure or function call with parameters and return address on the stack. A return value can also be put on the stack.
Only the most primitive MPUs, such as the PIC16 and later PIC18 from Microchip, have no variable-size stack. Originally a Peripheral Interface Controller (the PIC1640 in 1976), it can only save the current address when there is an interrupt, though C has been ported to it.
The 8088/8086 documentation may claim a connection with C etc, but in fact that's nonsense, because special features had to be added to C compilers because of the 64K architecture and instruction set from the 8 bit 8080. The 8088/8086 is a crippled 16 bit cpu compared to the 80286, which was also inferior to contemporary 16 bit CPUs, some of which pre-dated the 8088. The 8088 was chosen for cheapness and ease of porting CP/M applications and HW in the 8 bit ISA slots. Not because of any inherent support for C.
C doesn't "describe" the underlying processor, unless your code is peppered with near and far keywords in which case you're probably writing DOS or Win 3.1
C exists to be an abstraction over the processor, i.e. writing machine code is a pain in the arse and 10x worse if you're targeting multiple processors, since each demands its own machine code. So rather than _write_ machine code (or assembly) per processor, Kernighan & Ritchie wrote a language and compiler which was an abstraction over a processor and produced reasonablish machine code for any processor the compiler could target. And it was abstract enough to hide most of the processor detail but not so abstract that it couldn't do pointer-ish things to memory if you wanted.
Which is all well and good to a point but modern C has to deal with threats and issues that ancient C never had to concern itself with. And it's not fit for purpose in that role.
> C is a way of describing the underlying processor - which haven't changed much in the last 50 years
Funny you should say that, because a critique of C made several years ago was that processors *had* changed in the past few decades, and that while C's programming model may have been close to the metal at the time it was invented, "C is [no longer] a low-level language [because] your computer is not a fast PDP-11".
Not sure about COBOL, but C is a decent language for embedded development, to get things bootstrapped so you can then move to higher-level and safer languages.
Rust is an attempt to provide a safer alternative, but it will take time for that to happen.
And a C compiler is easier (and cheaper) to get up and running when new chips/boards are involved.
That being said I have not personally used C in years.
Nowadays for the things I would have used C/C++ for (other than board bring up) I would instead use Kotlin, Java, C#, JavaScript, Python, PHP or Rust.
And, of course, there is a lot of C code out there that will either need to fade away or get rewritten. Rewriting costs time and money and just throwing money at the problem is no guarantee that it will work and lead to better code than what we have now.
It's also a perfectly decent language for embedded development where your system is so resource-constrained that you still need the efficiency offered by C. I've just passed my 27th anniversary as an embedded systems engineer, and other than a bit of asm in the early years, every line of firmware I've written has been C, and for the foreseeable future that isn't likely to change much if at all.
I like to fling *some* C++ into my embedded development, mostly using it as "a better C"[1]
Congrats on the anniversary.
[1] if the compiler support is there, of course. But considering that some platforms barely even handle C decently...[2]
[2] e.g. a Zilog ZNEO dev kit where the C compiler was so - grrr - that I set the Makefiles to use GCC's pre-processor before the ZNEO compiler, otherwise it could not handle the macros to readably build the bitfields for various GPIO config registers :-(
> COBOL is that it wasn’t designed by academic computer scientists
Grace Hopper was a computer scientist.
At the risk of quoting Wikipedia "Hopper was the first to devise the theory of machine-independent programming languages, and used this theory to develop the FLOW-MATIC programming language and COBOL".
> focused on business applications, something the academics weren’t interested in
Pick a subject, any subject, and you can find an academic interested in it (if not, that is just because there aren't enough academics around and they're all busy on something else). Even an area like "A study in the use of social media influencers as an insult in tech-oriented fora" could get somebody a couple of papers and a quick thesis.
> given the social environment of the day, I would not be surprised if part of it was sexism given the leading role women played in its initial development
Sorry, what? What was caused by sexism? The fact that COBOL *was* taken up by its intended audience? Or do you think that its adoption was artificially slowed by all those manly FORTRAN programmers barging their way into banks and ripping up the COBOL manuals? Or was Hopper trying to get girly lowercase keywords into the language? Nope, sorry, you're going to have to expand on that one before I have any idea what it was referring to.[1]
> COBOL is derided today, because the bias has been institutionalised by those who only pay attention to social media influencers.
Cor. So that means when I was being taught (and - briefly, thank the FSM - using) COBOL back in the 1980s we were *really* ahead of the curve in deriding it, especially its more - interesting - features!
[1] Not saying that there wasn't sexism in 1959, just - what effect are you referring to?
Interesting take.
And yes, Grace Hopper was one of the greats upon whose shoulders we all rest.
Ada (named after Ada Lovelace) is another language that has lost favor. It was popular in military and aerospace applications because its rigid syntax and type checking of the interfaces between pieces of code allowed many things that would get past a C compiler to be caught before runtime.
Complaint against Ada was that it was too rigid and slowed programmers down, and so things moved over to C/C++.
One reason Ada didn't catch on better, at the time, was the size of the symbol table the compiler had to keep in memory in order to do the required type checking -- this was back in the day when 256 or 512K was a lot of expensive memory for a PC. You could buy a cheap C compiler that ran in 64K or less, or the one Ada compiler for DOS that required a 286 processor and a special memory card that cost more than the original PC. Unsurprisingly, most budding programmers went for C instead. Ada95 expanded the language to make it less inflexible and of course computers are massively cheaper and more powerful now, but by then it was already too late.
re "the new Cobol", in my experience writing financial software that would be Java, or at least JVM-based languages. IBM threw its weight behind Java (at least for a while the only way to talk to an AS/400 was through a Java API), and there is just a boatload of useful Java libraries available.
(at least for a while the only way to talk to an AS/400 was through a Java API)
Somehow I seem to have missed that period. I started working on the AS/400 before Java was even invented and I am still working on the successor to it. I've seen Java come and go (you can still run it on the iSeries or System i, however IBM likes to call it this month) but it just isn't as popular as COBOL, which is a lot less popular than RPG(LE) these days.
Yes, "only" was too strong a word. But after IBM dropped support for SNA over twinax connections, I found the iSeries Toolkit for Java extremely useful. In theory I could have used TCP/IP instead, and written my own server-side interfaces (and that would have been a fun project) but my boss would have wanted to know how long that was going to take.
I don't think Ada ever had favor, outside of some government contracts where it was specified. I got to wonder how many contracts were awarded to companies that bragged (lied) about their "Ada experience", with the expected results from inexperienced programmers.
I also don't think things "moved over to C/C++" -- my memory was that C was established in much of the world long before Ada, and things FAILED to move to Ada, unless they had to. By the time Ada was being talked about, much of the world had a computer that could run a good C compiler; few had access to anything that could run Ada. Things STAYED with C.
I have no hard numbers, but I can't think of anything in the consumer world which was written in Ada at any point in the last 45 years, and of course, a lot was and is written in C.
Yes, C is the new COBOL in the sense that nobody has come up with a viable backwards compatible language which would allow a gradual migration of an existing program to the new language. COBOL programs can realistically only be replaced with entirely new programs, not gradually migrated. This means that COBOL programs often live long beyond the lifetime of their creators.
So the people who see C as equivalent to COBOL are essentially saying that C isn't going away anytime soon, and it will be underpinning the foundations of the computing world for decades to come, and that anyone who wants to work closely with those underpinnings is going to have to know C.
C is not my favourite language, but I'll work with it if that's what I have to do to get the job done. And a lot of jobs that I need to get done require knowing C.
The problem with a lot of proposed replacements for C is that they look like the new Ada, languages that were oversold on their supposed security and reliability benefits and which will fade away once they are no longer trendy. It shouldn't surprise us that the US government were the main driving force behind Ada back when it was relevant. It seems they want to repeat that "success" again today now that a new generation of managers have taken over there.
The one supposed successor to C which has been successful is C++. That also is not my favourite language, but there is no denying that it has been hugely successful and will no doubt be with us for a very long time to come. The reason why C++ was hugely successful is that it was designed as a backwards compatible development of C. That provided a migration path for those who wanted one. C++ is far from perfect, but it was a very practical solution to a real problem.
What is needed for a C replacement for existing projects is a language that takes a page out of C++'s book in terms of extending the language, but this time in a very minimal way, to solve the perceived problem while remaining backwards compatible. It doesn't have to incorporate every new feature that people have dreamed up since C was first created, it just has to solve the problems that are found in most major existing C projects.
And the reason that C++ is far from perfect is that it was designed to be backwards compatible with C.
This is where we hit a problem: you can't move programming on without getting programmers on-side, but having programmers on-side means not making the improvements that are badly needed.
The issue is not whether programmers are "on side" with a language. The issue is that loads of important existing major software projects are written in C and we are asking what can we do to improve security in those programs.
Sure you could simply re-write them from scratch in whatever your favourite new language is, but that is a huge, expensive, and very long term project and that is the thing that programmers are not on side with.
So the question is, how can we start from where we are now and get better versions of the software we already have. I think the answer is to provide an evolutionary path forward from where we are now instead of telling people to go back several decades in development time and find a different path altogether starting from there.
The reason why C++ was hugely successful is that it was designed as a backwards compatible development of C
People use C++ because it's fast. People who've never touched C use C++, because it's fast. I learnt to code in the early 90s and I can only code C because I learnt C++ badly first before realising how great C++ was by comparison.
Rust is fast too.
Ada.. Ada is fast too. People keep throwing this word around like it's proof C/C++ are okay. Did you think maybe Ada was simply a victim of being ahead of its time (or may even yet have its time now that Rust is providing real competition)? 25 years of C/C++ code getting pwned on a daily basis because of dumb memory bugs and still you can't see the problem that's long overdue being solved.
As long as a language can interface with legacy C/C++ it's fine. If it has syntax that's quite similar (as Rust does) even better still. If it has a community so much larger than Ada ever had, golden. It's 4 years now since I came around to the reality that it really isn't going to be a fad, and I haven't for a minute regretted the couple of months it took to learn it.
People are against it because, so highly ironically, a lot of people involved in the ever-changing world of high tech...are personally resistant to change when said change means they must change.
Understand that this pattern affects all of us and is all around us: cell phone reviewers will downgrade any phone that doesn't behave like an iPhone or a Samsung. Windows changes a button location and it's the End of the World as We Know It (ahem, regtards). GNOME changes and all hell breaks loose; init is replaced by something [they state is far more manageable] and it's heresy.
For a group of people intrinsically involved in one of the cutting-edge social endeavours...I am constantly amazed at how change-resistant individuals have become.
You got into programming knowing that it was a moving target, that change would come to you constantly and that one of the requirements of the industry was constant study and updating of skills. I am not one of those personally but those on the front line of the core of the structures - programming, servers, networking, chip design, etc - know this going in.
Yet they forget and then resist, when said resistance allows them a personal benefit even though it hurts the industry as a whole.
"C is powerful! C is the Swiss Army knife of languages!", I've been told. Yes, and just like a Swiss Army knife, it is useful for a lot of things...but nothing is specialized. You can cut a rope and crudely filet a fish, but you wouldn't want to build a car with it.
The comparison might be more apt than they imagined: C is also the Swiss cheese of languages. Many, many possible holes and, maybe, too many to patch closed. That's the point of the FBI et al statements mentioned here in this article, but that stands in irrelevance compared to our comfort with the language, fanatic C supporters essentially claim. C was developed at a time of low resources, limited I/O and compute power compared to today's hardware - we all, compared to when C was created, are sitting in front of mainframe-level power. Multi-core, multi-threaded, multitasking behemoths if we brought this hardware back in time when C was created. So now humans are responsible for dealing with all this complexity and - forget your egos - it is almost humanly impossible to do it without errors. Period. Stop your ego denials saying that "Real programmers can handle it!"
They haven't so far, and never will. Humans are fallible, and you saying that you, especially, aren't is stupid at this point in time.
You have the processing power to help you in your endeavour of reaching programming Nirvana. Use it. Leverage it. Get over yourself and allow the tech in front of you to help instead of believing yourself to be the uncontested master of every aspect of its operation. You aren't. Sorry. Unless you also laid out the circuit design and microcode in that CPU you are using, there are (many) things in these boxes that are beyond your ability and/or reach.
So, use the best modern tools you have available.
"resistant to change when said change means they must change."
Change for the sake of "trendy new methodology" means taking something that mostly works and likely has a long history behind it...and rewriting it from the ground up in a different language which, given how people like to fiddle with things and managers like to optimise (read: cut costs), this will essentially mean that you'd be swapping one set of potential problems for a different set.
Often the change is from a limited, hacky approach that, while it works, is very much baling wire and chewing gum. It "works" with constant intervention from developers.
The new way allows integration with diagnostics (with zero effort: work already done) so requires less support.
In reality this often isn't about anything except an older developer trying to quietly make himself (IME always a man) invaluable.
In any long-maintained piece of code, the vast majority of the "hard" problems have long been fixed - and probably nobody remembers how or why.
If you attempt to re-write from scratch using something that prevents trivial mistakes, while you'll avoid some types of trivial - even stupid - errors, you're certain to repeat some of the "hard" mistakes and make some new ones.
If instead you slowly transition the existing codebase, you're more likely to fix the trivial/stupid mistakes while keeping the difficult fixes.
"Often changing from a limited hacky approach that while it works it is very much bailing wire and chewing gum. It "works" with constant intervention from developers."
This is it EXACTLY and the topic of the FBI's statement. It works...by spit, bailing wire, and prayer. But they are SO entrenched in their righteousness of C's "superiority" that they'll never change, regardless of the fact that not a single piece of modern C code as EVER been found NOT to have bugs.
Not. A. Single. Damn. One.
But that doesn't matter, right? It "Just Works!tm". We don't have to change because we get the code out... just ignore the bugs, because they aren't really happening. The bugs are due to "inferior" programmers... of course our project has bugs, but they don't count.
And so, as usual in today's society, nothing really gets fixed. Nothing really improves. Nothing really changes. And we all get Pikachu-faced when having to deal with yet another dangerous zero-day, yet another CVE, yet another patch, yet another incursion, yet another ransomware attack. Because it's not really happening, because C is SOoooo good and worthy and perfect in every application of usage that they couldn't be happening, could they??
So. It's better to keep the 40 year-old beater BL Princess on the road, the one with the rotting frame and 2 leaking Hydrogas bags, because you already know the problems, rather than trade it in for anything newer...because the problems you (might) get from the new one aren't worth dealing with.
How does that comparison work for you??
The more-magic the magic, the larger the explosion when it goes wrong.
Sounds like Pratchett's Discworld Unseen University Wizards views on the use of "magic" from which they mostly and wisely refrained. They might have quibbled about the "goes wrong" condition as when magic "goes right" the consequences are often even more devastating.
"resistant to change when said change means they must change."
The other thing I've seen is a new developer fresh from some training course on some new language/library/tool set/ etc comes in and rewrites a block of legacy code in new whatever then leaves and you end up with a maintenance nightmare.
If it ain't broke, don't fix it!
Well, I suppose it is always good to have rant, although I do think you might want to tighten up your focus.
> You got into programming knowing that it was a moving target, that change would come to you constantly and that one of the requirements of the industry was constant study and updating of skills
True. Although I'd represent it as something that has constant additions (and, yes, this does mean that I look upon a "change" - where we've lost the previous rather than just had a new option added - with suspicion, because I've tried to make things have a decent long service life and be capable of holding up for, say, a 25 year life as a part of a large and expensive piece of machinery).
> people intrinsically involved in one of the the cutting edge of social endeavours
Whoooah there, Nelly. I never got into this to be part of any "social endeavours"; I'm not trying to "overturn the world" or "create a new paradigm" and I don't personally know anyone who was! It was a fun job to do, always lots of new stuff to learn, problems to solve and you got the pleasure of building things that Users, well, use. Ok, some of the Users were...
> I am constantly amazed at how change resistant individuals have become.
> are personally resistant to change when said change means they must change.
But then you mix up changes that are clearly just marketing and have removed functionality (Windows buttons - and the taskbar, grr), changes that were done against expressed wishes of users (GNOME were told beforehand), changes that have genuine technical concerns being expressed (replacing init - ok, there is also a lot of politicking going on there).
And you even mix up complaints from the techies, the ones that actually have to deal with the ugly bits, with reviewer playing marketing wank (the cell 'phones bit).
> (long comment about C which anyone who uses C already knows - it came from the 1970s, lots of much bigger machines exist now - hey, bigger machines existed in the 1970s than the ones that C targeted)
And totally ignoring that many, many (have I said "many"?) systems are written NOT in C - the vast majority, in fact. Web site creators - do they write in C? Building a new database - are you using SQL or trying to do it in C? Text processing systems (mailing lists, documentation aggregation) - are you using C or are you using, oh, LaTeX or a Markdown processor or Word?
> So, use the best modern tools you have available
Yup. I and those I've worked with have done just that. As do all the people doing the jobs I referenced above.
Trouble is, many (frankly, probably all) of the things that you hear being shouted about are not necessarily the best tools - they may be the best *marketed* tools instead! But we (in my Corner) have to be sure they are going to pass the test of time - and that they are clearly designed with the intent of longevity![1] Or we'll be piling up even more problems for ourselves.
Oh, and strangely enough (!) there are still situations where C (and we'll lump "old fashioned" C++ in here as well) is still the best, stable, modern solution: all those boring little embedded systems that you probably don't even notice. Hopefully safer options will emerge (the MCUs are big enough now to run Python - but you can save on the BOM by using a smaller device...) but we need those to have stability and longevity.
> when said change means they must change.
(Back to this again): for an awful lot of people, who is *paying* for them to change? Is *anyone* paying for the change?
[1] Are you supposed, on day 1, to rely on the presence of a server "somewhere on the 'Net", in order to pull in modules just to make "Hello World" work? Or is the default install totally self-contained, can be put into Escrow for 10, 20 years and still work?[2]
[2] Sorry, sorry, don't get into my own rant, or I'll be going on about throwaway code and the problems/waste that causes.
@that one in the corner
> Building a new database - are you using SQL or trying to do it in C?
Here I suspect the answer is C: you will, if you want anyone to use the database, provide SQL as its standard query language, but the engine itself gets written in C. I recall that Michael Stonebraker wrote that his team thought it would be useful to write Postgres in LISP, but the need for reasonable performance drove them to C. (Or was it C++? It has been a while.)
But in general, I agree with you.
"> people intrinsically involved in one of the the cutting edge of social endeavours
Whoooah there, Nelly. I never got into this to be part of any "social endeavours"; I'm not trying to "overturn the world" or "create a new paradigm" and I don't personally know anyone who was! It was a fun job to do, always lots of new stuff to learn, problems to solve and you got the pleasure of building things that Users, well, use. Ok, some of the Users were..."
OK, well, think. Why did you get involved in computers and programming them? Because you enjoyed, as you say, the problems to solve - both programmatically, as a logic problem, and in creating something useful and meaningful. Because we all enjoy doing that: the satisfaction of solving problems and creating solutions.
It's wired into the male psyche, which is why programming is a mostly-male occupation. We enjoy fixing things, be it for our significant other who can't get the toaster working, or for society when we identify a new computer use that needs a solution. This is where the personal satisfaction of getting involved in a FOSS project comes from, and how that is leveraged from simple volunteers to create something greater.
So yes, if you search your own motives very deeply, you'll find the satisfaction of a job well done when you solve a problem and create something beneficial... to yourself, or to someone else. Therefore you enjoy, and got involved with, all this to, fundamentally, try to make a difference. It is the thing we all want to do with our lives.
People are against it because, so highly ironically, a lot of people involved in the ever-changing world of high tech...are personally resistant to change when said change means they must change.
This 1000%.
I've seen it multiple times: a "senior" (in nominal experience, but not in ability) keeps repeating the same approach, even after training and all the support they needed, when that approach was based on an already unsupported platform and used a hack to integrate. The (then) current way would have been less effort, but would have required a little thinking.
Luckily we can add C++ constructs to C code over time which makes it quite easy.
Let us remember the wise words of Isidore of Seville, who said in the 7th century, "where there is a fixed-length buffer, may we have a string. Where there is error, may we have exceptions. Where there is a malloc, may we have scoped variables. And where there is a hand-crafted array, may we have a vector".
In C I once wrote a system that had structures preceded by headers with a 32bit unique ID and a 32bit length. (Would be a 64 bit length now I suppose.)
Then any pointer could be checked (by a macro) for being preceded by a header of the correct type, which could spot a huge number of issues, and could also (again by a macro) be checked against the length specified in the header. Macros also dealt with allocations and frees to maintain the headers.
The macros became empty in the production build, and so execution time was not compromised.
The forerunner of C++ classes I suppose, although implemented in a very different way.
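A minimal sketch of the kind of header-tagged allocation described above, assuming the header carries a type ID and a length and the checks compile away in production builds. All names here (hdr_t, alloc_tagged, CHECK_TYPE, ...) are invented for illustration; the original system presumably looked different.

/* Header placed immediately before every allocated structure. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t type_id;  /* unique ID identifying the structure that follows */
    uint32_t length;   /* payload length in bytes                          */
} hdr_t;

/* Allocate 'len' bytes tagged with 'id'; returns a pointer to the payload. */
static void *alloc_tagged(uint32_t id, uint32_t len)
{
    hdr_t *h = malloc(sizeof *h + len);
    if (!h) return NULL;
    h->type_id = id;
    h->length  = len;
    return h + 1;                 /* payload starts just after the header */
}

static void free_tagged(void *p)
{
    if (p) free((hdr_t *)p - 1);  /* step back to the header and free it  */
}

#ifdef DEBUG_HEADERS
/* Assert the pointer really is preceded by a header of the expected type. */
#define CHECK_TYPE(p, id)  assert(((hdr_t *)(p) - 1)->type_id == (id))
/* Assert an access of 'n' bytes stays within the recorded length.         */
#define CHECK_LEN(p, n)    assert((n) <= ((hdr_t *)(p) - 1)->length)
#else
#define CHECK_TYPE(p, id)  ((void)0)   /* empty in the production build */
#define CHECK_LEN(p, n)    ((void)0)
#endif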
In C I once wrote a system that had structures preceded by headers with a 32bit unique ID and a 32bit length
A bit like VMS argument descriptors, from the 1970s? Available to C and every other language which used the VMS calling standard?
In C I once wrote a system that had structures preceded by headers with a 32bit unique ID and a 32bit length. (Would be a 64 bit length now I suppose.)
Possibly...but the ID would be 128 bits....
I did this too. Definitely a smart thing to do back in the day, and not really that hard given that you could hide malloc/free behind an evil macro.
If you look at the standard libraries for modern operating systems, they may have "moats" before and after allocated memory to (1) validate freeing a pointer is legit and (2) detect when something has written past the memory boundary and scold you on deallocate.
For the last embedded project I worked on (mid 00s) I had moats on both sides of calloc/malloc (also, malloc always calloc'ed for extra safety in #debug code) that would tell me if anything got double-freed or wrote past the memory boundary (again, in #debug code). It would assert on free if anything bad happened and tell you what line the memory had been allocated on and what line was doing the free when the problem was detected. I eventually used this same code in Windows and Linux projects I worked on, even though those OSes will not kernel panic like an embedded OS will when you are too naughty. Having the extra information of where the memory came from and what line was freeing it when the problem was discovered was nice.
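A rough sketch of the "moat" idea, assuming a debug-only wrapper that plants guard bytes either side of the payload and records the allocation site; the names (dbg_malloc, dbg_free, MOAT_SIZE, ...) are made up here, and alignment padding is glossed over for brevity.

#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MOAT_SIZE 16
#define MOAT_BYTE 0xA5

typedef struct {
    size_t      size;   /* requested payload size           */
    const char *file;   /* where the allocation was made... */
    int         line;   /* ...for the eventual error report */
} dbg_hdr_t;

void *dbg_malloc(size_t size, const char *file, int line)
{
    /* calloc for extra safety, as in the comment above */
    unsigned char *raw = calloc(1, sizeof(dbg_hdr_t) + 2 * MOAT_SIZE + size);
    if (!raw) return NULL;
    dbg_hdr_t *h = (dbg_hdr_t *)raw;
    h->size = size; h->file = file; h->line = line;
    memset(raw + sizeof(dbg_hdr_t), MOAT_BYTE, MOAT_SIZE);                    /* front moat */
    memset(raw + sizeof(dbg_hdr_t) + MOAT_SIZE + size, MOAT_BYTE, MOAT_SIZE); /* back moat  */
    return raw + sizeof(dbg_hdr_t) + MOAT_SIZE;                               /* user block */
}

void dbg_free(void *p, const char *file, int line)
{
    if (!p) return;
    unsigned char *user  = p;
    unsigned char *front = user - MOAT_SIZE;
    dbg_hdr_t     *h     = (dbg_hdr_t *)(front - sizeof(dbg_hdr_t));
    unsigned char *back  = user + h->size;
    for (size_t i = 0; i < MOAT_SIZE; i++) {
        /* a real version would print h->file/h->line and file/line of this free */
        assert(front[i] == MOAT_BYTE && "front moat trampled");
        assert(back[i]  == MOAT_BYTE && "back moat trampled");
    }
    (void)file; (void)line;
    free(h);
}

/* Typically hidden behind macros in #debug builds, e.g.
 *   #define malloc(n) dbg_malloc((n), __FILE__, __LINE__)
 *   #define free(p)   dbg_free((p), __FILE__, __LINE__)     */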
Saint Isidore of Seville was indeed a remarkable divine. But I suspect that the poetically modified quote here is the one widely referred to as the "prayer of Saint Francis" (of Assisi), even if not actually traced with any real certainty to him. Unluckily for Saint Francis, Mrs Thatcher used it - somewhat ironically - on the steps of No 10 Downing Street when addressing the press after winning an election. It's not yet really fully recovered from that experience, but in a hundred years time I wager that while no one will remember that detail, people will still be reciting it in English and in other languages. With a little effort it could no doubt be rendered in Perl, C, Rust or even COBOL. Q: which (after exhaustive and professional optimisation efforts) would win in terms of execution speed and resource usage ?
Francis of Assisi was the patron saint of animals and poor people, so I took some liberties and chose a more suitable saint.
Thinking about it some more maybe Assisi is the right saint after all...
and completely virtualise all memory management without even having to change any source code (all the oh-so-clever C hacks that misuse the lack of such things would of course then break). The main problem is that your operating system compiled with it will now be 10 times bigger and 10 times slower than all your competitors. So who's going to jump first?
Speed differences can be huge - a factor of 1000 between fastest and slowest. When I first got into programming the first DOS PCs, I wrote some tests in 8086 assembly, and the same tests in C, COBOL and BasicA.
Assembly = Transwarp.
C = Warp 9
COBOL = impulse drive
BasicA = Are we there yet? Are we there yet? ...
In a college project (30 years ago... God, I'm old!), I used some embedded assembly code in my Visual C++ project to load a polygon vector file. While most of my colleagues' code took several minutes to load the largest files, mine did it in a couple of seconds - it loaded the first file so quickly that the teacher first thought my code just didn't work; he only realised it had done its thing when I told him to press the menu button to show the 3D object, and everything worked.
I think that since then there have been two major changes: a massive expansion in the CPU instruction set, and compilers. I suspect that some tasks (encryption, video encoding, vectorisation) that modern computers run could not realistically be hand-written in assembly anymore. And I think there was a paper around 20 years ago that demonstrated that automated compiler optimisation was better than hand-crafted code: this is, of course, really applying all the lessons learned over the decades.
That's not to say that you can't still write code that compilers can't optimise to death – because you obviously still can – but that the goalposts and in fact the whole playing field have been transformed.
But this is still not a plea for not learning how to do things right – in the programming language of your choice – because that will still always make a difference.
Could be written in assembly, but would take longer for likely no performance benefit, may even be slower. Any opcodes a compiler throws down could be done in assembly or direct binary coding.
It was fun and frustrating coding in assembly. Not worth it these days except for rare circumstances where you need to do something weird on bootstrap.
I mean, sorta. But compilers are way better than they used to be. I can write software in 68k Assembler which is faster than the code I write in C (and the code I write in C is pretty nifty - over 30 years of experience have to count for something). But, on a modern architecture, the C compiler builds more efficiently than I can do in, for example, ARM64 assembly language.
>Assembly = Transwarp.
>C = Warp 9
That's perhaps the only big change in the last 20 years.
You used to do the fast bits in assembler, even if you just did the maths in SSE.
In C you learned loop unrolling, bit shifts and register variables yourself.
Then C compiler optimization on modern CPUs became magic, and now the fastest code is the most simple, obvious, naive C.
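A sketch of the sort of "simple, obvious, naive" C meant above. With a modern compiler at -O2/-O3, a plain loop like this is typically unrolled and (given a suitable target, and flags such as -ffast-math to permit reordering the floating-point reduction) auto-vectorised; hand-unrolling or manual bit-twiddling rarely beats it and often gets in the optimiser's way.

#include <stddef.h>

/* Plain dot product: no unrolling, no tricks - let the optimiser do the work. */
float dot(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}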
I spent many happy years working with COBOL on IBM mainframes. The way data was defined made immense differences to the compiled size of programs (and presumably the run times, but memory usage was more important in those days). COBOL was very good at converting data from one format to another, but it generated vastly different amounts of assembler code.
As title.
COBOL is readable. Traditionally it is also run on industrial strength operating systems which simply don't allow buffer overflow nonsense.
You try to write A to B and B is too small: everything stops. Dead. No overflow. Can't happen.
Perhaps it's time to be realistic about C?
It's a coder hostile language, and makes most assembler languages look easy.
Half the problem with the rising C greybeard problem is the macho hostility of its proponents, as commonly seen in Linux communities where newbies are derided for not being experts.
You've reaped what you sowed.
> It's a coder hostile language, and makes most assembler languages look easy.
You'll not have coded in assembler, then.
C is essentially a portable assembler language, based on an abstract machine which supports sequential processing of instructions, branching, linear memory layout, and little else. As with assembler, you can code pretty much anything in C, but it doesn't hold your hand, and it requires that you know exactly what you're doing (in that respect it is demanding rather than "hostile"). As with any language, you may write good or bad code in C. C is appropriate to some programming tasks, and inappropriate to (many) others.
FWIW, I suppose I am a C "greybeard", but I did code a bit in COBOL back in the day when I worked in the telecoms industry. It was horrible, but rock-solid. Several of my contemporaries from that time continued for decades to earn top dollar as contractors in the finance sector (as far as I know, some still do). The first language I learned was Fortran 77, and I still have a soft spot for it - no buffer overflows there either (I think that was only introduced in Fortran 90 ;-)) I have programmed in several flavours of Basic - that was okay, it got the job done. I was attracted to C++ for a while, then backed away when it crawled too far up its own arse (as did my code). I find Java unpleasant and Python offensive - I'm not quite sure why. I have no experience with Rust, and probably never will; it is not useful to me. As a mathematician, statistician and research scientist, nowadays I code mostly in Matlab, for which I write C plugins for non-native functionality requiring high performance. I would probably have switched by now to Julia, which is rather lovely for scientific programming, if not for the fact that it's not quite mature enough (yet) in terms of libraries, and in any case Matlab is still de facto standard in my research area.
It's not one-size-fits-all. Never will be.
>” > It's a coder hostile language, and makes most assembler languages look easy.
You'll not have coded in assembler, then.”
I have - having written an OS for the 8086/286 in ASM, a C code generator for the 8086 platform which supported segmentation, and back in the day some Unix device drivers in C - and I agree with the original post that it can make assembler seem easy. However, I much prefer to use C over assembler.
With (x86) assembler it is harder to do some of the things that C will do for you, i.e. you have to do some things deliberately rather than inadvertently. Buffer overflow has always been, and will always be, an issue in languages that support string processing / runtime variable-length parameters. However, with assembler you can get bogged down in the details and so lose the thread of the logic you are actually trying to code.
I've also done a little assembler yonks ago (M68000 - a much "cleaner" architecture - and assembler - than the 8086/286, IMHO). But no! No way would I say it is easier for anything than C (except arguably some device-driver stuff, where you might well have to drop down to assembler anyway). I mean, come on - shunting data between registers and addresses, branching and looping all "by hand"... you'd kill for an if(), a for() and a couple of named variables - and let's not even talk about functions! So... really? Not buying it. That's pretty much why C was invented.
Motorola's VERSAdos 68000 structured assembler let you use a few control constructs, if, for, while, etc. and expressions. Of course it converted these to the appropriate machine code at assembly time. It had many other nice features controlled by source directives.
It was good for making code more readable/structured, and a total pain in the neck when switching development platforms: moving to one without such a clever assembler meant ploughing through all the source to rewrite those bits.
> Motorola's VERSAdos 68000 structured assembler let you use a few control constructs, if, for, while, etc. and expressions.
Is that right? It was a long time ago... in the 68000 version I used (on Alpha Micro minis - nice machines, never saw them outside BT) I recall labelled subroutines and something similar to a Basic GOSUB - and macros! - but that's about it.
VAX/VMS assembler (there was, grudgingly, such a thing),
Nothing grudging about it. Macro-32 was an extension to the PDP11 Macro-11, and was provided with all VMS systems. Nice assembler, too, I wrote quite a bit of system code in it.
I don't recall there being if and for opcodes, though, are you sure you're not confusing it with Bliss, the "structured assembler" that was used to write much of VMS itself?
When the VAX-11/780 was first released, it came with a nice collection of languages (6 of them IIRC), all interoperable (i.e. you could write a module in VAX COBOL and another in VAX C, and another in VAX FORTRAN, and they would all link together, and allow you to call routines from one into any of the others seamlessly (and transparently). However, an assembler was pointedly not included, because DEC didn't think it was necessary. There was a lot of interest in the VAX, of course, including by several (US) Gub'mint agencies. But they wouldn't buy systems unless there was an assembler available. DEC's position was, you had C, you didn't need an assembler. (and their C was quite good, did a lot of cutting edge optimizations and did them well). But, in order to get boxes into these highly prized customers' hands, DEC went and slapped together* an assembler, Macro-32** (as stated above), and the rest, as they say, is history.
*OK, it wasn't slapped together; they produced a high quality software offering as was their wont at the time. It was just done exceptionally quickly.
**Macro-32 bore really very little resemblance to Macro-11, (except perhaps for the name). The primary reason was that the underlying instruction set was quite different, and the assembler directives were necessarily different due to the vastly different architectures of the PDP-11 and the VAX-11/780.
When the VAX-11/780 was first released, it came with a nice collection of languages (6 of them IIRC), all interoperable (i.e. you could write a module in VAX COBOL and another in VAX C, and another in VAX FORTRAN, and they would all link together, and allow you to call routines from one into any of the others seamlessly (and transparently).
Not quite. The VAX Procedure Calling Standard defined the basic interface but some languages pass arguments by value by default, others pass by reference, so you would sometimes need to know what format to use. That might mean dereferencing a C pointer to call a Fortran routine by value, or using %REF in Fortran to pass an address to Pascal. I don't remember any languages being bundled with the system, they were extra-cost products. The early systems I used only had Fortran77, we had no need for COBOL or C, and so didn't buy them.
However, an assembler was pointedly not included, because DEC didn't think it was necessary.
The VAX was introduced in 1977, I've been using them since early 1980 (VMS 2.0), and I don't ever remember seeing one that didn't have the assembler bundled with the OS. VMS 1.0 still shipped many PDP (RSX-11) utilities that ran in compatibility mode, but VMS 2.0 added many more native ones, many written in Macro-32, and they needed an assembler to build them.
Macro-32 bore really very little resemblance to Macro-11, (except perhaps for the name).
The instruction set was obviously different, but the syntax and lexical directives were for the most part copied from the PDP's Macro-11, with additional ones added for the extra VAX addressing modes. It made it fairly easy to swap between the two systems.
"It's a coder hostile language"
You mean it doesn't have training wheels and it will quite happily build code for pointer[-1234] because that's valid syntax even if there's a pretty good chance it's not what was wanted.
"and makes most assembler languages look easy"
Ummm... While C is often called a higher level assembler, the massive benefit of the compiler is that I don't have to deal with all of the tedious bullshit. Stack frames? Register allocation? Remembering what was stacked to unstack it later? Not my problem. Plus you can often write things in C that would take _many_ lines of assembler to do.
"Perhaps it's time to be realistic about C?"
Let's be properly realistic about C. It doesn't go out of its way to hold your hand or cuddle you. But, then again, it was intended for writing operating systems using hardware vastly less capable than the sorts of things we have today.
"You've reaped what you sowed."
I think a lot of the problem is the changing development environment in which it is expected to ship early and ship often, rolling updates, and using the users as testers (though nobody openly admits that). This gives fewer opportunities to get things right. Couple that with the number of systems that are now always connected to a hostile outside world, it poses real problems. But, alas, such things as effective testing and unit tests and a set of employees whose job it is to break things, all of this costs money. Money which is better put into the pockets of shareholders rather than, you know, making sure the bread is buttered correctly. Just look, for example, at how often Microsoft screws up their updates. And they're not alone, just frequent offenders. Changing language can fix a certain class of problem (by sheer virtue of that particular problem being specifically catered for by the language), but it can't fix institutional malaise. It can't fix sales promising the moon on a stick in two months and if you don't deliver it will be the apocalypse, nor can it fix penny pinching, bad management, and poorly written specs.
> Changing language can fix a certain class of problem (by sheer virtue of that particular problem being specifically catered for by the language), but it can't fix institutional malaise. It can't fix sales promising the moon on a stick in two months and if you don't deliver it will be the apocalypse, nor can it fix penny pinching, bad management, and poorly written specs.
Yep, full ACK.
You bet that I am more than capable of writing shit code in any language, no matter how fancy, new or safe: I have done it in Assembler, I have done it in C, I have done it in C++ and now I am doing it again in Python. Only inexperienced people fail to get that the bloody language is not the problem; the doofus taking shortcuts when writing the software is. Like I said, I have been there many times.
COBOL was one of the many languages supposed to be used by non-programmers. As such, their productivity was considered more important than program execution speed: this is one of the trade-offs that has dominated computing from the start and will probably still be with us in another couple of centuries.
> COBOL was one of the many languages supposed to be used by non-programmers
I'd qualify that simply by pointing out that, at the time, "programmers" were few and far between, heads deep in assembler. The "non-programmers" were anyone else who wanted to make the computer do their bidding but not get involved in the icky bits.
From that perspective, FORTRAN was for "non-programmers" (i.e. people whose main job title was "scientist", "engineer", a few "mathematicians" etc) who happened to want to make the machine do all their Mathematical Formulas for them.
> their productivity was considered more important than program execution speed
And we see that in every programming language since: yes, even C[1] gives you[2] more productivity than assembler, at the cost of speed (and size).
> and will probably still be with us in another couple of centuries
Yes. We'll keep on trying to make the optimising compilers make the code smaller and faster, but that is basically the entire point of all of the programming languages.
[1] hopefully
[2] well, for most of us mortals who don't have the x86-64 registers floating before our eyes
I dispute the macho hostility part, not least because some of the best developers I've ever worked with have been women. And young (or newbie) doesn't mean not being expert*. That's purely down to being inexpert. And if you don't know your stack from your heap, or your malloc from your calloc, or if you think that OO code can't be written in plain old C, if you don't know your SOLID - well. You've provided me with all the evidence I need.
* Equally, having a grey beard doesn't convey expertise. I've known some total muppets with decades of 'experience'. Experience of WET (in both senses of the acronym) as far as I can tell.
If your carpenter can't make a cabinet, do you settle for a few planks tied together with gaffer tape so at least you can put something on them, do you change your requirements so that you're actually okay with a bare plank, or do you hire a competent carpenter?
This buffer overflow conundrum is corporations wanting to hire cheap, incompetent carpenters and somehow force them to make great cabinets, to increase profits.
Or kicking square peg into round hole.
Nonsense.
Just pay up for the skill!
"... do you hire competent carpenter?"
Actually, if you want a good cabinet you need a cabinet maker, not a carpenter. Carpenters make big nailed up structures such as roof trusses and such like. And between the two there are joiners who do stuff like hanging doors. So as they say "horses for courses" and the same goes for programming.
I guess I'm just getting old. However, this is a conversation I've heard for over 30 years. "Try out my new language X it's so much easier than Y. No, really, your language will be dead in only a month because the whole industry will decide that my language is better; anyone that doesn't agree with me is an idiot." Most of the time, the PFY that is espousing moving to the new language is advocating some form of scripting language to replace low level code. Rest of the time, the defender of the language to be replaced says no real programmer would want to change it and the sides square off ad infinitum.
To quote the Simpsons, "Willie hears ya, Willie don't care."
Wake me when you're up to paying for a language to be developed that can replace C everywhere it is used, and then pay for it to actually be replaced. I shan't hold my breath. Oh, and no, whatever language you chose doesn't already exist; C has its place and no one has created one that can completely replace it.
The problem with C is coder incompetence, not C per se. Would you also advocate doing away with Assembly Language or Machine Code (oh, wait, you can’t - not without baking Rust, or something similar, into the Control Unit of what would be a hideously baroque hypothetical processor.)
The problem is that many developers these days have no sympathy with, or even understanding of, the processor that executes their instructions. So of course they want to abstract it all away, sweep it under the carpet, pretend it doesn’t exist. But if you do that, how can you write efficient code? Or perhaps it doesn’t matter - just buy a bigger processor and more memory to make up for developer inability.
Now don’t misunderstand me. I do think that there’s a place for RAD tools like Python and BASIC (and many others). They make programming easy for non programmers. They enable them to solve problems without having to worry about memory allocation, mutexes and the like. And if it runs inefficiently, so what? The problem is solved and perhaps, one day, the non programmer will spread their wings, learn about the architecture they’re developing for, and start using professional tools.
Like C. Which, in many ways, can be thought of as a universal assembly language (remove all of the libraries from C and try to print something to the screen if you doubt what I say). An assembler for which a lot of handy libraries have already been written to take the donkey work out of developing software.
The alternative might be a world of virtual machines, underlying which is one that will have been written in C or assembly language - and you had better hope that the virtual machine developers have been sympathetic to the underlying hardware, just as the application developer will need to have been sympathetic to the virtual machine. It's a world of hurt. Rust doesn't need a VM in the sense of the JVM, of course - but you'd better not mention what the LLVM that Rust needs is written in, at least not unless you want a turf war.
Many languages have been touted as C replacements over the years. And yet C persists - even as some of its lauded successors have stumbled. Why? Because, in the right hands, it does what it does well. And if you want to wring every last ounce from your CPU then there's still nothing quite like it. Even if it isn't the cheapest language to develop for (experienced software engineers don't come cheap!)
If you are starting a new project, without a legacy of existing libraries, etc., that are important to it, then you have multiple languages to choose from, with varying levels of ease and functionality. For me Python is OK-ish, free, and tolerably stable and readable. But I hate dynamically typed variables.
The big problem is what to do with an active, large, and complex bit of software written in C. More so if it keeps getting monkeyed with rather than simply bug-fixed and tidied (*cough*Linux*cough*). Then you have a massive ball-ache of a problem! To rewrite it all is a huge undertaking for minimal gain, and it is unlikely that automated tools will work properly AND preserve the safety features of the language du jour you want to change to. If you want to make it a dual-language project you get problems with the interfacing of the two, and with who is press-ganged into implementing, supporting, and debugging the interface aspect. That seems to be the main point of contention recently, made all the worse by the deliberately fluid internal ABI in Linux.
I have a couple of projects in C and won't be changing them, but I would be very hesitant about exposing my C code to the abuse of t'Internet, even though I try to use safe coding practices. Generally I try to put any such situation in an AppArmor profile to contain what it can do when rogered.
> The big problem is what to do with an active, large, and complex bit of software written in C. More so if it keeps getting monkeyed with rather than simply bug-fixed and tidied.
Maybe it ends up as an active, large, and complex bit of software written in Rust. Which keeps getting monkeyed with rather than simply bug-fixed and tidied.
Garbage collected languages are the best application programming languages. They're easy to use and safe and much more limited training is needed to be productive in them.
The only drawback is that the most popular ones (Java and C#) are not compiled ahead of time to native code but are virtual-machine based. Luckily there are other options, like Go.
Not sure where I would put this as a reply, so I'll put it here.
As the title says: each technology/approach/... should be evaluated on its own merits.
Some factors (e.g. existing skills) can be mitigated (training). Attitudes of "I don't want to change" may need stronger action: if you are a developer you should expect change. All the time.
Sometimes not changing should be chosen; just remember the clock is ticking, and ask how much you are willing to pay for support/fixes/... for older platforms with few other users.
Keeping things the same is just stasis. I'm sure many in my great-grandparents' generation felt the same about oil lamps and candles versus this new-fangled electrickery.
When you stop and think about it there really aren't that many places in a computer where you can overflow buffers. It invariably requires some form of user input, I'd guess mostly over a network. The problem's been so pervasive because the Golden Rule for handling this type of data - that the code should be prepared to accept any garbage, at any rate, up to and exceeding the maximum specified for that interface - is so rarely followed. Testing should routinely verify this.
This has nothing to do with 'C'. The language was never intended for writing user-facing applications; it was designed for writing systems and end-user languages. Building in user QA checks might be useful to backstop the QA that should have been done by programmers on the code anyway, but it's likely to cause a cascade of exceptions which may stress the system in areas unrelated to the original problem.
Still, just as COBOL is perfectly usable despite its age I suppose you can't push back against progress. Or churn. So suggesting that maybe, just maybe, the real reason for all of these problems is actually the universal pervasiveness of web technology and especially CGI and the over-use of scripting will probably just get me beaten up......
> When you stop and think about it there really aren't that many places in a computer where you can overflow buffers. It invariably requires some form of user input, I'd guess mostly over a network
* Load a directory path into a buffer - works great, until you find out that MAX_PATH isn't a thing anymore (or it is, but only with respect to certain functions/libraries, which may well be older, deprecated OS API calls - but your code was compiled a while back...). Better yet, those calls may prevent you creating a deep directory hierarchy (so you're safe sticking with them) - until you cd into your "deepest" tree and create another level down using relative paths, then another...
* Format a date using names for months and for weekdays. It works fine for every combo in *your* language...
* Format the total number of bytes of drive space or RAM available: *nobody* can have more than 4 gig of RAM, the CPU can't address it, and there is a limit to how big an LBA is, so multiply that by 512 to get the max possible size of a drive.
* You can only have a max IP address length of 255.255.255.255, so reading the IP assigned to a NIC will always fit into this buffer. Oops, forgot to check whether it is IPv4 or... (a short sketch follows after this comment)
If you can assume it, you can forget to validate it and one day - poof!
It is the failure to validate that gets you, every time. It doesn't matter what your approach to a validation failure is (dynamic buffers or Just Saying No[1]).
[1] although, take care with how you say no - and document it!
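Picking up the IPv4/IPv6 bullet above: a small sketch of sizing (and validating) for the worst case rather than the case you happened to test. POSIX's inet_ntop() takes the destination buffer size and fails cleanly instead of overflowing; the function name print_addr is invented for illustration.

#include <arpa/inet.h>
#include <stdio.h>

void print_addr(int family, const void *addr)
{
    /* Size for the worst case (IPv6), not just "255.255.255.255". */
    char buf[INET6_ADDRSTRLEN];

    if (inet_ntop(family, addr, buf, sizeof buf) == NULL)
        perror("inet_ntop");          /* a validation failure, not a poof */
    else
        printf("%s\n", buf);
}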
Careful there.
If you tell them about the IOCCC they're only going to use it as a rod to beat us[1]!
Quick - we must start up the IORCC to demonstrate the joys of Rust!
[1] Yes, *we* know it is all done in humour, but, well, look at the problems that occur because the cosmologists decided to keep the name "Big Bang" as a joke on Fred Hoyle.
I get that this is an opinion piece.
But even opinions have to be right; you are not entitled to invent your own facts.
The sweeping generalisations, the wrong implications, the offensively poor metaphor distorted beyond any useable parallel.
Never mind that your idea is so poor; it is so poorly argued that reading this was a waste of my time.
I enjoy a good rabble rousing as much as the next guy, but I no longer have patience for brain farts written out in article form.
Now I will ignore any Rupert Goodwins article on C, and probably any on programming.
>I loved Algol-68, but it’s of little use when writing programmes for real-world application.
The problem with Algol-68 was too few compilers. The full language was all but unimplementable and the 68R subset was only available on systems like ICL's 1900 series. I thought it was a pretty useful language and actually wrote a usable program in it, something to simulate (processor) pipeline behavior. But this was the 1970s, another era for computers in general.
The problem with 'C' is that it's too useful, so it gets used for things that it really shouldn't be used for. Combine this with perfunctory testing and the idea that software that doesn't immediately crash ("runs overnight") is working, and you've got a formula for problems. I've worked mostly in the embedded space, where the requirements of product testing mean that we don't just happen across problems; we actively have to look for them (and even go out of our way to cause them). It's a practice I've not seen applications people follow, but that's not necessarily because they're bad programmers; it's just that their development environments tend to encourage "throw it out and wait for the user to report a bug", because there's just too much to test systematically.
The problem with the analogy (C is the new COBOL) is that it ignores the ecosystem that COBOL resides in:
- mainframes whose design is for zero downtime, where upgrades (hardware/software) can be done on the fly
- its friends like IMS, JCL and CICS, which are core to running a COBOL application
Few of the legacy COBOL applications still running can exist outside of this ecosystem, whereas 'C' is free-standing... Unix, for example, was ported to scores of different CPU architectures, as has been Linux.
Thus the reasons for COBOL's stickiness are totally different from the reasons for C's.
The other point is that COBOL, FORTRAN and C came to exist in a vacuum: they became entrenched because they were good, few alternatives existed at the time, and they kept evolving to meet needs.
But now we live in an environment where new compiled languages pop up basically every week, and thus it's impossible for any one of them to gain traction.
Rust is touted as the successor, which might be great for new greenfield development, but it won't help with the large body of existing C projects, both commercial and FOSS.
So if the author means C will still be running the world in 50 years time (as COBOL does in many industries like banking) then they are right.
Bluck
C is the ABI. The one that everything comes down to in the end.
The only realistic replacement for C is C++, because a C application and its libraries can be slowly ported across, and dynamic linking is officially supported between C++ modules without losing any language features - unlike all the languages du jour, presumably because dynamic linking is hard.
However, C++ only supports dynamic linking when it's the same toolchain and, in some cases, the same compiler settings.
So still, everything ends up with a C ABI at the edges.
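A sketch of that "C ABI at the edges" point: a boundary header that a C++ implementation (or anything else) can sit behind and that any C or FFI consumer can include. The widget names are invented for illustration.

#ifndef WIDGET_API_H
#define WIDGET_API_H

#ifdef __cplusplus
extern "C" {               /* give these symbols plain C linkage and the C ABI */
#endif

typedef struct widget widget;          /* opaque handle: no C++ types leak out */

widget *widget_create(const char *name);
int     widget_process(widget *w, const void *data, unsigned long len);
void    widget_destroy(widget *w);

#ifdef __cplusplus
}
#endif

#endif /* WIDGET_API_H */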
Seriously, folks? I can write great code in C and crappy code in Rust. Or the other way around. I can write great code in COBOL and crappy code in FORTH. Or the other way around. And I can write a database manager in Assembly, but... why? I can probably (if torqued hard enough) write some sort of system control program in dBase III, but... why?
The point is, we are fortunate enough to have a whole toolbox of languages in which to program. Doing something close to the hardware and don't need the guardrails? Sure, use C. Need more safety? Use Rust. Need more user-friendliness? Use Python. Need [insert need here]? Use [insert better choice here].
Don't denigrate a language simply because it's "old". Hammers and screwdrivers are "old" too, but still damn useful. And in the wrong hands, dangerous, but we still need them, and probably always will.
coding by hand with switches, and then if you've shown that you're a good programmer, we'll let you loose with the punch cards.
As I've said often enough, buffer overruns should not exist in any new code; even code that is 20 years old should not have any. It's not like the old days, where the CPU took time to check whether the data fitted in the buffer and it was quicker to go splat and hope.
Coat... because I'm an old phart on the way out
The more I'm told not to use C, the more I'm going to use it, because I like coding with pointers and pointers to pointers, and having to think about checking a variable's integrity, especially if it is used to control loops and indexes.
I'm starting to think we have a generation of programmers that shouldn't be let anywhere near C until they have totally mastered how powerful the language can be and that it won't protect you from your own stupidity.
If you don't want to use C then you are free to use whatever programming language you like, but don't keep trying to persuade me not to use C because you can't write decent C code.
Yes, C and its variations are dangerous. Yes, there are 'safer' languages. However, you can write crap code in any language, and probably will if you are not taught to code properly. A lot of institutions here (I'm in Australia) do not teach what those of us who have been, to borrow this article's metaphor, cut by chainsaws would consider good coding. A simple C example in particular: undergraduate (and similar) C programming courses which teach students to use unsafe string functions rather than their safe (<functionname>_s) equivalents… Maybe if we fixed the teaching we'd have less rotten code in any language?
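A small illustration of that string-handling point. Worth hedging: the _s functions are C11 Annex K and optional (present in MSVC's CRT, absent from glibc), so a bounded copy via snprintf is shown as the portable alternative; copy_name and its parameters are made-up names for this sketch.

#define __STDC_WANT_LIB_EXT1__ 1   /* ask for Annex K where it exists */
#include <stdio.h>
#include <string.h>

void copy_name(char *dst, size_t dstsize, const char *src)
{
    /* What is often taught: strcpy(dst, src); -- no bound at all, classic overflow. */

    /* Portable: bounded, always NUL-terminated, truncates instead of overflowing. */
    snprintf(dst, dstsize, "%s", src);

#ifdef __STDC_LIB_EXT1__
    /* Where Annex K really is provided, the _s form reports the error instead. */
    strcpy_s(dst, dstsize, src);
#endif
}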
Maybe if we fixed the teaching we’d have less rotten code in any language?
The problem with that is that teachers themselves have to be taught correctly and that requires them to have real world experience, where they can make better money once they are good enough, so they won't turn to teaching.
Those who can, do.
Those who can't, teach.
The gov is (by necessity) beige and constipated, so it's no wonder it advocates for a belt and suspenders approach to coding, fed by a high-fiber diet of vegetable proteins for regularity, rather than metaphorical gut-bomb chainsaws. But that doesn't make it right (or wrong, really). It's appropriate in some circles, but way too uptight for many others ...
C is a loose-fitting straitjacket that one can escape from, with calls to assembly, to make rather full use of the capabilities of the underlying hardware. By comparison, most languages above that level are like inflatable sumo-wrestling costumes wrapped around your data-processing small intestines, occlusively slowing throughput to a crawl and preventing you from ever feeling properly and truly relieved. You need to poke holes through it just so your code doesn't spend its whole day constantly fainting, really.
If bounds-checking is so important, then it obviously has to be implemented in hardware, straight into the CPUs' and GPUs' inner out-of-order speculative pipelines, throwing up exceptions on stomach-flued data accesses, not bolted on as an afterthought in the form of fashionably "new" throughput-impeding constrictive boa programming idiolects. Software implementations of gastric garter belts and shapely girdles have to be thought of as useful temporary stopgap measures, IMHO, but not as the sustainably healthy future of programming we all crave and lust after!
Does anyone else feel like Rust is a sledgehammer to crack a nut? There is the nut of memory safety to crack but C is used partly because it's minimalist poetry. Rust puts tons of junk everywhere.
Why do you need let and fn to declare variables and functions? C gets by fine with int x.
I've thought for a while that we need some lightweight extensions to C to allocate and free memory safely for the common cases. But the language makes coding easy, and that's why it became standard.
But they aren't. Linux is pretty robust at the kernel level, and most of the well-known applications are equally robust. Same for all the BSDs and their applications. Plus countless other kernels and embedded OSes.
The only OS (and its applications) I can think of that fits your assumption is... can you guess??
For finance, COBOL has the advantage of using fixed-point arithmetic with user-specified numbers of digits before and after the point. In some places, there are laws that specify how to round amounts to a specified precision, and COBOL often supports these requirements.
But other languages can encode fixed-point arithmetic in libraries, and when writing to databases you often convert integers to strings anyway, so you can get around this in many languages.
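A sketch of that "encode fixed-point in a library" point, assuming amounts are held as integers scaled by 100 (pence/cents); the helper names and the half-away-from-zero rounding rule are purely illustrative, not any particular regulation or any specific COBOL program.

#include <stdint.h>
#include <stdio.h>

typedef int64_t pence_t;               /* monetary amount scaled by 100 */

/* Multiply an amount by a rate given in basis points (1/100 of a percent),
 * rounding half away from zero - the rule must match whatever the local
 * regulation actually requires. */
static pence_t apply_rate_bp(pence_t amount, int64_t rate_bp)
{
    int64_t num = amount * rate_bp;    /* now scaled by 100 * 10000 */
    int64_t den = 10000;
    return (num >= 0 ? num + den / 2 : num - den / 2) / den;
}

int main(void)
{
    pence_t price = 1999;                        /* 19.99 */
    pence_t tax   = apply_rate_bp(price, 2000);  /* 20.00% -> 399.8, rounds to 400 */
    printf("tax: %lld.%02lld\n", (long long)(tax / 100), (long long)(tax % 100));
    return 0;
}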
Some financial institutions have moved from COBOL to OCaml or F#, as the functional programming style fits well to the domain. Some even use APL. So it's not like there aren't any alternatives. But many financial institutions use mainframes for high transaction throughput, and not many languages are supported by mainframes. COBOL is. On IBM mainframes also PL/I (of course), Java, and C/C++. But the choice is usually quite limited.
Some people have claimed that COBOL is readable by non-programmers. This is true only to a limited extent. The original COBOL was mostly readable until it came to control-flow primitives, and there are cases where the COBOL meaning of words is subtly different from their English meaning, which can cause confusion. And there is a long step from reading code to writing code.
and when writing to databases you often convert integers to strings anyway
Preposterous. Converting integers (and fixed point decimals are integers for this purpose) to strings consumes processing power and storing them that way consumes more space, especially compared with BCD storage of the numbers.
I currently make my living rewriting ancient C code into Java code. Some of this C code is in control of things that can and would actually kill people in an extraordinarily messy way. It is possibly the worst codebase I have encountered in 40 years, and by happy chance, I don't think it's actually killed anyone yet.
The new code runs at approximately the same speed, not that anyone can tell because the difference between "insanely fast" and "slightly less insanely fast" is hard to perceive of course. What is most interesting though is that for every 1000 lines of C code or so, I can replace it with about 100 lines of Java code, and I now have the added benefits of being able to reason about what it's doing, write easy-to-run-and-maintain unit tests for it, and most importantly I don't have to worry about the enormous number of completely unchecked memory issues that existed in the old C code.
As earlier mentioned, C was designed to write systems code for PDP11 and related computers. That is all very well, and C is not a bad choice for writing device drivers. The problem is that it is used for so much else, probably because cheap or free C compilers became available before ditto for other languages. Nowadays, compilers are pretty much universally free.
Some people have mentioned that a failing of C is that its support for data-parallel programming is limited (because that wasn't around on the PDP11 and VAX), but IMO that is not the worst problem it inherited from the 1970s machine model. The worst problem is that C is designed around a single, flat memory space that you access through pointers that can be converted to and from integers. This is where most of the security issues originate: you can easily address outside the range of arrays and other objects because addresses are not checked to be within the range of those objects. Often it is not possible to check, as there is no information about the size available at compile time or runtime. For example, to find the size of a string, you look for a zero byte. But it is easy to overwrite this with another value, and then the string can look like it is much larger, and trying to find its size can mean accessing addresses that are not mapped to real memory, so you get access violation errors.

It is no better with arrays, as their sizes are not stored anywhere (by default), so adding index checks is pretty much impossible -- at best, you can check that the address points to valid memory. Sure, it is possible to define a "fat pointer" type that, in addition to the actual pointer value, also contains the first and last valid address (or equivalent) of the object. But this means that every operation on these must go through library functions, which is cumbersome and also not checked -- nothing prevents you from messing with the fields of fat pointers.

Manual memory management is also unsafe, as (even with fat pointers) you can access objects that have been freed and whose memory is used for something else, and if you conservatively don't free objects to avoid this, you are likely to get space leaks. Also, because pointers can be converted to integers and back, a memory manager cannot move heap-allocated objects to close gaps, so you get fragmentation. Adding a conservative garbage collector can prevent most cases of premature freeing and some cases of space leaks, but it doesn't prevent fragmentation.
It IS possible to code safely in C, by following strict coding practices, but since these practices can not be enforced or checked, this is a weak promise.
IMO, pointers should be abstract objects that can not be converted to integers (or anything else for that matter) or vice-versa. You can create pointers by allocating an object, you can follow a pointer with an offset that is verified to be less than the size of the object (at compile time or run time), and you can split an object into two. Joining requires that objects are adjacent, which is a property that should not be visible, so you shouldn't be able to do that. You can even free the object when the pointer variable goes out of scope (otherwise, you will have a dangling pointer), but only if there are no current copies of the pointer. Yes, some of that sounds a lot like the Rust rules.
But in many cases you don't even need all of these capabilities -- you might not need to be able to split or explicitly free objects. This is the case in most languages with automatic memory management.
Some object to these restrictions because they are restrictions. And some because they impact performance. But the performance impact is usually minimal, and because you expose less information about objects to the programmer, the compiler or memory manager may be able to perform optimisations that they can not do if, say, pointers can be converted to integers and vice-versa. And a programmer should be able to work with restrictions in the same way that electricians have to follow safety standards for electrical installations.
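A very small sketch of the fat-pointer idea described above; as the comment says, nothing in C stops you bypassing the helper or poking at the fields directly, which is exactly the weakness being pointed out. The names are invented.

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    unsigned char *base;   /* start of the object       */
    size_t         len;    /* size of the object, bytes */
} fatptr_t;

static fatptr_t fat_alloc(size_t len)
{
    fatptr_t p = { malloc(len), len };
    return p;
}

/* Checked byte access: aborts (or could report) on out-of-range offsets. */
static unsigned char *fat_at(fatptr_t p, size_t offset)
{
    assert(p.base != NULL && offset < p.len);
    return p.base + offset;
}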
Please don't insult COBOL.
Have you heard about the buffer overflow problems in COBOL? No? Neither have I.
When a COBOL program does
MOVE SOME-LONG-FIELD TO A-SHORTER-FIELD
the long field gets TRUNCATED.
That applies even to null terminated strings.
See https://www.ibm.com/docs/en/cobol-zos/6.3?topic=arguments-handling-null-terminated-strings and
https://www.ibm.com/docs/en/cobol-zos/6.3?topic=strings-manipulating-null-terminated
Any competent senior/lead programmer who has ever thought long and hard about sources of errors in large codebases, and then looked at a large (working) COBOL codebase, can only be impressed at what a fantastic job Grace Hopper (and the rest of the original crew) did in architecting and spec'ing the COBOL language. That's what a proper language for truly robust software looks like. Sure, it's verbose, but all properly engineered structures have wide margins of redundancy built in.
The only reason COBOL fell by the wayside is that universities did not teach COBOL. Trade schools did. Before the 1990s most working IT programmers did not have some university "comp sci" degree. They were either self-taught or got a very good grounding at some trade school. But in the last 30 years most IT depts have only hired people with "comp sci" degrees. And the only language (badly) taught in university comp sci depts was Java. And maybe C++. Both languages terrible for typical IT dept enterprise application development. A task COBOL was perfected for 60 years ago.
It was never a matter of reinventing the wheel. By not teaching COBOL. It was throwing out the wheel and replacing it with some roundish shape rocks. Of different sizes. Which broke all the time. Because teaching about the fully perfected wheel was both boring and beneath them for a typical Comp Sci Prof / TA. What do you mean I cannot write Aspect Oriented Code in COBOL. I wrote my PhD thesis on that...
As for "memory safe" C. Any competent C programmer can write "memory safe" code. And has been for the last 50 odd years. For embedded software. RTOS software. System software. Application software. And so on. Its not that difficult. It just requires some forethought, planning, proper engineering analysis (i.e total paranoia) and serious coding discipline. Do that and you will never have an uncaught buffer overflow. Ever.
If you dont know how to write "memory safe" C code then maybe you should stick to languages like Java or JavaScript. With the performance / resource hit this entails. And leave the serious low level / high performance programming to those who spent the time to fully understand what the language does on the bare iron (which means learning asm) and then write many 100k's LOC's of boring mundane nothing fancy code. That works. Always. Because we assume it will always try to fail. And design and implement accordingly. You know. Proper software engineering.
Ada was becoming largish in its niche by about 1983. It got steamrolled by Borland's Turbo Pascal, primarily because of price and resource requirements. I remember watching an employee at the Stanford University Bookstore removing all 6 singleton Ada books to help make room for a fifth shelf full of Turbo Pascal books. That would have been 1985-ish.
Yes, I know, that's not a scientific survey ... but ...
Something like 35 years ago I was with a small R&D company that was developing embedded systems using Concurrent Euclid.
The company was very productive and in general system reliability was good.
We eventually had to move to C when we started working on Unix platforms.
Our productivity went down the toilet.
I have done lots of coding in C, and it may be my favorite language, but it is way too error-prone.
Around 2000 I was marketing Y2K services to banks in the UK.
We found a bank (US-based, IIRC) that ran jobs on its 3090 (now Z-series) mainframe in 370 emulator mode.
Running a simulator written for an IBM 360
To emulate the instruction set of a Univac mainframe
Which ran the real application.
That I know to have happened. Are they still doing so? Have they bitten the bullet and done a ground-up rewrite?
IDK.
Someone tries to copy A to B without checking if A fits or not?
Computer languages are merely collections of programs called "commands"; if one command isn't working right, just revise it!
(Barely anybody programs Fortran ‘77 anymore, it’s possible to revise and to move on.)
My programming is shit - the almost none of it that I do. When in college in the 90s I enjoyed programming, but was sadly shit at it and struggled with the logic. I had a lecturer look at a lottery number picker I made in Pascal: my loop was about a page long, and he turned it into a few lines. I could see it after that. But then another logic problem came up, and the maths... my brain hurts.
By the time I did a year at uni I was bored of programming; I just wasn't getting it. But I remember we did C++, which seemed good. Roll on to a few years ago and I discovered CS50 with David Malan, who is an amazing teacher. But the course did C as a start, then you move along to different languages. Fucking 'ell, C appears annoyingly complicated. I saw why C++ appeared so much better.
So not being a programmer, maybe C does need to go? And surely all those C experts can make a fortune being "consultants" when companies struggle to find good C programmers to maintain legacy software.
Whilst we can debate what is a better alternative to C, and the relative merits of COBOL, this article is spot on, and everything I have thought about C since 1981. Very powerful, very flexible, very easy for inexperienced programmers to make mistakes with, and for some reason academic programmers have some kind of perverse desire to make their code as cryptic as possible. Fine as an academic exercise, but cryptic showing off is not for mission-critical apps which will outlast the egotistical programmer by many years. I have both programmed in C (and many other languages) and tested C programs written by others, and had to make enhancements to others' code, which sometimes has resulted in an almost reverse-engineering exercise as it had been written so tersely.

I wonder if the disciplines of programming (defensive programming, programming with security at its core, etc.) are ever taught. Mine were, because I had a good mentor and was working on military projects which have long maintenance cycles, but it seems to be hit and miss whether others are taught with this in mind and are therefore, in a sense, ignorant of good practice. The Elements of Programming Style by Kernighan and Plauger was mandatory reading before I even wrote my first line of C. It is a massively outdated book but its principles remain relevant.
I've said this here before and will say it again: C is a *systems programming language*, designed for programming operating-system and low-level systems code, which we're abusing as an *application programming language*.
Remedy this and the problem is manageable.
The problem isn't C per se but obnoxious programmers who believe they're smarter than everyone else and insist on that last ounce of speed at the expense of safety.
The argument for C? Try writing high-speed code for deeply embedded platforms where only a bare-metal approach, or maybe a real-time OS like FreeRTOS, will fit. What do you want to use that will give you low-latency, tailored access to machine hardware registers? That doesn't tie you to some other software engineer's "template" that nearly does what you want but is badly written / badly documented / doesn't work quite as advertised, and you can't find out why? That greatly rewards creative thinking and the urge to innovate? That encourages you to circumvent counterproductive and unnecessary so-called "safe" roadblocks? Bear in mind the goal of the suits who employ you is to de-skill your occupation so they can pay you less, and then not at all.

Here in the engine room, for all the hype I hear about Rust, Go etc., when I have road-tested them they appear to me as thinly coated attempts to recreate C as a clunkier, less streamlined and unsatisfactory doppelganger. This has been tried before; it was/is called Ada, and it failed due to an overriding level of bloat and a lack of elegance. In some ways the roadmap of C++ has delivered a similar result, and similarly it has failed to unseat C.

Now, if I want to do some high-level maths-based computing I will use Matlab or Julia or R. They are brilliantly suited to their application - and only their application. If I want to do a GUI I might use Delphi, even though I left Pascal behind long ago. If I feel like punishing myself I can try Lisp or Prolog or Haskell or a million other weird concoctions. But if I want to ENGINEER software across several different domain specialisations, then only C has the breadth, the depth and the elegance to corral all the different prancing horses!
I didn't get past the first paragraph, where the author was rattling on about speed as if it were a cause of accidents. (See the official UK road accident statistics, RAS1007, and you will quickly discover that the fastest roads are the safest; largely because artificially slow driving causes reckless overtaking, frustration and lethargy, while over-cautious speed limits invite complacency. But I digress.)
If you don't trust your coders, then what makes you think you can trust the coders who develop the frameworks for high-level languages?
It is a race to the bottom if we are simply going to have all consuming frameworks slurping up compute power just so that they can go around cleaning up after garbage incompetent programmers.