Enough like C that adoption should feel painless, but the three $1Bn questions are:
1) How does it achieve that safety?
2) How does it achieve speed?
3) How does it handle problems that cannot use full safety?
It sounds like they strengthen the syntax so the compiler catches far more errors than C does at compile time (the same approach as Pascal, but on a much broader scale), which is the way you'd need to go to have a shot at fast execution.
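A toy illustration of the gap (my example, not from the article): both of the offending lines below compile as C, at best with a warning, while a Pascal-style range-checked type system rejects the equivalents outright.

```c
#include <stdio.h>

int main(void) {
    int scores[10];
    scores[10] = 1;    /* one past the end: C compiles it, behaviour undefined */

    signed char small = 1000;  /* doesn't fit in a signed char: C truncates it */
    printf("%d\n", small);
    return 0;
}
```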
Where it gets murky is how that speed (I'm presuming code executes at rates comparable to compiled C on the same platforms) is ultimately achieved. If the answer is "turn off all the runtime safety checking" then it's a waste of time, and I'm hoping that's not the case. I'm betting the process of writing the code is better than in C and makes the code more, not less, optimisable while retaining the safety.
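Here's the sort of thing I mean, sketched in C (names invented). A checked language has to behave like get() below; the win comes when the compiler can prove the check always passes and delete it, so the safe loop costs exactly what the unchecked C loop costs.

```c
#include <stdlib.h>

/* What a bounds-checked array access costs if done naively at run time. */
static int get(const int *a, size_t n, size_t i) {
    if (i >= n)
        abort();           /* the runtime safety check */
    return a[i];
}

int sum(const int *a, size_t n) {
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += get(a, n, i);  /* i < n is provable here, so the check
                                   can be compiled away without losing safety */
    return total;
}
```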
And now the worst of all: those very nasty, low-level, bit-twiddly situations. C's initial design goal was to implement Unix on machines down to something with 64KB of RAM and no MMU (a PDP-11). If you think of it (more or less) as structured-assembler-with-structured-data-type-support, you're not far wrong.
How does it (can it?) handle these? If the answer is "call a function written in assembler", that's basically the same as disabling all runtime checks for speed.
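For concreteness, here's the kind of code I mean, in C; the register address and bit position are invented stand-ins for whatever the datasheet says. A safe language needs some way to express this, ideally an escape hatch scoped to just these few lines rather than "go write assembler".

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART status register (address made up). */
#define UART_STATUS (*(volatile uint32_t *)0x4000C018u)
#define TX_READY    (1u << 5)   /* hypothetical "transmitter ready" bit */

static void wait_for_tx(void) {
    while (!(UART_STATUS & TX_READY))
        ;  /* spin until the hardware flips the bit; there's no object here
              a type system can see, just an address that must be poked */
}
```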
Having spent a long time studying software failures, I'm convinced of a (very) few things.
1) A good development environment makes development in any language possible provided the processor(s) is/are up to the job of running the code in the first place.
2) Good environments and languages consider what's outside their bubble, and how to communicate with it, preferably as libraries, not as built-in commands (which card reader unit do you want to mount? In 2022? :-) )
3) S**t happens at interfaces. All interfaces, so make them thin (i.e. pass only the used parameters of a 40-parameter structure, not the whole structure). But thin interfaces are an example of writing-stuff-twice (like import/export lists and C prototypes), which some languages mandate but don't offer any support for. (There's a C sketch of this after the list.)
4) Writing big software (or using a highly geographically and temporally dispersed team) is not like one person writing a utility on their own.
5) There are two choices to scale up: have a language with features that support big systems, or run the code through a bunch of tools after you've written it. The tools approach is C's.
6) How you provide that big-systems support unobtrusively in a language is likely to have a big influence on developer acceptance, unless you have a big customer (the DoD in the case of Ada) to wave a big stick at you. Otherwise its formal requirements, i.e. import/export lists (especially without tool support), are a massive PITA most people will just avoid.
7) There is the language, its implementation, and its environment. The latter two can make up for the deficiencies of the first, but a well-defined language raises the whole game to begin with.
8) There are several situations where you really need to backward-chain your logic, i.e. start with the parameters you're going to pass. What types are they? What range limits should they have? In terms of C: write the prototypes first, then write the code (there's got to be a tool that can do that for you in 2022). There's a sketch of this after the list.
9) I realized that it's the combination of unrestricted placement of both the label and the GOTO that turns code into spaghetti. There are GOTO use cases (like FSMs produced by a code generator). Yes, the proverbial "competent" developer can design them out, but after how many person-hours? (After studying the BLISS compiler, a fascinating DEC systems language for producing executables under very tight runtime resource constraints, with no GOTO at all, I thought: "So how about allowing only jumps from inside flow-control structures to a label outside any structure? Tight enough? Too loose?" There's a sketch of that after the list too.)
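On point 3, a C sketch of thick vs thin interfaces (the struct and names are invented): the thin version is the safer contract, but notice it restates the same facts at the declaration and at every call site, which is the writing-stuff-twice cost.

```c
struct big_config {
    int timeout_ms;
    int retries;
    /* ... 38 more fields ... */
};

/* Thick: hand over everything; all 40 fields silently become part
 * of the contract, whether the callee reads them or not. */
int connect_thick(const struct big_config *cfg);

/* Thin: only what's actually used crosses the interface, so the
 * contract is explicit, at the price of repeating it everywhere. */
int connect_thin(int timeout_ms, int retries);
```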
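On point 8, the backward-chaining idea in C (names and ranges invented): write the prototype and its range contract first, then the body. C can only enforce the ranges with runtime asserts; a range-typed language could check the callers at compile time instead.

```c
#include <assert.h>
#include <stdint.h>

/* Step 1: the contract. channel is 0..7, gain is 1..64 (hypothetical spec). */
int32_t read_sensor(uint8_t channel, uint8_t gain);

/* Step 2: the body, written to honour the contract above. */
int32_t read_sensor(uint8_t channel, uint8_t gain)
{
    assert(channel <= 7);             /* the range limits from step 1, */
    assert(gain >= 1 && gain <= 64);  /* checkable only at run time in C */
    /* ... actual read elided ... */
    return 0;
}
```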
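And on point 9, what that restriction looks like in practice, in C: every goto starts inside a control structure and targets a single label outside all of them. This is the cleanup idiom the Linux kernel uses heavily, and it doesn't spaghettify.

```c
#include <stdio.h>
#include <stdlib.h>

int process(const char *path) {
    FILE *f = NULL;
    char *buf = NULL;
    int rc = -1;

    f = fopen(path, "rb");
    if (!f) goto out;
    buf = malloc(4096);
    if (!buf) goto out;

    for (;;) {
        size_t n = fread(buf, 1, 4096, f);
        if (ferror(f)) goto out;   /* forward jump out of the loop... */
        if (n == 0) break;
        /* ... consume n bytes of buf ... */
    }
    rc = 0;

out:                    /* ...to the one label outside every structure */
    free(buf);          /* free(NULL) is a no-op */
    if (f) fclose(f);
    return rc;
}
```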
C (the language) was developed in an environment where full-screen editors were not standard and, AFAIK, none of the developers were touch typists. Those two trivial observations explain quite a lot about its structure: basically, anything to avoid a few keystrokes. Anyone still develop like that?
The Linux kernel is unlike the usual embedded scenario as the devs have no control over what hardware it will run on, or even what architecture it will have. MIPS, SPARC, ARM are quite different to Intel (at least on the outside :-) ).
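A concrete instance of that difference, in C: dereferencing a misaligned pointer happens to work on x86 but can fault (or silently load the wrong bytes) on strict-alignment machines like SPARC and older ARM/MIPS, so kernel-style code has to be written the portable way.

```c
#include <stdint.h>
#include <string.h>

/* Works on x86; may trap on strict-alignment architectures. */
uint32_t read_u32_cast(const uint8_t *p) {
    return *(const uint32_t *)p;
}

/* Portable: memcpy carries no alignment assumption, and compilers
 * turn it into a single load on machines where that's legal. */
uint32_t read_u32_portable(const uint8_t *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```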
I'm intrigued.