Modular finds its Mojo, a Python superset with C-level speed

Modular, an AI startup with above-average technical cred, has unveiled a programming language called Mojo that aspires to combine the usability of Python with the speed of C. There are numerous ongoing projects to make Python faster, such as Jax and more recently a Python compiler called Codon. And that's to say nothing about …

  1. katrinab Silver badge
    Meh

    There seems to be another one of these "faster than Python" alternatives every week.

    The speed comparison was against pure Python in a scenario where any semi-competent programmer would use Numpy, and to get those speed increases, you have to rewrite the code to the point that it is basically a completely different language.
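
    To make that concrete, here is a minimal sketch of the kind of rewrite in question (illustrative only, not the article's actual benchmark):

    import numpy as np

    # Pure Python: every iteration pays interpreter and boxing overhead.
    def sum_squares(xs):
        total = 0.0
        for x in xs:
            total += x * x
        return total

    # The NumPy version any semi-competent programmer would reach for:
    # one vectorised call that runs in optimised C.
    def sum_squares_np(xs):
        xs = np.asarray(xs, dtype=np.float64)
        return float(xs @ xs)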

    1. A Non e-mouse Silver badge

It's all about choosing the right tool for the job. Is a dynamically typed, interpreted language like Python going to be as fast as hand-crafted assembler? No. Is writing code in Python faster & easier than writing in assembler? Yes.

      To be clear, I'm not saying everyone should switch to writing in assembler. You write in a language that's appropriate to the problem space you're working in. Python is a good general-purpose language, but it's not always the fastest language at run time. The problem with general-purpose languages is that people expect them to solve every problem well.

      1. abend0c4 Silver badge

        A lot of jobs need more than one tool to complete successfully, so to say 'this "two-language" approach makes it more difficult to profile, debug, learn, and deploy' is very much looking at the wrong end of the problem.

While there's no harm in making individual tools better, I think we may have previous experience of the outcome of the "one final standard that will replace all the others" way of thinking.

    2. Charlie Clark Silver badge

To a degree, yes. But this is focussed even more on the hardware side, i.e. running stuff on GPUs with hardware acceleration. So more like numba than numpy. I'm also quite intrigued by the keywords for the optimised code; I think this could work well. I still haven't forgiven Guido for failing to introduce a new keyword for generators, and I think the move towards async processing would have been a lot smoother and easier if one had been added.
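
      For comparison, numba's approach looks roughly like this (a minimal sketch using numba's real @njit decorator; the GPU path via numba.cuda needs more setup):

      from numba import njit

      @njit
      def dot(a, b):
          # A plain Python loop, JIT-compiled to machine code on first call;
          # passed NumPy arrays, it runs at roughly C speed.
          total = 0.0
          for i in range(a.shape[0]):
              total += a[i] * b[i]
          return total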

    3. Anonymous Coward

But this one is helmed by Chris Lattner, who has far more contacts to help him with marketing this product.

      Yesterday, no-one had heard of Mojo. And now, suddenly, you have this article and random stuff like:

      https://github.com/golang/go/issues/59968

      You can almost pinpoint the minute that their marketing campaign started.

      1. CowHorseFrog Silver badge

How many people are using Dart? That's backed by Google, and basically nobody uses it, so what chance does this have?

        1. katrinab Silver badge
          Alert

          Being "backed by Google" would be a very big reason for not using it. Their graveyard is full of abandoned projects, and there is a very big risk of this joining them.

  2. ACZ
    Thumb Up

    Static typing in Python

    As much as I appreciate the ease and convenience that can come from dynamic typing, I do *love* static typing (not just type hints, but actual static typing). Bring it on!
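
    The distinction in one example: CPython treats hints as mere annotations at runtime, so only an external checker such as mypy ever enforces them (a minimal sketch):

    def double(x: int) -> int:
        return x * 2

    # Runs without complaint despite violating the hint; a static checker
    # like mypy flags it, the interpreter never does.
    print(double("ha"))  # prints "haha"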

    1. Charlie Clark Silver badge

      Re: Static typing in Python

      I think there are whole PhDs written on the subject. PyPy illustrates quite effectively how good JIT compilers can be at inferring types (for optimisation purposes) and there are plenty of examples of static typing being skirted. What should never be in doubt is the need for strong typing.

But I hate type hints and would like a mechanism within Python for declarative types (descriptors look like the best place to start, though they'd need extending for functors and return values), so that typing wouldn't look optional when, in fact, type hints are a clumsy attempt to enforce type declarations through the back door. Did I mention that I hate type hints? Well, I do.
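
      A minimal sketch of the descriptor idea (illustrative names only; extending it to functions and return values would, as noted, need more machinery):

      class Typed:
          """Descriptor enforcing a declared type at assignment time."""
          def __init__(self, typ):
              self.typ = typ

          def __set_name__(self, owner, name):
              self.name = name

          def __get__(self, obj, objtype=None):
              return obj.__dict__[self.name]

          def __set__(self, obj, value):
              if not isinstance(value, self.typ):
                  raise TypeError(f"{self.name} must be {self.typ.__name__}")
              obj.__dict__[self.name] = value

      class Point:
          x = Typed(int)
          y = Typed(int)

      p = Point()
      p.x = 3       # fine
      p.y = "four"  # TypeError at assignment, not at some later use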

      1. Neil Barnes Silver badge

        Re: Static typing in Python

I have never been able to get comfortable with a language in which a return value from a routine might be 3, 4, or Thursday...

        1. Paul Herber Silver badge

          Re: Static typing in Python

          I've never got the hang of Thursdays.

        2. An_Old_Dog Silver badge

          Return Values

Then you won't care for Perl, Tcl/Tk, or ...

        3. G40

          Re: Static typing in Python

          Or, indeed, all 3.

        4. Claptrap314 Silver badge

          Re: Static typing in Python

          I understand where you are coming from, but if you like methods whose inputs might be 3, 4, or Thursday (and not just printf), as well as metaprogramming, then you like all the things that make up a method whose return might be 3, 4, or Thursday.

And don't kid yourself--3, 4, or nil is just as bad, and in some ways worse.

          The problem that I have with static typing is that it requires you to make decisions about types up front. Maybe that works okay for programming. But the way I do software engineering, I explore a problem, and type decisions come late in the game. What I have is a need to manipulate data. As I learn what types of manipulations are actually going to happen, my understanding regarding the actual types likewise refines.

Serious software development spends a lot of time on the spike--figuring out what you really want to do, as well as the shape of the implementation. By the time you are ready to start the TDD, you still likely only have a strong hunch of what type structure you will use.

          For almost all of us, development time costs way more than run time. So rapid development is important. And static typing doesn't deliver there for most of my work.

        5. Charlie Clark Silver badge

          Re: Static typing in Python

I think this tells more about habits than anything fundamental in a language. Static typing is primarily an aid for the compiler, which optimises based on it. While I have had plenty of facepalm moments when passing values of the wrong type into a function, raising an exception at some point, the return value has never been an issue; this could happen in any runtime and would not necessarily be avoided by static typing. There is something to be said about not being able to explicitly return an error value in Python, but other than that it's not something people discuss.

          My own experience has been that I much prefer tests, preferably using pytest, to understand how code works over any kind of comment or declaration. This gently encourages code that is easier to read and maintain, which in turn leads to fewer problems.

But I would have no problem with something like a returning keyword for the definition. Anything is better than the type hints noise! I think we may get an idea of what works best in Mojo, where the declarations necessary for optimisation are explicitly optional.
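
          For what it's worth, the sort of test-as-documentation meant above looks like this (a minimal pytest sketch; parse_port is a hypothetical example function):

          import pytest

          def parse_port(value: str) -> int:
              """Hypothetical function under test."""
              port = int(value)
              if not 0 < port < 65536:
                  raise ValueError(f"port out of range: {port}")
              return port

          # The tests document the contract better than any hint could.
          def test_returns_int():
              assert parse_port("8080") == 8080

          def test_rejects_out_of_range():
              with pytest.raises(ValueError):
                  parse_port("70000")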

      2. 9Rune5

        Re: Static typing in Python

        Isn't the attraction of Python the dynamic typing?

And if you move away from that... what exactly do you gain over e.g. C#? If you want a labrador, don't start by buying a cat and gluing some floppy ears and a longer tail onto it.

        1. CowHorseFrog Silver badge

          Re: Static typing in Python

People who say they like dynamic typing are actually, most of the time, simply lazy people. What they are really saying is that they are too damn lazy to type the type into the source file. These types of people are of course the ones who do so many other things poorly that bad shite happens. They skip writing documentation or comments in source, they name everything poorly, they most likely don't even bother to check values, why even bother with sharing context in exceptions/error messages, and more.

          Stay away from these people; you don't want to follow them around fixing their shite.

          1. LionelB Silver badge

            Re: Static typing in Python

            > People who say they like dynamic type are actually most of the time simply lazy people.

            Or scientific programmers. As a research scientist, the "spec" is a constantly morphing target (in your head and somewhere in that sprawl of papers on your desk); you need to be able to hack code around and refactor at speed. Static typing is not your friend in this scenario, and an interactive execution environment a must.

This is why dynamically-typed languages like Matlab, Python (with NumPy/SciPy) and R are prevalent in the scientific arena. Nor is performance necessarily (or even generally) an issue - firstly, you avoid language-level looping by vectorising the hell out of anything and everything (an art in itself), then most of the grunt-work is shunted down to low-level computational libraries like the BLAS, LAPACK and FFTW, which have been optimised (frequently by the chip vendors) to within an inch of their lives, down to algorithmic blocking, SIMD and CPU-specific caching. When that doesn't do the job at hand, you write C (or even FORTRAN) plug-ins. Importantly, these languages also have vast repertoires of mature and comprehensive domain-specific libraries available, in areas such as statistical analysis, signal processing, control systems, etc., etc. (Oh, and of course if you are writing code for distribution, you structure and document your code appropriately.)
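
            In code, that division of labour looks something like this (a toy sketch; np.linalg.solve really does hand the work to LAPACK):

            import numpy as np

            # The Python layer is just orchestration; the O(n^3) work happens
            # inside LAPACK's compiled solver, not in the interpreter.
            a = np.random.rand(500, 500)
            b = np.random.rand(500)
            x = np.linalg.solve(a, b)

            # Vectorised check: no language-level loop anywhere.
            assert np.max(np.abs(a @ x - b)) < 1e-8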

            I am intrigued, though, by Julia, which is interactive via JIT compilation but may also be precompiled, and is by default dynamically-typed using a multiple dispatch approach, though static typing is also available. Haven't really got into it in a big way, but like what I've seen so far.

            1. EarthDog

              Re: Static typing in Python

In my experience requirements are also constantly morphing. I've spent time in both scientific/academic computing and business computing, and in my experience business requirements shift faster than scientific requirements. The people I've met doing scientific programming had no real training, experience, or desire to develop software, and therefore produced crap.

              1. LionelB Silver badge

                Re: Static typing in Python

                > In my experience business requirements shift faster than scientific requirements.

                Not even close - but I don't disagree with the rest of your comment. I speak from the perspective of a research scientist. In researching a scientific topic, you almost by definition do not understand what the "requirements" actually are, or if that is even an appropriate word to describe your brief. Do "See how far you can get with this (ill-defined) problem", or "Figure out how this stuff works" qualify as "requirements" in a sense that would be recognisable to a programmer in the business sector? (And yes, I too have worked in both industry and academia.)

                As a research scientist, for the large part you are literally making it up as you go along. You are experimenting, and trying out all sorts of different approaches. If an approach stalls, you back up and try something else. You are working to a very flexible brief. As a musical analogy, you are improvising rather than reading from a score.

                I happened already to be an experienced programmer when I entered research (I had spent some 15 years previously as a software engineer in telecoms), but that is unusual. The point is, that for a large percentage of research scientists (with the arguable exception of data scientists), software is very much a tool to develop and illuminate ideas, rather than an end in itself. Frequently (although this is domain-dependent) they will have had no formal training in coding and cannot be bothered to learn to become "good" programmers, because they see that as a time-consuming distraction with limited benefits to their work. They are quite likely not sharing code, so do not care whether their code is intelligible to anyone else. Sure, from a software-engineering perspective their code may indeed (and frequently does) suck - but it is their sucky code, and it does the job they needed it to do.

                I don't condone this culture - in fact it makes my own life harder, since much of my time is spent engaging with PhD students* and junior postdocs, so I have to deal with their crappy code - but I do understand where it comes from.

                * My own PhD students do not get away with this nonsense!

            2. Justthefacts Silver badge

              Re: Static typing in Python

Could you expand on what you mean by “Static typing is not your friend in [scientific computing]”?

              There are several different definitions here. Do you mean “I don’t want to decide whether this is float or needs higher precision”? Well, two comments - a) That’s very specific, not a reason to shape a whole language around; b) Please, for the love of God, learn numerical analysis properly. Because I intensely doubt that just using double-precision is going to make your ill-conditioned matrix inversions any more correct. This, right here, is why the whole argument that scientific computing needs double-precision is flat wrong, and indeed why large chunks of scientific computing routinely output wrong results. Because people need to go back to grad school, rather than hit-and-hope the computer will do it.

              Or, is the argument “I need to hack code fast, because I don’t have a pre-defined idea of what I’m building, I’m just trying some model ideas. So, I really don’t have time with this types crap”

Yeah, see previous comment. Unless you have actually done your numerical analysis and stability correctly, those “trial models” are basically outputting crap anyway. You can’t use them as intuition pumps if they are 90% wrong. We saw this exactly with all the Covid social modelling out of Imperial, remember? It drew nice curves. But once actual software people took a look at the model (the *released* model), it was realised it was all nonsense. There were so many obvious bugs that it was no better than saying “if there is more social contact, this will spread faster”, and we didn’t need a computer model for that. The numerical output was rubbish. It didn’t even reproduce the official results on multi-core computers... and nobody could say whether the single-core or multi-core results were right. They weren’t even close.

              The argument for quick hack prototypes in science is very, very weak.

              1. LionelB Silver badge

                Re: Static typing in Python

                > Do you mean “I don’t want to decide whether this is float or needs higher precision”?

                Perhaps I could have been clearer - I speak as a research scientist (see also my reply above to EarthDog). At the research stage, precision considerations and the pitfalls of numerical computation are premature - while indeed crucial, that happens later down the line, once you understand the nature of the computation required.

                > Or, is the argument “I need to hack code fast, because I don’t have a pre-defined idea of what I’m building, I’m just trying some model ideas. So, I really don’t have time with this types crap”

                Yes, pretty much that.

                > Unless you have actually done your numerical analysis and stability correctly, those “trial models” are basically outputting crap anyway. You can’t use them as intuition pumps ...

                I respectfully disagree. Speaking from experience, those trial models will not in general be crap. But of course trial models are just that - trials - and there is no excuse whatsoever for not performing due diligence prior to publication and software release. Believe me, if every scientist spent half (no, make that three quarters) of their time performing a watertight numerical stability and precision analysis on every interim stage of a complex, iterative process of trialling a range of approaches and explorations of parameter spaces, no science would get done. That's the scientific equivalent of the dreaded premature optimisation.

                > We saw this exactly with all the Covid social modelling out of Imperial, remember?

                Did we, though? Were the failures of those analyses actually down to numerical stability/precision issues? Correct me if I'm wrong, but my impression was it was more a matter of ill-thought out and speculative assumptions, coupled with poor-quality input data and over-reliance on the veracity and accuracy of prior evidence. I.e., bad modelling, bad programming, bad science - but hardly a numerical computation failure per se.

                > The argument for quick hack prototypes in science is very, very weak.

                As a working research scientist, again I respectfully disagree - but I'll re-emphasise the importance of due diligence.

                1. Justthefacts Silver badge

                  Re: Static typing in Python

                  I’ll take you up on the Imperial Covid modelling case, because it makes my point:

                  “speculative assumptions, coupled with poor-quality input data and over-reliance on the veracity and accuracy of prior evidence.”

While those things might be true... they weren’t the actual cause of failure. The cause of failure was neither the science, nor (AFAIK) numerical stability; it was the absolute shitshow of bog-basic bugs that invalidated its output. It gave results, they were just wrong. I had a look at that code, which was made publicly accessible. In a random survey of a single 200-line function (does that concern you?), I found 8 bugs in ten minutes. Off-by-ones, using the wrong variable in a formula because the name was confusing, code commented out during development that should probably have executed with a TODO note, an entire cause of transmission that was included in the formulae but inexplicably zeroed halfway through.

                  All things that would have been easily caught in even the most cursory code review process. Which means the meta-issue is that there was no such code review process.

What we actually have now is the worst of all possible worlds. We started with an almost scientifically untestable model, because without a pandemic there is no experimental data. From a scientific point of view, manna came from heaven. Not only did the pandemic occur, but as a country we were actually able to actively drive separate transmission causes to validate the model, nail down important parameters *at scale*, and identify unspoken assumptions. But this... this model achieved none of that. It gave some output curves that bore no relation to either its input data or the science assumptions.

We know no more now than at the start of the pandemic. Literally all we know now is that lockdowns work. The useful output could have been achieved by no more than a half-dozen lines of the most basic formula of exponential growth.

                  Buggy software models aren’t approximations. They teach you nothing at all. Current operating practice in academia is insanity. We have universities with computer-science and software engineering departments, and also physics, chemistry and biology. But one does not code review the other, before publication. That should be a minimum requirement, not some special “inter-disciplinary project” that essentially never happens.

                  1. LionelB Silver badge

                    Re: Static typing in Python

                    You made, as I understand it, two points originally: the first was about numerical precision/stability. The Covid code fiasco was clearly not about that.

                    Your second point was that hacky coding at the research/exploration/prototyping stage is unacceptable (although even there you seemed to be talking - at least in your first reply - about numerical precision/stability). You claim that the primary reason for the failures of the Covid code was "bog-basic bugs that invalidated its output". This may well be one of many causes of failure: see, e.g., this article. To quote from the abstract:

                    Epidemic forecasting has a dubious track-record, and its failures became more prominent with COVID-19. Poor data input, wrong modeling assumptions, high sensitivity of estimates, lack of incorporation of epidemiological features, poor past evidence on effects of available interventions, lack of transparency, errors, lack of determinacy, consideration of only one or a few dimensions of the problem at hand, lack of expertise in crucial disciplines, groupthink and bandwagon effects, and selective reporting are some of the causes of these failures. [My emphasis]

                    In any case, I thought I had made it perfectly clear that, while I do indeed consider hackery to be an essential part of the exploration/experimentation phase of scientific research, there is no excuse for not performing due diligence prior to drawing conclusions and disseminating results, including model validation, numerical analysis and - obviously - code review. Those Covid analyses clearly failed catastrophically on all those points.

                2. Justthefacts Silver badge

                  Re: Static typing in Python

                  “if every scientist spent half (no, make that three quarters) of their time performing a watertight numerical stability and precision analysis on every interim stage of a complex, iterative process of trialling a range of approaches and explorations of parameter spaces, no science would get done”

I’m sure that’s correct. There’s a meta-problem here. It’s a well-known truism in many sciences that you need to get a statistician involved as early as you possibly can. And yet in academia, many scientists routinely treat software and mathematical correctness as not being domains in their own right, so there just is no process. The right answer would be for the “model-just-a-tool” scientist to work within a code-review process with an “algorithm / numerical stability” person from the Applied Maths department, and probably a second “software engineer & test” person from Engineering. That’s what you would do in a business context.

Given that numerical modelling has been part of most researchers’ lives for a long time now, universities should have a better handle on effective process. I believe the siloing in academia is due to a combination of funding model, tradition, and a hero culture of mistaken self-sufficiency that starts when we tell PhD students that they need to do everything themselves. But at some point this has to mature into professional process.

                  1. LionelB Silver badge

                    Re: Static typing in Python

Broadly agreed, but it does depend on the domain. For example, I work in an area which by its very nature involves advanced mathematics, statistical analysis and modelling, and those aspects are deeply intertwined. It is not optional for my PhD students to get to grips with and develop expertise in all these areas - they simply wouldn't be able to do the job otherwise. That can be a tough ask: my last PhD student, as it happens, was an excellent mathematician, but hadn't written a line of code in his life prior to joining our lab, nor had he laid hands on real-world data, nor did he have any understanding of statistics. So he picked up the maths quickly, but the rest was a brutal learning curve, which took up the best part of his first year. Happy to say that he is now a more than competent programmer, has sound statistical knowledge - and passed his viva a couple of months ago :-)

                    The problem here is that, for example, passing their work to an expert statistical analyst for review would not work out, because chances are that the statistician would not be familiar enough with the (specialised) mathematical basis; ditto the modelling. I do encourage my students to get their code reviewed, though, at least by their more experienced peers (unfortunately senior researchers, faculty and postdocs tend to be too snowed with teaching, marking, grant applications and admin nonsense to help out there...)

              2. Paul Kinsler

                Re: The argument for quick hack prototypes in science is very, very weak.

Just because you shouldn't *publish* based on a "quick hack prototype" doesn't mean that it wasn't a useful step in the process of working out the scope of the working version (or whatever you want to call it). Indeed, I have found that trying to write some initial code can even bring unforeseen aspects of a problem to light.

                And on the more general point, we need to remember that scientific codes cover a very wide range of approaches, from the short to the long, from stand-alone codes to ones that rely heavily on library functions, and ones that vary greatly in the rigour of their design.

One particular thing that does, in my experience, need to be borne in mind when tempted to criticise scientific code is that for a reasonable fraction (I might be inclined to suggest even most), their usage needs to be treated somewhat like chainsaw juggling. To use such code, you need to understand what it does, why it does it, and what it is intended to represent; to benchmark where possible against non-computational results; and in particular to always be ready to stop and question any and all output for signs of misbehaviour. These are not artifacts that are in any way designed or intended to be used by the untrained, the uncritical, or the non-specialist.

                Otherwise, for what it is worth, and also as a research scientist, I am broadly in agreement with LionelB's comments here.

                1. LionelB Silver badge

                  Re: The argument for quick hack prototypes in science is very, very weak.

> To use such code, you need to understand what it does, why it does it, and what it is intended to represent; to benchmark where possible against non-computational results; and in particular to always be ready to stop and question any and all output for signs of misbehaviour. These are not artifacts that are in any way designed or intended to be used by the untrained, the uncritical, or the non-specialist.

                  Agree 100%. As an example, I developed, maintain and support a software toolbox implementing a particular set of analysis methods in neurophysiological data analysis. The methods are quite mathematically sophisticated, and place strict constraints on the scenarios and data for which the analysis will be valid. Failure to understand the underlying principles and appreciate the constraints will result in at best a screed of error messages, and at worst GIGO. I actually went out of my way to design the code (unlike many other software resources in this area) not to be a "black box" - to force the user to engage with the analysis process and understand what they are doing - and to document it thoroughly. As you might imagine, this makes my support role quite taxing (although RTFM covers a goodly proportion of queries).

          2. Roo
            Windows

            Re: Static typing in Python

            Python's duck typing can (and usually does) save an *awful* lot of redundant noise in the source code - and that conciseness can benefit the poor old meatsacks that have to read the code. In practice I write in both C++ and Python, using C++ (and liberal sprinklings of 'auto') to tackle the (relatively) well-defined performance sensitive jobs, and Python for the stuff that tends to change frequently or needs to be maintainable by someone under the age of 50.

            I genuinely enjoy coding in both, and yes I do like the type safety in C++, but I also like to write comprehensive unit tests and concise code which Python makes very easy. Meanwhile Java has come to represent the worst aspects of C++ and Python. All too often in Java & C++ land I see code broken up into lots of meaningless micro-classes and layer upon layer of "design patterns" to enable unit testing - which results in a shit-ton of unnecessary code - and programs that don't work anyway because folks haven't been able to properly test the interactions between all those micro-classes. Python's laissez-faire approach to typing and encapsulation drastically reduces the verbosity of code, and makes unit testing (and integration testing) much simpler - both wins when it comes to maintaining a large code base.
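
            The conciseness point in miniature (a toy sketch):

            from collections import namedtuple

            # Duck typing: sum_totals needs only "an iterable of things with
            # a .total" -- no interface declaration, no generics, and the
            # test double below needs no mocking framework.
            def sum_totals(records):
                return sum(r.total for r in records)

            Order = namedtuple("Order", ["total"])
            print(sum_totals([Order(3), Order(4)]))  # 7 -- an ORM row with .total works too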

            Of course folks can still write FORTRAN in any language. ;)

            1. LionelB Silver badge

              Re: Static typing in Python

Yep, in other words treat Python (or Matlab, whatever) as a scripting language, with the heavy computational lifting devolved to super-efficient libraries or bespoke C or C++*.

              * As a scientific research programmer, where flexibility is paramount, a decade or so ago I actually shifted from C++ back to C; although C++ offers more sophisticated program-structuring paradigms and mechanisms, those same paradigms and mechanisms, in my experience, encourage hard-to-refactor program structures. The only thing I really miss is generic programming.

              > Of course folks can still write FORTRAN in any language. ;)

Of course, and so they should ;-) As an aside, Matlab actually started life as a scripting wrapper around the FORTRAN linear algebra libraries LINPACK and EISPACK. FORTRAN 66 (punch cards and all - yes, I is old) was actually my first programming language, and I still have a soft spot for it. It is, in particular, a minimally-deceptive language.

  3. T. F. M. Reader

    stick to what you do well

Certainly Python could use a new core implementation allowing parallelism and improving performance (by a lot).

    Hooking the backend to MLIR/LLVM or similar sounds like a good idea.

    Strong typing may certainly be useful, and will be welcome as an option (to keep original Python working).

    If the above can facilitate static analysis - great.

    Doing all that and sticking to a language that many people use is a very reasonable approach, too.

But for heaven's sake make Lattner stick to what he does well (like compiler backends) and keep his paws from changing or extending the language syntax! He is, after all, responsible for the abomination called Swift, the only language I know in which 2+2 may not even compile, let alone return 4 (hint: it does type promotion on assignment, but not for arithmetic ops). Also the only language I know where there is a difference between a function argument's name as used by the caller and the same argument's name as used in the function's body. And where you need to decide once and for all, for all the client applications, whether you want your data structure to be passed by value (struct) or by reference (class). It looks like there is a difference between class (Python) and struct (Mojo) here as well, and between def (Python) and fn (Mojo), and at least in the latter case you need to decide up front, for all the arguments, what you want to do. Not a good start, IMHO.

Judging from the docs on the Modular site, it does look like a (half-baked?) explicit splicing of Python with a subset of C++, or maybe the C subset of the latter, plus some additional features like a bit of metaprogramming.

    All in all, it has potential. The backend has a good chance of being good. I am not so sure of the frontend so far - needs more work, I'd say.

  4. CowHorseFrog Silver badge

So basically type-safe languages are always going to be considerably faster than those without declared types... who would have guessed?

  5. mpi

    I'm undecided.

    I like the idea of actually giving Python the option to do static typing. And if the language core allows actual parallelism...great.

    But let's see how this pans out.

    1. Tom 7

      Re: I'm undecided.

Check out the ROOT (from CERN) C++ interpreter.

      Having said that, I tend not to worry about what language I'm writing in if I want speed, as that tends to come from the routines I'm calling and what they run on (a couple of graphics cards if I need some speed). Writing a new language is always a tricky thing when you can RTFM of pretty much any computing language and find a solution to the problem you naively think you are the first to come across!

  6. froggreatest

    Not ready for development

A couple of days ago I wanted to write something with this new language. Unfortunately, you must ask for permission, and even then you can only try it in an online notebook environment.

    If a developer cannot run it locally, it’s a bit far from adoption, I guess.

    As for the use cases, I thought it might be a great fit for Robot OS or small devices like the RPi, but I cannot test it to verify my assumptions at the moment.

  7. tetsuoii

    Ho hum...

Just learn real programming in C and ditch all those meaningless toy languages that will never compete in real-world performance, scalability and maintainability.

    1. fg_swe Silver badge

      FALSE

      See here why C and C++ have systemic problems when used by real-world, fallible folks: http://sappeur.ddnss.de/SappeurCompared.html

      1. tetsuoii

        Re: FALSE

So you're downvoting my sound advice to use the language that runs every device, operating system, library or program that has any impact in this world, to promote Sappeur... another language that nobody has heard of and that will never ever be useful in any real sense. Sappeur is now my new joke, like Ocaml, Bosque, Go, Erlang, Julia, Haskell, Smalltalk, Lisp, Zig, Crab and all the other junk languages to throw at my friends for a laugh...! Does it support multiple inheritance, polymorphism, lambdas, template metaprogramming and R-value references? That would be great, for sure. Can't wait to try it, because with such useful features C will be obsolete by Christmas, no doubt.

  8. Marshalltown

    Bah

    "Architected" means "designed," just as "trialed" means "tested." Idiot usage.

    1. Paul Kinsler

      Re: "trialed" means "tested."

Are you sure? To me they have slightly different implications. I might test something in the lab, or on the production line, to see if (e.g.) it met the specs; but "trialed" to me implies that you have actually used it in its intended working environment, and measured performance there (e.g. "sea trials"). Thus a "trial" is a "test", but a test is not necessarily a trial.

I'm not sure about "Architected", however. I think it hints (or tries to) at perhaps a higher-level or more sophisticated attitude to design; but then it also suggests that its user is trying to use fancy-sounding words to impress. For me, I think this latter meaning would usually win out. Especially if "carefully architected foundations" and little else is said, as in the quote.

      1. Claptrap314 Silver badge

        Re: "trialed" means "tested."

        Architecture is about the role a piece of code has in the business, and what tools are to be invoked to fill that role.

        Design is about breaking down the role into discrete functionality and mapping out the flow of data within the code.

Use words as you will (and I accuse marketing, including web sites, of doing just that), but there is a useful distinction to be made.

  9. fg_swe Silver badge

    Static Typing: Safety, Security and Performance

    Dynamic typing has been a Dangerous Thing since its beginning. It can be safely used for toy projects without real-world relevance. E.g. adding up the results of the local tennis club or the like. As soon as cybernetic attackers are a concern, do not use it !

    Also, if you need performance, dynamic typing requires fancy optimizers, which are themselves security problems.

    Essentially, dynamic typing is Fast Food Programming. Quick results at long term cost.

    Here is my shot at strong typing and memory safety: http://sappeur.ddnss.de/

  10. fg_swe Silver badge

    Generic Programming Using Standard Tools

    Use a proper Macro Processor such as m4 to define generic code and then generate instances for different types. Here is an example of a generic Sappeur quicksort algorithm:

    http://sappeur.ddnss.de/quickSort.ad.m4.txt

    http://sappeur.ddnss.de/quickSort.ai.m4.txt

    http://sappeur.ddnss.de/SortUnterstuetzer.ad.txt

    http://sappeur.ddnss.de/SortUnterstuetzer.ai.txt

The C++ template system is mostly an unnecessarily complex, hard-to-debug macro system. The purpose seems to be to scare off newcomers with a load of hard-to-decipher error messages upon a single mistake.

    So, get rid of templates, get rid of dynamic typing, and employ a proper macro processor for generic code. Write the instantiated code to disk, so that the programmer can look at it.
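
    Transposed to Python purely to illustrate the workflow (hypothetical file names; m4 plays the role of the format string here, and the instantiated source ends up on disk where the programmer can read it):

    # A generic "template" instantiated once per concrete element type,
    # mimicking the m4 approach of generating readable per-type source files.
    TEMPLATE = """\
    def quicksort_{typ}(xs):
        # Instantiated for element type: {typ}
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort_{typ}([x for x in rest if x < pivot])
                + [pivot]
                + quicksort_{typ}([x for x in rest if x >= pivot]))
    """

    for typ in ("int", "float"):
        with open(f"quicksort_{typ}.py", "w") as f:
            f.write(TEMPLATE.format(typ=typ))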

    1. fg_swe Silver badge

      Re: Generic Programming Using Standard Tools

I got this idea when working for Dassault on CATIA. They use cfront macros for generic programming. It works rather well for CATIA and its 2000-odd modules. m4 is superior to cfront.
