
Nah
We need line numbers in C++ first, then we'll worry about types in Javascript.
Type-fans rejoice! Plans (or a proposal, at least) are afoot to pop some type-checking into the infamously dynamically typed JavaScript. The proposal, from Microsoft and others including devs from the Igalia Coding Experience program and Bloomberg, is all about adding "types as comments" to the language. The theory is …
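For the curious, the idea looks roughly like this (a sketch borrowing TypeScript-flavoured annotation syntax, which is what the proposal leans on; the function is made up). The engine would treat the type annotations as ignorable comments, while external tooling gives them meaning:

    // valid under the proposal: the annotations carry no run-time meaning,
    // the engine skips them and checkers/editors make use of them
    function equals(x: number, y: number): boolean {
      return x === y;
    }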
I can't help but think that the Russian trolls are far too busy trying to change their bitcoins into something, anything, that isn't roubles and keep it out of Russia.
That's perhaps why the internet seems relatively quiet right now - the trolls, with better access to the outside world, have realised exactly how hard the shit has splattered across the fan, the walls, the ceiling...
"Would far rather use VBScript than poxy Javascript"
Until VBScript becomes VBS.NET then VBS.ASP.
Although, I really can't disagree with you as we're experiencing the same outcome :-/
Javascript becomes Microsoft TypeScript.ts then Typescript.mts (with unholy speed penalties).
Instead of progressing WASM (which has typing) to a more robust state, people want to chastise Javascript with a can-only-be-slower abstraction layer.
Independent developers need to get behind reworking the web browser under an entirely new approach, one without HTML/DOM/JS/etc. I'm not sure why most people seem dead set on keeping HTML/DOM/CSS as the ONLY way, but people are, and I'm wondering if those same people realize who controls the evolution of it all (it's not them).
Not really. I take the proposal to be more of a generic, standardised annotation to infer some static typing for parameters.
Which probably isn't a bad thing; it wouldn't hurt to require code invoked by HTML elements to declare type expectations on its params, for example.
I certainly don't see it as a replacement for TS or WASM, more of a least-effort, non-breaking way to improve performance and security.
Too many Web developers have fooled themselves that they are full-fat developers, instead of framework and runtime manipulators, so everything has to be shackled by DOM and accreted CSS knowledge and practice, with JS being the hammer used to crack every nut encountered.
Frankly, I would hate to deal with Web developer attempts at WASM. It's just a shame that, as things go full circle, desktop front-end devs are a dying breed...
The syntax may be "in your face", but it can be very useful and it IS optional. It's also easy to ignore.
As for companies mandating them, it depends on why and where. Large bodies of complex code on which static analysis tools are run or small scripts? If people are requiring its use where the tradeoff between the extra code and the benefits of it is bad, that is their fault.
Not sure why you're downvoted, but as someone who is _NOT_ a professional "pythonist", I think they are very distracting in any language. They make me feel like I'm looking at a test file or a documentation file, not a source file.
In a typescript file I saw on github a few years back, someone wrote a test file for their typescript, in typescript, that was dependent upon these hints. This took me longer to understand than you'd think, as both the test file and the "source" typescript were nearly identical (I had to keep a close eye on the filename while alt+tabbing). Effectively, it was a test file for a test file, as typescript is just a test suite in disguise, so it's confusing by design (and it's confusing why people want this as syntax).
I blame Doxygen. While Doxygen is fine for its documentation routines in C++ (I guess, still not the biggest fan after ~23 years), a lot of its syntax styles bleed over into a lot of other languages as language syntax. The problem there is that Doxygen is one big bag full of syntax styles ripped from languages "throughout the land" without ever being intended as actual language syntax, so when it's actually incorporated as language syntax, you wind up with some nasty looking stuff.
Commenting from a Python POV - type hinting is great for two reasons - static analysis is cheap and finds bugs, and secondly it allows editors to provide more robust auto-completion.
Well-engineered code says what arguments a function takes and what it returns, so it's a minor effort to add proper type hints to something, and the value gained from adding them increases exponentially as the proportion of type-hinted code increases. It's universally a good thing.
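The same two benefits carry over directly to the JavaScript proposal; here is a small TypeScript-style sketch (names invented) of what a checker and an editor get out of annotations:

    interface User {
      id: number;
      name: string;
    }

    function greet(user: User): string {
      // an editor can now complete .id and .name, and a checker
      // catches typos like user.nmae before the code ever runs
      return `Hello, ${user.name} (#${user.id})`;
    }

    // greet({ id: "42", name: "Ada" });  // flagged by the checker: id should be a number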
"What if you deliberately engineered your JavaScript (or whatever)"
Then you're not using Typescript.
After all, the entire purpose of Typescript is to enforce typing, so there wouldn't be a parameter called "any" or anything like that to make itself irrelevant. Of course if it did, then you'd have to write some needless first order logic to discriminate solely for Typescript itself. Furthermore, then the "language" would be designed to only support certain types to compile, but that isn't a problem since Typescript always supports Javascript fully. An absolutely great thing about Typescript is, it doesn't promote you changing your performance intentions because Typescript always supports the most performant methods. Typescript is amazing!
Oh, I almost forgot, you won't have to learn any Javascript ever because there's nobody using Typescript that ever asks "How do I do this in Javascript?"... never happens.. Never!!
Static analysis of Python code has always been good and there are numerous examples where static typing fails to prevent bugs. You need fuzzing and things like Hypothesis to test inputs.
Type hints are essentially compiler optimisations and were originally ruled out of Python for precisely that reason.
Is there a reason why languages from "web people" always favour the types after the variable names in function arguments? I.e.:
void do_thing(a: int, b: float)
vs
void do_thing(int a, float b)
Go, Rust, Typescript all seem to do it. Surely it isn't simply because these guys are all just used to Adobe's ActionScript 3?
I'm simply more used to the latter and it doesn't really matter; I am just interested in the legacy of where it arose, or if there is a technical reason (i.e. easier to parse, etc.).
> void do_thing(a: int, b: float)
That appears to be the same (optional) type syntax already used for a long time in server-side JScript (*) under ASP.Net.
That might or might not be why MS have suggested it here.
(*) JScript being MS's version of JavaScript in both client-side flavour (where it's essentially their "version" of regular JavaScript with EEE tweaks) and for server-side ASP and ASP.Net (where it lets you use both JS and Windows/.Net objects).
So the long and short of that is that all those "modern" languages aren't strongly typed, and those that are strongly typed (such as C#) allow type inference via the var keyword where applicable.
This is because there is a trend towards non-compiled, or semi-compiled languages, where the runtime has to do the work of figuring out the actual variable type (or using some horrible struct that can store a pointer to a real variable, or an instance of all the types something can be represented as, or, worse still, a string which gets cast to e.g. an integer, when it is treated as such).
Because computers are getting faster, people don't notice that this sort of thing is much more computationally intensive, or wasteful of memory, so they don't understand why their tight loops run slower than something written in C running on older hardware, or why their program keeps stopping for seconds at a time to do a garbage collection.
Personally, I find the abstractions provided by modern languages are useful to allow you to focus on what the program is supposed to do, rather than the implementation detail of how it does it under the hood. However, I don't trust any developer who can't explain what 2's complement is, or can't tell you what bit-shifting a 32-bit unsigned integer three places to the left will do.
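For illustration only (not how any particular runtime actually lays it out), that "horrible struct" amounts to a tagged box; in TypeScript terms it looks like a discriminated union, and every operation pays for a tag check:

    type Boxed =
      | { tag: "number"; value: number }
      | { tag: "string"; value: string };

    function addBoxed(a: Boxed, b: Boxed): Boxed {
      if (a.tag === "number" && b.tag === "number") {
        return { tag: "number", value: a.value + b.value };  // the cheap path
      }
      // anything else falls back to coercion and string concatenation
      return { tag: "string", value: String(a.value) + String(b.value) };
    }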
"I don't trust any developer who can't explain what 2's complement is"
Us old farts would insist they know about 1's complement, BCD and Excess-3 encoding as well. You never know what you're going to meet.
[I once had to convert 9-track data tapes of CDC 6600 1's complement 60-bit floating point numbers using -0.0 as a missing data marker for use on an ICL 1900 machine that had 48 bit 2's complement floating point and tape decks that (for reasons known only to God and maybe the hardware designers) inverted the sense of the bottom three bits of every byte compared to the rest of reality. Happy days.]
I recall, back in what was probably the late '80s, reading a book on Z80 machine language coding on the Amstrad CPC (I was a very nerdy pre-teen at the time). This actually had a good discussion of both 1s and 2s complement, and why you don't want to be dealing with -0 (11111111 not being the same as, but considered equal to 00000000, so having to be handled as a special case in all arithmetic comparisons being the most obvious reason).
Personally, I think all developers should know about the really low-level stuff, and have at least an inkling of what the processor might be having to do under the hood, then be thankful that we don't have to write in assembly.
What you're saying here is that you wouldn't trust any developer under 50, which is your prerogative, but it's going to make it hard for you to work with younger colleagues as you constantly side-eye them.
Having to bit-shift unsigned integers is an old-people thing now (and embedded systems, but that's quite an old-people field) and honestly that's fine. Interpreted languages are productive and developer cycles are more expensive than processor cycles, so let's use the approaches that allow us to do the important work and allow our compilers and runtimes to handle the bitwise arithmetic for us.
Don't get me wrong, I'm old and can remember when we had to build our own compilers out of plywood uphill both ways in the winter, but our field has changed, as all things must. The truth is a younger dev getting their head down and writing a few lines of javascript will get more done than an old man spending the same amount of time yelling at a cloud.
Nope, that's not what I'm saying; I'm under 50 myself (still got a few years to run), and most of what I do is done in high level languages with multiple layers of abstraction between the keyboard and the bare metal.
What I'm saying is that it is helpful for developers to understand how what they write gets turned into something a computer can run, so that they don't write really shitty inefficient code. Sometimes this is as simple as knowing not to put that database query with the unchanging result in the middle of a tight loop, or knowing when it's better to cache something rather than work it out repeatedly (and also when it isn't).
Of course, modern compilers will probably work out that if you want to multiply an unsigned* integer by a fixed power of two, it can bit shift it, without you having to worry about telling it so. However, appreciating that this means that multiplying a signed double by 8 is going to be much faster than, say, multiplying it by 6, is a useful bit of knowledge to have.
*It can probably work out that signed integers need the high bit wrapped to the low bit, and preserved as well.
I know some very good devs who are at least a decade younger than me, and I know some very bad ones who are older (one of whom doesn't even own a computer outside of work), so age really isn't a factor, it's about attitude, and willingness to actually understand what you are doing at a deeper level.
The very worst developers I've met have been those who write monolithic unmaintainable, uncommented code. Invariably, they are people who work with scripting languages, like Javascript.
A "dev" could be anybody from a self trained former priest to a guy with a PHD in CS.
What I found is that computer science is indeed a science of its own and neither EE nor Math people automatically know about efficient algorithms+data structures. They typically never had a lecture on computer architecture.
In addition, most people are too lazy to get to the bottom of "boring details" like sort and hash algorithms. They assume their self-invented hash code will be more than good enough. Which is wrong. They assume the built-in sort algorithm will be good for all purposes. Also wrong.
In other words: if you want a top class program, you better hire top class people who know the theory behind what they do. The self-trained ones will most likely produce solutions which are much worse complexity-wise. E.g. O(n^2) for a merge program which can be O(n).
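To make the merge example concrete, the textbook linear merge of two already-sorted arrays walks each input exactly once; a TypeScript sketch:

    function mergeSorted(a: number[], b: number[]): number[] {
      const out: number[] = [];
      let i = 0;
      let j = 0;
      // each element of a and b is examined once, so the whole thing is O(n)
      while (i < a.length && j < b.length) {
        out.push(a[i] <= b[j] ? a[i++] : b[j++]);
      }
      // append whatever is left of the longer input
      while (i < a.length) out.push(a[i++]);
      while (j < b.length) out.push(b[j++]);
      return out;
    }

The self-invented version typically re-scans the output for every insertion, which is where the O(n^2) comes from.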
"The self-trained ones will most likely produce solutions which are much worse complexity-wise. E.g. O(n^2) for a merge program which can be O(n)."
Not strictly true. Technically, I'm "self-trained" in that I don't have a CS degree. What I do have are two "hard science" degrees, experience going back to my pre-teen years of learning how computers actually work, and 20+ years of actual real-world programming experience, where things like performance and maintainability really matter.
"Self-trained" isn't the problem here, it's the willingness to actually learn and gain a deep understanding of what you're doing.
I've known CS graduates who've got a very good degree, and know all the ins-and-outs, but lack the domain knowledge of real-world computing. They've gone and written reams of very clever, but completely unsuitable code, because a CS degree doesn't teach you about the real need for "quick and dirty" under some circumstances. Sometimes meeting a client's deadline with a suboptimal solution is more important than writing perfect code, and this is just as important as being able to write that good code when permitted to, rather than unmaintainable shite. It's experience that teaches you which path to take for any particular problem. In particular, the experience of knowing when you can bite off some of that technical debt under the pretence of another project, in order to save yourself work later on.
So, in short, it's not important how you got the knowledge, whether you're an autodidact, or a trainee, it's having that knowledge (and breadth of knowledge) that counts.
"Having to bit-shift unsigned integers is an old-people thing now"
Bit shifting is useful for extracting data. The alternative, that I've actually seen, is:
divide by two, divide by two, divide by two, divide by two... (yeah, they didn't think to divide by eight; they knew it needed shifting three places, so it was divided three times).
And even then it sometimes failed to work because the division was calculated as a real number, not an integer, so round-to-nearest sometimes gave the wrong result.
Simple bit shift - literally a single instruction on any useful processor, versus invoking the FPU to divide anything.
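A concrete (made-up) example of the difference, in JavaScript-ish terms:

    const raw = 0b10100101;                     // 165, some packed byte (invented value)

    // shift right three places and mask - a couple of cheap integer instructions
    const field = (raw >>> 3) & 0b11111;        // 20

    // the "divide by two, three times" version with real division and rounding
    const oops = Math.round(raw / 2 / 2 / 2);   // 20.625 rounds to 21 - wrong answer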
"and embedded systems, but that's quite an old-people field"
Is it? We've just hired a graduate, are in the process of hiring another, and have three other engineers in the team in their 20s and 30s. And thinking back to my previous employers, the age ranges of the engineers in the teams were largely similar - a few senior engineers (the role I now play here) who've been around the block a few times, working alongside a larger team of younger engineers bringing with them a much-needed shot of youthful enthusiasm and experience with some new tech that we might not have had a chance to mess around with ourselves.
So whilst there undoubtedly will be some embedded systems teams out there which look like a gathering of the old farts club, IME this is far from the case for most teams, and especially not the case for teams employed by companies who intend to still be doing embedded systems development in years to come...
Relying on an optimizer to tidy up after poor coding practice is a dicey strategy.
> The truth is a younger dev getting their head down and writing a few lines of javascript will get more done than an old man spending the same amount of time yelling at a cloud.
The same could be said for BASIC, provided your program's requirements are within the framework of the problems that the language is intended to solve. The real fun comes when one of these programmers is tasked to write some code that does something that there's no library for -- they can put a graphical front end on something but they're absolutely clueless about what that 'something' is.
>Having to bit-shift unsigned integers is an old-people thing now (and embedded systems, but that's quite an old-people field)
Any idea what a barrel shifter is and why it might be used?
"The real fun comes when one of these programmers is tasked to write some code that does something that there's no library for -- they can put a graphical front end on something but they're absolutely clueless about what that 'something' is."
Well, sure, but all you're really saying here is, "get someone who knows how to do X to do X". Putting a graphical frontend on something is a useful skillset. Sometimes you need a graphical frontend put on something. If that's what you need, someone who knows how to do that is more valuable to you than somebody who can write a really efficient sort algorithm.
Sometimes you need the sort algorithm person, sometimes you need the person who can write a graphical frontend. The field is much too large these days for anybody to be great at all of it.
Not necessarily true. I'm self-taught, and knew about these things at an age well before I would have been taught them at school (if a CS course had existed at my school at the time, which it didn't). I've met CS graduates who look at you blankly when you mention 2's complement, so YMMV.
Some of the best devs I know don't come from a CS background, they come from a hard science background (usually physics or chemistry). One particular one I know is a physicist by training (and went to uni with Stephen Hawking), long before CS was even a named thing taught as a course at university, let alone in a secondary school, and he has his name on several networking patents, for which he still earns royalties, although for $reasons, the cheques denominated in USD are not worth trying to actually cash in the UK.
If you want to see a "young" language with rather strict typing and high efficiency, look here: http://sappeur.ddnss.de/
The trick is to divorce yourself from the media messaging and use your own rationality. Just because something (in this case dynamic typing and type inference) is being hyped, does not mean YOU should agree with that. Use your experience, your own rationality and you can create something that is truly an improvement on the state of the art.
For example, I observed:
A) C++ programs are highly efficient
B) Java* programs are more robust than C++
C) Java programs are inefficient
D) The "trade-off" between Java and C++ programs are not for inherent reasons.
E) Algol, Pascal, Ada, Modula-2 were in many ways better than C++ is today.
So I proceeded on to create a strongly typed language that appears to be old-fashioned in some ways, but results in very robust programs which are at the same time rather efficient.
*and a raft of similar languages such as C#, F#, Python, Scala
> Is there a reason why languages from "web people" always favour the types after the variable names in function arguments?
It comes from the maths notation used by the type theoretic people who introduced type inferencing(*). It's a trivial syntax rewrite between "a: int" and "int a" so utterly irrelevant in the grand scheme of things except to the pointy headed.
(*) Little known fact about accidental syntax: SQL has its SELECT syntax because Codd's proposed precursor relational system Alpha was based on PL/I syntax and leveraged the newly introduced SELECT statement in PL/I.(**)
(**) Warning: This from heavily bitrotted 40+ year old memory. Back then I had some academic colleagues working on the newfangled relational database stuff and others pooh-poohing it as far slower than proper CODASYL databases.
It makes retrofitting easier.
Suppose the parser has seen the tokens
function something( int
at this point `int` could be a variable name in old-style code (it's not a reserved word and I'm sure it's used in the wild for integers) or it could be the type for a new-style type declaration. Suffix declarations remove that ambiguity. The parser knows `int` is a variable name. And that means a simpler, smaller, faster parser.
EDIT: of course you could prefix it via a unary punctuator e.g. function something(@int x) {}
but by this point in a language's development, most are taken.
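To make the ambiguity concrete (hypothetical functions): `int` is a perfectly legal identifier in today's JavaScript, so a C-style prefix type couldn't be told apart from a parameter name, whereas the suffix form settles it immediately:

    // legal JavaScript today - "int" here is just a parameter name
    function double(int) { return int * 2; }

    // suffix style: the parser reads a name first, then ":" unambiguously starts a type
    function scaled(value: number): number { return value * 2; }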
> always favour the types after the variable names in function arguments
It's probably the influence of Ada, which may have picked up the habit from PL/I. Ultimately it may turn out to be the leftover influence of IBM -- they've always marched to the beat of their own drummer (but then it used to be a very big drum).
Personally, I think JavaScript is an abomination: it's the hate child of BASIC and PL/I. For those who share my view, I can't see the point of these insignificant cosmetic fripperies. For those that don't, they'll likely never use them anyway.
In the browser (the place that complex pieces of arbitrary software least belong), it is at least becoming possible, if you must write code, to use serious programming languages that compile to WASM. Outside the browser there's so much choice it's inconceivable anyone would want to use JavaScript for anything.
It used to be all fields round here, you know.
I think Brendan Eich has been quite candid about the hurried recasting of his initial concept in order to meet what was essentially a marketing deadline. The ideas behind it are rather clever. The trouble is they're dressed up as a vaguely Java-like procedural language and the result is neither one thing nor the other and a lot of confusion results.
Brace yourselves for seeing lots of code liberally sprinkled with :any
because the developer was too lazy to work out what the type should be, and their compiler warnings* are set such that untyped variables won't be allowed...
*yes I know JS isn't compiled, but that doesn't stop you running it through a linter, does it?
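Something along these lines, presumably (names entirely made up):

    // ":any" keeps the "no untyped parameters" linter rule happy
    // while telling the checker precisely nothing
    function handleResponse(data: any, options: any): any {
      return data.items.map((item: any) => item.value * (options.scale ?? 1));
    }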
The key is in the name: Java"script". If you are writing unholy amounts of code for every webpage and refactoring it continuously, you are doing it wrong IMHO.
Microsoft can use Typescript for their Web monstrosities, and they are welcome to keep their god awful HTML to themselves too. (still, in 2022, ffs)
Everyone else should KISS.
Yeah, it's much better to make your code "simpler" by pulling in external libraries from all and sundry rather than writing the code you need (to just do the things you need it to do) yourself. There's no potential for libraries to be updated with breaking changes or bugs, be withdrawn, be malicious, or any other sort of security flaw, is there?
A lot of languages have "compiler hints", this is no different.
[[unlikely]], [[fallthrough]], [[nodiscard]] and friends spring to mind.
The compiler is free to ignore all of these, but they are helpful.
They are either markers to tell the compiler to generate a warning if code is syntactically correct but not as the marker expects, or hints to guide optimization.
In a dynamically typed language, standard type hints are a very powerful optimization tool. The JIT can generate the most efficient representation the first time through, rather than keeping its options open until more execution info becomes available.
Of course, if you lie then the code will run slower than if you'd said nothing. But that's ok. Don't lie.
On this very site there are breathless (although usually well caveated) articles about how "AI" is capable of ... well everything.
Then you have an article which suggests the best place to start with "AI" would be in interpreting scripting languages.
If your "AI" can't work out what type a variable is meant to be when a human programmer could then I don't really want it diagnosing me for cancer.
And if we are supposed to be building languages as an abstraction of the real world, then why the hell do we need typing anyway? After all, in the real world a "variable" (like a page of paper) can hold a number, a sentence, an image, a music score. Or all 4.
It's starting to sound like the "cloud is bad because I can do it cheaper" folk. Listen: in your own little world things are simple, don't change frequently, and the largest app you've had to deal with is about 10K lines of code. Outside of that world, type checking may not be perfect, but it's at least one more barrier against bugs.
Agree, but I'd be even more strict. If the program is bigger than one screen it benefits from typing. And if there's more than one developer it's essential. Finding type bugs can be painful.
It's really the same argument as why you should use meaningful variable names instead of a/b/c/d: It is quicker to write a line of code if you have fewer key presses. It is not a quicker way to write code.
I am used to C.
I got quite a shock, and a lot of headaches, when doing some little things for my site using PHP, and discovering the difference between == and ===, because two things with the same value aren't equal because for $Reason PHP has decided they're different types. Not to mention the horror of the coercion between this and that when setting a variable to a value and then using it in a different way (like as a string).
Ugh. I'll stick with my unsigned long long long longs, thanks. At least then I know what to expect.
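For what it's worth, JavaScript (the language under discussion) has exactly the same == / === split, and its coercion rules are no less surprising; a few examples (sketch, using an any-typed value so the snippet also passes a TypeScript checker):

    const zero: any = 0;
    console.log(zero == "");    // true  - "" is coerced to the number 0
    console.log(zero == "0");   // true  - "0" is coerced to the number 0
    console.log("" == "0");     // false - so loose equality is not even transitive
    console.log(zero === "");   // false - strict equality compares type first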
One side of me thinks rational, sensible changes to JavaScript would be appropriate and would help tremendously with the language.
The other side of me immediately goes "Microsoft? Does it smell a bit embrace-extend-extinguish in here or is it just me?"
As good as it looks, somehow I can't get it out of my mind that this large, wooden, horse-shaped object is just a little bit too conveniently gift wrapped...
1.) JavaScript is only moderately efficient, because JS VMs will create a "shadow type system" during program execution, then optimize the code based on the shadow types. This mechanism is hugely complex and creates lots of attack surface. Also, it consumes energy which could be spent more wisely.
2.) Strongly typed languages can detect many classes of programming errors before the program even executes. These errors are usually the cheapest to fix.
3.) Type systems can even eliminate serious multithreading race conditions. See this language of mine: http://sappeur.ddnss.de/
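To illustrate point 1 (a sketch, not specific to any engine): a JIT watches the types actually flowing through a function and specialises the generated code for them; annotations like the proposal's would merely document what the engine currently has to discover for itself:

    // under the proposal the annotations below are ignored at run time;
    // today a JIT infers the same "numbers only" fact by observation
    function add(a: number, b: number): number {
      return a + b;
    }

    add(1, 2);   // after a few calls like this, the engine emits a fast numeric add
    add(3, 4);

    // add("a" as any, "b" as any);  // mixing in strings would force a de-optimisation
                                     // back to a generic, slower code path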