JavaScript is for scripting HTML DOM and browser API stuff
I was one of those who played around with JavaScript back in the late 90s, as VBScript in IE slowly died off and newfangled languages like Java figured they could rule the Web with their Applets, before people yelled loudly enough at Sun to GTFO with that nonsense.
Anyway, JavaScript. It led to things like DHTML (Dynamic HTML), which was essentially the use of JS to create animated drop-down menus and other fancy dynamic stuff on sites. You also had a few people (ab)using hidden iframes to dynamically load data from the server without having to refresh the main page, before this somehow got hyped up as a 'new' thing with AJAX (XMLHttpRequest).
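For the record, that whole 'new' AJAX pattern boiled down to something like this minimal sketch (the '/api/data' endpoint and the 'content' element are made up):

```js
// Classic XMLHttpRequest ('AJAX') pattern: fetch data in the background
// and poke it into the page without a full refresh.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/data', true);   // asynchronous GET
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var el = document.getElementById('content');
    if (el) {
      el.innerHTML = xhr.responseText;
    }
  }
};
xhr.send();
```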
To call JavaScript a 'programming language' is rather generous. It supports a grand total of two data types worth speaking of (UTF-16 strings & IEEE-754 double-precision floats) and runs in a single-threaded fashion. It is essentially a way to allow for the scripting of actions involving the DOM (adding/removing content) and the calling of native browser functions which are exposed to the JS VM.
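A quick illustration of how little that type system pushes back, assuming nothing more than a browser console or a NodeJS REPL:

```js
// Every number is an IEEE-754 double, with the usual consequences:
console.log(0.1 + 0.2);           // 0.30000000000000004
console.log(9007199254740993);    // 9007199254740992 -- past Number.MAX_SAFE_INTEGER

// And a variable's type is whatever it feels like being right now:
var x = 42;
x = 'forty-two';                  // no complaints
console.log(x + 1);               // 'forty-two1'
```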
There's no 'standard library', and as a prototype-based language the default programming style is 'whatever works'. Over the years its shortcomings have been patched up in a variety of ways, first by making the JS VM more performant, then by adding more and more native API functions, but as a JS codebase grows, it threatens to collapse under its own weight.
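The 'whatever works' style in practice: nothing stops any script on the page from rewriting the built-in prototypes that every other script depends on. A contrived sketch:

```js
// Any script can 'extend' the built-ins for the entire page...
String.prototype.shout = function () {
  return this.toUpperCase() + '!!!';
};

// ...and every other script now silently sees the change,
// at least until some other library overwrites shout() with its own version.
console.log('hello'.shout());   // 'HELLO!!!'
```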
Having done a few years of professional JS development for embedded purposes (yes, it exists) a few years back, I found it interesting to note how the JS ecosystem has bifurcated so many times that it has become essentially non-portable. Take just the concept of 'modules'. Just about any real language has the concept of 'modules' or 'includes', where one can reference external code files. JS only added this quite recently, but not before many different alternatives were developed (AMD, Include.js, etc.), and NodeJS became essentially its own JS dialect built on CommonJS, with the project developers resisting native JS modules for years before grudgingly bolting them on.
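The same 'load some other file' task ended up looking completely different depending on which corner of the ecosystem you were standing in. A rough sketch (each variant lives in its own file, and the './parser' module is made up):

```js
// AMD (the RequireJS style), common in older browser codebases:
define(['./parser'], function (parser) {
  return { run: function () { return parser.parse('...'); } };
});

// CommonJS, the NodeJS way:
const parser = require('./parser');
module.exports = { run: () => parser.parse('...') };

// Native ES modules (ES2015), which took years to work everywhere:
import { parse } from './parser.js';
export function run() { return parse('...'); }
```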
Getting any bit of non-trivial JavaScript to run reliably across a number of browsers, headless runtimes and NodeJS was a nightmare; one could be forgiven for thinking the concept of unit tests was invented just so JavaScript would have any chance of producing code that does not immediately self-destruct in production (quietly, too, since a swallowed exception rarely leaves a usable stacktrace behind). Even so, we had plenty of bug reports from testing and production that could only be reproduced with copious amounts of console.log() statements.
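The classic failure mode: a bug inside an asynchronous callback simply evaporates, and you end up sprinkling console.log() around until you find it. A contrived sketch:

```js
function loadConfig() {
  // Pretend the server response was missing a field.
  return undefined;
}

setTimeout(function () {
  var config = loadConfig();
  console.log('config:', config);   // the time-honoured debugging technique
  config.apply();                   // TypeError, dumped into the console at best
}, 0);

console.log('main script finished');   // prints happily, as if nothing went wrong
```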
In that regard TypeScript is amazing. Sure, the typing is optional, but it lets you tame the prototype nature of JS and turn it into a deterministic language, with fixed structures that have required and optional fields, strict null checking, and no need to verify in every goddarn function that the right parameter types were passed in (nobody likes a random '<foo> is not a function' and so on), almost as if one were programming bleedin' Java.
Instead, you write the nicely annotated TS and pass it through the transpiler, which will politely tell you where you screwed up. It almost brings a tear to one's eyes to see that after years of dealing with plain JS. Of course, then try to convince the customer and one's colleagues that switching to TS is a really, really good idea.
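A small sketch of what that buys you; the interface and function names here are made up, but the compile-time errors are the real thing:

```ts
// A fixed structure with required and optional fields.
interface SensorConfig {
  name: string;
  pollIntervalMs: number;
  description?: string;   // optional field
}

function startPolling(config: SensorConfig): void {
  console.log(`Polling ${config.name} every ${config.pollIntervalMs} ms`);
}

startPolling({ name: 'thermo-1', pollIntervalMs: 500 });   // fine

// Both of these are rejected by the transpiler instead of exploding in production:
// startPolling({ name: 'thermo-1' });                          // missing pollIntervalMs
// startPolling({ name: 'thermo-1', pollIntervalMs: '500' });   // string is not a number
```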
So yeah, long story short, JS is being abused in every sense possible for all the things it was absolutely not designed or intended for, but I fear we passed the point of no return roughly two decades ago now.
Whatever WASM will or will not do I do not know. To me it feels mostly like a way to compensate for the lack of native NPAPI plugins, but so long as every call into or out of a WASM module has to pass through JavaScript, the real use of WASM seems to be mostly to allow JS frameworks to offload some sluggish processing to zippier can't-believe-it's-not-native code.
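That 'everything passes through JavaScript' point in concrete terms: a WASM module's only window to the outside world is whatever functions the JS host hands it, and calling into the module goes through JS as well. A sketch, where 'module.wasm' and the function names are made up:

```js
// JS glue for a hypothetical module.wasm.
const imports = {
  env: {
    // The WASM code can only reach the console through this imported function.
    log_value: (value) => console.log('from wasm:', value),
  },
};

WebAssembly.instantiateStreaming(fetch('module.wasm'), imports)
  .then(({ instance }) => {
    // Calling into WASM also happens via JS: invoke an exported function.
    const result = instance.exports.crunch_numbers(42);
    console.log('result:', result);
  });
```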
Maybe if Microsoft grew a pair and added direct support for TypeScript in its Chromed Edge browser?