Welcome back, Java promise!!!
The EXACT same claims were made 30 years ago by Sun Microsystems. 20 years ago, Microsoft promised it with .NET. 10 years ago Node.js was making some familiar claims. And now.... WASM.... WASSSUP!!!!
The big promise of Java was "write once, run anywhere", but it implied "compile once", because apparently compiling for different targets is just too hard, dammit! It isn't that hard really, fully compiled programs haven't gone away, and Java was always a runner-up to C++. WASM itself is pretty much a niche on the web, since most apps don't need it, and WASI will probably remain niche too.
You are dead right - it actually did (and still does) deliver 100% on the promise. And 99.99% of all compiled Java code that you will find deployed out there right now *still runs without issue*, 20 years later, because of the extreme care with which the SDK has been evolved.
Sarcasm Overload, Baby!
Unless you mean that you can still run all the compiled Java, just so long as you have kept a copy of every single Java Runtime (just to be sure - you did give four 9s), each one installed on its own machine (I'll let you use VMs, to be kind), so that you have a fighting chance of finding one that will run your code and all of its dependencies?
Cute. But I've been a full time Java developer since 2000 (Java 1.3) and I haven't seen the issues you've described for over 10 years. With two specific exceptions (applets, now dead, and the javax.script package) code written for Java 8 will run without any issues on 11 or 17, as well as JVMs from alternative vendors like Bellsoft. It's so predictable we've scaled back our release testing on different versions - no need. Wasn't always thus, but it is now.
Yes, of course we keep the compiled versions, and there are issues with compilation tools, e.g. Ant. However, those issues are C-based - stuff linked against lib32-glibc, old versions of libssl, etc. - which doesn't seem to help your argument.
Java in the browser failed at the time because of Microsoft. It was about the same time Microsoft did their Microsoft Java (tm) thing and got sued, so they then made .NET. Now Google of course doesn't like Microsoft, so Silverlight failed - a kind of justice, given what Microsoft did to Java.
While that certainly played a part, I think the bigger issue is that starting the JVM in a browser, back on single-core, single-thread CPUs and computers that still had RAM measured in megabytes, could take several seconds. Several seconds where your browser was seemingly completely locked up, and aside from a possible "Starting Java Virtual Machine..." message in the status bar and the HDD light going crazy on your system, you might never know why. And for what? So some company could make an interactive ad? As some may recall, for a time Sun was trying to develop a Java CPU that could natively execute Java bytecode, but eventually gave up because it was just too expensive and the end result was really nothing to write home about.
Maybe I came across some and have since forgotten, but I can't recall ever seeing a single Java applet that was actually useful. I don't recall any examples outside of the sort of Hello World example on Sun's site with the Ouija board puck mascot waving and stupid ads. The ads were generally replaced by Flash which gave them all the benefits of Java without the massive runtime that needed to be loaded. Silverlight was another example of the Ballmer era Microsoft which was constantly chasing the taillights of other companies, offering competing products that often didn't even match the competition's offering and had absolutely no other compelling reason to use it. And the vast majority of people who did adopt it were using it for things like interactive ads that everyone would have generally been far happier without anyway.
Java applets were always a solution in desperate search for a problem to solve, and the only solutions they seemed to come up with were to things people would have been happier being left unsolved. Java servlets are another story entirely, but there the runtime is being handled by the server. Same with system apps that may be written in Java. If you tried to do applets today, where you have a glut of CPU cycles, many computers have SSDs, and generally around 8GB of RAM, history might have played out a little differently, but I suspect it would still be an effort of trying to pound a square peg into a round hole.
Sadly, many network and server admin tools were (or even still are) provided as Java applets. Examples include various server ILOs with Java-based remote console, and the Cisco ASDM management interface for their ASA firewalls.
The server vendors have generally moved to HTML5, but Cisco remains a relic of the past.
It was always a pain trying to get a compatible version of JVM and Webstart which worked with them, *and* which didn't violate Oracle's licensing terms (*), *and* which didn't reject the applets due to code signing issues or other problems.
Apple stopped shipping Java way back, from macOS 10.7.
(*) In many cases you would be prompted to download a newer version of Java SE. However, only Java SE 8u202 and earlier were free; anything later than that exposes your entire company to a per-seat licensing cost.
Sun's 'expertise' in GUI/UX architecture doomed Java Applets and made desktop applications painful. WASM is trying a different approach: No UX. Maybe that's cheating (let somebody else solve it) or maybe that's a good idea. Time will tell.
Java's AWT, needed by applications and applets alike, is a mess. Some graphics operations are a tangle of callbacks and delegation that prevent supposedly atomic operations from being atomic. That means endless debugging for each permutation of runtime. On top of that, Java's AWT graphics are so excessively abstracted that simple raster operations frequently leave the JVM's 'fast path' and degrade into billions of method calls. Or maybe it looks awful because the JVM's fast path implementation is bad. This is, again, endless debugging for each permutation of runtime. (The Java 7 API is the oldest I can find. Imagine trying to get 1.0 working.)
The Netscape plugin API used by applets (and everything else) wasn't performant either, because display access wasn't thread-safe. You were supposed to inform Netscape that you wanted a display update, and it would then call you back to do it... sometime later. Or you could illegally draw directly from your own thread and hope that all the structures stayed in a valid state until you finished. That's why video playback was prone to crashing until it became a native feature.
That's an interesting question.
Asked a different way ...
1) Why isn't JavaScript (or whatever other client-side language) converted to Java bytecode?
2) Why isn't WASM just a "compatibility" tool to compile C down to JavaScript?
I think that the answers are
1) The JVM is too burdensome to deploy, update and/or run.
2) JavaScript isn't a suitable backend target for the programming languages being targeted.
In other words,
WASM is intended to be the right tool for the job.
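To make "the right tool for the job" concrete, here's a minimal sketch: a tiny WASM module, hand-assembled as raw bytes (the canonical two-integer `add` example from the WebAssembly binary format), instantiated directly from Node.js or a browser with no JVM-sized runtime anywhere in sight. In practice you'd compile from C, Rust, etc. rather than write bytes by hand; this just shows how small the VM's contract is.

```javascript
// A minimal hand-assembled WASM module exporting add(a, b).
// Bytes follow the WebAssembly binary format: magic/version header,
// then type, function, export, and code sections.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// No imports are passed, so the module gets nothing but pure compute.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {});
console.log(instance.exports.add(2, 3)); // 5
```

The whole "runtime" here is the engine already shipped in every browser and in Node, which is exactly the deployment burden the JVM never managed to shed.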
A virtual machine running byte code that started out as an application platform for the browser? That would be Java in the 90s.
If you want to go back to the 70s, a virtual machine, running byte code, used as an application platform? That would be UCSD Pascal.
Or the 60s, it would be IBM CICS on DOS/VS or MVS. Hell, CICS can even serve web pages nowadays and IBM z/Series will still run code from the 60s.
Doing any less just seems to create a pointlessly different environment for the sake of it. Do any more and you are dangerously close to reinventing Java or .NET for no other reason than "but in a browser".
The essentially single-threaded nature of the UI (which is why the browser exists in the first place), and the ephemeral nature of mobile browser apps in any case, put practical limits on how you might use your chunk of bytecode. Without defined methods of creating and communicating with background threads or processes, clear descriptions of how they survive (or don't) the closing of browser tabs or the backgrounding of mobile apps, and some statement of what context from the foreground they might reasonably access or inherit, it's going to struggle ever to rise above novelty status.
And when you do want an API, the WASI API is open and unencumbered by Oracle's licences.
They've also learnt from thirty years' experience of the JVM. In particular, it's a simpler, lower-level thing, where language features aren't tied into the VM. Upgrading should be less necessary, and cause fewer compatibility woes when it does happen. It really is a virtual CPU, not a philosophy.
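"A virtual CPU, not a philosophy" can be made concrete: WASM's core is just a stack machine, and the VM knows nothing about classes, objects, or any other source-language feature, unlike JVM bytecode. A toy evaluator for a wasm-flavoured instruction subset (an illustrative sketch, not a real decoder - real WASM is binary, not tuples) shows how little the machine has to understand:

```javascript
// Toy evaluator for a wasm-like stack machine (illustrative subset only).
// Each instruction pushes to / pops from the operand stack; nothing here
// is specific to any source language.
function run(program, args) {
  const stack = [];
  for (const [op, arg] of program) {
    switch (op) {
      case "local.get": stack.push(args[arg]); break;        // read a parameter
      case "i32.const": stack.push(arg | 0); break;          // push a constant
      case "i32.add":   stack.push((stack.pop() + stack.pop()) | 0); break;
      case "i32.mul":   stack.push((stack.pop() * stack.pop()) | 0); break;
      default: throw new Error(`unknown opcode ${op}`);
    }
  }
  return stack.pop();
}

// (a + b) * 10, expressed as stack-machine code
const program = [
  ["local.get", 0],
  ["local.get", 1],
  ["i32.add"],
  ["i32.const", 10],
  ["i32.mul"],
];
console.log(run(program, [3, 4])); // 70
```

Any compiler that can lower its language to pushes, pops and arithmetic can target this; the VM never needs to grow language features, which is precisely the upgrade-treadmill lesson from the JVM.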
TypeScript (including write/edit-time type checking within an IDE) is by far a higher-level language experience than writing in C++ or Rust, not least because writing while relying on a garbage collector is easier (and typically slower at run time).
Interestingly though, TS/JS garbage collection can and does use multiple threads under the hood, even as the JS user code runs on a single thread.
The purpose of WASM is to allow other languages (like C++ and Rust) to run in the browser environment.
I can understand, to some extent, the "wouldn't it be cool if...", but once you take account of the Ts&Cs and "serving suggestion" footnotes it appears to me to deliver very little actual benefit.
You're holding it wrong.
Now with WASI some people want to use WASM as a generic portable-executable framework, running in a sandbox. It is exactly a Java replacement, and like the JVM now supports other source languages.
What it isn't is a direct replacement for containers, and the waffle about it doing so is a category error (even if it's Mr Docker saying it). I am not particularly a fan of containers (a half-assed alternative to proper VMs), but having an entire segregated userspace is significantly different from just using an interpreted or JITted language in a sandbox.
"They've also learnt from thirty years' experience of the JVM. In particular, it's a simpler, lower-level thing, where language features aren't tied into the VM."
Now *that* remains to be seen. One big reason that the JVM directly encodes certain language features is to make it easier to sandbox. Java fell out of favour in the browser because it still wasn't (apparently) possible to sandbox reliably. Can WASM be sandboxed? We'll see. And if it can't then it's just another ActiveX.
> The purpose of WASM is to allow other languages (like C++ and Rust) to run in the browser environment.
And yet the article is all about everything EXCEPT running in a browser!
It is all about using a VM that you can run in your VMs, um, instead of using Docker? Hang on, why not just take an existing CPU instruction set (say, Arm) and an existing system-level API (say, Linux) and run those? What is so great about WASM in this respect? Oh, Arm is copyrighted and licensed? OK, base it on RISC-V! Oh, WASM is smaller (or less convoluted than all the RISC-V variants)? Well, just you wait until WASM ver 6, when we add the third set of GPU emulation opcodes!
You know, we are running so much WASM ver 8 code now, I bet we could run it direct on silicon, but we'd need to add a supervisor mode of course, and this bit so we can support a proper UEFI boot.
Yes, indeed, this is all about the browser.
RISC-V got autocorrected to "rose-coloured" - how lovely; glad it wasn't "rose-tinted"!
You know, the ones that let us recognise we can JIT this direct to an AMD GPU, because ver 5 only mapped to CUDA.
The great thing about articles of this kind is not so much the information they contain as the fact they're often a springboard to go off and read more about the subject.
Having done that, I think you've probably hit the nail on the head. It really is more about a universal bytecode for downloadable applications. The much-recycled quote "If WASM+WASI existed in 2008, we wouldn't have needed to create Docker" seems to be the clincher.
So why is it in the browser in the first place? The answer to that seems to be twofold: firstly it's an existing tool from which people can download things and secondly it has a display model and a runtime that allows (for example) network access that means you don't (at least initially) have to provide these from WASM even though the interface is hardly clean.
I don't think the principle of a universal bytecode is a bad idea (though the history of universal anythings is not promising), and there are things about WASI (capabilities) that seem like potentially good ideas. But it seems completely out of place in the browser - browsers are big and complicated enough already. Perhaps start with some other host program that can download and execute the code, which the browser can access over HTTP in the same way as it accesses everything else?
As others have said, this is just Java bytecode / VM technology (i.e. Dis/Limbo, JVM/Java, .NET/C#).
In some ways, the popularity of .NET is also because different languages could be used, including C++/clr, IronPython, F# and.... er VB.NET
Off-topic, also just found these old articles, quite charming about the early COOL/C#:
I'm somewhat surprised that WASM hasn't already become the main compilation target on most operating systems. Its advantages are obvious and numerous and could substantially reduce software development costs.
Certainly it will take off in the cloud, allowing people to "bring their own binaries".
Both of those were pretty successful before they fell out of fashion. Flash was the interactive internet for a good couple of years before JS took off. So you're saying you expect this tech to be pretty successful before it is inevitably superseded?
I think it's probably about as popular as it's going to get. The question for me is going to be whether its credibility survives its first serious documented security issue that hits the news. At that point I expect to see the usual knee jerk reactions of corporations banning it on their networks, while still relying on it for their corporate websites.
I know it took a while for Flash to die, and all three ActiveX users were quite upset when it was found to be about as secure as locking your bike with a string of sausages, but I just can't see how reinventing the wheel as a triangle again isn't going to have some of the same triangle-shaped issues.
For many programmers the web browser is the center of their universe, replacing an actual computer as the target platform, so it's natural to try to incorporate as many capabilities of a proper computing platform as possible into browsers. It's doomed because the browser is an ad hoc architecture: it's primarily a display mechanism used to interact with human users, and it's neither a particularly efficient nor a particularly secure one at that. Numerous extensions have been grafted onto browsers to patch up - paper over, more like - browser problems, but the result is always going to be less than satisfactory. The entire ecosystem is buoyed entirely by marketing; we have to put up with it because "There Is No Alternative", but it's really crying out for a complete reset, a 'burn it to the ground and start over'.
WASM is just another kludge. One that you put up with because that's how you earn your keep. But it has absolutely nothing novel to offer.
And I so hate all the stupid "technologies" that, over the years, wanted to bend the browser into looking like a desktop app, from Flash to ActiveX to Silverlight to... To OOB, a thing they came up with to run Silverlight out of browser.
So it wasn't in the browser anymore; it was an app looking 100% like a desktop app, had its own icon on the desktop, had an uninstall entry, but was still web-based?
WTF was web based about it anymore?
Now with WASM... So it runs in the browser but doesn't let you interact with any textbox or form button in the page, won't receive an onchange event from a drop-down, won't display a pixel and, more recently, you don't need a browser because you run it on the server.
Why is it called "W"ASM anymore?
We've been working with WASM on the server side for a few years now. It's impressive. Some thoughts:
- Startup is fast... much faster than other containers. Typically measured in milliseconds, even for payloads with tens of thousands of lines of code.
- Technically WASM is partially interpreted. The language can best be described as assembly for a generic stack machine. A series of transformations happens to match it up to the local architecture: first interpretation, followed by a series of compile steps that can include optimizations based on observation. This all happens very fast because it's so close to machine language. Most JS environments will also offer the ability to cache the fully optimized native code.
- Performance is within about 30% of native, fully optimized C++, and typically 100-1000x faster than the Excel files we start with. We have taken files with over 100M formulas that took 20 minutes to open on modern desktops and got them to run in seconds on single-core workers. (I'm constantly amazed at how large some Excels get... they represent dozens of person-years of expertise, handed down from generation to generation of analysts.)
- It's super easy to run - AWS Lambda, browsers, mainframes; you can even live-load it into iOS applications. It's probably one of the few ways you can change iOS app logic without republishing the app.
- The security story is solid - it's fully sandboxed - so if you don't pass in an API hook, it can only do pure compute. This makes it possible for architects to get (reasonably) comfortable running arbitrary compute packages inside even sensitive compute environments.
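The "no API hook, pure compute only" point above can be sketched end to end. Here's a hand-assembled module (bytes written out for illustration; a real toolchain would emit them) that imports a single host function, `env.dbl`. The host decides at instantiation time exactly which capabilities the module receives; withhold them and instantiation simply fails:

```javascript
// A WASM module that imports one host function ("env"."dbl") and
// exports run(x), which just calls dbl(x). Nothing the host doesn't
// hand over at instantiation time is reachable from inside.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type: (i32) -> i32
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,             // import "env"
        0x03, 0x64, 0x62, 0x6c, 0x00, 0x00,             //   ."dbl", func, type 0
  0x03, 0x02, 0x01, 0x00,                               // func 1 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run" = func 1
  0x0a, 0x08, 0x01, 0x06, 0x00,                         // code section, one body
  0x20, 0x00, 0x10, 0x00, 0x0b,                         // local.get 0, call 0, end
]);
const mod = new WebAssembly.Module(bytes);

// Grant exactly one capability: the host's implementation of "dbl".
const inst = new WebAssembly.Instance(mod, { env: { dbl: (x) => x * 2 } });
console.log(inst.exports.run(21)); // 42

// Withhold it, and the module can't even link, let alone reach the
// filesystem or network on its own.
try {
  new WebAssembly.Instance(mod, {});
} catch (e) {
  console.log("link error, as expected");
}
```

That deny-by-default linking model is what lets architects reason about arbitrary third-party compute packages: the attack surface is exactly the list of functions passed in.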
Anyway -- came across WASM about 5 years ago on a hike with a friend who mentioned it as an interesting curiosity at the time. And it's been a major technical component of our architecture for years now.
Why do people upvote anonymous snark? It's not like this is a clever comment, just typical techy snobbery.
Someone who's actually used the tech has something good to say about it.
We support the twat with no knowledge of the tech who's sniping from the sidelines.
I didn't vote on it, but do you really not understand why people might support what they said and why your views on their comment are similarly open to debate? For example, you say that they have "no knowledge of the tech". You don't cite any reason for saying that and it wouldn't matter because they didn't say enough about anything to be able to guess what their experiences are.
As for why someone might be negative about Excel, it's because it's a really inefficient method of coding which supports almost none of the useful features of programming languages, and if you rely on it, those differences will start to be noticed. The original comment that started our thread pointed this out: if you can take a spreadsheet that takes twenty minutes to load, and possibly longer to update, and turn it into a task that lasts seconds without having to optimize it beyond a generic compiled language, then Excel is not only showing significant inefficiencies such that a 99.59% speed boost was possible, but it's inefficiency that is clearly noticeable to the user. At that rate, you always do the 99% speedup just because it's there, but if it's 0.5 seconds to 5 ms, then it doesn't matter as much unless you're doing it frequently. When you're improving from minutes to seconds, it always matters. This is a real problem, and you've already read about it here, but you attribute negative views toward Excel to snobbery and insult the person who wrote it on claims you can't back up.
It's secure if... if every single major player on the web doesn't act in the same manner by which they live and breathe and track and sell everything they can about everyone. And certainly there are no known vulnerabilities that allow a VM to swipe information it shouldn't be able to from other VMs or the host system while running who-knows-what program in said VM, right? And even if there were, the average user obviously keeps their BIOS and drivers and everything else up to date, so naturally you won't be putting millions of ignorant normies at risk all for your dumb app, right?
Hang on, did you take a set of Excel sheets and convert them once, then keep using them for many moons?
Or are you actually taking new Excel sheets continually, then compiling them down into WASM? Which is certainly what it sounds like you are doing.
That is, is what you are working on a compiler for Excel that targets WASM? Because, if so, that is a different use-case from most of the other discussions here and, with the sole exception of the "changing the logic in an iOS program" case, it prompts all sorts of questions about "Why WASM?". Like: cool, the WASM runs in many places, but if your compiler used another backend it could be generating native code for each anyway (there's more than one method for that these days).
But this all seems even further removed from the browser; the W in WASM stands for - Wibble!
which I had understood to be a no-no, and a reason they blocked a slew of scripting languages from the App Store? Correct me, please; I'm not an iOS person, just a bystander.
Compilation typically takes 20-60 seconds... so not ideal for single usage. Primarily the use case is where an SME maintains the model and updates are released once every few weeks/months/years. From there, the benefit is when you need to scale it to many users in mobile apps or web sites, or attach it to a data pipeline/cloud DB for large analytics runs. Some of our customers are clocking hundreds or thousands of calls per second.
You can think of it as programming calculation logic in Excel - something many business people are more comfortable doing in Excel than in a programming language (because most people don't program). Also, for a number of scenarios, Excel is more maintainable than code. I know this will get some flames - but once you get beyond a few hundred logical functions/steps, code can be tough to debug. Excel shows every intermediate value, plus loop progression via tables. Not for everything, and there are bad Excels just like there is bad code, but I feel far more comfortable giving a 20-sheet Excel file with a few hundred thousand formulas in it to a new hire to figure out than the equivalent few hundred pages of JS, Python or whatever code.
We can compile native code, but then packaging and deployment become more complex - so a 30% drop in performance is a relatively minor price to pay. W.r.t. iOS: yes, the ability to update independently of the published app is a definite plus. WASM (along with JS) seems to have gotten a carve-out. I suspect it's because Apple feels comfortable enough with the security mechanics/sandbox in WebKit.
I think there's something missing here, what he maybe meant to say was:
If WASM+WASI existed in 2008, and we threw away all the code that had ever been written in the history of computing and rewrote it all in the new environment, we wouldn't have needed to create Docker.
All of them have compilers/VMs which target x86-64, ARM, Power, S/390
All of them have a stable POSIX API.
What is the advantage of using WASM instead of POSIX + source?
Is it the universal assembly code, transcending CPU types? I can see some benefits in that, at the steep price of not having proper pthreads, a proper TCP/IP API and so on.
Rust and C++ are based on native pointers so they'd be difficult to sandbox efficiently and securely. A JRE or even a Java bytecode JIT would be a big chunk to bundle inside a browser. The POSIX API is designed for C-like languages on UNIX-like operating systems running with full user privileges so it's not relevant.
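The native-pointer point is worth unpacking: in WASM, a C++ or Rust "pointer" compiles down to a plain integer offset into a linear memory that the host allocates, so every access can be bounds-checked and can never reach host memory. A sketch from the host side (no guest module needed to see the shape of it):

```javascript
// WASM linear memory is just a growable buffer the host allocates.
// Guest "pointers" are integer offsets into it, which is what makes
// pointer-based languages like C++ and Rust sandboxable here.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const view = new Uint8Array(memory.buffer);

console.log(memory.buffer.byteLength); // 65536

// The host sees exactly the same bytes the guest would:
view[0] = 42;
console.log(view[0]); // 42

// Inside the VM, an out-of-range access traps rather than touching
// anything outside the buffer; from JS, the view bounds-checks too.
console.log(view[70000]); // undefined (outside the single page)
```

POSIX, by contrast, assumes the program already shares an address space and privilege level with the OS, which is why it isn't the relevant comparison.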
You lock the Rust server into a Docker VM and have the same security as locking it inside a JS engine?
After all, being lightweight is the point of Docker?
(The article says that the Docker man is so impressed by WASM. But yeah, maybe he is just a talking head.)
I am referring to server side here.
First of all, before Java, Visual Basic was ported to UNIX, later Linux, and other minicomputer OSs by a licensed third party, which achieved “write once, run anywhere”. Because there was no IDE that ran natively on those OSs, it never caught on.
Second, Microsoft Blazor has pioneered the use of WASM and still does. C# in the browser and the backend has been around a few years, and works beautifully. .NET, starting with .NET 5, and open-source C#, in a short time delivered “write once, run anywhere”.
If you want to take advantage of WASM today, the best implementation is found in Blazor with C#. As far as WASM on the server, why? We have C# and Java for that, which already have great performance and broad access to everything from IoT to massively parallel servers.
I am at a loss as to why this article, otherwise quite good, has no mention of the positive and pioneering work Microsoft has done in bringing WASM into the mainstream.
WASI, okay, maybe, but not right now.