Azul lays claim to massive efficiency gains with remote compilation for Java

Azul, a provider of OpenJDK (Java runtime) builds, has introduced a "Cloud Native Compiler" which offers remote compilation of Java to native code, claiming it can reduce compute resources by up to 50 per cent. When a Java application runs, a JIT (Just-in-time) compiler, usually the OpenJDK JIT called HotSpot, compiles the …

  1. claimed
    Coffee/keyboard

    80-100 per cent faster than what you can do with static compilation

    Ahhhhaaaa haha ha ha ha ha ha ha!!!!

    Oh wow, ***wipes tear from eye***, thanks El Reg, that's cheered me right up.

    ***walks off, chuckling***

    What a bold claim

    1. Richard 12 Silver badge

      Re: 80-100 per cent faster than what you can do with static compilation

      Compared to what?

      Definitely not comparing with any native code, like C++ or Rust.

      All the popular C++ toolchains, and most other native toolchains, have offered profile-guided optimisation for a decade or more. It can gain you a few percentage points on your native code, but they've never claimed anywhere near that kind of improvement: 1-2% is usual, and 10% is about the best.

      Because the native compilers are already really pretty darn good.

      1. fg_swe

        Re: 80-100 per cent faster than what you can do with static compilation

        That was exactly the original poster's point.

      2. karlkarl Silver badge

        Re: 80-100 per cent faster than what you can do with static compilation

        I *think* they are trying to make out that static compilation can only target the lowest common denominator of architecture whereas the AOT/JIT approach can utilise processor features it finds at runtime.

        However, in practice this never wins: static compilation always comes out faster in the end. Not only can the compiler generate multiple code paths for a range of processors and features (selected by a trivial jmp), but the additional time that compilers and optimisers can spend *at compile time* more than makes up for it. Not to mention that 99% of the rest of the program tends to be better optimised when statically compiled.

        In principle it might have merit, but VMs and big tech stacks like Java, .NET and Alef/Limbo tend to disappear or get heavily refactored/reimplemented before they ever become competitive (or the heat death of the universe occurs, whichever is sooner).
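
        karlkarl's "multiple code paths plus a trivial jmp" idea can be sketched, purely for illustration, in Java (the feature probe below is a made-up stand-in for a real CPUID check; a static native compiler would do this with function multi-versioning):

        ```java
        // Illustrative sketch, not any real compiler's output: a static compiler
        // can emit several specialised code paths and pick one with a single
        // cheap branch resolved once at startup.
        public class MultiVersionDemo {
            // "Generic" path: works everywhere.
            static int sumGeneric(int[] a) {
                int s = 0;
                for (int x : a) s += x;
                return s;
            }

            // "Wide" path: stands in for a SIMD-specialised version; same result.
            static int sumWide(int[] a) {
                int s = 0;
                int i = 0;
                for (; i + 4 <= a.length; i += 4)
                    s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
                for (; i < a.length; i++) s += a[i];
                return s;
            }

            // Resolved once: this is the "trivial jmp" the comment mentions.
            static final boolean WIDE_OK = detectWideSupport();

            static boolean detectWideSupport() {
                // Placeholder for a CPU-feature probe; assumed supported here.
                return true;
            }

            static int sum(int[] a) {
                return WIDE_OK ? sumWide(a) : sumGeneric(a);
            }

            public static void main(String[] args) {
                int[] data = {1, 2, 3, 4, 5, 6, 7};
                System.out.println(sum(data));        // 28
                System.out.println(sumGeneric(data)); // 28
            }
        }
        ```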

      3. Nomis

        Re: 80-100 per cent faster than what you can do with static compilation

        Full disclosure, I work for Azul.

        To address the comment: it's not a comparison with compiled C++/Rust code, it's a comparison with statically compiled Java code.

        JIT compilation can take advantage of techniques that definitely lead to better performance. More aggressive method inlining is only possible if you know which classes are loaded at runtime (not possible with AOT). More important is the benefit of speculative optimisations that are not possible in AOT-compiled code. PGO helps, but it still can't deliver the most significant performance improvements.
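
        A minimal sketch of the kind of call site being described (class names invented for illustration): with only one implementation of the interface loaded, HotSpot's class hierarchy analysis can speculatively inline area() and drop the virtual dispatch, deoptimising if another implementation is ever loaded, whereas an AOT compiler must assume any subclass could exist:

        ```java
        // Illustrative only: names are hypothetical, not from Azul's product.
        interface Shape { double area(); }

        final class Square implements Shape {
            final double side;
            Square(double side) { this.side = side; }
            public double area() { return side * side; }
        }

        public class DevirtDemo {
            static double totalArea(Shape[] shapes) {
                double sum = 0;
                // Monomorphic in practice: the JIT can speculate the receiver
                // is always Square, guard that assumption, and deoptimise if a
                // new Shape implementation is later loaded.
                for (Shape s : shapes) sum += s.area();
                return sum;
            }

            public static void main(String[] args) {
                Shape[] shapes = { new Square(2), new Square(3) };
                System.out.println(totalArea(shapes)); // 13.0
            }
        }
        ```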

      4. Steve Channell
        Facepalm

        Re: 80-100 per cent faster than what you can do with Graal

        The comparison is not with {C++, C#, Rust, etc.} but with the current GraalVM; the premise is flawed.

        Graal compatibility is mentioned, but not that the specific area is RMI dynamic (remote) class loading; not supporting the log4j RCE is a feature, not a bug.

        Graal is hampered by Java's lack of a module system, but a dependency on dynamic loading is not an issue for containers (unless they use RMI)

        Java HotSpot is a profile-guided code generator, but it is only reliable if the first hundred-odd executions are representative; it can be significantly slower than Graal or JRockit (optimising for closed accounts)

        I'm sure Azul has (relatively) great technology, but remote runtime compilation exacerbates the RMI flaw that is driving people to ditch Java completely.

  2. Pascal Monett Silver badge

    "you're constrained by local machine resources"

    Um, if I'm not mistaken, local machines have grown in power to an impressive extent.

    The cheapest laptop today can run Office. Okay, not at blinding speeds, but have you tried Word 95 on a Pentium?

    Besides, a server is nothing but someone else's machine.

    Some people are really desperately trying to reinvent the mainframe.

    1. Nomis

      Re: "you're constrained by local machine resources"

      Full disclosure, I work for Azul.

      The point about local machine resources is targeted specifically at cloud-based microservice architectures. Utility pricing means it's cheaper to limit the resources allocated to a service instance as much as possible. If you allocate four cores to a service and one of them spends most of its time compiling code, you've reduced your possible throughput by 25%. Shift the compilation to a centralised service and you don't degrade throughput.

      You can also take advantage of shared compilation: for the same service you don't need to compile the code again, you just serve it out of your cache. The same applies every time you start the same code; you can eliminate a lot of the warmup time (and avoid costly deoptimisations) by centralising the compile service.
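
      On a stock HotSpot JVM, the cost being described (compiler threads competing with application threads for the same cores) can be observed with the standard java.lang.management.CompilationMXBean, which reports cumulative time spent in the JIT:

      ```java
      import java.lang.management.CompilationMXBean;
      import java.lang.management.ManagementFactory;

      // Probe the JVM's own accounting of JIT compilation time: in a small
      // container, this is time that competes with application threads for
      // the same cores.
      public class JitCostProbe {
          public static void main(String[] args) {
              CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
              // Do a little work so the JIT has something to compile.
              long acc = 0;
              for (int i = 0; i < 5_000_000; i++) acc += i % 7;
              if (jit != null && jit.isCompilationTimeMonitoringSupported()) {
                  System.out.println("JIT: " + jit.getName());
                  System.out.println("Total JIT time so far: "
                          + jit.getTotalCompilationTime() + " ms");
              }
              System.out.println("acc=" + acc); // keep the loop from being elided
          }
      }
      ```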

  3. YetAnotherJoeBlow Bronze badge

    hmm...

    Sending and receiving code at runtime - what a great idea.

  4. Kevin McMurtrie Silver badge

    Because every Engineer likes sparse documentation

    I found the documentation, and it's a relief that it's for your own cloud, not somebody else's. I'm still sceptical of the advantages. If you have a very large cluster of servers doing completely identical things all the time, I could see this helping, as long as your network is blazing fast. It's not documented how dynamic optimisation is handled as conditions vary over time and between clients. It's also not clear what compilation optimisations they're adding or how well tested they are. The resource requirements for the CNC nodes are massive, and I'm struggling to see how those costs are recouped on the client systems.

  5. Abominator

    Just give up and write it in C++ or Rust.

    1. fg_swe

      That will reduce RAM consumption by 50%, given equivalent algorithms.

  6. DaemonProcess

    +O9

    Well, I have seen large (300%) gains from optimisation: partly from profile-guided optimisation, and partly from letting the compiler go to the max and enabling linker optimisations too. Most developers go for the fastest compilation, put whatever works into test, and then nobody wants to change anything for production.

    Some optimisations can even effectively rewrite the source code, so that +3 followed by +4 gets turned into a single +7 machine instruction.
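
    The "+3 then +4 becomes +7" rewrite is ordinary constant folding/combining; a trivial Java illustration (both forms compute the same value):

    ```java
    // javac folds compile-time constants, and the JIT combines adjacent
    // constant additions on variables; the two methods below are equivalent.
    public class FoldDemo {
        static int addStepwise(int x) {
            x += 3;
            x += 4;   // an optimiser can merge this with the line above...
            return x;
        }

        static int addFolded(int x) {
            return x + 7;   // ...into this single addition.
        }

        public static void main(String[] args) {
            System.out.println(addStepwise(10)); // 17
            System.out.println(addFolded(10));   // 17
        }
    }
    ```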

    What cannot improve, though, is dependency on I/O. A single stream will not go any quicker, but you will probably be able to run more processes in parallel and have them all blocked on I/O... This I have also seen, though not for twenty years.

    So, yes, I can believe gains for non-I/O-bound CPU hogs, such as machine learning in memory, but not for much else.
