Python, for everyday scripting of everything.
Lua for embedding in C + C++.
My jaw drops at running JS on the server - Node and all that. Just because the JS JIT runs fast does not mean you should use JS outside of the browser.
I've never seen a Node setup which could not be done better, faster and safer with Elixir/Erlang OTP.
This summer I spent a full month re-learning JS (I'm old). You may not be aware that many of the bad things in JS have been corrected or superseded by updates, particularly ES6. A lot of new/old programming capabilities have appeared too, and transpiling allows anyone to use modern JS without having to manually refactor for old browsers. There are even plans to introduce things like 'await' to the language, if you can believe it. So, over time your objections (though valid) are becoming less and less relevant.
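To put a face on "modern JS", here is a quick sketch of a few ES6-era additions (illustrative only, nothing exhaustive):

    // A handful of ES6 additions: const/let, arrow functions, template
    // literals, default parameters, classes, and destructuring.
    const greet = (name = 'world') => `hello, ${name}`;

    class Greeter {
      constructor(name) { this.name = name; }
      greet() { return greet(this.name); }
    }

    const [first, ...rest] = ['a', 'b', 'c'];  // destructuring with rest
    console.log(new Greeter('JS').greet());    // "hello, JS"
    console.log(first, rest);                  // "a" ["b", "c"]
    // A transpiler such as Babel can rewrite all of this into ES5 for old browsers.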
I'm not saying JS should be in servers because I'm not competent to judge. But Node is, as far as I can tell, not really meant as a server language but as an ecosystem for open source dev tools. For that usage it seems to be more than adequate. Node also has the huge advantage of using a language known to millions, albeit many with only superficial understanding. It lowers the bar enough that almost anyone can contribute, which has caused an embarrassment of riches regarding tool selection, oy.
Finally, it ain't going away soon, due to browser inertia. That's why so much effort is spent on improving it. I'm betting in a few more years you won't see that much difference between JS and Python.
I'd guess you have never actually given it a good go. Most people who hold this opinion have always held this opinion (or heard it somewhere) and maybe hacked around with it in the past while trying to treat it like their primary language. Once you figure out which parts are "the good parts" and which parts to stay away from, it is a really amazing, or at least interesting, expressive language.
"ES2016 formalised that into "promises""
Yeah whatever, more nomenclature buzzword bollocks lingo.
"we added language mechanisms that let a programmer call a function and asynchronously delay or wait for that function to complete"
So it's basically threading in disguise. *yawn*
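For reference, here is roughly what the quoted mechanism looks like in use (a minimal sketch; the sleep helper is a made-up stand-in for any asynchronous operation):

    // await suspends only this one function; the caller never blocks,
    // and the function resumes later on the same thread.
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    async function fetchReport() {
      console.log('started');
      await sleep(100);             // "asynchronously delay"
      console.log('finished');      // resumes here, same thread
      return 42;
    }

    fetchReport().then((n) => console.log('result:', n));
    console.log('prints before "finished"'); // proof the caller never blocked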
"The other big feature is shared memory."
Wow, cutting edge stuff! In 1980.
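For context, the shared memory being described is SharedArrayBuffer plus Atomics, visible to the main thread and workers alike. A minimal Node worker_threads sketch (the same objects exist in browsers with Web Workers):

    // One Int32 counter in memory shared between two threads.
    const { Worker } = require('worker_threads');

    const shared = new SharedArrayBuffer(4);   // 4 bytes = one Int32
    const counter = new Int32Array(shared);

    const worker = new Worker(`
      const { workerData } = require('worker_threads');
      const counter = new Int32Array(workerData);
      for (let i = 0; i < 1000; i++) Atomics.add(counter, 0, 1); // atomic add
    `, { eval: true, workerData: shared });

    worker.on('exit', () => {
      console.log(Atomics.load(counter, 0));   // 1000: both threads saw the same bytes
    });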
"It has evolved into the general-purpose programming language that advocates originally envisioned,"
You forgot its main use is breaking web sites - "web devs" - you do not need JS to display text or process a link click; your site should do that even without JS enabled.
And there's the extra delay of loading all those huge frameworks just to do a bit of DOM twiddling, and the (high) chance of some broken JS that hogs memory and cripples performance.
There is very little need for lots of web site JS.
And with JS enabled, there's always the lottery of seeing what malware some random ad's JS is trying to chuck at you.
"So it's basically threading in disguise."
No, not really.
Premises are Good Old Stuff: Promises.
Oh well, Java 9 is out, time to go steady.
"No, not really."
Yes really. If a program has more than one simultaneous execution path it's either threads or processes, take your pick. The end result ultimately is the same for the programmer. Sure, "promises" might mean you don't have to fiddle about with mutexes and whatnot, but other than that there's no difference. Same wine, different label.
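For what it's worth, the mutex point is the one observable difference, and you can test it (a sketch you can paste into Node or a browser console): two "simultaneous" tasks hammer one counter and the total still comes out exact, because the event loop runs exactly one callback at a time on a single thread.

    // With OS threads this unsynchronised counter would need a mutex;
    // with promises the callbacks interleave one at a time, so the
    // total is always exactly 200000.
    let counter = 0;

    async function bump(label) {
      for (let i = 0; i < 100000; i++) {
        counter++;                                    // never torn mid-increment
        if (i % 10000 === 0) await Promise.resolve(); // yield to the other task
      }
      console.log(label, 'done');
    }

    Promise.all([bump('A'), bump('B')]).then(() => console.log('total:', counter));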
"Writing a web app in C++ is just stupid if you ask me"
Depends what's going on underneath. The client-facing browser side is just the icing on the cake; it gets all the attention but there's usually a whole lot more going on below. If you need some fast DB access and industrial-strength data processing then there's nothing wrong with using C++, especially since you don't have the VM overhead and memory hogging of Java or C#.
I'm posting this anonymously as I still work as a web dev and have for over 12 years.
I think JS is perceived as "cool" especially to a lot of younger developers. And it's started to make its way on to servers with things like Node, so also appeals to wannabe-sysadmin types who have made an alert on a web page before.
For me it just seems inappropriate for about 90% of the things it's used for. Or, there are better ways.
I believe the future is C/C++ in browser natively.
Regardless, 6 months ago I dove in using the latest node.js, enabling all beta features and all the latest browsers; I wanted the full effect regardless of compatibility. I used zero libraries in the browser and only 2 core libs for node. Here's a short take on what you're missing...
By core libs, I mean ones with no dependencies on other libraries. Node libraries have HUGE dependencies on other libs if you don't watch out (look up the lib "Express", it's bloooooated). To be honest, node.js isn't bad as a server language once you stop using crazy libs like Express, but the practice of intentionally crashing without any trusted way of validating input prior to the crash becomes stunting.
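To be fair to the zero-dependency approach, it really is workable for small services; a minimal sketch using nothing but the core http module (route and port invented for illustration):

    // A dependency-free HTTP server: core http module only, no Express,
    // no node_modules.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.method === 'GET' && req.url === '/health') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ ok: true }));
      } else {
        res.writeHead(404, { 'Content-Type': 'text/plain' });
        res.end('not found');
      }
    }).listen(8080, () => console.log('listening on 8080'));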
The hype is justified insofar as the ease of creating interactive widgets makes it tinkery fun. The real drawback is memory usage. I don't care if it's serial, parallel, interstellar or orgasmic... the memory usage of even a simple waveform analyzer after render is large. It all apparently comes down to the browsers' garbage collection, which more often than not runs just-in-time, so memory consumption just seems to linger regardless of what JS methods you use on the graphical objects (drag 'n' drop is OK, just don't drag and drop graphical collections and expect even decent memory usage).
"I think JS is perceived as "cool" especially to a lot of younger developers." As it has a very low bar to use on the browser and now on the backend, it is easy to get involved with. Like back last century for me, for enhancing user interactions with lists. (woo)
"... and just seems really obsessed with it." That enabling immediate response to users can be rewarding and addictive for devs should be obvious. I'm surprised you haven't noticed.
"For me it just seems inappropriate for about 90% of the things it's used for. Or, there are better ways." Mmm, okay John Henry, don't have a heart attack.
"As it has a very low bar to use on the browser...."
Right there is the problem! jquery users who have no real grounding in memory management. Let us generalize that "low bar".
Have you ever noticed how anything graphically intense rendered by the browser isn't running in js? The bar isn't low, it might be as high as it gets in pure js. In fact, js performs so terribly slowly (DOM reflow/repaint is a horrible design, and no, I sadly don't have a better solution) that they had to invent "workers" to make any real interaction even tolerable. So, maybe the bar isn't low or high, maybe it just doesn't exist.
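To illustrate what those "workers" buy you, a browser sketch (the busy-work loop is just a stand-in for real computation):

    // Heavy number-crunching moved off the main thread, so DOM
    // reflow/repaint stays responsive while it grinds.
    const src = `
      self.onmessage = (e) => {
        let sum = 0;
        for (let i = 0; i < e.data; i++) sum += Math.sqrt(i); // busy work
        self.postMessage(sum);
      };
    `;
    const worker = new Worker(
      URL.createObjectURL(new Blob([src], { type: 'application/javascript' }))
    );

    worker.onmessage = (e) => console.log('result:', e.data);
    worker.postMessage(1e8); // the page never janks while this runs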
"That enabling immediate response to users..."
Again, a jquery comment. The problem is you're just bloating the browser's references for what... drop-down menus? The heart of interactive js can't be done with jquery (even jquery devs understand that, regardless of the newer native methods). Once you try to do actual interactive designs in js you will understand why strict subsets like asm.js have arisen (and will undoubtedly be replaced by Mozilla's push).
"And it's started to make its way on to servers with things like Node"
Until something else comes along to keep the hyperactive scriptlers with learning challenges busy and the managers on "muh scale-out, scale-up asynchronous web-scale" hype-drenaline.
Meanwhile, normal people will do stuff that actually works.
Using threads, for one.
Unfortunately this fashion for making even static pages into 'web apps' is essentially born of crashing ignorance. For example, today I found an online shop which declared that, in the interest of security, "our shopping cart runs entirely on the client, so there's no server to hack". Where do they think the shopping cart code resides at rest? What happens if that repository is contaminated by malicious actors?
Until software development has been raised to at least the minimum standard of a professional engineering discipline we remain at the mercy of fools and ignoramuses. Dunning and Kruger rule!