Lore! They're assembling you again?
-- Data
You may have heard about Intel's Loihi neuromorphic chips, which mimic the way brains work, but what hasn't been clear is how the chipmaker will make money from the experimental silicon. In a recent roundtable with journalists, Intel Labs lead Rich Uhlig offered two possibilities: integrating Loihi in a CPU for PCs to …
Debatable claim. How parts of brains work is known in outline: visual and speech processing, for instance, where specialised sections recognise lines, colours, etc. Fine. Memory? Partly, as there are several types of human brain memory. As for how our brains work as a whole, I thought that was still a long way off. We know the functions of some brain parts well, but how the whole works? Still worth working on, though, as the classic von Neumann architectures seem to be approaching endgame.
Looking forward to having voice command and general speech recognition that does NOT transmit what I say (or room background noise) to "the cloud" but instead keeps it on some device that I own and have control over.
I'd also like to see this as a USB HID device with hotkey support and the like. It could even be a new class of HID device (not a keyboard, mouse, or touch screen) with capabilities that let you input hotkey combinations or "run this application" style commands.
All good, I think! And, NO "out to the cloud" with what I'm saying or what background noises are in the room at the moment...
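For the curious, that kind of fully local voice control is already doable with an offline recognizer. Below is a minimal sketch using the open-source Vosk library and the sounddevice package – our choice for illustration, nothing Intel has announced – which keeps all audio on the machine and maps a couple of hypothetical phrases to hotkey-style commands.

    import json
    import queue
    import subprocess

    import sounddevice as sd
    from vosk import Model, KaldiRecognizer

    SAMPLE_RATE = 16000
    COMMANDS = {                          # hypothetical phrase-to-action table
        "open terminal": ["xterm"],
        "lock screen": ["loginctl", "lock-session"],
    }

    audio_q = queue.Queue()

    def capture(indata, frames, time, status):
        # sounddevice callback: push raw PCM blocks onto a queue
        audio_q.put(bytes(indata))

    def main():
        model = Model("model")            # local Vosk model directory, downloaded once
        recognizer = KaldiRecognizer(model, SAMPLE_RATE)
        with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=8000,
                               dtype="int16", channels=1, callback=capture):
            print("Listening locally; no audio leaves this machine.")
            while True:
                if recognizer.AcceptWaveform(audio_q.get()):
                    text = json.loads(recognizer.Result()).get("text", "")
                    if text in COMMANDS:
                        subprocess.run(COMMANDS[text])   # act like a hotkey press

    if __name__ == "__main__":
        main()

Wiring the same loop up as a dedicated USB HID gadget rather than a desktop script is left as an exercise; the point is that recognition and command dispatch can both happen on hardware you own.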
I thought that our brains were analogue, yet I cannot see any comparators, multipliers, summers, or any other analogue-type devices here – is there even a single op-amp? So please stop calling it "like our brain". Try "a digital representation of what our brain might look like" instead.
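As it happens, chips like Loihi don't try to be analogue at all: they model spiking neurons digitally, typically as leaky integrate-and-fire units stepped in discrete time. A rough sketch of that idea – with arbitrary constants of our own choosing, not values taken from Intel's design – looks like this:

    def simulate_lif(input_current, leak=0.9, threshold=1.0):
        # Discrete-time leaky integrate-and-fire neuron: a digital stand-in for
        # an analogue membrane potential. Constants here are arbitrary examples.
        v = 0.0                     # membrane potential
        spikes = []
        for t, i_in in enumerate(input_current):
            v = leak * v + i_in     # leaky integration replaces the analogue RC decay
            if v >= threshold:      # threshold crossing plays the role of a comparator
                spikes.append(t)
                v = 0.0             # reset after the spike
        return spikes

    print(simulate_lif([0.3] * 20))   # constant drive produces regular spiking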
... shut a computer with neural chips off, did I just commit murder? If I run Windows on it, am I committing mental abuse? If I sell it, am I engaged in trafficking in sentient beings?
And finally, if I impress female engrams upon the chip, will it be surly one week every month, and illogical the other 3 weeks?
Qualcomm knows that if it wants developers to build and optimize AI applications across its portfolio of silicon, the Snapdragon giant needs to make the experience simpler and, ideally, better than what its rivals have been cooking up in the software stack department.
That's why on Wednesday the fabless chip designer introduced what it's calling the Qualcomm AI Stack, which aims to, among other things, let developers take AI models they've developed for one device type, let's say smartphones, and easily adapt them for another, like PCs. This stack is only for devices powered by Qualcomm's system-on-chips, be they in laptops, cellphones, car entertainment, or something else.
While Qualcomm is best known for its mobile Arm-based Snapdragon chips that power many Android phones, the chip house is hoping to grow into other markets, such as personal computers, the Internet of Things, and automotive. This expansion means Qualcomm is competing with the likes of Apple, Intel, Nvidia, AMD, and others, on a much larger battlefield.
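To picture what that cross-device portability claim means in practice, here's a minimal sketch that exports a trained model once to a portable format and leaves per-target compilation to vendor toolchains. We've used PyTorch and ONNX purely for illustration – Qualcomm's actual stack ships its own converters and runtimes, and the model and target names below are made up.

    import torch
    import torch.nn as nn

    # Stand-in model; in practice this is whatever network the developer trained.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
    example_input = torch.randn(1, 128)

    # Export one portable artifact from the training framework...
    torch.onnx.export(model, example_input, "classifier.onnx",
                      input_names=["input"], output_names=["logits"],
                      opset_version=13)

    # ...then hand it to each target's toolchain for compilation and quantization.
    # The target names are hypothetical.
    for target in ("smartphone-npu", "laptop", "automotive-soc"):
        print(f"compile classifier.onnx for {target} with the vendor toolchain")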
Lenovo has unveiled a small desktop workstation in a new physical format that's smaller than previous compact designs, but which it claims still delivers the performance professional users require.
Available from the end of this month, the ThinkStation P360 Ultra comes in a chassis that is less than 4 liters in total volume, but packs in 12th Gen Intel Core processors – that's the latest Alder Lake generation with up to 16 cores, but not the Xeon chips that we would expect to see in a workstation – and an Nvidia RTX A5000 GPU.
Other specifications include up to 128GB of DDR5 memory, two PCIe 4.0 slots, up to 8TB of storage using plug-in M.2 cards, plus dual Ethernet and Thunderbolt 4 ports, and support for up to eight displays, the latter of which will please many professional users. Pricing is expected to start at $1,299 in the US.
Updated Intel has said its first discrete Arc desktop GPUs will, as planned, go on sale this month. But only in China.
The x86 giant's foray into discrete graphics processors has been difficult. Intel has baked 2D and 3D acceleration into its chipsets for years but watched as AMD and Nvidia swept the market with more powerful discrete GPU cards.
Intel announced it would offer discrete GPUs of its own in 2018 and promised shipments would start in 2020. But it was not until 2021 that Intel launched the Arc brand for its GPU efforts and promised discrete graphics silicon for desktops and laptops would appear in Q1 2022.
Analysis Supermicro launched a wave of edge appliances using Intel's newly refreshed Xeon-D processors last week. The launch itself was nothing to write home about, but a thought occurred: with all the hype surrounding the outer reaches of computing that we call the edge, you'd think there would be more competition from chipmakers in this arena.
So where are all the AMD and Arm-based edge appliances?
A glance through the catalogs of the major OEMs – Dell, HPE, Lenovo, Inspur, Supermicro – returned plenty of results for AMD servers, but few, if any, validated for edge deployments. In fact, Supermicro was the only one of the five vendors that even offered an AMD-based edge appliance – which used an ageing Epyc processor. Hardly a great showing from AMD. Meanwhile, just one appliance from Inspur used an Arm-based chip from Nvidia.
In yet another sign of how fortunes have changed in the semiconductor industry, Taiwanese foundry giant TSMC is expected to surpass Intel in quarterly revenue for the first time.
Wall Street analysts estimate TSMC will grow second-quarter revenue 43 percent quarter-over-quarter to $18.1 billion. Intel, on the other hand, is expected to see sales decline 2 percent sequentially to $17.98 billion in the same period, according to estimates collected by Yahoo Finance.
The potential for TSMC to surpass Intel in quarterly revenue is indicative of how demand has grown for contract chip manufacturing, fueled by companies like Qualcomm, Nvidia, AMD, and Apple who design their own chips and outsource manufacturing to foundries like TSMC.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
It's quite an ambitious goal, given Nvidia's dominance in the space with its GPUs and the CUDA programming model, plus the increasing competition from Intel and several other companies.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.
Intel has found a new way to voice its displeasure over Congress' inability to pass $52 billion in subsidies to expand US semiconductor manufacturing: withholding a planned groundbreaking ceremony for its $20 billion fab mega-site in Ohio that stands to benefit from the federal funding.
The Wall Street Journal reported that Intel was tentatively scheduled to hold a groundbreaking ceremony for the Ohio manufacturing site with state and federal bigwigs on July 22. But, in an email seen by the newspaper, the x86 giant told officials Wednesday it was indefinitely delaying the festivities "due in part to uncertainty around" the stalled Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act.
That proposed law authorizes the aforementioned subsidies for Intel and others, and so its delay is holding back funding for the chipmakers.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
In brief US hardware startup Cerebras claims to have trained the largest AI model yet on a single device, one powered by its Wafer Scale Engine 2 – the world's largest chip, roughly the size of a plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores and has 40GB of on-chip memory capable of reaching 20 PB/sec of memory bandwidth. The specs of other AI accelerators and GPUs pale in comparison, meaning machine learning engineers using them have to spread the training of huge AI models with billions of parameters across many more servers.
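A bit of back-of-the-envelope arithmetic shows why roughly 20 billion parameters lines up with 40GB of on-chip memory: at 16-bit precision, every billion parameters needs about 2GB just for the weights. The sketch below is our own arithmetic, not Cerebras's published methodology, and it ignores activations, gradients, and optimizer state.

    def weight_memory_gb(params, bytes_per_param=2):
        # Memory needed just to hold the weights, in gigabytes (10^9 bytes).
        return params * bytes_per_param / 1e9

    for billions in (1.3, 6.7, 13, 20):
        print(f"{billions:>4.1f}B params -> {weight_memory_gb(billions * 1e9):5.1f} GB at 16-bit precision")
    # 20B params x 2 bytes is about 40 GB – roughly the CS-2's on-chip memory.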
Having successfully appealed Europe's €1.06bn ($1.2bn) antitrust fine, Intel now wants €593m ($623.5m) in interest charges.
In January, after years of contesting the fine, the x86 chip giant finally overturned the penalty, and was told it didn't have to pay up after all. The US tech titan isn't stopping there, however, and now says it is effectively seeking damages for being screwed around by Brussels.
According to official documents [PDF] published on Monday, Intel has gone to the EU General Court for "payment of compensation and consequential interest for the damage sustained because of the European Commission's refusal to pay Intel default interest."