catoms
Video?
I've only got a small pic on http://www.intel.com/research/dpr.htm to go on, which is apparently more than you guys :-)
Intel sent sane journalists screaming for the exits this morning when it unveiled a nightmarish future vision where robots are more intelligent than humans, computers can change shape, electronic devices are recharged remotely, and humans are probably going to be ruled by an x86-based server farm. Wrapping up the Intel …
Scary enough. If you've read "The Age of Spiritual Machines" by Ray Kurzweil, you'll know he predicted a lot of this stuff. It was written in 1999. The most interesting thing he brought up was so-called "utility fog", called "catoms" here: a mass of micro-robots that could change shape to become whatever you needed it to become. One downside: they could give new meaning to "computer virus" if they were turned loose inside a human body with instructions to destroy everything in their path.
Kurzweil also wrote "The Age of Intelligent Machines" back in 1989, and many of his 10-year predictions had come to fruition on time. His prediction is that, following Moore's Law, computers will be smarter than humans by 2020, not 2058. And with the massive neural net beasts getting more and more sophisticated, and the focus on parallel processing starting to become more prevalent, 2020 seems more accurate.
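For a rough sense of where those dates come from, here is a back-of-the-envelope sketch in Python. The baseline compute figure, the two-year doubling period, and the brain-equivalent target are all illustrative assumptions, not numbers from the post above.

```python
# Back-of-the-envelope sketch of the Moore's Law argument in the comment above.
# All figures are illustrative assumptions: a ~1999 baseline of 1e12 ops/sec
# for a high-end machine, a doubling period of two years, and a
# brain-equivalent target of ~1e16 ops/sec (a commonly cited Kurzweil-style
# estimate). None of these numbers come from the original post.
import math

baseline_year = 1999
baseline_ops = 1e12          # ops/sec available to one machine in the baseline year
doubling_period_years = 2.0  # Moore's Law-style doubling assumption
brain_equivalent_ops = 1e16  # rough "human brain" compute estimate

doublings_needed = math.log2(brain_equivalent_ops / baseline_ops)
year_reached = baseline_year + doublings_needed * doubling_period_years
print(f"doublings needed: {doublings_needed:.1f}")
print(f"brain-equivalent raw compute reached around {year_reached:.0f}")
# With these assumptions the crossover lands in the mid-2020s; shifting the
# baseline or the doubling period slightly moves it years in either direction,
# which is why 2020-versus-2058 style predictions diverge so much.
```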
I still don't think we understand enough about what intelligence is on a fundamental level to start getting overly caught up in when machines will be more intelligent than us. If we don't really know what intelligence is, how will we know when they are more intelligent? A computer that is better at solving IQ tests than me may not have much empathic or emotional intelligence.
I can see that hardware is getting good, but I can't see any equivalent progression in AI, although I'm sure there may be subsequent posts that can point to it...
Also, why IDF? Surely this story should have been tagged as ROTM...
Jesus, what turds. Computers haven't gotten more intelligent since they were first built; they have certainly become more powerful as far as compute cycles go, but alas, they are no smarter. What "AI" researchers call intelligence is like me saying that my beagle has become more intelligent because I taught him to sit and stay.
I remember an STTNG episode in which Wesley does something naughty with nanobots, or nanites as they were called, which pushes the idea back well into the eighties (especially since for an idea to appear in Star Trek it has to be reasonably familiar, at least in the SF world). So unless the Intel researchers have made actual progress towards realising the idea, the whole thing's pretty ho hum.
By now, you likely know the story: Intel made major manufacturing missteps over the past several years, giving rivals like AMD a major advantage, and now the x86 giant is in the midst of an ambitious five-year plan to regain its chip-making mojo.
This week, at the 2022 IEEE Symposium on VLSI Technology and Circuits, which begins on Monday, Intel is expected to detail just how it's going to make chips in the near future that are faster, less costly, and more reliable from a manufacturing standpoint. The Register and other media outlets were given a sneak peek in a briefing last week.
The details surround Intel 4, the manufacturing node previously known as the chipmaker's 7nm process. Intel plans to use the node for products entering the market next year, which includes the compute tiles for the Meteor Lake CPUs for PCs and the Granite Rapids server chips.
Having successfully appealed Europe's €1.06bn ($1.2bn) antitrust fine, Intel now wants €593m ($623.5m) in interest charges.
In January, after years of contesting the fine, the x86 chip giant finally overturned the penalty, and was told it didn't have to pay up after all. The US tech titan isn't stopping there, however, and now says it is effectively seeking damages for being screwed around by Brussels.
According to official documents [PDF] published on Monday, Intel has gone to the EU General Court for "payment of compensation and consequential interest for the damage sustained because of the European Commission's refusal to pay Intel default interest."
Updated Intel has said its first discrete Arc desktop GPUs will, as planned, go on sale this month. But only in China.
The x86 giant's foray into discrete graphics processors has been difficult. Intel has baked 2D and 3D acceleration into its chipsets for years but watched as AMD and Nvidia swept the market with more powerful discrete GPU cards.
Intel announced it would offer discrete GPUs of its own in 2018 and promised shipments would start in 2020. But it was not until 2021 that Intel launched the Arc brand for its GPU efforts and promised discrete graphics silicon for desktops and laptops would appear in Q1 2022.
While Intel has bagged Nvidia as a marquee customer for its next-generation Xeon Scalable processor, the x86 giant has admitted that a broader rollout of the server chip has been delayed to later this year.
Sandra Rivera, Intel's datacenter boss, confirmed the delay of the Xeon processor, code-named Sapphire Rapids, in a Tuesday panel discussion at the BofA Securities 2022 Global Technology Conference. Earlier that day at the same event, Nvidia's CEO disclosed that the GPU giant would use Sapphire Rapids, and not AMD's upcoming Genoa chip, for its flagship DGX H100 system, a reversal from its last-generation machine.
Intel has been hyping up Sapphire Rapids as a next-generation Xeon CPU that will help the chipmaker become more competitive after falling behind AMD in technology over the past few years. In fact, Intel hopes it will beat AMD's next-generation Epyc chip, Genoa, to the market with industry-first support for new technologies such as DDR5, PCIe Gen 5 and Compute Express Link.
Patch Tuesday Microsoft claims to have finally fixed the Follina zero-day flaw in Windows as part of its June Patch Tuesday batch, which included security updates to address 55 vulnerabilities.
Follina, eventually acknowledged by Redmond in a security advisory last month, is the most significant of the bunch as it has already been exploited in the wild.
Criminals and snoops can abuse the remote code execution (RCE) bug, tracked as CVE-2022-30190, by crafting a file, such as a Word document, so that when opened it calls out to the Microsoft Windows Support Diagnostic Tool, which is then exploited to run malicious code, such as spyware and ransomware. Disabling macros in, say, Word won't stop this from happening.
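For illustration only, here is a minimal sketch of the kind of heuristic a defender might use to flag Follina-style documents. It assumes a .docx is a ZIP archive and that known CVE-2022-30190 samples reference the ms-msdt: URI scheme somewhere in the package; the marker list and function names are hypothetical, and a real scanner would need far more robust parsing and up-to-date indicators.

```python
# Hedged sketch: a heuristic scan of a .docx for Follina-style indicators.
# Assumptions (not from the article): a .docx is a ZIP archive, and known
# CVE-2022-30190 samples reference the "ms-msdt:" URI scheme, often via an
# external relationship inside the package. This is a toy illustration, not
# a substitute for Microsoft's patch or a real malware scanner.
import sys
import zipfile

SUSPICIOUS_MARKERS = (b"ms-msdt:", b"msdt.exe")  # hypothetical indicator list

def looks_like_follina(path: str) -> bool:
    """Return True if any part of the .docx package contains a known marker."""
    with zipfile.ZipFile(path) as doc:
        for name in doc.namelist():
            data = doc.read(name)
            if any(marker in data for marker in SUSPICIOUS_MARKERS):
                print(f"suspicious marker found in {name}")
                return True
    return False

if __name__ == "__main__":
    print(looks_like_follina(sys.argv[1]))
```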
RSA Conference Intel has released a reference design for a plug-in security card aimed at delivering improved network and security processing without requiring the additional rackspace a discrete appliance would need.
The NetSec Accelerator Reference Design [PDF] is effectively a fully functional x86 compute node delivered as a PCIe card that can be fitted into an existing server. It combines an Intel Atom processor, Intel Ethernet E810 network interface, and up to 32GB of memory to offload network security functions.
According to Intel, the new reference design is intended to enable a secure access service edge (SASE) model, a combination of software-defined security and wide-area network (WAN) functions implemented as a cloud-native service.
Analysis For all the pomp and circumstance surrounding Apple's move to homegrown silicon for Macs, the tech giant has admitted that the new M2 chip isn't quite the slam dunk that its predecessor was when compared to the latest from Apple's former CPU supplier, Intel.
During its WWDC 2022 keynote Monday, Apple focused its high-level sales pitch for the M2 on claims that the chip is much more power efficient than Intel's latest laptop CPUs. But while doing so, the iPhone maker admitted that Intel has it beat, at least for now, when it comes to CPU performance.
Apple laid this out clearly during the presentation when Johny Srouji, Apple's senior vice president of hardware technologies, said the M2's eight-core CPU will provide 87 percent of the peak performance of Intel's 12-core Core i7-1260P while using just a quarter of the rival chip's power.
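Taken at face value, those two figures imply a sizeable efficiency gap. A quick check of the arithmetic, using only the numbers quoted in the keynote:

```python
# Quick arithmetic on the figures quoted in Apple's keynote: 87 percent of the
# Core i7-1260P's peak CPU performance at roughly a quarter of its power.
relative_performance = 0.87  # M2 vs Core i7-1260P, per Apple's claim
relative_power = 0.25        # M2 power draw as a fraction of the Intel part's

perf_per_watt_ratio = relative_performance / relative_power
print(f"Implied performance-per-watt advantage: ~{perf_per_watt_ratio:.1f}x")
# ~3.5x on Apple's own numbers, while still conceding the absolute
# peak-performance crown to the Intel chip.
```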
AMD's processors have come out on top in terms of cloud CPU performance across AWS, Microsoft Azure, and Google Cloud Platform, according to a recently published study.
The multi-core x86-64 microprocessors Milan and Rome beat Intel Cascade Lake and Ice Lake instances in performance tests across the three most popular cloud providers, research from database company CockroachDB found.
Using the CoreMark version 1.0 benchmark – which can be limited to run on a single vCPU or execute workloads on multiple vCPUs – the researchers showed AMD's Milan processors outperformed those of Intel in many cases, and at worst statistically tied with Intel's latest-gen Ice Lake processors across both the OLTP and CPU benchmarks.
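As an illustration of what "statistically tied" can mean in this kind of comparison, here is a minimal sketch using Welch's t-test on two sets of CoreMark scores. The score lists below are hypothetical stand-ins, not data from the CockroachDB study.

```python
# Minimal sketch of a "statistically tied" check between two sets of benchmark
# scores using Welch's t-test. The score lists are hypothetical stand-ins, not
# data from the CockroachDB study.
from statistics import mean
from scipy.stats import ttest_ind

milan_scores = [30250, 30510, 29980, 30400, 30120]     # hypothetical CoreMark results
ice_lake_scores = [30100, 30350, 29900, 30280, 30050]  # hypothetical CoreMark results

t_stat, p_value = ttest_ind(milan_scores, ice_lake_scores, equal_var=False)
print(f"means: {mean(milan_scores):.0f} vs {mean(ice_lake_scores):.0f}")
print(f"Welch's t-test p-value: {p_value:.3f}")
if p_value > 0.05:
    print("difference is not statistically significant at the 5% level (a tie)")
else:
    print("difference is statistically significant")
```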
The European Commission's competition enforcer is being handed another defeat, with the EU General Court nullifying a $1.04 billion (€997 million) antitrust fine against Qualcomm.
The decision to reverse the fine is directed at the body's competition team, headed by Danish politico Margrethe Vestager, which the General Court said made "a number of procedural irregularities [which] affected Qualcomm's rights of defense and invalidate the Commission's analysis" of Qualcomm's conduct.
At issue in the original case was a series of payments Qualcomm made to Apple between 2011 and 2016, which the competition enforcer had claimed were made in order to guarantee the iPhone maker exclusively used Qualcomm chips.
The Linux Foundation wants to make data processing units (DPUs) easier to deploy, with the launch of the Open Programmable Infrastructure (OPI) project this week.
The program has already garnered support from several leading chipmakers, systems builders, and software vendors – Nvidia, Intel, Marvell, F5, Keysight, Dell Tech, and Red Hat to name a few – and promises to build an open ecosystem of common software frameworks that can run on any DPU or smartNIC.
SmartNICs, DPUs, IPUs – whatever you prefer to call them – have been used in cloud and hyperscale datacenters for years now. The devices typically feature onboard networking in a PCIe card form factor and are designed to offload and accelerate I/O-intensive processes and virtualization functions that would otherwise consume valuable host CPU resources.