Sweet!
Now, how soon before these can show up in real products?
(Please don't say five years, that's a death sentence!)
Christmas is a time of miracles, to paraphrase Hans Gruber, and US researchers claim to have pulled off one in silicon photonics: they say they've mainlined super-fast optic communications into a RISC CPU using cheap, bog-standard manufacturing techniques. Chips using light to shunt data around a processor and to and from its …
An early prediction for you
1) They will patent the hell out of everything in sight
2) They will charge an arm and a leg to license it
3) It will die a death as other companies take the ideas and make them work in ways that circumvent the patents
4) Go to 1; rinse and repeat.
As for it ever appearing in a product? Somehow I doubt that it will appear in anything outside datacentres and cloud factories.
"Somehow I doubt that it will appear in anything outside datacentres and cloud factories."
You're most likely correct. x86 still has a stranglehold on software development for the desktop, Arm for mobile. Other architectures generally only appear in embedded or niche markets. Even Intel don't really build x86 any more; they decode the instruction set into internal micro-ops on current CPUs. It does make one wonder how fast Windows or Linux would be if that translation layer was stripped out and the CPU could be coded for natively.
After all, look at how fast bloat-free assembly language OS GUIs can be.
The problem is getting the coders to switch. People went to Apple's iPhone and iPad because of the financial incentives. It wasn't a great leap from those (and Linux) to Android. But look what happened when MS tried a couple of times over the years to get 3rd-party support for Windows on non-x86 CPUs.
An early prediction for you: 1) They will patent the hell out of everything in sight
As they should. This is a novel, innovative manufacturing process.
2) They will charge an arm and a leg to license it
More likely, they will charge what people are willing to pay, or keep it back for use in their own products. What's the point of spinning businesses out of a uni unless they are going to make money from it? The most obvious way to make money would be to license the technique, and to make money from that they need to charge what people will pay.
These are not patent trolls, they are universities.
3) It will die a death as other companies take the ideas and make them work in ways that circumvent the patents
Fair enough. If they can achieve the result they want without using the specific technique described in the patent, there's no problem. Patents only (or should only) protect the particular innovation you came up with, not everything vaguely similar (*cough* Apple *cough*).
"The problem is getting the coders to switch"
Yeah, here's the funny thing - the way I'm reading this, it's a bog-bog-bog standard CPU that uses photons as I/O only. It still might make a difference small or huge, but it ISN'T processing anything with light - only interfacing with it...
Actually it's even less than that (which may be a very clever idea).
Figure 3 in the linked article shows both chips (processor and memory) modulating laser light from an external laser.
While this is not how previous efforts have tried to do it, in a way it makes perfect sense: treating light as a resource to be piped in from an optimized external source.
I was reminded of the bug found in the US embassy in Moscow in the 1940s. Being a passive acousto-mechanical-electromagnetic device, all the failure-prone elements (the RF transceiver) were outside the building.
The diagram also shows the processor as a "RISC-V", so I'm not sure if this is another shot at trying to get MIPS (or whoever owns them today) back into the server market.
Well, you don't need to drive the off-chip signal lines with actual electric power, so it's both faster and more energy-efficient. Plus you can put the signal lines closer together (no worry about capacitance and cross talk) and maybe even go through-air (signalling from anywhere inside the CPU - not only from the edge of the CPU - to another CPU on top of it for example). We will see what comes out of this.
Google "Orac3". A guy in Australia made his own interpretation of Orac. Look for results on bit-tech.net.
It is a very long article, which is well worth it if you like reading about invention and perseverance.
Plenty of pictures if you want to skip bits. It is a truly novel and interesting computer.
cheers
Pete
> Was there recent trouble with cop cars in L.A.?
Doesn't it mean that Microsoft can start writing software that is bug-proof? Free at last! Free at last! Thank God almighty, we are free at last.
I am looking forward to an article on the subject. Let me see: who do we choose?
“Choose well. Your choice is brief, and yet endless.”
Next: Fiber optics to interconnect ICs.
Little grooves on the package align the interchip fibers, bringing them close to the on-chip photonics behind a transparent window. A hinged cover then snaps down to hold them in place.
Replace overly complicated PCBs with simpler PCBs and a handful of fibers.
Better, Faster, Cheaper.
Merry Xmas.
Next: Fibre optics to interconnect ICs.
Little grooves on the package align the interchip fibres, bringing them close to the on-chip photonics behind a transparent window. A hinged cover then snaps down to hold them in place.
Replace overly complicated PCBs with simpler PCBs and a handful of fibres.
Better, Faster, Cheaper.
Merry Christmas.
The UK.
"...run... ...patent office..."
My post may be used as evidence that the fibre-to-chip grooved slot concept described is clearly obvious to anyone 'skilled in the art' (or even unskilled, like me); and is therefore clearly not a patentable idea.
Such ideas are a dime a dozen.
Maybe somebody can patent the optimum hinge design for the snap down cover...
Happy Boxing Day.
Indeed. See:
An open-access “predatory” academic journal has accepted a bogus research paper submitted by an Australian computer scientist titled Get Me Off Your Fucking Mailing List.
- http://www.theguardian.com/australia-news/2014/nov/25/journal-accepts-paper-requesting-removal-from-mailing-list
The PDF is here. Actually worth looking at to see the diagrams:
http://www.scs.stanford.edu/~dm/home/papers/remove.pdf [NSFW as it contains bad language writ large]
More on publishing here, guys from ... 10 years ago. It's like it was yesterday.
Ten years ago? That practically is yesterday for this debate. The MLA Ad Hoc Committee on the Future of Scholarly Publishing was established in 1999, and that's in the humanities, where the problems, while still serious, are less pressing (because publication fees are rare, among other reasons).
It's easy to find all sorts of studies and reports on issues with scientific publication from the 1990s, like the Council of Biology Editors (now the Council of Science Editors), Ethics and Policy in Scientific Publication, 1990. There was a ton of this stuff in the '90s.
I don't know how prevalent such discussions were in the '80s and earlier, as I wasn't really an academic then; I read a fair bit of research in various fields in the latter half of the '80s, but I didn't pay attention to the industry. I wouldn't be surprised if it's been going on for a while, though. Certainly there have been organizations of journal editors and the like for a long time; the MLA's Council of Editors of Learned Journals[1] has been around for four decades. The aforementioned CBE/CSE was founded in 1957, according to Wikipedia, so it's past the half-century mark.
[1] I love that name. It's like they were sitting around and said, hey, what would really sound like a front for a sinister cabal?
The paper by Chen Sun, et al., Single-chip microprocessor that communicates directly using light, is published behind a paywall, but the Processor Demonstration Video http://www.nature.com/nature/journal/v528/n7583/fig_tab/nature16454_SV1.html is freely available.
If there's essentially no latency for the conversion from electrical signals to light and back again, and there's a way to integrate "light pipes" on die, they could use it for on chip signaling. That would be huge for speeding things up, as the RC delay of a given length wire goes up with every process shrink.
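The RC scaling argument can be sketched with first-order numbers (a rough back-of-the-envelope; the resistivity and per-length capacitance below are ballpark assumptions of mine, not figures from the paper):

```python
# First-order sketch of why wire RC delay worsens with shrinks: for a
# fixed length, resistance grows as the cross-section shrinks while
# capacitance per unit length stays roughly constant.

RHO_CU = 1.7e-8    # copper resistivity, ohm-metres
C_PER_M = 2e-10    # ~0.2 fF/um wire capacitance, as farads per metre

def wire_rc_delay(length_m: float, width_m: float, thickness_m: float) -> float:
    r = RHO_CU * length_m / (width_m * thickness_m)  # total resistance
    c = C_PER_M * length_m                           # total capacitance
    return r * c                                     # first-order RC product

# The same 1 mm wire at a coarse vs a fine metal geometry:
old = wire_rc_delay(1e-3, 90e-9, 180e-9)
new = wire_rc_delay(1e-3, 22e-9, 44e-9)
print(round(new / old, 1))  # the thinner wire's RC product is ~17x worse
```

Same wire, same length, just a smaller cross-section, and the delay balloons — which is exactly why an on-die optical interconnect would be such a big deal.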
There's a simple thought experiment that can reveal the approximate latency of any such system.
This: "How many bits can it hold?"
Let's say a transducer or subsystem or media is passing 100 Gbit/s (100 bits/ns).
Somebody is worried that it may impose, for example, 100 ns of latency.
So... How many bits can it hold?
Can the structure store 10,000 individual bits? (100 bits/ns x 100 ns = 10,000 bits)
Have they accidentally invented 'The Fastest Cache Memory....In The World'?
If it can store 10,000 bits and at 100 Gbits/s (!!), wow!! A Nobel Prize awaits.
If the structure can only be in one state (0 or 1) over its entire length (ref Speed of Propagation if required; c x velocity factor), then the latency must be less than the inverse of the data rate. In this example, less than 0.01 ns.
It all becomes perfectly obvious once you consider this thought experiment approach.
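The arithmetic in the thought experiment is simple enough to script (using the same hypothetical 100 Gbit/s and 100 ns figures as above):

```python
# The thought experiment, scripted: the number of bits a link must
# "hold" at once is just data rate x latency.

def bits_in_flight(rate_bps: float, latency_s: float) -> int:
    """How many bits the structure must physically hold at once."""
    return round(rate_bps * latency_s)

# 100 Gbit/s link with a claimed 100 ns of latency:
print(bits_in_flight(100e9, 100e-9))  # -> 10000 bits stored somewhere
```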
Merry Xmas.
(I'm just waiting for the family to awaken on Xmas morning.)
I can't see FIFO ever becoming obsolete. Far too useful in, for instance, MTI (Moving Target Indication) in RADAR. Hmph, LIDAR anyone? And this device is perfect for LIDAR use already. And the nifty new around-the-corner LIDAR processing. Oh yeah, I could have fun with this.
I can remember when the Americans said let there be light and on the third day they realised that it was a mistaken belief about the decay of "safe" atoms of lithium and on the third decade they handed back Bikini Atoll several decades early.
My question is what are the Victims of Stuxnet going to make of it. Their problem is that America made war on them too while they were still on a roll, after Japan.
Colin Tree found one example Tx <---> Rx pair with 65 ns...
It works the other way too. Assuming that the example device that you found isn't physically capable of storing bits beyond just being in one state (On/Off), then the latency is a hint about the maximum speed of the device.
65 ns of latency hints that it's a very slow device, operating in the single-digit MHz range. Even 10 MHz is likely to be a horrific phase-shifted sinewave. There are alternate explanations, but they're perhaps less likely and just as low-performance.
If a device with 65 ns latency was used (for example) to transmit a 1 Gbps signal, then it would need to hold at least 65 of those bits within itself physically, which may well be impossible.
This is all a very useful technique to detect tech-nonsense instantly.
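The 65 ns example works out like this (hypothetical figures, not from any actual datasheet):

```python
# Sanity-checking the 65 ns transceiver example.

latency_s = 65e-9  # 65 ns

# If the device can only hold one bit state over its whole length,
# its maximum symbol rate is roughly the inverse of its latency:
print(round(1 / latency_s / 1e6, 1))  # -> 15.4 (low-teens MHz)

# Pushing 1 Gbit/s through it would mean physically storing:
print(round(1e9 * latency_s))  # -> 65 bits inside the device
```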
Happy Boxing Day.
Ref Google Images= CISCO 15454 manual...
IMHO, every fiber terminal similar to the CISCO 15454 fiber transmux/terminal has had a direct fiber-to-electrical interface chip assembly... these folks have a single-chip processor version with similar functions that unfortunately looks like the 10-year-old stuff... great for a hobby and makers kit, just not patentable as new tech stuff... RDS.
The CISCO 15454 terminal is exactly the same as this new development in the same way that a horse is exactly the same as a Bugatti Veyron. They both do the same function, just that one is a bit slower than the other.
Can you point to the page in the CISCO 15454 manual where the phrase "terabit per second" is mentioned? :-)
Obviously, integrating the photonics onto the System on a Chip is opening up a new field. It's not really unexpected news, but it's worthy of mention on the back page.
The unexpected bit, perhaps about a decade from now, will be when the speed of the future on-chip electronics catches up with the physics of light in fibers, because then there will be another new field of study within Physics that will go off in surprising-to-some directions. Maybe Broadband CDMA waveforms on fiber. Maybe phase coding of baseband light. That sort of thing, where light can be treated the same way HF radio wave waveforms are treated now, amplitude controllable at any point in time, bumping into new quantum effects and limitations.
Happy Boxing Day.
Sounds like a 3-D array of these chips would be able to run a feedforward neural net natively.
One major improvement over conventional chips here is the optical interface does not suffer from anywhere near the pin density issues; a handful of fibers can run a massively parallel interface that would normally need tens of thousands of pins with crosstalk issues limiting speed.
Yea! Let picojoules per bit be the next meaningless benchmark race! Since joules and bandwidth both have a base interval of one second, the math comes out to simply (Watts / Bit rate) / 0.000000000001.
Yikes, my home Internet uses over half a million picojoules per bit! I'm going to start calling around to see who can do better than that.
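For anyone wanting to check their own numbers, the benchmark is one line of arithmetic (the router wattage and line rate below are made-up example figures that happen to land near the half-million mark):

```python
# Picojoules per bit is just (watts / bit rate) scaled by 1e12.

def picojoules_per_bit(watts: float, bit_rate_bps: float) -> float:
    return watts / bit_rate_bps / 1e-12

# A ~10 W home router pushing 20 Mbit/s:
print(round(picojoules_per_bit(10, 20e6)))  # -> 500000 pJ/bit
```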