disagree
I think you're underestimating the potential value of this-- I'm guessing you know some circuits and comp arch as well, but I'm going to break it down a bit more than you need for the benefit of others.
You're obviously right that something like this has little or no value for local interconnect, but global interconnect is a major pain in the ass that does not scale or improve with process technology, and this could be a big help in that context. Pipelining global interconnect is most certainly not a panacea-- basically, by adding in more stages, you increase the number of cycles that pass before it's possible to determine whether or not a branch instruction was predicted correctly. When you guess wrong, therefore, you've wasted more time following a bad path, which brings down the average number of instructions completed per cycle (IPC). Intel tried this strategy with the P4-- their thought was that they would use techniques like wire pipelining to reduce the cycle time so far that the drop in efficiency would be outweighed by blazing speed. It didn't really work. By slashing global interconnect latency in half, you could potentially have much larger cores without resorting to these shenanigans (allowing larger branch predictors, for example, and possibly helping with sequential execution speed), or you could facilitate communication between cores.
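To put rough numbers on the pipelining tradeoff, here's a quick back-of-the-envelope sketch in Python. All of the figures (branch frequency, misprediction rate, stages to resolution) are made up for illustration, not taken from any real design:

```python
# First-order model: the deeper the pipeline, the more cycles pass between
# fetching a branch and resolving it, so each misprediction throws away
# more work.  All numbers here are illustrative.

def effective_ipc(base_ipc, branch_fraction, mispredict_rate, resolve_stages):
    """IPC after inflating CPI with the branch misprediction penalty."""
    base_cpi = 1.0 / base_ipc
    # Each mispredicted branch wastes roughly 'resolve_stages' cycles.
    penalty_cpi = branch_fraction * mispredict_rate * resolve_stages
    return 1.0 / (base_cpi + penalty_cpi)

# Shallower pipeline: branch resolves ~12 stages after fetch.
print(effective_ipc(2.0, 0.2, 0.05, 12))   # ~1.61 IPC
# Deep, P4-style pipeline: ~24 stages to resolution.
print(effective_ipc(2.0, 0.2, 0.05, 24))   # ~1.35 IPC
```

The deep pipeline buys you a higher clock, but it hands a chunk of the win straight back in wasted cycles-- which is the exchange rate the P4 got burned on.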
For global interconnect, optical communication has a lot of attractive features even beyond the roughly 2x reduction in latency. In a traditional bus, you have multiple long wires in parallel. The metal layers used for global interconnect tend to be relatively tall and thin-- the thinness is to get density, and the tallness is to compensate for the effects of that thinness on resistance. As a result, you have large plates close to one another, and you develop significant capacitance between neighboring lines, which means that the relative voltage of the neighboring lines will tend to stay the same. So, if one line moves from high to low voltage, it will push its neighbor down as well. This is called cross-coupling, and it can do some very nasty things.

Suppose that one line is driven at a constant low voltage (we'll call this the victim). The other line starts off at high voltage and transitions low (this is the attacker). The attacker pushes the victim down below 0 volts. Suppose that the victim is driving into a latch (memory element) which is not enabled. In a typical latch, there are cross-coupled inverters, and an NMOS transistor acting as a gate, with the input at the drain and the output at the source. When there is a positive voltage difference between the gate and the source (above the transistor's threshold), electrons flow from the source to the drain. Normally, when you have 0 volts on the gate, you expect nothing to flow through the device, and your state will not be written. But if the victim gets dragged below 0, you potentially have enough of a voltage difference between the gate and the source to turn the transistor on, allowing a write to your memory when the latch is supposed to be disabled. This is seriously ugly, it's transient, and it's very difficult to catch.
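To get a feel for how big that dip can be, here's the standard first-order capacitive-divider estimate in Python. It ignores the victim's driver fighting back (so it's the instantaneous worst case), and the capacitance values are arbitrary illustrative numbers:

```python
# First-order crosstalk estimate: the attacker's swing divides across the
# coupling cap and the victim's cap to ground.  This ignores the victim's
# driver pulling the line back, so it's an instantaneous worst case.

def victim_glitch(attacker_swing, c_couple, c_ground):
    """Voltage bump capacitively coupled onto the victim line."""
    return attacker_swing * c_couple / (c_couple + c_ground)

# Tall, thin, tightly pitched global wires can have coupling capacitance
# comparable to their capacitance to ground.  Illustrative values:
vdd      = 1.0    # attacker swings a full rail, high -> low
c_couple = 1.0    # coupling cap between attacker and victim (arbitrary units)
c_ground = 1.5    # victim's cap to ground and quiet neighbors

print(victim_glitch(-vdd, c_couple, c_ground))   # -0.4: victim dips below ground
```

If the pass transistor's threshold is a few hundred millivolts, a dip like that on its source is enough to crack it open for a moment, which is exactly the spurious write described above.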
Circuit designers sometimes use techniques called shielding and half-shielding to reduce these problems. Shielding involves inserting lines tied to ground into a bus, either between every pair of signal wires (full shielding) or after every two signal wires (half shielding). As you can imagine, this uses up a lot of area. There are other issues from cross-coupling as well (burning more power, for example, if neighbors transition in opposite directions)-- the hard-core analog side of things is not really my cup of tea-- but pretty much all of this crap should go away with optical interconnect.
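The area hit is easy to see with a quick count. Here's a sketch for a hypothetical 64-bit bus (the exact tallies depend on whether you also shield the outer edges, but the ratios are the point):

```python
# Routing tracks needed for a 64-bit bus under different shielding schemes.
# Counting convention: shields only between signals, none on the outer edges.

signals     = 64
unshielded  = signals                       # 64 tracks
half_shield = signals + (signals // 2 - 1)  # ground line after every 2 signals: 95 tracks
full_shield = signals + (signals - 1)       # ground line between every pair:   127 tracks

print(unshielded, half_shield, full_shield)
# Full shielding nearly doubles the track count; half shielding costs ~50% more.
```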
Also, with all the capacitance in global interconnect, you can blow a lot of power charging and discharging the lines, and to get them to go fast, you need large, power-hungry transistors, and probably repeaters every so often, which burn still more power and add extra latency (now you have gate delays on top of your wire delay).
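To put a rough number on the charging/discharging cost, the standard dynamic-power relation is P ≈ activity × C × V² × f per line. Here's a sketch with round, made-up values (not any particular process):

```python
# Dynamic power burned charging and discharging a wire: P ~ a * C * V^2 * f.
# All values below are round illustrative numbers.

def dynamic_power(activity, cap_farads, vdd_volts, freq_hz):
    """Average switching power for one line, in watts."""
    return activity * cap_farads * vdd_volts**2 * freq_hz

wire_cap = 2e-12   # ~2 pF for a long global wire plus its repeater loads
vdd      = 1.0     # volts
freq     = 3e9     # 3 GHz clock
activity = 0.15    # fraction of cycles the line actually toggles

per_line = dynamic_power(activity, wire_cap, vdd, freq)
print(per_line)          # ~0.9 mW per line
print(per_line * 512)    # ~0.46 W just to wiggle a 512-bit global bus
```

And that's switching power alone, before counting the repeaters' own power or the extra switching when coupled neighbors transition in opposite directions.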
In short, if it's fast enough to convert between the electrical and optical domains, and the pitch of optical interconnects is fine enough, and the interconnects can be forked (one driver multiple receivers), this could be a big winner (faster, lower power, more reliable, what's not to like?). I do agree with you that they've been talking about this kind of thing for years and nothing's come out of it yet, but that's not to say the hurdles will never be circumvented, and there are obviously some fine minds working on this stuff, so I feel it's a bad idea to dismiss the possibilities out of hand.