"Kuma envisages a model where 5G phones act as relays for signals – a mesh network – so that the mobile phone operator can reduce its expenditure"
Won't this kill battery life?
A team from Stanford University has patented a technology which could make a huge difference to mobile phone operators by halving their bandwidth requirements overnight. Kumu Networks is showcasing tech which allows radio equipment – such as that used by mobile telephones – to send and receive on the same frequency. It does …
> Won't this kill battery life?
No more than the updates for Angry Birds to squeeze yet more adverts into it.
Seriously, there's other stuff in the LTE spec that lets the operator suck the juice from your phone. Take MDT (Minimization of Drive Tests), for example - a feature designed to turn your mobile into a radio test station and let the operator hoover up the logs to do radio optimisation. It'll be a lot cheaper for the operator to do this than to send out a van stuffed full of expensive electronics - and it will obviously let them collect data in places the vans can't reach.
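For a sense of what that reporting amounts to, here's a mock of the kind of record an MDT-style log would carry - the field names and values are my own illustration, not the 3GPP message format:

```python
# Mock of an MDT-style radio measurement record. Field names and values are
# invented for illustration; the real 3GPP reporting format differs.
import json, time

def radio_measurement(cell_id: int, rsrp_dbm: float, lat: float, lon: float) -> dict:
    return {
        "timestamp": time.time(),
        "cell_id": cell_id,
        "rsrp_dbm": rsrp_dbm,      # signal power the phone measured
        "location": [lat, lon],    # exactly what a drive-test van would log
    }

# The phone batches these up and the operator collects them later -
# radio optimisation data without ever rolling a van.
log = [radio_measurement(0x1A2B, -97.5, 51.5074, -0.1278)]
print(json.dumps(log))
```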
"Kuma envisages a model where 5G phones act as relays for signals – a mesh network"
A mesh network would be harder to tap with legal intercept. Think of Skype pre- and post-Microsoft (or the original bomb-proof mesh Internet versus today's star/tree design). For this reason alone it will not happen as envisioned.
Don't get me wrong, I think the idea is great.
A Mesh network would be harder to tap with legal intercept.
Actually, no. If the intercept is on a proper legal footing, they can simply ask the network operator for a tap of the voice stream inside their network; the operator is set up for that. But if it's an ILLEGAL intercept, a mesh network could potentially mess (mesh?) things up.
However, mesh networks - MANETs, for instance - coordinate to manage how traffic spreads: all you need to do is pose as a strong, well-connected node with plenty of battery left and you'd pull in all the local traffic again.
The simplest defence against a mesh tap would be the ability to see how you are connected, a bit like etherape - a high-connect node would stand out like a sore thumb, so you could assess whether it's just an edge node or a malign entity. It's one of the issues with using the concept for military purposes: you can easily identify leadership unless you camouflage the command identity with a lot of deception traffic going elsewhere.

Having said that, if we can't even see that we have an unencrypted GSM cell connection (in breach of the GSM specs) because some high-end git decided we should not know about what they get up to, the chances of getting a view of the local mesh are slim.
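A toy illustration of that etherape-style check - the topology below is invented, and a real client would build the link list from whatever routing beacons the mesh exposes:

```python
# Minimal sketch: spotting a suspiciously well-connected mesh node by degree
# count, etherape-style. The topology is invented for illustration.
from collections import Counter

links = [
    ("A", "B"), ("B", "C"), ("C", "D"),   # ordinary edge nodes
    ("X", "A"), ("X", "B"), ("X", "C"),   # "X" talks to everyone...
    ("X", "D"), ("X", "E"), ("X", "F"),
]

degree = Counter()
for a, b in links:
    degree[a] += 1
    degree[b] += 1

mean = sum(degree.values()) / len(degree)
for node, d in degree.most_common():
    flag = "  <-- high-connect node: edge router or malign?" if d > 2 * mean else ""
    print(f"{node}: {d} links{flag}")
```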
Veering back onto the main topic: network and cell saturation has been a problem for years, and has led to active filtering of base stations (cells) in case of emergencies to retain capacity. I'm in favour of anything that can improve upon that.
The best solution is encryption. Let them intercept. You'd still need the operator's server for billing and key management, but the mesh can handle the bulky traffic.
Make sure to pad the bitrate or use a CBR codec, though - it's possible, though tricky, to reconstruct a good guess at the words uttered just from the bitrate fluctuations after compression.
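A rough sketch of the padding idea - the 160-byte wire size and the frame lengths below are made up, and you'd pad before encrypting so the length prefix stays hidden:

```python
# Defeating bitrate traffic analysis: pad every compressed voice frame up to
# a fixed size *before* encryption, so the ciphertext the eavesdropper sees
# is always the same length. The 160-byte target and the frame sizes below
# are illustrative, not taken from any real codec.
import os

WIRE_SIZE = 160  # fixed padded size, chosen to fit the largest codec frame

def pad(frame: bytes) -> bytes:
    assert len(frame) <= WIRE_SIZE - 2, "frame too large for fixed wire size"
    # 2-byte length prefix lets the receiver strip the padding after decrypting
    return len(frame).to_bytes(2, "big") + frame + os.urandom(WIRE_SIZE - 2 - len(frame))

def unpad(padded: bytes) -> bytes:
    return padded[2 : 2 + int.from_bytes(padded[:2], "big")]

# VBR codec output fluctuates with speech content...
for frame in (b"\x01" * 23, b"\x02" * 57, b"\x03" * 12):
    padded = pad(frame)              # this is what you'd feed the cipher
    assert len(padded) == WIRE_SIZE  # ...but the wire size never does
    assert unpad(padded) == frame
```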
So how do the TDD LTE services work then, if they can't transmit on the same frequency?
Maybe the reuse of frequencies for mobile transmit in the base-transmit parts of FDD band plans is what is going on here? Listening to certain operators' roadmaps for mobile-to-mobile transmit using the base-transmit frequencies, I think this will be seriously considered. However, getting the on-board radios to do multiple bands and technologies - and then throwing in the curveball of transmitting down a previously receive-only part of the RF chain - will no doubt cause some sleepless nights (and fat profits) at Qualcomm et al.
This "new" (?) tech is already a relatively common feature on satcom modems; the sort of modem that fits into a 19-inch rack and is ultimately connected to the big silly dish outside. In that application, the EIRP needs to be adjusted down a bit to account for the fact that the two transmitters are aimed at the exact same transponder at the exact same time. Each modem cancels out what it transmitted (Y) from what it receives (Y+X), revealing the desired signal from the other guy (X). So the transponder can be used in both directions at once, in almost exactly the same manner as a single twisted pair can carry telephone conversations in both directions at once.
Base stations can transmit at powers up to +64 dBm and receive at levels down around -100 dBm. That's a 164 dB difference in signal level, or more than 16 orders of magnitude. How many bits of resolution would you need to separate out two signals at such different amplitudes? My rough calculation says 27 bits.
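The arithmetic, for anyone checking - each ADC bit buys roughly 6 dB of dynamic range:

```python
# Each ADC bit buys ~6.02 dB of dynamic range (20 * log10(2) per bit).
from math import log10

span_db = 64 - (-100)                 # TX level minus RX level
bits = span_db / (20 * log10(2))
print(f"{span_db} dB span needs ~{bits:.1f} bits")   # ~27.2 bits
```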
Is that even possible? I don't know of a way of doing that even at audio frequencies, let alone RF.
Possible? All you need is a 32-bit ADC, or a sufficiently large number of smaller ADCs (or some trickery with signal attenuation and increased sampling rates). Not necessarily cheap, though. But the first UMTS demonstrator systems weren't cheap or portable either (unless you count dumping them in a car).
For a proof-of-concept setup you could restrict the dynamic range somewhat. But what I'm more afraid of are reflections: sorting the multiply-reflected, time-delayed, Doppler-shifted transmit signals out from the multiply-reflected, time-delayed, Doppler-shifted receive signals while keeping the SNR at an acceptable level is going to be a major problem.
To stay on the safe side, I'd grant them a 25% gain in bandwidth. Anything more than that in a realistic environment (like a car going down the Autobahn at 250 kph - or cruising on a city highway at 130 kph while overtaking a refrigeration truck) will require a lot of convincing.
Well, you can use directional couplers, and you can actually build one "into your antenna" by giving the antenna the right geometry. Both techniques can bring you about 40 dB of separation, tops. Maybe that gets you into the range where local reflections become the relevant problem.
But it's not going to be much of a revolution.
It will be very interesting to see whether, in practice, they can double throughput / halve bandwidth and/or get that close to the Shannon limit.
The real world of practical radio communications has a habit of chucking up interference, noise, intermodulation and other problematic effects sufficient to give implementers a headache.
First: it is KumU Networks, not KumA Networks... big difference if you go looking for them.
Actually, as someone who does know a little: it is an approach that is looking particularly interesting, and it is not new. Self-interference cancellation has been around for a while - there are multiple university groups in the US, UK and Ireland that I am aware of working on the topic, and I am sure there are more that I am not aware of. There are a number of issues that make it challenging to deploy in modern communication schemes; they can be overcome - at a cost. It is particularly attractive as a means to simplify the passive-filter problem (circulator/duplexer). However, these "full-duplex" single-band radios tend to be SISO systems, where this is a problem. When you get to MIMO systems, the problems become much more complex, and I've not seen published work that gets anything close to 100 dB cancellation (including their own work).
So it's an attractive solution for many reasons, but the complexities of getting it to work in practice with realistic systems are staggering. I think these guys are credible, but commercial-grade systems are orders of magnitude more complex than prototype systems. Someone will get it to work.
PS: there is European tech in this space in universities, but there is a snowball's chance of getting $25M to develop it. Being first is not relevant if you can't get it out of the lab :(
First, the +64 dBm is rather high, especially as it is probably an EIRP that includes antenna gain, which boosts the received signal too. See http://lteuniversity.com/ask_the_expert/f/69/t/2982.aspx. Taking the +43 dBm figure from somewhere in that thread, we've gained a factor of 100 - and the problem shrinks further as small cells grow in importance.
Second, you don't digitise the signal at the antenna: you first use some passive circuitry to reject at least part of the transmit signal entering the receiver. Stanford (Kumu) use a circulator, but there are other directional circuits that can be used.
Then there is some analogue cancellation to further reduce the residual to a level where you can convert to digital.
Then quite a lot of linear and non-linear digital signal processing to recover a clean received signal with the transmit signal cancelled out.
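Putting rough numbers on that chain - only the +43 dBm and the roughly -100 dBm noise floor come from this thread; the stage split is my own guess, not Kumu's published figures:

```python
# How the rejection burden might split across the three stages. The
# per-stage figures are illustrative guesses, not Kumu's numbers.
tx_dbm = 43.0             # base station TX (the +43 dBm figure above)
noise_floor_dbm = -100.0  # roughly where the wanted signal can sit

required_db = tx_dbm - noise_floor_dbm
print(f"total self-interference rejection needed: {required_db:.0f} dB")

passive_db = 15.0    # circulator / directional circuit before the LNA
analogue_db = 45.0   # enough that the ADC no longer clips
digital_db = required_db - passive_db - analogue_db
print(f"leaving {digital_db:.0f} dB for the linear + non-linear DSP stage")
```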
By the way, the basic technique of cancelling what you are sending from what you receive has been used in phone-line modems for quite a while (actually, it's obsolete now!). In RF it was first introduced in a product called Groundsat by Plessey in the 70s, IIRC - an on-channel repeater for VHF combat radio. A team at Bristol University did some research on applying the technique to personal communications ("PCN") when that was first mooted and published a paper - must have been in the mid-80s.
An obstacle to using this in cellular FDD bands: the technique relies fundamentally on knowing what you are transmitting so you can cancel it, so while you might be able to cancel your own transmissions, you don't know what your neighbours are transmitting - and that may come in at rather high levels even though it is not on the same frequency. This is a similar problem to TDD's, but possibly more difficult to solve.
But an interesting technique - phone lines have always worked both ways on the same frequency, about time the same technique was applied to wireless.
"phone lines have always worked both ways on the same frequency, about time for the same technique applied to wireless"
Agreed, but there are a couple of significant differences: traditional telephone circuits are (a) essentially linear - the sidetone coil and hybrid transformers are both linear devices - and (b) the disparity in level between the transmitted and received signals is not excessively great.
Wireless, on the other hand, can have an enormous disparity between the levels of the TX and RX signals, and non-linearity is easily introduced in the early RF stages, so crosstalk can become rife.
"non-linearity is easily introduced in the early RF stages"
Or in the staples holding the wires to battens in the fences surrounding the base station (it turns out that rust makes a fairly decent diode).
Then there's the issue of harmonics being radiated from the switch-mode power supplies in the equipment bay two aisles over.
These are both issues I've had to deal with.
You can't.
Shannon.
There is no free lunch. In the real world you need more power or less speed. We are already close to the Nyquist/Shannon limit.
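For reference, the ceiling being invoked - the bandwidth and SNR below are just example numbers, not any particular deployment:

```python
# Shannon capacity C = B * log2(1 + SNR) caps what any modulation can carry
# in a given band at a given SNR. Example figures: 20 MHz channel, 20 dB SNR.
from math import log2

bandwidth_hz = 20e6
snr = 10 ** (20 / 10)                       # 20 dB as a linear power ratio
capacity = bandwidth_hz * log2(1 + snr)
print(f"{capacity / 1e6:.0f} Mbit/s ceiling per direction")  # ~133 Mbit/s
```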
Mesh networking is REALLY SLOW and high-latency compared with real base stations.
All of this has been done for years. Nothing new here, just hype.
I must agree with Mr Robinson's comments earlier. The practical numbers are difficult. Even in a portable device the TX power would be +33 dBm (2 W), against a reference sensitivity of, say, -110 dBm (that's 0.01 picowatts, BTW).
What the technique is doing is subtracting a huge number from another huge number to derive something very tiny.
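In concrete terms, using the figures above:

```python
# The subtraction in concrete terms: +33 dBm TX against -110 dBm sensitivity.
from math import log10

tx_w = 10 ** ((33 - 30) / 10)      # +33 dBm ~= 2 W
rx_w = 10 ** ((-110 - 30) / 10)    # -110 dBm = 1e-14 W = 0.01 pW
print(f"TX {tx_w:.1f} W vs wanted RX {rx_w:.0e} W")
print(f"self-interference must go down by ~{10 * log10(tx_w / rx_w):.0f} dB")  # 143 dB
```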
So, summarizing my thoughts:
- This technique is pretty old.
- Modern processing may be making it more feasible, but still, I suspect, not practical.
- The degree of isolation claimed is still not enough to double the bandwidth for operators (as the headlines would have you believe) in a practical cellular network.
- The degree of isolation in a practical device will be significantly lower than demonstrated.
- The processing power required with today's technology will still be a battery drain.
- Better isolation is achievable at higher frequencies.
So I'm not surprised that the suggested applications are all at high frequencies, with low TX power and short range, and for mains-powered equipment - like a repeater.
Interesting, but not practical yet.
Er, actually, in an LTE terminal the maximum TX power is +23 dBm and receive sensitivity around -100 dBm, according to the 3GPP specs - so chop 20 dB off the problem. Maybe it wasn't clear from the original report, but the technique has already been demonstrated working in real time on relatively small hardware, with ~15 dB of isolation from a passive circulator and over 100 dB from a combination of analogue and digital cancellation.
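Running those figures against each other (my arithmetic, not theirs):

```python
# Demonstrated isolation vs what a worst-case LTE terminal would need.
tx_dbm = 23                  # LTE terminal max TX per 3GPP
sensitivity_dbm = -100       # approximate reference sensitivity
demonstrated_db = 15 + 100   # circulator + analogue/digital cancellation

needed_db = tx_dbm - sensitivity_dbm
print(f"needed {needed_db} dB, demonstrated {demonstrated_db} dB -> "
      f"{needed_db - demonstrated_db} dB short at reference sensitivity")
```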
As per the guys' own papers, it can work with less cancellation if the received signal is stronger than the minimum - so the fact that it worked in a demonstration is no proof that it will work with minimum-strength signals.
BTW, the issue of leakage from TX to RX within the radio is the least of the problems. I love that comment about rust... which is so true :)