Re: Not a lot to see here, kids.
Do you know what spectrum was used for that 5G? Most of it was initially deployed in the lower frequencies so the masts get the range, and it's only recently that the higher frequencies have started to be used.
Nope, it's an entire suite of protocol implementations that are hideously complicated but enable ridiculous speeds with very low latencies (admittedly gobbling spectrum to do it).
4G, aka LTE, is a completely different set of documents containing similar material. Similarly for 3G and 2G. They've evolved over time, so you can see some commonality in contents and overall structure of the standards, but they are quite different in the detail.
Geolocation of a mobile in 3D is very tricky, as all the signals the mobile is measuring effectively originate at ground level. Even with pure line of sight and excellent timing resolution or perfect power control, the differences in measured signal between heights would be tiny, and the standards don't allow for that precision - measurements are basically a short int for signal power and/or quality for each cell reported.
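As a rough illustration of why those reported measurements can't resolve height: RSRP is reported as an integer index in 1 dB steps (0..97, per 3GPP TS 36.133), while the slant-range difference between a phone at ground level and one 100 m up is a tiny fraction of a dB. A minimal Python sketch - the transmit power, frequency, and distances are made-up illustrative numbers:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) \
        + 20 * math.log10(4 * math.pi / 3e8)

def rsrp_report_index(rsrp_dbm):
    """Map RSRP in dBm to the 3GPP TS 36.133 reporting index:
    an integer 0..97 in 1 dB steps starting at -140 dBm."""
    return max(0, min(97, int(rsrp_dbm + 140)))

# Illustrative numbers: 43 dBm EIRP, 1.8 GHz, mast 2 km away horizontally
tx_dbm, freq = 43.0, 1.8e9
for height_m in (0.0, 100.0):
    slant = math.hypot(2000.0, height_m)
    rsrp = tx_dbm - fspl_db(slant, freq)
    print(f"height {height_m:5.1f} m: RSRP {rsrp:.3f} dBm -> index {rsrp_report_index(rsrp)}")
```

Both heights land on the same reported index - the ~0.01 dB path-loss difference vanishes in the 1 dB quantisation.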
Or you could mandate that phones use GPS to work it out and report to the base station. Also bad, as it won't work indoors.
None of the above stops the mast from radiating a bit into the air, anyway. You'll not be able to stop that (and most masts are already directional) as the signal just bounces off everything.
The solutions are to widen the guard band, mandate a sharper cutoff on used frequencies (for all devices concerned), make better altimeters, or keep everything suitably far apart to avoid the issue (assuming tests show there is interference).
Not in general - there is little use in radiating upwards for most masts. At lower altitudes, this can be an issue - climb a skyscraper in London and you get more visibility of masts a long way away, but as you go higher this effect is reduced (not necessarily eliminated - but at the very least you're dealing with the inverse-square law for distance from mast, coupled with the beam shape, so you should get a very low signal).
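To put numbers on that inverse-square point, a quick free-space path loss comparison in Python, using the standard FSPL formula (the distances and frequency are illustrative):

```python
import math

def fspl_db(d_km, f_mhz):
    """Free-space path loss: 20log10(d_km) + 20log10(f_mhz) + 32.44 dB."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

f_mhz = 800.0
near = fspl_db(1.0, f_mhz)    # ground user ~1 km from the mast
cruise = fspl_db(9.1, f_mhz)  # slant range straight up to ~30,000 ft
print(f"1 km: {near:.1f} dB, 30k ft: {cruise:.1f} dB, extra: {cruise - near:.1f} dB")
```

Roughly 19 dB of extra loss from distance alone, before the mast's beam shape knocks the upward signal down further.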
It's more of an issue for the network, with a phone on a call running through cells every few seconds on take-off/landing as the plane goes at ~200mph.
This has been studied by Boeing, and there have been some instances of strong correlation between use of portable electronic devices and interference with plane systems. Those instances are >20yrs old, and none of them involved phones. However, the report goes on to state:
The laboratory results indicated that the phones not only produce emissions at the operating frequency, but also produce other emissions that fall within airplane communication/navigation frequency bands (automatic direction finder, high frequency, very high frequency [VHF] omni range/locator, and VHF communications and instrument landing system [ILS]). Emissions at the operating frequency were as high as 60 dB over the airplane equipment emission limits, but the other emissions were generally within airplane equipment emission limits. One concern about these other emissions from cell phones is that they may interfere with the operation of an airplane communication or navigation system if the levels are high enough.
I don't know when this report was generated - phones may have moved on since then...
More modern planes are designed with mobile phones in mind, and at least some are perfectly happy for you to have your phone on while the plane is in motion. But I would say that cell towers do not like phones entering coverage at 200+ mph (take-off), and certainly not at 500mph+ (cruising) - though cell towers are unlikely to notice a phone up at 30k feet, as their antennas don't point upwards in general.
If plane altimeters use the same frequencies as nearby cell towers, then I wouldn't like to rely on that measurement. It's easier to tell folks to turn their phone off than it is to make sure all cell towers that could conceivably interfere don't use that frequency. Or they could have not sold off that frequency, or they could try shifting the altimeters' operating frequency (!).
It should be noted that mobiles using ~4GHz spectrum probably won't have the best range. It's not the highest frequency for 5G, but it's not going to have the range of 800MHz at the same power levels.
For most folks, you don't have to click on links in the email for them to know you've read it. Marketing platforms report "open rate" as well as "click rate" - at the very least they can bury some user-specific HTML in the email pointing at a micro-pixel/image, so once that loads they know the user has opened the email.
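For illustration, the mechanism is as simple as embedding a per-recipient image URL. A minimal Python sketch - the tracker domain and endpoint are made up:

```python
import uuid

TRACKER = "https://mail.example.com/open"  # hypothetical tracking endpoint

def tracking_pixel(recipient: str) -> str:
    """Build a 1x1 <img> tag whose URL is unique to this recipient.
    When the mail client fetches the image, the sender's server logs an 'open'."""
    token = uuid.uuid5(uuid.NAMESPACE_URL, recipient)  # stable per-recipient token
    return f'<img src="{TRACKER}?r={token}" width="1" height="1" alt="">'

print(tracking_pixel("alice@example.org"))
```

Blocking remote image loading, as some clients do by default, is what defeats this.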
Yes, the more technically minded can possibly block that, but most people don't/can't. And some people may just "add sender/domain to safe list" for a recognised sender so that the email displays correctly in some clients.
To paraphrase the shop analogy in another post - if I walk into a shop, I'm ok with them noticing I'm there. I'm not ok with them rummaging in my pockets for other till receipts to see where else I've been and what I've bought, or for them to shove other bits of paper in my pockets that they staple to the material to make it hard to throw away.
"self-acting, moving or acting on its own," 1812 (automatical is from 1580s; automatous from 1640s), from Greek automatos of persons "acting of one's own will;" of things "self-moving, self-acting," used of the gates of Olympus and the tripods of Hephaestus (also "without apparent cause, by accident"), from autos "self" (see auto-) + matos "thinking, animated," *men- (1) "to think."
Automatic = self-thinking
While I generally allow some sites, it is somewhat ironic (not the Alanis Morissette version) that El Reg is running this headline while also being "guilty" of the practice. Accept all El Reg cookies - 1 click. Customise El Reg cookie settings - 2 clicks (assuming "Tailored advertising" and "analytics" are default unticked in this dialog).
Interesting concept, although I wonder how each of the thousands of radio nodes gets configured with the very long list of possible core network nodes to use. Not sure how many vendors support that at present, even if multiple nodes are supported. Last I saw, which may have been a while ago(!), most address a single box (per hosted MNO?) which may then contain a bunch of servers load-balanced in a similar fashion.
I think the main point they were making is that the infrastructure investment also needs to include the huge number of base stations required to deliver sufficient bandwidth to the punter, pointing out that VF (DE) have not invested in their radio access network to deliver on contracts that tout (up to) 500mbps. RANs are expensive to commission and run - historically, some infrastructure providers have offered major incentives to take their kit in order to get hardware on the ground (ripping out old kit can also be expensive), but I'm not sure how much that happens anymore. Perhaps VF are yearning for such good old days.
Secondly, if a speedtest returns shite, it's a poor showing by VF, as the traffic for those tests is usually prioritised anyway, so reality is likely to be worse! Unless the poster is living deep in a Bavarian mountain range with nowt but the occasional yodeller for company, and the office is a shepherd's hut at the end of the garden. And the cost of shipping bits to the end user is not just about the various fat cables interconnecting networks - there's also the cost of supplying the last mile to bear in mind.
I don't think OP was complaining that Netflix or Ericsson weren't paying for VF's infrastructure - just that, where they are, the infrastructure is shit.
500mbps LTE? Seems optimistic to expect that everywhere, as a single 20MHz carrier of LTE peaks out at around 100mbps, so you need five carriers in carrier aggregation to get there (the most supported by the standard), plus a device that can receive it, plus a network that has deployed it in your location, plus limited contention, plus decent RF propagation (low noise) so that you can use the higher coding schemes...
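A back-of-envelope version of that arithmetic in Python - the ~100mbps-per-20MHz figure is the rough one quoted above; real peaks vary with MIMO and modulation:

```python
def peak_mbps(carriers, mhz_per_carrier=20, mbps_per_mhz=5.0):
    """Rough LTE downlink peak: ~5 bits/s/Hz gives ~100 Mbps per 20 MHz
    carrier; MIMO and higher-order QAM can push beyond this."""
    return carriers * mhz_per_carrier * mbps_per_mhz

for n in range(1, 6):
    print(f"{n} carrier(s): {peak_mbps(n):.0f} Mbps")
```

Only with all five aggregated carriers (and everything else going right) do you touch 500mbps.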
Amusing. Now compare profits between a Telco and their infrastructure providers. The likes of Vodafone make far more profit, and the last 15-20 years have seen margins on infrastructure squeezed so hard that the number of infrastructure vendors large enough to finance this has been greatly reduced.
Basically they're saying: Nokia, Ericsson, Huawei (where possible), please give us your boxes for free. It won't/shouldn't fly.
2000-2010 was the period when UK telecoms infrastructure expertise died - out-competed and outmanoeuvred by cheaper alternatives. There used to be loads of companies along the M4 corridor, and they gradually cut R&D/engineering until it vanished. Motorola, Alcatel(-Lucent), Nokia, Nortel all employed significant numbers (me included), but UK staff were too expensive, so the work either gradually dried up from competition or migrated eastwards where it was cheaper.
My personal machine is also about 11 years old, a trusty i5-2500K based jobby.
Processing power hasn't really increased all that much in the intervening years; what has significantly improved, though, is the efficiency of the chips.
Compare specs with a new-ish i5-1135G7 and, while there is a bit of an uptick in GHz and double the threads supported, the TDP has dropped from 95W to 28W.
Combine that uplift in efficiency across my whole system and I wonder what power savings I'd achieve by upgrading, and how long that would take to pay for itself (somewhat more rapidly if I have to change energy tariff soon with the current price hikes!).
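A rough stab at that payback sum in Python - the usage hours, average load factor, tariff, and replacement cost are all assumptions:

```python
def annual_saving_gbp(old_tdp_w, new_tdp_w, hours_per_day, load_factor, pence_per_kwh):
    """Crude model: assume the CPU averages TDP * load_factor while on,
    and the TDP delta is the whole power difference between the systems."""
    delta_kw = (old_tdp_w - new_tdp_w) / 1000.0
    kwh_per_year = delta_kw * load_factor * hours_per_day * 365
    return kwh_per_year * pence_per_kwh / 100.0

saving = annual_saving_gbp(95, 28, 6, 0.3, 30)   # 6 h/day, 30% load, 30p/kWh
print(f"~GBP {saving:.2f}/year; a GBP 600 replacement pays back in ~{600 / saving:.0f} years")
```

On those made-up numbers, the electricity saving alone would take decades to pay for a new box.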
The only significant increase in performance is seen in graphics cards - I have an old Radeon 5770, which was a good one when I bought it and still does a reasonable job on the games I can run (e.g. CS:GO).
Overall, there's nothing in my system that screams "upgrade me", but it's a bit annoying that the latest iterations of WinOS (and graphics card drivers) mean that I can't run any new games that I might be interested in.
Beyond (artificially?) limiting new software products/features to new OS versions, I see no reason why upgrading a reasonably specced machine from the last 10 years will increase most people's productivity (excepting those who hammer C/GPU cores in their work).
It's another money grab, pure and simple. There are many companies that watch games and produce many stats - time on the ball, passes completed, shots, tackles, assists, ... The list goes on, and is across many sports. IIRC some of these are automated with player/ball recognition on the video feeds.
Those companies make money from their efforts, ergo the players/clubs want their cut. No idea what's in the fine print of a ticket, as I don't watch any sports, let alone the overpriced buffoons (YMMV, e.g. Rashford) kicking a ball around.
It will be an interesting case, but one I hope the footballers lose. IANAL, so no prediction from me.
And as for assumed consent for sharing data, it’s important to remember that assumed consent is still informed consent: patients are told that they are assumed to have consented to the sharing of their data for use in metadata analysis and, should they wish to opt out, how to do so.
Also, not forgetting that some govmts (mentioning no names, UK) don't do a very good job of informing the population about the availability of consent options in the first place, or of making it easy to opt-out.
I'd be happy with my data being shared with organisations working on cures/treatments who have a socially responsible approach to such work, and whose results will not be used to gouge as much cash out of patients as possible. Until such agreements are in place, it's a "no" from me on that basis (although at present my records are unlikely to be of interest to anyone)
> training this based on 30.000 images
Depends on the images - the article mentions zooming & annotating. If each image has many thousands of cells on it, some of which are/may be cancerous, then that would help, but even then the number of labelled positives to train on might be a bit light for my liking.
Saying that, how many slides are used to train a medic? IANAMedic, so no idea. How many different features can a cancerous cell/cell group have? How big a grouping of cells do you need to look at to say "here be cancer"?
> I would guess that creating a virtual world prior to powering up the personality would be a necessary step.
You'll need to figure out how to pipe sensory inputs into the brain first, and somehow figure out how the brain can provide output into that world too. You won't be able to just jack the brain into an Unreal Engine and expect a seamless interface.
You could have a silicon brain the size of a planet, but if the code reads:
10 PRINT "I AM AWESOME"
20 GOTO 10
you will not in any way achieve "intelligence".
I doubt anyone is anywhere even close to replicating a single neuron in code, let alone how umpteen billion of them interconnect, take inputs, produce outputs, grow, adapt, change.
If it was possible to copy and paste a brain into a computer and run it then I would suggest that the reproduced brain would essentially be imprisoned in a state of sensory deprivation and is unlikely to cope at all well with these conditions; I certainly wouldn't volunteer to be a test subject.
Agreed in part - if it were possible to copy/paste the state of a developed brain, then the bits expecting visual/audio/touch/taste/digestive/... inputs would certainly have a somewhat disconcerting experience, and this may have an impact on that silicon brain. Similarly, severing mechanical ties to movement may also have a bad effect - I wonder if there might be a similar amount of pain to that experienced by amputees with "phantom limb" syndrome. Would there be a "phantom body" syndrome?
So, how long would it take this silicon construct to readjust itself to its new confines - could it sit as a higher-layer protocol on a computer and interface to a web-cam for eyes? Or would an optic nerve interface need to be developed (see Night Walk)? Similarly for audio reception.
On the whole, it would probably be easier to get input into the "brain" than to extract output, as our communication is inherently controlled by a physical process. Would we also need to incorporate some form of nervous system that interfaces to loudspeakers?
On the whole, rather challenging before you get down to the ethics of it. Technically you may well be creating a new person with the paste operation - the personalities will instantly start diverging.
Does the copy get to vote? Is removing a power source from it murder?....
They're working from the same playbook. Here's a side-by-side comparison with Big Tobacco self-defence PR:
Apparently in the USA, "it is against the law to have computers that house a searchable database that might be construed as a registry of guns".
Source - from 7:00 -> 8:00 in the video.
Not sure if this is just for govmt institutions, or if it also includes private companies/citizens owning such.
In the US it would seem that to get the same result, a perp would have to steal several tonnes of paper from a warehouse, then manually process it all.
> just size the links appropriately so there is no significant congestion.
If the links are dynamically varying their properties, that gets quite hard, I imagine. QUIC is also being looked at in the mobile space, where RF conditions change rapidly and load also influences the bandwidth available to the user.
> Just why the 5G edge needs AI wasn't explained to The Register.
There's a variety of use cases in the real-time optimisation category that vendors are happy to badge as "AI enabled" to make them sound more modern and cutting edge.
Plus the possibility of deploying a model to be trained/tested/validated on bits of the network without having to backhaul loads of data to a central server (farm).
a) radio parameter optimisation (handover thresholds, antenna tilt adjustments)
b) mobile geolocation
c) traffic analysis/prediction
And that's all in the existing traditional fields, which can further expand when you start playing with multi-user-MIMO.
Regular analytical techniques can require a lot of modelling expertise - AI can possibly simplify this, assuming it's deployed correctly (and no doubt mistakes will be made...)
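As a toy illustration of the "regular analytical technique" end of the scale - a one-step cell-load forecast by exponential smoothing, in Python (the load figures are invented):

```python
def ewma_forecast(series, alpha=0.3):
    """One-step-ahead forecast via exponential smoothing - the kind of
    simple analytical baseline an 'AI-enabled' optimiser has to beat."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

cell_load = [40, 42, 55, 70, 65, 80, 90]  # made-up busy-hour loads (%)
print(f"next-interval load estimate: {ewma_forecast(cell_load):.1f}%")
```

Even something this crude needs a hand-picked smoothing constant - the pitch for ML here is learning such parameters from the data instead.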
Indeed, I'm shocked that this has been granted (although not shocked when considering other things granted in the past!).
I wouldn't expect this to survive a challenge, except perhaps on very specific narrow claims that are buried in the patent somewhere but don't leap out at all as obvious.
There are claims for:
a) eye-tracking (used in military pilot HUDs, IIRC)
b) facial expression tracking (an obvious extension?)
c) body tracking of whatever is in field of view of device (as per (b))
d) local environment sensors (duh, in a phone)
e) authentication methods (that won't work for everyone, but phones do this so prior art should cover it)
f) Notifications (isn't that what a HUD is partly for?)
g) neural input sensors in the hat (probably lots of prior art for that)
h) bone conduction output for audio (exists already)
i) computing power in the hat structure to deal with all these sensors (duh)
j) flip-away display (obvious extension to existing stuff)
Can't be arsed to read further into the detail beyond the claims.
Prior art indeed.
That last link, to me, shoots this patent squarely in the face as prior art, although would need to examine filing date and date of this picture to determine who might have had the idea first... (EDIT: quick check has FB filing in 2019, "hacked-hat" in 2012, so he's in the clear!)
Biting the hand that feeds IT © 1998–2022