Lots of Nvidia staff have seen their stock options vest; now they're multi-millionaires, they're cashing out at a decent peak.
Simple.
595 publicly visible posts • joined 20 Sep 2007
Apparently the flight plan has to go through two other systems before it reaches NATS, and it would have been reprocessed by them as well. So it's M2M data, not user-supplied data, though it may have contained something out of bounds for NATS. Just remember that the same flight plan also went to every other system in Europe at the same time, and it was only NATS that fell over.
It's all well and good to call your industry colleagues incompetent, until you f-ck up and everyone points at you.
Someone made a mistake; most likely they made it years ago and didn't anticipate an input that the validation checking would only encounter years later. Calling people 'incompetent' doesn't help.
Huh?
It's utterly irrelevant to the Linux maintainers whether the submitter is doing it to meet internal KPIs or not. It's kernel maintenance that probably no one else wanted to do, and it serves to make the kernel better.
If the author happens to be doing scut work or busy work and it produces something useful for the world? Great! Is it even Huawei's loss?
Don't get me wrong, Huawei has issues, but in this context it seems more like the maintainer has an axe to grind. Want to talk about people being forced to work too hard? Let's also look at Samsung and others.
I have to agree: almost none of our cloud-centric developers are writing code that is compiled anymore, and the few that are usually do it in Java.
With the rise of "serverless", Node.js and Python, it's clear that not much needs to be compiled. Yes, Rust still needs to target a platform, but with LLVM intermediates you aren't taking on a major risk when writing it.
Disclosure: I previously worked for TT, but not in Broadband.
Many of their issues come from the fact that they are a mishmash of different companies that were absorbed, and integrating those different businesses over the years has created massive legacy. A more significant part of their issues with provisioning and maintenance comes down to Openreach and the way they are integrated with them. The installation of my staff incentive phone line was a farce, and it all started to go wrong with an Openreach engineer failing to check the right boxes; rectifying that clerical error then took too long and was too hard.
TT had invested substantially in their core network and its capacity was astounding: they had the fastest DNS cluster for customers in the market, and the fibre core capacity was good. One area they struggled with was, again, BT: they are largely dependent on BT, and even where they have their own kit in exchanges they often need to use BT's fibre to reach them. Some BT exchanges are just child exchanges of larger ones, so they only have limited upstream data capacity. That being said, TT's ADSL2 kit was ancient and desperately in need of retirement.
Embedding things in a PCB is really, really hard, and the PCBs are made in a different facility from the assembly. Firstly, embedding something in the substrate would be really hard and would require a completely different process at the facility where the boards are made. Then it would be totally random where those boards were inserted, and they would possibly have to bypass the normal quality checks on arrival. They could only be targeted if multiple people along the supply chain were involved in the conspiracy and being coordinated.
The PoE HAT is massively overpriced. Instead of reinventing the wheel, they should have just bought one of the existing PoE modules that are freely available on the market and left headers for it on the Pi (or built a cheap HAT around it).
I think this smells like an engineer designing in-house because they can, not looking at the market and finding the best value for the customer. "NIH"?
If they had been able to match DDR3 on performance and released it to other SoC vendors, there was a chance to make big inroads into embedded devices like STBs and tablets, because those would quite quickly adapt to a unified memory architecture and their storage needs are relatively small. A TV manufacturer would easily buy a million-plus sets with 1GB of RAM each, and they don't have to be the fastest tech on the market.
Our business has a large part of its capacity in AWS, but we have it spread across two availability zones with 100% resilience: two identical, parallel systems designed to balance and scale to demand. If one goes out, a geographically distinct zone takes over with customers hardly noticing.
We can't easily expand to different cloud providers because some of the technology we use gets complicated at that level. But as our business expands we are looking harder at how many eggs we have in which baskets, and at what would happen if the whole of AWS went down (which probably won't happen; normally it will just be a geographical issue).
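The active-active pattern described above can be sketched in a few lines. This is a toy simulation, not our actual setup: the zone names, the round-robin balancing and the request counts are all illustrative assumptions.

```python
# Minimal sketch of active-active failover across two availability zones:
# requests are balanced across healthy zones, and if one zone goes dark
# the survivor absorbs all the traffic. Names and numbers are illustrative.
import itertools

class Zone:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.handled = 0

class Balancer:
    """Round-robin across healthy zones; if one fails, the rest absorb it."""
    def __init__(self, zones):
        self.zones = zones
        self._rr = itertools.count()

    def route(self, request):
        healthy = [z for z in self.zones if z.healthy]
        if not healthy:
            raise RuntimeError("no healthy zones left")
        zone = healthy[next(self._rr) % len(healthy)]
        zone.handled += 1
        return f"{request} served by {zone.name}"

zones = [Zone("zone-a"), Zone("zone-b")]
lb = Balancer(zones)
for i in range(4):            # normal operation: traffic balanced across both
    lb.route(f"req-{i}")
zones[0].healthy = False      # simulate one zone going dark
for i in range(4, 8):         # the surviving zone absorbs everything
    lb.route(f"req-{i}")
```

In a real deployment the health check and routing live in the load balancer or DNS layer rather than application code, but the shape of the failover is the same.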
With the demo I saw a few years ago, they didn't need to know the position of the transmitter, but it helped. If they didn't know the location, they just needed a few accurate references (e.g. starting GPS coordinates) and could use those to locate the other transmitters; when the GPS signal was lost they had enough information about the 'signals of opportunity' to carry on with some accuracy.
WiFi might give you a location, but it is never accurate. SOO can be used for accurate positioning in areas that GPS can't reach. I saw a demo a few years ago by a UK military research company where the researcher was able to show continuous location tracking inside and outside a building. It spotted a single GPS satellite as he passed a window and otherwise used local terrestrial masts for its references.
I was amazed at the time, and I'm glad to see someone doing it again a few years later (*cough* it's not new *cough*), but I'd really like to see someone commercialise the concept.
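As a toy illustration of how a handful of known references can pin down a position, here is a much-simplified 2D trilateration sketch. It is a stand-in for the far more sophisticated signals-of-opportunity processing described above; the anchor coordinates and distances are made up.

```python
# Toy 2D trilateration: given three reference transmitters at known
# positions and the measured range to each, solve for the receiver's
# position. Subtracting the circle equations pairwise gives a linear
# 2x2 system in (x, y). Anchor positions here are illustrative.
import math

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # From (x - xi)^2 + (y - yi)^2 = di^2, subtracting pairs of
    # equations cancels the quadratic terms, leaving A @ [x, y] = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))   # ≈ (3.0, 4.0)
```

Real SOO systems have to estimate the transmitter positions themselves and cope with noisy, multipath-corrupted ranges, which is where the hard work lies.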
I was thinking that myself: the cost of a 1Gbps link is probably well over $2m per year. Sure, it will take a long time for $2m a year to amortise $150m, but when you consider the military value and the potential improvement in healthcare, or just GDP, it will pay for itself.
I think the article is rather unfair to the locals, there are probably parts of rural Britain which weren't much better before the 70s, Margate for example.
My first thought when I saw this? How much did the launch event cost, and couldn't that money have been spent saving someone's life? I don't remember the Gates Foundation having big press launches for its initiatives except when it wants to raise public awareness of an issue, not to announce it's spending money.
Vanity much? If it were true altruism they wouldn't need a PR launch.
People think that the public IXPs are important, but the majority of most large ISPs' traffic doesn't pass through IXPs. Most traffic is routed through private peering with the likes of Google, YouTube, Netflix and the CDN providers. Sure, BT has 350Gb/s of LINX peering, but I can bet that their total traffic volume is much higher than that.
This is also why I find the Netflix rant strange: they only have 80Gb/s of LINX peering because most of their connectivity is private peering, and the rest of Netflix's content is delivered by on-net appliances within the ISPs themselves (providing hundreds of Gb/s of capacity).
It seems a strange argument from Netflix when the majority of their traffic doesn't even pass through the IXPs. Any ISP with a decent number of Netflix users will be running a Netflix CDN appliance inside their network. You might argue that this is necessary because of the limitations of the IXP system, but in reality, for any larger ISP, most traffic goes over private peering to Google, Amazon, Microsoft, Netflix and the various CDN providers. Public internet exchanges are just there for the sizeable minority of more unusual routes.
https://openconnect.itp.netflix.com/deliveryOptions/
Reminds me of the Epiphany-IV 64-core microprocessor by Adapteva; sure, it has fewer cores, but the architecture seems similar. The problem with the Adapteva, and I imagine a similar problem for this 1,000-core design, is that the on-core memory is tiny, so it is difficult to fit useful workloads into it. You end up spending loads of time on an external CPU with a bigger core scheduling the tasks onto the tiny cores. The architecture is really hard to programme for.
http://www.adapteva.com/products/e64g401/
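The host-side scheduling overhead described above comes from having to tile the work so each piece fits in a core's tiny local store. A rough sketch of the pattern, with the 32KB local store and 64-core count used purely as illustrative figures:

```python
# Sketch of host-side scheduling for a many-core chip whose cores have
# only a tiny local store: the host must split a large buffer into tiles
# that fit per-core memory, dispatch them in rounds, and gather results.
# The 32 KB local store and 64-core count are illustrative assumptions.
LOCAL_STORE = 32 * 1024                        # bytes of on-core memory
ELEM_SIZE = 4                                  # 32-bit elements
TILE_ELEMS = LOCAL_STORE // (2 * ELEM_SIZE)    # input + output must both fit
NUM_CORES = 64

def core_kernel(tile):
    """Stand-in for the work one tiny core would run on its local tile."""
    return [x * x for x in tile]

def host_run(data):
    """Host loop: far more tiles than cores, so cores are fed in rounds."""
    tiles = [data[i:i + TILE_ELEMS] for i in range(0, len(data), TILE_ELEMS)]
    results = []
    for start in range(0, len(tiles), NUM_CORES):
        batch = tiles[start:start + NUM_CORES]          # one tile per core
        results.extend(core_kernel(t) for t in batch)   # dispatch + collect
    return [x for tile in results for x in tile]

data = list(range(100_000))      # ~400 KB: far bigger than any local store
out = host_run(data)
```

The point is that the tiling, dispatch and gather all burn host cycles and memory bandwidth, which is exactly the overhead that makes these architectures hard to keep busy.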
It's not just about dark fibre; there is lots of under-utilised fibre which could be exploited. I worked for one company where we found we had just a 2Mb E1 going to one site, and we wondered what else we could use the fibre for.
Let's look at what other wavelengths could be better utilised around the country and put them on the market.
We also need to explore more innovation in nuclear instead of relying on designs from the era of the atomic bomb. I like the Thorium designs and I don't think they've been given nearly enough investment. The Indians and Chinese are starting to invest in Thorium reactors and I think it would be really good if we didn't get left behind.
This chip really suits IPTV providers, streaming companies and broadcasters. Previously you bought an appliance costing tens of thousands of pounds to do transcodes using ASIC chips; they provided excellent quality, but the cost hurt. Many encoder and transcoder companies have been moving to software and cloud solutions in recent years, which has hit the video ASIC market. With chips like this providing over a dozen transcodes in a 45W TDP for $450, I can see it being very attractive in my line of work.
The VDI business is interesting, but look beyond that to the encoding space and there will be people jumping on this chipset when it hits the street.
@Mage
There are a variety of different DSL bandwidths/profiles depending on the customer's need. There is, however, a 2Mbps/2Mbps profile called SDSL (and there is also SHDSL).
Also, ADSL2+ Annex M allows for uplinks of up to 3.3Mbit/sec, <sarcasm>but what someone would want with that much bandwidth is beyond me.</sarcasm>
2.6% of defectors were apparently in military service at the time of their departure from the North.
https://en.wikipedia.org/wiki/North_Korean_defectors#South_Korea
http://www.reuters.com/article/us-southkorea-northkorea-defector-idUSKBN0OV04W20150615
http://edition.cnn.com/2012/10/06/world/asia/north-korea-defector/
There is plenty of spectrum; it just needs operators to use it better. They need to invest more in femtocells and other small-cell architectures to off-load capacity at a local level.
In my view 5G is technology m*sturbation; operators need to work harder with what they have.