Re: IoT always the weakest link in the network....
So there is less space in space than we initially thought? Do we have the space needed in space to make more space if we do run out of space?
Depends if you go Church of Satan or the Satanic Temple.
Satanic Temple is the more mainstream option and tend to go with the following tenet for their vaccination policy:
"One should strive to act with compassion and empathy toward all creatures in accordance with reason."
The Church of Satan is much smaller and their policy on vaccination is very much down to individual choice - but with the caveat of not causing harm to others unless you are prepared to be destroyed by them, so arguably that does support vaccination...
For anti-vax you are probably looking for the evangelical nationalists, but they all paint Satan as the bad guy while they do whatever they want. Without any acknowledgement of the irony...
"... than poorly heated Legionnaires' disease coffee made from a warm tap."
How can you get Legionnaires' from a tap that's been broken since 2017?
And if you're fast enough to get to it in the few seconds between the maintenance people "fixing" it and it breaking again, I suspect whatever gave you superhuman speed likely also gave you an improved immune system.
Having supported Adobe/Corel products on Windows and Macs in the '90s, the differences were night and day in terms of what you could produce.
Windows was fine as long as you didn't run into memory problems (i.e. you weren't doing anything large) and didn't require clip art or fonts. And even support for things like large capacity removable storage (i.e. Zip drives) was significantly more reliable on Macs. And if you wanted to get material printed externally, the printing bureau was probably using a Mac, so guess which option didn't require IT support.
Over time those differences disappeared as Windows hardware overtook the PowerPC platforms, and by the time I next had to deal with mixed Windows/Mac environments it was 2007 and the differences between the platforms were largely down to user familiarity with an OS rather than genuine differences - it was not uncommon to see high end Windows servers doing the heavy lifting with a mixture of Macs/Windows boxes scattered through studios depending on user choice. That presented other issues, but they weren't unsolvable.
Being consistently the worst may give the impression that it was unchanging (particularly if you used competitors and just wished it would do X "that" way - you could never accuse Notes of being just another clone...), but there were changes.
Honest - maybe even some that were positive. Oh...those were bugs. Ignore me.
Here's Mr Gates' key statement about planting trees:
“It has obvious appeal for those of us who love trees, but it opens up a very complicated subject ... its effect on climate change appears to be overblown.”
30 million trees can capture the CO2 produced by 100,000 "average" people on this planet.
The UN believes that global population growth over the next 10 years will be 1.2 billion people. So 360 bn additional trees to offset population growth before we start looking into an overall reduction.
For the UK's part, the Tories are proposing an estimated 50 million trees a year or 0.5bn over 10 years for a population rise of 3.5 million in the same period. Approximately half the amount required.
Realistically, the UK would need a population decline of 3.5 million and planting 50 million trees a year to have a real effect. Or just the population decline...the trees are a rounding error even if they look pretty.
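For anyone who wants to check the back-of-an-envelope sums, here's a minimal sketch (Python, purely illustrative) using only the ratio quoted above - 30 million trees offsetting the CO2 of 100,000 people - plus the UN and Tory figures already mentioned:

TREES_PER_PERSON = 30_000_000 / 100_000        # 300 trees per "average" person

# Global: UN-projected growth of 1.2bn people over the next 10 years
global_growth = 1_200_000_000
print(global_growth * TREES_PER_PERSON / 1e9)  # ~360 (billion additional trees)

# UK: 3.5m population rise vs a pledge of 50m trees a year for 10 years
uk_growth = 3_500_000
trees_needed = uk_growth * TREES_PER_PERSON    # ~1.05bn trees
trees_pledged = 50_000_000 * 10                # 0.5bn trees
print(trees_pledged / trees_needed)            # ~0.48, i.e. roughly half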
"If those vaccinated can still become carriers of the virus then the risk of the long gap before the booster jab is that it may make it more likely for vaccine-resistant mutations to develop."
While this is possible, the question is how do you deal with a limited supply of vaccine? Do you dose only the people you have two doses for or do you give lower levels of coverage to the largest group possible and hope that supplies increase to allow you to reduce the gap between doses?
Ethics/fairness suggest providing the greatest coverage is more important than "what if" risks, given the 40%+ mortality rates in the over-80s. Particularly when the "ideal" situation isn't possible.
As an approximation of the UK rollout (i.e. ignoring slow starts/acceleration as doses increase) at 2m doses a week, the "second dose within 12 weeks" approach delivers 65% coverage of the 13m target population within 7 weeks, with second doses within 8 weeks of the first and completion in 14 weeks. The "ideal" schedule delivers 65% coverage of the same population within 14 weeks and completion in 16 weeks. Combined with patient transport/vaccine storage issues/scaling up vaccine supply chains/contingency in the event of vaccine shortages, I'm not sure there is any real argument for an alternative to the current method.
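If anyone wants to poke at those week counts, here's a rough simulation (Python; a sketch, not gospel) assuming a flat 2m doses a week and a 13m target group, with the "ideal" schedule using a 3-week gap and second doses taking priority. It lands within a week of the figures above:

SUPPLY = 2_000_000        # doses per week (assumed constant)
POPULATION = 13_000_000   # target group
GAP = 3                   # weeks between doses on the "ideal" schedule

def first_doses_first():
    """Strategy A: dose 1 for everyone, then come back for dose 2."""
    week_first_done = -(-POPULATION // SUPPLY)      # ceiling division
    week_all_done = -(-2 * POPULATION // SUPPLY)
    return week_first_done, week_all_done

def short_gap():
    """Strategy B: dose 2 due GAP weeks after dose 1, second doses first."""
    firsts = seconds = backlog = week = 0
    given_first = {}                                # week -> first doses that week
    week_first_done = None
    while seconds < POPULATION:
        week += 1
        backlog += given_first.pop(week - GAP, 0)   # second doses now due
        give_second = min(backlog, SUPPLY)
        seconds += give_second
        backlog -= give_second
        give_first = min(POPULATION - firsts, SUPPLY - give_second)
        firsts += give_first
        given_first[week] = give_first
        if firsts == POPULATION and week_first_done is None:
            week_first_done = week
    return week_first_done, week

print("first doses first:", first_doses_first())    # roughly (7, 13)
print("3-week gap:       ", short_gap())            # roughly (13, 16)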
"The scientists' role is to provide the best estimates of numbers on the basis of knowledge available at the current time. Ditto the economists. "
The problem is that for most experts in any chosen field, there will be those with alternative views.
If we take Coronavirus, there have been scientists saying we aren't locking down fast enough based on very little evidence (that may later prove to be correct) and scientists that are saying we should lock down gradually as they want overwhelming evidence that it is the right thing to do.
While the tendency has been to blame the Government/SAGE, the media have an easy role: they can report multiple options and then point at whichever one is later viewed as most correct.
I'm not trying to give the government a free pass - I think they have clearly made mistakes all through the handling of this crisis. Where I am a little more lenient is in distinguishing between being presented with evidence, letting departments/ministers discuss it and producing a resulting action. Historically this would have taken weeks or months but is being done in days or sometimes even hours.
Saying things has always been easier than doing them, particularly when many layers of bureaucracy are involved. And that is backed up by countries with devolved power doing better than those with centralised power.
And baby-eating Satan worshippers would seem to be an oxymoron based on rule #9
"5G...they can only install Antenna masts where there is Fibre... your essentially sharing a Fibre connection"
It doesn't have to be fibre, there are other backhaul options - microwave/radio are common and capable of significant speeds.
But yes, you will be sharing that backhaul connection. As you would with pretty much every Internet connectivity option.
"We looked at AMD embedded processors as a replacement for our Intel processors."
Is this with the latest designs or older units? AMD's embedded options have trailed their desktop/server CPUs in both core versions and process. The 2020 embedded parts are the first ones that are likely to be competitive with Intel (there's a similar story on the mobile side).
i.e.
2020 generation embedded: Zen 2 cores/5th gen GCN GPU/7 nm
2018 generation embedded: Zen+ cores/4th gen GCN GPU/14 nm
2015 generation embedded: Excavator cores/3rd gen GCN GPU/28 nm
Of those, it's only really the 2020 versions that are likely to be competitive with Intel as they move to 7nm and start to outperform Intel 14nm+ or later chips.
And if you were testing against the 2015 generation chips, maybe AMD will give you a freebie 2020 version to make up for wasting your time.
I'm wondering about power as well - while the standard may support 4GB/s, there is also a lower 3GB/s option, versus SDXC maxing out at 1GB/s - which is itself much higher than you would typically see from an SDXC card reader.
The interface appears to be rated at 1.8W vs current cards that are under 100 mW.
Intel's Q4 server CPU sales were off the charts - up around 49%.
I know Nvidia contributed a chunk of that with their GeForce Now DCs, but I hadn't heard which of the other big cloud providers took the rest.
AWS/Azure/Google had all been delaying spending waiting for new chips - any evidence that it was AWS that won?
"I well remember the RISC anxiety at Intel when I worked there 35 years ago."
And when Intel moved from pure x86 CISC to x86 CISC instructions decoded to µops running on a RISC-like architecture with the P6, what happened to that anxiety? Sure, Intel hedged its bets with Itanium/VLIW but reality wasn't kind to that...
"the process that sees hordes of overpaid junior lawyers and accountants poring over every tiny detail of a business before going ahead with a merger or takeover."
In most accountancy or legal firms, overworked is more accurate than overpaid. The juniors do the donkey work (usually with significant amounts of travel and unpaid overtime), the senior managers are paid well to apply the whip to keep things going and the partners rake in the money and put the entertainment on expenses.
Oh...and the juniors are responsible for all mistakes.
"they can't _buy_ enough outsourcing capacity to catch up because it's all booked out."
To address outsourcing high-performance chip designs to another fab: it will likely take 1-2 years to port a working CPU operating at >2GHz from one fab to another.
Intel/TSMC/Samsung all have significantly different processes - a working design for one fab does not instantly translate into a working design at another fab, and if you are dependent on a design that is outside of the conservative fab design rules, you risk low yields and poor performance. For example, look at AMD/nVidia GPUs on identical TSMC processes where one card substantially under-performs or has low availability - typically the first GPU or largest GPU does not clock as highly as its rival because of design issues. A year later, we see a decent boost as the updated chip addresses the issue.
"Being able to replace a full rack of Intel with two AMD boxes is music to the ears of any data center professional."
The "rack full" of dual CPU Intel boxes MAYBE being replaced by "less than a rack full" of AMD boxes.
Licensing makes high core counts hard to justify for Enterprises unless they have applications (like HPC) that aren't licensed per core.
The high core count chips are power/cooling hungry, making it hard to sell them into cloud environments or enterprise blade chassis where density is more important. That's not to say moving from 20 servers per rack to 10 while lowering connectivity costs and increasing core counts doesn't make sense.
IO (typically storage) also kills your per-rack density - you are likely limited in the amount of IO you can deliver to each rack. So moving from 20 x 8-cores to 2 x 128 cores likely doesn't make sense unless you can avoid storage/IO/network/memory bottlenecks by scaling up connection speeds cost effectively (rough sums below).
Can you mix your new CPUs with your existing CPUs in a VM environment? Or do you need to start a new farm? That makes switching vendors hard unless your current environment is end of life and ditching it is an option. If you have a larger environment and replace a fixed percentage each year, changing may require considerable planning/budgeting.
And finally, what does it cost all up?
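To put some numbers on the licensing point, here's a quick sketch using the 20 x 8-core vs 2 x 128-core example above - only the server/core counts come from the comment, the per-core licence price is an entirely made-up placeholder:

old_servers, old_cores_each = 20, 8        # "20 x 8-cores" per rack today
new_servers, new_cores_each = 2, 128       # "2 x 128 cores" replacement

old_cores = old_servers * old_cores_each   # 160 cores
new_cores = new_servers * new_cores_each   # 256 cores

per_core_licence = 1_000                   # hypothetical cost per core per year

print("old:", old_servers, "boxes,", old_cores, "cores, licences:",
      old_cores * per_core_licence)
print("new:", new_servers, "boxes,", new_cores, "cores, licences:",
      new_cores * per_core_licence)

# 18 fewer boxes but ~60% more cores to license - per-core-licensed enterprise
# workloads don't automatically benefit, while per-node/per-socket licensed
# workloads (HPC etc.) see the full consolidation win.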
If any of the Super7+1 (or is that +2 now that nVidia are trying to compete for mobile gaming?) announce major deals with AMD in 2020, AMD will likely double their server market share.
TL;DR: high core counts look great, but AMD's pricing, power usage, low-to-mid range core counts and being able to deliver are more likely to win them this round.
"History is important, I'm not suggesting otherwise, but using it as a barometer against today's employment opportunities is bonkers."
Then I would suggest re-examining post-WWII history (you can go back further if you want, but this is likely to be sufficient) to see just how far we have come and how many jobs have changed or disappeared completely.
"We all know that the roles needed in society change and automation and AI (whatever that means in reality) drives this."
Yes...and in general, society as a whole has improved on the back of change. And generally change has improved both the lives AND jobs of the less well off. Look at jobs that had high accident rates 10-50 years ago and how they are done now - machines have replaced people and the people manage the machines, with a subsequent increase in rates of production and fewer injuries.
You look and see Deliveroo and Amazon Fulfillment Centres, and I would suggest both are likely to become more automated in the future as the human currently acts as a robot. And there are questions around safety/injury.
What I see is an increasing requirement for human education and knowledge and the potential for that to drive future change.
Will people still be left behind by this? Yes, and the aging population is likely to require a lot more social care, delivered as cheaply as possible. At least until we figure out how to automate that.
Will society as a whole benefit? I believe so. Based on history. Trying to retain the status quo has been where society crumbles and revolution is needed to move forward again. When there is change, those at the bottom have something to strive for. And technology revolutions tend to be a lot gentler than political revolutions.
"It's somewhat disturbing to see how comparisons are being made between working and living conditions today and a 100+ years ago, as if that means anything."
We navel gaze at the future where AI will automate people out of jobs while installing computer systems that allow one person to do the tasks of many or write code or reports that allow businesses to operate more efficiently.
So yes, history means a lot...those that are aware of history are aware of the significant changes that have happened over the last 200 years to get the standard of living above "you'll probably die soon" to "you better plan for old age, you've got a lot of years ahead of you".
I suspect it may also be part of the current generational gap - those brought up in cities where everything is available as long as you have money, versus those from rural areas (or from before much of the current service industry was available) who can make do with alternatives, or without...
And yes, I'm an old grumpy git before my time.
<random dump of useless information>
Before PCI-bus IDE HBAs, ISA IDE HBAs required an interrupt per channel, which limited the number of hard drives you could use (you could usually manage two channels of 2 drives each once you added a few network cards), and generally multiple hard drives were the only way of expanding capacity in the days when the biggest hard drives weren't big enough. From memory, the Netware IDE drivers weren't great either, although that may have been a hardware bottleneck rather than just a driver issue.
SCSI HBAs allowed up to 14 drives per HBA... more than enough. And the drivers were better. If you could get updates.
</random dump of useless information>
Well, Lynch probably made a few million when he sold his company to HP. Looks like it was ~£500 million (source: https://www.computerworld.com/article/3416571/update--mike-lynch-leaves-hp-autonomy.html). It should be relatively easy to find based on shareholdings and bonuses from the sale but I can't find a better source...
You'd never believe how much HP paid for Autonomy...
But what about all the good work the board did with share buybacks to prop up that share price and limit the decline to only 26 per cent? Anyone? Please?
*tumbleweeds roll by*