* Posts by CheesyTheClown

779 publicly visible posts • joined 3 Jul 2009

In a touching tribute to its $800m-ish antitrust fine, Qualcomm tears wraps off Snapdragon 865 chip for 5G phones

CheesyTheClown

Cheers

I often work together with large enterprises helping them train their IT staff in wireless technologies. And the message I send regularly is that there is absolutely no value in upgrading their wireless to new standards rather than growing their existing infrastructure to support better service.

I have recently begun training telecoms on planning for 5G installation. And the message I generally send is "people won't really care about 5G" and I have many reasons to back this up.

Understand that so long as pico/nano/femto/microcells are difficult to get through regulation in many countries, Wifi will continue to be a necessary evil within enterprises and businesses running operations where wireless is particularly difficult to deploy. We need Wifi mostly for things like barcode scanners and RFID scanners within warehouses. An example of this is a fishery I've worked with where gigantic, grounded metal cages full of fish are moved around refrigerated storage all day long. Another is a mine shaft where the entire environment is surrounded by iron ore. In these places, wifi is needed, but there's absolutely no reason to run anything newer than Wireless-N except for availability. AC actually costs less than N in most cases today, but there's no practical reason to upgrade. 4x4 MIMO 802.11n is more than good enough in these environments.

5G offers very little to the general consumer. It is a great boon for IoT and for wireless backhaul networks, but for the consumer, 5G will not offer any practical improvements over LTE. 600MHz 5G is a bit of an exception though. 600MHz 5G isn't particularly fast... in most cases it's about the same as LTE. Its primary advantage is the range. It will be great for farmers on their tractors. In the past, streaming Netflix or Spotify while plowing the fields has been unrealistic. 5G will likely resolve that.

People within urban environments are being told that 5G will give them higher availability and higher bandwidth. What most people don't realize is that running an LTE phone against the new 5G towers will probably provide the exact same experience. 5G will bring far more towers within urban areas, and as such, LTE to those towers will work much better than it does to the 4G towers today. 4G is also more than capable of downloading at 10 times higher bandwidths than most users consume today. The core limitation has been the backhaul network. Where 4G typically had 2x10Gb/s fibers to each of 4 towers within an area, 5G will have 2x100Gb/s fibers (as well as a clock sync fiber) to 9 towers within the same area. This will result in much better availability (indoors and out) as well as better bandwidth... and as a bonus, it will improve mobile phone battery life substantially, as 4G beamforming along with shorter distances to the towers can cut the phone's radio power consumption by as much as five times compared to the current cell network.
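Spelling out that backhaul arithmetic with the figures above, as a quick sketch:

```python
# Aggregate backhaul per area, using the tower counts and fibre speeds from the paragraph above.
g4_towers, g4_gbps_per_tower = 4, 2 * 10      # 4 towers, 2x10Gb/s fibre each
g5_towers, g5_gbps_per_tower = 9, 2 * 100     # 9 towers, 2x100Gb/s fibre each

g4_total = g4_towers * g4_gbps_per_tower      # 80 Gb/s per area
g5_total = g5_towers * g5_gbps_per_tower      # 1800 Gb/s per area
print(f"4G backhaul per area: {g4_total} Gb/s")
print(f"5G backhaul per area: {g5_total} Gb/s ({g5_total / g4_total:.1f}x)")   # 22.5x
```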

5G has no killer app for the consumer. 3G had serious problems across the board since 3G technologies (UMTS, CDMA, etc...) were really just poor evolutions of the classical GSM radio design. LTE was "revolutionary" in its design and mobile data went from "nice toy for rich people" to "ready for consumption by the masses". 5G (which I've been testing for over a year) doesn't offer anything of practical value other than slightly shorter latency which is likely only to be realized by the most hardcore gamers.

I certainly have no intention of upgrading either my phone or my laptop to get better mobile and wireless standards. What I have now hasn't begun to reach the capacity of what they can support today. The newer radios (wifi6 and 5G) will make absolutely no difference in my life.

If you have anyone who listens to you, you should recommend that your IT department focuses on providing wireless network security through a zero-trust model. That way you can effectively ignore wireless security itself and, as you mentioned, use VPNs or fancy technologies like Microsoft DirectAccess to provide secure, inspected, firewalled links for wireless users. They should focus on their cabling infrastructure as well as adding extra APs to offer location services for things like fire safety and emergency access. They shouldn't waste money buying new equipment either; used APs are 1/10th the price. In a zero-trust environment, you really don't need software updates, as the 802.11n and 802.11ac standards and equipment are quite stable today. They should simply increase their AP count, improve their cabling so the APs within a building are never cabled into one place (a closet can catch fire), and install redundant power to support emergency situations. Use purely plenum-rated cabling. Support pseudo-MAC assignment so that people not carrying wireless devices can be located by signal disturbance during a fire.

Once this system is operational, it should live for the rest of the lifespan of your wifi dependence. I can safely believe that within 5-10 years, most phones from Apple, Samsung, etc... will ship without Wifi as its presence will be entirely redundant.

Also for 5G, inform people that they should wait for a phone that actually gives them something interesting. Spending money on 5G for personal communication devices is just wasteful and, worst of all, environmentally damaging. If the market manages to sell 5G as a "killer app", we stand to see over a billion mobile phones disposed of as people upgrade. Consider that even something as small as a telephone becomes a disaster for this planet when you make a pile of a billion of them.

5G will be great for IoT, though it's not so much 5G itself as the proliferation of NB-IoT that is very interesting. $15 or less will put an eSIM-capable 5G modem module into things like weather sensors (of which there are already tens of millions out there), radar systems, security systems, etc... We should probably see tens of billions of NB-IoT devices out there within the next few years. A friend of mine has already begun integrating it into a project of hers for which she has funding to deploy over 2 million sensors around Europe.

No... you're 100% correct. Wifi's death knell has sounded. It will be irrelevant within 5-10 years, and outside of warehouses and similarly radio-harsh environments, it is very likely it will be replaced by LTE, NB-IoT and 5G.

And no... 5G on a laptop is almost idiotic if you already have LTE. You should (with the right plan) be able to do 800Mbit/sec or possibly more with LTE. Even when running Windows Update, you probably don't consume more than 40Mbit/sec.

You're praying your biz won't be preyed upon? Have you heard of our lord and savior NVMe?

CheesyTheClown

Why oh why

If you’re dumping SAS anything in favor of something else, then please get a distributed database with distributed index servers and drop this crap altogether.

Hadoop, Couch, Redis, Cassandra, multiple SQL servers, etc. all support scale-out with distributed indexing and searching, often through map-reduce methodologies. The network is already there and the performance gain is often substantially higher (orders of magnitude) than using old SAN block storage technologies.
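As a minimal single-machine sketch of the map-reduce idea those stores use for distributed indexing and search (the shard contents below are made up purely for illustration):

```python
# Each "shard" maps its own records locally, then the partial results are reduced
# into one answer. Real systems run the map step on the node that holds the shard.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

SHARDS = [  # stand-ins for data partitions that would live on separate nodes
    ["error", "ok", "ok", "error"],
    ["ok", "ok", "timeout"],
    ["error", "timeout", "ok"],
]

def map_shard(records):
    """Map step: each shard counts its own records locally."""
    return Counter(records)

def merge_counts(a, b):
    """Reduce step: merge the partial counts from two shards."""
    return a + b

if __name__ == "__main__":
    with Pool() as pool:                       # parallel workers stand in for nodes
        partials = pool.map(map_shard, SHARDS)
    totals = reduce(merge_counts, partials, Counter())
    print(totals)   # Counter({'ok': 5, 'error': 3, 'timeout': 2})
```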

Or, you can keep doing it the old way and spend millions on slow ass NVMe solutions

'Happy to throw Leo under the bus', Meg Whitman told HP after Autonomy buyout

CheesyTheClown

How could this company ever be worth that much?

There was a time in history when HP was famous as a technical innovator who filed more than enough patents that they could use pretty much any technology they wanted and make deals with other companies to trade tech. They would engineer and build big and amazing things and if they panned out, they got rich, if they didn't, they'd sell them off.

Then the suits came in

HPe has become nothing more than an Acquisitions and Mergers company. They don't make any new technology. They "me too" a crap load of tech at times. But regarding innovation... check out HPe's labs/research website. Instead of actual innovation, it reads like a list of justifications for why they shouldn't invest money in research. I mean really... they wrote one whole paragraph on why they won't waste money on quantum computing and it's basically "We are going to prove P=NP and make a new way of saying it so if we can solve one NP problem, it will solve all NP problems."

There have been a bunch of CEOs that have converted HP from being a world leader in the creation of all things great in technology to being a shit company which spends $8 billion on a document store and search engine that "might be big one day".

Cooksie is *bam-bam* iGlad all over: Folk are actually buying Apple's fondleslabs again

CheesyTheClown

Why would you buy a new one anymore?

I have a stack of old iPads laying around. I have two first-generation iPads and about 10-12 more after that. My wife uses hers... the kids stopped using theirs when they got telephones big enough to render the iPads redundant, since they also have PCs.

I did get my wife a new iPad for Christmas... we actually don't know why... but I suppose it had been 2 years since the last iPad was bought... so I got her that.

To be honest, it used to be that everyone needed their own iPad... but these days, I think mom and dad just need big phones and the kids need maybe an iPad mini or so. There's no need to constantly upgrade... they already have more features than anyone will ever use. Now, it's more like "Wow... look Apple is still making iPads... at least I can buy a new one if the old one breaks... if I actually need it for something"

I used to see iPads all over every coffee shop. These days, there's laptops and telephones... but there doesn't seem to be any iPads anymore.

NAND down we goooo: Flash supplier revenues plunged in first quarter

CheesyTheClown

Re: Yay!

I thought the same and then thought... why bother?

I used to spend tons of money building big storage systems... even for the house... I have a server in the closet I just can't force myself to toss which has 16TB of storage I built in 2005. These days, 500GB is generally more than enough. 1TB for game PCs.

At the office, I used to buy massive NetApp arrays... now that I have moved to Docker and Kubernetes, I just run Ceph, GlusterFS, or Windows Storage Spaces Direct and I use consumer grade SSDs.

We are soooooooo far past what we actually need for storage that it's silly. To expand a Ceph cluster by a terabyte of low cost SSD, it requires 3TB of raw storage, which is under $300 now... and it gives us WAY better redundancy than using an expensive array. And to be fair... since almost everything is in the cloud these days, you could probably run an entire bank on 2-4TB of storage for years. It's not like a database record takes much space. Back in 1993, we ran over 100 banks on about 1GB of online storage. I'm almost sure you can run 1 modern bank on 4000 times that much. :)
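A quick sanity check of that arithmetic, assuming Ceph's default 3-way replication and a placeholder SSD price of roughly $100/TB:

```python
# With a replicated pool of size 3, adding 1TB of usable capacity needs ~3TB of raw SSD.
replication_factor = 3        # Ceph's default replicated pool size
usable_tb = 1
price_per_raw_tb = 100        # placeholder figure, not a quote

raw_tb_needed = usable_tb * replication_factor
print(f"Raw capacity needed: {raw_tb_needed} TB")                  # 3 TB
print(f"Approximate cost: ${raw_tb_needed * price_per_raw_tb}")    # ~$300, as above
```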

As for performance... once you stop running VMware and you switch to something... well... anything else, you just don't need that much performance. I guess video games would load faster, but ask yourself when was the last time you actually thought "I need a faster hard drive".

Former unicorn MapR desperately seeking cash as threat of closure looms

CheesyTheClown

Re: The software is quite good

Everyone always talks about Betamax as if it was infinitely better than VHS. As someone who thoroughly understands the physics, mechanics, electronics, etc... of both Betamax and VHS from the tape all the way through the circuitry up to the phosphors, I'll make it clear... yes Betamax was better... but the difference was negligible. The two formats were so close to being the same that it barely mattered... and when transmitting the signal over composite from the player to the TV which ... well to be honest was 1950s technology (late 1970s TV was still 1950s tech... just bigger)... it was impossible to tell.

S-Video and SCART (in Europe) made a slightly noticeable difference. Using actual component cabling could have mattered, but neither Betamax nor VHS could take advantage of that.

The end result was simple... when playing a movie recorded on Betamax on a high end 1970s or early 1980s TV next to the same movie recorded on a VHS tape, you had one big ugly player next to another and the only possible difference you could give the consumer was "Beta is more expensive because the quality is better" and of course... it wasn't... at least not enough to notice. Often you could sell the consumer on audio quality, but on 1970/1980s era speakers and hifi, you wouldn't notice until you were far past the average consumer threshold.

Betacam SP was actually substantially better, but by then it no longer mattered.

I used to have 400 Betamax decks and 600 VHS decks in my office... all commercial grade duplicators with automatic tape changers. The Betamax decks existed for collecting dust. The VHS decks were constantly being serviced because they were running 24/7. I spent 10 years of my career in video technology development (I am a codec developer at heart, but I know analog too). In 10 years of working with studio/commercial grade broadcast and duplication equipment, and knowing what I know about the technology, if I saw Betamax for $120 and VHS for $110, I'd still buy VHS.

CheesyTheClown

Re: @CheesyTheClown ... Burned $300 million?

Thanks for commenting.

I honestly had no idea how MapR would sell in the first place. The problem is... it was a great product. But it was also expensive. And no matter how good your sales team is, the website is designed to scare away developers.

I just visited the site, and I'm pretty sure I've been in multiple situations where the technology could have been interesting to me, but the website makes it look like it's too expensive to use in my projects. I can use tools that cost $10,000 or less without asking anyone. But they have to be purchasable without having to spend another $10,000 on meetings where people show Gartner magic quadrants.

I can't use any tools where I can't just pop to the web site, buy a copy on my AMEX in the web shop and expense it. When we scale, we'll send it to procurement, but we're not going to waste a ton of money and hours or days on meetings and telephone conferences with sales people who dress in suits... hell, I run away without looking back when I see sport jackets and jeans.

Marketing failed because MapR is not an end user program and developers can't make the purchasing decisions. The entire front end of the company is VERY VERY developer unfriendly. Somehow, someone thought that companies all start off big and fancy. My company is a top-400 and we start projects as grass-roots and once we prove it works, we sell the projects at internal expos and the management chooses whether to invest more in it or not. MapR looks expensive and scary and difficult to do business with.

This is why we do things like always grow everything ourselves instead of buying stuff that would do it better. Everyone is trying to sell to our bosses and not selling to the people who actually know what it is and what it does.

I wish you luck in the future.. now that I've looked a little more at you guys, I'll check the website occasionally when I go to start projects. If the company starts trying to sell to the people who will actually buy it (people like me) instead of to our bosses... maybe I'll buy something :)

CheesyTheClown

Burned $300 million?

$200,000/year times 2 is $400,000 for the inflated cost of employing one overpaid SV employee. Multiply that by 200 employees. That's $80 million a year for 200 employees... to develop and market a product.

Now... let’s assume that the company actually received $300 million in investments.

Was there even one person in the whole company actually doing their job? And was that job spending money with no actual consideration for return on investment?
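Spelled out as a quick sketch (figures are the ones assumed in this post):

```python
# Burn-rate arithmetic: 200 employees at a fully loaded ~$400k each is ~$80M/year,
# so $300M of funding only covers a few years of runway.
employees = 200
fully_loaded_cost = 400_000            # $200k salary x2 overhead, per the post

burn_per_year = employees * fully_loaded_cost       # $80,000,000
funding = 300_000_000
print(f"Annual burn: ${burn_per_year:,}")
print(f"Runway: {funding / burn_per_year:.1f} years")   # ~3.8 years
```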

Planes, fails and automobiles: Overseas callout saved by gentle thrust of server CD tray

CheesyTheClown

Re: Ah the old push-out-the-cd-tray trick

Why not dump random data to the PC speaker?

'Evolution of the PC ecosystem'? Microsoft's 'modern' OS reminds us of the Windows RT days

CheesyTheClown

Presented at build and only interesting to techies?

Let me get this straight... you're complaining that technologies presented at Build... Microsoft's annual developers conference... presented tools that are interesting to developers?

Ok... so... if you were to present tools that would be life changing and amazing... primarily to developers... which conference would you recommend presenting them at? And if you want the developers and techies who will use them to be present... and actually buy tickets to the event... are we still against using Build for this?

I could barely read the rest of what you wrote after that... I was utterly stuck... totally lost... wondering... what in the name of hell is this guy talking about.

So... let's try some stuff here.

Windows isn't built the way you seem to think it is. This is why Microsoft makes documentation. You can read it instead of just headlines.

Windows these days is built on something you might understand as containers... but not really. It's more than that. You can think of it as enclaves... if you want.

UWP also doesn't seem to work the way you think it does. You're thinking in terms of how Linux works and how languages on Linux work. Windows has extremely tight integration between programming languages and the operating system. As such, a lot of stuff has happened in the process of compiler development which made it so that things you would think are native code are actually .NET and things you would think are .NET are native code. The architecture of the development tools has made what has classically been thought of as "linking" a LOT more dynamic.

There's also a LOT more RTTI happening in all languages from Microsoft, which is making things like the natural design of what many generations ago was called COM pretty much transparent. All object models (especially COM) were horrible at one point because of things like IDLs, which were used to do what things like Swagger do these days; describing and documenting the call interface between objects was sheer terror.

Windows has made it so that you can program in more or less anything and expose your APIs from pretty much anything to pretty much anything ... kinda like how COM did... but it's all pretty much automatic now. This means that "thunking" mechanisms can make things happen magically. So you can write something in native code in C++ and something in .NET in C# and make calls between them and the OS can translate the calls... this actually requires a few special programming practices and it actually makes it easier if you pretend like you don't even know it's there.
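As a loose analogy for what crossing that kind of boundary looks like (this is plain FFI, not how the WinRT/UWP projection actually works internally), here's a minimal Python ctypes sketch calling straight into native Windows code:

```python
# A managed/scripted runtime calling a native Windows API, with the FFI layer
# doing the "thunking" between calling conventions and type systems.
# Windows-only sketch; GetTickCount64 is a real kernel32 export with no arguments.
import ctypes

kernel32 = ctypes.WinDLL("kernel32")               # load a native system library
kernel32.GetTickCount64.restype = ctypes.c_uint64  # declare the native return type
ms_since_boot = kernel32.GetTickCount64()          # the call crosses the language boundary
print(f"Milliseconds since boot: {ms_since_boot}")
```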

There are A LOT of things going on in Windows that are kinda sorta like the things you seem to think it might do... but in many ways they're done far better.

If you want to see it look really awesome... start two sessions of Linux on WSL1. You'll find that they're not in the same enclave. They have some connections to each other... but they are actually separate. It's like running two different containers... but not really.

Now consider that Windows works a lot like that now too. Microsoft has progressively managed to get most of us to stop writing software that behaves as if everything absolutely must talk to everything else directly. As such, over time, they'll manage to finally make all processes run in entirely separate enclaves while still allowing communication between processes.

And BTW... Android and Chrome OS are sheer frigging terror.... if you want to do interesting things at least. Everything is so disconnected that 99% of the time... if you're trying to make two programs work with each other, you find yourself having to send everything through the cloud.

CheesyTheClown

Re: That's what Plinston said

This is not argumentative. I'm a file system jockey and I have to admit that I'm a little bit in the dark here about the SIDL terminology.

I also wonder if you and I understand the file system in Windows differently than one another. It's been a long time since Microsoft originally added forked file support. Yeh, traditionally Windows really didn't support iNodes and it was a wreck, but it's been a long time since that's been set in stone.

The main reason Windows has required reboots to update is more related to the UI. Upgrading files is no real problem. But unlike Linux, where the GUI is entirely separated from the rest of the operating system (which is probably what I like least about Linux), the Windows GUI used to be the root from which all tasks were spawned. So the GUI was the parent of all the tasks, which meant that if you upgraded the kernel, you'd have to restart the GUI running under the new kernel.

With all the effort they've made to make the kernel less important and to have most of the OS running either as a VM or a container, they should be able to start a new kernel now and repatriate the system call hooks to the new kernel.

Weak AF array sales at NetApp leave analysts feeling cold

CheesyTheClown

Re: "End of Storage" - silliest thing ever said...

I don't disagree. I still see the occasional UltraSPARC 5, AS/400 and Windows NT 4 machines in production. Legacy will always exist... but I think you're overestimating the need for low-latency on-premise storage.

As latency to the clouds decreases, bandwidth increases, and availability often rivals on-premise, location isn't the hot topic anymore.

We used low-latency storage over things like Fibre Channel because we were oversubscribing everything. But consider that massive banks still run on systems like IBM Z, which seem really amazing but are generally obscenely over-provisioned performance-wise. A well written system can handle millions of customer transactions per day on equipment no more powerful than a Raspberry Pi... and they did for decades... on horribly slow storage.
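As a back-of-the-envelope sketch of why that's plausible (the daily volume below is just a placeholder figure):

```python
# "Millions of transactions per day" is a surprisingly modest per-second rate.
transactions_per_day = 5_000_000      # placeholder figure
seconds_per_day = 24 * 60 * 60

average_tps = transactions_per_day / seconds_per_day
print(f"Average rate: {average_tps:.0f} transactions/second")   # ~58/s
# Even with a 10x peak-to-average ratio, that's only a few hundred TPS.
```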

The question is... what do you really plan to run back home anymore? Most of the reasons you've needed extremely high end storage systems in the past have moved to the cloud where they logically belong. This means that most of what you're still running back home isn't really your business systems anymore.

A major company will probably have something like an in-house SAP style system and a bunch of other things like file server which no one uses anymore. Everything else will be moved to the cloud with or against IT's "better judgement". Remember, you don't need the IT guy to sign up for Slack, the boss does that with his own credit card while sitting in a meeting.

The cloud doesn't replace storage... it replaces the systems using storage.

Now... let's assume you're working for a newspaper or a television station where you need local storage because 1000+ photos at 20-megapixel RAW or 25 hours of video at 12Gb/s needs to be stored somewhere. These days, you pay a lot of money for your storage, but you also have a choice of easily 10 legitimate vendors and maybe another 200 "won't make it through another funding round" vendors. Right now, there's lots of choices and all those vendors still have lots of sales keeping them in the black.

Now, as more and more services are migrated to the cloud, the storage systems at most companies with more "plain vanilla" needs will free up capacity on their local storage. If they refresh their servers again, they'll choose a hyperconverged solution for the next generation.

This will mean that the larger storage companies will dissolve or converge. If they dissolve, they're gone. If they converge, they'll reduce redundant products and deprecate what you already have.

As this happens, the companies with those BIG low latency storage needs will no longer be buying a commodity product but instead a specialty product. Prices will increase and the affected customers will be substantially more conservative about their refresh cycles in the future.

Storage is ending... sure... there will always be a need for it in special cases, but I think it will be a LONG time before the stock market goes storage crazy again. And I don't think Netapp, a storage-only company, will survive it. EMC is part of Dell and 3Par is part of HP etc... companies which sell storage to support their core business. But Netapp sells storage and only storage, so they and Pure will be hurt hardest and earliest.

CheesyTheClown

Re: End of storage coming

Honestly, I think the NKS platform looks ok, but I expect that it's only a matter of time before all three clouds have their own legitimate competitors for it.

Don't get me wrong, I'm not saying it to be a jerk... as I said, it looks ok. But it's an obvious progression for K8S; I've been building the same thing for internal use on top of Ceph at work. I'm pretty sure anyone trying to run a fault tolerant K8S cloud is doing the same. But to be honest, if you're doing K8S, you should be using document/object storage and not volume storage.

If you're running Mongo or Couch in containers, I suppose volume or file storage would be a good thing. But when you're doing "web scale applications" you really should avoid file and volume storage as much as possible.
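As a minimal sketch of what I mean by object storage instead of volumes, here's the kind of pattern in Python with boto3 against any S3-compatible store (the endpoint, bucket, key and credentials below are purely hypothetical placeholders):

```python
import boto3

# Hypothetical S3-compatible object store (e.g. Ceph RGW, MinIO, or a cloud bucket).
s3 = boto3.client(
    "s3",
    endpoint_url="http://object-store.internal:9000",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                     # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    region_name="us-east-1",
)

# Instead of writing to a mounted volume, the containerised app persists state as objects.
s3.put_object(Bucket="app-state", Key="reports/2019-05-01.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="app-state", Key="reports/2019-05-01.json")
print(obj["Body"].read())
```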

I just don't expect NetApp to be able to compete in this market when Microsoft and Amazon decide to build a competing product and pretty much just toss it in with their existing K8S solutions.

CheesyTheClown

Re: End of storage coming

I don't disagree on many points. I've seen some pretty botched cloud gambits. And those are almost always at the companies that go to the cloud by copying up their VMs as quickly as possible. It's like "If you actually need VMware in the cloud... you really did it wrong"

The beauty of the change is that systems which genuinely belong in the cloud... like e-mail and collaboration... are going there as SaaS and it's working GREAT. Security for email and collaboration can't ever work without mass economy and 24/7 attention from companies who actually know what they're doing... not like Cisco AMP or ESA crap.

A lot of other systems are going SaaS as well... for example Salesforce, SAP, etc... these systems should almost be required by law to be transferred to the cloud, if for no other reason than it guarantees paper trails (figuratively speaking) of all business transactions that can be audited and subpoenaed. Though the same is true for email and collab.

Systems which are company specific can come back home and then eventually, over time, get ported to newer PaaS-type systems which can be effectively cloud hosted.

I actually live in terror of the term "Full Stack Developer" since these days it often means "We don't actually want to pay for a DBA, we'd rather just overpay Amazon"

CheesyTheClown

End of storage coming

Ok, when NetApp rose, it was because companies overconsolidated and overwasted. Not only that, but Microsoft, VMware and OpenStack lacked built in storage solutions. Most storage sales were measured on the scale of a few terabytes at most. Consider that a 2TB FAS 2500 series cost a company $10000 or more using spinning disks.

Most companies ran their own data centers and consolidated all their services into as few servers as possible. They went from running 5-10 separate servers (AD, Exchange, SQL, their business app...) costing $2000 each to 3-10 VMware servers costing $5000 each plus a SAN and an additional $2000+ in software licenses each... to run the same things.

Performance dropped considerably when they made that shift. Sure, they were supposedly easier to manage, but the management began to realize that systems that used to take 1 average skilled employee and 1 consultant to manage now took a team of full time employees and a lot more consultants to run.

Performance was almost always a problem because of storage. NetApp made a fortune because they could deliver a SAN which was relatively easy to manage that could handle most small businesses data.

What got really weird is when the bosses wondered how they went from $100,000 IT costs per year (people too) to $500,000 or more and no matter how much they spent on tools to make it more reliable and more robust, they always found themselves with the same outages and increasing costs.

Enter the cloud.

Companies could move their identity, mail, sharepoint, collaboration and office tools online using a relatively easy migration tool which took a few days to weeks.

SQL and their company app could be uploaded as VMs initially with little effort and with some effort, they could move their SQL to Azure’s SQL.

Now, they can downsize to one IT person and drop their costs to about $100K a year again.

The catch is, since we no longer need a dozen IT guys and consultants, no one left knows what either NetApp or Cisco is and they’re just using the simple pointy clicks UI to do everything. Their internal data center is being spun down and finding its way to eBay instead.

Then there’s whoever is left. They find that by replacing their servers with new servers containing disks, they can use VSAN, Storage Spaces Direct or Swift and not have to spend money on external storage which actually has a lower aggregate performance and substantially higher cost. Not only that, but they’re integrated into the systems they run on.

NetApp has no meaning for cloud vendors because MS, Google, Amazon, Facebook, Oracle can all make their own. In some cases, they even make their own hardware.

NetApp will still have a market for a while, but they will become less interesting as more services are moved to the cloud. After all, most companies depending on NetApp today probably have just enough performance to continue operations and as more systems go to the cloud, they’ll need less performance, not more.

There will be organizations like military and banks who will still need storage. And of course there are surveillance systems that require keeping video for 2-10 years depending on country. But I believe increasingly they will be able to move to more cost efficient solutions.

NetApp... I loved you, but like many others, I have now spun down 5 major NetApp installations and moved either to cloud or to OpenStack with Ceph. My company is currently spinning down another 12 major (service provider scale) NetApp solutions because we just don’t need it anymore.

I wish you luck and hope you convince HPe to buy you out like they do to everyone else in your position.

Cray's found a super scooper, $1.3bn's gonna buy you. HPE's the one

CheesyTheClown

So long Cray.. we’ll miss you

So... what about the obvious implications that this leaves the US with only one supercomputer vendor? ugh

I mean really, if Cray can't manage to be a player with the US dumping exascale contracts on them... the US deserves to be screwed. The US government should have been dumping cash on SGI and Cray for years. Instead, they forced them into bidding wars against each other, which allowed a non-supercomputer acquisitions-and-mergers chip shop to suck them both up, leaving the US without even one legitimate HPC vendor in 3-5 years.

Do a search in SGI and find out what HPe has done since buying them... nothing. They ran what was left of them into the ground.

What about Cray? Cray does a lot of cool things. Storage, interconnects, cooling, etc... at one time HP did this too. And if HPe didn’t suck at HPC, they wouldn’t need to buy Cray. They could actually compete head on. But, no... they have no idea what they’re doing.

Want to see what’s left of HPe... google HPe research and show me even one project which doesn’t seem as interesting as Mamma June on the cover of Hustler?

CheesyTheClown

What about SGI?

They bought SGI also... they finished up those contracts and what came next? Oh... SGI who?

Nvidia keeping mum on outlook for year as data centre slows, channel chokes on crypto crap

CheesyTheClown

Alienating their core?

So, gaming cards are twice as expensive as they should be.

V100 is WAY more expensive than it should be... and it is cheaper to spend more developer hours optimizing code for consumer GPUs than to use V100, which is a minimum of four times as expensive as it should be... at least to justify the CapEx for the cards. If the OpEx for consumer GPUs is way lower than the V100 cost, why would I buy 10 V100s rather than 100 GeForces?
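To put the CapEx argument in rough numbers, here's a quick sketch using only relative figures (the 4x price ratio is from above; the 2x throughput figure is a placeholder assumption, not a benchmark):

```python
# Throughput per unit of spend, normalised to a consumer card. No real prices used.
geforce_price, geforce_throughput = 1.0, 1.0   # normalised consumer GeForce
v100_price, v100_throughput = 4.0, 2.0         # 4x price (per the post), 2x speed (assumed)

print("GeForce throughput per unit of spend:", geforce_throughput / geforce_price)  # 1.0
print("V100 throughput per unit of spend:   ", v100_throughput / v100_price)        # 0.5
# Under these assumptions, consumer cards deliver ~2x the compute per dollar, which is
# the trade-off against the extra developer hours spent optimising for them.
```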

Then there's Grid... I don't even know where to start on that. If you use Grid, there is no possible way to justify the cost. It is so insanely expensive that absolutely every ROI or TCO excuse you have to run virtualized evaporates instantly with Grid. Grid actually increases TCO by a LOT and you can't even force nVidia to sell it to you. I mean really, you knock on their door begging to buy Grid for 1000 nodes and they don't answer emails, they refuse to demo... I mean... sitting there with cash in hand waving it under their nose and looking for a dotted line to sign on and they blow you off.

They are too busy to bother with... well customers.

You know... they deliver to important customers like Microsoft, Amazon and Google. They don’t need the rest of us.

Good heavens, is it time to patch Cisco kit again? Prime Infrastructure root privileges hole plugged

CheesyTheClown

Oh for the love of pizza

Ok... if you’re a network engineer who doesn’t suck, you would secure your control and management planes. If you install PI properly, it should be behind a firewall. If you install Huawei switches, the management planes should be blocked.

This is getting stupid.

Now, PI is based on a LOT of insecure tech. It’s a stinking security nightmare. You can’t run PI or DNA controllers without a massive amount of security in-between. This is because Cisco doesn’t actually design for security.

If you want a fiesta of security hell, just install Cisco ISE which might be the least secure product ever made. Their SAML Sp looks like it was written by drunken hackers. Their web login portal is practically an invitation. Let’s not even talk about their insanely out of date Apache Tomcat.

Want to really have a blast hacking the crap out of Prime? Connect via wireless and manipulate radio management frames for RRM. You can take the whole network without even logging in. It’s almost like a highway to secure areas.

When you contact Cisco to report zero-day hacks, they actually want you to pay for them to listen to you.

How about Cisco CDP on IOS XE having rootable vulnerabilities caused by malformed packets? A well formed malicious CDP packet can force a kernel panic, reboot and if you move quickly enough, you’ll be on native VLAN while it’s reading and processing the startup config. I mean come on... it’s 2019 and they still have packet reassembly vulnerabilities because they don’t know how to use sk_buff properly?

They practically ignore all complaints about it too.

Time to reformat the old wallet and embiggen your smartmobe: The 1TB microSD is here

CheesyTheClown

Am I the only one?

I was driving yesterday and as always, instead of paying attention to the road, I was going all Sci-Fi and drifting off to a weird fantasy. I thought... imagine if I blinked and found myself driving my BMW i3 in the year 1919... kinda like “Back to the Future” but without Mr. Fusion.

My car had been recently cleaned so all I had with me was my backpack. And I freaked, because I had my play laptop, a Microsoft Surface Go, and it didn't have any development tools on it... not even VS Code. And I was like “I have a JRPG video game, some movies, and the only programming languages I have are PowerShell, whatever is in Office, VBScript, and the web browser... ok... I can code... but I don't have Google, Wikipedia, or StackOverflow”

I could make do, I told myself, and then I thought to myself: on my phone, I have about 150 videos on multivariable calculus, chemistry and encryption. Woot!

Then I realized how screwed I was because I didn't have the parts I needed to build a USB to... well, anything interface... all I had for peripherals was a USB-C to USB and HDMI dongle. I could design a USB to serial UART. In fact, I also have on my phone an FPGA course and I could make a simple VHDL to schematic compiler in PowerShell if I had to. But of course, I would have to make my own semiconductors and I'm not sure I could produce a stable silicon substrate capable of 12MHz for USB using 1919 era laboratories.

Then I realized I had a really awesome toy with me... I have a 400GB MicroSD in the laptop. I don’t think I could even explain to Ramanujan what 400GB is and that’s a guy who was pretty hard core into infinite series. Could you imagine explaining to people 100 years ago that you had a chip that was visually the size of a finger nail which had the storage and addressing circuitry for about 4 trillion memory cells?

So, today... without even thinking of it, I found myself loading VSCode, .NET and Julia onto my laptop. Yesterday afternoon, I found myself packing a USB-RS-232 dongle too. I also realized that I had 3D Studio Max and OpenSCAD installed.

And oddly, I believe I have an Arduino and USB cable in my glove box. Though, I don’t have the software, but I think I could write an Atmel assembler from memory.

Today if I got sucked back to 1919, I could use my laptop to design a simple field emitting transistor which I'm sure would be reliable at 50kHz, a simple cathode ray tube, a simple CPU, a reliable and reproducible carbon film resistor, a half-assed capacitor (don't know the chemistry on those, but I could fake it), and probably a reasonable two-sided circuit board etching and plating system... and I could probably do all this with my laptop and 1919 era tools and tech. I would have to do it at Kodak in New York.

Oddly, I could probably do most of this with just the information I have on my phone, but it would probably take me a while just to make a stable 5V 2amp power source to keep the phone running for any period of time.

To be honest, I think I’d find the closest thing to epoxy available at the time. I would use gold leaf to make traces... then I’d use a simple acid based battery. I wouldn’t trust 1919 electrical mains.

Anyway... anyone else here ever get geeky like this? Wouldn’t you love to show off a 1TB MicroSD card to people back then? Hell just try to explain the concept of what it would take to fit 10 trillion non-volatile memory cells into something that size :)

Mellanox investor proposes class action to kill Nvidia's $6.9bn mega buy

CheesyTheClown

Future potential?

ARM processors are beginning to integrate 100Gb/s Ethernet to support RDMA over converged Ethernet. See Huawei’s HPC solutions for reference.

Intel has the capacity to do the same with their own chipsets used in servers and supercomputers.

NVidia, if they choose to, can do the same on their own. They clearly have a solid grasp on high speed serial communications.

Infiniband is useful in HPC environments because it's mostly plug and play. But it comes at a premium cost. The HPC market is investigating alternatives to Infiniband because, as with technologies like ATM/SDH/SONET, much less expensive technologies, namely Ethernet, have become good enough to replace them.

I just saw a 1000 port QDR Infiniband multiplexer sitting unused in a supercomputing center this morning. It will be replaced with 100Gb/E, not more Infiniband.

They should sell now while they are still valuable.

Complex automation won't make fleshbags obsolete, not when the end result is this dumb

CheesyTheClown

It’s not about becoming obsolete.

If you consider that the heart of the issue is unsustainable capitalism, it becomes clear. And even then, it has little to do with automation; it's about centralization and enhanced logistics.

We simply overproduce.

Let’s use a simple example.

Ground beef has a limited shelf life. It can survive quite a long time when frozen, but the meat will degrade and no longer be edible after a short time when thawed.

We as shoppers, however, are turned away from meat that is frozen. It looks unattractive. We should know that almost immediately after slaughter, the meat goes into frozen storage, and even at a butcher we are attracted to meat hanging on hooks in frozen storage. Yet when the meat is on a shelf, we will buy the fresh, red, lovely pack of meat, which we'll transport thawed to our houses, refrigerate, and hope we'll use before the “best before” date passes.

Grocery stores also know that shoppers almost never buy the meat products which are the last on the shelf. They can charge more for meat that is thawed than frozen. And the result is, they ensure there is always enough thawed meat to attract shoppers and charge them more. They also waste packaging and to make it last just a little longer, they’ll use sealed packaging that makes the meat prettier for a little while longer. And the packaging now even has fancy little devices to measure freshness... which are not recycled. In order to produce (and overproduce) enough ground beef to have enough left over to actually waste approximately 30% (real number for here in Norway), we are left with massive amounts of other meat that must also be sold and suffer the same problems.

When you purchase meat online for home delivery, meat can be kept frozen during the entire process ... up to but not necessarily including delivery for the “last mile”. We don’t need to produce extra to make the meat look more attractive to consumers. We can expect the consumer to receive fresh lovely red ground beef with no need for freshness sensors, vacuum sealed packaging, etc...

Using more advanced, larger-scale marketing mechanisms, if people are buying too much ground beef, algorithms can raise prices of cheaper meats and lower prices of more expensive cuts to convince shoppers to eat steak instead of burgers tonight. We can sell 400 grams or 550 grams or however much, because meat will be packaged to order. We can cut deals with pet food and pig slop companies to simply give them byproducts in exchange for bartered products, like “if we give you pig food worth $1.5 million, you give us bacon worth $1.5 million”, which would probably count towards tax credits for being green and also leave the additional money in a form that can be written off.

This works great because people buying online will buy based on photos and text. Marketing is easier. The product always looks perfect prior to purchase.

By needing to produce 30% less, we need 30% fewer cows. Less movement of livestock or frozen sides. We need fewer butchers. We can use more machines. We'll use less packaging. We won't need freshness sensors. We can package in biodegradable paper or reusable deposit-oriented containers. We can eliminate printing of fancy labels. We will reduce shipping of product by 50% by using more efficient packaging and shipping 30% less product to begin with. We can reduce consumer fuel consumption, car repairs and tire degradation associated with shopping.

By enhancing logistics and centralizing as much as possible, we will eliminate massive numbers of jobs. But initially the result will be people will spend more time unemployed and believe it or not... more time humping, reproducing and creating more people who have less jobs available to them.

As such, we need to start sharing jobs. People will work 50% of what they do today. This means they'll have much more time to manage their household economies. They'll eat out less and use more time cooking. This will reduce dependence on restaurants. They will also have less disposable income as they'll be forced to spend more time entertaining themselves. They will think more about their meals and waste less food producing them, as they know when they buy chicken breast on sale, they can use half today and half two days from now. It won't be like “I planned to use the other half, but we ate out because I got stuck in a meeting.”

People will order groceries to be delivered which means the grocery stores which used to be “anchor stores” will become less important and people will stop “Let’s grab a taco and some ice cream next door to the grocery store while we’re out already”. As such, those smaller stores which were never anchors themselves will become less interesting.

This was a simple example, and it barely scratched the surface. It has so little to do with automation. It’s capitalism and we just have too many meat sacks to keep it maintained.

Tesla touts totally safe, not at all worrying self-driving cars – this time using custom chips

CheesyTheClown

Use of investor's capital?

I've worked in a few environments where we did our own HDL development. We worked almost entirely in the FPGA world because we did too many "special purpose" algorithms which would often require field updates... an area not well suited for ASICs.

But, I believe what Tesla is doing here is a mistake.

Large scale ASIC development is generally reserved for a special category of companies for a reason. Yes, their new tensor processor almost certainly is a bunch of very small tensor cores, which each are relatively easy to get right, and the interconnect is probably either a really simple high speed serial ring bus... so, it's probably not much harder than just "daisy chaining" a bunch of cores. But even with a superstar chip designer on staff, there are a tremendous amount of costs in getting a chip like this right.

Simulation is a problem.

In FPGA, we often just simulate using squiggly lines in a simulator. Then we can synthesize and upload it to a chip. The trial and error cycle is measured in hours and hundreds of dollars.

In ASIC, all the work is often done in FPGA first, but then to route, mask and fab a new chip... especially of this scale, there is a HUGE amount involved. It requires multiple iterations and there are always going to be issues with power distribution, grounding, routing... and most importantly, heat. Heat is a nightmare in this circumstance. Intel, NVidia, Apple, ARM, etc... probably each spend 25-50% of their R&D budgets on simply putting transistors in just the right places to distribute heat appropriately. It's not really possible to properly simulate the process either... and a super-star chip designer probably knows most of the tricks of the trade to make it happen, but there's more to it than just intuition.

Automotive processors must operate under extreme environmental conditions... especially those used in trucks traversing mountains and deserts.

If Tesla managed to actually make this happen and they managed to build their own processors instead of paying NVidia, AMD or someone similar to do it for them, I see this as being a pretty bad idea overall.

Of course, I'd imagine that NVidia is raking Tesla over the coals and making it very difficult for Tesla to reach self-driving in a Model 3 class car, but there has to be a better solution than running an ASIC design company within their own organization. Investing in another company in exchange for favorable prices would have made more sense, I think. Then the development costs could have been spread across multiple organizations.

CheesyTheClown

Re: 144 trillion operations per second

I'd love to see something that would back your statement up.

To be honest, I'm just moving past basic theoretical understanding of neural networks and moving into application. I've been very interested in reducing transform complexity and therefore reducing the number of operations per second for a given "AI" operation. Think of me as the guy who would spend 2 months hand coding and optimizing assembler back in the 90's to draw a few pixels faster. (I did that too)

I don't entirely agree from my current understanding with the blanket statement that it wouldn't need that much. I believe at the moment that there are other bottlenecks to solve first, but at least in my experience processing convolutional networks in real time from multiple high resolution sources at multiple frequency ranges could probably use all 144 trillion operations and then some.
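To give a feel for the scale, here's a rough back-of-the-envelope sketch; every figure in it (feature map size, channel counts, depth, camera count, frame rate) is a placeholder assumption rather than anything from Tesla's actual network:

```python
# Why real-time, multi-camera convolutional inference can eat hundreds of TOPS.
def conv_layer_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for one k x k conv layer on an h x w feature map."""
    return h * w * c_in * c_out * k * k

# Placeholder: one 3x3 conv layer on a 640x480 feature map, 64 -> 64 channels.
macs_per_layer = conv_layer_macs(640, 480, 64, 64, 3)   # ~1.1e10 MACs
ops_per_layer = 2 * macs_per_layer                       # 1 MAC = 2 ops (multiply + add)

layers = 50          # assumed network depth
cameras = 8          # assumed number of camera feeds
fps = 30             # assumed frame rate per camera

total_ops_per_second = ops_per_layer * layers * cameras * fps
print(f"{total_ops_per_second / 1e12:.0f} trillion ops/sec")   # ~272 TOPS with these numbers
```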

Do you have something that would back up your statement... I'd love to see for better understanding of the topic.

Better late than never: Cisco's software-defined networking platform ACI finally lands on AWS

CheesyTheClown

Re: If you need ACI in AWS or Azure, you're just doing it wrong

Shitting on the competition?

What competition? NXOS vs ACI?

ACI does try to solve software problems using hardware solutions. This can’t be argued. In fact, it could be its greatest feature. In a world like VMware where adding networking through VIBs can be a disaster (even NSX blows up sometimes with VUM... which no one sets up properly anyway), moving as much networking as possible out of the software is probably a good thing.

Using a proper software-defined solution such as Docker/K8S, OpenFlow, Hyper-V extensible switch, or even NSX (if you just can't escape VMware) with a solid layer-3 solution like NXOS... or any other BGP-capable layer-3 switch is generally a much better design than using a solution like ACI which separates networking from the software.

It’s 2019, we don’t deploy VMs using OVFs and next-next-next-finish things anymore. We create description files like YAML or AWS/Azure specific formats and automate the deployment method and define the network communication of the system as part of a single description.

ACI didn’t work for this. So Cisco made Contiv and by the time the market started looking at ACI+Contiv as a solution, Cisco had basically abandoned the project... which left us all with Calico or OpenFlow for example... which are not ACI friendly.

Of course, NSX doesn’t control ACI since they are different paradigms.

Hyper-V extensible switch doesn't do ACI, so Cisco released an ACI integration they showed off at Live! a few years back and then promptly abandoned.

NXOS works well with all these systems and most of these systems document clearly how they recommend they are configured. Microsoft even publishes Cisco switch configurations as part of their SDN Express git.

So... which competition are you referring to?

CheesyTheClown

Re: If you need ACI in AWS or Azure, you're just doing it wrong

Servers + Fabric + VMware license + Hyperflex storage license + Windows Server Enterprise licenses + backup licenses (Veeam?) + Firewall + Load balancer + server engineering hours + network engineering hours + backup engineering hours + Microsoft hours...

You need two stacks of (three servers + two leaf and two spine + 2 ASR1000 or 2 border leafs + 2 firewall nodes, 2 load balancers) and whatever else I’m forgetting.

If you can get a reliable Hyperflex environment up with VMware and Microsoft licenses and all the hours involved for less than $1.6 million, you probably have no clue what you're doing... and I specifically said retail. And architecting, procuring, implementing and testing etc... a redundant Hyperflex environment requires several hundred hours of what I hope are skilled engineers.

I’ve done the cost analysis multiple times on this. We came in under $1.2 million a few times, but that was by leaving out things like connecting the servers to the UPS management system and cutting corners by using hacked fabric solutions like skipping the border leafs or trying to do something stupid like trading in core switches and trying to make the ACI fabric double as a core switch replacement. Or leaving out location independence etc...

CheesyTheClown

If you need ACI in AWS or Azure, you're just doing it wrong

So... the year is 2019 and well... software defined is... um... in software.

ACI has one of the most horrible management and configuration systems ever to be presented on earth. It started off as a solution to support an "intelligent" means of partitioning services within data centers running VMware. This is because VMware really, really needed it. VMware, even with NSX, is still networking like it's 1983. So companies invested heavily in ACI, which would allow them to define services based on port-groups and describe policies to connect the services together and even support service insertion.

Well, if you're in the 21st century and using Hyper-V, or far better yet, OpenStack and even better Docker/Kubernetes, all of these features are simply built in. In Docker Swarm mode, it's even possible to do all of this with full end-to-end encryption between all services. And since you can free up about 98% of your bandwidth from storage in a VM environment, you have lots of extra bandwidth and also extra CPU... and I mean LOTS of extra CPU... a well written FaaS function using 0.0001% of the resources that a similar routine on a VM would use... no exaggeration... that's the actual number... we measure resource consumption in micro-CPUs (as in one millionth of a CPU) as opposed to in terms of vCPUs when doing FaaS. For PaaS on Docker, we think in terms of milli-CPUs for similar functions.
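A quick sanity check on those units (the PaaS figure below is a placeholder):

```python
# 0.0001% of a CPU is one millionth of a CPU, i.e. roughly 1 "micro-CPU", while
# PaaS-style workloads are commonly sized in milli-CPUs (thousandths of a core).
faas_fraction = 0.0001 / 100     # 0.0001% expressed as a fraction of one CPU
paas_fraction = 0.005            # placeholder: a PaaS function using 5 thousandths of a core

print(f"FaaS function: {faas_fraction * 1_000_000:.1f} micro-CPU")   # 1.0 micro-CPU
print(f"PaaS function: {paas_fraction * 1_000:.0f} milli-CPU")       # 5 milli-CPU
```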

So, we use all that idle CPU power for networking functions. And since we can truly micro-segment (not VMWare NSX crap segmentation or ACI brainless segmentation), we can have lots of load balancers and encryption engines and firewalls, etc... and still not use a 100th of what ACI would waste in resources or a millionth of what it would waste in money.

The best solution a company can take in terms of the 21st century is to start moving their systems more and more to proper modern networking and virtualization rather than wasting all that money on trying to come up with ways of scaling even further up using solutions like ACI.

What's worse is that if you're considering using ACI in the cloud, what it says is that you think that none of the pretty damn awesome SDN solutions that are integral parts of the cloud provider's solution work. And instead you're willing to spend A LOT more money to add networking that doesn't do anything that their offerings don't but at least creates a bunch of new jobs for engineers who don't really understand how it works to begin with.

Having reviewed ACI in the cloud in extreme detail... the only thing I could come up with is "Why the hell would anyone want that?". I was just at a job interview with a major multi-national financial clearing house where they wanted to hire me as an architect to recover from their failed attempt at ACI... I explained that the first thing I'd do is delete ACI from the Nexus 9000 switches, upgrade to NX-OS (the legacy networking platform) setup layer-3 connectivity between nodes and use their OpenShift environment to manage the networking and handle all the software defined networking as it's far better suited for it. They loved the idea... we could easily reduce the complexity of the networking infrastructure by a substantial amount. In fact, by using a simple layer-3 topology (all that's needed for real SDN which operates entirely on tunnels over layer-3) we could cut costs on people and equipment by millions per year.

Cisco has spent the last 10 years trying to make new technologies which don't actually solve problems but add complexity and therefore errors and management headaches at up to 100 times the cost of their other solutions which are actually more suitable. And I really only wish I was exaggerating those numbers. ACI actually increases costs DRASTICALLY with absolutely no chance for return on investment.

On the other hand, if your company has a VMware data center and A LOT of VMs which will take years (if ever) to replace with intelligent solutions, I would recommend buying two small HyperFlex stacks (retail cost with VMware licenses and ACI, about $1.6 million minimum configuration) which should let you cut the operations overhead substantially... possibly down to 3-5 people... until you can move more and more systems off the legacy platform.

Astronomer slams sexists trying to tear down black hole researcher's rep

CheesyTheClown

Thank you!!!

I had not considered her gender to be relevant. I have a friend who is a physicist, he pees sitting down because his mother would have beaten him to death for dripping on her floor. I don’t think it’s ever been relevant to his research which way he pees, why would it matter to Katie’s research how she pees?

She is a Ph.D from MIT and a researcher and professor at CIT... she’s way beyond gender issues when it comes to her profession. Even if for some reason she was at the lowest possible level of accomplishment for someone in her position, she would still be a frigging brilliant scientist. And from what I can tell, she is definitely not at the bottom.

As someone who respects the hell out of her life accomplishments, I will make some comments that border on sexist. Her presentation of her work includes a level of giddiness and bubbliness (if that is a word) that would be creepy if a guy did it, but in her case is endearing to the point of bordering on adorable. She seems to love her work as much as I love mine. I can almost imagine my next presentation of my research having cute little squeaks in it like she has done in her presentations. I just started writing a brief postulate on applying Lagrangians to the Finite Element Method to attempt a P solution to an NP problem in structural analysis by defining desired results and calculating idealized mesh coefficients by working in reverse. I am so going to roll forward on my toes and yip like Katie does so I can show my excitement. I’ll probably have rotten eggs thrown at me, but I don’t care... I love what I do and I want to show it like she does.

I have no idea how old she is, but I would love to adopt her and make her my daughter and we can do math together and be bubbly together :)

Nutanix and HPE sitting in a Greenlake: That disavowed hookup has actually happened

CheesyTheClown

Re: Subscription Boxes As a Service (SBaaS)

It is out of control.

First, we had servers. A rack mount server from Dell for example cost about $2000 and was enough to run a single application.

Then as we ran more applications and had more servers, we decided that running around and changing failed hard drives was expensive. So we centralized: we encapsulated SCSI packets into a new frame format called Fibre Channel, which let us put stacks of hard drives in a big box and map LUNs to WWNs.

Then we wanted to support failover, so there would be a backup path when a fibre failed, which meant supporting not just node naming but port naming.

Then we decided that since hard drives were no longer 200MB in size, and instead were 200GB in size, we would make virtual drives as files on a centralized RAID and instead of mapping node world wide names to physical SCSI devices, we would map them instead to files.

We then decided that servers running at 10% capacity were too much trouble, so we ran software on them that allowed a single $2000 server from Dell to do the job of 8 $2000 servers.

Then we decided that if we added more RAM and more CPU, we could consolidate further, so we scaled those servers up... requiring specialized RAM, specialized storage controllers, specialized CPUs, specialized graphics cards, specialized fiber channel adapters, specialized management software, specialized cooling etc... so, we increased the density from 8 servers per $2000 server to 20 servers per $40,000 server... bringing us back to about $2000 a task... plus storage.

By centralizing storage and increasing density drastically, we decided that we needed crazy new technologies to sustain disk bandwidth over SCSI fabrics that lack routing and intelligent load balancing, and introduced NVMe not only as a SCSI replacement for local connectivity, but also as a network fabric. We then decided to encapsulate NVMe, which is itself a routable/switchable network fabric, within Fibre Channel. By doing so, we killed most of the benefits of NVMe, which could have been retained by simply configuring UEFI and VMware correctly to use NFS or another file service.

We are now at about $4000-$5000 per virtualized server... we require specialists in storage, backup, virtualization, networking, service insertion, etc... our TCO between CapEx and OpEx has risen from about $4000 per service per year to about $12000 per service per year and we're locked into vendors with no hope of ever escaping....

And then comes subscriptions.

Instead of $4000-$5000 per virtualized server, they're trying to figure out how to charge us $3000-$4000 per virtualized server per year instead.

I did a recent cost assessment for building a greenfield minimal VMware data center design that has nothing more than 6 nodes across two data centers. The purpose of this would be to run whatever services are still not "cloud based". This minimal configuration is the minimum you should run... not the minimum you could run. It came to $1.6 million CapEx and $950,000 OpEx per year... with a $1.6-$2 million additional CapEx investment every 5 years.

Anyone who buys a current license from VMware, Microsoft, Cisco, HPe, etc... should actually be investigated for criminal activity.

Moving to the cloud is dangerous as hell because it may seem like a bargain today, but we have no guarantee the prices will stay low over time. Moving a VM to the cloud is just stupidity... move services, not VMs that were over-bloated to begin with.

I would posit that it would be substantially more cost effective for most companies to throw everything away, close shop for a year and start over than it would be to manage the absolute mess they are in thanks to virtualized servers today.

Anyone who would consider buying Simplivity from HPe or HyperFlex from Cisco should be shot for saying something so amazingly stupid... especially when they already pay for licenses for all of this tech from other vendors anyway. And besides, neither HPe, Cisco nor Dell has the first idea what the hell you would actually use their servers for... so their answer is to just buy so much capacity that it should run anything and everything.

The first step to recovery is to fire most of your IT staff... especially "the smart ones"... and hire a "systems analyst" who will identify your actual business needs, then hire an architect to design systems to meet those needs. Then buy a "Kubernetes in a box" solution from RedHat, Ubuntu or whoever else, which could run on a few Intel NUCs at $1000-$2000 a piece. And then build what you need.

The CapEx would drop to about $16,000 every 4-5 years and the OpEx would be much lower... and most IT spending would be done on development of the systems you actually need.

Huawei savaged by Brit code review board over pisspoor dev practices

CheesyTheClown

Re: Real point here

I was hoping to see a comparative study with Cisco. My company gives Cisco over a billion Euro a year, and while this seems damning to Huawei, I am pretty sure Cisco is just as bad.

1) Multiple OpenSSL instances are normal. They should however be pulled from the same repositories. There are good reasons to compile OpenSSL differently based on context; I compile it differently when using it in kernel space versus user space. Patching OpenSSL promptly is an absolute must for security... OpenSSL is the absolute most hacked library EVER simply because of how widely it is deployed. But that also means it should be the fastest patched.

2) A large amount of C code in a network product, unless it’s the forwarding engine itself, is a really bad idea. Even then, companies like Ericsson write large amounts of their systems in Erlang. While I’m no fan of Erlang, it has many benefits over C in this regard. As such, it would make sense to choose Ericsson over Huawei for 5G. Cisco uses C VERY heavily and if you were to look at much of the code Cisco has made public... let’s say they have pretty bad practices.

3) Poorly mapped safe-C type functions for string management. If you’re using "C" and "safe" in the same sentence... just don’t. Even the absolute best C code will almost certainly degrade over time. A common pattern which has grown over time in C circles is to build insane messes of goto statements for releasing objects in the “right order” at the end of functions (see the first sketch after this list). I have seen many cases where this degraded over time.

4) 3rd party real-time operating systems are common. If you’re developing network hardware, an RTOS makes a lot of sense as opposed to Linux. One reason is that network hardware should have deterministic latency to support protocols like ATM, SDH, ISDN and T1/E1. VxWorks, QNX and Green Hills all made excellent operating systems for communication-grade equipment; most of these systems however suffer from age. SYS/BIOS from TI is also great. An excellent aspect of RTOS systems is often the ability to partition the CPU based not only on time share, but also on cores (see the second sketch after this list).
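
On point 3, here is a minimal sketch of the goto-cleanup pattern I'm describing (the function, names and log file are made up purely for illustration). It's correct as written; the rot sets in when the function grows and every new resource needs its own label in exactly the right position:

```c
/* Classic C goto-cleanup: each acquired resource is released in reverse
 * order via labels at the bottom of the function. Fine at this size, but
 * every additional resource means another label, and keeping the jump
 * targets in the right order by hand is where real code degrades. */
#include <stdio.h>
#include <stdlib.h>

static int build_greeting(const char *name, char **out)
{
    int rc = -1;
    char *buf = NULL;
    FILE *log = NULL;

    buf = malloc(64);
    if (buf == NULL)
        goto out;                 /* nothing acquired yet */

    log = fopen("greeting.log", "a");
    if (log == NULL)
        goto out_free_buf;        /* only buf needs releasing */

    snprintf(buf, 64, "Hello, %s", name);   /* bounded, "safe" string write */
    fprintf(log, "built greeting for %s\n", name);
    fclose(log);

    *out = buf;                   /* caller now owns buf */
    return 0;

out_free_buf:
    free(buf);
out:
    return rc;
}

int main(void)
{
    char *greeting = NULL;
    if (build_greeting("world", &greeting) == 0) {
        puts(greeting);
        free(greeting);
    }
    return 0;
}
```

And on point 4, this is not an RTOS and carries none of the hard guarantees of VxWorks or QNX, but as a rough Linux-side sketch of the core-partitioning idea, pinning a worker thread to a dedicated core and requesting a real-time scheduling class looks something like this (the core number and priority are arbitrary, and SCHED_FIFO needs the right privileges):

```c
/* Build with: cc -O2 -pthread partition.c
 * Pins a worker thread to core 2 and asks for SCHED_FIFO priority so the
 * packet loop is not time-shared with ordinary tasks on that core. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *packet_loop(void *arg)
{
    (void)arg;
    /* ... poll a NIC queue here, never block ... */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_attr_t attr;
    cpu_set_t cpus;
    struct sched_param prio = { .sched_priority = 50 };

    pthread_attr_init(&attr);

    /* Restrict the thread to core 2 only. */
    CPU_ZERO(&cpus);
    CPU_SET(2, &cpus);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    /* Explicitly request a real-time policy (needs CAP_SYS_NICE / root). */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &prio);

    if (pthread_create(&t, &attr, packet_loop, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```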

I honestly think this review might be the best thing to ever happen to Huawei. It is a roadmap to let them plan their next steps. They should really consider looking into using Redox as a foundation to something new. If they invest in building a RTOS scheduler, it could be something glorious... especially for Huawei.

HP crashed Autonomy because US tech titan's top brass 'lost their nerve', says lawyer for ex-CEO Mike Lynch

CheesyTheClown

Re: "Losing their nerve" is a common theme in all of HP's acquisitions...

I generally recommend to my customers that if they purchase HPe designed and developed products, go for it. But as soon as a company is acquired by either HPe or Cisco, they should see it as a sign they should consider looking for alternatives.

HPe and Cisco sales people don’t like selling products that take work. Meaning that if they can’t count on making regular bonuses, they lose interest. This is why most HPe and Cisco customers don’t buy what they actually need and end up never getting anything working as it should.

Even when these companies buy great products, if the sales people can’t figure out how to make them fit their portfolio, they simply won’t.

Cisco and HPe also lose interest quickly. Many times, both companies have purchased companies producing products targeting new markets. They try to sell those products to their existing customers, and those customers, who are conservative and often backlogged, don’t catch on until the development teams of those products have been downsized and placed in purgatory in India.

So the moral of the story is... don’t expect a company that is as big as a government, turns over leadership as governments do, and sells mostly to governments to behave any differently than any other dysfunctional government.

Don't mean to alarm you, but Boeing has built an unmanned fighter jet called 'Loyal Wingman'

CheesyTheClown

If they deliver

Let’s be honest... Boeing isn’t exactly well known for delivering anything in “maybe 12 months”. As soon as they do a half-assed demo, Boeing will claim to be out of money and it will end up as a way-late, way-over-budget, never-delivered product.

In the meantime, any country that the plane would be useful against will focus on much smaller, much cheaper, autonomous drones... because they won’t have the same stupid tender process as western governments do.

NAND it feels so good to be a gangsta: Only Intel flash revenues on the rise after brutal quarter

CheesyTheClown

The almighty dollar!

Thanks to the strong dollar, the majority of the world can't afford to pay in dollars what the market is demanding. Sure, we ship more bits, but if you want to sell them at all, you have to consider that you can't negotiate in dollars. They're too damn expensive. So, of course the revenue will be lower. You have to ship the same product for fewer dollars if you want to ship at all.

Oh, there's also the issue that people are finally figuring out that enterprise SSD doesn't really pay off. You just need to stop using SANs and instead use proper scale-out file systems.

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo

CheesyTheClown

There has been progress

I do almost all my ARM development on Raspberry Pi. This is a bit of a disaster.

First of all, the Pi 3B+ is not a reliable development platform. I’ve tried Banana and others as well, but only the Raspberry Pi has a maintained Linux distro.

The Linux vendors (especially Redhat) refuse to support ARM for development on any widely available SBC. Even though the Raspberry Pi is possibly the best-selling SBC ever (except maybe the Arduino), they don’t invest in building a meaningful development platform on the device.

Cloud platforms are a waste because... well, they’re in the cloud.

Until ARM takes developers seriously, they will be a second class citizen. At Microsoft Build 2018, there were booths demonstrating Qualcomm ARM based laptops. They weren’t available for sale and they weren’t even attempting to seed them. As a result, 5,000 developers with budgets to spend left without even trying them.

This was probably the biggest failure I’ve ever seen by a company hoping to create a new market. They passed up the chance to get their product in front of massive numbers of developers who would make software that would make them look good.

Now, thanks to no real support from ARM, Qualcomm, Redhat, and others, I’ve made all ARM development an afterthought.

Surface Studio 2: The Vulture rakes a talon over Microsoft's latest box of desktop delight

CheesyTheClown

$100 a month? Not a bad ROI

If you consider that this machine will last a minimum of 3 years, $3600 is pretty cheap actually. It's a nice looking machine and because of its appearance, the user will be happy to hang onto it a little longer than a normal machine. I can easily see this machine lasting 5 years which would make the machine REALLY cheap.

When you're thinking in terms of return on investment, if you can get a machine which will meet the needs of the user for around $100 a month, it's a bargain. This is why I bought a Surface Book 2 15" with little hesitation. The Office, Adobe and Visual Studio Subscriptions cost substantially more per month than the laptop.

I'm considering this machine, but I have to be honest, I'd like to see a modular base. Meaning, take this precise design and make the base something that could slide apart into two pieces.

The reason for this is actually service related. This is a heavy computer. It has to be to support the screen when being used as a tablet. 80% of the problems which will occur with this PC will occur in the base. When it comes to servicing these machines, they risk easy damage by being moved around. This is not an IT guy PC, it's something which is pretty. I'd like to simply slide a latch, then slide the PC part of the system off and bring it in for service.

Upgradability would be nice using the same system as well. But I'm still waiting for Microsoft to say "Hey, bought a 15 inch Surface Book 2? We have an upgraded keyboard and GPU to sell you"

CheesyTheClown

Re: Hmmmmm!

I worked in post-production for a while. We were almost exclusively a Mac shop at the time, but we did most of our rendering on the workstation. Even more so when people began using laptops for post.

The earlier comment that the hardware has far outpaced the software is true. Sure, there are some rare exceptions. And if you're working on feature-length productions rather than a 45 second commercial spot at 2k (max resolution... meaning 1125 frames at 2k resolution), you'll need substantially more. But a GTX1060 or GTX1070 is WAY MORE than enough to manage rendering most video using current generation Adobe tools. Even 3D rendering with ray tracing will be able to work ok. Remember, you don't ray trace while editing (though we might get closer now with RTX 2060+ cards). Instead, we render and even then, with the settings turned way down. Ray tracing on a single workstation can generally run overnight and be ready in the morning. If it's a rush, cloud based rendering is becoming more popular.

This machine should last 5-7 years without a problem. Most of the guys I know who still have jobs in TV (there is way too much supply and simply way too little demand for TV people) generally run 5-7 year old systems. Or more accurately, they wait that long before considering an upgrade.

UC Berkeley reacts to 'uni Huawei ban' reports: We unplugged, like, one thing no one cares about

CheesyTheClown

Re: BT Infinity uses Huawei and no one seems to care

Telenor Global Services runs a Tier-1 LTE service provider on Huawei which most western governments depend on for secure communication... and Huawei has administrative credentials for all the devices since they also have the operations agreements for the hardware.

None of this is classified information if you can read Norwegian.

CheesyTheClown

Re: UC Berkeley Stazi

I have no idea what has me smiling more

1) The comment itself... it was lovely

2) The use of the word superfluous in daily conversation.

3) The fact that you could spell superfluous that way and it was still recognizable.

I can go on... I'm practically pissing my pants in happiness of this comment and the AC's correction of your spelling of wordz :)

CheesyTheClown

Re: RE: Chunky Munky

Congratulations, you won the $52 gazillion jackpot... you're 100000000% correct!

In case you were wondering, there is no $52 gazillion jackpot. In fact, there's no such thing as a gazillion that I'm aware of outside of literature.

Also, 100000000% is meaningless; its only value is to say "you're 100% correct, and I wish I could give you bonus points if it were possible".

The poster above was making the point that Huawei sold A LOT of smart phones... in fact, so many that the number is ridiculously big, and as a result it shouldn't make too big an impact on their revenues if UC Berkeley turns off a box.

SD-WAN admin? Your number came up in Cisco's latest bug list

CheesyTheClown

Starting points for security researchers

CDP on IOS-XE: remote code execution

I reported this but got blown off 18 months ago. Using any XE image 3.12 or later, especially on switches, single-step the CDP module to find an overflow. In some versions, the kernel segfaults (overflow) simply from parsing native VLAN changes sent by the remote end. CSR1000v can be used to reproduce. The error is probably in misuse of sk_buff and Alan Cox's psnap module.

SAML on ISE Man in the middle

Reported this two years ago, got blown off. ISE's SAML implementation incorrectly reports SAML versioning in its schema when identifying the SP. This is due to hard-coded values and ignoring settings from the IdP. It seems that the SAML service also ignores updated authentication tokens. Turn on verbose logging and intercept packets in transit between ISE and AD FS... alter packets and fake signatures to reproduce.

ISE Web Portal

just message me for a really long list. I'm typing with a mouse because my keyboard batteries died.

Cisco only cares about security after it's a public CVE

Dear humans, We thought it was time we looked through YOUR source code. We found a mystery ancestor. Signed, the computers

CheesyTheClown

Re: Many mysteries

Oddly, while you and I generally don't get along all that well elsewhere, I'm forced to agree with you here.

I am actively working on getting admitted to the masters program at the local university to contribute towards automating medical general practitioners out of a job if I can. There are many issues associated with illnesses, but stage of detection is generally the number one factor deciding whether an illness is treatable or not.

I make a comment often that the only dust you'll ever see in a doctor's office is the dust on the top of his/her books.

It's a matter of exponential growth more than exponential decay though.

As you get older and in theory gain knowledge and wisdom, the time required to maintain the health of that information grows. And if you constantly update your knowledge, your overall understanding of the field of interest will increase alongside it.

Primary and secondary school provide an excellent opportunity to provide most people a glimpse of what's out there. In fact, I've been teaching adults for years different topics of engineering, science and math. What I've learned is that with rare exceptions, most every person I communicate with probably completed the education they'll draw from in life at the end of the 4th or 5th grade. This doesn't mean that they are stupid, it simply means that they've had no practical use for anything more advanced in their given careers. This is because most people simply don't need it.

High school allows people to see all the amazing jobs that are out there and available to them. I learned a great deal in high school before dropping out and starting at the university instead.

What's important, and we see it all the time in modern society, is that people are made smarter at a young age and it hurts them as they get older, since the teachers don't teach the cost of that information.

The XX and XY chromosome thing is a real problem. It wasn't long ago that school kids were told that the mitochondria was the power source of the cell. When I was a kid, it was common knowledge that we had never successfully managed to "crack the shell" of the mitochondria and look inside. Now science books are being updated to teach us that our genetic code resides within the mitochondrial DNA as well.

Cellular microbiology is a topic which all children learn at some level or another, but most people can't name a single part of the cell other than the nucleus and they constantly make announcements like "You're going bald because of your mother's father not me" as if balding were a single gene. By their education, they would make it so that boys receive 0% of their genetics from daddy.

We generally teach kids more than enough to make them dangerous. We don't place expiration dates on the information.

P.S. - nice use of nigh, it's one of those words I'm envious of when I see it but never seem to use when the opportunity presents itself.

DNAaaahahaha: Twins' 23andMe, Ancestry, etc genetic tests vary wildly, surprising no one

CheesyTheClown

Re: Boffins or Buffoons?

I followed up on more articles on this now. And while you have some valid points, there are still many questions left open and yes, they "dumbed the shit down" for news and TV substantially. Multiple times during the interview the "boffin" damn near back-pedalled in order to keep the words small enough for the journalist to understand.

The people who were interviewed were clearly "boffins" in the field. This means they were "experts" on some things and not on others, but the people who interviewed them didn't think to talk to someone who could interpret the data more clearly.

A boffin in my experience is someone that dumb people call smart without understanding what that means so they can tell other dumb people that they should trust this data because it comes from a genuine boffin. For example, a guy in a blue shirt at an electronics store with a logo that says "Geek" is clearly a boffin, he has a shirt to prove it. Imagine how impressed those same people are by the geniuses at the genius bar.

I believe the "boffins" in question here (after reading their credentials) are specialists on topics relating to genetics but lack interest or focus on more commercialized genetics (meaning mail order) and genealogy. They were questioned though as if this would be a particular area of expertise for them. This means that although they might be able to speculate in areas they lack the data set to speak authoritatively on, they answered anyway since they didn't realize how wide spread their findings would be published in the mainstream media.

The numbers published leave most of my questions intact.

- How does shipping impact it

- Were the results actually contradictory?

- What were the results of two samples from the same twin from the same lab?

I can go on.... the point is, the people interviewed were geneticists and I assume pretty damn good ones. They are experts in sequencing and applying their findings to medical applications. I'm sure there are even people with an interest in forensics on the team. But this team doesn't seem to have a whole lot of background on historical migratory patterns. They also don't appear to be accounting for shipping methods, which means they are probably incredibly brilliant people who minimize contamination when handling samples, and they're being asked to evaluate how a sample stuck into a non-sterile device and then shipped through random methods would fare.

You're right, I should have read more and now I have. Before looking like an idiot and simply jumping on "boffins at yale" as an excuse to grandstand, I would still need many questions answered.

Cool... they are identical and proven.

- What does that mean in context of a pair of 30 year old identical twins who clearly show substantial variations in development?

- What is the margin of error when taking two samples from the same twin vs one sample from each twin?

- What is the result when ensuring the labs receive non contaminated samples?

Additionally

- When the results are interpreted by a historical anthropologist and accounting for migratory patterns and possible differences introduced by how generational data is interpreted, were the results actually different or was this a rounding error?

But... I guess since now that you and I have read the same articles and seen the same interviews, the fact that you seem to be happy and I still seem to want to reduce a great number of variables... it could be that I'm an idiot or it could be that I have high standards of what I consider to be meaningful data. If I had interviewed the "boffins" I would have asked for tolerances and certainties for all measurements.

You can tell the scientists involved here intentionally kept the words small and simple for people like you to avoid confusion like this. They even came straight out and said things like "identical". Absolutely no scientific evidence has ever been admissible without defining the precision, and 0% deviation is not a possible measurement. 10^-100 percent is, but 0% isn't. Any scientist presenting information in any way without that is trying to soothe the fools. And the fools will quote him or her without asking additional "boffins" additional questions to multiply the tolerances and reduce the margin of error of their findings.

The AC was crap... your response was arrogant and uninformed crap. My post was arrogant uninformed crap and the story was misleading crap. The difference is, I'm claiming we're all full of crap and you're choosing which line of crap you'll present as fact to degrade people.

Tell me. If the article said buffoons instead of boffins, would that margin of error impact how you'd use the "evidence" to show you're superior enough to call people idiots?

So, are you prepared to step up and admit you are full of shit too or will you continue to present yourself as the "boffin" to we lowly idiots?

CheesyTheClown

Boffins or Buffoons?

They obviously are different :)

Journalists generally have absolutely no respect for science.

You're quoting "Boffins at Yale university, having studied the women's raw DNA data, said all the numbers should have been dead on."

Let's start by saying that the definition of a pair of identical twins is that they were both hatched from the same egg, or more accurately, the egg split in half after being fertilized and produced two separate masses which eventually developed into two individual humans. I have not looked it up, and I'm already guilty of one of the same critical mistakes made by the journalist, which is that I have not verified my facts. But this is how I understand it.

If I am correct, then cellular reproduction through mitosis should have split the nucleotides of the original cell precisely. This means that a very high percentage of the genetic pairs being reproduced survived intact, but not all of them. Let me clarify: from what I can fathom, simple mathematical entropy dictates that there must be an error in every cellular reproduction. It is not mathematically possible for two cells to be 100% alike. 99.9999% is realistic, but not 100%. This is a mandatory aspect of science. To make this particularly clear, refer to Walter Lewin's initial lecture from Physics 8.01 in MIT OpenCourseWare on YouTube, where he explains how to measure in science.
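
A back-of-the-envelope version of that claim (the error rate here is my own assumption, based on commonly quoted textbook figures of roughly one uncorrected error per 10^9 to 10^10 bases copied, and roughly 3 billion base pairs per genome):

\[
E[\text{copy errors per division}] \approx (3\times10^{9}\ \text{bases}) \times (10^{-10} \text{ to } 10^{-9}\ \text{errors per base}) \approx 0.3 \text{ to } 3
\]

In other words, something on the order of one change slips through each division, which is consistent with the point that 100% identity is never a realistic measurement.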

So, we're presented with your quote "Boffins at Yale..." which leads me to ask:

- What is a boffin?

- What is the measure of a boffin?

- Who is qualified to measure whether this individual is a boffin?

- What is the track record of accuracy by the boffin?

- Was the boffin a scientist? If so what field?

- Was the boffin a student? If so, what level and field?

- Was the boffin an administrator? Were they a scientist before? When did they last practice? How well did their research hold up when peer reviewed? Did they leave science because they weren't very good at it and now they wear a suit?

And "having studied the women's raw DNA data"

- How contaminated was the data (0% is not possible)

- How was it studied?

- Were all 3 billion strands sequenced and compared? Was it simply a selection?

- Did they study just the saliva as the companies did or was it a blood sample?

- Does saliva increase contamination?

- Was the DNA sample taken at Yale?

- Was the DNA sample shipped?

- If it was shipped, was it shipped the same way?

- Could air freight cause premature decay or even mutation, etc...?

And "said all the numbers should have been dead on"

- Was the boffin really a boffin? (see above)

- What does dead on mean? What is the percentage of error?

- What numbers are we talking about?

- Should the results have been identical between the twins?

- Could two samples from the same twin produce the same discrepancies?

- Could two separate analyses of the same sample produce the same discrepancies?

- What happens when one person spits into a tube and then the spit is separated into two tubes?

- Did the boffin say "all the numbers should have been dead on" or did they provide a meaningful figure?

- Is this the journalist's interpretation of what the boffin said?

- Are these words the words of the original journalist or was it rewritten to make it sound more British? I've never heard any self respecting person use the term boffin if they actually had a clue to begin with. Same for the word expert.

- Did the "boffin" dumb down the results for someone who is obviously oblivious?

Overall, does this comment translate to :

"Qualified scientists with proven track records specializing in the field of human genetic research as it relates to ancestry evaluated 98% of the sample from each of the two twins, verified that they are in fact identical and that there should be no more than 5% margin of error when comparing the results of genetic studies between the two girls."

I'm not a scientist. I'm barely a high school graduate. But before I would dispute the accuracy of what the AC said by defending it by stating "Boffins at Yale university, having studied the women's raw DNA data, said all the numbers should have been dead on.", I would absolutely start with attempting to answer those questions above.

At this time, while I believe strongly that these twins are in fact identical, and I believe they were verified at some point in their lives (whether by a "boffin at Yale" or elsewhere) to have been conceived from the same egg, I would not offer it as evidence without further research or conclusive proof. That would be an insult to science. My belief is irrelevant unless this were some high school sociological report for a political science class. And even then, that's a misnomer.

Let's address ancestry as well.

I read the results and let's take an excerpt as you did.

- 23andMe reckoned the twins are about 40 per cent Italian, and 25 per cent Eastern European;

- AncestryDNA said they are about 40 per cent Russia or Eastern European, and 30 per cent Italian;

- MyHeritageDNA concluded they are about 60 per cent Balkan, and 20 per cent Greek.

I'm no expert on ancestry, but I have questions.

- What does it mean to be Italian, Eastern European, Russian, etc...?

- Within how many generations would they be counting?

- Would the 40% Italian referring to the 1800s very likely mean Balkan and Greek in 200 AD?

- Is Russian of Eastern European, East Asian or Central Asian descent?

- Is Balkan Eastern European or Russian?

What I'm reading here is that all three companies were in 100% agreement. If anything, for an imperfect science, it's impressive how perfectly they agree.

As another item

- Two of the tests reported that the twins had no Middle Eastern ancestry, while the three others did, with FamilyTreeDNA saying 13 per cent of their sample matched with the region.

I'm pretty sure that if we believe modern science, everyone on earth should come from the Middle East and before that Africa. So unless the twins come from another planet, they have Middle Eastern descent. The question is, how many generations back? Also, what counts as Middle Eastern descent? If we refer to religion, only a few thousand years ago Jerusalem was at war with the Seleucid empire, which almost definitely spread seeds from the Middle East to the Mediterranean... or maybe the Mediterranean seed was spread widely enough to influence what is considered the Middle Eastern bloodline today.

Again, I don't see the data conflicting, I would need far more information to sound even moderately intelligent.

And I'll attack one more

- On top of this, each test couldn't quite agree on the percentages between the sisters, which is odd because the twins share a single genetic profile.

This is absolutely 10000% not true. They are born from the same egg. I can't find her birth date, but I'd put her at about 33 years old, but that may simply be because she dresses like an old lady. Either way, let's round to 30 years old.

Unless she and her sister have been in the womb together until last week, which I doubt, they should have been through infinitely different lives causing infinite changes in their genetic code from one another. Their genes can't possibly be that similar anymore. Almost certainly less than 99% similar now. That's at least 30 million differences from one another, given that the human genome consists of 3 billion nucleotides or genetic pairs.

Consider that simply walking in the sun causes genetic mutation. Pressures from drinking water can cause genetic mutation (I read a peer reviewed paper on this, but can't cite it).

So, please don't bash the AC, he's as full of shit as I am... the only thing I got out of this article is that some girl who has a twin sister has now made an international impact with her ignorance of science in an effort to make a headline to increase the ratings of her TV show which should have served the purpose of informing people of facts.

I would like to see some real research on this topic from people who are far smarter than me. At this time, I have many questions and wouldn't even know where to start answering them.

IBM to kill off Watson... Workspace from end of February

CheesyTheClown

Maybe if someone knew it was there?

Ok, so I work for a company which is A LOT older than IBM, has one tenth the head count but is 1/4 the size in dollars.... and while IBM is a pretty interesting company, I wonder if there's something failing when IBM isn't sitting at my office begging me to buy their stuff.

My company has spending to do for our customers which could be worth several points on their share value if they were to make an effort. But while we give Cisco about $2 billion a year, I don't think IBM even tries to gain our love. And to be honest, with the project I'm working on, if I had even known that Watson Workspace was there, I might have considered it as a solution.

IBM is failing because Ginni is targeting only C-level business and she's an acquisitions and mergers monster. She's great at that. Every time she loses an important customer, she buys the company she lost them to. But here's the thing. Working for the world's second largest telecom provider, within walking distance of the CEO's office, I couldn't tell you how I would even start a conversation with IBM. I bet they have a pile of crap I could find interesting that could save me a lot of time and work to make deliveries to my customers. I'd even consider buying a mainframe and writing new 20-year projects on it. But I have no idea where I would get the people or expertise or even the contacts required to even talk with IBM.

I guess IBM only wants to sell to people who know what they have.

Oracle boss's Brexit Britain trip shutdown due to US government shutdown

CheesyTheClown

Re: WTAF?

I was wondering about this as well. I don't care if you're Donald Trump or even someone important, try getting through Heathrow without a passport. You can't even transfer planes in the same terminal without being treated like a criminal at Heathrow. I've traveled business or first class through Heathrow many times and I just finished moving all my frequent flier miles (200,000+ executive club miles) to One World because I refuse to travel through the UK anymore since the security got stupid.

So, the author is awful. This was an expired passport.

I'm an American citizen and I've had passports replaced in 12 hours or less by FedExing the forms through Boston and having them walked through by an agent. It's pretty simple actually.

But during a shutdown, I'd imagine that this is not possible. That said, it's an Oracle thing, it's not really important that people like Hurd show up... even Larry does things like ditching his keynote at Oracle conferences to play with his boats.

Insiders! The good news: Windows 10 Sandbox is here for testing. Bad news: Microsoft has already broken it

CheesyTheClown

Re: Windows sandbox

Ok... first of all, Sandbox is an insider feature which means that things will and DO go wrong. It's not meant to be reliable software, it's meant to be bleeding edge. Think of it as the alpha and beta versions of times past.

Second of all, security fixes from release builds generally come into sandbox builds. The security fixes are tested against release and if they're tested against insider as well, it's a bonus.

Internet Explorer is an application built on top of many Windows APIs including for example the URL engine for things like fetching and caching web media. It's like libcurl. Just like libcurl, wget, etc... there are always updates and security patches being made to it. So, when making updates and security fixes to the web browser, if fixes need to be made to the underlying operating system as well, they are.

That said, I've had fixes to IE break my code many times. This is ok. I get a more secure and often higher performance platform to run my code on. It's worth a patch or two if it keeps my users safe.

As for what the sandbox does, I'd imagine that the same APIs which IE uses to sandbox the file system from web apps requesting access to system resources (probably via Javascript) are used to provide file system access for example. If I had to guess, it probably is tied to the file linking mechanism.

London Gatwick Airport reopens but drone chaos perps still not found

CheesyTheClown

Re: How hard is the approximate localization of a 2.4GHz sender operating in or near an airport?

Let’s assume for the moment that we were to plan the perfect crime here. This is a fun game.

1) Communications

Don’t use ISM; instead use LTE and VPNs. It’s pretty cheap and easy to buy SIM cards and activate them in places like Romania without a postal address or even ID. Buy cards that work throughout Europe. They’re cheap enough and far more effective for ranged communication than 2.4GHz. Additionally, jamming is a bigger problem as you can’t jam telephone signals at an airport during a crisis. In a place like England where people are dumb enough to vote Brexit and have fights in bars over children kicking balls around, it would cause riots.

2) Use 3D printed drones. Don’t buy commercial, they’re too expensive and too easy to track. Just download any of a hundred tested designs and order the parts from a hundred different Chinese shops.

3) Don’t design for automatic landing.

4) Add solar cells and plan ahead. Don’t try planting them yourself, instead, launch them 20 miles away from different sites and have them fly closer each day at low altitude until they are close.

5) Don’t depend on remote control. Write a program which uses GPS and AGPS to put on an “animatronic performance”. Then they can run for hours without fear of interference.

6) Stream everything to Twitch or something else anonymously. Send streams to Indonesian web sites or something like that instead.

I have to go shopping... but it would be fun to continue ... share your thoughts.

It's the wobbly Microsoft service sweepstake! If you have 'Teams', you've won a lifetime Slack sub

CheesyTheClown

Re: Of course, given recent statements from the rumor mill...

What do you mean poorly on Linux? I’m not baiting, I’m currently investing heavily into Powershell on Linux and would like a heads up.

Bordeaux-no! Wine guzzling at UK.gov events rises 20%

CheesyTheClown

This is true

I've met polite people from France.

I've actually met pretty girls from England

I've met Finnish people who understand sarcasm

I've met Americans who aren't entirely binary in every one of their beliefs

I haven't bothered with American wines in the past 20 years, though I know they have improved.

I generally drink Spanish wine (Marqués de Cáceres, Faustino I) as they are compatible with my food preference.

I have a fridge full of Dom Perignon I have been collecting for 20 years to serve at my children's weddings.

The one thing I can be sure of though... booze is simply too strong these days and it's ruining it across all international borders. When I read comments from the UK about people who work in places where booze is permitted or not, I'm generally shocked. A glass of wine by 1950s and earlier standards would have been something like 3-5% alcohol, and it may have been watered down as well. These days, at 13% and higher, the person drinking it is probably useless for a while afterwards.

Also, a glass of wine in the 1950's was considerably smaller than it is today. Having a glass of wine with lunch really didn't provide enough alcohol to consider. Today however, people are basically getting buzzed at lunch.

I would love to see a return to when "drinking wine" or "table wine" was a good idea. Just enough alcohol to make the flavor work, but not enough to get blasted. I've had terrible experiences with modern wines. They all taste like alcohol. It's almost as if we're judging the quality of a drink based on how well we believe it will mess us up. I wonder if the European nations still remember how to make wine properly and if they could actually create wines that earned their merits on flavor as opposed to toxicity.

Well now you node: They're not known for speed, but Ceph storage systems can fly

CheesyTheClown

Re: 6ms+ w NVMe

I was thinking the same thing. But when working with asynchronous writes, it’s actually not an issue. The real issue is how many writes can be queued. If you look at most of the block based storage systems (NetApp for example) they all have insanely low write latency, but their scalability is horrifying. I would never consider Ceph for block storage since that’s just plain stupid. Block storage is dead and only for VM losers who insist on having no clue what is actually using the storage.

I would have been far more interested in seeing database performance tests running on the storage cluster. I also think that things like erasure coding are just a terrible idea in general. File or record replication is the only sensible solution for modern storage.

A major issue, which most people ignore on modern storage and which is why block storage is just plain stupid, is transaction management on power loss. Writes take a really long time when they are entirely transactional. NVMe as a fabric protocol is a really, really, really bad idea because it removes any intelligence from the write process.
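
As a rough illustration of that transactional cost (the file names and block count here are arbitrary, and absolute numbers will vary wildly by device and filesystem), compare plain buffered writes with writes that are forced to stable storage after every block:

```c
/* Times 256 x 4KB writes: once buffered, once with fsync() after every
 * block. The gap between the two runs is the price of making each write
 * fully transactional, which is what hurts latency the most. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static double bench(const char *path, int flush_each_write)
{
    char block[4096];
    memset(block, 'x', sizeof(block));

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < 256; i++) {
        if (write(fd, block, sizeof(block)) != (ssize_t)sizeof(block)) {
            perror("write"); exit(1);
        }
        if (flush_each_write)
            fsync(fd);   /* wait until the block is on stable storage */
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("buffered writes:     %.4f s\n", bench("bench_buffered.dat", 0));
    printf("fsync after each 4K: %.4f s\n", bench("bench_fsync.dat", 1));
    return 0;
}
```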

The main problem with write latency for block storage on a system like Ceph is that it’s basically reliably storing blocks as files. This has a really high cost. It’s a great design, but again, block storage just is so wrong on so many levels that I wish they would just kill it off.

So if Micron wants to impress me, I’d much rather see a cluster of much much smaller nodes running something like MongoDB or Couchbase in a cluster. A great test would be performance across a cluster of Latte Panda Alpha nodes with a single Micron SSD each. Use gigabit network switches and enable QoS and multicast. I suspect they should see quadruple the performance that they are publishing here for substantially less money.

Better yet, how about a similar design providing high performance object storage for photographs? When managing map/reduce cluster storage, add hot and cold as well, it would be dozens of times faster per transaction.

This is a design that uses new tools to solve an old problem which no one should be wasting more money on. Big servers are soooooo 2015.