* Posts by CheesyTheClown

779 publicly visible posts • joined 3 Jul 2009

Hey Europe, your apathetic IT spending is ruining it for everyone

CheesyTheClown

Downsizing

My organization is moving away from large servers in favor of better code.

I believe Europe is generally far ahead of the US on IT spending. We adopt sooner and we learn sooner.

Let’s consider Cisco, VMware and Windows Server as a platform. This is because these are the prices I know best.

Consider what it takes to build a small corporate data center based on minimalistic best practices, able to ensure that at least one server is operational and able to handle business at all times.

We need two data centers, each configured as follows:

- 3 rack servers

- 2 spine switches

- 2 leaf switches

- 2 data center bridging switches

- VMware licenses (vCloud Foundation)

- Windows Server enterprise licenses.

Also, there needs to be at least two MPLS or dark fiber connections between data centers.

This design is barebones. It contains no applications, SQL storage, blob storage or NoSQL. This is just the absolute base configuration of Windows Server, Active Directory, file server, etc.

The cost of this in retail pricing (which will be about 40% less in reality) is about $1.6 million U.S.

If you build anything smaller than this, you should be in the cloud using an IaaS platform which provides this for you.

Let's also add that if you buy said system, it's not plug and play. You need a slew of IT consultants to build, configure and run it. It's a long-term project. Consider that TCO is measured in the millions over the years.

It's far better to use smaller, more classic platforms. In fact, I've spent a great deal of time evaluating IBM i for new projects. Eventually, I settled on using Raspberry Pi with Apache OpenWhisk. We'll build all new systems using old-school CICS methodologies on OpenWhisk. We'll start by adding proper C# support and building a role-based access control system that meets our security needs.
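For the curious, here's a rough sketch of the shape of one of those OpenWhisk actions, using the stock Python runtime (the C# runtime is the part we'd be adding ourselves). The role table and names are made-up placeholders, not our actual RBAC design:

```python
# Hypothetical OpenWhisk action sketch (Python runtime): a short, stateless,
# CICS-style unit of work behind a toy role check. ROLE_TABLE stands in for a
# real role-based access control backend.
ROLE_TABLE = {"invoicing": {"alice", "bob"}}

def main(params):
    user = params.get("user", "")
    role = params.get("role", "")
    if user not in ROLE_TABLE.get(role, set()):
        return {"statusCode": 403, "body": {"error": "access denied"}}
    # The actual business transaction would run here and return its result.
    return {"statusCode": 200, "body": {"message": "processed request for " + user}}
```

Each action stays small and stateless, which is exactly what makes the old CICS way of thinking map so cleanly onto FaaS.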

So, we’ll go from about €10 million a year for new projects to about €2 million a year since we’ll stop spending money on physical hardware like servers.

Remember those holy tech wars we used to have? Heh, good times

CheesyTheClown

Re: NetWare or NT

They both had their strengths. NetWare was more stable in the beginning, but by NT 3.51, when Windows came into its own, people had already chosen their religion.

NDS was really great, but when Active Directory came along, most of us didn’t need all the features. When group policy happened, NDS was more of a burden than a help.

Novell was way too slow to adopt IP as well. IPX was great for the LAN but wouldn't scale for the WAN. The multiprotocol router was too expensive, too.

Eventually Novell forgot who they were and their file server was mediocre. Their print server was no longer needed. Their identity services were obsolete. They lacked group policy support. They didn't do the Internet.

Strangely, I was in an NDS training course for a new deployment two weeks ago. It’s still there and it’s still great. It actually can be used to make Linux manageable.

CheesyTheClown

Re: Seen loads

NetWare, NT, Lan Manager and others were all pretty good if the people on staff understood them.

Clipper and FoxPro and dBase IV were all quite good too.

DrDOS was great, but MS-DOS was good too. Once you installed PC Tools Deluxe they were all pretty good.

Deskview/X on one computer, Windows for Workgroups 3.11 on the other.

My big one was Microsoft C 5.1 vs. Turbo C++ v1.0

I used both, but couldn’t do C++ on MS without Glockenspiel and Codeview was a whore.

CheesyTheClown

Re: "something that isn't backed by anything of value can have value?"

By that logic, you would believe that Tether, which is SEC-approved, based in California, locksteps its currency to the USD and requires an actual holding of 1 USD per circulating Tether, would count?

I’m just chiming in to make noise.

Cryptocurrency can be legitimized. In fact, it could be the replacement for plastic and paper sooner or later. It is very likely a good solution in the long term. For me, though, things like Bitcoin, Monero and others are a bit of a disaster.

CheesyTheClown

Re: Um....

My childhood 24/7 ballgags, brownie mix and clown porn.

(Pretty much the best movie quote of the 21st century)

CheesyTheClown

Re: No mention of systemd?

I like it :) But I have a use case for it.

I honestly don’t mind either way. I prefer using and developing for systemd. But that makes me special it seems.

CheesyTheClown

Re: "Religion gave way to pragmatism"?

Is it bad that I like systemd?

I actually left Linux a long time ago because of the impressive amount of stupid involved with /etc and others. It was a flaming shithole and still is.

systemd is a massive improvement and I am slowly moving back to Linux. In fact, I'm replacing a few dozen Cisco servers running VMware and Windows with a few hundred custom Raspberry Pis. I feel very strongly that the move would have been practically impossible without systemd.

But I guess I’m not part of the masses. Oh well.

Security hole in AMD CPUs' hidden secure processor code revealed ahead of patches

CheesyTheClown

Re: BIOS updates? What BIOS updates?

And the reason for UEFI was good. BIOS required 16-bit code. BIOS was not a beautiful or secure thing. In fact, BIOS was a disaster.

Consider that the x86 BIOS implemented a software interrupt interface which required chaining to add support for additional devices. Booting from anything other than ATA was limited to emulating a hard drive protocol dating back to the late 70s.

The total space available to implement boot support for a new block device was a few kilobytes and was a nightmare for updating.

UEFI is a glorious update but certainly could have been better. It is, however, hundreds of times better than BIOS ever could be. The question is whether hardening it is an option. There is no reason why hardening UEFI isn't possible. In fact, the main problem with UEFI is that system administrators are deprived of a suitable set of books, videos, etc. to make them competent on the platform.

Keep in mind that UEFI is based on platforms which date back to the 70s as well. We lived in the dark ages in the PC world for way too long. If you ever used a SPARC or a MIPS, you would know that the UEFI design is brilliant.

Here come the lawyers! Intel slapped with three Meltdown bug lawsuits

CheesyTheClown

Re: OK, I'll bite

Not only are the fixes delivered through software; hardware fixes wouldn't work anyway.

So, here are the choices:

1) Get security at the cost of performance by properly flushing the pipelines between task switches.

2) Disable predictive branch execution, slowing things down MUCH more... as in, make the cores as slow as the ARM cores in the Raspberry Pi (which is awesome, but SLOW).

3) Implement something similar to an IPS in software to keep malicious code from running on the device. This is more than antivirus or anti-malware; it would need to be an integral component of web browsers, operating systems, etc. Compiled code can be a struggle because finding patterns that exploit the pipeline would require something similar to recompiling the code to perform full analysis on it before it is run. Things like Windows SmartScreen do this by blocking unknown or unverified code from running without explicit permission. JIT developers for web browsers can protect against these attacks by refusing to generate code which makes these types of attacks possible.

The second option is a stupid idea and should be ignored. AMD's solution, which is to encrypt memory between processes, is useless in a modern environment where threads are replacing processes in multitenancy. Hardware patches are not a reasonable option. Intel has actually not done anything wrong here.

The first solution is necessary. But it will take time before OS developers do their jobs properly and maybe even implement ring 1 or ring 2, finally, to properly support multi-level memory and process protection as they should have 25 years ago. On the other hand, the system call interface is long overdue for modernization. Real-time operating systems (and microkernels generally) have always been slower than Windows or Linux... but they have all optimized the task switch for these purposes far better than other systems. It's a hit in performance we should have taken in the late 90s before expectations became unrealistic.

The third option is the best solution. All OS and browser vendors have gods of counting clock cycles on staff. I know a few of them and even named my son after one as I spent so much time with him and grew to like his name. These guys will alter their JITs to handle this properly. It will almost certainly actually improve their code as well.

I’m pretty sure Microsoft and Apple will also do an admirable job updating their prescreening systems. As for Linux... their lack of decent anti-malware will be an issue. And VMware is doomed as their kernel will not support proper fixes for these problems... they’ll simply have to flush the pipeline. Of course, if they ever implement paravirtualization like a company with a clue would do, they could probably mitigate the problems and also save their customers billions on RAM and CPU.

CheesyTheClown

Re: OK, I'll bite

I agree.

The patches which have been released thus far are temporary solutions and, in reality, the need for them is because the OS developers decided from the beginning that it was worth the risk to gain extra performance by not flushing the pipeline. Of course, I haven't read the specific design documents from Intel describing the task switch mechanism for the affected CPUs, but after reading the reports, it is insanely obvious in hindsight that this would be a problem.

I also see some excellent opportunities to exploit AMD processors using similar techniques in real-world applications. AMD claims that their processors are not affected because, within a process, the memory is shielded, but this doesn't consider multiple threads within a multi-tenant application running within the same process... which would definitely be affected. I can easily see the opportunity to hijack, for example, WordPress sites using this exploit on AMD systems.
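To be clear about why per-process protection misses the point, here's a trivial Python illustration (no exploit, just standard threading): two tenants running as threads of one process share the same address space, so anything that only isolates processes can't separate them.

```python
# Two "tenants" as threads in one process: both see the same memory, so a
# protection scheme that only separates processes cannot isolate them.
import threading

shared_state = {"tenant_a_api_key": "not-so-secret"}

def tenant_b():
    # Tenant B reads tenant A's data directly; there is no process boundary.
    print("tenant B sees:", shared_state["tenant_a_api_key"])

worker = threading.Thread(target=tenant_b)
worker.start()
worker.join()
```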

This is a problem in OS design in general. It is clear mechanisms exist in the CPU to harden against this exploit. And it is clear that operating systems will have to be redesigned, possibly on a somewhat fundamental level to properly operate on predictive out of order architectures. This is called evolution. Sometimes we have to take a step back to make a bigger step forward.

I think Intel is handling this quite well. I believe Linux will see some much needed architectural changes that will make it a little more similar to a microkernel (long overdue) and so will other OSes.

I’ll be digging this week in hopes of exploiting the VMXNET3 driver on Linux to gain root access to the Linux kernel. VMware has done such an impressively bad job designing that driver that I managed to identify over a dozen possible attack vectors within a few hours of research. I believe very strongly that over 90% of that specific driver should be moved to user mode which will have devastating performance impact on all Linux systems running on VMware. The goal is hopefully to demonstrate at a security conference how to hijack a Linux based firewall running in transparent mode so that logging will be impossible. I don’t expect it to be a challenge.

Nvidia: Using cheap GeForce, Titan GPUs in servers? Haha, nope!

CheesyTheClown

No one mentioned cloud providers

Seems strange to me that no one here noticed that this is primarily directed at forcing Microsoft, Google and Amazon to buy server parts instead of consumer parts.

I’m pretty sure this is an effort by NVidia to

A) sell more data center GPUs

B) give Cisco, Dell and HP a business case to continue building NVidia mezzanines for their servers

C) force companies to pay for ridiculously overpriced technologies like GRID on VMware as opposed to simply using regular desktop drivers on Hyper-V, which is A LOT less expensive. And by a lot less, think in terms of about $100k for a small 200-user VDI environment... just for the driver licensing.

This isn’t targeted at small companies or users. This is targeted at companies like Amazon who are “cheating” NVidia out of probably a hundred million dollars a year by using consumer grade cards.

Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign

CheesyTheClown

Counting chickens?

First, this is news and, while I don't buy into the whole fake news thing, I do buy into fantastic headlines without proper information to back them up.

There are some oddities here I’m not comfortable with. The information in this article appears to make a point of it being of greatest impact to cloud virtualization, though the writing is so convoluted, I can’t be positive about this.

I can't tell whether this is an issue that will actually impact consumer-level usage. I also can't tell whether there would actually be a 30% performance hit or whether it would be something more like 1% except in special circumstances. The headline is a little too fantastic and it reminds me of people talking about how much weight they lost... and they include taking off their shoes and wet jacket.

Everyone is jumping to conclusions that AMD or Intel is better at whatever. Bugs happen.

Someone claims that the Linux and Windows kernels are being rewritten to execute all syscalls in user space. This is generally crap. This sounds like one of Linus’s rants about to go haywire. Something about screwing things up for the sake of security as opposed to making a real fix.

Keep in mind, syscalls have to go through the kernel. If a malformed syscall is responsible for the memory corruption, making a syscall in another user thread will probably not help anything as the damage will be done when crossing threads via the syscall interface.

Very little software is so heavily dependent on syscalls. Yes, there are big I/O things, but we're not discussing the cost of running syscalls, we're talking about the call cost itself. Most developers don't spend time in dtrace or similar profiling syscalls since we don't pound the syscall interface that heavily to begin with.

Until we have details, we’re counting chickens before they’ve hatched. And honestly, I’d guess that outside of multi-tenant environments, this is a non-issue otherwise Apple would be rushing to rewrite as well.

In multi-tenant environments, there are three generations Intel needs to be concerned with:

Xeon E5 - v1 and v2

Xeon E5 - v3 and v4

Xeon configurable

If necessary, Intel could produce three models of high-end parts with fixes en masse, and insurance will cover the cost.

Companies like Amazon, Microsoft and Google may each have a million systems running this stuff that could experience issues, but in reality, in PaaS, automated code review can catch exploits before they become a problem. In FaaS, this is not an issue. In SaaS, this is not an issue. Only IaaS is a problem, and while Amazon, Google and Microsoft have big numbers of IaaS systems, they can drop performance without the customer noticing, scale out, then upgrade servers and consolidate. Swapping CPUs doesn't require rocket scientists and, in the case of OpenCompute or Google cookie-sheet servers, shouldn't take more than 5 minutes per server. And to be fair, probably 25% of the servers are generally due for upgrades each year anyway.

I think Intel is handling this well so far. They have insurance plans in place to handle these issues and although general operating practice is to wait for a class action suit and settle it in a fashion that pays a lawyer $100 million and gives $5 coupons to anyone who fills out a 30 page form, Amazon, Google and Microsoft have deals in place with Intel which say “Treat us nice or we’ll build our next batch of servers on AMD or Qualcomm”.

I'd say I'm more likely to be affected by the lunar eclipse in New Zealand than by this... and I'm in Norway.

Let's wait for details before making a big deal of it. For people who remember the Intel floating point bug, it was a huge deal!!! So huge that, after some software patches came out, there must have been at least 50 people worldwide who actually suffered from it.

Storage startup WekaIO punts latency-slashing parallel file system tech

CheesyTheClown

Re: Can't be an RTOS either

I remember as a kid, a school teacher challenged me and some friends to cut a hole in a business card large enough to walk through. After 10 minutes, we were walking up to the teacher to explain it couldn't be done. Then a friend turned heel, went back and made a series of cuts, and we walked through the hole.

This puzzle allowed me to be the kid with the scissors.

A real-time OS needs to handle “interrupts” as they come in. Code in an RTOS should never block and if disk access is required, the program should send a read request as a queued event to the disk I/O subsystem and receive the result back once it is finished. Compared to when I was coding for QNX in the 90s, this type of programming is extremely easy thanks to improved language support with extensions like lambda functions.
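Here's a rough sketch of that queued, non-blocking pattern in Python with asyncio (nothing QNX-specific about it): the handler hands the read off to the I/O subsystem and only resumes when the result comes back. The path is just an example.

```python
# Non-blocking read sketch: queue the blocking work to an executor and resume
# when the result arrives, so the event loop (the "RTOS" here) never blocks.
import asyncio

async def handle_read_request(path: str) -> None:
    loop = asyncio.get_running_loop()
    data = await loop.run_in_executor(None, lambda: open(path, "rb").read())
    print(f"{path}: {len(data)} bytes")

asyncio.run(handle_read_request("/etc/hostname"))
```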

Now, the “time” aspect of RTOS generally would mean that as soon as a timer interrupt fires at a specific interval on the system’s programmable hardware timer, a deterministic number of cycles should pass before the entry point to the handler. This allowed “Real Time”.

If we eliminate the time aspect of the processes involved... which a storage system wouldn't be concerned with anyway... it is possible to develop an operating system that behaves as an RTOS, provided that "interrupt" handling can be made deterministic and that all code is written as non-blocking, handling all protocols using lambdas and "async programming patterns". So, as opposed to a general-purpose OS, which task switches without reason, an RTOS would schedule everything based on prioritized event queues.

So, the main issue is how to process interrupts in real time within a user-space process.

1) Implement interrupt priority for waking the process's scheduler on an incoming interrupt. This can be done in a few creative ways: a kernel module is one method; alternatively, setting processor affinity to reserve 100% of a CPU core and running a spin-lock sleep could work (see the sketch after this list). There are dozens of ways to do this.

2) Expose a second-level MMU to the app, allowing the process to directly handle protection faults. This is the obvious method of providing deterministic interrupt handling.

3) Expose the virtual NIC as a PCIe device to the app. Then, when the app is managing its own MMU with its own GDT and IDT, the PCIe adapter can trigger faults for MMIO interrupts within application-managed memory space. So, as the RTOS app sets virtual protection on an Ethernet memory address, it should signal the app in a reasonably "real-time" fashion.
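To make option 1 concrete, here's a minimal Linux-only sketch in Python of reserving a core with processor affinity and spin-polling a flag instead of sleeping in the scheduler. The flag is just a stand-in for whatever doorbell the real interrupt source would set.

```python
# Option 1 sketch: pin this process to one CPU core and busy-poll a doorbell,
# trading a whole core for deterministic wake-up latency. Linux-only.
import os
import threading

DOORBELL = threading.Event()          # stand-in for the real interrupt doorbell

def interrupt_source():
    DOORBELL.set()                    # simulates a device raising an "interrupt"

os.sched_setaffinity(0, {0})          # reserve CPU core 0 for this process
threading.Timer(0.1, interrupt_source).start()

while not DOORBELL.is_set():          # spin instead of blocking in the kernel
    pass
print("event handled on the reserved core")
```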

So, while I was with you all the way up to writing a full page of text... I got to just in front of the teacher's desk before turning back to cut a hole in a business card to walk through. As long as the events aren't actually timer triggered, I believe an RTOS in user space via hardware virtualization is entirely reasonable. :)

The hounds of storage track converged and hyperconverged beasts

CheesyTheClown

Re: Integrated Systems

Umm... what do you mean high end?

Have you seen the performance numbers on HyperFlex, FlexPod, VxRail, etc.?

My goodness, SQL query times to be ashamed of. MongoDB performance to make a grown man cry. Hadoop performance which looks like someone is taking downers. Object storage numbers of a pathetic nature.

These are low end systems for companies who attempt to compensate for unskilled staff by throwing millions at Dell, Cisco and HP.

I’ll give you a good means of knowing your IT department is incapable of doing anything useful. They actually buy storage systems instead of database systems.

Another clue, they think in terms of VMs and containers. This is a pretty good sign they don’t know what they’re doing.

If you have 10GbE or faster networking to the servers, you probably have no clue what you're doing.

If you have servers dual homed to network switches, your system probably is designed to fail and outage windows are scheduled all the time for no apparent reason.

No... these are low-end systems that get performance through brute force. Unless you are performing oil discovery, mapping genomes, etc., they are about as low end as you can get. Of course, hyperconverged storage is scarily slow compared to specialized storage.

Look at scale out database solutions. They cost far less, require far less hardware and perform far better than what you’re used to. And no... you don’t need VM storage except for your legacy crap which you shouldn’t deploy more of anyway.

That said, VDI is a solution for super big servers... but even then, you shouldn't have high storage requirements. The base VM should be replicated to every server in the pool and all user storage should be centralized (OneDrive for Business, for example). And for that, a simple Windows Server 2016 Core install with an Enterprise license and Kubernetes should handle it. Though Project Honolulu may automate it as well.

Again, no need for storage subsystems, SANs or anything stupid like that. It’s all about the databases.

Missed opportunity bingo: IBM's wasted years and the $92bn cash splurge

CheesyTheClown

Failure to attract new business

IBM is seen by guys like me in the outside world as unapproachable. I generally am responsible for $10-$25 million a year in solution purchase decisions. I have been very interested in IBM as parts of those solutions but regularly lean towards “build your own” solutions.

IBM should be working their asses off on making access to their technologies interesting to the GitHub world. For example, right now the world is moving towards FaaS services, which have been IBM's core business model for nearly 50 years, and AWS and Microsoft will take all of that from them. I have a small $1.8 million budget for a proof of concept over the next 18 months. If it pans out, it will become the core system for the next 10 years at a 2 billion euro multi-national company which is a subsidiary of a top-5 global telecom company.

I'm designing the system using the AS/400 as the architecture of our platform. Since IBM is something which feels unapproachable to me, I'll invest in a small group of people to rebuild all the key components of the AS/400 platform instead of simply buying them from IBM. This could easily be worth $40-$50 million for IBM, but I wouldn't even know who to call to open a conversation with them. It's easier to buy some out-of-print manuals on the platform architecture, build it ourselves and open source it.

Microsoft Surface Book 2: Electric Boogaloo. Bigger, badder, better

CheesyTheClown

Re: Middle Income vs Middle Class

Haha... it all depends whether you’re poking fun or intentionally taking offense to the letter as opposed to the intent of what was written.

I sadly wish I could say that I didn't have to sit in a room full of teenage boys comparing specifications of Jordans. I would gladly be classless if that is the cost. :)

To be fair, one pair was his big birthday present and, for the other pair, he was allowed to spend a few hundred dollars of his confirmation money before putting the rest in savings. I don't believe there is anything more than placebo with regards to "tech" in sneakers. But he doesn't ask for much, so as long as he brings good grades and doesn't go rotten, we don't mind spoiling him occasionally.

Honestly, I'd buy him another pair if he brought me 10 A+ grades in Math, Science and Norwegian in a row. I figure if I buy his grades now, I won't have to pay as much to support him later.

CheesyTheClown

Re: Buy the US 15” from Amazon or similar

Well played!

P.S. - You made my day :)

CheesyTheClown

Buy the US 15” from Amazon or similar

$3,200 gets you a GTX 1060 and a 15" screen. Then, after shipping and VAT, it's about £3,200.

I’ve had my Surface Book for two years now, top model first generation and I believe it was pretty cheap. I paid about £3500 for it after overnight shipping and taxes. It has been the best laptop I’ve ever owned and I still use it 8-18 hours a day. It’s my development and gaming PC and I’ve experienced some sleep issues with it, but never had a problem otherwise.

I owned a Samsung Series 7 Slate (the machine Windows 8 was designed for), a Surface Pro, Surface Pro 2 and Surface Pro 3. Since I switched from Apple to Microsoft, my life has only gotten better with each generation. Altogether, I've owned about 50 laptops over the years and, other than my Wacom Cintiq Companion, I have never been so happy.

Consider a machine which costs about $150 a month to own over a two-year life span. I spend more than that on cigarettes or coffee. The fact is, $150 a month isn't even a rounding error. Add to that the tax deduction associated with it, which makes it closer to $135 a month.

I know there are people out there who think in terms of what it costs on the first day, but for a typical middle-class household that takes home $10,000 a month after tax, even if we had to pay for it (as opposed to the boss), $135 of $10,000 for a tool you use for 90% or more of your work doesn't matter at all.

P.S. I said middle class, not middle income. Middle income is politician talk for making the less poor of the poor feel like they're not being screwed by a classist society. Middle class means white-collar home owners with college degrees. If you need help telling the difference, middle-class teenagers own one or two pairs of Jordans. Middle-income teenagers own 10-30 because they lack the class to manage their money.

Storage Wars: Not very long ago, a hop, skip and a jump away...

CheesyTheClown

My predictions for 2018-2020

The storage market will plummet because companies will stop building storage systems when they should be building database systems.

1) Azure Stack, OpenStack, AWS and Google Cloud will start being seen on-premises at enterprises. These systems have been saving massive amounts of space by dumping traditional storage systems.

2) Storage will no longer be blocks or files or strictly objects. Systems will store data strictly as structured, unstructured or blob. All three of these systems scale out beautifully by simply adding more inexpensive nodes and eliminate the complexity and cost of fault-tolerant designs. Records are sharded across nodes and, based on heuristics, stored a minimum of three times and archived to spinning disks as the data grows colder or is marked deleted (see the sketch at the end of this post). As such, there is far less waste. Performance increases almost infinitely as more inexpensive nodes are added, and map/reduce technologies give incredible performance. As such, file and block storage will start being shunned as they are generally a REALLY BAD IDEA.

3) Stateless applications will make great progress taking over. All new applications will be developed towards cloud platforms and on functions or lambdas (FaaS). The cost difference is so incredibly drastic that companies will even start rewriting systems for FaaS. The reason is that FaaS uses approximately 10,000 to 25,000 times fewer computing resources and is far easier to operate. Applications running on crappy networks with crappy nodes repeatedly outperform the best Cisco or Oracle have to offer. As for storage, what's better? Two super-fast 32Gb SAN fabrics shared across massive servers accessing gigabytes per second, or a well designed system running on 200 Raspberry Pis with map/reduce technology for distributing the load and gathering information as it reaches where it is needed? That's right, option #2 will destroy the absolute best that NetApp or EMC have to offer every single time.

4) Virtual machines won't go away and companies will still piss away good money after bad to maintain and grow their virtual machine platforms without even knowing why. They'll look at VM and storage statistics and not bother talking with the DBA to identify whether there is something which can be done to cut CPU and storage requirements. Companies will learn that no matter how much more advanced data centers get, they always cost more... not less. The minimum entry price to a virtualized data center today (assuming standard industry discounts) is about $1.9 million. If you cut corners and spend less, you really really really should be in the cloud instead. It costs $1.9 million to buy 6 servers, 4 switches and all the applicable software licenses to run a VMware data center. A 6-blade data center is the minimum configuration for almost guaranteeing one machine is always up. Dumping VMware and using Hyper-V will cut the cost to under a million, and using Huawei instead of Cisco can save maybe another hundred thousand... oddly, as a hard-core Cisco guy, I'm considering Dell as a replacement for Cisco servers at the moment as Cisco desperately needs to rewrite UCS Manager now that data centers have completely changed.

I can go on... but storage as we know and use it today is going to be nearly dead because scaleout has come to SQL, NoSQL and Blob. There just isn't any reason to invest in SAN or file storage anymore.
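To illustrate the sharding in point 2, here's a toy Python sketch that hashes a record key onto a ring of cheap nodes and keeps three copies. Real systems layer heuristics, tiering and rebalancing on top of this, and the node names are obviously made up.

```python
# Toy sharding sketch: hash each record key onto a ring of nodes and write it
# to three consecutive nodes, so losing a node or two does not lose data.
import hashlib

NODES = [f"node-{i:02d}" for i in range(12)]
REPLICAS = 3

def shard(key: str) -> list:
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

print(shard("customer:42"))   # prints the three replica nodes for this key
```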

Guess who's developing storage class memory kit? And cooking the chips on the side...

CheesyTheClown

Perpetuating the problem?

Most enterprise storage today is devoted to virtualization waste. Using virtual machines to solve problems in a SAN-oriented environment has been an absolute planetary disaster. Many gigawatts of power are wasted 24/7 hosting systems which run at about 1/10,000th of the efficiency they should operate at. NVMe shouldn't even have a business case in today's storage world.

This announcement was interesting because it appears that the solution presented focuses on database and object storage. VMs and containers have their own section, but VMs and containers are yesterday's news. Companies deploying on VMs or containers obviously have absolutely no clue what they're doing. They're letting IT people build platforms for systems without the slightest understanding of what they're actually deploying. They're just focusing all their time on building VM infrastructures which are simply crap for business systems in general.

Let's make things simple. Businesses need the following :

- Identity

- Collaboration (e-mail, voice, video, Microsoft Teams/Slack, etc...)

- Accounting

- CRM

- Business logic and reporting

Identity can be hosted anywhere but for the purpose of accounting (a key component of identity) it should be cloud hosted. In fact, there should be laws requiring that identity is cloud hosted as it is a means of eliminating questions of authenticity of submitted logs within the court systems.

Collaboration is generally something which should work over the Internet between colleagues and b2b... but again, for the ability to provide records to courts upon subpoena, cloud hosted is best for data authenticity sake. In addition, given the insane security issues related to collaboration technologies like e-mail servers, using a service like Google Mail, Microsoft Azure, etc... is far more sensible than hosting at home. No group of 10-20 people working at a bank or government agency will ever be able to harden their collaboration solutions as well as a team of people working at Google or Microsoft.

Accounting... accounting should never ever ever be hosted on a SAN or NAS to begin with. There are 10,000 reasons why this is just plain stupid. It should only ever be hosted on a proper database infrastructure employing sharded and transactional storage with proper transactional backup systems in place. Large banks can manage this in-house, but most companies run software designed to meet the needs of their national financial accounting requirements. Those systems need to be constantly updated to stay in sync with all the latest financial regulations. To do this, SaaS solutions from the vendors of those systems are the only reliable means of supporting accounting systems today. Consider that if the new U.S. tax code makes it through Congress, there will probably be millions of accounting systems being patched soon. If this is done in the cloud and there's a glitch, it will be corrected by the vendor. If there are glitches doing so in-house (and there often are), data loss as well as many other problems will occur. Using systems which log data transactionally in the cloud as well as logging the individual REST calls allows data loss or corruption to be completely mitigated. This can't be said of on-site solutions.

CRM is a database. Every single piece of data stored in a CRM is either database records or objects associated with database records. There is absolutely no intelligent reason why anyone would ever run a SAN to store this information. Databases and object storage are far more reliable. Using systems like those offered by NetApp, EMC, etc. is insanely stupid as they don't store data logically for this type of media. They've added APIs with absolutely no regard for application requirements. Consider that databases and object storage employ sharding, which inherently provides highly granular storage tiering and data redundancy. The average company could probably invest less than $2,000 and have a stable all-flash system with 3-10 fold resiliency and performance able to shake the earth that an EMC, NetApp or 3PAR stands on. We are doing this now with 240GB drives mounted to Raspberry Pis. Our database performance is many times faster than the fastest NetApp on the market today. We have far more resiliency and a far more intelligent backup strategy as all of our data is entirely transactional.

Then there's business systems. If you need to understand how these should work, then I highly recommend you read the Wikipedia entry on AS/400. Modern FaaS platforms operate on the exact same premises as System/36, System/38 and AS/400. You can run the absolute biggest enterprises on a few thousand bucks of hardware these days with massive redundancy without the need for expensive networks or heavy CPUs. The cost is in the platform and maintaining the platform. Pick one and settle on it. Once you do, then you build a team of people who learn the ins and outs of it and keep it running for 30+ years.

As for Big Data: the only reason you need "storage class" anything here is that companies put too much on a single node. If you use smaller and lower-powered nodes, you can build an in-house, Google-style big data solution that far outstrips most systems available today in performance using purely consumer or IoT grade equipment. If you need this kind of storage, you have an IT team that hasn't the slightest idea how things like map/reduce work. Map/reduce doesn't need 100GbE or NVMe. It works pretty well over 100Mb and mSATA. Just add more nodes.
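A toy example of why map/reduce tolerates modest nodes and links: each node only scans its own local slice and ships back a tiny partial result, which the caller merges. This Python sketch just simulates the nodes in one process, and the data is obviously made up.

```python
# Toy map/reduce sketch: each "node" counts words in its own local slice and
# only the small partial counts cross the network, which is why modest links
# and mSATA-class disks are usually enough.
from collections import Counter
from functools import reduce

local_slices = [
    ["error timeout", "ok", "error disk"],   # data held on node 1
    ["ok", "ok", "error timeout"],           # data held on node 2
]

def map_phase(slice_):                       # runs on each node, near its data
    return Counter(word for line in slice_ for word in line.split())

def reduce_phase(a, b):                      # merges the small partial results
    return a + b

totals = reduce(reduce_phase, (map_phase(s) for s in local_slices))
print(totals.most_common(3))
```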

No one saw it coming: Rubin's Essential phone considered anything but

CheesyTheClown

Re: Google Play reports a mere 50,000 download of Essential's Camera app

And out of curiosity, how many of those downloads were to telephones on display in stores?

CheesyTheClown

Essential is missing something essential

A support infrastructure.

If you break the screen... where will it be replaced?

If you need support... which store will you visit?

CheesyTheClown

Re: Saw it coming, just didn't care.

I am using an iPhone 6s plus and have no intention of changing phones so long as there is no headphone jack. The headphone jack is a standard which allows me to use the same headphones on dozens of devices. And dongles don't work... at least not more than a few days before you have to buy a new one.

As for SD card... I can't go there with you. It feels too much like saying you want a floppy drive on an internet connected device. 256GB on phones these days... Do you really need more?

Pickaxe chops cable, KOs UKFast data centre

CheesyTheClown

Not entirely true

I worked as an engineer developing the circuitry and switching components for UPS systems running the safety systems at two nuclear facilities in the U.S. These systems delivered 360V at 375A, uninterrupted.

Rule #1: Four independent UPS systems

Rule #2: Two UPS systems off grid powering the safety systems, one at 100% drain, one at 50% drain

Rule #3: One UPS being discharged in a controlled manner to level battery life and identify cell defects

Rule #4: Recharge the drained battery

Rule #5: The fourth UPS is drained and recharged separately

Rule #6: Two diesel generators off grid

This system may not guarantee 100%, but it is far better than five-nines. There can be absolute catastrophic failure on the supplying grid and it does not impact the systems even one bit. This is because the systems are never actually connected to the grid. And before you come back with issues or waste related to transference, the cost benefits far outweigh the losses because the life span of 90% of the cells is extended from four years by an additional 3-5 years by properly managing them in this fashion. And the power lost at this level is far less expensive than replacing the cells twice as often.

P.S. Before you call bullshit, there was extensive (corroborated) research at the University of South Florida over a period of 15 years on this one topic.

It's a decade since DevOps became a 'thing' – and people still don't know what it means

CheesyTheClown

Re: Nope.

Haha, what do you classify as a practitioner?

I know a lot of practitioners as well, and DevOps is the current name of the evolution of business software development, operations and process management. It's not new. We learn as we go and we improve. DevOps has nothing to do with writing software to perform the IT department's job. It has nothing to do with automating things like virtual machines and containers. It has nothing to do with any of that. It has to do with operations and development working together to ensure there is a stable and maintainable platform against which to build and maintain stable software through proven test-driven development techniques.

Software defined and automation and all that crap has nothing to do with DevOps. You could of course use those things as part of the DevOps process.

The goal is to not need all the IT stuff to keep things running. You should be 100% focused on information systems instead. Start with a stack and maintain the stack. A stack is not about VMs and containers. It's about a few simple things:

- Input (web server)

- Broker (what code should be run based on the input)

- Processes (the code which is run)

- Storage (SQL, NoSQL, Logging...)

There are many different ways to provide this. One solution would be, for example, AWS; another is Azure and Azure Stack. You can also build your own. But in reality, there are many stacks already out there and there's no value in building new ones all the time. As such, while the stack vendor may employ things like automation and Kubernetes and Docker and such, they're irrelevant.
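If you do build your own, the whole stack really is that small. Here's a bare-bones sketch in Python using only the standard library; the routes and handlers are hypothetical placeholders, not a real product.

```python
# Input -> broker -> process -> storage, in miniature. The HTTP server is the
# input, ROUTES is the broker, the handler functions are the processes, and
# STORE stands in for the storage layer (SQL, NoSQL, logging).
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

STORE = {}

def get_status(path):                  # a "process" the broker dispatches to
    return {"status": "ok", "records": len(STORE)}

ROUTES = {"/status": get_status}       # the broker table: input -> code to run

class Broker(BaseHTTPRequestHandler):
    def do_GET(self):
        handler = ROUTES.get(self.path)
        result = handler(self.path) if handler else {"error": "no route"}
        body = json.dumps(result).encode()
        self.send_response(200 if handler else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Broker).serve_forever()
```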

What we want is:

- The ability to build code

- The ability to test code

- The ability to monitor code

- The ability to work entirely transactionally

Modern DevOps environments are just a logical progression from classic mainframe development to include things like build servers, collaborative code management, revision control, etc. It also adds an additional role which used to be entirely owned by the DBA, going further to ensure that as the platform progresses, operations, the DBA and development work as a group so that we reduce the number of surprises.

Of course, you may know more about this than me.

CheesyTheClown

Re: Yawn...

Isn't it nice?

It's awesome to be in a world where you can have developers on staff who... if the code is slow will work with the DBA to make it better.

Imagine a world where the operators would see 90% CPU usage and be able to discuss with the developers what was going on and work together to identify whether it was bad code, an anomaly, etc... and then correct the problem? This generally happens because code which was originally intended for batch processing is reused without optimization for transaction processing. So, instead of being run once a night or hour, it instead is run a million times a minute. So, we then optimize queries and cache data if necessary and make sure the database is sharding the hot data where needed.

If you're not pissing away $1.9 million for:

- 6x Rack servers with 20 cores each, 256GB RAM each and 16 enterprise SSD drives

- 4 leaf switches, 4 spine switches, 4 data center interconnect switches plus all the 40Gb cable

- 2 dark fibers with CWDM for DCI

- VMware Cloud Foundation with NSX, vSAN, ESX/vSphere

- Windows Server Enterprise 2016

Which is pretty much the lowest end configuration you would ever want to run for a DIY data center...

You can instead spend the money on things like making software which really doesn't need systems like that. Run your generic crap in the cloud. Build your custom stuff to run locally. And keep in mind that the developers are able to run their systems on 10 year old laptops while they're coding. But the good news is that by dumping the data center and the staff to run it, they can now have new laptops and a better coffee machine. :)

CheesyTheClown

Re: @CheesyTheClown

I'm the software architect. Pretty sure my team is safe.

I hope to be eventually outsourced. It would mean we did such a good job that the system won't require having us around.

My next career is biotechnology. I hope to obsolete myself in the next 3-5 years while studying to enter my new field. I'm hoping to move on to making a medical device for imaging the inner ear of a human and possibly using ultrasound to perform some basic procedures related to balance issues.

I do really appreciate the concern though. It is very kind of you :)

CheesyTheClown

Nope.

How can so few people know what DevOps is? We’ve used it in banks since the late 60s and it’s been pretty stable since the 80s.

Of course, it is clear you don’t know what it is either. It has absolutely nothing to do with developers being operators. And yes there are still occasionally 4am calls, but ask the mainframe operators at banks how often that occurs.

CheesyTheClown

Re: Yawn...

Not even close. VMs exist mainly because software developers don’t have access to a stable platform to design against. As such, they set a bunch of requirements for all kinds of systems which traditionally required more servers... which required IT people to build from a list of requirements... who then virtualized them.

In a DevOps environment, there’s no need for VMs because the platform is probably fully distributed using technologies like Redis for example. Table stores are favored over traditional SQL servers. Object storage is available without the need for file servers. As such, we develop software towards a platform which is triggered by timers and HTTP servers.

When you don’t have DevOps, you have IT guys and generally absolute shitty platforms which are closer to super computers than business systems. When you have DevOps, you can dump shit like software defined data centers. You don’t need virtual machines or containers. You don’t care about x86 vs ARM.

So.... nope... not even close. DevOps is about a system where developers and operators develop and operate business systems without the chaos and madness you would generally find with IT people involved.

We're currently replacing about $20 million of data center equipment and about 150 IT people with some Raspberry Pis and a dozen programmers. All the crap that has to be VMs, like AD, e-mail and collaboration, is going cloud. All the business software will be developed in-house. We have done initial testing and have proven that security, agility and performance are substantially higher this way.

CheesyTheClown

Re: DevOps is still snake oil

I disagree, DevOps has worked for 30+ years a lot better than modern IT has.

You buy a platform instead of building one. That means mainframe, private cloud, etc. You then hire developers who build your business software to run on that platform, or buy software made specifically for that platform. This is what banks do. Then operations operates the mainframe and works with the developers to apply system updates that may contain breaking changes.

This system works absolutely damn near perfect and cuts IT spending to almost zero.

What DevOps were you thinking of?

No 2017 bonus for you, HPE tells employees

CheesyTheClown

New CEO takes helm of demotivated company

How would you like to take the helm of HPE knowing that the first action of your tenure is to tell employees Merry Christmas... you’re screwed. P.S. I’ll be busy all month working with the architect for my new sun room in my house.

Big Mike is going to make HPE's life a living Dell: Server sales surge

CheesyTheClown

Moving to Raspberry Pi

I’m not joking. For performance, reliability and stability, we’re dumping data centers. The fact is that it is no longer possible to build a proper data center because vendors like Dell, HP, Cisco, VMware, Microsoft, etc... have all decided that to show huge returns, they would make things so big that our budgets became so focused on building data centers that we couldn’t afford to develop the business systems that actually ran our businesses.

Then we realized that we can move our Microsoft stuff into Azure and move our active directory, email, office, sharepoint, etc... to the cloud.

Then we did a lot of research and learned that what we really needed was a reliable systems platform that would be scalable, high performance, manageable, etc... so we are focused now on dumping our data center into the cloud. Then we’ll replace VM by VM with service by service.

The DevOps team is building an OpenStack, ASP.NET Core 2, MongoDB and MariaDB distributed solution on Raspberry Pi. It will be a multi-year project, but it will move 90% of the money spent on infrastructure and IT into business systems. Software licensing costs from Microsoft and VMware alone will save us $2 million a year.

BTW, the performance of 200 Raspberry Pi 3s has absolutely destroyed two full Cisco/NetApp all-SSD 40Gb/sec ACI racks on absolutely every business process, by designing our software and system together. For large data processing, map/reduce is far better on 200 Pis, each with an mSATA SSD, than on any NetApp product we've seen.

The only real cost now for infrastructure is the Netscalers. We hope to find a better solution than that in the future.

So... moral of the story. If you want to run a business, build your IT to run your business. If you want to run a data center... you can do that too. But data centers and server virtualization shouldn’t be the focus of any company trying to run a business.

SurfaceBook 2 battery drains even when plugged in

CheesyTheClown

Re: I’ll piss on the hater parade

I stand corrected, I actually had to modify the registry. I did it so long ago I didn't even remember.

It was a single Google search and a few clicks and keystrokes. Total work... 1 minute.

A little silly there's no slider, but oh well. It's not exactly complicated.

CheesyTheClown

I’ll piss on the hater parade

I’m going to order a 15” Surface Book 2 with 1TB next week... just waiting on the approval from the boss.

I do intend to game on it, but to be honest, from what I can tell, it should take about 10 hours to deplete the battery with constant gaming at full quality with no breaks. I’m not that guy anymore. I do use a lot of graphics and GPU heavy software, but CAD and graphics aren’t calculating 120 frames a second. They idle the GPU about 85% of the time.

I will develop AR software using it. But I expect that, since I'll be mostly coding, I won't need full graphics for more than 5 minutes at a time, a few times an hour.

I have used Surface Book since the day it came out. And just like the reason I bought it... when there is something to be fixed, Windows update fixes it. When my power brick was running too hot, they sent me a new one... no questions asked.

As for the Windows haters... especially those bashing Windows 8... I FRIGGING LOVED WINDOWS 8 and still miss it. It was absolutely spectacular. I also have no problem turning off Windows telemetry... just press Start, type telemetry, press Enter and move the switch to off. As for Office rent-to-own... $99 a year for Office is pretty cheap. I used to pay $249 a year on average to keep Office up to date.

Oh... yeah... I could use Linux... hell, I did for 6 years. But let's be honest, Linux as a desktop sucks. Much better to use Ubuntu on Windows.

Brocade undone: Broadcom's acquisition completes

CheesyTheClown

Was buying FibreChannel a good deal?

1) FC doesn't work for hyper-converged: adapter firmware supports initiator or target mode, not both. As such, you cannot host FC storage in the same chassis where it is consumed.

2) Scale-out storage, which is far more capable than a SAN, requires multicast to replicate requests across all sharded nodes. FC (even with MPIO) does not support this. As such, FC bandwidth is always limited to a single storage node. With MPIO, it is possible to run two separate SANs for improved performance, but the return on investment is very low.

3) FC carries SCSI (or in some cases NVMe) over a fibre protocol. These are block protocols which require a huge amount of processing on multiple nodes to perform block address translations and make use of long-latency operations. In addition, by centralizing block storage, controllers have to perform massive hashing and lookups for possibly hundreds of other nodes. This is a huge bottleneck which even ASICs can't cope with. Given the massive limitations in the underlying architecture of FC SANs, distribution of deduplication tasks is not possible.

4) FC (even using Cisco's MDS series) has severe distance limitations. This is controlled by the credit system, which is tied to the size of the receiving buffers. Additional distance adds additional latency, which requires additional buffers to avoid bottlenecks. 32Gb/s over 30km of fibre probably requires 512MB of fast cache to avoid too many bottlenecks. At 50km, the link is probably mostly unused. Using FCIP can reduce the problem slightly, but iSCSI would have been better and SMB or NFS would have been infinitely better.

I can go on, but to be fair, unless you have incompetent storage admins, FC has to look like a dog with fleas by now. We use it mostly because network engineers are horrible at supporting SCSI storage protocols. If we dump SCSI and NVMe as a long range protocol, the problems don’t exist.

I would however say that FC will last as long as storage admins last. Since they are basically irrelevant in a modern data center, there is no fear that people will stop using FC. After all, you can still find disk packs in some banks.

NetApp's back, baby, flaunting new tech and Azure cloud swagger

CheesyTheClown

What are they claiming?

So, the performance and latency numbers are on the pretty damn slow side. Probably still bottlenecks associated with using Data ONTAP, which is famously slow. Azure has consistently shown far better storage performance numbers than this in the Storage Spaces Direct configuration.

I have seen far better numbers on MariaDB using Storage Spaces Direct in the lab as well. With a properly configured RDMA solution for SMB3 in the back end, there is generally between 80 and 320Gb/s of back-end performance. This is substantially better than any NVMe configuration, mainly because NVMe channels are so small in comparison. Of course, the obscene amount of waste in the NVMe protocol adds to that as well. NVMe is only well suited for direct-to-device attachment. Routing it through a fabric severely hurts storage latency and increases the chance of errors which aren't present when using PCIe as designed.

Overall, it's almost always better to use MariaDB scaled on Hyper-V with paravirtualized storage drivers than to do silly things like running it virtualized over NFS. In fact, you will see far better numbers on proper Windows technologies than by using legacy storage systems like this.

I think the main issue here is that Microsoft didn’t want to deal with customers who absolutely insist on doing thing wrong. So they bought a SAN and just said... “Let NetApp deal with these guys. We’ll manage customers who have actual technical skills, NetApp can have the customers who think virtual servers are smart”.

Now Oracle stiffs its own sales reps to pocket their overtime, allegedly

CheesyTheClown

Re: Overtime falsification in the timesheet. How quaint. And how familiar.

Overtime?

I've worked generally 60+ hours a week for the past 25 years. When I became a father, I worked less because priorities changed. I would, however, never work a job that doesn't excite me enough to do it all the time. People generally pay me to do what I would do anyway even if I wasn't working. I don't think I've ever received overtime. Though if they ask me to work more on things which bore me, I often get bonuses.

That said, I generally negotiate as part of my salary: "I'm going to work a lot more than 40 hours a week and don't want to be bothered asking for overtime. Just pay me 50% more and we'll call it even."

Then again, I don’t really look for jobs. I simply leave if I don’t get what I want and we all end up happy in the end.

Windows on ARM: It's nearly here (again)

CheesyTheClown

Re: LOL

“Known” is the key word.

CheesyTheClown

Sorry... I vomited in my mouth as choking

On what planet is Chromebook secure?

A) runs Linux as a core

B) has very little security research targeting it, so most vulnerabilities are unknown.

C) Runs on fairly generic hardware produced by vendors who don’t customize to the local security hardware.

D) has a fairly small business user base and hasn’t properly been tossed to the wild as a hacker target.

I can go on... but that comment was as good as BlackBerry claiming that 11 million lines of untested code, in a total rewrite with an entirely new OS core, was secure.

CheesyTheClown

Instruction set doesn’t really matter

You're absolutely right. Intel hasn't run x86 or x64 natively for years. Instead, they have an internal instruction set decoder/recompiler, implemented mostly as an ASIC and partially as microcode, which makes x86 and x64 little more than a means of delivering a program. In fact, it's similar to .NET CIL or Java IL. It's actually much closer to LLVM IL.

There are some real benefits to this. First, the recompiler can identify instructions that can be executed out of order as there are no register or cache read/write dependencies. Alternatively, it can automatically run instructions in parallel on separate parts of one or more ALUs where they lack dependencies. As such, more advanced cores can process the same code in fewer clock cycles, assuming there is no contention.

Microsoft has spent 15 years moving most non-performance-critical code away from x86 or anything else and over to .NET. They have also implemented the concept of fat binaries, like Apple did with PPC, ARM and x86/x64. In addition, they have made LLVM and Clang part of Visual Studio. Windows probably has a dozen technologies that allow platform-agnostic code to run on it now.

Emulating x86 is nice but is really only necessary for older and unmaintained software. Most modern programs can carry across platforms with little more than a recompile and, for power-conscious devices, GPU-intensive code will be the same and CPU-intensive code is frowned upon. So, you wouldn't want to run x264, for example, on a low-power device... and certainly not emulated. You'd favor either a video encoder core or a GPU encoder.

As for JIT and AOT dynamic recompilers, I could literally write a book on the topic, but there is absolutely no reason why two architectures as similar as x86 and ARM shouldn't be able to run each other's code at near-native speed. In fact, it may be possible to make the code run faster when targeting the specific local platform. Also consider that we have run ARM binaries emulated on x86 for a long time, and the performance is very respectable. I believe Microsoft is more focused on accuracy and limiting patent infringement. Once they get it working, it is entirely possible that running x86 code through such a recompiler may be faster than running it natively, because JITs are amazing technology in that they can do things like intelligently pipeline execution branches and realign execution order for the host processor.
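As a toy of the idea (nothing like Microsoft's actual emulator), a dynamic translator decodes each guest instruction once, caches a host-side thunk for it and then just dispatches through the cache. Real recompilers emit host machine code for whole blocks rather than Python closures, and the three-field "guest ISA" here is invented purely for illustration.

```python
# Toy dynamic-translation sketch: decode each guest instruction once, cache a
# host-side thunk for it, then dispatch through the cache on later executions.
guest_program = [("mov", "a", 5), ("add", "a", 7), ("mov", "b", 2), ("add", "b", 1)]

translation_cache = {}        # guest instruction -> host thunk
registers = {"a": 0, "b": 0}

def translate(instr):
    op, reg, imm = instr
    if op == "mov":
        return lambda: registers.__setitem__(reg, imm)
    if op == "add":
        return lambda: registers.__setitem__(reg, registers[reg] + imm)
    raise ValueError("unknown op: " + op)

for instr in guest_program:
    if instr not in translation_cache:        # translate only on first sight
        translation_cache[instr] = translate(instr)
    translation_cache[instr]()                # later executions are a dict hit

print(registers)                              # {'a': 12, 'b': 3}
```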

Nice comment though :)

The UK's super duper 1,000mph car is being tested in Cornwall

CheesyTheClown

Re: Cool, but why?

So, the answer is... no... there is no why. They simply justify it as being cool.

I’m like the guy who asked... I think it sounds nifty. It would have been much cooler if there were an application. Of course, I believe that the “before science gets in the way” argument is crap. To suggest that :

A) Getting to 1000MPH doesn’t require piles of science is silly. There is propulsion, aerodynamics, chemistry, etc... involved here already. This project wouldn’t stand a chance without tons of science.

B) 1000MPH is a ridiculous, arbitrary number. If this were ancient Egypt, we’d have claimed an arbitrary number of cubits; elsewhere it would be leagues, or kilometers in most of the world, etc... 1000MPH is of no particular scientific or engineering significance. Has any physicist ever calculated that 1000MPH is the point at which an object must leave the ground? Did we decide a mile should be one thousandth of some magic number beyond which things can’t stay on the ground?

All this really did was prove that you can lay a rocket on its side and with the right structure and right shape, it would stick to the ground and hopefully go straight.

Oh... let’s not forget that it glorifies insane amounts of waste. I am generally horrified by stuff like this.

Now, a 1000MPH electric maglev or 1000MPH fuel cell powered EM pulse engine... that would be cool. But glorifying a sideways metal phallus with incredible thrust that ejects massive amounts of liquid while pushing so hard it overcomes friction, and that once depleted sputters out and goes limp... I must admit these guys, brilliant or not, are more than a little scary.

Apple Cook's half-baked defense of the Mac Mini: This kit ain't a leftover

CheesyTheClown

Re: Too late

It’s not about performance. It’s about connectivity.

I don’t like Mac keyboards... I used them a long time, and when I started using a Surface Pro, it was like a blessing from the heavens. Mac keyboards are a curse... especially the latest one, which has absolutely no perceivable tactile feedback. I might as well be typing on hard wood.

So, I need a Mac to remote into.

A Mac Pro is just not a sound investment. It’s several generations behind on Xeon... which matters even more than it would on an i7. It has an ancient video card. It has slow RAM and a slow SSD. If you’re going to spend $5000 on a new computer, it should be more. Even so, a modern version would be 10 times more machine than would be worth paying for. After all, the Mac doesn’t have any real applications anymore. Final Cut is dead, Photoshop and Premiere don’t work nearly as well on Mac as on PC. Blah blah.

Then there’s iMac. Right specs, but to get one with a CPU which isn’t horrifying, it takes a lot of space and doesn’t have pen support. In addition, you can’t easily crack it open to hard-wire a remote power switch and it doesn’t do Wake On LAN properly. So, you can’t use it unless it’s somewhere easily accessible.

Then there is the Mac Mini. Small, sweet and nice. I use a Mac Mini 2011 and a 2012 which I won’t upgrade unless there is a good update. The latest Mac Mini doesn’t offer anything mine doesn’t already have except USB3 and Thunderbolt 2. And to make that interesting, you’re looking at $1500, since the cheaper ones are slower than my old ones. If I were to spend $1500 on a machine, I’d want current generation.

So... that means that there aren’t any Macs to buy.

Let’s add iRAPP. Apple has the absolute worst Remote Desktop support of any system. iRAPP was amazing, but it’s dead and now there is no hope for remote management.

So that leaves a virtual hackintosh. Problem is, that requires VirtualBox or VMware... and neither is attractive, as I program for Docker, which runs on Hyper-V, which can’t run side by side with either VMware or VirtualBox.

The end result is... why bother with Mac? It’s too much work and there’s just no reason to perpetuate a platform which even Apple doesn’t seem to care about anymore. No pen, no touch, no function keys, no tactile feedback. I can’t use my iPhone headphones on a Mac unless I use a dongle on the phone or play the pair-and-re-pair Bluetooth game, when I’m not looking everywhere for my other earbud, which I forgot to charge anyway.

I still use iPhone... but I’m seriously regretting that since the last software update, which gave me an iMessage app that looks like an American highway covered with billboards for 20 companies, doesn’t scroll properly, etc... Let’s not get started on the new Mail app changes.

Apple is a one-product company. They make the iPhone and they make a computer to program it on. The iPad is basically dead... well, you wouldn’t buy a new one at least. I’m happy with my 5 year old one. AppleTV is cute, but I ended up buying films on Google now because it works on more devices. I’d actually switch to Android if Google made an “import my iTunes stuff” app which would add my music and movies to my Google account.

Europol cops lean on phone networks, ISPs to dump CGNAT walls that 'hide' cyber-crooks

CheesyTheClown

Re: v7 needed

I write this now from a computer which has been IPv6-only (though upgraded a few times over the years) on a network which has been IPv6-only, except at the edge, for 7 years.

My service provider delivers IPv6 to my house using 6rd, which appends my 32-bit IPv4 address to the end of a 28-bit network prefix they own. That gives me a /60 delegated prefix: 4 bits of subnet ID, so 16 possible /64 subnets (IPv6 does not variably subnet past /64) within my home.
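
For anyone curious, the 6rd arithmetic is simple enough to do with the standard library. The /28 prefix and the IPv4 address below are made-up placeholders, not my provider's real values.

import ipaddress

# 6rd (RFC 5969) sketch: the delegated prefix is the ISP's 6rd prefix with the
# subscriber's full 32-bit IPv4 address appended directly after it.
# 2001:db0::/28 and 192.0.2.123 are placeholder example values.
SIXRD_PREFIX = ipaddress.IPv6Network("2001:db0::/28")
ipv4 = ipaddress.IPv4Address("192.0.2.123")

prefix_len = SIXRD_PREFIX.prefixlen + 32                     # 28 + 32 = 60
delegated_int = int(SIXRD_PREFIX.network_address) | (int(ipv4) << (128 - prefix_len))
delegated = ipaddress.IPv6Network((delegated_int, prefix_len))

print(delegated)                                             # the /60 handed to the home
print(2 ** (64 - prefix_len), "usable /64 subnets")          # 16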

Anyone using my service provider who wants IPv6 can either obtain their IPv6 information via DHCP extensions that provide the prefix and therefore automatically create the tunnel over their IPv4 network... or they can configure it manually. Of course, you probably need to know IPv6 to do so.

I use IPv6 exclusively (except for a single HP printer and my front door lock) within my house. By using a DNS64 server, when I resolve an address which lacks an IPv6 destination, the DNS server synthesizes an address where the top 64 bits contain a known prefix (which I chose) and the bottom 32 bits contain the IPv4 address I'm trying to reach. The edge device then recognizes the destination prefix, creates a NAT record and replaces the IPv6 header with an IPv4 header to communicate with the destination device. This is called NAT64.
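
As a sketch, here's the simplest standard layout for that synthesis, using the RFC 6052 well-known NAT64 prefix 64:ff9b::/96 (I use my own prefix at home, but the idea is identical): the IPv4 address just gets embedded in the low 32 bits of a synthetic AAAA record.

import ipaddress

# DNS64-style address synthesis with the RFC 6052 well-known prefix
# 64:ff9b::/96: the IPv4 destination is placed in the bottom 32 bits.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal):
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

# An IPv4-only destination comes back to the client as a synthetic AAAA;
# the NAT64 gateway recognizes the prefix and rewrites the packet to IPv4.
print(synthesize_aaaa("198.51.100.7"))   # 64:ff9b::c633:6407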

I run zone based firewalling on a Cisco router which allows me to allow traffic to pass from the inside of my network to the outside freely and establish return paths.

I have not seen any compatibility issues between IPv4 and IPv6 in the past 7 years. The technology is basically flawless. It's actually plug-and-play in many cases as well.

Is it possible you're claiming there is a compatibility issue between the two protocols because you don't know how to use them?

BTW... I first started using IPv6 when Microsoft Research released the source code for IPv6 on Windows NT 4.0. I've had it running more or less ever since. At this time, over 85% of all my traffic is 100% IPv6 from work and home. Over 95% of all my traffic is encrypted using both IPv6 IPsec end-to-end and 802.1AE LinkSec/MACsec between layer-2 devices.

There has been one single problem with IPv6 which is still not resolved, and for it I'm forced to make my DNS64 gateway prefer IPv4 over IPv6: Facebook has DNS AAAA records for some of their servers which no longer exist.

As for technical complexity... I believe a drunken monkey could set this up with little effort.

But I guess you think it's worth a nearly $1 trillion investment to drop IPv6 in favor of something new.

Yes... it would cost at least $1 trillion to use something other than IPv4 and IPv6. Routers and servers can be changed to a different protocol using nothing but software. But switches and service provider routers which implement their protocols in hardware would require new chips. Since we don't replace chips, it would require replacing all Layer-3 switches and all carrier grade routers worldwide to change protocols.

Consider a small Tier-1 service provider such as Telia-Sonera that runs about 250 Cisco 9222 routers for their backbone with 400Gb/s-1Tb/s links between them. The average cost of a router on this scale is about $2.5 million. So, to change protocols on just their routers would cost $625 million in just core hardware. It would cost them approximately $2 billion just to handle their stuff.

Now consider someone like the US Transportation Security Administration, which has 1.2 million users in their Active Directory (employees, consultants, etc...). Now consider the number of locations where they are present and the network to run it. Altogether about 4 million network ports... all Layer-3. At an average cost of $200 per network port, that would be $800 million just to change the access ports on their network. Then consider that distribution and core would need to be changed too. That would push the expense to at least $5 billion.

Those were just two examples. $1 trillion wouldn't even get the project started.

Now consider the amount of time it would take. Even if you had a "compatible system"... and honestly, I have no idea what that means. IPv6 is 100% compatible with IPv4... but I suppose you know something I don't. But let's say there was a "compatible system" by your standards. It would take 20+ years and trillions of dollars to deploy it.

Of course, if all we care about is addressing... and it really isn't, then IPv4 is good enough and we can just use CGNAT which is expensive but really perfectly good. Thanks to CGNAT and firewall traversal mechanisms like STUN, TURN, ICE and others, there's absolutely no reason we need to make the change. Consider that China as an entire country is 100% NATed and it works fine.

So... recommended reading: 6rd and NAT64/DNS64.

Then instead of saying really really really silly things about IPv6 lacking compatibility with IPv4 or that IPv6 is B-team... you can be part of the solution. The "B-team" as you call it did in fact pay close attention to real users. They first built the IPv6 infrastructure and they also solved the transition mechanism problems to get real users online without any problems. It took a long time, but it's been solid and stable since IPv6 went officially live on June 6th 2012.

FCC Commissioner blasts new TV standard as a 'household tax'

CheesyTheClown

Re: 3D

4K DOA? Haha I actually intentionally downscale 4K content. I don't want to look at people under a microscope. 4K is great for car chases, but it's horrible when you see how bad your favorite actress's skin looks when displayed as a close up on a 65" screen from 2 meters. 4K is absolutely horrible.

And I saw a few 3D movies and I actually stopped going to the movie theater because of them. I'd rather watch a film on an Oculus Rift if I want it huge. In fact, an Oculus costs about the same as going to the movies and having snacks a few times a year.

NFS is now on tap in Azure – and NetApp is Microsoft's provider

CheesyTheClown

Re: Migrating without adapting

At least it wasn't just me.

I was going to ask "and what's the possible use case" and the answer it seems is "because Microsoft managed to convince NetApp to help migrate from VMware/NetApp to Hyper-V and storage spaces" :)

It seems humorous that the stated use case is to basically kill off using NetApp and the like :)

2019: The year that Microsoft quits Surface hardware

CheesyTheClown

Re: Isn't it obvious

I read that article as well. I didn't agree with it then either. It was written without any regard for causality. People were more likely to return Microsoft devices because... wait for it... it's actually possible to return them. Microsoft actually has a really great return program, and while I didn't make use of it, I did manage to walk into a Microsoft store and walk out with a replacement PC in 5 minutes without any hassle. Try doing that at a Best Buy in America or a Currys or Dixons. In fact, compared to Apple in-store service, it was amazing. My average waiting time for service at Apple Stores is 45 minutes. Microsoft was always better. And even better, instead of waiting 30 minutes to get an appointment with an appointment scheduler who will schedule you time with a Genius in 2 hours, the Microsoft store helps immediately.

As for broken devices, I bought three Surface Pros, a Surface Pro 2, a Surface RT, two Surface Pro 3s and a Surface Book. All of them are still in heavy use. With the exception of Microsoft's fairly poor magnetic power connectors, they have been absolutely amazing. (Apple's magnetic connectors were much worse.)

Like my Macs which are still good even though I run 2011 models, the Surface Pros last and last. And I run older models because they last and last.

I am perfectly happy to pay Apple Care and Microsoft extended warranties because I love having the long term support. I always buy top of the line models as well... because if you will use it daily for 4-8 years, $400-800 a year is completely reasonable.

As for HP, Lenovo and Dell: I never bought a PC from them that had any love from the maker a few months later. Consider that ASUS releases an average of 1-2 BIOS updates per laptop. HP releases updates... sometimes. Dell has improved, but their updates don't need to come out any sooner than 6 months later... because unless you bought "next day on-site service", the machine won't be running by then anyway.

I'll leave Acer out of the discussion because... well, they're Acer. It's mean to beat up the slow kid.

Microsoft should stay in the game because, if nothing else, even though Microsoft forced the vendors to raise the bar, those vendors are still selling "lowest bidder shit". Yes, the market needs $129 laptops for poor people... but anyone who can qualify for a credit card should be able to qualify for buying a $2500 laptop if they can't just pay cash. It's a long term purchase and investment.

As for corporations, I have no idea what kind of idiot would buy anything other than MS these days.

Bill Gates says he'd do CTRL-ALT-DEL with one key if given the chance to go back through time

CheesyTheClown

Antivaxxers?

Bill Gates is a brilliant man, but sometimes he pisses away time in the wrong way.

Consider ratios.

What's easier, his way or the antivaxxer way? Let's evaluate both.

Bill says that an African child is 100 times more likely to die from preventable diseases than an American.

Logistically, vaccinating and healing Africans is very difficult and nothing but an uphill battle.

The antivaxxers have already been increasing deaths in America related to mumps, measles and rubella. This is much easier, as all it takes is a former porn actress with the education correlating to said career choice campaigning on morning TV about how MMR vaccines can be dangerous and cause autism.

So instead of fighting like hell to vaccinate Africans... isn't it easier and cheaper just to let porn actresses talk on morning TV?

The results should in theory be the same... the ratio Bill mentioned will clearly shrink either way.

Of course, if his goal is to actually save lives as opposed to flipping a statistic, we might do better his way.

China reveals home-grown supercomputer chips after Intel x86 ban

CheesyTheClown

Re: Interesting side effects of this development..

Let me toss in some ideas/facts :)

Windows NT was never x86/x64 only. It wasn't even originally developed on x86. Windows has been available for multiple architectures for the past 25 years. In fact, it supported multiple architectures from a single code base long before any other operating system did. In the old days, when BSD or System V were ported to a new architecture, they were renamed as something else, and generally there was a lot of drift between code bases due to hardware differences. The result being that UNIX programs were riddled silly with #ifdef statements.

The reason why other architectures with Windows never really took off was that we couldn't afford them. The DEC Alpha AXP, the closest to succeeding, cost thousands of dollars more than a PC... of course it was 10 times faster in some cases, but we simply couldn't afford it. Once Intel eventually conquered the challenge of running RAM and system buses at frequencies different from the internal CPU frequency, they were able to ship DEC Alpha speed processors at x86 prices.

There was another big problem. There was no real Internet at the time. There was no remote desktop for Windows either. The result being that developers didn't have access to DEC Alpha machines to write code on. As such, we wrote code on x86 and said "I wish I had an Alpha. If I had an Alpha, I'd make my program run on it." So instead of making a much cheaper DEC Alpha which could be used to seed small companies and independent developers, DEC, in collaboration with Intel, decided to make an x86 emulator for Windows on AXP.

The emulator they made was too little too late. The performance was surprisingly good, though they employed technology similar in design to Apple's Rosetta. Dynamic recompilation is not terribly difficult if you consider it. Every program in modern times has fairly clear boundaries. They call functions either in the kernel via system calls which are easy to translate... or they call functions in other libraries which are loaded and linked via 2-5 functions (depending on how they are loaded). When the libraries are from Microsoft, they know clearly what the APIs are... and if there are compatibility problems between the system level ABIs, they can be easily corrected. Some libraries can be easily instrumented with an API definition interface, though C programmers will generally reject the extra work involved... instead just porting their code. And then there's the opportunity that if an API is unknown, the system can simply recompile the library as well... and keep doing this until such time as the boundaries between the two architectures are known.
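
To illustrate that boundary trick (and only as a hedged sketch, with invented module/function names rather than any real Windows-on-ARM interface): when the emulator recognizes a call into a library whose API it knows, it can jump straight to a native thunk instead of translating the guest code at all.

# Sketch of API-boundary thunking in an emulator: calls into known library
# functions are dispatched to native implementations; unknown boundaries
# force the emulator to recompile that library too.

def native_strlen(s):                 # host-native stand-in for a known C API
    return len(s)

KNOWN_API_THUNKS = {
    ("msvcrt.dll", "strlen"): native_strlen,
}

def guest_call(module, symbol, *args):
    thunk = KNOWN_API_THUNKS.get((module, symbol))
    if thunk is not None:
        return thunk(*args)           # fast path: run natively, no translation
    raise NotImplementedError(
        "no thunk for %s!%s: recompile that library as well" % (module, symbol))

print(guest_call("msvcrt.dll", "strlen", b"hello"))   # 5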

Here's the problem. In 1996, everyone coded C and even if you were programming in C++, you were basically writing C in C++. It wasn't until around 1999 when Qt became popular that C++ started being used properly. This was a problem because we were also making use of things like inline assembler. We were bypassing normal system call interfaces to hack hardware access. There were tons of problems.

Oh... let's not forget that before Windows XP, about 95% of the Windows world ran either Windows 3.1, 95, 98 or ME. As such, about 95% of all code was written on something other than Windows NT and used system interfaces which weren't compatible with Windows NT. This meant that the programmers would have to at least install Windows NT or 2000 to port their code. This would be great, but before Windows 2000, there weren't device drivers for... well anything. Most of the time, you had to buy special hardware just to run Windows NT. Then consider that Microsoft Visual Studio didn't work nearly as well in Windows 2000 as it did in Windows ME because most developers were targeting Windows ME and therefore Microsoft focused debugger development on ME instead.

So... running code emulated on Alpha did work AWESOME!!!! ... if the code worked on Windows NT or Windows 2000 on x86 first. Sadly, there was no real infrastructure around Windows NT for a few more years.

That brings us to the point of this rant. Microsoft has... quite publicly stated their intent to make an x86/x64 emulator for ARM. They have demoed it on stage as well. The technology is well known. The technology is well understood. I expect x86/x64 code to regularly run faster on the emulator than as native code, because dynamic recompilers can optimize for the specific chip they are executing on and constantly improve the way the code is compiled as it's running. This is how things like JavaScript can be faster than hand-coded assembly: it adapts to the running system appropriately. In fact, Microsoft should require native code on x64 to run the same way... it would be amazing.

So, the emulator should handle about 90% software compatibility. Not more. For example, I've written code regularly which makes use of special "half-documented" APIs from Microsoft listed as "use at your own risk" since I needed to run code in the kernel space instead of user space as I needed better control over the system scheduler to achieve more real-time results. That code will never run in an emulator. Though nearly everything else will.

Then there's the major programming paradigm shift which has occurred. The number of people coding in system languages like C, C++ and assembler has dropped considerably. On Linux, people code in languages like Python where possible. It's slow as shit, but works well enough. With advances like Python compiler technology, it's actually not even too pathetically slow anymore. On Windows, people program in .NET. You'd be pretty stupid not to in most cases. We don't really care about the portability. What's important is that the .NET libraries are frigging beautiful compared to legacy coding techniques. We don't need things like Qt and we don't have to diddle with horrible things like the standard C++ library, which was designed by blind monkeys more excited about using every feature of the language than actually writing software.

The benefit of this is that .NET code runs unchanged on other architectures such as ARM or MIPS. Code optimized on x86 will remain optimized on ARM. It also gets the benefits of Javascript like dynamic compiler technology since they are basically the same thing.

Linux really never had much in the way of hardware-independent applications. Linux still has a stupid, silly amount of code being written in C when it's simply the wrong tool for the job. Linux has the biggest toolbox on the planet and the Linux world still treats C as if it's a hammer and every single problem looks like a nail. Application development should never ever ever be done in system-level languages anymore. It's slower... really it is... C and C++ make slower code for applications than JavaScript or C#. Having to compile source code on each platform for an application is horrifying. Even considering the structure of the ABI at all is terrifying.

Linux applications have slowly gotten better since people started using Python and C# to write them. Now developers are more focused on function and quality as opposed to untangling #ifdefs and make files.

Now... let's talk supercomputing. This is not what you think it is, I'd imagine. The CPU has never really meant much on supercomputers. The first thing to understand is that programmers will write code in a high-level language which has absolutely no redeeming traits from a computer science perspective. For example, they can use Matlab, Mathematica, Octave, Scilab... many other languages. The code they write will generally be formulas containing complex math designed to work on gigantic flat datasets lacking any structure at all. They could of course use simulation systems as well, which generate this kind of code in the background... it's irrelevant. The code is then distributed to tens of thousands of cores by a task scheduler. Often, the distributed code will be compiled locally for the local system, which could be any processor from any architecture. Then, using message passing, different tasks are executed and the results are collected back to a system which will sort through them.
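
A toy version of that scatter/compute/gather pattern looks like the sketch below, with Python's standard process pool standing in for a real cluster scheduler and message passing, and a made-up kernel standing in for the formula.

from concurrent.futures import ProcessPoolExecutor

def kernel(chunk):
    # stand-in for the per-node formula applied to one slice of the flat dataset
    return sum(x * x for x in chunk)

def scatter_gather(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(kernel, chunks)   # "scatter" one task per chunk
    return sum(partials)                      # "gather" and reduce the results

if __name__ == "__main__":
    data = list(range(1_000_000))
    # For a dataset this small the scheduling overhead dwarfs the maths --
    # exactly the kind of work that is cheaper to just compute locally.
    print(scatter_gather(data) == sum(x * x for x in data))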

It never really mattered what operating system or platform a super computer runs on. In fact, I think you'd find that nearly 90% of all tasks which will run on this beast of a machine would run faster on a quad-SLI PC under a desk that had code written with far less complexity. I've worked on genetic sequencing code for a prestigious university in England which was written using a genetic sequencing system.... very fancy math... very cool algorithm. It was sucking up 1.5 megawatts of power 24/7 crunching out genomes on a big fat super computer. The lab was looking for a bigger budget so they could expand to 3 megawatts for their research.

I spent about 3 days just untangling their code... removing stupid things which made no sense at all... doing things locally instead of distributing them when it would take less time to calculate a result than to delegate it... etc...

The result was 9 million times better performance. What used to require a 1.5 megawatt computer could now run on a laptop with an nVidia GPU... and do it considerably faster. Sadly... my optimizations were not super computer friendly, so they ended up selling the computer for pennies on the dollar to another research project.

People get super excited about super computers. They are almost always misused. They almost always are utterly wasted resources. It's a case of "Well I have a super computer. It doesn't work unless I message pass... so let me write the absolutely worst code EVER!!!! and then let's completely say who gives a fuck about data structure and let's just make that baby work!!!!"

There are rare exceptions to this... but I'd bet that most supercomputer applications could have been done far better if labs bought programmers hours instead of super computer hours.

Compsci degrees aren't returning on investment for coders – research

CheesyTheClown

Re: Peak Code Monkey

It is true that compsci is generally a cannonball which is often applied where a fly swatter is better suited. If you're making web pages for a site with 200 unique visitors a day, compsci has little to offer. If you're coding the home page of Amazon or eBay, compsci is critical. One inefficient algorithm can cost millions in hardware and power costs.

Product development is different... for example, when a developer at Google working on Chrome chooses a linked list where a balanced tree would be better, the impact is measured in stock markets, because faster processors and possibly more memory would be needed on hundreds of millions of PCs. Exabytes of storage would be consumed. Landfills get filled with replaced parts. Power grids get loaded. Millions of barrels of crude are burned, shipping prices increase, etc...
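
As a quick toy illustration of that kind of data-structure choice (a linear scan versus a binary search over the same data, and in no way a claim about any actual Chrome code path):

import bisect
import random
import timeit

items = list(range(1_000_000))                 # sorted data
targets = random.sample(items, 100)

def scan_lookup():                             # linked-list-style lookup: O(n) per query
    return [t in items for t in targets]

def tree_like_lookup():                        # balanced-tree-style lookup, approximated
    return [items[bisect.bisect_left(items, t)] == t for t in targets]   # with binary search: O(log n)

print("linear scan  :", timeit.timeit(scan_lookup, number=1))
print("binary search:", timeit.timeit(tree_like_lookup, number=1))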

What is written above may sound like an exaggeration, but a telephone which loses an hour of battery life because of bad code may consume another watt per phone per day. Consider that scaled to a billion devices running that software each day. A badly placed if statement which configured a video encoder to perform rectangular vs. diamond pattern motion search could affect 50-100 million users each day.

Consider the cost of a CPU bug.... if Intel or ARM are forced to issue a firmware patch for a multiplication bug, rerouting the function from an optimized pyramid multiplier to a stacked 9-bit multiplier core located in on-chip FPGA will increase power consumption by 1-5 watts on a billion or more devices.

Some of these problems are measured in gigawatts or terawatts of load on power grids driving up commodity prices in markets spanning from power to food.

So... you're right. Compsci isn't so important in most programmer jobs. But in others, the repercussions can be globally disastrous.

More data lost or stolen in first half of 2017 than the whole of last year

CheesyTheClown

You mean more detected loss?

Call me an asshole for playing the causality card here.

Did we lose more data or did we manage to detect more data loss?