
Not surprised
I don't work with SAP customers much any more, but when I did *every single one of them* would tell me about a failed archiving project. Seems this is almost impossible to get right on an SAP platform.
Which just goes to show that all that Cabinet Office/GDS nonsense from back in 2010-2014 was just "fiddling at the edges". This is what happens when you let a bunch of guys who "created a website" try and do proper software engineering and data processing. (It's no real surprise that a PR gonk like Cameron fell for all this BS)
As an ex UNIX sysadmin, a lot of this resonates with me, but I have to tell you, you are fighting a battle that most customers and buyers have long since moved on from. You might not like it, I might not like it, but it's the cold hard truth.
>> IBM's solution has superior resiliency features to anything offered by Intel or AMD
True 10 years ago - not in any meaningful way now. Xeon and Epyc have largely caught up on CPU resiliency features apart from a few corner cases. IBM still get the advantage of owning the whole hardware/firmware/hypervisor/OS stack (at least assuming the customer isn't deploying SAP HANA, in which case you have to use Linux for Power, which, let's face it, is even more of a boutique product than Linux for Itanium was). But for the price you pay, an x86 implementation can afford to deliver resiliency in other ways. Long story short, hardware reliability isn't usually the weak link in the chain these days.
>> They care about TCO - i.e. how much does everything cost
When TCO models take into account the cost of IBM support, and the cost of finding those IBM specialists to manage the kit, it rarely works out cheaper than hiring ten-a-penny Linux and VMware admins (with apologies to those folks - this isn't how I think, but it is how decision makers think).
>> Licensing costs for enterprise applications like SAP or Oracle can easily dwarf the hardware price.
1. Everyone is moving off Oracle as fast as they can - sure there are those who are stuck or those who have Stockholm Syndrome, but this isn't a growing market. It's a shrinking market.
2. SAP isn't licensed per socket or per core, so it's of no consequence there. Everyone on SAP is moving from databases licensed per socket/core to HANA which again isn't.
TL;DR: the best tech rarely wins. Power 10 will do well with existing IBM customers who have deep pockets, but it will continue to slowly slide to a similar position to IBM's System Z portfolio: a fantastic money-earner for IBM and super-important for its small and decreasing number of big customers, but pretty much irrelevant for 99% of folks in IT.
As others have pointed out - you clearly haven't looked recently - for most of their product range, HPE just require you to have an account on their site and you can access patches/updates etc.
Although their Integrity (IA64) systems are an exception to this I think - why you would want to operate something like that without a support contract is beyond me.
And with regards to getting HPE to support 2nd hand kit - of course they will with some provisos:
- You can expect to pay a "return to service fee" in addition to a standard service contract.
- You can expect them to ask you to prove any HPE software on the platform is properly licensed
- In some cases you can expect the first 30-60 days to incur T&M charges (for those lovely folks who try to return their system to support only when it has failed!)
Not the same, because other contracts don't say at the end of them "and now you must pay us much $$$ just to get your own data back" - i.e. egress charges.
And which price is it that AWS have consistently *not* reduced? Egress charges.
The whole edifice of public cloud pricing, technical architecture and governance is constructed to make this a one-way transaction. I'm not claiming that other models are better, merely that you should go in with your eyes open, because once you are in they will "own you" as much as the big SIs have owned government IT.
>I was in some training with some HPE support people not long ago and someone mentioned they'd
>worked on some K series servers back in the day. The HPE guys were saying they know several
>customers still running them, They stopped making those box before Y2K and have to be years out of
>support.
Not quite - the last generation of HP9000 K series systems was the K380/K580 - these were still sold until 2001! With regards to support they went onto mature product support (just break/fix with no firmware updates) in 2007, but HPE would still give you a support contract for these systems right up until the end of this year. Pretty much all HP9000 systems go completely end of support at the end of 2020.
All of the up-to-8-socket (glueless) designs just use standard Intel UPI (formerly QPI) connections between the processors - same as in a 4-socket box, except you get extra NUMA latency domains because there aren't enough UPI links to directly connect all the CPUs together.
For the boxes that scale over 8S (like the KunLun and HPE's Superdome Flex), typically one of the UPI links to each processor is connected to custom silicon (think FPGAs) that act as agents/proxies to filter the coherency protocols and in some cases cache data as well. This is why these systems can typically use Gold Intel CPUs as well as Platinum CPUs - the limit of 4 UPI device IDs per group of processors in Gold CPUs isn't a problem as they access others via the agent/proxy silicon.
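To put some rough numbers on the "not enough UPI links" point, here's a back-of-envelope sketch. The three-links-per-socket figure is my assumption (typical of the higher-end Xeon Scalable parts; the exact count varies by SKU):

```python
# Back-of-envelope: why an 8-socket "glueless" design can't be fully meshed.
# Assumption: 3 UPI links per socket (higher-end Xeon Scalable parts).
def mesh_links_needed(sockets):
    return sockets * (sockets - 1) // 2      # full mesh: every CPU pair directly linked

def links_available(sockets, upi_per_socket=3):
    return sockets * upi_per_socket // 2     # each link consumes one port at both ends

for n in (4, 8):
    need, have = mesh_links_needed(n), links_available(n)
    verdict = "fully meshed" if have >= need else "some CPU pairs are 2 hops apart"
    print(f"{n} sockets: need {need} links, have {have} -> {verdict}")
```

At 4 sockets you can fully mesh; at 8 you can't, hence the extra NUMA hop (or the custom node-controller silicon described above).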
You're thinking about CPU here a lot more than memory - memory is the key on these types of system - the ability to ingest huge amounts of data and process it without having to go through complex data partitioning processes is what differentiates them - in many cases they only have the amount of CPUs they do because they have to be there to provide that much memory (i.e. constrained by current x86 processor design) - as we move into the world of persistent memory (3d x-point etc) combined with new memory semantic interconnects (like GenZ) we'll see these sorts of systems focus much more on delivering many 000s of TB of memory with the right quantity and type of SoCs composed from a fabric to fit specific workloads. Much more interesting that stitching 2-socket systems together with software (and all the constraints that brings).
I should also have added that Cisco do sell an 8-socket server (the Cisco C880 M4), although it is OEM'd from Fujitsu - it is basically a Fujitsu PrimeQuest server - possibly they will also OEM the Skylake version as well.
Dell also sell 8-16 socket servers from Bull (Atos) - I wouldn't call it an OEM though as I don't think Dell even rebrand these systems.
HPE Itanium systems... yeah only 8TB in a Superdome2 server, but then that's likely constrained by demand rather than capability - why spend the time certifying bigger DIMM sizes when folks aren't asking for it?
x86 systems (which is where the action really is)? HPE Superdome X goes to 24TB with 64GB DIMMs and 48TB with 128GB DIMMs.
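Those figures fall straight out of the DIMM count - a quick sanity check, assuming a fully populated 16-socket Superdome X with 24 DIMM slots per socket (my assumption, not an HPE-quoted figure):

```python
# Sanity check on the Superdome X capacities quoted above.
# Assumption: 16 sockets x 24 DIMM slots per socket = 384 DIMMs, fully populated.
sockets, dimm_slots_per_socket = 16, 24
for dimm_gb in (64, 128):
    total_tb = sockets * dimm_slots_per_socket * dimm_gb / 1024
    print(f"{dimm_gb}GB DIMMs -> {total_tb:.0f}TB")   # prints 24TB and 48TB
```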
Of course the prototype of "The Machine" has 160TB, and that will be (maybe not yet) persistent memory as well.
UNIXland does it big, but honestly (and it pains me as an ex-UNIX admin to say this), it's just not relevant any more.
... if your post was supposed to be sarcasm or not...
but for the avoidance of doubt - the IEA is funded by British American Tobacco, Philip Morris, and Japan Tobacco International. Of course they don't like to publicise this...
In my mind, that doesn't make them a "venerable freemarket thinktank", it makes them corporate shills.
>> you can go argue the toss with HP's own configurator tool.
I use it every day - I can build this configuration no problem - of course the 11K series racks in the config tools these days can hold up to 3000lbs static or 2500lbs rolling, so that's not surprising. It would be *close* with an older 10K series rack but still possible.
>> I think you're forgetting that four fully-configured C7000s means you have twenty-four C20 power sockets to feed, which means lots of hefty 32A PDUs to stick in your 10642 rack. Things got worse if you needed intelligent PDUs (which most companies I worked with did), which were even heavier again.
No, I'm not - I mentioned the "power distribution infrastructure" in my previous post. 24 x C19s = 4 x PDUs with 6 x C19 sockets - these come in at about 8Kg ea. for standard and 9Kg ea. for Intelligent.
Long and short of it - yes it was a tight squeeze, but always possible - I don't doubt that the config tools have said "not possible" on occasion, but they do make a bunch of assumptions - if you sit down and work all this stuff out by hand you can do it.
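If you do want to work it out by hand, here's a quick sketch using the rough figures quoted in this and my earlier post (218Kg per fully loaded enclosure, 8-9Kg per PDU, rack limits as above - all approximate vendor numbers):

```python
# Working the rack-weight sums out by hand, using the rough figures quoted above.
LB_TO_KG = 0.4536
enclosure_kg = 218                    # fully loaded C7000, all 8 interconnects fitted
rack_10k_kg = 2000 * LB_TO_KG         # 10K series static limit: 2000lbs ~= 907Kg
rack_11k_kg = 3000 * LB_TO_KG         # 11K series static limit: 3000lbs ~= 1361Kg

for pdu_kg, kind in ((8, "standard"), (9, "intelligent")):
    total = 4 * enclosure_kg + 4 * pdu_kg
    print(f"4 enclosures + 4 {kind} PDUs = {total}Kg "
          f"(10K limit ~{rack_10k_kg:.0f}Kg, 11K limit ~{rack_11k_kg:.0f}Kg)")
```

Which is exactly the "tight squeeze" against a 10K rack, and comfortable in an 11K.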
"with the next Intel server CPUs 9 months away, would customers actually buy a Synergy system knowing that the compute nodes they buy will become obsolete in less than a year?"
And how is this different for ANY x86 server vendor? That's the x86 market... it's never a good time to buy, and it's always a good time to buy.
"I hope that Synergy offers traditional networking connectivity, like in today's blade architecture"
Yes it does - if you want to do things the way you have in the past, you can.
The 42U 10K series racks (introduced before BladeSystem) and all the newer descendants have always been able to take up to 2000lbs of load (so about 907Kg). A fully loaded blade enclosure has a max weight of about 218Kg (that's with all 8 interconnects installed - not many people have that). So 4 blade enclosures in a rack has always been possible, with a little left over for your power distribution infrastructure. Of course your DC floor and power might not be able to cope with that, but that's your problem, not HP's!
Is an End of Row / Middle of Row switch a Top of Rack switch? We could argue semantics on that all day, but here's the point...
Put 4 fully-loaded C7000 blade enclosures in a rack right now and depending on your throughput requirements you will need at least 2 uplinks from each enclosure, which means a minimum of 8 high throughput ports (realistic minimum 10Gb) of Ethernet, FC, or FCoE on your ToR switch... even though most of the traffic is probably east-west between the enclosures. On top of that, any traffic between enclosures is going to be bottle-necked on the uplink performance.
With an equivalent Synergy configuration of 4 frames, you could do all that with just 2 uplinks across all 4 frames, and also have no over-subscription on inter-frame traffic. My ToR/EoR/MoR density just dropped from 8 to 2... important if you happen to be using a switch vendor who has a pricing model where the cost is tied to your port count.
And these are just minimums - most installations I see have at least 4 uplinks per enclosure.
So for a lot of customers this is going to be the difference between deploying ToR switches and just connecting straight into the core.
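To put numbers on the port-count point, a trivial sketch using the figures above (2 uplinks per enclosure as the bare minimum, 4 as the more typical deployment I see):

```python
# Minimum switch-port counts implied above: 4 x C7000 vs a 4-frame Synergy setup.
enclosures = 4
synergy_uplinks = 2     # figure quoted above for a 4-frame logical enclosure
for uplinks_per_enclosure in (2, 4):      # bare minimum vs the more typical config
    c7000_ports = enclosures * uplinks_per_enclosure
    print(f"{uplinks_per_enclosure} uplinks/enclosure: C7000 needs {c7000_ports} "
          f"ToR/EoR ports, Synergy needs {synergy_uplinks}")
```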
Do you really mean manufacture? As in, Dell would also need to persuade Intel/AMD to open a processor fab in the UK; someone like Micron to open a memory fab in the UK; Western Digital to open a hard drive factory in the UK... and onward for all the other components that make up a computer?
Or did you mean "assemble" rather than manufacture, in which case you get to pay for all those components in dollars (cos that's what they are traded in) and then use our "famously cheap high quality well trained" UK workforce to assemble them - then you get to compete to sell them against all the countries that aren't weak vs. the dollar... on WTO tariffs until something else is negotiated.
>Using VMs allow you to allocate resources (and enforce thoise allocations) to particular VMs.
>Running multiple apps on a single OS instance does not.
Again using a "proper" operating system (read, any of the commercial UNIX still out there) you _can_ run all your apps on a single OS instance and enforce resource allocations.
See Solaris Zones, HP-UX Containers and AIX WPARs
Never used it, but there's even a product to do this on windows & linux - Parallels Virtuozzo - no idea if it is any good or not...
Of course the challenge comes when you need to operate at different patch levels and with different kernel parameters, but again these OSs will handle some of this to a greater or lesser extent, and if that doesn't work, THEN you can virtualise the OS.
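For a flavour of what the enforcement looks like, here's a minimal sketch that spits out a Solaris zonecfg script capping CPU and memory for one zone - the zone name, path and cap values are made-up examples, not a recommendation:

```python
# Minimal sketch: emit a Solaris zonecfg script that caps CPU and memory for
# one zone. Zone name, path and cap values are hypothetical examples.
zone_name = "appzone"
zone_cfg = "\n".join([
    "create",
    f"set zonepath=/zones/{zone_name}",
    "set autoboot=true",
    "add capped-cpu",
    "set ncpus=2",          # cap the zone at 2 CPUs' worth of compute
    "end",
    "add capped-memory",
    "set physical=4g",      # cap resident physical memory at 4GB
    "set swap=8g",
    "end",
])

with open(f"{zone_name}.cfg", "w") as fh:
    fh.write(zone_cfg + "\n")

# Then apply it with: zonecfg -z appzone -f appzone.cfg
print(zone_cfg)
```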
It's interesting that part of the UK government's IT policy is to encourage the public sector to open up to smaller UK SMEs - that's all well and good, and certainly the right thing to do, but the question is, how does government cope when this sort of thing happens and the SME was providing services that have a knock-on effect on front-line services? I'm not really saying that the big outsourcers are the way to go, but this isn't something that would happen with the "usual suspects".
Obviously one solution is not to outsource, but for the current lot in office that answer "does not compute".
So can you get something like this for an iPhone 5: http://www.tomtom.com/en_gb/products/accessories/for-smartphone/hands-free-car-kit-iphone-9UOB.001.08/
i.e. a kit which mounts the phone on the dash or windscreen with a built-in Lightning connector? (I don't want just a bracket and then a separate charging cable which I have to fiddle with and plug in every time I get in the car)
If there is one, I couldn't find it easily on a google search - plenty of brackets and plenty of charging cables, but no combined offering...
and if there is one which I just didn't see due to my poor googling skills - well... I'll get my coat...
The big issue that's stopped me ditching my old 3GS for a 5 is the lack of a car kit with a Lightning connector - using their converter is not an elegant solution. I suspect I'm not the only one in that situation - why Apple thought people would move without having the same set of accessories available as they have on their older models is beyond me.
> Today, I can see very, very few reasons why anyone would run Oracle on anything other than Intel x86.
Let me fix that for you:
Today, I can see very, very few reasons why anyone would run Oracle.
Any business that wants to make the sorts of dirty moves and sharp practices that Oracle have in the past few years won't get my custom. As has been said before, Oracle doesn't have "customers", it has "hostages". Those that claim to love it are suffering from Stockholm Syndrome.
The contractual side of this is only one part - if you have read all the pleadings as you claim, you will have noted that the other side of this is "promissory estoppel" - that is, Oracle executives repeatedly made commitments (in private, not in public) to HP about supporting the Itanium platform. Now these weren't contractual agreements, but they were commitments upon which HP based significant investments... this is as much part of US (or Californian) law as the contract breach argument.
And far from causing vendors to worry about making commitments to platforms and products etc., I think this could bring some much-needed clarity to the whole area of heterogeneous support - i.e. instead of vendors saying "we support X with Y", they are now much more likely to say "we support X with Y for a minimum of Z years, and will extend that support if we think it appropriate at some point". As a customer planning my investments in technology, I'd be much happier with that.
If you look at the original article here:
http://www.chinalaborwatch.org/news/new-377.html
It reads to me more like the workers work 7-11:30 a.m. and then 1-5 p.m., and are then told to do overtime from 6 p.m. to midnight. That to me reads like they were already doing an 8.5-hour day, and were then required to work another 6-8 hours until midnight, and sometimes 2 a.m. (no doubt then getting up after less than 5 hours' sleep to start another shift). And this is operating machinery etc...
The article also says the workers do 100-120 hours of overtime a month - based on a 5-day working week and an average of 20 working days in a month, that's an extra 5-6 hours of work per day.
I don't know what you do on your 6 hour+ shift, but I doubt it is quite the same.
What a saint Steve Jobs is, eh?
... our bosses sh*t on us, so we don't care (in fact we'd actively encourage the idea) if your bosses sh*t on you...
I don't work in the public sector (they couldn't afford me - what does that say!), but I take no joy in seeing them take the brunt of this current round of media vilification... people all seem to forget that these aren't "faceless bureaucrats", they are real people with homes and cars and families... no doubt, as with _any_ organisation, there are efficiencies to be made (heck, there are plenty of horribly inefficient private sector organisations out there!) but if you think public sector pay is one of them, I suspect you are wide of the mark.
And as for the pensions... well, talk to public sector workers - few believe they will ever get what has been promised - they all know it's only a matter of time before the government (whichever one it might be) reneges on the pension agreements in the same way the private sector has done.
The worrying thing in the private sector is just how many folks have no pension provision at all any more... the practice of moving jobs every 3-5 years (often as the only way to get a pay rise) means many have tiny packets of investment all over the place - personally I expect the retirement age to have been raised so far that I'll end up working till I drop!
>> And as for those complaining that doctors, senior civil servants etc. skew the public figures upwards, ever heard of company directors, barristers, etc?
The NHS employs about 1.6-1.7m people - of those about 160k are Doctors of some sort...
Show me a private sector organisation where 10% of the employees earn over £80K...
This number of people will even affect the median...
total pin compatibility probably doesn't matter to most end users... how often are they really going to take out one processor and insert another?
what matters here is that the systems vendors can use volume chipsets in Itanium servers rather than custom chipsets - that makes overall systems pricing lower
if I understand the material released so far, HP will be able to produce up to an 8 socket/32 core Itanium server which could share almost every component with their x86 servers - that might not sound interesting technically, but it does mean that these systems benefit from the economies of scale HP have in x86 server manufacturing
what no-one seems to grasp here except for Intel is that the processor business these days is less about technology and more about volume/supply chain economics
Sorry, but there's a big difference between saying "we won't support it" and "we won't even sell you the license"... Oracle for example don't "support" RAC in any virtualized environment except their own VM, but hell, if you want to buy licenses for it to run on another virtualisation platform, they'll take your money no problem. IBM will likely try and take you to court if you try to do meaningful business on an emulated MF platform.
Let's be clear - this isn't about IBM worrying over support issues - this is about IBM maintaining 100% financial control of the mainframe space and its "over a barrel" customers (many of whom now seem to be suffering from Stockholm Syndrome with these weird zLinux IFL plays).
Cue the MF oldtimers telling me how I know nothing etc etc...