Re: Does AMD
In fact, most graphics cards have MULTIPLE DP ports and only one HDMI in case you are connecting to a TV.
The new Genoa CPUs in the DL385 Gen 11 have 12 DIMMs per CPU because the Genoa chips have 12-channel memory controllers, and getting to 48 DIMMs (2DPC) in a 2S system has challenges around trace length and timing, so 1DPC is the cost-effective answer.
The extra memory bandwidth would be useful for heavily virtualised workloads.
The Sapphire Rapids Xeon systems have more memory slots as they *ONLY* have 8-channel memory, so 2DPC is less of an engineering issue.
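For what it's worth, the slot maths falls out of simple multiplication (a throwaway sketch; the 2-socket assumption and the helper function are mine, the channel counts are from above):

```python
# DIMM slot arithmetic for 2-socket boxes (DPC = DIMMs per channel).
def dimm_slots(channels, dpc, sockets=2):
    return channels * dpc * sockets

print(dimm_slots(channels=12, dpc=1))  # Genoa at 1DPC: 24 DIMMs
print(dimm_slots(channels=12, dpc=2))  # Genoa at 2DPC: 48 DIMMs (the routing headache)
print(dimm_slots(channels=8, dpc=2))   # Sapphire Rapids at 2DPC: 32 DIMMs
```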
At the end of the day, AMD is significantly ahead in performance per watt, so if that is your metric, have another look at the AMD Genoa-based systems.
Because you have a fully virtualised VMware environment, with an ecosystem of management, applications, skills and processes built around it, but need or want to run some workloads in the public cloud.
Because you want to use AWS as your DR site, so you do not need to stand up a complete replica in another "local"-ish data centre.
There are a myriad of reasons for it.
Well, AMD Epyc has 128 PCIe lanes in either single- or dual-CPU configs.
32 lanes for external comms (4 x 100GbE or 16 x 25GbE ports) and 96 lanes for 32 storage devices (3 per ruler) makes sense. That is nearly 3GB/sec per device (2,955MB/sec).
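Back of the envelope, assuming PCIe 3.0's roughly 985MB/sec of usable bandwidth per lane (the constants here are my assumptions):

```python
# Lane budget for a 128-lane Epyc system, as described above.
total_lanes = 128
comms_lanes = 32                           # 4 x 100GbE or 16 x 25GbE
storage_lanes = total_lanes - comms_lanes  # 96 lanes left for storage
devices = 32                               # rulers, 3 lanes each

lanes_per_device = storage_lanes // devices  # 3
mb_per_lane = 985                            # approx PCIe 3.0 usable rate
print(lanes_per_device * mb_per_lane)        # 2955 MB/sec per device
```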
Of course Intel will probably not license the EDSFF form factor for AMD-based systems, so it might all be for naught.
Very few people can tell a real-world difference between a SATA SSD and NVMe; this is especially true in a single-user (not server or shared storage) workload.
Unless you are editing 4K video in real time or similar, a 6Gbps SATA SSD is more than fast enough 99% of the time.
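To put rough numbers on that (the bitrates below are my assumptions, not gospel):

```python
# How many real-time 4K streams a SATA SSD could feed, roughly.
sata_mb_s = 550        # ~real-world ceiling of a 6Gbps SATA SSD
stream_mb_s = 90       # assumed rate for one ProRes 422 HQ UHD stream
print(sata_mb_s // stream_mb_s)  # ~6 streams before SATA is the bottleneck
```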
Two things stop it from going into a laptop:
12.5mm Z Height
SAS Interface
These will end up in servers and storage arrays.
It does mean that when my vendor qualifies these, I will be able to get 3PB raw into 5 rack units. At a monstrous price, but that density: 3PB used to be 5+ racks; now it is 5RU.
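The old footprint is easy to sanity-check (the drive and shelf sizes below are my assumptions for the era):

```python
# What 3PB raw used to cost in rack space: 3TB HDDs, 12 per 2U shelf.
import math

target_tb = 3 * 1000                    # 3PB
hdds = math.ceil(target_tb / 3)         # 1,000 drives
shelves = math.ceil(hdds / 12)          # 84 shelves
rack_units = shelves * 2                # 168RU
racks = math.ceil(rack_units / 42)      # 4 racks of shelves alone,
print(hdds, shelves, rack_units, racks) # plus controllers/switches -> 5+
```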
But without the licensing advantages of Windows 10 Enterprise, managing those VMs and their licenses in a 30,000-seat enterprise would be a nightmare of proportions you have never dreamed of, and the risk and cost exposure would make any corporate guy run for cover.
What works in an SMB environment rarely scales to 30,000 seats.
As a DellEMC Solution Architect in the Distribution Channel in the ANZ region, I can categorically state that VXRail is *NOT* limited to 8 or 12 nodes.
Also, the support for VXRail is *NOT* Dell standard or even Dell ProSupport; it is instead provided by the VCE support organisation, which also supports vBlock/vxBlock/vxRack.
BTW, unlike a lot of you on here I am not hiding behind AC; I am happy to post and be held accountable for my posts.
I am the first to accept that VXRail is not perfect, but for a VMware shop that wants to lower its OPEX costs and free resources for future projects, VXRail is hard to beat.
And two SD card slots just next to it. What are they likely to be for? Are they normal in a server? (I have very little experience of servers.) As installed in the case shown, they are inaccessible.
They are used in x86 servers for hypervisor boot-up (VMware); you could also install Linux on RAIDed SDHC cards and then use all the disk slots for data drives.
So the sweet spot for 1 drive is probably 3TB.
When installing 120 of those into 12-drive shelves, each shelf needing rack space, SAS cables and cooling in a datacentre environment, density is king. Going from 3TB to 6TB halves the footprint (10RU vs. 20RU), which pays for the $/GB difference anyway. For the likes of Google/Amazon/Azure/Netflix/Dropbox, who run disks in the tens or hundreds of thousands, this makes a major difference.
So while 3TB is the sweet spot for a home PC with 1-2 disks, at scale the higher the density, the lower the associated costs, and that is what drives the research.
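The 10RU vs. 20RU figure above falls straight out of shelf-packing (the 2U-per-shelf assumption is mine):

```python
# Rack units needed for a fixed raw capacity at a given drive size.
import math

def rack_units(capacity_tb, drive_tb, drives_per_shelf=12, ru_per_shelf=2):
    shelves = math.ceil(math.ceil(capacity_tb / drive_tb) / drives_per_shelf)
    return shelves * ru_per_shelf

capacity = 120 * 3                 # the 120 x 3TB example above: 360TB raw
print(rack_units(capacity, 3))     # 20RU with 3TB drives
print(rack_units(capacity, 6))     # 10RU with 6TB drives
```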
I had a war going on with one of my customers for about 8 months back in 1996.
They were a high school and insisted that they needed two "multimedia" machines in the library with CD-ROMs and SoundBlaster cards, running Windows 3.11 with apps like Microsoft Encarta and its ilk. The rest of the customer's network ran in a NetWare environment with BootPROM-equipped PCs running Win 3.11 off the network, locked down and well managed, but we could not use that setup for the library for a variety of reasons.
It got to the point where I was replacing all of the Windows/DOS configuration files at NetWare login via the login script to stop the kids installing games under Windows or DOS, and had a "Shutdown" button that removed key Windows and DOS system files on Windows exit so that they could not use the machine outside of the controlled Windows environment.
It worked well but made installing new apps quite time-consuming.
A few points.
1. VXrail is available today and it is NOT on Dell hardware.
2. The Dell/EMC deal, while being on track, has NOT yet closed and there is no hardware partnership between EMC and Dell at this point in time.
3. I am not 100% sure which ODM is being used for the VXrail kit, but it will PROBABLY be Quanta, who are the ODM for the VXRack and ScaleIO nodes.
*I work as an EMC Solution Architect in the Distribution Channel in APJ.
Once you hit 8 controllers, 4+PB of raw capacity, 16TB of memory, 384 CPU cores and 256 16Gbps front-end ports, do you really need more scale in a single system?
There comes a point where the complexity of the scaling is more than it is worth.
If you need monster scaling then look at a solution like ScaleIO, Isilon or ECS, depending on the data type.
*Disclaimer - I am an EMC channel pre-sales guy working in the distribution channel.