
"non-standard form factors"
Vendor lock-in achieved!
Over the past few years we've seen a number of OEMs, including Dell and HPE, trying to make on-prem datacenters look and feel more like the public cloud. At the end of the day, though, the actual hardware behind these offerings is usually just a bunch of regular servers and switches sold on a consumption-based model, …
If you buy this, you're betting that the company will: (a) prosper and continue as a business entity, and (b) not discontinue support of these systems any time "soon". Beyond that:
+ Custom, larger-than-usual PCB sizes mean they can build it at lower cost. Data General did this with its Nova and Eclipse lines (though those still fit into standard 19" rack-mount cabinets).
- Custom power supplies and power connectors most likely mean no replacing failing/failed units with third-party (less-expensive-to-you) hardware.
- All that custom HW and SW means many interfaces that are far less extensively tested than industry-standard ones. Interfaces are where bugs tend to lurk.
- That pic showed non-locking swivel casters holding up that box. The last thing one needs while swapping out a unit with a server jack is for the rack you're working on to start squirreling around. Yes, you can (and should) chock the casters before working on such a machine, but the lack of locking casters shows a lack of thoughtfulness in the design team.
- "0xide", with the the letter "oh" replaced by a zero?! Have these leetspeak-spouting "haxorz 4nd w423z d00dz" from the 1980s not matured?
We certainly hope we're successful. So far the folks that have deployed the platform have been thrilled with the experience and the system as a whole.
The rack fits inside the standard 24" (600mm) floor tile common to most enterprise-class data centers and colos. We're not trying to squeeze this into a telco CO or something.
https://en.wikipedia.org/wiki/19-inch_rack
The power shelf and rectifiers are common off-the-shelf parts from Murata, the same ones used by hyperscalers and public cloud providers. We'll replace them if you have a support contract, or folks can source their own.
https://www.murata.com/-/media/webrenewal/products/power/datasheet/mwoces-191-m-b.ashx?la=en-us&cvid=20221122043000000000
https://www.murata.com/-/media/webrenewal/products/power/datasheet/mwocp68-3600-d-rm.ashx?la=en&cvid=20220525020000000000
The coolest connectors in the product come from Samtec; we did a whole podcast episode on them. Really interesting stuff:
https://oxide.computer/podcasts/oxide-and-friends/1342756
The casters are just to roll it into place; I can roll a fully populated rack by myself, but we use two people for safety. Once it's in place on the floor tile, you drop the feet for stability. If people are bumping into stuff that much, maybe install the seismic kit too?
The zero in the name is more about hexadecimal: 0x1de, with the 0x prefix denoting a hex literal, so more like the mid-1800s.
https://en.wikipedia.org/wiki/Hexadecimal
Fun side note: our PCI vendor ID is 01de.
https://github.com/pciutils/pciids
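For anyone who hasn't run into the 0x convention, here's a minimal Rust sketch (hypothetical, purely for illustration) showing that 0x1de is an ordinary hex literal, and that the PCI vendor ID above is the same value written as a 16-bit ID with a leading zero:

    fn main() {
        // The 0x prefix marks a hexadecimal literal: 0x1de is 478 in decimal.
        let name = 0x1de;
        println!("0x1de = {name}");
        // Oxide's PCI vendor ID as it appears in the pci.ids database --
        // the same value, zero-padded to four hex digits.
        let vendor: u16 = 0x01de;
        println!("vendor = 0x{vendor:04x}");
    }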
Probably marketed as such. This can be (depending on your interpretation) better than the blamestorming that happens when you've got (say) Cisco switches, NetApp storage, some hypervisor, and whatever else is in your stack, all pointing fingers at whichever vendor isn't them when it stops working.
Note to editors... There are 350 million Americans who know what a pound is and umpteen billion non-Americans who don't. The same goes for things like 3/8 of a bushel quart or whatever. I mean, you lot even have gallons that aren't gallons. Stop it, please. Metric is not a commie pinko plot to turn American yoof into raging transvestites.
I think you overestimate the number who know what a pound is.
Example: new dryer, max 15 lb. After putting 40 lb of wet clothes in it and burning up the belt, the wife complains it's a piece of junk.
Same concept with watts. Bathroom plug: hair dryer 1,500W, space heater 1,500W, straightening iron 800W, all on at the same time. Why would the breaker trip? I still have holes to plug stuff into.
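(Assuming a typical US 120V bathroom circuit: 1,500W + 1,500W + 800W = 3,800W, which is nearly 32A at 120V, roughly double what even a 20A breaker will carry before tripping.)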
Hi -- FWIW I've added an editor's note on this point. Our vulture was told the 9 ft and 3000 lb figures in conversation with the Oxide team. It turns out those numbers were for the systems as shipped, not deployed.
The article has been updated to include the measurements as deployed.
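(For the metric-minded: 9 ft is roughly 2.74 m, and 3,000 lb is roughly 1,360 kg.)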
C.
15 years ago ...
https://www.theregister.com/2009/03/18/rackable_cloudrack_two/
Hardware-wise, anyway.
"This time around, the trays don't include the power supply, which has been shifted out into the rack enclosure itself and which provides direct conversion from three-phase AC power coming out of the data center walls to 12V power needed by the servers on the tray. So, the "server" doesn't have a cover, doesn't have any fans, and doesn't have a power supply."
I was looking at using these at the time for my (then) org's first Hadoop cluster. The VP of the group decided to go another direction, cutting corners on quality and cost. I left shortly after. I was later told their new vendor had a 30% hardware failure rate, and with quorum requirements that meant the cluster ran at half capacity for its first year of operation. I had a great laugh. The company is long dead now. I did enjoy the second-best commute, though (short of working from home): the office was literally across the street from my apartment. I actually had co-workers driving in and parking further away to avoid paying parking fees.
Our next generation C0x2de uses our proven aqueous-CO2 cooling solution. This coolant has been widely proven to combat overheating in the most demanding climates. A standard installation includes 4 kegs of spare coolant in the chiller system, and a coolant diagnostic port in the lab.
The rack is fully air cooled.
There is no liquid cooling in the product at this time and none planned at the moment.
There is a trio of 80mm fans in each compute sled, along with 4x80mm fans for each of the two network switches, plus fans in the rectifiers inside the Murata power shelf. Because we paid a great deal of attention to thermodynamics and use 80mm fans throughout the rack, we're able to achieve far better compute density and lower power utilization than comparable configurations.
In the early days, like the 1980s, Sun built servers that required 208VAC. Only the SF area commonly supplied that; the rest of the world was on 220 or 240VAC. I think that's called parochial, but correct me if I'm wrong.
When $WORK finally realized that and put in a step-up transformer, the loud fan noise and the irregular reboots stopped.
For Oxide to build a (customized) rack that is more than 2m high is... parochial. There are elevators that won't take such items even tilted and empty. The tallest Dell rack (48U) is 2,273mm high. These guys have not done their homework; they haven't had to actually install such equipment in any variety of customer locations worldwide.
2.74m is just... dumb.
And putting just 15kW of CPU power into an oversized rack is... underwhelming. There are higher-power (and therefore faster) CPU and storage options that aren't physically unmanageable.
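(For scale, if my arithmetic is right: 9 ft is about 2.74 m, while one rack unit is 44.45 mm, so 48U of mounting space is about 2.13 m before the frame is counted; Dell's 2,273 mm figure presumably includes the enclosure.)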
""It's just comical that everyone deploying at scale has a DC bus bar and yet you cannot buy a DC bus-bar-based system from Dell, HP, or Supermicro, because, quote unquote, no one wants it," Cantrill quipped."
That's actually false. Both Dell and HP/HPE have DC power supplies as an option for most of their regular servers, and have had them for a while. We use them for special-purpose applications, but they are genuinely meant for DC input.
Yep, I put in a CCTV system for a customer that could only give us DC power; Dell sent me everything I needed with DC power supplies.
Oddly, the customer was a power station generating lots of AC power, but the security lodge where our gear was going only had DC...
The DC bus bar is not a DC power supply.
The DC bus bar is a large copper bar that runs the length of the back of the rack and distributes DC to all the compute sleds and switches in the rack.
The rack is AC-powered and uses a Murata power shelf with six rectifiers for AC-to-DC conversion.
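(If those are the MWOCP68-3600 rectifiers from the datasheet linked above, and the part number means 3,600W each as I'd guess, six of them give the shelf roughly 21.6kW of capacity, comfortable headroom and redundancy over a ~15kW load.)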
I don't think its bus structure allows for ISA, VESA Local Bus, PCI, or PCIe-interfaced video cards. However, I once saw Quake running in a mode which used coloured ASCII characters to display the scene. Perhaps you could modify Quake to output to the diagnostic serial port, if the 0xide has one. id Software has released the source code for the original Quake, so there's a coding challenge for you. :-)
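If anyone actually takes that challenge up, the core of it is just mapping framebuffer luminance to an ASCII ramp and streaming the result at a serial device. A rough Rust sketch of that idea (ignoring colour, and assuming a hypothetical Unix serial device at /dev/ttyS0; the real Quake integration and baud-rate setup via termios are left as the exercise):

    use std::fs::OpenOptions;
    use std::io::Write;

    // Dark-to-bright ASCII ramp; each luminance byte picks a glyph.
    const RAMP: &[u8] = b" .:-=+*#%@";

    // Convert one grayscale frame (row-major luminance bytes) to ASCII lines.
    fn frame_to_ascii(pixels: &[u8], width: usize) -> Vec<u8> {
        let mut out = Vec::new();
        for row in pixels.chunks(width) {
            for &p in row {
                out.push(RAMP[p as usize * (RAMP.len() - 1) / 255]);
            }
            out.extend_from_slice(b"\r\n");
        }
        out
    }

    fn main() -> std::io::Result<()> {
        // Hypothetical serial device; opening it like a file works on Unix,
        // though real code would also configure the port via termios.
        let mut port = OpenOptions::new().write(true).open("/dev/ttyS0")?;
        // Stand-in for a Quake framebuffer: an 8x4 luminance gradient.
        let frame: Vec<u8> = (0u32..32).map(|i| (i * 255 / 31) as u8).collect();
        port.write_all(&frame_to_ascii(&frame, 8))?;
        Ok(())
    }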