
The mere fact that this board might actually be available is compelling enough reason to consider it.
I love the RPi, but its low cost, and its dependence on the supply chains and countries that make that low cost possible, are a problem for it.
That Raspberry Pi 4B feeling a bit pokey? Asus' Aaeon division has managed to fit an eight-core Xeon processor, complete with error-correcting code (ECC) memory support and a PCIe 4.0 x8 slot, into a four-inch single-board computer. While nowhere near as small as an RPi 4B or a Compute Module 4, Aaeon's Epic-TGH7 packs substantially …
If you had clicked through to the manufacturer's site, it wouldn't have taken long to see there are 16 GPIO pins and 6 serial ports, which are arguably more useful in my book.
However I wouldn't compare it to the Pi; they're different markets, as the price tag alone should tell you. It always happens, of course, but the Pi is not unique or pioneering: there were earlier alternatives, and there are cheaper, faster, smaller or longer-lived SBC alternatives with I/O options that make the Pi look amateurish. The relentless Pi comparisons (always negative) for anything else in the SBC market demonstrate not insight but ignorance of the wider market.
Yes, the Pi has its place: it's a useful, mid-priced, mid-performance option. It has great support for hobbyists and experimenters, but aside from that it is far from unique or exceptional. For some jobs it is too big, power-hungry or expensive; for others it is too slow or lacks sufficient I/O throughput. There isn't a single niche that only Pis fit, with everything else either too big or too small.
It has 16 digital I/Os. The specs make no mention of any peripheral controllers on them. It's hard to see that as more useful than the Pi's 27 GPIOs, most of which can be configured for some alternative function (I2C, SPI, UART, PWM).
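For what it's worth, switching Pi GPIOs to those alternative functions is just a matter of device-tree settings. A sketch of the usual /boot/config.txt entries (overlay names are from the standard Raspberry Pi OS overlay set; the pin choices are just examples):

```
# Examples of remapping Pi GPIOs onto peripheral controllers
dtparam=i2c_arm=on            # GPIO2/3 become I2C1 SDA/SCL
dtparam=spi=on                # GPIO7-11 become SPI0
dtoverlay=uart2               # GPIO0/1 become UART2 TX/RX (Pi 4 family)
dtoverlay=pwm,pin=18,func=2   # GPIO18 becomes hardware PWM0
```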
Or, for that matter, an ESP32's extremely impressive list of peripherals and very good RTOS.
If you want to run e.g. the "cake" traffic shaper (the sch_cake qdisc in the Linux kernel) with a gigabit uplink, you need a pretty beefy CPU, so a product like this would be interesting if it were cheap enough.
If your uplink is only 100 Mbps, then getting an EdgeRouter X or another ready-made product is the best option.
Of course, if you can write software yourself, a real system like this is a lot more flexible than an off-the-shelf router.
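For reference, a minimal cake setup looks something like this. This is a sketch: the interface name eth0 and the 950 Mbit figure are placeholder assumptions for a gigabit uplink.

```shell
# Requires iproute2 and a kernel with sch_cake (mainline since Linux 4.19); run as root.
# Shape egress slightly below the uplink rate so cake, not the modem, owns the queue.
tc qdisc replace dev eth0 root cake bandwidth 950mbit diffserv3 nat

# Check it is active and watch drop/mark statistics under load
tc -s qdisc show dev eth0
```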
The cooler they have for it is... massive. And tall. Which means a custom heatsink would have to be designed and fabricated to put it in a multi-node server chassis, and TBH there's already a fair number of those out in the wild.
I think that this is designed more for the embedded markets or specialty systems where a decent amount of compute horsepower is needed, but it's not the star of the show.
"RPI 4B's $35 MSRP ... Aaeon is offering the i3 equipped variant for $812 while the i7-based version is selling for $1,167"
So, that's comparing apples with oranges. To me the two devices are aimed at totally different sectors of the market: the Pi at the tinkerer/hobbyist, and the Aaeon at the small-form-factor PC/embedded market.
I'm not surprised that the Aaeon beats the R Pi. They are designed for two different uses.
I use my Pi (400) over my i5 NUC whenever I can - the NUC turns into a hairdryer whenever you try to do anything remotely useful with it; the Pi just copes silently with all workloads.
It is about a third of the speed (with mild overclocking), but can run and run ad infinitum (it also does 128-bit software floats for long(er)-term ODE simulations).
The Xeon needs fan cooling . . . how last century!
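On the software-floats point, here's a minimal illustration of why extended precision matters when a long ODE run accumulates many tiny steps. It uses Python's stdlib decimal at quad-like precision as a stand-in for true 128-bit floats, which CPython lacks:

```python
from decimal import Decimal, getcontext

getcontext().prec = 34   # roughly the significand precision of IEEE binary128

# Accumulate a small timestep many times, as a long-running integration would.
t_double = 0.0
t_quad = Decimal(0)
for _ in range(100_000):
    t_double += 0.1             # binary double: 0.1 is not exactly representable
    t_quad += Decimal("0.1")    # software decimal: exact for this value

print(t_double)   # not exactly 10000.0; the error grows with the step count
print(t_quad)     # exactly 10000.0
```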
Depending on your use case you can save a lot of SD card wear by changing the ext4 filesystem commit time from the default 5 s to something like 120 s. Of course, if you get a reboot or power failure you lose the last two minutes of changes, though journaling should mean the filesystem is at least consistent.
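A sketch of that change (the device name and mount point are examples; check your own fstab):

```shell
# One-off, effective immediately on the running system:
mount -o remount,commit=120 /

# Persistent, via /etc/fstab (noatime further reduces writes):
# /dev/mmcblk0p2  /  ext4  defaults,noatime,commit=120  0  1
```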
No, you couldn't. To build that cluster you'd need a bunch of power supplies, enough cabling to get them all talking (which would probably end up meaning a big network switch), some structure to hold them all so one node's heat doesn't slow down its neighbours, and at least one device with enough storage for all the nodes to boot from and store data on. Factor in those costs and the number of nodes you can afford drops. It might still be more useful for your purposes at that point, but know what you'll actually get for the budget you have.
>you'll need a bunch of power supplies, enough cabling to get them all talking which would probably end up being a big network switch, some structure for holding them all so their heat doesn't slow down the nearby nodes, and at least one device with sufficient storage so that all nodes can boot from it and store data on it.
Seems an old 4U server chassis/tower case might be a suitable platform.
Which actually raises a question about the Aaeon Epic: is it available in a blade package, so that a number of these could be slotted into a single chassis and used as server blades or even individual workstations(*)?
(*) Back in the 1980s there were products that enabled this style of working, using the PC motherboard to manage disk access. If I remember correctly, up to four blade PCs could be fitted to the expansion slots in an AT chassis, with the ports (monitor, keyboard, mouse and headphones) externally accessible.
That's a complicated question. For one thing, there are a lot of plausible workloads to compare. Since you're using a networked cluster of Pis, your task has to be very parallelizable to run on 80 cores, and probably not dependent on real-time coordination between cores, so nodes don't slow down checking on each other too often. Even then, you can get into different situations.
For example, you have memory to consider. If your nodes are 1 GB Pis, then they'll probably spend more time loading stuff into memory, because each core (assuming they're doing broadly the same task and running independently) has at most 256 MB, not including the kernel, before it starts impeding another core's operations. If your task is light on memory usage, this is no problem. If each core is working on large objects or generating lots of temporary data, memory could quickly become your bottleneck. You also have different memory speeds, because the Pi's RAM is slower than what's available to the Intel chip. Similarly, if you're storing a lot of data on nonvolatile drives, that probably isn't great for your cluster either: you're either using SD cards (not speed demons) or, more likely, a networked device serving as common storage, which is probably faster but is serving all twenty nodes over one network link. Come to think of it, the speed of your network device might become a factor too.
Now that I've talked for too long and probably got very boring, here are some basic numbers that might give the kind of answer you're after. I found benchmark numbers for the Pi and the i7 fitted to this board (I couldn't find the Xeon, and these aren't the most reliable of figures). Assuming they correctly describe the relative core speeds, a core from the i7 is about 7.63 times as fast as a core from the Pi. You've got 8 cores in the i7 and 80 in your cluster, so if only raw CPU compute matters and nothing else bottlenecks (that would be nice, but it's not true), your cluster would be 31% faster. Taking this to an even less accurate level, I found a different benchmark comparing the Xeon to the i7, which says a Xeon core is about 3.1% faster than an i7 core. Adding that in, the cluster would be 27% faster.
I want to reiterate that this is not how you'd do the comparison in real life. Performance depends very heavily on exactly what you want the board to do. But you get the idea: although the combined CPU power of 20 Pis is a match for a single Xeon of this level, and if raw compute is literally all you care about it could be superior, the margin is small enough that for a task that leans on memory or can't be split into parallel chunks so easily, the Xeon probably wins.
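The back-of-the-envelope arithmetic above fits in a few lines (the 7.63x and 3.1% ratios are the quoted benchmark figures, so treat the output as equally rough):

```python
# Rough speedup arithmetic from the quoted benchmark ratios (illustrative only).
PI_CORES = 80                          # 20 Raspberry Pi 4Bs x 4 cores
I7_CORES = 8
I7_PER_CORE = 7.63                     # one i7 core vs one Pi core (benchmark figure)
XEON_PER_CORE = I7_PER_CORE * 1.031    # Xeon core ~3.1% faster than an i7 core

cluster_throughput = PI_CORES * 1.0    # in "Pi-core units"
i7_throughput = I7_CORES * I7_PER_CORE
xeon_throughput = I7_CORES * XEON_PER_CORE

print(f"vs i7:   cluster is {cluster_throughput / i7_throughput - 1:.0%} faster")
print(f"vs Xeon: cluster is {cluster_throughput / xeon_throughput - 1:.0%} faster")
```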
"the board can be equipped with up to 64GB of DDR4 ... by way of a pair of SODIMM memory slots"
The second slot appears to be invisible in your attached photo.
Also, last time I checked, a Pi 4B board was about $30 and the cheapest of these boards is over $800, so you're kind of comparing a Dacia Sandero with a Rolls-Royce Phantom; not really the same market segment at all...
The sooner Intel stops with this bullshit about ECC RAM being an "enterprise" feature and restricting it to their top of the line server CPUs, the better.
AMD have made the use of ECC RAM possible on consumer level Ryzen CPUs for several CPU generations. This makes them ideal for building things like a home NAS server where protecting against random corruption is absolutely essential. With AMD, you can do that for *far* less than what Intel think you should pay.
Given that just about every other data source/store in a modern computer is equipped with error detection and sometimes even correction, why should the system RAM be treated any differently when it's well known that single bit transient errors happen? The frequency of single bit RAM corruption is only going to increase as the storage elements get smaller. If it wasn't for the blinkered, profit greedy attitude from Intel, ECC RAM would be today's standard.
Just another reason why Intel is slowly going down the drain.
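For anyone unfamiliar with how ECC pulls this off, here is a toy Hamming(7,4) sketch in Python (real ECC DIMMs use a wider SECDED code over 64 data bits, but the single-bit correction principle is the same):

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit codeword (positions 1-7, parity at 1, 2, 4)."""
    c = [0] * 8                      # index 0 unused, keeps positions 1-based
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]        # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]        # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]        # parity over positions with bit 2 set
    return c[1:]

def hamming74_correct(word):
    """Return (corrected word, error position); position 0 means no error found."""
    c = [0] + list(word)
    syndrome = 0
    for p in (1, 2, 4):
        if sum(c[i] for i in range(1, 8) if i & p) % 2:
            syndrome += p            # failed checks add up to the bad bit's position
    if syndrome:
        c[syndrome] ^= 1             # flip the single bad bit back
    return c[1:], syndrome

cw = hamming74_encode([1, 0, 1, 1])
bad = list(cw)
bad[2] ^= 1                          # simulate a transient single-bit flip
fixed, pos = hamming74_correct(bad)
print(fixed == cw, pos)              # True 3
```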
In avionics (particularly mission critical and safety critical), ECC for main memory and L2 has been a requirement for decades. L1 parity is also required.
This is why you won't find much from Intel in those markets (I know of some 6U boards used for display but that's about it). It is not a particularly large market but it can still be quite lucrative.
Freescale (now part of NXP) and others have offered ECC for a long time. I designed an SBC for that market using an MPC8548E about 12 years ago that had an 85 mm x 90 mm footprint.
On the bit flip front, the chance of it happening is proportional to altitude (up to about 70,000 feet) and there was a study on servers in Denver where bit flips were regularly detected.
You really don't want the flight control computer(s) to have undetected errors.
Not exactly. It's 3% extra, which is probably a 2% premium over a version built around a board that can paint pixels just fine but doesn't have a processor much more powerful than a passive display needs. That doesn't mean it won't sell, but buyers who want just one of these are likely price sensitive, and buyers who want to buy a ton of them will see that premium add up to a significant extra. The manufacturers of such things also wouldn't get to mark their products up much for having an unnecessary Xeon inside, so the desire to keep profit margins high will probably encourage them not to use more computer than they need.
The GPU in the Pi already can run a 4k display at 60 Hz assuming the data it receives can be decoded in hardware, and that's not the only SBC with a suitable GPU. If your board needs 8k, then you'll need something larger, but otherwise, there are lots of choices. At that point, reliability and ease of administration are more important than CPU speed.
I have the first version, an Airtop 1 - bought it about 3 months after launch, 6 years ago now so probably one of the oldest ones in the UK. It's magnificent - the case is a thing of beauty.
I recently put an M.2 card on a PCIe extension in there, and had to contact Compulab for a BIOS upgrade to do this (the machine was so old it shipped with some sort of beta BIOS). Compulab were fairly helpful, so that's a plus too.
Overall, highly recommended. Now with 4 x SSDs and 1 x M.2 on a PCIe extension, running RAIDZ: no noise, no moving parts, no problems. The only failure was the front LCD, which packed up at some point, but it didn't do anything useful so no loss.
Oh, you can buy Raspberry Pi 4s now. But only if you are an industrial-grade customer¹. The original user base of the device has been left in the dust as success (and a broad range of enthusiast-developed free software) brings more profitable clientele.
[1] From the pi-man: we’ve consistently been able to build around half a million of our single-board computers and Compute Module products each month ... Right now we feel the right thing to do is to prioritise commercial and industrial customers – the people who need Raspberry Pis to run their businesses
Like a rather large heatsink/fan assembly? (You can see where it's supposed to go.)
The Pi is useful because it's cheap, small, and doesn't use a whole lot of power (try running this board off USB-C!). It's not super fast, but it's actually fast enough for many applications.