Pikey Computers
My friends often laugh at me as I am so pikey I take all their cast-off computers and re-make them into something better. But if I could make something on *that* kind of scale I bet they wouldn't laugh so much!
Durham University has built itself a secondhand supercomputer from recycled parts and beefed up its contribution to DiRAC (distributed research utilising advanced computing), the integrated facility for theoretical modelling and HPC-based research in particle physics, astronomy and cosmology. The Institute for Cosmological …
Is 256 pi's enough ?
Certainly not, but dancing boards is an awesome improvement over the Connection Machine's 64K CPU LEDs!
Sorry, but I have to disagree.
There are two things that have not been bettered yet: one is the CM-5's LEDs and the other is the Cray-2's Fluorinert™ waterfall.
Achtung! Alles turisten und nonteknischen lookenpeepers! Das komputermaschine ist nicht für der gefingerpoken und mittengraben! Oderwise ist easy to schnappen der springenwerk, blowenfusen und poppencorken mit spitzensparksen. Ist nicht für gewerken bei dummkopfen. Der rubbernecken sightseeren keepen das cottonpicken händer in das pockets muss. Zo relaxen und watschen der blinkenlichten.
"(Unrelated aside from confused non-native speaker: shouldn't it be are 256 pis ?)"
referring to: "Is 256 pi's enough ?"
It should be "Are" because Raspberry Pis are countable.
It should be "Pis" because the official Raspberry Pi Foundation style is to capitalise the P.
It should be "Pis" (without an apostrophe) because the s is pluralising, not possessive or abbreviating.
It should be "enough?" because in formal English grammar and (totally unjustified sweeping generalisation alert) in just about every English style guide, there is no space before a question mark.
So, technically, it should have read "Are 256 Pis enough?"
However, "Pis" just looks wrong and causes cognitive dissonance, so the apostrophe is added in because it looks better. Acronyms are often pluralised with an apostrophe as well because it just looks better (e.g. CD's, DVD's).
I would have written "Are 256 Pi's enough?"
HTH
This post has been deleted by its author
Fine, I'll have to go back to replying on a keyboard rather than my phone, as it always adds a space before a question mark and I can't be arsed to remove it every time (it also adds a random space to the beginning of a sentence on the Reg comments which, again through apathy, I've stopped giving a toss about, hence the non-capitalised 'i'). As for why I used 'is' as opposed to 'are': that's more about the nature of the question, since using 'is' can be considered a more aggressive response, nay even a challenge, as opposed to 'ere's, which can be taken as a more polite enquiry.
There's also the cultural context of speech patterns not being the same for every part of the country... soft southern... people and their Oxford English dictionaries. No Queen's speech for me, etc. etc.
Now where's my flat cap?
When my ship comes in, cuz, when my ship comes in ... when I strike it rich, when I find the pot of gold at the end of the rainbow (You know, I should take those leprechauns to court. This is the third time they've lied to me!!! Never trust a leprechaun who swears you can outdrink him. It's not true, and he'll still pull the wool over your eyes!)
The problem is that if you're really going for HPC, the power requirements of the old kit (FLOPS/Watt) get pretty nasty pretty fast. Not to say there's no purpose in doing this, but it's always worth working out where the cut-off on running costs is - how much electricity can you get for the money you save, and how much work can you get done for that.
That said, for small businesses (and university departments), old gear is great for running file/email/whatever servers. Just make sure you have enough redundancy for failures.
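To put a rough number on that crossover, here's a back-of-the-envelope sketch. Every figure in it (the GFLOPS/W ratings for old and new kit, the £0.15/kWh electricity price) is an illustrative assumption, not data from the article:

```python
# Back-of-the-envelope FLOPS/Watt running-cost comparison.
# Every figure here is an illustrative assumption, not real kit data.

HOURS_PER_YEAR = 8760

def annual_energy_cost(power_kw: float, price_per_kwh: float) -> float:
    """Electricity cost of running a machine flat out for a year."""
    return power_kw * HOURS_PER_YEAR * price_per_kwh

def cost_per_pflops_year(gflops_per_watt: float, price_per_kwh: float) -> float:
    """Electricity cost of sustaining 1 PFLOPS for a year."""
    power_kw = 1e6 / gflops_per_watt / 1000  # 1 PFLOPS = 1e6 GFLOPS
    return annual_energy_cost(power_kw, price_per_kwh)

# Hypothetical old kit at 2 GFLOPS/W vs newer kit at 10 GFLOPS/W:
old = cost_per_pflops_year(gflops_per_watt=2.0, price_per_kwh=0.15)
new = cost_per_pflops_year(gflops_per_watt=10.0, price_per_kwh=0.15)
print(f"Old kit: £{old:,.0f} per PFLOPS-year; new kit: £{new:,.0f}")
```

On those made-up numbers the old kit costs five times as much in electricity for the same delivered work, which is exactly the sort of gap that decides whether "free" hardware is actually free.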
"Old gear can get you through it" is one of the ways university IT services departments keep stuff going on small budgets. It doesn't mean that they like it that way.
Management practices kill off the ability for IT departments *everywhere* to invent stuff for themselves. Don't write code, buy a product. Adopt standard procedures. Stop thinking.
Acknowledge that every bright idea has already been spotted -- for now.
Management practices kill off the ability for IT departments *everywhere* to invent stuff for themselves. Don't write code, buy a product. Adopt standard procedures. Stop thinking.
You have to remember that there are two sides to a University: The day-to-day "boring" stuff that keeps the place running (HR, Finance, Student Records, etc) and the research stuff.
The researchers can go and do what they want with their money.
The boring stuff most definitely does stick to industry norms. How would these forums light up if it transpired that a University had decided to write its own finance system?
Anon 'cause I work in a Uni.
Fair comment. I have enough years on my CV to make me an expert on university IT practices too...
Your words on finance systems remind me how badly universities handled things in 1999. At the time, many universities had legacy payroll and accounting systems which needed replacing to cope with Y2K. Enough institutions were affected for them to negotiate a collective deal with Oracle or SAP -- the sort of deal that UCISA exists to organise. Instead, universities went their separate ways and paid even more money for third-party developers to write modules to make systems work. Clueless.
I heard on the grapevine that one institution was so busy transferring data from the old to new system that they stopped paying bills and damaged their credit rating.
Not sure why the downvote, but the only PC/server I ever kept current in my stack was the gaming PC. The others ran very happily on older hardware/OS and, as they were never going to be stressed, I was still running my email on a Compaq Proliant 530G1 until a year or so ago... Was sad to see it go, but the replacement stepped things up a couple of generations.
I never laid a finger on one of the HPCs mentioned in the original piece but I was around at the time and met the people when one was built. The particular one was a complete new build -- there was an existing data room but it was taken apart. The new build contracts were for a data centre building, computing, and environmental services -- cooling and power. The deals were to build something that would accommodate HPC for >10 years -- apply sceptic hat when considering that most data centres operate for yonks longer.
That data centre is refreshed regularly -- the computer supplier knows how it works. The HPC gets faster -- and some junk is created. You won't find a useful purpose for it in your typical 19" rack.
Durham University were just lucky. They were lucky that somebody was selling an old HPC. They were lucky that they had a vacant data centre with the right power and cooling capacity. They were lucky to have a power contract at the right price.
CPU cores haven't got much faster* recently, so you can build usefully fast clusters out of older gear. I guess in this instance it meant that their researchers could have a cluster that they otherwise wouldn't have had.
The university that I worked for a few years ago managed to get a cluster out of a well-known hardware vendor, but had the small problem of not having anywhere to put it. I was there for a couple of years and the hardware did nothing apart from sit in a storeroom.
*Unless the code can take advantage of AVX etc
Of course the individual cores haven't got much faster, but the density has. HPC is not a single-threaded game, and the throughput of a physical server is still skyrocketing. Similarly, the performance per Watt is racing upwards.
The cost of optimising code on a large HPC installation pales when compared to the cost of powering the thing. All I'm saying is that there's a crossover beyond which it's uneconomical to use older hardware, even if it's free. I'm not even saying that Durham have reached that - merely pointing out that it's a huge factor in the decision to acquire something like this.
In the past the FLOPS/Watt was changing so fast that after 3 years it made sense to replace the kit because the power savings would more than pay for the replacement. Now after 5 years the power savings will just about pay for the new kit.
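That 3-year/5-year arithmetic can be sketched in a few lines. The purchase price, power draws and electricity tariff below are invented purely so the payback lands around five years, as in the modern case above:

```python
# Years until electricity savings pay back a hardware refresh.
# All figures are invented for illustration, not real vendor pricing.

HOURS_PER_YEAR = 8760

def payback_years(new_kit_cost: float, old_power_kw: float,
                  new_power_kw: float, price_per_kwh: float) -> float:
    """How long the energy saving takes to cover the purchase price."""
    saving_per_year = (old_power_kw - new_power_kw) * HOURS_PER_YEAR * price_per_kwh
    return new_kit_cost / saving_per_year

# Same delivered throughput, newer kit drawing 80 kW less:
years = payback_years(new_kit_cost=500_000,
                      old_power_kw=200, new_power_kw=120,
                      price_per_kwh=0.15)
print(f"Refresh pays for itself in about {years:.1f} years")
```

Shrink the power saving or the tariff and the payback stretches out, which is why the old "replace every 3 years" rule of thumb no longer holds automatically.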
Oh, and with universities there is a huge, and I mean a *HUGE*, difference between capex and opex. It is not like a business where you can shift willy-nilly between the two budgets.
"Oh and with Universities there is a huge and I mean a *HUGE* difference between capex and opex."
I think that might be a consequence of organisational size rather than a public/private or commercial/academic issue.
Major IT consultancies such as my own are happy to waste enormous amounts of opex to avoid capex spending.
"yet simply can afford."
My job takes me to a number of UK universities. Without fail they are all putting up new buildings, usually on top of the already limited car parks. Money for education and research may be in short supply, but capital building projects seem to be well funded. I suppose that's because they know how many more paying students they can fit in the space, so the ROI is known. All that poncy blue-sky thinking and way-out research projects, not so much.
... She's commissioned a new cluster! I must send my congratulations!
This, folks, is the unsung part of UK science and computing. While big shiny machines and satellites and cool detectors look great (and do great things), it's people like Lydia and her team, and the folks at the STFC who provide resources to science and industry alike (Rolls Royce has some excellent relationships with some of the facilities, to just name one) who keep things humming along!
So pleased for Durham that they could reuse the Daresbury Hartree hardware!
I were expectin' summat more quintessentially British.
"Built entirely from valves salvaged from old Echo brand Black-and-White television sets, wire recovered from a former BTH factory demolition in Birmingham, and cassette recorders found in an old Tandy warehouse last winter, the supercomputer consumes four floors of the Department of Clever Sums building and runs on nutty slack.
Undergraduate Students in the fifth floor classrooms are issued with special ceramic-textile outerwear and may toast their lunch sandwiches on the floor without fear of sanction."
Good to see older kit being put to good use.
At the other end of the scale reclaiming works too. Another nice thing to do is to make PCs for those who can't afford them from rubbish dump parts. My retired dad goes down the dump and takes away discarded PCs. Generally he can make 1 good'un from 2 or 3 scrapped ones, thoroughly wipes a HDD and puts on a Linux, and gives them away to other people in their village who can't afford anything really (there's not much money round their way). Has been doing it for years, and has made a huge difference for a lot of people who've discovered PCs and, as it turns out, social media late in life. Over the years the spec of machine he can put together has improved dramatically - people are throwing out some really quite nice hardware these days.
I wonder why the headline says that the supercomputer was built from spare parts when in fact what really happened was that a spare, already-built supercomputer was available, purchased and transported?
One implies the somewhat interesting challenge of building a supercomputer from a bunch of diverse components. That would be worth reading about.
The other is ... a non event.
Shoddy and untruthful clickbait headline, Reg. You're better than this. Stop it.
Probably better off using the PCBs from relatively recent (e.g. Note 4) smartphones with cracked screens; they are quite a bit faster than a Pi 3 for the same or less power usage and can be substantially improved with a little heatsinking.
I tested this with an S4 Mini and S3 Neo, both power up with the screen unplugged and USB available so there shouldn't be any problems here.
Irony: that "old" smartphone is actually 24× as powerful as those early Pentium 133 machines, and an added bonus is that its onboard WiFi/Bluetooth can often be repurposed for intra-node communications.
Lack of range is not an issue and transmit power can be turned right down so the chips don't overheat.
"Probably better off using the PCBs from relatively recent (e.g. Note 4) smartphones with cracked screens,"
True, except sourcing them is likely to be more difficult and possibly more expensive. Broken ones end up in the bin or refurbished, new ones from the manufacturer won't be as cheap as a Pi.
Not compared to say the pi zero or a dedicated microcontroller (depending on if you want a single remote sensor or a complete compute cluster :P ).
An old smartphone is handy for some tasks, but not really what you'd want for a dedicated device or workload. But for art? If there's a ready supply of them, it's fine to get creative and make art with anything.
There's a ton of those old Sandy Bridge dual-Xeon blades knocking around for mega cheap on eBay. Bought a bunch of them for a 3D rendering project - £350 each, and they're nearly as fast at rendering as the 8-core i7 PCs which cost nearly 2 grand each. A bit apples and oranges, as they can't be used as workstations (no GPU), but for CPU number crunching, which is what we bought them for, they are great.
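Taking those figures at face value, and guessing 90% of the i7's rendering speed as a stand-in for "nearly as fast" (that 0.9 is an assumption, not a benchmark), the value-for-money gap is easy to put a number on:

```python
# Rough rendering-throughput-per-pound comparison from the figures above.
# The 0.9 relative speed for the blade is an assumed stand-in for
# "nearly as fast"; the prices come from the comment.

blade_cost, blade_speed = 350.0, 0.9    # £, speed relative to the i7
i7_cost, i7_speed = 2000.0, 1.0

blade_value = blade_speed / blade_cost  # throughput per pound spent
i7_value = i7_speed / i7_cost
ratio = blade_value / i7_value
print(f"The blade delivers about {ratio:.1f}x the rendering per pound")
```

Roughly a fivefold advantage per pound for CPU-only batch work, before electricity costs are factored back in.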