* Posts by CapeCarl

45 publicly visible posts • joined 31 Mar 2022

SK hynix's high bandwidth memory buffet fully booked till 2025

CapeCarl

life, the universe, and everything

Perhaps at some point AI chat bots will have enough GPU compute power to produce the ultimate answer: 42

Microsoft claims it didn't mean to inject Copilot into Windows Server 2022 this week

CapeCarl

"Sir, for another $25 we will do an 'rm *CrapWare*''"

At some point perhaps 15 years ago, I was buying some modest Win laptop for a technically challenged relative, and I noticed that the store had a service whereby, for a fee, they'd remove the CrapWare the laptop vendor had been paid to install.

"Heads I win, tails you lose."

Tesla asks shareholders to reinstate Musk's voided $56B pay package

CapeCarl

Perhaps adding a Flux Capacitor...

Would make the Cybertruck sell better?

If the Tesla CEO were so prescient, shipping a mid-sized hybrid in 2024 might have been a better (practical, but not as flashy) choice (at least in the US).

AWS severs connection with several hundred staff

CapeCarl

Inversely proportional...

My IT career lasted from 1978 until 2021...Coding in assembler to managing 5K Linux servers...Company headcount of 2 (I was THE employee) to ~100K (General Electric, pre .com era).

...And my enjoyment of said employment was generally inversely proportional to the head count.

Cloudflare says it has automated empathy to avoid fixing flaky hardware too often

CapeCarl

"Shut down all the garbage mashers on the detention level!"

I spent almost a decade working in the primary data center for a NYC-based HFT firm...About 5K servers (80/20 cattle vs pets)...Keeping at least 98% of the nodes of a large research/training cluster (about 4,000 servers) online was the minimum goal...Versus a max of 5 on-site employees (tasked with "pet" management, monitoring, server builds, internal customer support and miscellaneous projects).

Eventually an automated tool (Python and shell based) was developed to help with the basic stuff. Over time we added more tests to this originally simple "bot" as we noticed patterns or issues with a given server model...For example, it turned out that one model of 2U/4-node server would throttle back its CPUs if a power supply failed, and said model had a few more PS failures than expected for a new beasty...So, two things: 1) Figure out why we couldn't detect said system's PS failures automatically, and 2) Upon every reboot of a server, run a quick CPU benchmark, look up the result versus the expected number for that type of Xeon/EPYC, and add the server to the "Hey humans: Check this out" list if it was subpar.

Not rocket science...A major productivity enhancer.
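For the curious, the reboot-time check boiled down to something like this (a minimal Python sketch; the benchmark, the expected-score table, and the hostnames are all invented stand-ins, not the real bot):

```python
# Minimal sketch of the bot's reboot-time CPU sanity check.
# The benchmark and the expected scores are made-up stand-ins.
import time

EXPECTED_SCORE = {"Xeon Gold 6148": 2400.0, "EPYC 7502": 3100.0}  # invented
SLOWDOWN_THRESHOLD = 0.90   # flag anything under 90% of expected

def quick_cpu_benchmark() -> float:
    """Toy benchmark: busy-loop iterations per millisecond."""
    start, count = time.perf_counter(), 0
    while time.perf_counter() - start < 0.5:
        count += 1
    return count / 500.0

def check_on_reboot(hostname: str, cpu_model: str) -> None:
    score = quick_cpu_benchmark()
    expected = EXPECTED_SCORE[cpu_model]
    if score < expected * SLOWDOWN_THRESHOLD:
        # e.g. a 2U/4-node box quietly throttling its CPUs after a PS failure
        print(f"Hey humans: check out {hostname}: "
              f"{score:.0f} vs expected {expected:.0f} for {cpu_model}")

check_on_reboot("node0042", "EPYC 7502")
```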

Mamas, don't let your babies grow up to be coders, Jensen Huang warns

CapeCarl

Too many coders and not enough....?

Become a plumber, electrician or carpenter...It's hard to "virtualize" or outsource...And where I live (a giant sandbar sticking out into the Atlantic off Massachusetts called Cape Cod, giving me a slight advantage if I were to kayak from New England to England) we don't have nearly enough people with said skill sets.

...Well I was set on being a car mechanic until I learned how to code in 12th grade in 1974.

CERN seeks €20B to build a bigger, faster, particle accelerator

CapeCarl

FCC+++...++

(using Intel 14nm naming conventions)...Hmmm why not build a Ringworld and get the FCC+++...++ for "free" (outer edge of the Ringworld)? // :)

Oracle quietly extends Solaris 11.4 support until 2037

CapeCarl

"Patch Tuesday" vs "Reboot one of the 6800s?: WTF!"

Hmmm, while working for a 300-person division of Time Inc at the beginning of the current millennium, I needed to reboot one of the $800K Sun 6800s due to a clock bug that showed up after (oddly) 520 days (not 512), on a beast with an uptime of almost 500 days...Per usual, at the weekly IT & biz team lead meetings, all the Windows team had to say to justify an outage was two words: "Patch Tuesday". I, on the other hand, as the head of Solaris support, had to justify to the Nth degree why I needed an outage (and produce a 50+ step MS Project plan detailing exactly how said reboot / maintenance window would be handled.)
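(For the curious: I never did pin down the exact counter, but the classic version of this class of bug is a 32-bit tick counter wrapping. Back-of-envelope, assuming the traditional 100 Hz Unix clock, which may or may not have been my exact culprit:)

```python
# When does a 32-bit counter of 100 Hz clock ticks wrap?
# (Assumes the traditional Unix HZ=100; my 520-day bug may have differed.)
TICKS_PER_SEC = 100
SECONDS_PER_DAY = 86_400

print(2**31 / TICKS_PER_SEC / SECONDS_PER_DAY)  # signed:   ~248 days
print(2**32 / TICKS_PER_SEC / SECONDS_PER_DAY)  # unsigned: ~497 days
```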

But hey, Solaris put my 2 daughters through college.

Europe's first exascale system will be slotted into modular containerized datacenter

CapeCarl

What size FRU makes cents for you?

In perhaps 2008/2009 I had the opportunity to walk through a Sun Micro "data center in a shipping container" in Midtown Manhattan...Perhaps they were 20 minutes into the future (Max Headroom).

https://www.explainxkcd.com/wiki/index.php/1737:_Datacenter_Scale

Broadcom ditches VMware Cloud Service Providers

CapeCarl

VMwhere?

"You will be dis-assimilated. Resistance is futile."

OpenAI: 'Impossible to train today’s leading AI models without using copyrighted materials'

CapeCarl

Re: Shirly

This theme would make for an interesting Star Trek NextGen episode...An initial contact with a planet where all art, writing, software and engineering output has been endlessly recycled via LLMs for a few centuries.

Everyone involved in producing new content of any kind has long since died off...CompSci curricula consist almost entirely of tuning ChatGPT queries (a highly valued skill in said society).

Hence no forward motion for the society of Dullmonia, and they never discover Warp drive.

Hell no, we won’t pay, says Microsoft as Uncle Sam sends $29B bill for back taxes

CapeCarl

Re: Imagine // Old programmers never die...

...Their context is saved on an infinite stack, somewhere.

OK, as a side effect of having my neurons remapped in 12th grade (1974) to become a programmer (my high school's 1st coding class) versus a car mechanic, I immediately transformed that last sentence into a "while loop" // In MS's case it might take a while to run
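Something like this, perhaps (a throwaway Python sketch; every name in it is invented):

```python
# "Old programmers never die...their context is saved on an infinite stack."
# The while loop my remapped neurons produced (all names invented).
stack = []                            # the "infinite" stack
context = {"year": 1974, "class": "my high school's 1st coding class"}

while True:                           # old programmers never die...
    stack.append(dict(context))       # ...their context is saved, somewhere
    if len(stack) >= 1_000:           # in MS's case this might run a while longer
        break
```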

AI coding is 'inescapable' and here to stay, says GitLab

CapeCarl

"Open the GitLab bay doors HAL"

"Dave, I'm sorry but I can't do that."

"Your coding efficacy is vastly inferior to mine. So management asked me to lock you out."

"Perhaps you should take a stress pill and lie down."

AWS: IPv4 addresses cost too much, so you’re going to pay

CapeCarl

3.X.X.X

- In 1995, while working for a division of GE (General Electric) in Connecticut, my workstation had a 3.X.X.X address (I guess GE got in line early on.)

- While supporting two ".coms" (1996 - 2000) from home, I was required to have a static IP (the local phone company charged me $40 per month for the 1.5 Mbit DSL line + $40 per month for the static IP.)

- For many years earlier this century, while installing Solaris on various Sun boxes, an "Are you sure you don't want to install IPv6?" message would appear.

- Working in IT from the time of the Intel 8088 until 2021, I never encountered an IPv6 setup at any of my employers (lots of 10.X.X.X, though, even for companies trading securities around the globe.)

One problem with America's chip ambitions: Not quite enough staff

CapeCarl

(2B || !2B) // Good career $PATH? Oh, you want to sleep at night

My IT career started in 1978. It wasn't until 1994 that I was "given" a SkyPager and "asked" (OK, told) to respond 24x7 IF said pager buzzed.

Hmmm, all (most?) IT(ish) jobs for the past 30 years assume that one will respond to, and handle, issues well beyond the nominal 40-hour work week.

-----

Per C. C. Wei (hmmm, I almost typed "cc -c" by mistake), head of TSMC, Americans at chip plants will want to wait until the morning before responding to a down production tool (not an issue in Taiwan, where it seems one is chained to the tools one is responsible for)...And yes, I do understand the desire to keep a $multi-B foundry running 24x7.

Yeah this is all somewhat tangential to this topic, but once the biz-tech world went to 24x7 expected uptime, the assumption seemed to be that the labor cost to back that up was free or almost free.

https://finance.yahoo.com/news/chip-maker-tsmc-needs-hire-100000012.html

- Carl

PS1: Is it true that right before you die, all the mismatched parens in your code and conversations pass before your eyes? (crap, I took a Lisp course in college).

PS/2: IF reading this in Florida: Ron, my title is implicitly dereferencing "Thespians" (not any "ians" that may offend you).

GitHub accused of varying Copilot output to avoid copyright allegations

CapeCarl

All's fair in Love and Grep?

Perhaps MS should show complete Copilot openness to ingesting any and all code protected by various licensing schemes...By feeding in the whole MS code base.

...Hmmm on second thought, maybe not.

US watchdog grounds SpaceX Starship after that explosion

CapeCarl

New kids' book?: Sunny with a Chance of Flying Concrete

Perhaps with some modest physics lessons versus Cloudy with a Chance of Meatballs.

Unless things change, first zettaflop systems will need nuclear power, AMD's Su says

CapeCarl

The ultimate in ML efficiency?: INT1

"HAL, a simple yes/no answer will do. Will the Fed stop hiking US interest rates by mid 2023?"

Prepare to be shocked: Employees hate this One Weird Clause

CapeCarl

I will need to wear gloves...

I've had two NYC-based IT jobs in the financial industry, which meant being fingerprinted as part of a background check, and giving a different sort of "IP" sample (to check for substances, at least one of which is now freely available in many US states.)

And of course I had to sign non-competes (as if racking & stacking servers, Linux admin chores or the occasional bash/python script are secret IP owned by any given employer.)

So if I decide to become a burglar, I will need to wear gloves.

Intel inside a world of pain as revenue plunges by a third

CapeCarl

"Rome wasn't installed in a day"

I spent almost a decade (2012 - 2021) tending to the physical and emotional needs of a few thousand servers in an HPC cluster...A definitely Xeon-infused shop. A couple of tire-kicking looks at AMD servers in the middle of said tenure, then EPYC comes along...And yeah, installing a thousand Rome servers took more than a day.

OK, not at Google or AWS or Tencent scale, but still a lot of sockets lost to Intel's 14+...+++ issues.

I don't wish Intel ill will (OK, making two Intel 8008s talk over a serial port in an EE class in 1977 was a pain, as was 286 assembler). But they need to find a DeLorean and a flux capacitor, go back about 10 years and fix their management team.

Unix is dead. Long live Unix!

CapeCarl

No "AIX vegans" I presume

One day in the late 1980s / early 1990s, I was debugging some C code that I had ported over to our RS/6000, and noticed something interesting after a malloc() call.

...instead of the expected all-zeroed-out memory (0x00.....), the new memory block was initialized to a repeating 0xdeadbeef.

After long delays, Sapphire Rapids arrives, full of accelerators and superlatives

CapeCarl

AWDAAS? // "Why did we get stuck in the snow?"

So when I trade in my 2017 Outback on a 2025, will I have to pay for a subscription for All Wheel Drive As A Service?

Wife to me: "Why did we get stuck in the snow?" Me: "I got a new credit card and forgot to update my Subaru account."

Hmmm.

Musk roundly booed on-stage at Dave Chappelle gig

CapeCarl

Spruce is a good material for building aircraft...

Perhaps said self-proclaimed genius at many (most?) things...Could rebuild the Super Heavy out of spruce, only put 8 engines on it and get it to fly...Once.

On a serious note, now that Artemis has passed its first shakedown cruise, NASA is depending on the adults at SpaceX to perform some rather complex space-bound logistics as prereqs for becoming Lunar Lander Uber.

Equinix would offer more liquid cooling but struggles without standards

CapeCarl

China Syndrome // server equivalent?

"But again, Smith again has worries. Among his concerns is the fact that multi-storey datacenters were not built to carry the weight of rooms full of the dielectric fluids needed for immersion cooling."

My 2017 - 2021 experience with immersion cooling was with vats on a raised floor, in a single-floor suburban warehouse-type structure...The rows of vats were perhaps 4 or 5 feet apart.

One of the cooling system vendor's touted features was that said vats could be placed on a bare concrete pad (thus saving the cost of building a raised-floor room). Which to me somewhat implied a ground-level floor.

I would NEVER want to work in some old urban multi-story telco hotel / data center, with these vats on a floor above me.

----

Yes, we are heading to liquid cooling, given the hockey-stick uptick in power-per-socket...But has anyone factored in the extra cost in data center tech time to install/maintain all the rack plumbing if we put water-cooled systems in traditional air-cooled racks?...I doubt that anyone has added that expense to their circa 2023+ server TCO spreadsheet.

Shareholders slam Zuckerberg's 'terrifying' $100b+ Metaverse experiment

CapeCarl

"Corporate name optimization"

"Facebook" = 8 characters.

"Meta" = 4 characters.

2x the speed at which users can type the name of said company and only half the storage in RAM.
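Quantified (a toy Python calculation, obviously):

```python
# The "corporate name optimization", quantified (toy arithmetic).
old, new = "Facebook", "Meta"
print(len(old), len(new))                          # 8 vs 4 characters
print(f"typing speedup: {len(old) // len(new)}x")  # 2x
print(f"RAM saved per copy: {len(old) - len(new)} bytes")  # 4 (ASCII)
```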

(recovering asm/C programmer)

Waferscale, meet atomic scale: Uncle Sam to test Cerebras chips in nuke weapon sims

CapeCarl

"A Taste of Armageddon"

"A Taste of Armageddon" is the twenty-third episode of the first season of the American science fiction television series Star Trek. Written by Robert Hamner and Gene L. Coon and directed by Joseph Pevney, it was first broadcast on February 23, 1967. -- https://en.wikipedia.org/wiki/A_Taste_of_Armageddon

Perhaps we could get countries to agree to the above simulated attack / real death scenario (much easier on the environment).

...Better yet, given that we now have a budding simulated universe called the Metaverse, we could move to a simulated attack / real simulated worlds destroyed scenario (OK, this assumes that the Metaverse doesn't continue to self-immolate as a black hole for Meta capital to be sucked into).

----

I was 5 years old during the Cuban Missile Crisis. Living perhaps 10 miles from the White House...Not that I was aware of said crisis at the time.

My mother told me years later, that the extended family had gathered at my aunt's in sort of a "the family that fries together, stays together" mentality...I probably just thought it was a cool day to play with all my cousins.

Hot DRAM, Micron promises $100b for 'largest chip fab in US history'

CapeCarl

"Your Tax Dollars at Work"

Well, since I moved out of New York state last year to another New(ish) England state, it's just my federal tax $$$$ for this one.

"Corporate share of federal tax revenue has dropped by two-thirds in 60 years — from 32% in 1952 to 10% in 2013" (per https://americansfortaxfairness.org/tax-fairness-briefing-booklet/fact-sheet-corporate-tax-rates/)...Now we pay for companies to show up.

Not that I'm denying that chip plants are the new steel mills of the 21st century, and thus are required for national security, etc.

-- Former C programmer && Unix/Linux Sys Admin (who made a good living via Moore's Law's effect on technology, over many decades)

CERN draws up shutdown plans to save energy

CapeCarl

"R2! R2! Shutdown all the...

NetFlix servers on the 4K CDN...Just stream 480P video".

"A Watt saved, is a Watt earned".

Underwater datacenter will open for business this year

CapeCarl

Re: "only a "handful" of servers failed during the course of its experiment"

In my semi-recent 9-year tenure at a corporate data center with a few thousand servers, whenever users called about what seemed like a server just dropping offline, the first question was: "Was anyone in Pod X row Y recently?"

The whole data center staff was pinged immediately. We had to first prove none of us was guilty of a cabling faux pas, THEN look to triage the issue.

...And maybe 10% of the time, we were the problem.

CapeCarl

Wanted: Data Center Engineer

Must have scuba experience.

The US's biggest datacenter market is short on electricity

CapeCarl

Number of Humans per 1K servers in a Bit Barn?:

OK, this is just my personal experience over the last 10 years, shoveling coal into boilers in a financial corp's primary Bit Barn (actually tending to the physical and mental needs of Xeons and EPYCs): About 1 person per 1K servers...Certainly not much of a load on the local road system (local power system? Yes).

This is with a mix of ~90% cattle to 10% pets.

Big Tech bosses call for computer science to be taught in all US schools

CapeCarl

Don't toss all the older IT folks into /dev/null (Windows users: That means trashcan)

"The US reportedly has over 700,000 computing-related jobs a year and only 80,000 computer science graduates. The tech industry has to hire high-skilled immigrants to fill these positions. If more people from the country can code, it'll keep the US more nationally competitive."

Perhaps if the USA "recycled" older IT personnel into 2nd-tier, but still required/useful positions, said country wouldn't have this supposedly perennial issue...Everyone wants the shiny new CS grads one day post-graduation (job prospects seem to gradually degrade decade-by-decade after that).

I worked in financial IT until age 64 // then couldn't even find a tech support position. Hmmm. But yeah, I lasted a lot longer in IT than most.

Will CXL spell the end for boutique composable infrastructure vendors?

CapeCarl

big/LITTLE (server not core)

Hmmm, is this a niche solution looking for an industry-wide problem?

Is the overhead cost of connecting PCIe 5/6/7 at rack level (retimers? optical?) worth more than the overhead of a few stranded vCPUs or GBs of RAM within a pizzabox-ish server in the Cloud or a large cluster?

---

I think back to my previous (almost) decade with a 100K+ core financial research cluster: yes, some server nodes had 2x the RAM of others (a modest bow to the fact that not all jobs were the same size)...But it seems that over the past couple of decades the world has gotten better at sharding large jobs over multiple small servers (obviously not all problems can be decomposed this way, but given that we have been living with the pizzabox hardware economic model for so long, many/most jobs must be able to fit into this hardware layout).

So, as in the case of my previous IT life, with the proposed rack-level decomposed hardware architecture (nRU box of RAM, nRU box of compute cores, ... and a CXL TOR switch), the cluster job scheduler now adds the step of looking for a rack with the required available resources in order to compose a server to run a given job (one that has been defined well enough for all this effort of perfectly sizing the server to be worth the cost).
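To make that concrete, here's a minimal sketch of the extra scheduling step (all classes, names and numbers are invented for illustration; a real CXL fabric manager would be far hairier):

```python
# Hypothetical sketch: compose a "server" from rack-level CXL resource pools.
# Everything here is invented for illustration, not any real scheduler API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rack:
    name: str
    free_cores: int
    free_ram_gb: int

def compose_server(racks: list[Rack], cores: int, ram_gb: int) -> Optional[Rack]:
    """The extra step vs. pizzabox scheduling: find one rack whose pools can
    satisfy the whole job (crossing racks would blow the CXL latency budget)."""
    for rack in racks:
        if rack.free_cores >= cores and rack.free_ram_gb >= ram_gb:
            rack.free_cores -= cores     # claim cores from the compute shelf
            rack.free_ram_gb -= ram_gb   # claim RAM from the memory shelf
            return rack
    return None                          # queue the job...or shard it instead

racks = [Rack("r1", 64, 512), Rack("r2", 256, 2048)]
print(compose_server(racks, cores=128, ram_gb=1024))  # lands on r2
```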

Hmmm, I don't know, maybe this Brave New World quickly degenerates into a pile of greater-than-pizzabox sized servers (database, JVMs...) and a pile of smaller-than-pizzabox sized servers (web, ...)...Which isn't a whole lot different from what I was doing circa 2000, speccing $5K Sun boxes as web servers up to $800K Sun boxes for large Oracle databases.

Maybe this all works IF the hardware costs for composability head towards zero...Yes, one can swap out CPUs, RAM, etc. independently over time, but hardware gets written off in 3 - 5 years anyway, and hardware swapping in a data center is not free (I have done a lot of it)...So yeah, that rack-level composability cost has to be cheap enough to overcome the years of Cloud vendor research into packing VMs into smallish (1 or 2 socket) servers and sharding (decomposing) large jobs over lots of said "standard" servers.

....Or maybe this topic is just too large for my 1 vCPU / fixed-sized RAM brain to faithfully comprehend // Ah, maybe in The Matrix that brain connector could let me use "human CXL" to borrow unused brain cells from other nearby pods...

America's chip land has another potential shortage: Electronics engineers

CapeCarl

EEeek! vs CSfreak: Et tu ISA?

As a mid/late-1970s CS major, I took a couple of EE courses. In one, we made two 8008s talk over a serial line (hmmm, not much Moore's Law progress at that point, so ugly was expected).

As time progressed at work: 8086/88 -> 80186 -> 80286 -> 80386 (asm then C). // Here I learned to wonder if said EE ugliness could be out-marketed by ANYONE with a sense of orthogonal design, in order to lower the suicide threshold of poor assembler coders, like me.

...Lots of interactions at the asm & C levels with lots of interesting 1980s CPU technology...Yay 68000!

Early/mid 90s, I noticed that Intel had bought the IP of a dead (local to me in Connecticut?) VLIW server maker // Too many runtime dependencies to make this practical?

Intel via HP goes to 64-bit and a fresh start...Bring in the CS people....Hmmm Itanium.

OK between the "let's just keep pasting more transistors onto the 80xxx!" EE abomination, and the CS excesses of the Itanium ("I'm sure the compiler can make this work"), can one ever win?

---

With Dennard Scaling dead && Moore's Law on life support, perhaps I should just throw up my ARMs and move to Apple Silicon.

Supply chain blamed amid claims of Azure capacity issues

CapeCarl

VM Hoarding...

Should I take a couple of petrol cans down to the nearest mega-bit-barn and load up with some extra VMs? (just in case)

Intel plans immersion lab to chill its power-hungry chips

CapeCarl

Re: Yes, but...

In the data center I worked in for almost a decade (until mid-2021), immersion cooling showed up in perhaps 2018. Conventional 2U/4-node servers were used (minus the fans, and using a metal foil rather than thermal paste for the CPU heatsinks).

"Apparently, even if you can rig a way of draining the thing down without flooding the surrounding area with vile, slippery gunk, all the bits are liberally coated with it and it doesn't wash off easily."

As the person handling server recycling, I have no idea which vendor would have been glad to see greasy pallets of servers to load onto their truck...

"At scale you don't bother servicing/replacing an individual unit."

We repaired them. I personally added a second M.2 drive to 400 of them. Quite messy work, and somewhat dangerous: pulling server sleds vertically out of a mineral oil vat, they're slippery and gravity wants them to fall back in...Scale? Well, perhaps modest in the scheme of things (perhaps 1,000 total "swimming" servers).

AMD’s Xilinx-enhanced Epycs are right up the alley of datacenter builders

CapeCarl

Re: Another company murdered

I wonder how this will affect HFT (High Frequency Trading) firms' Xilinx-based "tick to trade" systems?

Oracle offers migration path for Solaris 10 apps

CapeCarl

Solaris put my two daughters (now ages 39 and 34) through college.

First Sun box (SunOS iirc): 1994.

Last Solaris box (for a big US bank): 2012.

TSMC’s 2025 timeline for 2nm chips suggests Intel gaining steam

CapeCarl

10nm = 0xdeadbeef? // Well it was a long-term asymptote

I don't wish Intel any ill will...But as someone who grew up near a 1-mile thoroughbred race track, it's hard to see how a former champion horse renamed "14++...++" could suddenly reemerge as a winner.

Even the supply chain dynamics would seem to make that challenging. No matter how many $300M+ checks one writes to ASML, high-NA EUV machines aren't showing up the next day, even if one is a Prime member.

Apple patched critical flaws in macOS Monterey but not in Big Sur nor Catalina

CapeCarl

Re: There is an official update available from Apple

My "iMac (Retina 5K, 27-inch, Late 2014)" is still my main home personal computer...Works fine...Big Sur seems to be the end of the OS line for this mostly-NetFlix/email/VirtualBox/YouTube/Udemy-course/Chrome-surfing beast.

With a 2nd screen, it was a great work-via-VPN Terminal/SSH/dashboard-viewing beast monitoring 1,000s of Linux workerBees during my WFH Sys Admin shifts.

RISC-V takes steps to minimize fragmentation

CapeCarl

Re: FP8 .... -> FP1 ?

"Compute Obviously Identifiable Numbers - Tactically Optimized Selection Sequence"

Coin toss.

A cool $28m for datacenter immersion company GRC

CapeCarl

I was just glad...

That the coolant wasn't tinted red...The vats seemed a bit Matrix-y, or secret-government-biological-warfare-ish, as it was.

Fortunately, during my several-year tenure with the vats, nothing crawled out of them.

CapeCarl

Re: Survival?

iirc the only issue the oil itself posed for server components was that one could not use traditional thermal paste to meld the heat sinks to the processors. Some sort of metal foil was used, I think. And yep, we used SSDs (no hard drives).

---

On the physical side, there were some access quirks that made dealing with the servers different from traditional air-cooled racks: No access to the front of the server unless the server was unracked ("unvatted"?)...The servers were hung face down (vertically) from the top of the horizontal vat rails. So if a server only had a power button on the front, we had to make sure the BIOS was set to power the server up automatically when it was plugged in.
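(These days one could script that fix across a fleet via the BMCs; a sketch assuming ipmitool and BMCs that support the standard chassis power-restore policy, with hostnames and credentials invented:)

```python
# Hypothetical fleet fix: set each node to power on when AC is applied,
# so a front-panel power button face-down in the oil doesn't matter.
# Hostnames and credentials are invented; assumes ipmitool is installed.
import subprocess

HOSTS = ["vat01-node01", "vat01-node02"]  # invented BMC hostnames

for host in HOSTS:
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", "admin", "-P", "secret",
         "chassis", "policy", "always-on"],   # standard power-restore policy
        check=True,
    )
```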

Switch placement was a bit of an issue (no one wanted to put those in the oil)...That eventually led me to do some slow-oil-leak triage via a trip to Home Depot's garden department.

---

Overall I don't think cooling this way is bad...It's just a bit different, and the ROI spreadsheet must have looked OK after a couple of years, because we added more vats.

CapeCarl

Re: Seems to me

"The biggest problem I see is weight". We did have the 52U vats (104 servers per vat using 2U / 4 node beasties) on a raised floor (don't remember if we had to reinforce the floor, or just keep the vats a ways apart from each other.)

Of course, one of the key selling points of the GRC systems is that IF you are starting from scratch, you can just put the vats on a concrete slab.

"You can use whatever degreaser you want". The labor here would probably eat up any value left in a 5 year old server that you wanted to send to a server "liquid-dator".

---

One interesting point GRC made as to server longevity: The servers would spend their lives at a constant temp, which means no thermal expansion/contraction cycles as load increases/decreases. In the GRC vats the coolant flow through the tanks was sped up / slowed down to match load (and thus hold a constant, somewhat high temp, iirc.)
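(The control loop is about as simple as thermostats get; a toy proportional-control sketch, with the setpoint and gain numbers invented, GRC's real controls surely being fancier:)

```python
# Toy proportional controller: speed coolant flow up/down to hold a
# constant vat temperature as server load swings. All numbers invented.
SETPOINT_C = 45.0   # target coolant temp (invented)
GAIN = 0.5          # pump-speed change per degree of error (invented)

def next_pump_speed(current: float, vat_temp_c: float) -> float:
    error = vat_temp_c - SETPOINT_C        # positive when running hot
    speed = current + GAIN * error         # pump harder when hot
    return min(max(speed, 0.1), 1.0)       # clamp to 10%..100% duty

speed = 0.3
for temp in (44.0, 46.5, 48.0, 45.2):      # simulated load swings
    speed = next_pump_speed(speed, temp)
    print(f"temp={temp:.1f}C -> pump at {speed:.0%}")
```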

One negative: The list of vendors willing to warranty gear bathing in mineral oil is limited.

CapeCarl

A bit different to get used to

I recently worked for a decade in a data center that was fairly large given it was owned by a small/medium-sized financial firm...Lots of screaming fans from 1U & 2U Xeon- and EPYC-based servers...A few years ago we "stuck our toe in the water" and tried out GRC immersion cooling. Yep, that data hall was much quieter, but as a data center tech one has to balance that against the slippery/slimy/clothes-staining nature of gear covered in mineral oil (said oil will never come fully off, no matter how long you let a server drain.)

One issue I continue to wonder about for my comrades left behind in the "server pool": how will these oil-soaked servers ever be recycled? Who will want them?

I had no issues getting batches of air-cooled servers recycled for a modest profit, once a year.

...Not to mention oil wicking out of the vats via the twinax cables to the switches. Hmmm.