Posts by Bill Gates
30 publicly visible posts • joined 6 Nov 2008
Windows 11 migration? Upgrade engine revs up, enterprises have no choice
Windows 11 users still living in the past face forced update, like it or not
I bet Windows 10 will have a last minute support extension.
We know 'Linux is a cancer' but could CentOS chaos spell opportunity for Microsoft?
AI flips the script on fingerprint lore – maybe they're not so unique after all
'Only 700 new IT jobs' were created in US last year
Videoconferencing fatigue is real, study finds
US slaps sanctions on accused fave go-to money launderer of Russia's rich
Datacenters face double dilemma of supply issues and a need for speed
Red Hat's Mexican standoff: Job cuts? Yes, but we still need someone to boot Linux
Oracle's revised Java licensing terms 2-5x more expensive for most orgs
ESA sees satellite-based air traffic monitoring on near horizon
Want to live dangerously? Try running Windows XP in 2023
Linux kernel Spectre V2 defense fingered for massively slowing down unlucky apps on Intel Hyper-Thread CPUs
Official: IBM to gobble Red Hat for $34bn – yes, the enterprise Linux biz
Congrats Red Hat, you are marrying massive debts
Congrats Red Hat. You are marrying a girl with $80B in debt who will soon be more than 3x leveraged and whose credit rating will be downgraded to BB or less after this; that means creditors will start charging substantially more interest on the $80B in debt you just married into.
Hope you like beans on toast, because you will never see steak again, and don't ever plan on taking any vacations for the rest of your life. Those days are over.
Likely this will just end up killing both companies in a massive debt spiral in pretty quick time, but have fun, you newlyweds.
Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion
BA IT systems failure: Uninterruptible Power Supply was interrupted
Vinyl, filofaxes – why not us too, pleads Nokia
Ham-fisted: Chap's radio app killed remotely after posting bad review
BEHOLD the magnificent lunar backside in our MOON VIDEO
Microsoft: It's TIME at LAST. Yes - .NET is going OPEN and X-PLATFORM
Maybe .NET on OpenShift?
I guess they got tired of being totally locked out, with hardly anyone using their stuff on any of the latest trends like OpenShift, CloudStack, Amazon EC2, Google Apps, etc.
Nearly every new cloud and PaaS platform is built on open source, because let's face it, what cloud provider wants to deal with software licensing when building platforms to offer their customers?!
Microsoft has been totally shut out of this ecosystem outside their own Azure stuff, and no one even mentions Microsoft these days in the world of the cloud.
BlackBerry slowly pulls out of power dive into toilet
Bitcoin or bust: MtGox files for bankruptcy protection
Ya know what the really interesting thing is, and what scares governments and bankers?
The fact that even after all of this, people are STILL WILLING to go with Bitcoin, because the huge positive of getting out from under their control is worth even more risk than this.
The ability not to be slaves, subservient to the masters of your wealth, is worth a HUGE amount of potential risk from not being 'regulated'.
SCRAP the TELLY TAX? Ancient BBC Time Lords mull Beeb's future
Study: Arctic warming at 'stunning' rate – highest temps in 44,000 years
Red Hat parachutes into crowded PaaS market
>great opportunity for PaaS companies to lock-in developers.
That's why you use only open source in your PaaS development. Then you can go to whichever provider you want with your code. Sure, the interface for deployments etc. might be a little different, but your app will run the same.
A Rails app on OpenShift is the same as a Rails app on Heroku is the same as a Rails app on Rackspace is the same as a Rails app on Stackato.
What you don't want to do is go down the vendor-specific languages or databases route; then you're actually locked in. That is what you get with Microsoft Azure, or Amazon, or Google. You are not locked in if you go with Red Hat, Heroku, Rackspace, Stackato, etc.
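To make the point concrete, here is a minimal sketch, mine rather than anything from these vendors, of how you keep an app portable: push every provider-specific detail into environment variables and stick to standard interfaces. The names DATABASE_URL and PORT are just the common convention assumed here, and it uses plain Python stdlib rather than Rails so it stays self-contained.

import os
from wsgiref.simple_server import make_server

# All provider-specific configuration comes from the environment, so the
# same code runs unchanged on Heroku, OpenShift, Stackato, and friends.
# DATABASE_URL and PORT are assumed names following common convention.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
PORT = int(os.environ.get("PORT", "8080"))

def application(environ, start_response):
    # Minimal WSGI app; any PaaS that can run Python can host this.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("connected to %s\n" % DATABASE_URL).encode("utf-8")]

if __name__ == "__main__":
    make_server("", PORT, application).serve_forever()

Move providers and only the environment values change; the code itself does not.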
Pure Storage attacks EMC heartland
Fukushima's toxic legacy: Ignorance and fear
They worked!
1) The plants withstood a 9.0 earthquake with apparent ease. These are 40-year-old reactors, and they held up well. Newer reactors, like those in the USA, are built on "rollers" to help them withstand an even more severe earthquake than this one.
2) The reactors were shut down in an orderly manner and the nuclear reaction was stopped.
3) A while later the tsunami occurred, and we see where the major flaw was: having the diesel generators susceptible to the tsunami, not a flaw in the reactor design.
4) The storage of the spent fuel pools at the top of the reactor was a big mistake, although the nuclear hysteria that makes spent fuel storage so difficult probably shares some of the blame.
5) The officials were slow to react and call in external assets (fire truck pumps) to help, which is probably a cultural thing.
If they had put some more thought into the location of their generators and their spent fuel storage units, we would have had a non-incident.
Most likely we will end up with a few bananas' worth of radiation, a shitload of media hysteria, and four reactors that actually held up very well through a 9.0 earthquake and a massive tsunami.
And people will still be afraid of the cleanest and lowest-cost energy source known to man.
EMC new storage federation box named V-Plex?
HP apes IBM's SVC
Invista is just as in-band as SVC
Don't let anyone fool you into thinking that Invista is 'out of band'.
Invista traffic all goes through the Linux box on the director blade... the only difference between Invista and SVC is that the inline Linux appliance in Invista is connected directly to the backplane (25 Gbit/s), instead of via physical switch ports like each SVC node (16 Gbit/s).
For example, if a server on blade 1 wants to access storage connected to blade 3, the traffic goes in on blade 1, across the backplane into blade 8 (Invista), back onto the backplane and out to the storage on blade 3, back in on the storage port on blade 3, back across the backplane to Invista on blade 8, back out over the backplane to blade 1, and out to the server.
Draw it out... it's the same number of hops; just replace the Fibre Channel cables in SVC with backplane connections on the Invista blade.
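If you do draw it out, a toy tally like this (my paraphrase of the paths above, not anything from EMC or IBM docs) shows the round trips are the same length:

# Each entry is one link crossed on the request/response round trip.
# "Hop" here just means one traversal, whether over a Fibre Channel
# cable (SVC) or over the director backplane (Invista).
SVC_PATH = [
    "server -> switch",
    "switch -> SVC node",
    "SVC node -> switch",
    "switch -> storage",
    "storage -> switch",
    "switch -> SVC node",
    "SVC node -> switch",
    "switch -> server",
]

INVISTA_PATH = [
    "server -> blade 1",
    "blade 1 -> backplane -> blade 8 (Invista)",
    "blade 8 -> backplane -> blade 3",
    "blade 3 -> storage",
    "storage -> blade 3",
    "blade 3 -> backplane -> blade 8 (Invista)",
    "blade 8 -> backplane -> blade 1",
    "blade 1 -> server",
]

# Same number of traversals either way; only the transport differs.
print(len(SVC_PATH), len(INVISTA_PATH))  # -> 8 8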
And with SVC I can scale to 8 nodes using cheap Intel servers; I can't scale out any more Invista nodes without adding more DIRECTORS and expensive proprietary Intel blades for them (OUCH $$$).