Isn't he after the Moon?
Virtualization admins asked to explain what they do for a living may finally have something cooler than server consolidation to tell their kids, thanks to space upstart Vector deciding it's a good idea to create software-defined satellites that lift a hypervisor into orbit. Vector is building small rockets capable of lifting …
No, not even a subplot.
It's a fail.
You're going to put a general-purpose sat in space?
Your sat is going to have a specific set of tools and functionality. You can change the software, sure, but not the mechanics. So a thermal imaging sensor may have value if your use case fits within its parameters.
Satellites are in space for a reason. A general-purpose sat with no purpose is just space junk.
The classic demonstration of this is a shot taken by a camera as it passes through the "South Atlantic Anomaly"
The image is quite brightly lit up despite the lens cover being closed, thanks to the storm of high-energy particles (usually protons, IIRC).
On the upside, the cells of modern memory are much smaller than the pixels of a camera, so they present a smaller target. On the downside, the amount of charge they have to gain or lose before being flipped to the opposite state is much smaller.
ECC DRAM is mandatory, and all the usual embedded tricks (watchdog timers, solid memory management so rogue processes don't scribble over good code and data) would be too.
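To illustrate the kind of correction ECC does, here's a toy Hamming(7,4) code in Python. This is only a sketch of the principle; real ECC DRAM uses wider SECDED codes over 64-bit words, but the idea is the same: the syndrome of the received word points straight at a single flipped bit.

```python
# Toy ECC: Hamming(7,4) stores 4 data bits in 7 bits so that any
# single-event upset (one flipped bit) can be located and corrected.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 5, 6, 7
    # Codeword layout, positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Fix at most one flipped bit and return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean; else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flip any one bit of an encoded word and `hamming74_correct` recovers the original data, which is exactly what lets a rad-hit DIMM carry on as if nothing happened.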
I have actually proposed neutron or cosmic-ray interaction with a user's SSD as a possible reason why half of her data were suddenly non-existent. More likely a bug in OSX or the SSD's firmware - but BEWARE THE SSD. On a deletion request it will zero the actual cells, rather than just marking them as free, in order to save future cycles on write ops.
I can't see an obvious business case for running standard compute workloads in space.
I can see benefits in running research programs on a standard hardware platform in orbit.
Cool for education if nothing else. No more one shot special builds and hitching a ride on a major launch. Just submit a software project for time sharing in orbit.
I do hope it gets off the ground if only for the additions to the "On Call" weekly war stories.
"You need a new disc drive installing? O.K. Where is the server? Can you repeat that, please?"
I wonder what the suitable CPUs/RAM/SSDs are for this?
Space-hardened CPUs/RAM/SSDs are usually a few generations behind ground-based ones, because of the cost of hardening them and because they don't necessarily have to do generalised computing tasks.
I am not sure that the multi-core Xeon beasts you would run a virtualisation task on down here would survive up in orbit, given their intricate circuit sizes.
I suppose they must have considered this, and maybe that's why they are proposing a fleet of these to allow for redundancy as nodes in the cluster fail.
You can deal with Single Event Upsets with relative ease in specific applications; hiding it all under/in the hypervisor could provide the COTS/SaaS answer.
In low Earth orbit (within the Earth's geomagnetic field) I think that interaction with debris from the blown-up Chinese satellite would be higher up the risk matrix.
"The statement also just about confirms it will be possible to upload new VMs to Vector's satellites. If that's the case there are all sorts of delicious possibilities to be had."
I'm sure that the malware authors will be able to think up lots of them once they find the cracks in the security.
Cooling isn't easy - you have no medium to vent the heat into. You have to radiate it.
This is not my day job, but my understanding is that it basically has to run like a fridge, pumping the heat into a thermal-medium fluid which becomes surprisingly hot. This is used to heat a radiator, which releases the heat into space. There's no convection or conduction of heat outside the spacecraft. Your cooling limit is defined by the area of radiator you can build and the temperature you can get it up to.
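That area/temperature limit falls straight out of the Stefan-Boltzmann law. A quick back-of-the-envelope in Python, with made-up numbers for the heat load and radiator temperature (nothing here comes from the article):

```python
# Radiator sizing sketch: power radiated to space scales with area and T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    """Radiator area needed to reject power_w, assuming a ~0 K deep-space sink."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 500 W of server heat from a radiator running at 350 K
# takes roughly two-thirds of a square metre of radiator:
area = radiator_area_m2(500, 350)
```

Note the T^4: getting the radiator hotter helps enormously, which is why the pumped loop works so hard to concentrate the heat before throwing it overboard.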
Dissipating heat in vacuum has its challenges... and you may also have sides which become quite hot when exposed to the Sun.
Indeed, spacecraft sometimes are put in a slow roll specifically to even out the thermal stresses -- most notably, Apollo missions were in a passive "barbecue roll" on the way to the moon, but the technique has been used many other places, too.
Space is a cold blackbody for radiation purposes, and it's possible to passively lose heat by radiating it to space -- that's why you can get frost on exposed ground on a clear night, even if the air temperature is above freezing -- but the actual rate of heat loss is a lot lower than you'd think. It's like being inside a thermos.
As an aside, this is also why depressurizing a whole capsule for EVA isn't a straightforward alternative to an airlock. On-board electronics often rely on air convection for cooling, and will overheat in a vacuum.
If your goal is cheap cooling, put your server farm in the Arctic, or underwater. Both present maintenance challenges, but nowhere near as bad as the ones you face in orbit. ;)
If the satellite is in geosynchronous orbit (expensive to reach...), it will have that nice ~72,000 km round trip, and the lag that comes with it, anyway. If it isn't, it will cross your antenna every n hours, and you'll need either to use other satellites as relays (and what ground equipment do you use to communicate with the right one?) or ground stations, which could slow down the transmission or increase lag anyway. And there's also the risk of atmospheric interference.
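That round-trip figure is easy to check from the geostationary altitude. A quick sketch (it ignores the slant path to a ground station off the sub-satellite point, so real latency is a bit worse):

```python
# Minimum round-trip latency to a geostationary satellite, straight overhead.
C_KM_S = 299_792.458   # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786    # geostationary altitude above the equator

def geo_rtt_ms(altitude_km=GEO_ALT_KM):
    """Up to the bird and back down: ~72,000 km of path."""
    return 2 * altitude_km / C_KM_S * 1000.0

rtt = geo_rtt_ms()  # about 240 ms before any processing or queuing delay
```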
Does anybody know what the average satellite bandwidth is, compared to fibre?
I haven't got the specific numbers, but consider that satellite TV streams many TV channels simultaneously, a lot of them in HD.
Bandwidth is only limited by your frequency allocation (and equipment) anyway.
"Bandwidth is only limited by your frequency allocation (and equipment) anyway."
That's true but it turns out that's a pretty major restriction. Satellites have to fit around everything else we use radio for and worse still their beams often cover several thousand square miles. So whatever bandwidth is available is being shared by potentially millions of ground stations and risks interfering with anything and everything else in use over that area.
It's great for one-way broadcasting like TV, as a single stream can reach millions of people, but rubbish for two-way usage, e.g. the internet. More modern satellites employ spot beaming to try and mitigate this problem, but each beam still covers several hundred square miles.
Another factor is TCP and latency. TV doesn't need handshaking, but the internet does, and high latency impacts usable throughput.
This is why satellite internet packages are expensive and have paltry allowances. And usually slow down alarmingly at peak times. A decent radio link is never going to beat a decent wired link in the bandwidth stakes and satellite is possibly the worst radio link there is.
(*)Made up numbers but you get the point.
This makes no sense to me. Which means there are two basic possibilities:
1) It's a great idea and I'm too thick to understand why.
2) It's based on something Donald Trump tweeted at 3 in the morning after he'd been smoking crystal meth.
If I were forced to place a bet, I'd go for the latter.
Hit by Micro Satellite Server that failed.
Went into an out-of-control orbit and is not detectable with current telescope tech.
They didn't know what hit them.
1000 tons of new space junk created.
Link if you do not know how to google.
[published 11 Apr 2017]
@handleoclast I was thinking the same:
Like others here I can't see how this is useful.
Reasons for a satellite:
1 Space experiments
2 Sensors Earth or Space observation
1 implies specific hardware for the experiment, and the RESULTS are sent back; no need to run any other VMs.
2 just sends back sensor readings; no need to get general-purpose computing power into orbit.
3 a web proxy, web cache or data centre in orbit to reduce speed-of-light latency? But I can't see it doing much, or supplying comms to many, in just 50 kg all-up.
here's why you want a vm on a satellite ya cucks:
Say you have an SDR, and you have firmware-updating software and drivers installed in a VM on the satellite. You do some experiment or run some test that needs certain characteristics from the SDR. Then you test a new configuration with your VM on the ground. When you get it the way you want, you just upload the image to the satellite and update the firmware-updating software, which updates the SDR, knowing that you tested the whole deal on the ground.
I just went through your entire post and substituted "vm" with "operating system." Seems equally plausible that way. No need for a vm.
If you wanted to argue that it's safer to have some sort of vm so that in case of fuck-ups you could rely upon the host continuing to work and you could re-init the guest then I'd see your point. But that, unless I misinterpreted it, was not what the article was on about. The article seemed to be more about some sort of magic cloud satelliting idea.
I still don't see the point, if I understood the original article correctly. I once watched somebody run freeswitch (better than asterisk) on a linux guest on top of a windows host. Before he started I pointed out that neither are real-time OSs and that unsynched interrupt handling on the two would likely screw things up in a "dalek gargling down a drainpipe" sort of way. He countered that it was a shit-hot, latest h/w windows box which had yet to be put to any other use and was perfectly adequate. I turned out to be right. He then tried it on an ageing linux box handling a couple of hundred web sites and their associated mail and it worked flawlessly.
I'm still somewhat skeptical about the idea. You've written nothing to persuade me to your side.