"I hope they are going to pay people for all that crap..."
Think of it as "crowd sourced electricity."
That said when it comes to *reliability* "biogas" beats the pants off PV and Wind, even tidal and wave (I'd say only geothermal matches it but that technology is *much* less mature).
And you could even pass the CO2 into a greenhouse as a bonus.
Power, cooler servers, lower emissions and fresh vegetables too.
What's not to like?
I've heard of this before and I suspect *cost* may have had something to do with it as well. At 2 triodes per bit vs 1 dekatron per decade I suspect the benefits soon mount up.
Note when people talked about "decimal" or binary machines in this era they were usually talking about BCD.
This is a *true* decimal computer with counting by 10s built (literally) into the hardware. And note that qualification as the oldest *working* computer.
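A back-of-envelope tube count makes the triode/dekatron trade-off concrete. The 2-triodes-per-bit and 1-dekatron-per-decade figures are from the post above; the 8-digit word length is purely an illustrative assumption, not the actual machine's:

```python
import math

TRIODES_PER_BIT = 2      # one flip-flop per bit
DEKATRONS_PER_DIGIT = 1  # one decade counter per decimal digit

def triodes_for_digits(digits):
    """Tubes needed to hold the same range of values in binary flip-flops."""
    bits = math.ceil(digits * math.log2(10))  # ~3.32 bits per decimal digit
    return bits * TRIODES_PER_BIT

# Illustrative 8-digit accumulator: 54 triodes vs 8 dekatrons
print(triodes_for_digits(8), "triodes vs", 8 * DEKATRONS_PER_DIGIT, "dekatrons")
```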
BTW what's with the picture? Was it taken with a wide-angle lens or is the framework *really* slightly curved, Cray 1 style?
Thumbs up on the restoration. I think this could be the *only* computer to survive a nearby nuclear explosion *without* being inside a bunker.
That sentence should end with the words "if we do *nothing* about it"
Likewise N2O and CH4.
To paraphrase "If we keep on f**king up the atmosphere and do nothing about it the atmosphere (and the human race) will be f**ked"
Not exactly revelatory is it?
The bit about an IT company having lousy *internal* IT support *definitely* rings bells with me.
Being ready to record *any* conversations with "management" may sound paranoid *until* you discover that the person who writes the *minutes* tells the story.
And it may just be complete bul***it.
In retrospect you appear to be the one that got away.
It's the b***ard child of the union of Hewlett Packard with DEC and the infection that was Compaq.
An acquisitive company whose most recent "innovation" is printer cartridges with a 25% failure rate that prevent re-fills, and frankly s**t build quality.
The real question will be for KPMG. Were the books badly cooked and KPMG too incompetent to spot them, or was the fraud *carefully* set up, quite deliberate, and too well executed to catch?
They sound like a matching pair of a***holes.
Sorry folks, but it's on their desktops and it's available *now*.
No courses
No training.
No oversight.
Today's quick & dirty hack *becomes* tomorrow's business-*critical* infrastructure.
I'm thinking reverse-engineering tools to suck the business logic out of a bunch of linked spreadsheets and make all of it *visible* so it can be audited, discussed and walked through.
But I won't hold my breath.
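A sketch of what such a tool's first pass might look like: pull every cell's formula and emit the dependency graph, so the logic is at least *visible*. The cells and formulas here are made up; a real tool would read them out of the workbook file:

```python
import re

# Toy stand-in for formulas extracted from a workbook (cell -> formula string)
formulas = {
    "B1": "=A1*1.2",
    "C1": "=B1+A2",
    "D1": "=SUM(B1,C1)",
}

CELL = re.compile(r"\b([A-Z]+[0-9]+)\b")  # matches cell refs like A1, B12

def dependencies(formulas):
    """Map each cell to the cells its formula reads -- the hidden logic made explicit."""
    return {cell: sorted(set(CELL.findall(f)) - {cell})
            for cell, f in formulas.items()}

for cell, deps in sorted(dependencies(formulas).items()):
    print(f"{cell} <- {', '.join(deps)}")
```

From a graph like this you can at least walk through what feeds what, and spot cells nothing audits.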
"A programmer from one of these hospitals was a business partner of mine and the other directors decided he would write the accounting system instead of forking out for an business program such as Quickbooks or Sage. "
Just staggering.
The 2nd sentence in the book Peopleware (1987) reads "There are probably a dozen or more accounts receivable projects underway as you read these words. And somewhere today, one of them is failing."
25 years later the same old s**t in the same old bucket.
If your business is so small it does not *need* an accounts package, why write one?
How many businesses (*including* govt depts) are *so* special that a bespoke solution is *necessary*?
"I have seen some amazingly stiff (in the technical sense) models using Monte Carlo integration (not in itself a bad thing) but which produced incredibly fragile results (where the outputs where extraordinarily, or even chaotically, sensitive to the inputs). And yes, they were used for trading decisions."
I had not seen that term outside of modelling things like combustion and M1+ airflow.
Something tells me Excel should be *nowhere* near such an application.
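For anyone who hasn't seen chaotic input sensitivity first-hand, the logistic map (nothing to do with the trading models themselves, just the textbook illustration) shows how a nudge in the sixth decimal place of an input can leave nothing of the original answer:

```python
def logistic_gap(r, x0, eps, steps):
    """Largest gap between two logistic-map trajectories starting eps apart."""
    x, y = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)   # the map: x -> r*x*(1-x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

# r = 3.9 is in the chaotic regime; a 1e-6 input change ends up dominating the output
print(logistic_gap(3.9, 0.5, 1e-6, 100))
```

If a model behaves like this, any single Monte Carlo run of it is close to meaningless.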
Separated inputs from actions.
Can be converted to standalone programs
Turing complete (if actions change input variables and that triggers re-evaluation you've got flow control)
With decent support for variable names and import/export of their values it can be quite well documented by default.
Just a thought.
They are indeed one of a number of odd "Research Associations" set up to do cross-industry non-competitive research for their members *by* their members.
RAPRA Technology (formerly the Rubber & Plastics Research Association) is a similar body.
It's a notion that IIRC is quite popular in Japan as well.
Note what's impressive is that it's detecting the presence or absence of *individual* photons (or very small numbers of them). Like hearing someone whisper to you from the opposite side of a football stadium in the middle of a match.
And it could allow secure comms for the rest of us.
Which is pretty impressive.
"Probably most stinks-and-bangs fans have already seen John Clark's insider history of rocket propellant development - if not here's a long and brilliant lunchtime read:"
The frontispiece test stand pictures (when it goes right, when it goes wrong) kind of sum it all up.
Burn-through-concrete oxidizers, glass etching exhausts, fuels that grit-blast a gas turbine as they burn.
The days when the answer to "What should we do with the H&S officer" was "Tie him up and if he complains, shoot him."
What about Isp figures?
Badly. Its actual competitor would be mechanical monopropellants, i.e. solid rockets, so as long as it's around 250-260 it should be in with a shot. It's about "insensitive" munitions and *simplicity* of systems. If it's simple, safe to handle, dense and has *adequate* performance it's got to be a contender in the weapons market.
How does it compare (yes, I know it's early days but still...) to other fuels?
Likely *much* poorer than all the liquid/liquid systems in common use. Look at what the existing monos manage. That's not the point.
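To put numbers on "adequate", the rocket equation shows what the Isp gap costs for a fixed mass ratio. The Isp figures and the mass ratio below are purely illustrative, not data on this propellant:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, mass_ratio):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0/m1)."""
    return isp_s * G0 * math.log(mass_ratio)

# Same mass ratio of 5, monopropellant-class vs pumped bipropellant-class Isp
for isp in (255, 320):
    print(f"Isp {isp} s -> {delta_v(isp, 5.0):.0f} m/s")
```

Roughly 4.0 km/s vs 5.1 km/s here: a real gap, but for a munition that gap can be bought back with simplicity and handling safety.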
But I'm wondering how far away this is from the "gel" propellants Rocketdyne have pushed for *decades* apparently with *no* success?
That said the launch was impressive and being a monopropellant makes it more challenging.
*If* this uses a catalyst it might be a way to get the stability of solids with some of the performance of liquids in a safe package.
That's a *big* if.
"If you mean SNAP generators or whatever they are called now then that's a different matter."
He does not.
The former Soviet Union powered a number of its ocean surveillance radar satellites with them. Look up TOPAZ. Roughly the 10 kW range.
The US's only *reactor* was the SNAP 10A (even numbers in the SNAP programme were reactors, odd #s were RTGs like the ones on the Moon). That was 500 W electrical using thermocouples with about 1.9% efficiency. Re-doing it today they would be about 6% efficient. It failed after 46 days on orbit (looks like a dodgy component in a power regulator *outside* the reactor itself).
SDI looked at reactors in the 100 kW range and there is (IIRC) a low level NASA programme to do one for the Moon, Mars and orbital applications. Note 100 kW is about 1/2 the *whole* ISS array and about the size of a pickup truck engine (depending on the level of enrichment it can be about the *weight* of a pickup truck engine).
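Those efficiency figures translate directly into how much raw thermal power the core has to produce, and the radiators reject, for a given electrical output. A quick check using the numbers from the post (illustrative arithmetic only):

```python
def thermal_power_kw(electric_w, efficiency):
    """Thermal power (kW) a reactor must generate for a given electrical output."""
    return electric_w / efficiency / 1000.0

# 500 W electric at SNAP 10A-era ~1.9% vs modern ~6% conversion
print(f"{thermal_power_kw(500, 0.019):.1f} kW thermal at 1.9%")
print(f"{thermal_power_kw(500, 0.06):.1f} kW thermal at 6%")
```

So tripling the conversion efficiency cuts the heat-rejection problem by the same factor, which is why it matters so much off Earth.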
IIRC one of the *key* enablers in Robert Zubrin's plan (The Case for Mars) was just such a reactor, even acting as a raw heat engine to melt your way into polar ice.
Note that once you're in *any* kind of atmosphere the heat sinking problems get *much* easier (even at about 1/1000 the pressure on Earth, there's still *something* for convection to operate with). Gravity can be quite useful too.
However as such a system would be *critical* to the whole expedition you don't want a single point of failure.
Taking 2 (ideally on separate vehicles) would be a *very* good investment.
"It's quite simple. If there's a drought, it's global warming. If there's too much rain, it's global warming. If there's less ice, it's global warming. If there's more ice it's global warming. If there are more storms, it's global warming. If there are fewer storms, it's global warming. If the temperatures go up then global warming matches with the models. If temperatures don't go up, the temperatures match with the models. If... Oi! Are you keeping up at the back there?"
Highly amusing.
Do you have anything *useful* to say?
"I read this paper just an hour before reading Lewis's interpretation.
That would depend on how the information is incorporated (or deduced) from *existing* GCM's.
*If* the PDSI is a standard *input* to those models then *all* models are over-sensitive to climate change. If the "drought maps" are produced by GCMs evaluating the variables of the PDSI then their *logic* has to be refined to give more accurate outputs.
"Sadly the research in no way invalidates the global warming calculations."
That's a strawman argument. If PDSI is the input, that may well move the size and scale of predicted droughts, invalidating *existing* predictions. If it's an output, it is yet *another* refinement GCM modellers need to make to their systems.
And BTW there is something called an "evaporation pan" which works quite well as an evaporation gauge in meteorology. However I'm not sure if it's used *widely* enough to be included in the standard mix of data fed into GCMs.
And hence "conservative" in its over prediction. Which is fine if *nothing* better exists.
Note the reference for the "better method" (Penman) dates from *1948*. And on another historic note, "and may help to explain why palaeoclimate drought reconstructions based on tree-ring data diverge from the PDSI-based drought record in recent years," presumably because the tree rings record *actual* drought while PDSI calculates *expected* drought (that might not have happened).
It's plausible that this method would be much more computationally demanding, but it's hard to believe that *only* now has the MIPS been available to do this.
Thumbs up for someone refining the model and using "underlying physical principles," rather than some sort of derived proxy.
And what are those little red triangles all about?
NASA is playing a 2-2-2 lineup?
Note this has 2 good points:
Experience of humans at the L2 point, which *could* be used as a staging point for various destinations outside Earth/Moon space
Experience of teleoperation which could be useful if you went to one of the Martian moons *without* a full scale Mars landing. IIRC they are so small that it's less a landing and more a *docking* given their escape velocity.
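The docking-not-landing point is easy to check from first principles. Using rough published figures for Phobos (the mass and radius below are approximate):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Escape velocity from the surface of a body: sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Phobos: roughly 1.07e16 kg, mean radius roughly 11.1 km
v = escape_velocity(1.07e16, 11.1e3)
print(f"{v:.1f} m/s")  # on the order of 11 m/s -- a sprinter could reach orbit
```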
The quote
"Engineers believe radiation also shut down one of Dragon's three GPS navigation units, a propulsion computer and an ethernet switch during the flight. Controllers at SpaceX's headquarters in Hawthorne, Calif., recovered those systems to full operability, Suffredini said. "
I'd tell you three times, but I'm not sure *how* they were recovered to "full operability". Cycling the power is just a guess (but a popular one). Most mission-critical embedded stuff has some kind of watchdog timer: if the software goes haywire, the timer expires and the hardware re-boots.
Unless of course the program in ROM is corrupted as well...
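For anyone unfamiliar with the pattern, here's a toy model of a watchdog in ordinary code. Real ones are hardware counters the software must keep resetting; the class, tick counts and "hang" here are all invented for illustration:

```python
class Watchdog:
    """Toy watchdog: forces a reboot unless the software 'kicks' it in time."""
    def __init__(self, timeout_ticks):
        self.timeout = timeout_ticks
        self.counter = timeout_ticks

    def kick(self):              # healthy software calls this every cycle
        self.counter = self.timeout

    def tick(self):              # the hardware timer decrements on every tick
        self.counter -= 1
        return self.counter <= 0  # True -> time to force a reboot

wdt = Watchdog(timeout_ticks=3)
rebooted = False
for cycle in range(10):
    software_alive = cycle < 5   # pretend the software hangs at cycle 5
    if software_alive:
        wdt.kick()
    if wdt.tick():
        rebooted = True          # the hang was caught; hardware reboots
        break
print(rebooted)
```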
Most of which seem to have been fixed by turning them off and then back on (or "cycling the power" as NASA seem to be fond of saying).
IIRC the going rate for a BAE RAD750 board is something like $750k.
Note it's not *just* the size of the transistors it often includes things like all registers having 3 way voting and (possibly) a special mfg process which is more rad hard than normal IC mfg processes.
But software mitigation techniques (and hardware add ons to assist the software) have been known for *decades* and $750k buys a *lot* of modern hardware.
Note only *one* of the Dragon processors was hit, the other 2 ran without a hitch.
Shuttle taught *many* lessons about complex processor design and (*just* as importantly) the software to *support* it.
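The 3-way register voting mentioned above is just a bitwise 2-of-3 majority, which can be sketched in software (real rad-hard parts do this in the register logic itself; the bit patterns here are made up):

```python
def vote(a, b, c):
    """Bitwise 2-of-3 majority: each output bit agrees with at least two inputs."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0010
flipped = good ^ 0b0000_1000   # a single event upset flips one bit in one copy
print(bin(vote(good, good, flipped)))  # the two clean copies outvote the upset
```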
"As it is, (silicon) electronics are hitting several brick walls of physics, namely gate size, transistor count, heat dissipation and switching speed."
A current transistor is about 140 atoms wide so with a doubling of density every 18-24 months it's more like a decade and a half. The next two are down to imaging techniques and the last to 3D designs like the Intel "fin" shape. You're looking at about twenty years but the walls *are* approaching.
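The arithmetic behind that estimate, assuming "density doubles" means linear feature size shrinks by √2 per period, and assuming a practical floor of a handful of atoms (that floor is a guess):

```python
import math

start_atoms = 140   # transistor width now, per the post
floor_atoms = 4     # assumed practical floor: a handful of atoms

# Density doubles each period, so linear size shrinks by sqrt(2) per period
periods = math.log(start_atoms / floor_atoms, math.sqrt(2))
print(f"{periods * 1.5:.0f} to {periods * 2.0:.0f} years")  # at 18-24 months/period
```

Which lands in the 15-20 year range: a decade and a half on the optimistic cadence, about twenty on the slower one.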
Note that moving to a "clockless" design would probably eliminate *half* the transistor count on a modern chip and a *lot* of the heat being generated doing nothing.
But then you could not sell faster chips at a premium.
But why do all these applications look more like special purpose *analogue* devices, built to carry out *one* task?
Where is the quantum equivalent of a device which allows the function it computes to *change* by loading it with a set of commands (or a "program", as I like to call it)?
Right now this looks like optical computing did in the 1960's and 70's. A *huge* leap forward in MIPS which essentially proved too awkward and specialised to use. Every niche market needed a *different* set of hardware and the bog standard (electronic) hardware just kept getting better.
Sure *theoretically* these quantum computing devices beat the pants off conventional digital hardware, but so did the optical systems of their day.
Who uses them today?
in the late 19th century this was made possible by an engineering-marvel, the "Tehachapi Loop"...still used daily by gi-normously heavy and lengthy freight trains. Maybe that's the whole point...maybe his system will have the torque and velocity to go straight up grades impossibly-steep with today's technology...
Possibly. Avoiding height changes was the key reason most of the UK rail network is as convoluted as it is.
Of course in recent years there is this marvelous new technology which they are calling a "Tunnel" which bores through such obstacles.
I think they will catch on.