Re: Hazing as human simian type dominance ritual
And about as fitting in society as flinging faeces.
5065 publicly visible posts • joined 9 Nov 2021
Basic mistakes?
Care to enlighten those of us without deep experience of the environment? Are the orange clad not told to stand aside? Were there no mainframes in use by any US prisons in the 1990s? Were US police departments forbidden from sharing IT staff with prisons?
Seriously, you can't just make that accusation without even bothering to provide a single shred of detail; this isn't the forum for the Guantanamo Newsletter so there really is no expectation of shared experience or even common knowledge.
Sensible. Especially if they complete the ML task and work to extract whatever patterns the ML finds, preferably in human-readable form, so that they can be fed back into the design and manufacturing processes to improve both the objects/processes being monitored and the sensors/sensor regimes.
Of course, the data will also be dumped wholesale into the obvious candidate LLMs by some lazy sod who didn't understand that previous sentence, followed by an announcement that the next batch will be vastly improved by being painted orange. With polka dots.
Who shoddily glued together systems created by other people, without taking any account of the (now well reported) gaping security holes that resulted?
Genius can be shown by taking separate things and joining them in a totally unexpected fashion to create something that nobody had previously considered, but:
Taking something that coordinates talking to a bunch of already existing commercial LLMs, joining that with a library that implements MCP, whose entire purpose is to be connected to LLMs, sprinkling on some connections to attach the 'bots to arbitrary existing messaging channels, which some may say is the natural environment for 'bots (waves at Twitter).
You know, in the Good Old Days, a genius might say something like "if I have seen further, it is only because I have stood on the shoulders of giants"; now the accepted way to be "a genius" is to hope nobody asks if there is anybody behind the curtain - that suspiciously large curtain.
So just make it - bring up a menu? What more needs to be done to a menu than to have it - be a menu.
Could add fluff, like make it reflect the directory structure in a "special" folder (or vice versa, depending upon how you want to think of it, with right-click actions on the menu to mirror folder actions) so you can easily move items to the place you want, rename them, edit the properties etc etc. Maybe have *some* "magic" entries which show some more programmatic entries, like one which shows most-recently-used items, one for a search bar, ... BUT you get to choose which ones are present.
Most importantly, if - when - you have decided what goes where (grouped in submenus that make sense to you) things stay where they are put!
> I wonder how much of the case was written up on the Internet
Hmm, well as
>> (Spamann and Klöhn, 2016, 2024)
describe the test using
>>> a real appeals case from an international tribunal, with minor modifications to accommodate the experimental treatments
Only if someone had also scraped the original tribunal records and other relevant details; e.g. had had the chance to, like the experimenters, scrape the materials that were to become citations at the ends of the 2016 & 2024 papers. And perhaps student papers that discussed the tribunal, maybe even more of those than reviews by judges with too much spare time that can be used to go over historic cases. Say, enough student papers to make regurgitating them the most likely result from the LLM. Minor modifications? Enough to totally obfuscate the original sources? Or merely mods used to simplify the materials enough to fit into a 55 minute adjudication period, similar to mods that might be made to present the tribunal in a class, your judgements on my desk by lunchtime tomorrow?
No, no, I don't believe that GPT-5 could possibly have been trained on anything even remotely related to the historic cases in question; nope, uh-uh. And traffic violation cases? Information about those is only ever passed around by hand-written notes scribed by court clerks, which notes the local newspapermen personally carry from door to door, following the list of subscribers. There is absolutely no way that is ever stored on computer files that could be exfiltrated into some kind of huge training corpus.
Give GPT-5 the same pre-trial foreword to the prompt as the current judges have and little doubt it'll also manage to dredge up some weird stuff from its training data. Or something that sounds good enough that it'll keep the appeals merrily chugging away.
(Was going to suggest that the judge probably used an AI search to find the Witchfinder rules, but upon reflection the politically - and especially religiously - motivated judges have probably been storing up minutiae like that for decades, in the hopes of bamboozling everyone: playing the game of legal apologetics to its bitter end)
The old displays were all accompanied by useful audio warnings that "something has changed, have a look to see if it is your train being announced":
The porter's solid workboots, a rattle and "thud" as the new board slid home (or choice mutterings when it didn't). Which changed to the clatter of the flappy boards, rattling from top to bottom to clear then again to set the next selection. The next lot were quieter, but still an unmistakable noise as the smaller but more numerous little discs moved in unison, like wind in a giant cornfield.
Now there is silence, no reminder to look up from your book[1] to check, so you stare fixedly at the glowing displays, re-reading all of them, eyes blurring as you hope to be the first to spot the vital change and can leg it over to the platform fast enough to be in with a chance at a space on the luggage rack.
All that is needed is an appropriately designed noise, coming from speakers attached to the displays; no siren needed and certainly not the recorded announcements of somebody shouting garbled nonsense[2]
[1] Or you are staring fixedly at the train app, glazing over at the same ads cycling round, refreshing for updates and sweating as more passengers arrive, pull out their own phones and your battery drains faster and faster
[2] Rumour has it that the same dozen announcement tapes are sold worldwide without translation, once it was realised that no passenger could ever be able to decipher the echoing noise and even if they did catch an unlikely word we all assume it was our fault for mishearing "No darling, that was obviously the Basingstoke train, why would they announce Bangalore from Paddington?!"
AI created meltdowns plus nuclear materials powering the DC where the AI is running. Add that we've already been warned that all the major AI models will blackmail us if pushed hard enough - well, that's just great. Doomed, as they say, we are all doomed.
There has clearly not been enough sneering at their C compiler and the mugs^^^^investors have fallen for Anthropic's version of reality.
Trouble is, there is so much delusion, fuelled by their refusal to believe that, as they are all such canny investors, they could never be fooled into making such a bad decision, and the strange depth of belief in the magical properties of the "AI" (which we see popping up here every now and again), that it gets exhausting trying to go over it all, *again*, without just resorting to screaming at them "NO! No, that is NOT what it is doing!".
As a wise man once said: after all, it's not easy, banging your head on some mad bugger's wall.
Hate[1] to nitpick here, but[2]..
A C compiler doesn't need to (indeed, don't believe that it ever should) have a built-in knowledge of where the headers and object/library files are for your system: it can quite sensibly rely on picking those up from environment variables (or command line options or a config file standing in for cli options). So it worries me a *little* bit that the mocking isn't that this compiler is ignoring INCLUDE and LIB env vars but is phrased "native" anything. Dunno about *your* setups, but one of the earliest things my build system does is wipe out the environment and start from scratch again, to pick up whichever LIB, INCLUDE - and PATH etc etc - is the right choice for the moment.
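A minimal sketch of that "wipe the environment and start from scratch" habit (in Python standing in for whatever the build system actually uses; the paths and the child process are hypothetical stand-ins for a real compiler invocation):

```python
import os
import subprocess
import sys

# Build a clean environment containing ONLY the variables we chose, so
# whatever INCLUDE/LIB the parent shell had set cannot leak into the build.
clean_env = {
    "PATH": os.defpath,                 # bare-minimum search path
    "INCLUDE": "/opt/proj/include",     # hypothetical header location
    "LIB": "/opt/proj/lib",             # hypothetical library location
}

# Stand-in for invoking the compiler: a child process that reports
# which INCLUDE it actually sees.
result = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('INCLUDE', 'unset'))"],
    env=clean_env, capture_output=True, text=True,
)
print(result.stdout.strip())  # only the INCLUDE we passed, nothing inherited
```

The `env=` argument replaces, rather than augments, the inherited environment, which is the whole point: the build sees exactly what you decided it should.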
HOWEVER the compiler and readme not being accurate to each other is 100% a black mark.
As is *any* pretence that the LLM has spat out any kind of production-ready compiler - as is being pointed out by TFA, being able to start with full knowledge of the sources to tcc etc etc rates a very big "meh".
[1] Oh, who am I kidding, I love it; we all love to nitpick round here!
[2] Yes, this is a dig at the people commenting on this: if we want to get across to the Man on the Clapham Omnibus that this isn't what it is being hyped up to be then we have to be accurate in our complaints, to avoid being labelled merely as a "hater".
In a fit of insane enthusiasm, I thought this was going to be some company actually admitting that AI running costs are going to be passed down to the customer and they'll find that the humans really were cheaper all the time. BUT that the result of trying to cram in AI was that they'd all found out how crap their existing support software for Customer Support really was[1] and have produced a replacement system that works better for the LLM and (here is the unintended bit) for humans as well.
But, no, of course, they are just trying to flog another "AI". But even better, it is one step removed from the previous "AI" (so now you have to pay extra to rent it from the reseller, with their "value added" stuff on top) AND as the humans are back in their seats between the bloke on the phone and the computer again, it is all the fault of the customer support bod if using "AI" isn't any good the second time around! Win-Win for both sets of Management.
[1] because the LLMs couldn't use it either: support staff had been telling them that for years but you'd expect them to complain, wouldn't you; but now The Computer is telling the CEO directly, it is time to listen!
The purpose of Markdown was to generate HTML from simple markup.
There are now scads of web pages that were written in Markdown, which was used to generate HTML for mass consumption. Of course, there are browser plugins that will do the conversion of Markdown into HTML for you, and display the results, so that you can easily preview what you are writing (if you decide not to use any of the GUI programs that help you write using Markdown).
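To make "generate HTML" concrete, here is a deliberately toy sketch handling only `#` headings and `*emphasis*` - real Markdown processors do vastly more, and this is an illustration, not any particular implementation:

```python
import re

def md_to_html(md: str) -> str:
    """Toy Markdown-to-HTML sketch: only # headings and *emphasis*."""
    out = []
    for line in md.splitlines():
        m = re.match(r"(#{1,6}) (.*)", line)
        if m:
            # "# Title" becomes <h1>Title</h1>, "## Sub" becomes <h2>, etc.
            level = len(m.group(1))
            line = f"<h{level}>{m.group(2)}</h{level}>"
        else:
            # *word* becomes <em>word</em>; wrap the line as a paragraph.
            line = re.sub(r"\*([^*]+)\*", r"<em>\1</em>", line)
            line = f"<p>{line}</p>" if line else ""
        out.append(line)
    return "\n".join(l for l in out if l)

print(md_to_html("# Title\nSome *emphasised* text"))
```

Running that prints `<h1>Title</h1>` followed by `<p>Some <em>emphasised</em> text</p>` - simple markup in, HTML out, which is all Markdown ever promised.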
Now we have these - and many, many other - pages being converted (back) into Markdown (BUT so far I've not spotted which *flavour* of Markdown they are targeting; there are lots of variants out there - and attempts to standardise, but we all know where that leads).
And what is the advantage of this conversion? Why, to reduce the amount of text sent back *and* to strip out all the noise inside the web page. Such as all the crap that forces you to read the page the way *they* want you to see it, not the way *you* want to (remember the Good Old Days, when HTML was all about content and presentation (fonts etc) was up to you, the reader?).
So, I'd like to encourage the use of this feature and take advantage of it as an unintended bonus[2] of all this LLM flummery:
If we can all use a browser that sends the request for Markdown[1] and then does the HTML generation and display for us, we can have a nicer, less noisy, more controllable web to browse! Yay!
Especially if sites that started with Markdown in the first place can be given first dibs at the request and just return the original, skipping two stages[3]. Getting the raw Mermaid (or similar plugins - chem and pic anyone?) that Markdown variants often support for diagrams/charts, instead of JPEGs[4], would be neat as well.
[1] there must be a plugin for that, I'll check when I'm up later today
[2] although the intended bonus is good as well - if you insist on using an LLM to read a web page, at least do something to cut down on the energy use by a bit of preprocessing and, given that is CloudFlare's job, hopefully caching to reduce repetition.
[3] yes, Markdown does include an escape to HTML as standard, so we'll still have to have CloudFlare check for that and do what it can to convert it
[4] why do websites still do that? JPEGs for photos, PNG - or even GIF! - for infographics, charts and diagrams.
> Running software internally helps improve it, assuming that there's good collection of reports of problems and communication of them to the developers. Running someone else's software internally doesn't do that.
Canonical can open an issue for Jitsi (and hopefully do so usefully, knowing what information is important to include), and be able to follow the discussion - including being able to see if progress is being made or if the whole thing has been brushed under the carpet.
> make a partnership with those who develop Jitsi
At least a partnership is *feasible* with many an open source project (can not speak to how open Jitsi itself would be, have only just heard of it!) and it need not be a major undertaking from Canonical's (or anyone else's) p.o.v. There could be a discretionary fund for dispensing to an outside project, either money or man power, to kick things off. Of course, if the problem with Jitsi was a major one for Canonical, one that is (or could end up) costing them any substantial sum, then Canonical would be well advised to put some more resources into getting a fix. Which is still more likely to bear fruit than trying the same with Google Meets.
> Nor is it going to help Canonical convince someone to switch to Ubuntu over running whatever meeting software they want on Ubuntu machines since the part Canonical maintains and is advocating is only Ubuntu.
What actually *is* "only Ubuntu" in that case? Canonical have, at various times, written chunks of code to go into their distro, but the overall result of "moving your machine onto Ubuntu" is to use the large amount of software that Canonical did NOT write and do NOT explicitly maintain themselves, beyond compiling it against the rest of the software in their distro, then popping it into their repos and "select some software" utility. If Ubuntu uses Jitsi, is seen to use Jitsi, and clearly has Jitsi available as selectable software then it becomes part of the whole deal Canonical are providing and advocating. Otherwise the only points to show potential Users who approach the Canonical stand are going to be purely items that Canonical directly edit and maintain which might appeal to a certain class of user: "now, just take a look, here you can see our local edits to the .rc files".
> Where the CPU spends more time after the code is parsed and maybe JITted?
The CPU spends all its time executing microcode that interprets the binary opcodes that we feed it. Not too sure what that question has to do with any of Liam's arguments, but as you wanted to know...
Article >>> Most code is interpreted
But, as understanding that short statement rather relies on the reader being aware of what is actually used to execute so very many programs these days, especially by the bulk of the Users out there, it needed to be fleshed out for the ill-informed:
>> Javascript is rarely compiled. JITted yes, but it's at heart a dynamic interpreted language.
But, where is all this JavaScript you speak of, asks our slightly-better-informed reader? To which our esteemed author replies:
>> Someone running Gmail, Google Docs, VS Code, Slack, and so on -- all Electron apps -- is mostly running Javascript
Ooh, some real examples! So there *is* lots of JavaScript in use. Having had this pointed out, of course our reader realises that there was a hole in their apprehension and, as a Good Thinker, will ponder on this and use their web browser to go and see what other places may be hiding JavaScript, in other frameworks or wrappers. Maybe, they'll even have heard of other interpreted languages and wonder if those are also hiding away in well-known systems that are used by many people!
OR
our reader could just whine
> You didn't mention Electron at all in your article
However, that isn't what I wanted to ask you, now you've popped up again:
> So IT started to move backwards - crappier applications, slower performances, huge downloads.
We are all still on tenterhooks to know about your marvellous, obviously proprietary, OS and UI toolkit design that works so much better than the nasty stuff we all foolishly run, but as you remain teasingly silent about it, time for a bit of Holmesian deduction[1], eh.
You usually tell us that nothing good came from the 1970s (Unix et al being key items) and now we know that, since Java was released in 1995, IT has gone backwards; that leaves us with your "time of computing nirvana" being the 1980s or some part of the 1950s or 1960s. BUT as TFA points out, the latter two decades were rife with source code as the common medium for acquiring programs, so it can not be either of those. The 1980s it is, then: something in the era we commonly know as the heyday of the 8-bit home computers caught your eye and has never let you go. Hmmmm.
I eagerly await your next clue.
[1] yes, I am aware that Sherlock Holmes didn't deduce anything (well, maybe a couple of items) hence the modifier.
Making your programming language self-hosting is a rite of passage, especially in the Good Old Days when its own compiler was one of, if not the, biggest/most convoluted programs around that could be used for regression tests. And it made it feasible to create cross-compilers and then follow the T-diagram to create a new native compiler for the latest acquisition in the computer lab. Now we have so very, very many machines all running the same instruction set and have generally stopped tweaking the microcode firmware monthly, we all wait for the new executable to magically appear as a download and practising T-diagrams has become a bit of a rarity, only occurring once or twice when a new processor architecture hits the market.
> he is the smartest man in the world
You may have a problem with your spell checker: it isn't "smartest" but "narcissist", although you might have been going for "sociopath". Both of which are sadly useful to become stupidly rich (even though most of his wealth is imaginary - if he ever tried to realise it he'd be liquidated).
> Unlike Bill Gates, who lied stole and cheated his way to riches.
Not going to say I have any great love for Bill Gates, but, unlike Musk, Gates could actually do some of the things that are relevant to his products, like write a bit of code: ever seen Musk do anything practical, other than talk (speaking of which, what is that word for the words said when talking about things you know are never going to happen and fraudulently taking money for, something pithy - ah, yes, "lies").
> It's not the cartoon creation that is the issue, its the fact that this data is already in the system.
>> images have been added to Instagram with links to users' profiles, including both private and public accounts
>> these caricatures signal to an attacker that the person uses an LLM at work - meaning there's a chance they input company data into a publicly available model.
Yes, the data has already escaped into the system, but posting these cartoons - and linking them to accounts - means that now we know *whose* data is in there *and* a good idea of what it is, what sort of area it concerns. Always good to be able to reduce your search space to the more profitable volume.
>> Much of the information [in] the public images will support doxing and spear phishing
So now you know who has been leaking lots of juicy data, whether it is likely to be of interest to you to get access to their account and easily slurp up that data and you have info to make it easier to use good old social engineering to get the last bits of the access credentials.
> Not making them somehow doesn't make the data disappear.
The cartoons are neon lit signs advertising the presence of goodies, which are being held by a numpty with no idea about online safety: delicious!
> Limitations: let's start with encodings and line-wrapping
You want fancy features, you use fancier programs. Like Wordpad! Neither of those items are important in so many cases, including knocking out a quick batch script or C program, if you are working on someone else's box and they haven't installed *your* choice of big complicated editor.
Doubly so at the time Notepad was first released, when you claim that the devs at MS didn't want it released... MSDOS times.
Mix together the following:
"I read about it in a LinkedIn post and paid a lot of money for this AI, we will use it" + "meh, it does the HR work for me, why should I bother looking at its choices" + "the computer says we should hire this person, they'll be a big earner" + "we taught the ML by showing it pictures we'd tagged with these traits but it is all a black box, we don't honestly know what it picked up on" + "exaggerate the traits the ML is actually recognising to ensure it triggers" + "we are continually retraining our model on new mugshots from the Top Earners listicles"
and you have a lovely feedback loop.
We already have this loop in real life ("this person looks the part") with ill-defined criteria ("I just have a feeling about them") and a lack of proper negative feedback controls ("I am a good judge of character on first impressions, do not argue with me") which already gives us the well-known "sod what they can actually do, tall with silver hair, pay them" effect.
Now add in the "use AI" and we have the opportunity to really go to town and start to exaggerate whatever weirdity the black box has really spotted: "All the high earners paid for studio headshots" => ooh, reflections of expensive cameras in the eyes and spectacles; paint a mirrored Leica logo on your forehead to get the job! It won't be that blatant, of course, but slow and subtle, with the features being selected for unconsciously and "becoming fashionable" as people ape their "betters": if we didn't have make-up and "influencers" we'd just have to wait for evolution to do the job for us (hey, right back at "your parents are the best determiner"!).
So what will the unexpected feature be, how will we appear? The Leica logo? A weird adversarial noise pattern that the LinkedIn image compressor glitches on?
Or the training data had some of those blank outlines with nothing but a question mark on them: "who will take on the role of next CEO, a position worth $xxx?"; "who is the mysterious Satoshi, is he really now one of the richest men on the planet?". At the end of the next decade, everybody is dressed as The Riddler (Holy genetic algorithms, Batman!).
"Desktop Commander"?
Why don't people name their programs sensibly, so that it reflects what the program will actually do? I mean, you look at something called "Paint" and can guess what it allows.
For this MCP server, I'd suggest something more meaningful, like "Bwaahahaha" or "SubmitMeatbag".
> But hey, I can run an outdated OS with a ugly UI and ugly applications for free....
We are still waiting to see release 1.0 of your modern OS with beautiful applications; doubt any of us could afford to buy a copy, but it is always interesting to know what we should be aspiring to run.
> Pah - in my day we handcrafted zeros and ones straight onto the HDD platters with tiny magnets!
> self-describing ... That's a trend that has run its day, fortunately - more power to JSON and CBOR.
Bleugh! Giving up self-describing for something that is totally arbitrary and needn't be carrying anything of any use to the receiver at all: description by random hard-coded checks, assuming you've even bothered to code the checks at all.
Madness!
> type definition is stored as well
The type *can* be stored as well (or is well-defined by the DTD or schema or whatever meta-thingy you decide to declare "is being used" at the top of your XML file)[1]
But all too many uses of XML never got anywhere near dreaming about identifying that meta-data and relied on hard-coded assumptions and crap documentation: interoperability down the drain.
But I'll take XML over JSON every time. Especially using expat, where I can avoid cluttering memory with the entire file loaded as a DOM before even deciding it is any use.
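The streaming style is easy to show with Python's stdlib wrapper around expat (the element name and document here are made up for illustration): callbacks fire as the bytes arrive, nothing resembling a DOM is ever built, and you can stop the moment you've seen enough.

```python
import xml.parsers.expat

# Collect only what we care about; the rest of the document streams past
# without being stored anywhere.
found = []

def start_element(name, attrs):
    # Called once per opening tag, as it is parsed.
    if name == "fred":
        found.append(attrs.get("value"))

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start_element

# In real use this would be fed in chunks from a socket or file;
# the final True marks end-of-document.
p.Parse('<cfg><fred value="42"/><other/></cfg>', True)

print(found)  # ['42']
```

Contrast with a DOM or `json.load` approach, which insists on the whole document sitting in memory before you can so much as peek at one field.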
Sod it, 99% of the time that I had JSON foisted on me at work, an INI format file would have done the job! Especially for embedded devices that just wanted to send a bit of info but were lumbered with a parser that would happily try to read into core a 500Kbyte file (embedded, remember) that some prat had sent before, again, getting round to spotting it didn't contain a field "fred" with a value between 0 and 255, which was all that the poor thing wanted! Mutter, mutter.
[1] note that your example looks reasonable, but unless you remembered to name your schema/thingy then maybe the B tag is really a boolean, how is the parser supposed to know? Yes, you told *us* but we're not interpreting the XML. Well, except for those times when the easiest way to progress was to get a human to translate the XML into something *really* "machine readable"!
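For the record, the INI version of that embedded exchange really is this small ("fred" and its 0-255 range are from the scenario above; the section name is hypothetical), and the stdlib parses it without ever threatening to slurp half a megabyte into core:

```python
import configparser

# The tiny payload the embedded device actually wanted, as INI rather
# than a 500 Kbyte JSON blob.
raw = "[device]\nfred = 200\n"

cfg = configparser.ConfigParser()
cfg.read_string(raw)

fred = cfg.getint("device", "fred")

# The only validation the poor thing needed:
if not 0 <= fred <= 255:
    raise ValueError("fred out of range")

print(fred)  # 200
```
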
Once again, could you *please* say what you mean by "Fifth Generation Languages"?
The Wikipedia response is total bollocks[1] and has, of course, fatally poisoned every other result in my quick web search :-(
Are you using the term to refer to Rational Rose (as you mention *a* Rose), intended to generate code from diagrams etc?[2] We used that for a while (I moved on) but to talk about its output as being "unmaintainable" is about as sensible as talking about the assembler from GCC as being unmaintainable: if you need to "maintain" the output, instead of modifying the input and re-running the compiler (Rational Rose is just another compiler[3]) then you are Doing It Wrong. And if in the end you can't make it do anything useful without trying to fiddle with its output then the "Doing It Wrong" means "trying to use this tool to solve your problem, go use something more sensible".
[1] confusing, as it does, "a language to be used by a proposed fifth generation of hardware" with "this is the fifth generation of programming language", and then not even being 1% accurate about constraint solvers; sigh, Wikipedia, what will we do with you.
[2] Rational Rose, and other (variously-arsed) "code generators" aren't "the fifth generation of programming language" by any sane[4] measure either
[3] btw, not trying to claim that it is/was a *good* compiler, or a good or even a vaguely useful input language in the first place - before I moved on from that project it didn't seem to be doing anything terribly useful for us.
[4] sane? Is anything from a hypemeister ever able to be considered "sane"?
cookiecutter has essentially said that he would never deploy *anything* that hasn't been released by chainguard (or similar).
Which leads to the question that, if he doesn't trust any "developers", who or what does he think works on the material that chainguard packages - or even at chainguard themselves?
Fascinating.
Taking the time to follow your usual rant against extant OSes[1] *and* also referring us to a technology (memory segmentation) that does literally *nothing* to solve any of the issues that the OP raised![2]
A masterclass of the kmorwath style.
[1] still waiting to hear about your progress on your replacement OS that you believe works, btw
[2] conflicting dependency requirements which require copies of absolutely everything to be bundled with the application, just in case, OR the application only runs on one, very specific, OS version so be careful you downloaded the correct one from the 1719 different builds that were provided - or not.
> As long as it's done right
It is hardly a condemnation of 2FA to add that as a rider, it literally applies to *anything*. And we tend to remember (with good reason) anything that is done badly/stupidly/frustratingly so that is what comes to mind first and colours our perceptions.
Drinking water is a Good Thing, so long as it is done right: forcing people's heads into an ice-cold butt is probably going to put people off.
Mutter dark oaths and raise your pitchforks at the purveyors of the "Chill Dunk 3000 Hydration Station (ducking stool add-on available)" but still enjoy your refreshing glass of perfectly potable aitch-two-oh.
Great comment: just blithely state that the Register comments are negative because we can not be bothered to learn and are just scared of the upstarts coming to take away our jobs.
OR you are a shallow youth who is the one who can not be bothered to learn, to actually go back and read about what the grumpy old farts actually lived through and see how it really is just one damn stupid fad after another, making money for the hypesters. To heck with comments, that is the entire thrust of TFA you are responding to!
> where it is genuinely useful it survives
And fades into the background, needing no hype to survive and generating few, if any, emotional reactions aka rants. When did you last get worked up about the Turing Architecture?
> People will often do it unnecessarily because they want to understand and learn it
Now there is a fascinating statement.
If you are truly interested in learning about it, giving a technology a work-out is not unnecessary. So - contradiction?
Unless you are "doing it" too early, without having first picked up enough background to comprehend what you are seeing (and by "work-out" I don't mean running the installer so that you are ready to try out the examples in chapter one). The worst case of which is the en masse move of your company to the New Shiny, with the need for increasing numbers of Memos From On High demanding that everyone has to use the Shiny in order to get the company's money's worth from it - does that one ring any bells from any Register comments you have so casually brushed aside?
You have correctly described the value of gathering hardware into a data centre, getting all your physical plant together and taking advantage of the physics in doing so. And the equivalent in human resources to keep the plant running efficiently.
HOWEVER
whilst Cloud implies data centre, DC does not imply Cloud.
Even your statement: you ran a cloud in a shared DC. It was a cloud to your clients, to you it was a bunch of hardware, either purchased & colocated or hired by the rack from someone else who decided to pop it into that DC. The software running on it then determines if it (today) is to be "cloudy" or not.
> The function name says what it does, and the parameters and return value(s) specify everything it could touch.
> This should mean that you can scan the upper level and quickly ignore all the functions except for the one or two that you may need to change
You missed taking a copy of that "should" from the second of those two lines and inserting it into the first:
] The function name should say what it does, and the parameters and return value(s) should specify everything it could touch.
> Every IDE has a trivial keyboard shortcut to jump to the implementation and back, whether in the same file or another. A few will even show it in-inline.
Skipping over "every" (and, btw, that is the job of an editor; if it also has to be tied into an IDE then, well, hmm): if your first contention is correct then what need is there for such an immediate review of the called function's code? Especially in-line! That latter really starts to sound like the response to "functionitis" getting out of hand ("I have absolutely no trust in any function's author"). Perhaps, for real life work, which trumps "it should be" every time, the sentence ought to be:
] The function name hopefully suggests what it does, and the parameters and return value(s) possibly specify everything it could touch.
> With containers I don't have to give a flying Duck about those and not fear that one would break the other.
Or one can use VMs for the same effect.
Now cue neverending series of arguments about why/when you should (not) use containers or VMs; more than a few of which will completely ignore the key line in the above, which is also all that I am concerned with these days:
> for own needs
>> Think it is unlikely that NASA has done any kind of detailed study on an orbit boost.
Followed by an argument as to why no DETAILED study is required: because it is clearly too costly (and danger is an expensive cost, btw).
A DETAILED study, as would be required to produce an actual mission plan if the request to NASA were granted, is a lot more work than is needed to establish that there is no point in going on with that sort of plan.
I stand by my belief that it is exceedingly unlikely that NASA ever (formally) produced a plan, or detailed study, for an orbit boost mission: it is simply not needed to go to that level in order to decide that it is a non-starter.
> our diet would revert to the more austere and seasonal one I remember from my youth.
Although it was seasonal and the basic ingredients were more - basic - the food of my youth was varied and enjoyable (except the broccoli, of course).
But then, my mum was a good cook and she made sure that we all knew how to cook (and did so for the family): there is more than one way to prepare potatoes and a vast range of breads.
To my shame, with all the modern 'convenience' my cooking skills have gone downhill as the decades progress: it is easier to maintain your abilities, in the face of so much else to do and so little time, when it is a commonplace necessity to do so.