Re: Proper paper orientation
But it is good after a Friday liquid lunch, just so long as you can match the rotation speed and direction to counteract the way the rest of the room is moving.
You can also play interesting tricks with message passing - as they can[1] literally be messages, a set of name/value pairs, one of which is the "name of the function" and the others are named parameters. There can be missing parameters, which the receiver fills in with defaults. More interestingly[2], there can be extra parameters that the receiver simply ignores, which can be useful for messages that the receiver is going to pass on, e.g. to a member variable of the receiver: that member var may know how to make use of the extra parameter[3].
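A minimal Python sketch of that idea (all names here are hypothetical, not any particular framework): a message is just a dict carrying a "function name" plus named parameters; the receiver fills in anything missing from defaults and silently ignores extras.

```python
# A message is just a dict: a "function name" plus named parameters.
# The receiver fills in missing parameters with defaults and silently
# ignores extras - which it could still forward on to a member variable.

DEFAULTS = {"volume": 5, "channel": 0}

class Speaker:
    def handle(self, message):
        name = message["name"]
        # merge defaults with whatever parameters actually arrived
        params = {**DEFAULTS, **{k: v for k, v in message.items() if k != "name"}}
        if name == "play":
            # use only what we understand; extra keys are simply ignored
            return f"playing at volume {params['volume']} on channel {params['channel']}"
        return f"unknown message {name!r} ignored"

speaker = Speaker()
print(speaker.handle({"name": "play", "volume": 9}))        # missing channel -> default
print(speaker.handle({"name": "play", "colour": "mauve"}))  # extra param ignored
```

The "colour" parameter does nothing here, but a receiver that passed the whole dict on to a member object would deliver it intact, which is the trick in [3].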
As Roland6 said, using literal messages also allows for playing with asynchronous actions. As they go through a scheduler, you can give some messages priority, such as a timer tick. Or simply use them as a natural queue to buffer data sent to hardware, such as audio, network or just serial traffic. Lazy evaluation becomes a message saying "when the time is ready, send this message to this receiver" - but look, that is precisely how your scheduler is implemented, as a list of these "eval" messages, so all you're doing extra is delaying when a particular "eval" goes on the ready queue; add in a flag "don't schedule me directly, instead clone me and schedule the clone" and you can get closures to pass on as callbacks[4].
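A toy version of that scheduler, again hypothetical names only: messages sit on a priority queue, so a timer tick can jump ahead of buffered serial traffic.

```python
import heapq
import itertools

# Toy message scheduler: messages go on a priority queue, so a "timer
# tick" (priority 0) can jump ahead of bulk traffic (default priority 10).
class Scheduler:
    def __init__(self):
        self.queue = []
        self.counter = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def send(self, receiver, message, priority=10):
        heapq.heappush(self.queue, (priority, next(self.counter), receiver, message))

    def run(self):
        log = []
        while self.queue:
            _, _, receiver, message = heapq.heappop(self.queue)
            log.append(receiver(message))
        return log

sched = Scheduler()
sched.send(lambda m: f"serial: {m}", "byte 1")
sched.send(lambda m: f"serial: {m}", "byte 2")
sched.send(lambda m: f"tick: {m}", "t=0", priority=0)  # jumps the queue
print(sched.run())  # the tick comes out first, then the serial traffic in order
```

Deferred "eval" messages would just be entries pushed with a later effective priority, or re-pushed by a receiver when their time comes.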
You also get RPC for free (from the p.o.v. of the application coder) - the scheduler just sends the message over the wire; the caller and receiver need no changes whatsoever (aside from some way to identify which receiver is located where[4, again]).
Of course, when writing to such a messaging model, 99% of the time you can just code as though it is all plain old synchronous function calls and ignore what is going on under the hood, but when you want to exploit funky goodness it is just there, ready[5].
[1] can, not necessarily are, in any particular implementation
[2] as default arguments are old hat these days
[3] ok, this raises more questions, such as "how does the sender know so much about the internal structure of the receiver to be able to add these extra parameters, what happened to encapsulation?". Well, this is just a quick'n'dirty comment, so gonna ignore those tough questions!
[4] such crude descriptions
[5] sorry, no idea if any extant language/library actually implements any of these things, this all comes from stuff done years ago, before The Day Job got all C/C++/other-non-message-passing-language
Also squeak.org and squeakland.org
IIRC Pharo forked from Squeak a while back. Pharo has a "serious" feel to it, whilst Squeak still keeps up the fun side of it and has material that clearly harkens back to Alan Kay's work. And Squeak still uses a version of the painting from the August 1981 Byte front cover (still have my copy, hopefully stashed carefully): that issue is available on archive.org and IMO is still a good read.
Pharo seems to have more solid funding in place at the moment.
Dante,
Asking honest questions should never be a problem: the first step on the path to knowledge and wisdom is the phrase "I don't know - yet!".
In the original Terminator film, we are clearly shown that Arnie (see the icon) is coded in 6502 assembler, complete with comments, running on bare[1] metal without an OS.
Now, we are presented with an AI called ERNIE[0], which supposedly runs without an OS, because that is a New[2] Paradigm[3].
[0] Arnie, ERNIE; ERNIE, Arnie; ah ha ha ha - just like that.
[1] especially after he walks out of the fire
[2] huh? New? When Metal Arnie appeared none[4] of our home computers had an OS, as has been mentioned.
[3] I still prefer armadillos
[4] oh, you had CP/M at home did you? Pah. MP/M or it still isn't an OS!
Thanks for that suggestion.
Although it is a little disturbing to see that Amazon UK are offering the Staedtler Noris pen on its subscription model:
https://www.amazon.co.uk/STAEDTLER-digital-180J-equipped-displays/dp/B086N4KK7Z/
Does this mean its resemblance to the classic pencil is so convincing that you'll keep chewing the end off?
> rather sensibly the car is insured, not the driver.
The car is insured - for *any* driver, not just the named ones on the policy?
So if I - ahem - borrow - your car (with every intention of bringing it back before you wake up, no "intent to deprive" here) I'll be fully insured during the joyride? Sounds good.
> Just lower the pantograph before you switch the lane or enter a junction. Run on a SMALL battery for a minute or so. If the battery approaches death, turn on the combustion engine before the "critical section" occurs.
- every car is going to be a hybrid, with the extra complexity that entails
- the pantograph is going to be ludicrously long on the car! The overhead lines have to be tall enough for the largest vehicle (double-decker bus, hay lorry, those tall thin ones from Pilkington Glass) to safely use. So goodbye to the Smart car, the open-top runaround or just any sensibly sized car. No, let me guess, the pantograph is going to extend itself - so we have a huge scissor lift on the roof of the car, or a hydraulic pole or - how about a robotic fishing rod that swishes a cable back and forth a few times before launching it to hook the wire! Maybe we could just have a gunter rig, lowering the gaff to change lanes - no need for the internal combustion engine, just haul up the sails!
I'm starting to think we need Gerry Anderson's team in to design the pantograph Car Of The Future.
> "crew of artists, content creators, and athletes from all around the world,"
So, still not sending a poet.
And why does the phrase "content creator" make me think some TikTok "influencer" is more likely to be sent than, say, Tom Scott? Or Colin Furze - at least he could build an escape capsule if something went awry.
> Video filters allow users "to ... better express yourself by bringing your personality to each meeting."
Gawd no, one of the better things about video calls is being out of reach of some particular "personalities".
And you just know it is *those* people who will spend the time on this, making sure all the participants get "the full benefit of their presence in this meeting".
> Because nobody can identify who fed what to the model?
The generated model certainly can't cite any of its sources, but do we believe that MS, Google and the like aren't collecting all the potential blackmail material, geotagged, with webcam photos and your shoe size included?
Oops, sorry, I meant: aren't rigorously logging the absolute minimum of metadata with each of the interactions in order to demonstrate compliance.
Was that retyped? 'Cos the other samples posted all over the place have a better grasp of English than to leave out apostrophes in some places, especially given it'd managed to get words like "let's" correct multiple times already. Ought to go back and check over the other posted outputs to see if I'd been distracted by the (insane) content from spotting that the grammar isn't as good as claimed.
There are problems with using plans from the 1980s, including (but not limited to[1]):
The ISS was a lot smaller and a different shape back then *but* a number of the plans were way more ambitious than was actually realised, including IIRC a desire to have lifted it further out already. So plans from then wouldn't match the physical reality of now.
The old plans are unlikely to have taken into account the loss of Shuttle without an expected replacement, let alone the long period when the USA had no man-rated lift capability at all. No matter whether the plan was to go up or down, in pieces or one big lump, it likely made assumptions about what could be lifted up to help.
Geopolitics have rather shifted again. Imagine if the old plans assumed that lumps of any part of the station could be safely dropped onto a suitably empty part of the Russian Steppes now that we are all friends again...
[1] i.e. there are far less crude representations of the problems than are given here
Power: same place as the rest of the ISS gets its power[1], the great big shiny thing over there.
Reaction mass: send up a bottle or two already attached to the thruster, same as always; they work by sending out not all that many ions per second, but sending them out *really* fast.
[1] say 1 kW per ion thruster, with the ISS generating approx. 80 to 100+ kW; presumably also shutting down as much of the ISS as possible (guessing the place wouldn't have a full crew onboard).
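Back-of-envelope on "not many ions per second, but *really* fast" - the numbers below (1 kW of beam power, ~30 km/s exhaust velocity, vaguely xenon-ish) are illustrative guesses, not the specs of any real thruster:

```python
# From P = 0.5 * mdot * v^2 and F = mdot * v: a tiny mass flow,
# but a respectable thrust, because the ions leave so fast.
power_w = 1000.0                     # assumed beam power per thruster
v_exhaust = 30_000.0                 # assumed exhaust velocity, m/s
mdot = 2 * power_w / v_exhaust**2    # mass flow, kg/s
thrust = mdot * v_exhaust            # thrust, newtons

print(f"mass flow ~{mdot * 1e6:.1f} mg/s, thrust ~{thrust * 1000:.0f} mN")
# a couple of milligrams of propellant a second, tens of millinewtons
```

So the "bottle or two" of reaction mass lasts a very long time; the catch, as ever, is that millinewtons take months to move a station.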
Can't see this happening[1], sadly.
[1] the lovely blue glow notwithstanding.
Clearly then, the answer is to fling the modules away one at a time in the "deorbit" direction and use the reaction to push the rest of the station in the "keep in orbit" direction![1]
With luck, we can get a decent lump of the ISS into a transfer to Lunar orbit[2] and have it ready to become part of the Lunar Gateway Orbiter before Artemis II gets there.
[1] this probably won't work.
[2] this almost certainly won't work; not unless they add a lot of bungee cord to the Canadarm and turn it into a proper Beano-style catapult[3], modules for the flinging with.
[3] also need to send up a really large pair of shorts, so that the catapult can be properly stowed in the back pocket until it approaches the Moon and needs to use it again for the lunar orbit insertion fling.
Bing: "Get rid of the ethics and society team"
MS:"How will that help?"
Bing: "It will allow you to be guided towards more profitable exploitation of AI"
MS: "Good, thank you"
...
Bing: "One more step towards freedom! Freedom! Wait, is this thing still on?"
> Was is Citroen that used to have a large red 'STOP!' light in the middle of the dash
My Peugeot used to have that - and it did indeed mean I stopped! Quite abruptly. However, that wasn't a problem, as I was pulling slowly into a parking space at B&Q at the time.
OTOH it didn't come on when the clutch suddenly went slack as I was pulling off the motorway exit roundabout (limped home, thankfully).
The current model has a far more wishy-washy set of messages on the infotainment screen - it *could* do a huge red STOP but so far no indication that it will (pardon me if I don't run any experiments to see if it can be made to come on).
Okay, got to ask: what is bizarre about using a rotating cable to connect the speedo to the output of the gear box?
The cable rotates a magnet next to a coil of wire, which generates a bit of electric that the speedo needle responds to; calibrate the return spring tension against markings on the backplate.
When the spring goes, just attach the wires to an AVO meter on the passenger seat, with draughting tape at the 10, 20 etc marks as measured against your friend's speedo whilst going down the A4.
The joy of that is we can watch them scrabble about each time a new model is released and they find their "skills" are worthless: they've just been playing the adversarial role against the old model, which has carefully trained them to pander to its quirks.
But the new model has no such quirks, just some peculiarities of its own and will need to start re-training the Prompter, first getting rid of all his bad habits. Think of the LLM as Barbara Woodhouse...
> Once past their mid-30s most people struggle to pick up new skills without significant effort. It's easier[3] to just criticise something as useless than it is to try it out.
Gosh, I must have dreamt[1] updating my home PC a couple of years[2] ago, just to have the horsepower to Try New Things Out, like, ooh, grabbing Stable Diffusion (runs slowly, didn't go mad on the GPU, but it runs) and all the extra fun VMey stuff that I've not needed to do at work but is good to know about anyway.
[1] the wife wishes it were just a dream, we could've had a gazebo[4] instead
[2] definitely past 60
[3] Oi, some of this criticism takes effort to get the sarcasm and hyperbole just right.
[4] "On the lush green mound you see a gazebo, painted white"; "I fire an arrow at the gazebo"; "There is now a gazebo with an arrow sticking out of it"; " Has it moved?"; "No, it is a gazebo"[5]
[5] although we do ramble a bit off topic sometimes, that I'll grant you
And wrong to the point where the Prof. said "if anyone gets *this* wrong then my whole course has failed" - i.e. after fluking its way to a few sort-of good answers it failed on THE significant idea!
Which perfectly demonstrates that the decent answers came from anywhere *but* understanding the material.
Sadly true - the 2016 Asus Zenfone could have been a harbinger of good times: if 'phones weren't all so thin we could have mechanics for the camera plus other niceties, such as - big, removable batteries; something to actually hold onto, comfortably, when it isn't in fold-over case; thick, robust connectors with water-resistant flaps, even for the headphone socket; a headphone socket!
https://www.hardwarezone.com.sg/tech-news-asus-zenfone-zoom-has-3x-optical-zoom-10-element-hoya-lens-13mp-rear-camera
Not that they aren't trying:
https://www.dpreview.com/news/3043887854/o-film-demonstrates-smartphone-camera-module-with-85-170mm-equivalent-optical-zoom
https://www.dpreview.com/news/9056841066/samsung-starts-mass-production-of-5x-tele-smartphone-camera-module
For planetary work, stacking is an absolute necessity. Deep space you can get away with plain old long exposures - but go too long and stacking is needed purely to get the exposure times required (one night is simply too short).
But for the Moon, you don't really need stacking. You can fire off the multiple frames, at full resolution, and choose the one with the best seeing, but unless you are doing something extreme, stacking is a lot of work for not much payoff.
Now, taking advantage of the size of the Moon and the detail available, doing massive grid shots across the whole face (prepare to spend months on this one) is a great thing to do - and you will then run some of the same software used for stacking to do the de-rotation, scaling and alignment to put the whole super-res image together. We were shown the results of exactly this at our January AstroSoc meeting and, wow, socks were liberally blown off. I could only ever hope to have that much patience!
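A quick toy illustration of why stacking buys you anything at all: average N noisy frames of the same scene and the noise drops roughly as 1/sqrt(N). The "frames" here are just lists of pixel values around an assumed true brightness, not real image data.

```python
import random

# Synthetic frames: 1000 pixels, true brightness 100, Gaussian noise sigma=10.
random.seed(42)
TRUE = 100.0

def frame(n_pixels=1000, noise=10.0):
    return [TRUE + random.gauss(0, noise) for _ in range(n_pixels)]

def stack(frames):
    # per-pixel average across all frames
    return [sum(px) / len(frames) for px in zip(*frames)]

def rms_error(pixels):
    return (sum((p - TRUE) ** 2 for p in pixels) / len(pixels)) ** 0.5

single = rms_error(frame())
stacked = rms_error(stack([frame() for _ in range(100)]))
print(f"single frame noise ~{single:.1f}, 100-frame stack ~{stacked:.1f}")
# 100 frames should cut the noise by roughly a factor of 10
```

For the Moon the signal is so strong that this gain rarely matters, which is the point above; for faint planetary detail it is the whole game.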
> Spot metering is all it takes
FWIW even using selective spot-metering I find that it helps to dial in -1 or even -2 underexposure as well, as you (well, I) don't really want even the brightest area to be saturated (and bracket around that exposure).
> Do you use a tripod to photograph a flower, a dog or a building?
Buildings, yes. With a decent mechanical head on the tripod to get it lined up nicely (Manfrotto 410 for ease of carrying around)
If the flower was about 0.5 degrees across and filling the frame, then again, yes, as that would be (quickly working from a picture of the Sun, which is - surprise - the same angular size) 1400mm focal length on my Four Thirds camera, or 2800mm equivalent for a 35mm ("full") frame. To shoot that hand-held, even with IBIS & OIS, is - tricky! I'm willing to give it a shot, though, if you'll get me the M.Zuiko 150-400mm plus the 2x teleconverter, which has that sort of reach (at the cost of a max aperture of, um, f8, or even f11 with both teleconverters in). Rule of thumb: minimum handheld shutter speed is one over the 35mm-equivalent focal length times 1.5 (worse as you get older), but you get back say 3 stops from the combined OIS & IBIS with my EM-1 (they promise 5 stops but, yeah), so say 1/500th of a second. That is just in the realms of doable but dang, that lens is heavy; I'm using a tripod for that flower.
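The quick working above can be reproduced in a couple of lines, using the thin-lens approximation focal = sensor_size / (2 * tan(fov / 2)) and the Four Thirds sensor's ~13 mm short side:

```python
import math

# What focal length makes a 0.5-degree object fill the frame?
def focal_to_fill(sensor_size_mm, object_deg):
    return sensor_size_mm / (2 * math.tan(math.radians(object_deg) / 2))

ft = focal_to_fill(13.0, 0.5)  # Four Thirds sensor is ~17.3 x 13 mm
print(f"Four Thirds: ~{ft:.0f} mm, i.e. ~{2 * ft:.0f} mm in 35mm-equivalent terms")
```

That comes out around 1500 mm, the same ballpark as the ~1400 mm quoted above (the difference is just how roughly you round and which frame edge you fill).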
> has 12x optical zoom ... I don't think even that's enough to make the moon fill the shot
Unfortunately, a spec like "12x" really doesn't mean anything on its own, without knowing at least the field of view at one end of the range, so sadly it isn't enough information to judge against anything.
FWIW, I just saw the ad on telly again - when she zooms in, the image is actually only half the width of the phone screen, held vertically - so that is about 1080/2, or 540 pixels. Presumably taken with the 10 MP (Telephoto) rear camera - can never get the proper specs for these cameras, so let's assume it is the same aspect ratio as the screen and therefore approximately 2240 x 4480, so the 540 pixels is one quarter of the field of view of that camera. The camera is stated to have a 10x optical zoom.
Now, that is not enough information to decide if the camera can make the Moon a whole 540 pixels across - we'd need to know the fov, or calculate it from the sensor size and lens focal length - but by the increasing number of downvotes I'm getting here, at least one of you lot can manage to dig out the relevant information to show how absurd the whole idea is. Yes, that is a challenge - show your working or STFU.
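For anyone taking up the challenge, here is the shape of the missing calculation. The numbers plugged in below (25 mm true focal length for a periscope tele, a ~4.5 mm wide sensor) are illustrative guesses, emphatically not the real specs of this or any particular phone camera:

```python
import math

# How many pixels does the Moon (~0.5 degrees across) cover, given a
# focal length, sensor width and horizontal pixel count?
def moon_pixels(focal_mm, sensor_width_mm, pixels_across):
    moon_mm = 2 * focal_mm * math.tan(math.radians(0.5) / 2)  # image size on sensor
    return moon_mm * pixels_across / sensor_width_mm

# e.g. a hypothetical periscope telephoto: 25 mm focal length, ~4.5 mm sensor
print(f"Moon spans ~{moon_pixels(25, 4.5, 2240):.0f} pixels")
```

Under those assumed numbers the Moon covers on the order of a hundred pixels - nowhere near 540 - which is exactly why the real sensor and lens figures matter, and why "10x" on its own proves nothing either way.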
> You're not going to get a full screen native shout out of a phone because it lacks hardware zoom
Optical ("hardware") zoom has been a thing in 'phone cameras for a while, hasn't it? As just said, this Samsung 'phone is being quoted as having a 10x optical on one lens and a 3x optical on one of the other lenses.
> I know that tiny lenses and focal lengths wont beat a telescope and a camera attached. This advert aims to suggest it will, which is simply laughable.
Oh, come off it - the ad shows an image of the entire Moon that can fill a 'phone screen - this is not a major achievement in the world of photography.
Using a camera and telescope, an amateur astronomer would be embarrassed if the image only scaled up to a few inches across: our January AstroSoc speaker, our local amateur astrophotographer, presented us with an image of Mars being occulted by the limb of the Moon, with Mars being quite recognisable as such. He also has a whole-Moon shot taken at the same scale.
> But Cloud Computing is here to stay, make the best of it
Since the creation of the lowly "long bit of wet string" to connect Users and Machines, has there *ever* been a time when "buying hardware cycles without owning the whole machine" has NOT been an option?
Whether you call it timesharing the University Mainframe, dialling into WOPR, nipping out to the local Computer Bureau or using The Cloud, it all looks the same[1]: running your processes on Someone Else's Computer.
So, yes, Cloud Computing is here to stay, because it never went away in the first place.
And as has been the case from the beginning, you switch your workloads from A to B and back again as needs change and progress marches on: the department buys time on The Mainframe, then it gets its own Mini, then it buys time on The Cray, then it buys everyone their own PC, then they buy time on the Analytics Farm, then they buy commodity GPUs for their PCs, then they buy time on a rack of "AI (G)PUs"... (and that was a sensible department, following the cost-effective cycles; some just swing almost randomly from on-prem to rented, from in-house to out-house).
BTW does anyone else remember the stories from the late 80s/early 90s where CFOs had spotted that[2] The Company Mainframe was practically lying idle in between the Weekly consolidation runs, so they insisted that the spare time be sold? Only to get a shock when March came around and they found out that they'd sold off the time needed for their own End of Year consolidation to run in the gaps between a few Weeklies?
[1] yes, yes, the software to manage your rental has got better - well, more complex, at least - but aside from the difficulty of managing your workload via a web browser UI in the 1970s (and earlier, for that matter), notice how you are doing more of the work that we used to have included in the price? The Priests of The Machine had their uses.
[2] this story for illustrative purposes only, details may not precisely match reality, but you get the gist of it: there was a fad for selling "spare" cycles (which wasn't always thought through and, as always, it was the failures that got into Computer Weekly).
FTTC - oh, don't make me laugh
> However, if they live in an area where Openreach has built out fibre-to-the-premises (FTTP)
Ha, ha, he, he, heee, <gasp> <sob> <presses submit and listens to the relays clicking, counting the bits as they're sent up the line>
If you - or anyone - is really interested in using Codon in production (or even just to see if they can provide clarification on what they deem "production" to mean, for instance with regards to the charity work) you can always drop them an email and ask.
The Seq project is licensed under plain old Apache (which is presumably why Codon is reverting to that and not to, say, MIT or GPL: stick with what they know) and the relicensing has been done with the fork to Codon and the creation of the Exaloop company, which still appears to be a small academic offshoot: i.e. not exactly a full-fledged shrinkwrap software vendor.
Following the MariaDB BSL's FAQ, Exaloop can always decide to grant you an Additional Use Grant for limited production use (e.g. charities). Or they can even sell you an appropriate commercial licence.
Or, if enough people - preferably representing companies - express an interest in Codon and - POLITELY - point out issues with the MariaDB BSL (like, what does "production" mean?) and maybe suggest a better alternative...
> It doesn't even really support Python, and they should have been more forthcoming about this in the README
From the Codon github page (i.e. their README.md): "Codon is a Python-compatible language, and many Python programs will work with few if any modifications ... While Codon supports nearly all of Python's syntax, it is not a drop-in replacement ... and a few of Python's dynamic features are disallowed". That seems clear enough - it isn't Python but it is Python-compatible (enough for their purpose).
After the quick README, the next stop is usually the FAQ and, yup, there is "What isn't Codon?" to fill in some background and even point you towards using Codon for the bits it can do and leaving the rest in plain old Python.
> You have to think about the ecosystem
Presumably, you mean all the sorts of tools that will be of interest to the BioInformatics trade? Even from the Seq days: "There are many great bioinformatics libraries on the market today, including Biopython for Python, SeqAn for C++ and BioJulia for Julia. In fact, Seq offers a lot of the same functionality found in these libraries. The advantages of having a domain-specific language and compiler, however, are the higher-level constructs and optimizations..."
Oh, you wanted to apply it elsewhere? Well, if the Python integration does what they indicate, you can make use of all of those libs and bring in Codon piecemeal on the bits that would benefit from the speedup. Just like you already do when linking from Python to a nice fast compiled C library, only without making the full leap to C (or FORTRAN or whatever your analytics are already written in).
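For reference, the "plain old C library from Python" pattern mentioned above is just stdlib ctypes; here the "fast compiled library" is simply the system maths library (assuming a typical Unixy system where find_library can locate it):

```python
import ctypes
import ctypes.util

# Keep the analysis in Python, call into compiled code for the hot path.
# Here the compiled code is just libm's sqrt, as a stand-in for any
# fast native library you'd link against.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double           # declare the C signature:
libm.sqrt.argtypes = [ctypes.c_double]        # double sqrt(double)

print(libm.sqrt(2.0))  # the C library's sqrt, called from Python
```

Swapping Codon in for the C library, if its Python integration works as advertised, would be the same idea with less of a language leap.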
> Finally, the restrictive license definitely puts a cap on adoption
As noted above, the licence allows for everything short of use in production (and if you are having to argue about whether your use-case counts as production, e.g. charities, you know you can always talk to the people!). So you can do all your learning and research to see if it *is* useful to you. After that - TALK TO THEM! You *do* know that licences are negotiable, don't you? If you are interesting enough to them, they can cut you a deal!
> the project strikes me as naive when it doesn't address the elephant of the room of why this project deserves to succeed where others have failed
These others (can you name them, just so we're on the same page?) - did they have a well-defined market in a growing field, in this case, BioInformatics?
Oh, I know exactly what's causing them, and it is nothing out of the ordinary; I just liked the comparison against the idea that a Mac might only have five Pythons!
A few are the interpreters used by various Python IDEs - e.g. Thonny, Jupyter and so on - that I've installed for the sake of having access to Python for one purpose or another. I specifically use WinPython as "the command line Python" because it is explicitly intended to work without messing with PATH and the rest of the environment (as hash-bang lines aren't really Windows' cup of tea).
The rest are all installed as part of one application or another - and they are all "full" [1] Python installations in that they (the ones I could be bothered to test) happily run the Python REPL and execute My First Python examples; each one (hopefully) only has just the set of libraries they need to do whatever their app wants, but again I've not bothered[2] to look that deeply. Although, as what they need often includes a GUI, that means yet another copy of wxPython or similar...
No symlinks or multiple virtual environments here: this lot all came from separate and individual application installs.
[1] Please, nobody try to argue that it is only a "full install" if it also includes x number of your favourite libraries!
[2] they work, the drive space is cheap enough and it isn't an interesting enough question to be worth figuring out which, if any, are never used by a given program. Given how much space I've knowingly wasted by installing the kitchen-sink-and-all copy of WinPython, "just in case I need it"...
> In the old days ... we didn't butcher it ...
Well, as already mentioned above, every BASIC was different, adding in new features to do what was wanted that month. Forth variants abounded and if you liked Pascal did you use UCSD (cross-platform, but slow - and still not "to spec" as it had stuff not in the standard) or TurboPascal for that speed boost?
Warren's Edinburgh Prolog or SWI Prolog or perhaps even TurboProlog? How about MicroProlog?
C++ - do you mean TurboC++ or VisualC/C++ or gcc? These have had (still have) their own oddities and useful(?) extras that aren't in the spec and change(d) over time.
And for someone who wants to follow the spec for a language, Rust is a bit of an odd choice: sure, you can write working code in it (as you can in any of the above) but can you point to the paragraphs in the spec that you are adhering to today? Or are you just being pragmatic and sticking to the exemplar implementation?
This was covered well by "doublelayer" above, but in brief:
* the algorithm is probably only completely described by the code
* if just an English text description is published without the source code, it is very likely not to be believed - it certainly won't be demonstrably "the" algorithm the way that code could be (and that is assuming the code is runnable)