Re: Architecture? That's houses, isn't it?
And before that, when they did. https://en.wikipedia.org/wiki/WorldWideWeb
(The editor anyway)
Having had Red Hat, and the odd commenter here, describe people using CentOS, then Alma and others, as "freeloaders", I was very interested to see this quote:
"The incident refers to Red Hat's self-managed instance of GitLab Community Edition... Customers who deploy free, self-managed instances on their own infrastructure[...]"
Some things helpful to know:
1. Shebangs carry over from Unix, where they are used to tell the kernel which interpreter should run an executable text file; this is the reason for their general inclusion in scripts.
2. "#!/usr/bin/env python" - current python command in environment without worrying about its location. "#!/usr/bin/env python3.6" - same for 3.6 if available. python3 is available as the interpreter for a python 3 version, python2 existed (but of course you're not using it any more).
3. Installers can interpolate the python path on install for this very reason, with the installed script now pointing to the appropriate local python with little to no interaction from the user.
4. Launchers in desktops can bypass all of this if necessary by simply invoking the correct interpreter path for the script directly, again something an installer can handle.
To some degree it's messy, but compare the situation where you are building against different dynamic libraries on a system (which is the equivalent of having multiple Python environments). It's also much more of a developer-facing problem than a user-facing one, as users use installers. Your IDE probably allows selection of the appropriate Python interpreter for your project.
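To make points 1-3 concrete, here's a minimal sketch of a script using the env form, with a comment showing the sort of absolute path an installer might interpolate instead (the path is a made-up placeholder, not anything standard):

```python
#!/usr/bin/env python3
# An installer could rewrite the shebang above to an absolute path chosen at
# install time, e.g. a hypothetical "#!/opt/myapp/venv/bin/python3".
"""Minimal executable script: chmod +x hello.py && ./hello.py"""

import sys


def main() -> int:
    # sys.executable reports which interpreter the shebang actually resolved to
    print(f"Running under: {sys.executable}")
    print(f"Python version: {sys.version.split()[0]}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```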
But if that doesn't happen, Greg K-H and Linus would ideally take a step back and ask themselves if there's something about the kernel development process that is fundamentally inimical to creating this type of advanced filesystem.
Not so sure. The existence of bcachefs shows the development itself is not really the issue; it's a personality thing of being unwilling to accept the release schedules and processes that keep everything ticking along. If every developer claimed special-child status and wanted to make last-minute changes outside their own code at every release, things would be absolute chaos. I suspect someone else who was more willing to work with others, and who didn't demand that overhauls be pushed through after the merge window (it's an experimental filesystem; if it was already broken before the merge window, that's no reason to potentially break other things to fix it), would still be maintaining an in-kernel module right now, with all that means about getting a say on interface changes.
The one-man show aspect of bcachefs would also make me hesitant to adopt it, particularly when that one man doesn't appear to work very well with the rest of the developers.
If you ask why there are no "advanced" filesystems in Linux, I suspect it's a mix of the people who really need ZFS-type features running BSD, and the rest being happy with ext4 and the like, plus strategies like "keep backups" that will also help when the kinds of failure occur that no advanced filesystem (except possibly Gluster) will protect against. Compression and thin provisioning are also ideas that work well until they don't, so some users are not even necessarily that keen on some of the features on offer.
I had an SRCF site in the distant past and briefly checked before posting whether it was still around! It looks much more professional than it used to and is currently showing a message that they are looking to hire a system administrator, so while it is apparently still student-run, I'm not sure it's still solely student-operated. It's what I meant by "in that form"; it used to be much more common for there to be some kind of student society maintaining a basic webserver on bare metal, and SRCF might be the only one left. (Did a quick google for student-run computing at other big UK universities and didn't find anything; one major university's CS society lists among its activities this year roasting each other's CVs and holding hackathons instead.)
Aside: I was hoping there was a mortar board icon (even if Cambridge don't really wear them)
The thing is, for a very long time we were sold a story about how copyright was a type of ownership, "YoU wOuLdN't sTEal a cAr!" (but maybe a fount). We're still being sold it to an extent.
And for a very long time that identity was partly enforced by the physical aspect of a reproduction and coupled to owning that item, a book, a CD, a set of installation disks. Yes, it was possible to copy them (and potentially breach copyright in doing so), but physical possession of what was originally a for-sale authorised copy tended to be proof of your right to use it. While the law was different, in people's minds this is ownership of a physical item with copyright simply restricting your right to duplicate it. Probably even in the minds of those writing and adjudicating the laws; the continued existence of second hand bookshops and CD and record shelves in charity shops testifies to this. You can be sure HarperCollins and EMI would have put a stop to that if they could.
Now there's no physical item and so finally we're all thrown adrift into making agreements with billion pound corporations.
Copyright was of course never the same as ownership, although the identity is still used to make the case for making it ever stronger. Think of the starving artists! Of course it's hard to say there are many artists better off due to extending it to 100 years, other than those working in the legal departments of the companies that hoover it all up eventually.
Starbucks started there, and there was a pretty sophisticated coffee culture, but they steamrollered it by being good at corporate expansion.
Something that lodged in my memory long ago was from a little book of case studies handed out by some group of management consultants at a graduate careers event (I suspect it was BCG, but could have been any of them). One of those was Starbucks, and analysed the strategy of saturating an area with their shops until any local competition was driven out of business simply by swallowing up all the clientele through the statistical effect of anyone trying to find a coffee shop: a. being likely to find a starbucks first, b. having trouble finding one that wasn't a starbucks.
That this was supposed to be clever probably started me down the path of not going into management consultancy.
For those of you too young to remember the alfresco dining experience in the UK before chains.
A cafe would have an urn of tea, containing a large number of tea bags, more boiling water would be added during the day.
As a child of the 80s I don't really remember before chains and growing up in Northern Ireland little dining was alfresco. However, the urn of tea description was definitely not accurate for that era, tea would be brought (as it still is in many independent places and hotels) in a one or two person pot with actual boiling water added to the appropriate number of teabags. A second pot of hot water with no teabags to allow topping up was not universal, but pretty common (otherwise a waitress or waiter would often stop by later to offer one). The chain experience is pretty universally hot (but not boiling) water poured into a cup and a miserable teabag on a string added (usually in that order).
In the '90s, before Starbucks really took off in most of the UK, it was pretty common to be able to get espresso-based drinks in nicer cafes. Most would have some kind of drip machine, and while that might continue to boil the coffee after brewing, it's not really a big problem during a busy period when it would come fresh; individual or two-person cafetieres brought to the table were also not uncommon. Particularly notable, and something that disappeared for a long time under the big coffee chains (and which has only really come back in independent shops, while chains offer a nod to it as an extra alongside vanilla syrup), there was often a choice of coffees, although in the Whittards sense of Kenya, Colombia, Brazil etc. (Particular memory of trying to persuade someone at university that this was actually different coffee in a way that cappuccino, latte and mocha were not.)
"It claimed it could think 10–15 moves ahead — but figured it would stick to 3–5 moves against the 2600 because it makes 'suboptimal moves' that it 'could capitalize on... rather than obsess over deep calculations.'"
Imagine, if you will, someone on reddit or a comments section (maybe even here), or perhaps usenet (RIP), spouting off about something they have only a passing knowledge of. Now imagine that you're Alan Turing and you're attempting to distinguish between that and what we see above.
As for Copilot and chess, I conducted quite a different experiment recently, as our work 365 subscription now includes it. I asked for a t-shirt design with a particular chess opening on it and a specific text. Obviously it failed, first producing a kind of Etsy-esque view of half a design alongside half a t-shirt with a similar design (the design in question of course not being what I'd asked for).
After managing to refine it to just giving me the print image, but getting ever further away from anything resembling a chess board as opposed to an assortment of chess-themed images, I asked if it could just give me an image of a chess board in the starting position. There should at least be a good number in the training data, right? What I got back is best described as Howard Staunton's fever dream. The 9x10 board did have two rows of chess pieces at each end, in the centre file of which stood a monstrous queen with a spreading crown of spikes; appearing to rise out of the picture, they were quite a bit taller than any of the other pieces, including the two kings that flanked each one. For some reason black's pawns were three-dimensional while white's lay flat. As you stared closer into it you realised that many squares shaded from white into black. Lesser details like the strange hybrid bishops and the half-round, half-square rooks have faded in my memory. I haven't tried it since.
(The knights were surprisingly normal.)
Dye-infused caps used to be pretty standard (and might last longer than the keyboard), but I had to get rid of my last one a while ago and most new keyboards are black, where dye infusion is less easy. That said, I'm typing this on a cheap Dell keyboard (basically the add-on option for one of their workstations) that must be at least 15 years old, and the only decals to have worn off are the direction keys and half a shift symbol.
I can think of a couple of practical use cases for RGB lighting, neither of which is to display a psychedelic lightshow (which tends to be the domain of showrooms; see also 90s-era stereos):
1. (For whole keyboard or zoned lighting) Adjusting backlighting colour to your preferences, some people may prefer warmer colour palettes for backlighting for example.
2. More for gamers, but being able to colour highlight particular keys can be quite useful. Flashing can even be useful there as a way to communicate status (although control methods tend to be proprietary so integrating these things can be a bit painful).
For HPC, it would likely be better for them to have more 128- or 256-bit vector-matrix units (rather than 512-bits), without SMT, and focus on optimizing scatter-gather memory accesses (over step, stride, gait) ... SMT4+ is more of a database processing thing iiuc (and it sunk SUN).
Indeed, for almost everything we run you may as well disable SMT (or make sure threads are limited to the number of cores), as floating point units are the limiting factor for a lot of scientific computing and the fragmentation from multithreading starts to hit performance once you're exceeding the number of threads that can genuinely run in parallel.
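For what it's worth, a rough sketch of the sort of thing we do before launching numerical jobs to cap thread counts at the physical core count, assuming the usual OpenMP/BLAS environment variables are honoured by whatever libraries you're using (psutil is a third-party dependency):

```python
"""Cap numerical libraries at one thread per physical core (a sketch).

Assumes the libraries in use respect the usual OpenMP/BLAS environment
variables; adjust the list for your own stack.
"""

import os

import psutil  # third-party: pip install psutil

# Count physical cores only, ignoring SMT siblings.
physical_cores = psutil.cpu_count(logical=False) or 1

for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ[var] = str(physical_cores)

print(f"Capping threads at {physical_cores} physical cores")
# Import numpy/scipy etc. only *after* setting these, then run the workload.
```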
A friend, who eventually became a British citizen for this reason (and put it off for a long time because their birth nationality didn't allow dual citizenship), had to pay the ridiculous NHS surcharge. This despite spending their entire working life in the UK, running a business that created jobs in the UK, and having been raised and educated at the cost of another country altogether (including paying foreign student tuition fees in the UK while at university), and therefore, even without the surcharge, being more of a net contributor to this country's finances and health system than many UK-born citizens.
Their example was particularly extreme, but many people targeted by the surcharge will similarly be arriving here at working age, earning money and paying tax (both on income and spending) just like everyone else. The only place it has any kind of (mean) logic is in preventing them bringing dependents, and if you want to attract skilled workers on UK salaries, asking them to pay an additional £700 per year for any children they might want to bring isn't exactly that appealing.
Or the funding for said salaries. As NIH and other US funding is cut (sometimes in the middle of programmes), charities like Alzheimer's Society are having to bridge those gaps, leaving less money for projects elsewhere, and so UK institutions are already starting to see knock-on effects. If they want to attract US researchers the funding for the science needs to be there; £50M is approximately a single moderate-to-largish research centre.
Among them all, I simply don't understand the success or existence of Costa. Price-wise it's similar to everything else, but somehow adds on a level of misery in a way I can't fully articulate. If Starbucks coffee is not great, then at least it manages to be somehow reassuring in the ridiculous comfort food way that only a pumpkin-spice latte can really be and their cakes and biscuits ridiculously sized (if over-sweetened), Pret is all about speed and convenience at the cost of joy, Nero makes a stab at doing decent coffee and slightly upmarket food in (varying) comfortable surroundings, but Costa feels like choosing a snack from the supermarket meal deal aisle for twice the price. I suppose it makes sense that their recent big move has been to put branded machines into supermarkets and cut out the middle man (themselves).
Upvote, but I also used to be in the lava camp and it has its appeal, especially if you need a strong flavour to wake you up. The lighter roasted Lazy Sunday is not the best coffee you'll ever drink, but definitely pleasant enough, has more flavour than just "roast" and is cheap enough for people not deep into their coffee not to blanch at.
Union Coffee is available in most UK supermarket chains now and pretty good if you're not sufficiently keen you're hunting out local roasteries and subscriptions. Currently they have a Brazilian "Bobolink" that is chocolatey and nutty enough to pass as a dark roast for people who're used to that while also tasting really nice. (Union also put their roast date on the packet, ideally you're drinking within about two months, if it's been sitting on the shelf for a year then you're probably better off buying something else.)
Why not? They pay money to staff the union, and (in the US) take employment risk to improve their bargaining position in order to get a slightly fairer sliver of pie.
Extending the award to non-union members is a boss class strategy to undermine the union, and in the long run, pay themselves more (and all workers less)
This is a slightly funny one that probably comes down to local attitudes. In theory it's correct, but it also risks making unions into cartels that attract resentment from other workers and the general public. My impressions of US unions are founded only on how they're represented in popular culture, but that impression is maybe best represented by Homer Simpson: "I always wanted to be a teamster, so lazy and surly". It's almost certainly deeply inaccurate and also (something that happens in the UK too) generalises from an attitude about one union in a particular sector to all unions. In the end, if you don't fight for everyone you can easily walk into another, much older trap: divide and conquer.
The question "Who owns Copyright in Copyright?" brilliantly sums up the stupidity.
Whereas "Who owns copyright in a license?" is pretty straightfoward, and if you got a lawyer to draw up a license for you they'd expect to be paid for it (and might be able to stop you using it if you hadn't). The free availability of well-written licenses is one of the points of FSF and Creative Commons.
In conjunction with patent law, it is an utter mess, and offers paradise for nitpicking lawyers.
Sadly the people best placed to offer alternatives tend to end up as nitpicking lawyers. That said, nitpicking is probably inevitable once you have any codified set of rules.
Even if the FSF is correct about all of this, all it means is that they could make Neo4J change it, which they have done voluntarily. It wouldn't necessarily mean that you can apply other terms to the software.
There's an interesting question, not really relevant to this case at all, about what happens there. If there were a situation in which the FSF could make Neo4j change the license, it seems clear that would be limited to not using the language of the AGPL (which is the FSF's), but sufficiently changing the language of the license without altering its terms is hard (particularly as everyone knows it's a derivative work to start with, so you can't argue similarities are not the result of copying). How much is legalese like mathematics, in that you can't claim ownership of a theorem (or in this case the language needed to achieve a particular effect)? In some ways it seems the opposite, since everything is down to wording and language, but we're talking about the operation of logic, albeit in natural language. If a particular phrasing is the only way to achieve a particular effect (flowering it up can introduce side effects), then is that phrasing fair game?
Exactly, the license attached is the license under which the copyright holder allows others to use the work. They can at any time decide to stop offering it under that license (although for open source this alone does not stop those already in possession of it continuing to distribute), additionally offer it under a new license, and potentially revoke the license (the GPL does not expressly state that it is irrevocable, although I think there may be notice periods required depending on the legal system in question). If a company is the sole copyright holder, this means that new versions (which contain new copyrighted material that was not previously covered by the license) can be provided under more restrictive licenses.
The case when changing is more difficult is when there is no single copyright owner and all owners would need to agree to change the licensing on their contributions. This is why the Linux kernel license cannot be changed from GPL2 to GPL3 for example, and why companies often want copyright assignment for outside contributions to their "open source" projects (the quotes are because this means there is no guarantee such a project will remain open source). Attempting to revoke a license to which there were other contributors might also result in mutually assured destruction if they decided to revoke your right to use their contributions.
In the Neo4j case the license change would have been possible, but (without reading through the license) it sounds like they have rather ineptly attempted to tack on restrictions to a license that said those restrictions could be ignored. As you say in another comment, if they had removed the language allowing those restrictions to be removed then this would be simple, but they left it in, so they've actually themselves allowed the restrictions to be removed, essentially by not understanding their own license. (Although potentially creating a situation in which the FSF could pursue them for mangling the FSF's license and calling it AGPL, but that seems shakier ground which the FSF might not have wanted to venture onto.)
I do find some use in this kind of demonstrative work. A lot of stuff gets published that only gets read by academics, so it's quite useful to have these little more demonstrative projects that can be turned into public engagement. That's sort of what annoys me about them not being clear about OLED vs LCD here, because it seems like they've wanted to play up the importance of the work more than to educate people.
Reasonably modern / older expensive displays have a grid of backlights so each light in the grid only needs to be as bright as the brightest pixel in front of it.
How widespread is this? Because I've recently been looking at TVs and local dimming still seems to be a relatively premium feature there. (Putting them near to the price of cheaper OLED.)
I find bright text on a dark background much easier to read.
Certainly for self-illuminated displays. I do find printed black on white more comfortable than either though, not really tried white on black non-illuminated (can be done with e-ink displays). Fun aside on e-ink displays, if you haven't noticed yet, next time you're in a LIDL, take a close look at their shelf tickets. A lot of the UK stores now use a programmable e-ink shelf ticket.
This isn't really the part of it that makes them idiots[1]. Failing to clarify that this is looking exclusively at LCD, at least in the blog summary, is the real failing. I suspect people use dark mode for a variety of reasons: maybe they prefer the look, maybe it's more comfortable to read in a dark environment. The point about the text being made darker too is not necessarily really an issue; part of their takeaway is that the palettes chosen will probably affect people's response, but this is how people respond to typical dark modes.
The big omission is that on OLED the situation will be completely different. Let's assume all people are doing is turning the brightness up so the text brightness is at the same level as they would have had for a light-mode background, which I think is what you're suggesting, but now we're already making assumptions about what people's target level is, and that one is probably not a given. (Eyesight isn't linear; why should white on black require exactly the same levels to read clearly as black on white? You'd think the lighting environment might also play a role, although they did test that and no effect shows up in their sample. Science does include testing things that seem obvious, because they're not always true.) Anyway, if we're assuming people will adjust brightness so the peak displayed brightness is the same regardless of mode, then the power use will necessarily be less than in light mode, because the summed brightness across the screen is lower, and on OLED that's what drives the power demand. Not looked at.
[1] I don't entirely think they're idiots, but the way they've chosen to summarise in the blog post is actually misleading and possibly mildly harmful (since it could persuade people trying to save power on OLED to use bright modes), when they could have done the additional work on OLED. Conference papers don't usually get peer-review rounds of revision, often just accept or reject, which means that while the work as it stands is fine, they probably haven't been asked to fill in the gaps they would for a journal article.
From the actual conference paper:
"Further investigation is also needed to determine whether the observed rebound effect applies to devices with OLED displays, and to quantify the energy trade-off"
I'll bet not.
This is really not ground-breaking, although maybe most people don't realise. They tested on an LCD laptop screen (2017 MacBook Pro); that's where the graph on the blog comes from. As they address in the paper, but nowhere in the blog, LCD backlights are set for overall brightness and the pixel elements are subtractive, so power consumption is fairly independent of the displayed image, while LED/OLED pixels are self-illuminated and so larger black areas reduce power consumption.
It is actually interesting to show that on LCD devices people turn up the brightness in dark mode, so the increase in power consumption is inevitable. On OLED, though, the power consumption is going to be related to how much of the screen is illuminated, and even this effect of turning up the brightness is unlikely to counteract that. Now it's possible I'm wrong and the effect does counterbalance (OLED is less efficient at higher brightness, for example), but what I find sloppy is that this could easily have been tested with an external OLED monitor, and while the paper itself is at least clear about that, the blog post just outright says dark mode uses more power, which is wrong, especially on phones, where OLED is relatively more common than it is for monitors and dark mode is more popular.
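To put rough numbers on that hand-waving, here's a toy model of my own; the assumption that panel power scales with summed pixel luminance is a simplification, and the screen fractions and brightness bump are illustrative guesses, not measurements:

```python
"""Toy comparison of OLED panel power in light vs dark mode.

Assumes power scales with the sum of pixel luminance (a simplification);
the screen fractions and brightness bump are illustrative guesses.
"""

def relative_power(bright_fraction: float, peak_brightness: float) -> float:
    """Relative power if bright_fraction of the screen sits at peak_brightness
    and the rest is effectively black."""
    return bright_fraction * peak_brightness


# Light mode: ~90% of a text page is white background at brightness 1.0.
light_mode = relative_power(bright_fraction=0.9, peak_brightness=1.0)

# Dark mode: ~10% of the page is lit text, user bumps brightness up by 25%.
dark_mode = relative_power(bright_fraction=0.1, peak_brightness=1.25)

print(f"light mode ~ {light_mode:.2f}, dark mode ~ {dark_mode:.2f}")
# Even with the brightness bump, the dark page sums to far less luminance,
# so on OLED you'd still expect a saving, the opposite of the LCD result.
```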
PHiP ("fip")?
I guess you meant "sequel" though. Well, if fewer syllables are wanted we could just go for one, "quil" maybe, or "s"? Mostly I type or think it, so it doesn't really come up and I just find it easier to parse it as written. Also, SQL may be more syllables, but the se in sequel has a long vowel, while the second syllable has a coda ("quiL"), and to me overall it just feels longer than es cu el. YMMV obviously (yumve?).
Hopefully, in this day and age, USB sound devices use USB 2.0? (While many devices still use PCIe v1 or v2, when newer specs exist: "if it ain't broke, don't fix it" ? - My answer: USB 1.0 was born "broke".)
I've got a very long blog post(/rant) on this topic that I'm still trying to reduce to something actually readable by the sort of half-tech-literate audience that might benefit from it. Some of the takeaways:
1. Many USB audio devices are UAC class 1, which dates to roughly the original USB (or USB 1.1, I forget right now) spec.
2. USB 1,2,3... are not speeds. USB LS, FS, HS, SS (low, full, high, super) are. USB number is not actually that useful in knowing the capabilities, leading to:
3. If a device is USB-C it is by definition USB 2 or higher, USB 2 was revised to include the USB-C connector. A USB-C device can never be USB 1 even if...
4. ... 2+3= a USB-C device may still only be capable of USB FS communication. UAC-1 uses FS.
5. USB FS is plenty of bandwidth for reasonable audio, particularly uni-directional, as in the comment above (rough sums in the sketch after this list).
6. But when FS communication runs on a controller that is also managing high-speed devices, the resulting scheduling drops the available bandwidth for isochronous streams quite a bit below even the FS figure.
7. Pretty much every USB-C headphone or headphone adapter is: a. USB-UAC1, and b. Pushing ridiculous sample rates and depths for playback, I've met some that are also trying to do 24bit 48kHz stereo input on a lapel mic. Put this all together and, if it doesn't fall over by itself, it quickly does if it has to share with, for example, a webcam or a GSM modem even when the nominal available bandwidth at even USB HS should easily be enough.
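To put numbers on points 5-7, a quick sketch of the raw isochronous payload an audio stream needs against nominal FS bandwidth (nominal figures only; real isochronous framing overhead, and sharing the bus or a hub's Transaction Translator with other devices, eat into this considerably):

```python
"""Rough audio-over-USB bandwidth sums (nominal figures, no protocol overhead)."""

USB_FS_BITS_PER_SEC = 12_000_000  # Full Speed signalling rate (nominal)


def audio_bits_per_sec(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Raw PCM payload rate for one direction of an audio stream."""
    return sample_rate_hz * bit_depth * channels


playback = audio_bits_per_sec(48_000, 24, 2)  # stereo 24-bit/48 kHz out
capture = audio_bits_per_sec(48_000, 24, 2)   # the lapel-mic input doing the same

for name, rate in (("playback", playback), ("capture", capture)):
    print(f"{name}: {rate / 1e6:.2f} Mbit/s "
          f"({100 * rate / USB_FS_BITS_PER_SEC:.0f}% of nominal FS)")

# ~2.3 Mbit/s each way looks comfortable against 12 Mbit/s, but isochronous
# framing overhead and sharing with other devices shrink the usable figure fast.
```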
The model in the USB 1 era for this kind of thing was multiple ports with their own controllers if you wanted to run the high-bandwidth profiles in UAC1, but since USB 2 the tendency is lots of ports on a single controller. At the same time, cute little USB-C dongles now support the kind of profiles only pro hardware used to; plug one into a phone and there's no way to choose (in Linux you can if you're willing to dig about in PipeWire configs, but audio menus won't let you; maybe Windows can do similar). USB 3 and newer USB 2 devices at least have xHCI controllers, which can schedule a bit more robustly; worst of all worlds is if you happen to have a USB 2 era EHCI controller without a dedicated OHCI/UHCI companion controller for the USB 1 mode.
Okay, you can now see why that blog post is unmanageable.
Basically yes, although whether you draw that line at XP or 2000 is slightly open. XP has the distinction of being the point where the 9x series of Windows was put out to pasture in favour of the NT-based stuff, making NT the consumer option too. That was actually a big change: the stability of NT was significantly better, and prior to that you'd not be surprised if a home or small-office machine completely locked up. We do still make fun of the Blue Screen of Death, but, unless you've got some particularly bad drivers, you can now go a whole day without the machine needing a hard restart.
But of course the Active Desktop stuff had really kicked off in Windows 98 and got carried through, so there's never really been a version of Windows without flaws. However awful that was (and it really was), MS really were somewhat ahead of their time there, given the amount of JavaScript and other dynamic features in modern desktops.
With most ISPs I've dealt with in the last few years you have to turn parental controls off; I've often found this necessary just to use the VPN for work. The most recent one I did this with was Three, for mobile broadband, and there proof of ID was needed to allow the switch (what access to a credit card number actually proves is anyone's guess). I think for mobile phone providers it's a requirement for this to be in place by default; I'm not sure about landline-based broadband, or maybe if your account predates that then the setting remains off.
The control isn't really granular enough as a result to be that useful for protecting children online for home broadband, only really in the case of a personal mobile device.
I'm not sure why you do but I'll take your word for it
Well, you probably do need 64GB if you want to run 70B LLM while playing a AAA sandbox shooter. Although that might just move "I'm not sure why you do" one level up, as well as adding "why do you think that's the use case for a Pi?".
Also, as someone with a Ryzen-based laptop: mobile Ryzen power use is good, but I won't entirely believe 20W at full whack without seeing some specs. An AMD Ryzen 7 5800H is 45W TDP on its own, without including the rest of the NUC. Beelink SER6 units (just picking one of this class of device as an example) seem to come in around 55W on load, with occasional 80W excursions under heavy loads.
Thanks, the Jetson looks interesting and probably more practical for serious use. Might be a good next step if we find a real application; I'm looking at the Pi as a proof of concept for SoCing this type of thing (and the GPU is only useful for some of these loads; the single-threaded benchmarks for the ARM Cortex-A78AE 8-core in the Orin NX and the BCM2712 in the Pi 5 don't look too far apart, although twice the cores isn't to be sniffed at).
I don't think it's publicly known how much RAM ChatGPT would require to run, but I've seen estimates starting from 45GB, up to around 80GB. OpenLLaMA models can apparently work within 16GB, although I'm not sure whether that's total system or required free RAM[1]. The Pi also doesn't have a GPU usable for this and the processor is rather limited, so it's not ideal for that application in many ways. (There is a "machine learning" module, but it uses a bespoke architecture and is mostly only suited to 2D vision tasks.)
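For a rough feel of where figures like those come from, the usual back-of-envelope is parameter count times bytes per parameter, plus some overhead; the model size and overhead factor in the sketch below are illustrative assumptions, not measurements:

```python
"""Back-of-envelope LLM memory estimate: params x bytes/param (+ overhead)."""

def model_ram_gib(params_billions: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """RAM to hold the weights, with a fudge factor for activations/KV cache
    (the 1.2 overhead is a guess, not a measurement)."""
    return params_billions * 1e9 * bytes_per_param * overhead / 2**30


# Illustrative: a 7B-parameter model at fp16, 8-bit and 4-bit quantisation.
for bytes_per_param, label in ((2.0, "fp16"), (1.0, "int8"), (0.5, "int4")):
    print(f"7B @ {label}: ~{model_ram_gib(7, bytes_per_param):.1f} GiB")

# A 7B model quantised to 4 bits comes out around 4 GiB, which is why 8-16 GB
# boards are borderline; estimates in the 45-80 GB range imply much larger
# models or less aggressive quantisation.
```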
That said, I've been wondering for a while about trying a Pi for some medical imaging research applications, since they are a lot cheaper and less power-hungry than what we normally use, even if it might be slower. Some of that is deep learning and some traditional algorithms; both are RAM-hungry, but not usually to those 80GB extremes. 8GB probably wasn't enough for some of it, but 16GB is a safe-ish bet. (Although less for our own use than for education and outreach, and maybe making this kind of technology more available to researchers with fewer resources.)
[1] I had some fun recently getting a 3D U-Net-type model working on an 8GB laptop GPU; it would just about fit, but required running with the desktop stopped to ensure as much free GPU RAM as possible.
You can train an LLM
I thought this post was going in a different direction; you can train an LLM around scraped copyrighted material and build a billion pound business on it, but put up your own content as a small operation and you can still be struck off by the copyright vigilante industry.
Some interesting history that then runs into weird opinion piece:
Type a bare expression, the computer does it. Number it, the computer remembers it for later. That is all you need to get writing software.
It eliminates the legacy concept of a "file". Files were invented as an indirection mechanism.
This is both oversimplification and obfuscation at the same time. First part: Python and many other interpreters have an interactive mode which works as described (bash shell scripting could be described as just this). Run ipython or a Jupyter notebook and you get that same experience. Files are a storage mechanism, a way to store those programs, just as I had to learn to save to tape when trying to program on the ZX Spectrum.
Second, maybe you will decide to call something like Jupyter Notebook extra levels of indirection. Now the thing is, so are those basic interpreters. This is the thing that it took me a long time to unlearn, the ZX's Basic interpreter, MS DOS, Linux command line, none of those are in any way some kind of direct interface or native link into "the computer" in a way that Harrier Attack, Windows 95 or Plasma aren't. Because the assembly of hardware is running some combination of that software and its own microcode that provides APIs to hardware. You are not any more directly entering commands into a Z80 with BASIC than you are in something like ipython or Bash. To this point, some python starter guides actually start with interactive mode.
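For anyone who hasn't tried it, the experience the piece describes already exists at the standard prompt; here's roughly what a plain python3 session looks like (a made-up transcript typed at the >>> prompt):

```python
>>> 6 * 7                     # type a bare expression, the computer does it
42
>>> total = 0                 # names are remembered for later, no line numbers needed
>>> for n in range(1, 11):
...     total += n
...
>>> total
55
>>> import statistics         # a "file" only appears when you want to keep the program
>>> statistics.mean([2, 4, 6])
4.0
```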
Not sure this is strictly true; my understanding was that "#" was used for pounds (weight) in the US, and the OED seems to support a use in 1923: https://www.oed.com/dictionary/pound-sign_n?tl=true
"2. U.S. The symbol # [...] 1923 Special Signs and Characters..#..Number or pound sign"
Which is pre-ASCII (although doesn't attest a relation to weight). ASCII did develop from Teletype terminals, but you wouldn't think there was enough interchange of equipment in that era for the #/£ keyboard key to be the origin (and of course screens weren't a thing at that point either).
What does always cause mild irritation is people (including people in computing) calling "#" in code a "hashtag", cf. Twitter. What do you call "#something" if "#" is a hashtag? A hashtagtag?