Re: calloc?
-ftrivial-auto-var-init=pattern
This is a nice trick I didn't previously know about, although it obviously doesn't help with heap allocations. I've found libasan + -fsanitize=address useful in the past.
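For anyone who hasn't come across them, a minimal sketch (my own example, not from the thread; the filename and compile line are just for illustration) of the two classes of bug those options help with:

/* demo.c - compile with e.g.
     gcc -g -ftrivial-auto-var-init=pattern -fsanitize=address demo.c */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int counter;               /* uninitialised stack variable: with
                                  -ftrivial-auto-var-init=pattern it gets a
                                  recognisable fill pattern rather than
                                  whatever was left on the stack */
    printf("counter = %d\n", counter);

    int *heap = malloc(4 * sizeof *heap);  /* heap memory: the flag above
                                              does nothing for this */
    heap[4] = 1;               /* out-of-bounds write: AddressSanitizer
                                  (-fsanitize=address) reports this at runtime */
    free(heap);
    return 0;
}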
If the programmer wants something other than 0 to be the starting value, he can explicitly set it - which is no different than the situation today. If you ever have a stack variable that just happens to be preset to the value you want without explicitly setting it, you are relying on side effects of the stack framing which could change the next time you recompile and would definitely fail if you ever ported to another OS or ISA.
Well, this is exactly what I mean. I'm not talking about relying on things being non-zero on allocation, I'm talking about zero only being a suitable initialisation in certain trivial cases anyway, with explicit, appropriate initialisation generally being preferable. Calloc is one form of that, but I'm not sure I'd propose always using calloc instead of malloc just because zero is sometimes what's wanted. As you mention, global variables default to 0, but it's difficult to tell a global variable someone wanted initialised to 0 from one where they just forgot to give it the correct value. An example might be a volume in medical imaging, where NaN may be a more appropriate way to initialise the data in certain cases but typically people start with zeroes; a more common one would be structures, where normally some initialisation function is needed.
Which is a roundabout way of saying I'd read malloc() as "allocate the memory, still to be initialised" and calloc as "allocate the memory and initialise it to zeroes", i.e. in the second case zeroes are the correct starting value, rather than just a starting value.
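Something like this sketch of those two readings (the names are mine, purely illustrative):

#include <stdlib.h>

/* "allocate the memory, still to be initialised" --
   the caller is expected to fill in meaningful values afterwards */
double *alloc_samples(size_t count) {
    return malloc(count * sizeof(double));
}

/* "allocate the memory and initialise it to zeroes" --
   only appropriate when all-zero really is the correct starting state */
double *alloc_zeroed_samples(size_t count) {
    return calloc(count, sizeof(double));
}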
Though (in my experience) calloc is far more often used than malloc (unless speed was absolutely critical and the minuscule overhead of clearing the data was just too much, in which case use malloc), in C I would typically use calloc, as it's always safer having "cleared" data than retaining whatever preexisting value happened to be in that area of memory, e.g. there's a small but non-zero chance* that if you are testing your pointer data to see if it matches a value, it might just match the "junk" that was present in the memory to begin with.
I've generally been of the opinion that you should initialise correctly, and the correct initialisation of a region is not necessarily all zeros. I've seen bugs where failure to set the correct initial values for an algorithm has been masked by an earlier initialisation (to shut the compiler up? I don't really know why people write int somevar=0; and then set the value it really should be later). Calloc is in the same class: if you really want zeros then it's the fastest way to get them; if you want a nul-terminated string you'll get one, but it's not the most efficient way; and if you're using realloc you'll need to deal with initialisation yourself anyway. (There's an even more subtle type of bug, more at the algorithm than the coding level, where starting with an array of zeroes will bias your output. I've been looking at that in a variant of multilevel spline fits for a while now; it's not a coding error as such, but a tendency to reach for things like calloc might encourage the mindset.)
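A minimal sketch of that point, using the NaN-for-imaging idea from above (hypothetical names, not anyone's actual code):

#include <math.h>
#include <stdlib.h>

/* The "correct" initial value here is NaN ("no data yet"), not zero,
   so calloc would silently give a plausible-looking but wrong start. */
double *alloc_volume(size_t nvoxels) {
    double *vol = malloc(nvoxels * sizeof *vol);
    if (!vol)
        return NULL;
    for (size_t i = 0; i < nvoxels; ++i)
        vol[i] = NAN;   /* explicit, appropriate initialisation */
    return vol;
}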
Without doxxing myself too much (probably not at all...), the local council has a portal which appears to use the same login but be divided into completely different sections with arbitrary routes into forms and pages from the portal that sometimes just take you to a completely empty (as in no fields) form. Absolutely no desire to know what's at the back end. Submitting a change of details in my council tax recently resulted in two further statements being issued, payment not being taken, so getting pushed back onto the rest of the year and then finally adjusted downwards. All sort of worked out at the end of the day, but you do sort of wonder what's going on behind the scenes.
There's also the issue that councils are meant to be somewhat customised to local needs anyway, rather than rebadged versions of Capita+Veolia+Oracle+Multi academy trust. No, they probably don't need deep customisation and cooperation would be good, but there's a lot of pressure to go with big vendors.
I've probably given before the example of Libraries NI, which is now run on some fairly generic US-centric software, staffed with lots of layers of management (except at the actual branch level, which is increasingly agency staff) and makes purchasing decisions based heavily on what publishers suggest to it. It's the end result of a sort of efficiency-driven managerialism which loses sight of what is actually meant to be achieved.
(Edit, how could I forget Capita?)
Would it pass inspection? Most likely.
Genuine question, as I'm not an electrician, but would it really pass inspection? If the PCs ("et al"!) are plugged into wall sockets and not hard-wired in some way, then that 10A switch is controlling 13A sockets (in the UK at least); don't you have to take into account that something else might get plugged in? In any case my laptop on USB PD will happily pull 80W regularly and 100W at times (have measured with a metered cable), and that's only because it uses a lower-energy mode when on USB; the normal power supply is 230W. Tower workstations can easily use more, and all computers will tend to pull maximum power during startup (except that maybe the GPU isn't fully engaged).
In the UK the number of sockets on a ring main is unlimited (!, but a ring is meant to be limited in the area it serves) and a ring is 32A. It looks like a smaller radial can be 20A, but even then don't you need an actual isolator rather than a light switch? OTOH, isolators do make a bigger snap when toggled than single-pole switches, so maybe what was wired in was actually an isolator, in which case it's probably fine electrically and just an unwise choice practically.
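Back-of-envelope version of the numbers above (my arithmetic, assuming 230V mains and roughly resistive loads):

#include <stdio.h>

int main(void) {
    const double mains_volts = 230.0;
    const double loads_watts[] = { 100.0, 230.0, 13.0 * 230.0 };
    const char  *labels[]      = { "laptop on USB PD", "laptop mains PSU",
                                   "what one 13A socket may carry" };
    for (int i = 0; i < 3; ++i)
        printf("%-30s %6.0f W -> %4.1f A\n",
               labels[i], loads_watts[i], loads_watts[i] / mains_volts);
    /* ~0.4A and 1A per machine sit comfortably inside a 10A switch,
       but a single 13A socket is allowed to pass roughly 3kW on its own. */
    return 0;
}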
Exactly how I feel when some home counties MP comes up with yet another proposal for permanent daylight saving because they like long summer evenings and think adding an hour to the clock will make that last all year round. Compared with the south, the days in Scotland and NI are longer in the summer and shorter in the winter, on top of which NI is quite far west, meaning that from mid December to mid January the sun doesn't rise in Belfast or Glasgow until after 8:40 GMT; for London it's 8:00 GMT. Sunset, on the other hand, is about 1600-1630 in Belfast, 1550-1620 in London during the same period and 1540-1610 for Glasgow (being furthest north and losing out at both ends).
I love long summer evenings that last till 11pm, but if the UK ditches clock changes it has to go GMT; +1 all year helps nobody, as you are not actually increasing the number of hours of daylight, despite what politicians sometimes appear to think.
So even with +1 DST most people in the south of England will still be at work until sunset anyway, while those in the "provinces" would get to trudge to work in the dark as well.
Fun aside, back in the days when I spent time trying to help with Ogg Vorbis metadata (there's a limit to the value you can add when people like Chris Montgomery are doing the heavy lifting), one suggestion that arrived from outside the regular developers was to add a field that would cause a command to be run. It was obviously gently but very firmly rejected. I didn't (and don't) think it was made maliciously, just someone who had some idea for a thing they thought would be cool and had in absolutely no way thought it out fully.
One of the things Graeber (misspelled name previously) discusses is precisely this. It's typically the perception (certainly from the private sector) that it's government and regulations that produce the phenomenon, but as mentioned above there are plenty of instances where it happens in the slim and mean private sector too (flunkies to make senior executives look more important being one example).
Certainly a thing, but hard to actually argue (in the UK for example you have to quit and then pursue a claim against the employer, having to spend money and time while searching for a new job or getting to grips with one).
Also, that page makes no mention of Japanese law, which can of course be different. Some of the links from the current article do discuss the situation in Japan:
The rather fascinating "Employment Law World View" take seems to be pitching "we can give foreign corporations legal advice to help them do it":
The fact there's actually a name for them suggests that, whatever the law might actually be, the practice is not unknown.
Interesting. I see there's a reddit rabbit hole filled with people very concerned about static arcing and the like, but I can believe in bad connection issues, particularly with the order in which the pins make contact. The one I have is essentially a shortened USB-C plug and socket with a magnetic surround, which, in the absence of a design standard for the things (something that should be remedied really), is probably the most likely design to work properly. I can see there was a rash of other designs on Kickstarter a few years ago which were more like actual MagSafe, with pins and pads (and not enough of those to be USB 3).
On mine the "socket" side is attached to the C plug, so it sits in the device, offering more shrouding for the exposed contacts. I might change my order of connection to laptop first and then charger at the other end, which would let the true USB-C connectors take care of any order-of-connection issues while still providing the mechanical protection. (I'm usually unplugging the cable at both ends anyway.)
For USB-C fans, there are magnetic USB-C connectors (essentially USB-C to USB-C with a magnetic breaker in the middle) rated for 140W and 240W. I recently added a 90-degree elbow one to my laptop and am wondering why I didn't do it sooner: it has tidied up the cables, there's less stress on the socket (it's easier to run the cable sideways across the back of the machine without a bend in it) and it's easier to remove (again without repeatedly stressing the socket).
Of course this raises the question of why Windows didn't have the sense to properly reset itself on shutdown-startup after an upgrade. Although that's probably been a moot point for nearly a decade. (Surprised to realise extended support ended last year. Though I'd be even more surprised to find out anyone actually wanted extended support for 8.)
Okay, I'll second this one; typically using the dnf upgrade plugin every other release. Quite often you find you'll have to uninstall something to allow dependencies to resolve (and there was one particularly memorable upgrade when Gimp seemed to have become involved in some kind of dnf group organisation scheme), but otherwise uneventful.
The problem is most of those options are for remoting into an existing session. Multi-user remote access to a server is still not well supported; this is Red Hat's official documentation for multi-user remote desktop on RHEL9: unnecessarily long url, that is to say, configured per-user VNC ports. VNC can be configured to dynamically assign ports, but remains unencrypted. Xrdp can be set up, but don't try to run a session locally at the same time (as you might if you wanted to keep your remote logged-in session). X2Go, which used to handle all this nicely (connection via ssh, so encrypted and no port oddness, and it was even possible for the same user to have multiple sessions), no longer works with Gnome and doesn't work well on Wayland desktops in general. I've just tried it with LXDE on an F41 (Wayland) system to open a remote session on the local machine, and it will connect and show you a desktop, but launching applications on the remote desktop opens them on the local session instead.
Now, the thing is, people have been pointing out this is a requirement for years, only to be told variants of "not a priority" or "build it yourself if you want it". Why was it not included from the very start?
"Wayland is catching up too: it dynamically resizes the desktop to fit the VM window size, a feature previously restricted to X.org. Good job, too: this release drops the GNOME on X.org session, so it's Wayland or the highway."
Wayland: "Finally just about acceptable (for local systems)" (TM). Having used Rawhide through the F41 cycle, it seems this has landed just in time for Fedora to go Wayland-only. It wasn't a given, particularly on Nvidia, where upstream changes on both the Nvidia side and in Firefox were needed to address unusable flickering or flat-out crashes.
Since reading is hard:
emacs: nano does not exist as a gui application
vi: use anywhere, for when you have to edit something and would rather do it in ten seconds than start installing nano every time.
nano: life is too short to teach anything more complex to someone to whom you also have to explain version control
I'll use all of them as required (e.g. for quick editing of a file in a remote session, vi is often most efficient), and I'll make sure novices' default git commit editor is set to nano (because life is too short), but Emacs remains my go-to, precisely for ease of use as a windowed editor with advanced region selection (rectangle select is particularly handy at times) and keyboard commands for things many other editors don't have.
but the Microsoft hardware, I always got the impression, was far, far better than their software.
It's a bit like that "It's a shame they make beer really" joke about the old "Carlsberg don't make X, but if they did" adverts. (TBF there are far worse beers. And mice.)
Friday afternoon of course ->
I eventually had to part ways about two years ago with the wireless IntelliMouse Explorer (PS/2 and USB adapter dongle) I bought in 2000. It was mostly still fine; the battery cover catch had been re-glued on several times, and every couple of years I had to open it up to bend the little spring that connected the PCB to (what I assume is) the aerial on the inside of the shell. I just no longer have a desktop machine where the Nokia 3300-sized dongle isn't an inconvenience (on reflection I should have just taken it into work for my workstation there).
Really comfortable design to use; it resulted in all my mice being Microsoft despite all my computers being Linux, although the newer pebbles that are the portable mice aren't nearly as ergonomic. (Actually, the mouse I'm using at the minute is about the same size as the old dongle.)
Thompson's Punjana, which can handily also be used to treat fencing. I've tried Barry's (the southern equivalent, no, not England), and will concede it also does the job. Fortnum and Mason (yes, England) do an Irish Breakfast tea and while they seem to have got the general idea they haven't quite got the spirit of it.
(Okay, will admit to drinking Thompson's signature or Irish breakfast in preference, as more than one cup of punjana has a bizarre drying effect, I swear it's got stronger.)
But I respect efforts to work with it, using kit you already have, especially where this means creating simpler processes and simpler tooling to replace on-the-fly rebuilds.
They do seem to be improving (or, as it's a massive organisation, some factions appear to be doing things that might be construed as good*). I was mainly enjoying that 2024 MS is having to deal with problems that 2011 MS inflicted on the rest of us.
Of course now nvidia have "open sourced"** their kernel module this may become a moot point.
* for values of "good" that include running linkedin
** for values of "open-sourced" that include "moved 60MB of closed source into userspace"
However, the Azure cloud insists on signed kernels, so Microsoft has built a repository of signed kernels incorporating all the hardware in use.
Gosh, I wonder who's responsible for signed kernels being difficult?
Still, double schadenfreude in RH pushing innovation in Linux... by inadvertently forcing MS to do it.
Indeed, an LLM's work has to be derivative, as existing works are its only source of input. A person can call on their lived experience; an LLM only has the training corpus (of one particular format, here textual works). If that is made up primarily of work which was used without permission, then how can the results be legitimately used?
Besides this, the argument that it's what people do and so it should be treated the same is more fundamentally flawed. There is no problem with the law being different for people and for corporate-owned LLMs. The law is supposed to be for people, after all. If, until someone has the sense to write that in, we want to ascribe the difference to inspiration, then that's fine.
It never really feels like it fits correctly to call Plasma the Windows-alike when talking about Linux desktops, because the obvious alternative is Gnome. Initially, back in the Gnome 1/2 days, Gnome was closer to Windows than KDE was, and in terms of /appearance/ KDE was long the more Mac-like of the two. Gnome 3 obviously went weird, but I don't think it became more Mac-like (except in relation to Spatial for the file explorer).
So far as Plasma goes, whether the taskbar sits at the bottom, top, left or right has been configurable for quite a while, although the default seems to be the bottom. On Plasma 6 I notice that unless there's a window overlapping the taskbar it floats slightly, similar to both Mac and newer Windows. The one thing that remains definitely windows-like is a start-menu type button on the taskbar, which Unity avoids (Mac, Windows, Gnome, Unity all have their own flavours of status area, the most marked difference there is probably where you find logout). Gnome 3 initially tried to have the activity chooser approach but it was far too clunky and they re-added a start menu (I still get to see this monstrosity as RHEL8 has it).
What both Gnome and Plasma have long done though is remove folders from the default desktop, something I initially hated but now re-enable if I find it gone. This is not really classic Mac or Windows though, as both strongly insisted on putting things on the desktop.
The major Mac-ism that always seems most distinctive to me, coming as a user familiar with Linux and Windows, is the top/global menu integration, which I don't think Gnome has these days. They may have experimented with it at some point (both Gnome and KDE have supported some version of it; not sure if it's ever been the default).
Indeed, sudo used properly (before Ubuntu popularised sudo-everything as an alternative to root login) allows giving access to exact commands, with per-user control of those. It's actually fine-grained, and with some kind of network-based authentication you can revoke someone's ability to do things just by (for example) removing the relevant group membership from their account. Whereas, to use an old-school example, given a printer user login that some people have the password to, revoking one person's access requires changing the password and redistributing the new one to all remaining authorised users; and with either a shared password account or a group-permission-based approach they can still do everything, while sudo can say: only this exact command, as a particular user.
https://www.freedesktop.org/software/systemd/man/devel/run0.html
Authentication takes place via polkit, thus isolating the authentication prompt from the terminal (if possible).
So, not in the vast majority of times when I'm running an ssh session on a remote system then?
And I'm still looking in vain on that page, or linked ones such as https://www.freedesktop.org/software/systemd/man/devel/systemd-run.html, for some example of how escalation privileges could be configured. As you say, it's probably somewhere in polkit (documentation for which is, obviously, not linked), and the only end-user way to engage with that is generally the all-or-nothing "grant administrator access".
I'm pleased to see that SystemD with its rock-solid history in application security is taking on the field of privilege escalation. We can look forward to a future without exploits because a completely different part of the framework pulls in a random buggy or compromised library. Hallelujah!
"Technically, when GPT-4o writes out the move it wants to play, it correctly formats it in Forsyth–Edwards notation (FEN), but the model doesn't understand that even if it makes sense, that doesn't mean it's the best move or even legal."
Chess moves are in Portable Game Notation. FEN describes a board position. For a puzzle you need FEN to describe the position, but the solution move or sequence (4 f3 or whatever) would be PGN.
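For illustration (my own strings, not from the article), the difference looks like this:

/* FEN is a snapshot of a whole position; a move is written in algebraic
   notation, as it appears in the movetext of a PGN file. */
const char *position_fen =
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"; /* starting position */
const char *move_san = "Nf3";   /* one move, in the notation PGN uses */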
Nobody here expects LLMs to be any good at chess (that they succeed at all suggests quite a bit of chess notation has fed into their training data), but I suppose this kind of exercise is to remind everyone else of that fact. Computers are really good at chess, humans can be pretty good; LLMs do not show general intelligence, just regurgitation.
Doesn't inspire much confidence really: a well-known security risk (I'm at a university and /we/ get trained on this; Mercer makes a big deal about his army background), but the reaction is to play the 'weirdo' card as if it's somehow cheating or unforeseeable. See also Trump and the overweight-people-in-their-parents'-basement accusation. As if the odder (and, by implication in both cases, more pathetic) the person who waltzed through your security, the more understandable it was.
Is it 'cheating'? Depends who defines the rules of the game. Is it utterly foreseeable? Yes. Is "I was being stupid and got caught out by some weirdo" a reasonable excuse? No.
Not so critical, but a couple of years ago a colleague's laptop bit the dust (can't remember why; the battery had become dodgy, I think). It could still be powered up, but the BitLockered disk got corrupted. Mostly things were backed up via OneDrive, but some important folders hadn't been. Since they are the kind of person it's pleasant to help, and this was a bit of a challenge, I thought I'd give it a try. The first steps are easy: a live Linux USB to boot from and clone a copy of the encrypted disk. It was then possible to use dislocker (and the recovery key, which we did have) to create a decrypted copy, but this was pretty mangled. From what I remember Linux's NTFS driver actually refused to even mount it. Using some ISO mounter in Windows (probably Virtual CloneDrive) it was possible to run Windows's scan disk on it, which recovered a bit of the directory structure but not much. Going back to the unmodified decrypted copy and trying DMDE achieved a bit more: we were able to recover quite a few of the files, and fortunately it was mostly the directory listing, to know what had been there, that they really needed.
Can't confirm the timeline (online version histories are not detailed enough), but XTree can supposedly do directory-entry moves; the last version was 1993 and DOS 6 was released that same year. It's possible XTree provided the feature before DOS natively did; there were other third-party utilities around at the time that provided disk copying features and the like, so it's not impossible something provided a move facility. (On the other hand, remote access and 9GB both suggest the Windows era rather than DOS. Don't know if remote control of a DOS box was possible at the time, but I doubt it was common.)
I didn't notice back then, but now I wonder if there's another layer to that joke. If it was Pratchett I'd be sure, but the Blackadder authors were reasonably clued up historically[1]. Alchemists never did succeed in creating gold (unless you count a handful of atoms in modern particle accelerators), but over time the discipline gave rise to chemistry, and the earliest profitable industry that came about from that was dye manufacture; William Henry Perkin's discovery of a synthetic purple (but later also "Perkin's green"). So what Percy had, several centuries early, was potentially a successful product.
[1] Even if they mostly parody historical dramas. There's a long forgotten film (rightly so) called "Hawk the Slayer" that makes the end of season 1 make sense. Although it's not really worth watching just for that.
Firstly, at least some of the works do exist in some encoded form within the models; both LLMs and image generation models have been shown to be able to regurgitate things that were in their source material. Just because it's not stored as plaintext or a PNG doesn't mean it's not in there. That particular situation is not much different to creating some self-extracting archive format and then claiming whatever you put in it is not a copy because it can't be extracted with normal tools.
Secondly, the Luddites were at least partly right. You can paint the complainants here as "the arty 'creatives'" if you like (presumably a group that contributes nothing to society), but is the end goal really just to replace people with machines? At least devices like washing machines or assembly-line robots ultimately save people from repetitive, hard and dangerous tasks, but now we're really starting to cut into roles that actually give people some degree of joy or meaning. It's easy to paint this as democratisation: hey, now anyone can generate images that would previously have taken them much longer and years of training! Except it hasn't freed anyone from anything. The people who loved doing that in the first place will no longer be able to do it as their livelihood; the rest of us end up with something that's no longer special, and for the privilege we all pay the people who own the things (built on stuff they stole in the first place).
Easy to say that what AI systems are doing in remixing old (often still copyrighted) works is no different from what people do, but the point is that it's people doing it. Those exceptions, for education, for criticism and reporting, are for the benefit of people. Copyright was created with the intent of protecting people from unauthorised duplication of their work, to allow a livelihood from creating works of art; often these days it's swallowed up by corporations.
One of the promises for automation and industrialisation has always been that it will free people to lead better lives, without drudgery. So, we got rid of the weavers and the miners, then the typing pools, now we're going after the drivers, the writers, the artists, the programmers. What happens when we've replaced every job? It hasn't actually resulted in people working less, just more things that the wealthiest can claim rent on.
There are two slightly different threats that physical access can mean though:
1. Your laptop (or other device) is stolen and you never see it again; the attacker is attempting to gain access to the data (or to use the device to access another system, an ID card for example) with only the laptop or device itself. Good encryption should present a strong barrier to recovering the data (although you have to assume they have as long as they want to clone the encrypted data and try to break the encryption, or to inspect the hardware for any vulnerabilities), and unregistering the device from any access control protects other things.
2. The attacker has physical access to the device for a period of time, with or without your knowledge (stolen and then found vs the cleaner is the attacker). This might allow the attacks available in situation 1 (depending how long they have it for) as well as tampering with the device.
About five to ten-ish years ago, when USB sticks had dropped to about £1/GB, I occasionally stopped by Ryman's and checked whether they had SanDisk devices on sale; occasionally they'd be half price. I ended up with a couple of fairly cheap 64GB and 128GB devices (it was handy to have some extra storage for my laptop at the time), alongside quite a few 16/32GB ones (some also came from supermarkets on the same basis). I haven't bought any other brand in a long time, after a string of cheap ones failed on me. They haven't been heavily used, but all are still going strong. Whether that applies to more recently manufactured ones (particularly given the story) I've no idea.
The 128GB does get noticeably warm though. It's useful but I try to avoid trusting anything long term to these things.
All of this. Any time I talk to people about it their focus often seems to be on Horizon itself, some computer issue that led to all this. But, while the particular events arose from issues with Horizon, the real issues and guilt are institutional ones. Look at that list: right from the start, "prosecutions continued when it was known." Every item is something that happened at a point at which somebody knew the system was in error and chose to ignore that (or, you might suspect, in some cases took their actions precisely to avoid it becoming more widely known).
Fujitsu should be front and centre here, and liable to compensate just as much as the Post Office is.
I'm not so sure. Not that Fujitsu aren't culpable (my understanding is that they backed the Post Office and possibly committed perjury to support it and cover themselves), but the PO is guilty of so much here. They were the prosecuting authority and used essentially unaccountable power to persecute (sic) postmasters. Early on they should have (as you said) been suspicious about the apparent increase in fraud; later on there was reason to believe the system couldn't be trusted AND THEY KEPT ON PROSECUTING. Why didn't they attempt to properly audit these cases, instead of keeping on relying on Horizon data? One angle I've never seen properly looked at is that their conduct also surely amounted to extortion; there is at least one well-documented case of a postmaster paying in money from the rest of their business to cover the "debt" the Horizon system falsely said they owed. With prosecution threatened if that money isn't paid, what else would you call it?
And despite this being public knowledge and in the press for years, it takes a TV drama to actually get something done?