Re: Absolutely brilliant
If I remember correctly... It was actually the CEO, although that's from stories pre-Y2K
Directors don't get replaced as often as managers, but also IIRC it has happened a few times since then
It might not have even needed much lobbying - the insurance industry likes being able to quantify risks, and Google/MS authentication is a reasonably well known and predictable risk level for them.
And there's a lot of work that goes into securing auth on those platforms - certainly there are problems, but even specialists like Okta have had breaches, so I can understand that viewpoint
As others have said above - it's a Python of the Monty kind (if you read through carefully you'll find that it's actually explained)
If OPERATOR was the account name, then yes it narrows it down a lot; if it's just the job title of the computer operator then not so much
And of course a script can be corrupted when being sent to another person, particularly when it's a joke script that gets corrupted by the jokester sending it to someone else.
Hard to say exactly how it happened (maybe, just maybe, it was done with a text editor), but it certainly can
Yes, security on those old systems was poor. But the idea of multi-user and having some sort of isolation (and access control) was being worked on.
There's always been a conflict between security and usability (if it's done well, you can get a fair bit of both, but that takes skill and effort)
MS doesn't exactly have a track record of focusing on security - but if they did, random other corporations couldn't, say, install kernel code - and that'd upset a lot of people too (anti-cheat rootkits, for example)
A driver is for things like hardware.
e.g. if your sound drivers fail, you can just lose audio rather than killing the system.
drivers are not for security (well, there are drivers for hardware security tokens, but if those fail they fail safe)
And external companies injecting code into the kernel for security? yeah, we have an example today of why that's not ideal
That's only part of it.
Going back through the years, they've always prioritised making things easier for people over good security.
Which did work, as it helped with the spread of computers and with making them more money
But we're continually seeing the downsides
Windows started as single-user & standalone. You didn't really need that much in the way of security (until sneakernet virii, but that's a bit later).
As opposed to "real" OSes that started out with some sense of being multiuser (primitive and broken though it was half a century ago)
It's changed a lot over the years, but you can still see some of the legacy of those early days
It'd be possible to have a kernel that can catch a subset of crashes in kernel space (and maybe eject a driver or something), but more to the point:
kernel crash means a bug in kernel space.
Either MS's code has bugs itself, or has a security hole that allows other software to crash the kernel (yes, in reality it's both).
Unless you're using a non-MS kernel, MS has something to do with this.
And yes, the same could happen with other kernels, sure, but it usually happens with Windows
Not that rare - if you barely used it for anything and it wasn't left running for long, even older versions of Windows would work reliably
e.g., for all of the executives who had to have the most expensive computer because it was expensive, and it only got turned on when they summoned a new pleb to install upgrades in order to keep it expensive...
I dare say that could have been much more common than WinNT/Alpha...
Are you asking about the advantages for the website, or the user?
Depending on the app, there definitely are some, for both
E-commerce websites definitely don't need an app, but things like maps (set up to have a widget, directions on lock screen, etc) or media players (allowed to play in background, etc) have a use-case.
If most of the data and files are static (or rarely changing, or you want them offline) then an app is an advantage (both to you, since it loads faster & offline, and to the site since it drops data costs)
For a good app, performance should be a lot better too
Plus for the website, there's some ego trip, but also having the icon on your phone's home screen is bonus advertising every time the user notices it
For most apps I actually want, there's no adverts anyway (some do it, but unless you install an alternative browser on your phone you'll still get them in the website as well)
> It could have been yours, too.
Indeed, it was (first Linux distro, I had played a bit with gnuwin and cygwin(*))
30 years... Nostalgia sure ain't what it used to be
I didn't start at the start, but it wasn't much later - now I'm wondering which version I started with..?
Initially thought it might be 3.0, but I think I remember compiling to a.out files as well, so could have been 2.3...
Ahh, the days of running off of 2 floppies (because the hard drive had windows installed) - one to boot (with kernel) and one for userspace (including X86, I think - or maybe that was later and a 3rd floppy?)
I learnt a lot from that, but eventually got tired of having to figure out changes to config files (particularly X.conf) after most updates (config files weren't managed - if you unpacked the one in the .tar.gz over the one in /etc you had to reconfigure it - and I hadn't yet figured out diffing and merging local changes with upstream)
Footnote: I thought I had played with cygwin for a while before slackware, but wikipedia tells me cygwin came out in 95, so maybe it was just gnuwin before and cygwin in parallel...
It is relevant for a lot of their customers though - American companies will prefer to buy from other American companies(*)
Is it relevant to whether companies will pay for a sysadmin team that can manage it themselves rather than rely on support contracts?
harder to say...
Footnote: A few $JOBs ago, we had ecommerce clients from around the world.
Nearly all in the US were running IIS, most everywhere else were Apache (nginx was only starting to come in)
I'd say in the Unix command line philosophy, each app can be the master of one skill - not a mere jack (there are apps that do things badly, and there are apps that do a lot, but for most common tasks there's one tool that does that thing well).
And you control how they interact pretty easily, as opposed to having to set up linking requests/etc with modern apps
That makes a big difference!
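To make that concrete, here's a toy sketch of the "one tool, one skill" composition, driving the classic `sort` and `uniq` tools from Python's stdlib `subprocess` rather than a shell one-liner (it assumes a POSIX-ish system with `sort` and `uniq` on the PATH):

```python
import subprocess

# Each tool masters one skill: "sort" orders lines, "uniq -c" counts
# adjacent duplicates. Wiring their streams together is the entire
# integration effort - no linking requests, no shared SDK.
sort_p = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)
uniq_p = subprocess.Popen(["uniq", "-c"], stdin=sort_p.stdout,
                          stdout=subprocess.PIPE, text=True)

sort_p.stdin.write("banana\napple\nbanana\n")
sort_p.stdin.close()
sort_p.stdout.close()          # so uniq sees EOF when sort finishes

counts = uniq_p.communicate()[0]
sort_p.wait()
```

The shell version is just `sort | uniq -c`, of course - the point is that the pipe *is* the interface, so swapping in a different counter or filter costs nothing.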
I'm a KDE user rather than Gnome - there's a bit of it in the KDE world.
It's there in app composition (e.g. anything that needs text editing - from notes up to an IDE - can embed the text editor part, and it uses the same keyboard shortcuts, spellchequer, etc., in all apps)
It's there in places like I/O (text editors can open a file on a remote server over SSH if you want, since file access goes through a set of I/O plugins)
And of course in calling other apps. If the app doesn't have its own webview, then clicking a link can launch the system web browser. Or similar for downloads. But that much is common on most (all?) DEs
It may have been less than "mere hours".
It's quite possible the address was already getting intermittent spam before you signed up; then once the email address went live and the mailserver stopped saying "recipient not found" - joy of joys for a spambot, a fresh inbox detected!
Of course it's also possible there's other ways spammers can find out - like if they offer a bit of free hosting space, and people can check the directory of ftp://homepages.$isp.net/
Or something more mundane like the ISP added the email address to the email service their marketing people use for promotional material - and they selected one of the cheaper providers...
It's that the version of WebKit available to use in apps (or competing browsers, which are mostly just a skin on WebKit on iOS) is not as up to date.
Or, at least, that was the case when I was doing webapp work at $JOB-1.
The APIs we needed were (recently) supported by Safari (and iDevices kept Safari up to date so we didn't need to worry about that), as well as on any recent Chrome, FF, or Edgium.
They weren't available on the older WebKit engine used by anything other than Safari.
We had graceful degradation, but did have to explain to some clients they'd need to switch browsers to see the features we were trying to demo!
If the site has a paywall and you have to pay to read the part you want to read...
They should only be showing the same teaser you see to FB (and Google, etc)
If they're letting bots read the whole article and then complaining the bots have the information, they're doing something wrong (unless FB/other have paid for access but not permission to resell - that would be very different, but I doubt that happens).
If what FB knows is enough that you decide not to click through, then you definitely wouldn't have paid to get past the paywall anyway
You're not wrong, but I suspect PHP is more likely to suffer that fate.
If you're running, say, a Python app on Flask, or a NodeJS instance, or a Perl app on Plack, etc, then the webserver is just a layer adding HTTPS (and a few other bits).
If it's misconfigured, you get an error page - no source code leakage.
For code running in-process in the web server, if you're on Apache you can usually assume PHP will work, while for mod_perl (for example) you're more likely to need to install it yourself.
That assumption means you may forget to check.
PHP being an easy option for beginners to quickly hack something together means it's more likely to see beginner mistakes
As a dev who spends some time on front-end... LocalStorage can be useful.
As long as browsers treat them as the same thing (e.g. "Clear cookies and site data" in my FF; or similar rules for 3rd party cookies as 3rd party localstorage) it doesn't make the tracking situation any worse than you already have with cookies.
It's not the programmers; at least not mostly.
A lot of the time, the people who build the models may not even know how it works (both in the sense of not knowing the theory behind, say, neural nets; and in the sense of not knowing which words the algorithm decided mattered - a lot of "AI" model training is "given this input, you should give this output - you figure out how")
The problem is with the training data, and any "AI" needs a lot of it.
Good training data will be entirely self-consistent, properly labelled, and cover all of the variations you want to match.
Good luck ever getting that for any non-trivial task.
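To illustrate the "given this input, you should give this output - you figure out how" point, here's a toy supervised loop: a plain perceptron learning AND from labelled pairs. Nothing in the code states the rule; the model only ever sees examples:

```python
# Labelled training data: (input pair, expected output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # learned weights
b = 0.0          # learned bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                    # a few passes over the tiny set
    for x, target in data:
        err = target - predict(x)      # only input/output pairs drive this
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
```

This is also why the training data matters so much: drop the `(1, 1)` example and the model happily learns "always output 0" with zero training error.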
Yes, but defining the correct criteria is hard.
Do you emergency brake when a bird swoops across the road? (it's close, so you need to brake hard to avoid it). A large bird may be a similar size to a ball that a child is about to chase onto the road
An attentive human is pretty good about making those decisions (we're built to recognise moving 3D objects) - so providing training data from human-controlled cars can help (video as input, and output of if the driver took action - even if too late)
Part of the problem is those cases are really rare - humans are good at noticing and remembering unusual things; but they make for sparse training data for AI
If it had been able to track the same "object" even though the classification had changed (as opposed to "There's a new bicycle there" "The bicycle is gone, there's a new unknown object"), then it could have kept the movement history and made better decisions
If you fire a single bollock at a screen with 2 slits in it, you'll get an interference pattern as if it had gone through both.
If you fire multiple bollocks at a screen with 2 slits in it, you get an interference pattern that's unknown until it's observed; at which point it may appear to be a forum post where someone takes a bag and shakes a load of magnets about...