Posts by Martin M

140 publicly visible posts • joined 12 Apr 2016


You've just spent $400 on a baby monitor. Now you need a subscription

Martin M

Re: Someone else's computer

This is much more sensible. Opening up the API is nice and makes it possible to write an alternative server, but certainly doesn't guarantee that will happen, particularly for minor products. Even if it does help users of this forum, it won't help the 99% of people who don't know how to run their own server. Instead, financial incentives need to be aligned to let consumers properly compare the cost of products up front.

I'd combine your suggestion with an obligation to say in the product specifications/advertising how long the company will provide the service for (as a minimum). It's not reasonable to expect subscription-free service forever, but it should be transparent when you buy. If they don't keep the trust fund topped up sufficiently to run the service for the remainder of that time, and the product is withdrawn or the company goes bust - director liability for the shortfall.

Long-term support for Linux kernels is about to get a lot shorter

Martin M

Re: Backport

> Compilation still takes time. On an *extremely* high end box it's under a minute

In 1995 I submitted a trivial patch for a file system endianness bug in the Linux 68k port. It took a while, largely because a kernel recompile took over a day on my Falcon 030. I can’t remember how much RAM it had (2MB?), but it’s safe to say it wasn’t enough and a swapfest ensued.

I got into the habit of reading over the code a few times before kicking it off…

No joke: Cloudflare takes aim at Google Fonts with ROFL

Martin M

Re: I presume it's opt-in?

“In fact Cloudflare is also a CA, so it can automatically transparently MITM any client if IT WANTS TO.”

To be fair, they also have to be able to *get* in the middle. Although for people using 1.1.1.1 for DNS, that wouldn’t be hard ;)

Martin M

Re: I presume it's opt-in?

Yes, it’s opt-in from the server end. As a website operator, you register and point your DNS at them, at which point you obviously need to trust them as much as your origin server(s). I would imagine there’ll be an opt-in for this specific service given it’s rewriting pages.

Yes, they MitM - that’s how CDNs work. They’re distributed caching reverse proxies, so they have to terminate TLS on your behalf before connecting to the origin server to retrieve cache misses.
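
To make that concrete, here's a minimal sketch in Python of what a CDN edge conceptually does (origin URL hypothetical; a real edge also terminates TLS with a certificate for your domain, honours Cache-Control headers, and so on):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    ORIGIN = "https://origin.example.com"  # hypothetical origin server
    cache = {}                             # path -> cached response body

    class EdgeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path not in cache:     # cache miss: fetch from the origin
                cache[self.path] = urlopen(ORIGIN + self.path).read()
            body = cache[self.path]        # cache hit: serve the stored copy
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), EdgeHandler).serve_forever()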

In fact Cloudflare is also a CA, so it can automatically transparently issue a domain validated certificate if you want it to. It can also provide a certificate for your origin server to secure the second leg.

Authors Guild sues OpenAI for using Game of Thrones and other novels to train ChatGPT

Martin M

Re: It doesn't store the original, just 'interesting' features of the original

It’s an interesting question, and one for a lawyer, but I suspect it comes down to the context and whether it qualifies as fair use - hence the careful qualification.

Wikipedia’s take - https://en.m.wikipedia.org/wiki/Legal_issues_with_fan_fiction - explains there are no fixed rules, but when deciding fair use on a case-by-case basis courts consider:

- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

- the nature of the copyrighted work;

- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

- the effect of the use upon the potential market for or value of the copyrighted work.

So at the extremes: if you’re doing it as part of a 500 word school assignment, you’re fine. If you’ve published your own novel sitting alongside the A Song of Ice and Fire series without a license, you’ll likely have problems.

Is OpenAI making fair use? No idea, but it’s definitely commercial use. Feels like one for the courts…

Martin M

Re: It doesn't store the original, just 'interesting' features of the original

Firstly, most LLMs, including ChatGPT, are entirely capable of regurgitating quite long sequences of training data, referred to as "memorization" - https://www.theregister.com/2023/05/03/openai_chatgpt_copyright/ . Actors do this too, by altering their neural weights in a somewhat similar way, and if they wrote a memorised play down and distributed it, that would be a breach of copyright.

But even leaving aside whether text is reproduced verbatim, case law has determined that copyright protection extends to the traits of well-delineated, central characters - distinct from the text of the works they are embodied in.

I've just typed "how would tyrion lannister describe having a baby" and "how would cersei lannister describe having a baby" and it spits out highly distinctive, extended replies very much in line with the thinking and speaking styles of those characters.

I'm no expert, but I can see how it might well breach copyright to reproduce these outside of a fair use context.

Google exec: Microsoft Teams concession 'too little, too late'

Martin M

Re: Windows Server?

I'm not sure how Linux can be elitist when it's the majority of the server market and the vast majority of the *new* server market. My commiserations for being on the wrong side of history; it's like listening to a Solaris advocate circa 2008 or a mainframe advocate in 1995. But if there's not too much of your career left, stop tilting at windmills, sit back, relax and enjoy. Surfing the trailing edge can be lucrative - your skills are increasingly at a premium.

Decades-old Home Office asylum system misses EOL deadline, no new timetable in place

Martin M

Re: Why why why

"effectively an unsupported platform, or practically so"

The VB6 IDE passed out of extended support *15 years* ago. Nothing "effectively" or "practically" about it. I'd say it's remarkable it can even reliably connect to a modern Oracle database, but I have a suspicion that the one they're using will be of a similar vintage and support status.

Why these cloud-connected 3D printers started making junk all by themselves

Martin M

Re: Sounds like this cloud thing was programmed as if it was a local server

Knew there'd be someone getting lathered up about The Cloud. Can't see how it has any real bearing on this, which could easily have happened with a USB-attached printer with rubbish firmware and rubbish desktop software/device drivers. Apart from the fact that apparently no-one's allowed to make things without their own wifi connection and TCP/IP stack nowadays.

For a shining example of software engineering excellence and robustness, I refer you to my local HP printer - the one which frequently requires a reboot of both PC and printer before it will deign to print a single page.

Rubbish software is rubbish, no matter where it's run.

Typo watch: 'Millions of emails' for US military sent to .ml addresses in error

Martin M

Re: How much legit traffic is there from US military/government computers to .ml?

Not really. Ultimately it's impossible to prove for anyone except an administrator of .mil email. Readers are welcome to reach their own conclusions on the validity of my educated guess.

Martin M

Re: How much legit traffic is there from US military/government computers to .ml?

Aside from easily strongarmed military contractors, the vast bulk of all non-spam emails (especially personal ones) to .mil are going to be sent from US-based email providers such as GMail and Microsoft. I’m sure that Uncle Sam could have a word and ask them to implement a confirmation scheme (maybe an account setting you tick to enable email to a list of clearly military-looking Mali domains) for US companies and individual accounts linked to a US cellphone. The exorbitant privilege of owning most of the internet…

It would be even more effective if it's made clear, in big red letters, that the details of anyone ticking the box will be shared with the US Gov. That would prevent the ‘reflex box tick’.

Another challenger to OpenAI? OK, we'll allow it

Martin M

This’ll be fun to see

Musk is going to find out just how expensive recruiting top flight, highly sought after talent can get when you’ve very publicly screwed your employees.

I doubt very much that many will be interested without a good slug of equity. Options aren’t attractive if you are likely to be fired on a whim.

Intel pulls plug on mini-PC NUCs

Martin M

Real shame

I’ll miss them. I bought one of the early NUCs, intrigued by the form factor, and have had two since - less because of the size and more because the engineering is great for the money.

There’s just something about them that feels almost workstation or server grade, rather than typical home PC trash.

Oracle pours fuel all over Red Hat source code drama

Martin M

Effectively IBM are freeriding on all those upstream contributors, who devoted their time (or their employer’s) on the understanding that there would be reciprocity downstream. Whether technically in compliance with the GPL or not, IBM have violated the norm here.

Which is interesting, as the norm from upstream projects is to provide free support to, and accept patches/PRs from, downstream. The latter is as much of a benefit, if not more, to the downstream, as they get free testing and don’t have to do a bunch of increasingly painful merging as source trees diverge. Upstreams, of course, are not mandated to provide any of this.

I wonder what would happen if some key upstream projects (the kernel, glibc etc.) asked people posting to forums and submitting PRs to declare that they aren’t IBM employees, or acting on behalf of IBM? I suspect that first of all IBM would lose many of their core committers. Second, would you buy support from a company with no upstream assistance or influence?

It would require some gumption as obviously Red Hat is a really big committer. But ultimately, it would benefit the community to show there are consequences.

Microsoft and GitHub are still trying to derail Copilot code copyright legal fight

Martin M

Re: All for analogies, but can we bit a bit accurate (or at least explain our analogy)?

"something containing all parts of the original data with certain aspects removed for size" just means "something without all parts of the original".

Compressing at too low a bit rate happens frequently (e.g. any Netflix stream) and often manifests as posterisation etc. It is clearly not "containing all parts". It chucks away large chunks of colour information that I can otherwise perceive from the side of the room, and I find it intensely annoying. Thankfully audio streaming at 64-96 kbit/s is no longer a thing, but it was, and it was awful. Does this mean low bitrate compression cannot be compression under your definition?

On the other hand:

Wikipedia definition of lossy compression: "Reduces bits by removing unnecessary or less important information".

Definition of text summarisation: "Creating shorter text without removing the semantic structure"

For situations where semantic structure is the most important information in text (which are many), these are synonymous.

Martin M

Re: All for analogies, but can we bit a bit accurate (or at least explain our analogy)?

Further on the contention that lossy compression doesn't really exist for text: while it isn't - as above - cost effective to do for storage or network cost reasons, a lossy (and sometimes dramatic) reduction in text size is often desirable to save time and mental load for readers. It's just called something different in that context: text summarisation. As well as being something your teachers made you do frequently at school, there's an entire subfield of NLP dedicated to doing it algorithmically.

Abstractive summarisation aims to reduce the size of a text while preserving the most important semantic structure. This is conceptually very similar to lossy compression in the image, movie and sound domains, where you're aiming to reduce storage size while allowing reconstruction of the most readily perceived information. It is of course more challenging to do text summarisation acceptably - there is no easy shortcut equivalent to the relatively dumb transforms JPEG can use to throw away details the eye can't see (or, analogously, details that are unnecessary for the elementary exam question).

As it turns out GPT-3 has been tuned to do abstractive summarisation for book-length unseen text with near-human performance - https://arxiv.org/pdf/2109.10862.pdf - and GPT-4 will be better. There's your lossy text compressor, right there.
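
You can try a small-scale version at home with the Hugging Face transformers library (default model; input file hypothetical) - not GPT-3/4, but the same idea:

    from transformers import pipeline

    summarise = pipeline("summarization")   # downloads a default summarisation model
    text = open("chapter.txt").read()       # hypothetical long input
    # Most models have a modest input limit, hence the crude truncation here
    result = summarise(text[:3000], max_length=120, min_length=40)
    print(result[0]["summary_text"])        # the lossy-compressed version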

Martin M

Re: All for analogies, but can we bit a bit accurate (or at least explain our analogy)?

> Note that we only apply lossy compression (of a form arguably similar to JPEG...) to audio and visual, not text. "Strip out the high frequency components" from text and you get gibberish, especially when trying to compile the results.

Strppng sm vry hgh frqncy cmpnnts lvs nglsh txt cmpltly ntllgbl. Yr brn wll vn dcd t fr y (slwly)

Tabs/spaces/newlines are high frequency components of code. Most of those can be trivially stripped by a language-aware lossy compression algorithm with no impact at all on compilability or function, only on readability. Even that can be (mostly) restored by an automated formatter.
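
A sketch of both of those in Python - crude, and genuinely lossy, since the original can't be reconstructed exactly:

    import re

    def strip_vowels(text):
        # Drop vowels except at the start of a word, as in the demo above
        return re.sub(r"(?<=\w)[aeiouAEIOU]", "", text)

    def minify(code):
        # Collapse runs of tabs/spaces and blank lines; a real language-aware
        # version would tokenise first to avoid mangling string literals
        return re.sub(r"[ \t]+", " ", re.sub(r"\n{2,}", "\n", code))

    print(strip_vowels("Stripping some very high frequency components"))
    # -> Strppng sm vry hgh frqncy cmpnnts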

I'd suggest we don't do lossy compression on text mostly because it's so small compared to images and videos, and it isn't worth the bother.

> It is those input bitstreams, the traversal pattern, that are the copyright violation. Not the Hufman tree.

I'm guessing your Huffman tree isn't going to be 100s of GBs in size (the VRAM requirement of GPT-3 - so probably higher now), and it is therefore incapable of independently encoding and regurgitating much larger chunks of copyrighted text than the input.

After giving us .zip, Google Domains to shut down, will be flogged off to Squarespace

Martin M

Re: Email forwarding for Gmail users

Actually it's a thicket of standards, with sometimes spotty support beyond the basics.

Despite an occasional hankering for pine, and having rebuilt a small ISP's SMTP and POP3 servers in about 1996, my recollection is that GMail was a bit of a revelation when it launched. A nice big inbox, decent search, free, great anti-spam, efficient UI and generally rock-solid reliability. The alternatives were either expensive and a bit clunky (I migrated from Fastmail, which fell into that category), Squirrelmail from a vaguely shady web hosting company that probably didn't even RAID its mailboxes, or hosted Outlook Web Access (enough said).

And those were simpler times. Any email service now has to deal with a firehose of spam, phishing and worse hitting my email address, which has been in continuous use for nearly a quarter of a century. I don't know what magic GMail is doing to sort everything into the right folders, but I bet it helps being able to hire the best data scientists and point them at a substantial fraction of the world's email flows.

It has to be able to defend my inbox against sophisticated attackers. I know GMail is pretty good at this, because they carefully warned me my account was under nation state attack a decade or so ago, and encouraged me to turn on two-factor back when they were one of very few consumer services offering it.

And because of what spam has done to the email ecosystem, players need to be important enough to sort out deliverability issues, when they occur, with the big boys who host most of the world's mailboxes.

So I'll probably stick to GMail + Cloudflare ARC-compliant mail routing. Seems to work nicely on a test domain - SPF/DMARC/DKIM all pass.
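
For anyone wanting to sanity-check their own setup, the moving parts are three DNS TXT records along these lines (domain, selector and policy hypothetical - Cloudflare's docs give the exact values to use):

    example.com.                      TXT "v=spf1 include:_spf.mx.cloudflare.net ~all"
    selector._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=<public key>"
    _dmarc.example.com.               TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"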

However, I'm all ears if you have better options.

Martin M

Re: Email forwarding for Gmail users

I've just found a free ARC-supporting forwarder from a reputable outfit: https://blog.cloudflare.com/introducing-email-routing/ (ARC confirmed here - https://developers.cloudflare.com/email-routing/postmaster/)

As it happens, I'd been thinking about migrating my domains to Cloudflare anyway before they disappear off to Squarespace, so this looks perfect.

Martin M

Re: Email forwarding for Gmail users

Thanks for that Nick, really appreciated - had not come across ARC, and looks like GMail does support it.

Tutorials etc. seem a bit thin on the ground, but from a brief look it seems that in practice it involves running your own forwarder to generate the ARC headers/signatures (e.g. https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/). Generally I want dedicated professionals as admins for anything in the email delivery path - my extended family would be less than impressed if it went down, and a compromised server would be bad. Google Domains had built-in, fully managed email forwarding, which is (was) nice.

My current feeling is that Zoho Mail etc. with standard SPF/DMARC/DKIM might be better for me, if pricier. But if anyone knows of a decent and reasonably priced managed email forwarding service supporting ARC I'd love to know about it, I've had no luck Googling.

Martin M

Email forwarding for Gmail users

I transferred my domains over to Google Domains a couple of years ago because I use Gmail, and it had developed a habit of binning (not even moving to Spam) forwarded emails.

I think email forwards are increasingly problematic from an anti-spam perspective because they obscure the origin and the new(ish) validation methods can’t be used.

Google Domains forwards seem to be trusted by Gmail. Certainly I’ve had no problems since.

I have a horrible feeling that after this I (and family) are going to have to pony up for ‘enterprise’ Google Workspace, Zoho email etc. accounts with native, non forwarded accounts just to get reliable delivery for a personal domain combined with a decent email service…

For password protection, dump LastPass for open source Bitwarden

Martin M

Re: Why not share via Bitwarden?

But certain people *are* trusted. If I and my wife can’t trust each other to look after our young kids’ school and Minecraft passwords etc., we would frankly have bigger problems than credential management. Some paranoia is justified, and some is excessive.

C++ zooms past Java in programming popularity contest

Martin M

"Kotlin code runs on the Java Virtual Machine, so its rise lifts Java too."

We're talking about programming languages not execution environments, so this is a completely nonsensical statement. It's like saying a falloff in Java usage means a downturn for JavaScript, because they both have the word "Java" in their name.

A brand new Linux DRM display driver – for a 1992 computer

Martin M

Re: Actually owned one

For some reason I bought a Falcon, thus joining an *extremely* select group of owners. It was comfortably my worst computer purchase ever.

Atari shipped it (or sent it as a free add-on shortly after launch, I can't quite remember) with a thing called 'MultiTOS', which was a multitasking version of GEM running over the preemptive MiNT kernel. MiNT had always been preemptive and provided an efficient, very Unix-like command line environment, and on the 68030 gained memory protection (I think?). However, the MultiTOS desktop was abysmally, almost unusably, slow. There were much better desktops available though.

Fairly quickly I started playing with Linux - early days, with none of the niceties of actual distributions etc. It was the first port of Linux to a non-x86 architecture, if I remember correctly. With very little disk space available after partitioning, I ended up using the UMSDOS Linux filesystem instead - this layered longer filenames, permissions etc. over a FAT filesystem to make it feel almost like ext2, so I could boot Linux off a folder of my main TOS disk drive. It eventually worked passably, and I could even run X with a very light window manager.

However, the UMSDOS kernel filesystem code assumed little endianness, so didn't work initially. Developing a patch was ... painful. Rebuilding the kernel took over a day. No debugging. Comfortably the most annoying coding I've ever done.

Moving to a 486 and Slackware a couple of years later was a breath of fresh air - everything felt like it happened almost instantly. Probably the zenith of responsiveness of any computer I've ever owned. Since then the rapid growth of compute power has been more than offset by the massive growth of crud.

Google kills off Stadia

Martin M

Re: Solution to a problem nobody had?

Are you sure about tethered downloads and pretty much running locally on Xbox (and other services)? Xbox for sure allows you to stream games to an Android phone - can’t see how that would work via a tethered download. Even on an Xbox console, startup time for a cloud game is so short it couldn’t be downloading substantial portions for local execution. I’m pretty sure if you’re using Xbox Cloud, it is actually running on the cloud. The amazing thing is that with a decent internet connection you’re hard pressed to notice.

What is true is that if someone has a console, if they like a game they’ll probably download it after trying it on the cloud, which reduces cloud hardware costs for Microsoft in the short term.

In the long term (as more people get decent connectivity) pure game streaming makes even more sense than music streaming did, and we know how that went. Avid gamers have extremely expensive hardware, of which at least the GPU portion is going to be idling much of the time, even during some of the evening peaks. Storing millions of copies of 100GB games on high-end SSDs is a lot of wasted silicon. Both are obvious targets for pooling centrally for increased utilisation - and it’s an even bigger win with casual gamers. Especially if the pooled hardware can be put to some other easily schedulable/preemptable use (transcoding? ML training?) during the off-peak. Also, many data centres can use electricity with lower carbon intensity than home users, so there’s an environmental benefit too.

I suspect many gamers will be increasingly happy to stream if it’s cheaper, greener and more convenient.

The main problem with Stadia was execution, not concept. And a raft of other Google-specific problems you mention.

UK hospitals lose millions after AI startup valuation collapses

Martin M

Consent?

It would be interesting to understand how consent was obtained for data sharing, given the well documented problems with obtaining this at national level. Particularly for GOSH.

Claims of AI sentience branded 'pure clickbait'

Martin M

Re: Definition

Blake Lemoine's "revelations" do indeed seem to be rubbish, but an article by another Google VP, Blaise Agüera y Arcas, on June 9th in New Scientist is much more interesting.

In case you can't get beyond the paywall: LaMDA appears as though it might be capable of some high-order social modelling, which has been hypothesised to be closely related to consciousness. In particular, if you can model others' reactions to you, you are as a side effect modelling yourself and your relations, and that sounds awfully close to some definitions of consciousness.

As you say though, consciousness is very hard indeed to directly measure, which was no doubt why Blake was cautious in his claims. And he said nothing at all about sentience.

BOFH: HR's gold mine gambit – they get the gold and we get the shaft

Martin M

Clearly you didn’t have a three year old during lockdown. *Many* unscheduled appearances during meetings!

US Cyber Command shored up nine nations' defenses last year

Martin M

Re: Back doors in firewalls,

Boris is bombproof on this. Given the number of peccadilloes and indiscretions we already know about, any more would be considered a feature rather than a bug.

Twitter buyout: Larry Ellison bursts into Elon's office, slaps $1b down on the desk

Martin M

Nah, you’d be required to buy an upfront license at the beginning of the year for the number of likes you expected to receive. Then, if you got more, an audit would ensue during which it would be demonstrated that the number of Likes displayed on the tweets were actually substantial underestimates, plus you’re responsible for your Uncle Bernard’s likes too. At list price. With RPI inflation and penalty interest backdated to the launch of Twitter.

CAST AI puts out report on extent of enterprises cloud resource overspend

Martin M

Pretty difficult to see how something like AWS Lambda + Aurora Serverless V2 doesn't give you "automated dynamic resource allocation from a very large common pool with charging based on actual usage". Or Azure Cosmos DB Autoscale and Functions. Or Google Kubernetes Engine with Autopilot. Etc. etc.
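
To be concrete about the first of those, the below is the entirety of a deployable AWS Lambda function (Python runtime; names hypothetical) - no instances to size, and nothing billed when it isn't invoked:

    # handler.py - the platform allocates capacity per invocation and bills
    # per request and per millisecond of execution time
    def handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}"}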

"Government digital service goes titsup on launch" predates cloud technology and is more a comment on some gov IT environments. Selecting the lowest cost supplier, insufficient or unrepresentative nonfunctional testing, extreme political pressure to go live too early etc. etc. are technology-neutral, time tested recipes for failure. A surprising number do actually get it right (or at least right enough), but obviously you tend not to hear about those ones on the news.

That said: if load spikes fast enough, all of the pooled available cloud capacity and fancy autoscaling in the world won't help you. It's fast but not instant to scale, and if you're not prepared to tolerate some user-visible errors during that time, the only answer is a bit of overprovisioning.

But scaling is much faster in cloud than (at least some) on prem environments. I have worked with more than one enterprise with capacity management so poor that rollouts have been delayed while a new datacentre hall is completed, or kit works its way through the supply chain, or while a hunt is carried out for VMs that can be killed and dedicated servers that can be pulled. In cloud, at least the wait is generally measured in minutes not months.

BT must 'prioritize' between 'shareholders and workers' says union boss

Martin M

Re: No choice

If you had bothered to look up the legislation referenced, which is the thing actually phrased like an RFC, unlike Companies House guidance:

----

(1)A director of a company must act in the way he considers, in good faith, would be most likely to promote the success of the company for the benefit of its members as a whole, and in doing so have regard (amongst other matters) to—

(a)the likely consequences of any decision in the long term,

(b)the interests of the company's employees,

----

MUST … have regard (amongst other matters) to … the interests of the company's employees.

You can argue about the weight that might in practice be placed on those interests amongst competing concerns, but they are required by law to be considered. As I said.

You may have meant to say “My point was that the union was demanding boards ought to prioritise workers above shareholders which they can't do.“ but what you actually said was “Legally BT management is required to act for the best interests of the shareholders and nobody else”. Those are two different statements. The first is correct. The latter is clearly not.

Martin M

Re: No choice

Legally that’s not correct. The second of the 7 duties of a director, as described by Companies House and enshrined in the Companies Act 2006 s172, is indeed to promote the success of the company for the benefit of shareholders. However under this duty they are also required to consider the consequences of decisions for various stakeholders, explicitly including employees. In their words:

“Board decisions can only be justified by the best interests of the company, not on the basis of what works best for anyone else, such as particular executives, shareholders or other business entities. But directors should be broad minded in the way that they evaluate those interests – paying regard to other stakeholders rather than adopting a narrow financial perspective.”

AWS power failure in US-EAST-1 region killed some hardware and instances

Martin M

Re: Elastic

Small business owners have lacked IT expertise/clue since the dawn of computing.

And yes, I do blame them if they're so massively naive as to unquestioningly believe marketing. Most people wise up to that when they're about 5 years old, put a toy on their Christmas list off the back of exciting puffery, and receive underwhelming plastic tat.

Luckily, nowadays most sensible small businesses don't try to train their admin assistant to juggle EC2 instances, but instead go for a collection of SaaS. Many of those are horrible, and it is a lemon market, yet it's still almost always better than them trying to muddle through themselves.

Martin M

Re: Elastic

In other words, probably the exact same people who would have screwed up on-prem.

Martin M

Re: Elastic

They don’t. Most Platform-as-a-Service products automatically fail over in the event of an AZ outage.

If you’re using EC2 then you have to engineer your own solutions, but APIs and tooling allow you to automate almost anything. If you have to do anything manual in order to failover - other than possibly invoke a script - you’re doing it very wrong.
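
As a sketch of what "engineer your own solution" can look like on EC2 (boto3, all names hypothetical): an Auto Scaling group spanning multiple AZs replaces instances lost to an AZ outage automatically, with no manual failover step at all.

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        # The key bit: subnets in two different availability zones
        VPCZoneIdentifier="subnet-0aaa0000,subnet-0bbb1111",
    )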

JavaScript dev deliberately screws up own popular npm packages to make a point of some sort

Martin M

Re: Proof that the industry is mad

There are a huge number of excellent reasons why everyone should follow a controlled process for bringing in and caching dependencies - much reduced regression defects, assured availability of packages, reduction of some kinds of software supply chain risk, repeatable builds, robust configuration management, CI performance, bandwidth efficiency and probably many more.

However, Log4j ain't one of them. Those naively pulling in the latest version (especially if replicated all the way down the dependency chain) with every commit build were probably among the first to close that particular security risk - albeit entirely unintentionally...
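
For anyone wondering what that controlled process looks like in npm terms (registry URL hypothetical), the basics are just:

    # Pin exact versions in package.json ("1.3.0", not "^1.3.0") and commit
    # package-lock.json, then have CI install strictly from the lockfile:
    npm ci

    # Pull packages via an internal caching mirror rather than straight from
    # the public registry:
    npm config set registry https://npm-mirror.example.internal/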

Can you get excited about the iPhone 13? We've tried

Martin M

Re: Thanks!

Try “Secure ShellFish” - an scp client which gives you the ability to browse and download via the Files app.

Sync-and-share cloud services are also very slick for doing this, albeit often poorly supported on Linux desktops. I’m sure ownCloud or NextCloud would do it natively, Dropbox still has a Linux client I think, or - marginally less conveniently - you can rclone up to almost any service. Drag/drop to the cloud-synced drive, run rclone (if necessary), then go into the iOS app and download for offline use.
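
The rclone leg is a one-liner once a remote is set up (remote name and paths hypothetical):

    rclone config                                      # one-off interactive setup of "books-remote"
    rclone copy ~/Documents/books books-remote:books   # push the folder up to the service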

I have to say, I’ve not had a problem with this since pretty much the dawn of the App Store.

How Windows NTFS finally made it into Linux

Martin M

Re: Title to long :whaa:

I’ve actually had to re-enable hibernation recently because of the abomination that is Modern Standby being forced down my gullet by Microsoft/Lenovo/Intel. If I want my applications/open documents up and running after I lift the lid, it is now the only option that reliably avoids turning my laptop bag into a disconcertingly hot oven.

VMware to kill SD cards and USB drives as vSphere boot options

Martin M

Re: Nanny

A ‘fool reboot’ sounds a bit BOFH.

Seeing as everyone loves cloud subscriptions, get ready for car-as-a-service future

Martin M

Software defined computer

Traditionally known as a … computer.

Gives a whole new meaning to “rolling release”, particularly if the brakes stop working.

Remember the Oracle-botherers at Rimini Street? They are expanding third party support into open source database world

Martin M

Re: They missed the memo

I've just remembered a bit more about that Oracle support issue, and should probably say I'm being a little unfair above - it was the trickiest production support issue I've ever encountered: an intermittent error that only occurred every few days and made no sense at all. I was called in to help.

It turned out to be down to an unexpected interaction between Weblogic and a caching provider, due to an ambiguity in the XA distributed transaction spec. This architecture choice was not mine ... I hate XA, and have long held the opinion that anyone even considering using it (thankfully few, nowadays) should probably stop trying to hide the fact that they have a distributed system and instead ponder the benefits of idempotency.
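
For the unfamiliar, a minimal sketch of the idempotency alternative (Python, names hypothetical): rather than coordinating a distributed commit, make redelivery harmless.

    processed = set()  # message IDs already handled; in production this would
                       # be a unique constraint in the database, not in-memory

    def apply_change(payload):
        print("applying", payload)   # stand-in for the real business logic

    def handle(msg_id, payload):
        if msg_id in processed:
            return                   # duplicate delivery: safe no-op
        apply_change(payload)
        processed.add(msg_id)        # ideally in the same transaction as the change

    handle("msg-1", {"amount": 10})
    handle("msg-1", {"amount": 10})  # sender retried: silently ignored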

However, there was a certain amount of novelty being able to list "Sloppy language in an international technical specification" in the Root Cause Analysis :-)

Martin M

Re: They missed the memo

There’s a massive difference between knowing how to use a piece of infrastructure, which is what a full stack engineer does, and being able to provide third line support for it when it goes wrong and doesn’t behave as expected/designed. 2 a.m. with an outage going on is the wrong time to be trying to familiarise yourself with a huge, complex database codebase and be wading through mailing lists.

Whether Rimini Street can provide good support at their relatively small scale, supporting many different databases, is a separate question. The likes of EnterpriseDB (for PostgreSQL) and Oracle (for MySQL) have been doing this for much longer and have people who will properly know the code. Not that Oracle support is always great - I have in the past had to decompile bits of Weblogic to show them where the bug is.

As for MySQL for mission critical - yes, I’d take PostgreSQL over it any day, but you are aware that InnoDB has been the default storage engine for well over a decade?

Fired credit union employee admits: I wiped 21GB of files from company's shared drive in retaliation

Martin M

Re: Rather moronic

What she did was moronic. But I'm not sure I get the logic on why her fine should depend on how moronic the company is. Restoring for less than $10k is not “clever”.

Far from needing to restore from actual backups, with a proper setup this should have been a case of simply restoring a NAS snapshot - five minutes of actual technical work, tops; call it a half day with the surrounding paperwork. This is not advanced technology - I have had it on my home server, for free, for the better part of a decade.
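
By way of illustration, with ZFS (dataset and snapshot names hypothetical) the recovery is essentially two commands:

    zfs list -t snapshot tank/shared              # find the last good snapshot
    zfs rollback tank/shared@hourly-2021-03-21    # revert the share to it
    # (rollback needs -r if later snapshots exist and are to be discarded)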

But it sounds like even backups didn’t exist, so they’re spaffing money on disk recovery specialists instead.

Try placing a pot plant directly above your CRT monitor – it really ties the desk together

Martin M

Re: We’re talking CRT era here

OK, multi-monitor *graphics* support then :-)

And to be fair I suspect there were actually quite a few systems doing it well before Windows did, but the original context was about widespread use…

Martin M

Re: Most common fault was Magnets

We’re talking CRT era here - the first official multi-screen support was in Win98 I think; before that it was only possible via expensive/fragile hacks. As corbpm says, until LCD monitors came along an unfeasibly large desk was required anyway, so few people wanted to multiscreen.

I first had dual LCD monitors in 2003, but probably only because I was working on a trading floor and it was the default, hang the (significant, at the time) expense. When I started a new job in 2006, I had to spend some time convincing the head of department to give the development teams second screens, citing academic studies on productivity increases and defect count reductions. Probably the single best thing I did for them while I was there…

Martin M

Re: Most common fault was Magnets

I had almost forgotten about degaussing aka The Wibble Button. I can’t remember a time my monitor actually needed it, but it was sufficiently satisfying that I did it anyway. Thank you for making my day! DDDONNGGGGonngonggongong…

Teen turned away from roller rink after AI wrongly identifies her as banned troublemaker

Martin M

Re: exhibit ingrained racist assumptions in the design

No apertures, exposures. D’oh.

Martin M

Re: exhibit ingrained racist assumptions in the design

Do you seriously think the CCTV cameras used in the rink are likely to be genuine HDR?

Shooting stills in HDR is relatively easy - the camera just takes a burst at multiple aperture settings and there’s an (albeit computationally expensive) process to combine them. Although the results will not be good if anyone moves during the burst.
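
For illustration, the stills process is a few lines with OpenCV's exposure fusion (file names hypothetical; and per the correction above, the burst varies exposure rather than aperture). Any movement between frames produces exactly the artefacts mentioned:

    import cv2

    # A bracketed burst: under-, normally- and over-exposed frames
    frames = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]
    fused = cv2.createMergeMertens().process(frames)  # float image in [0, 1]
    cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))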

Shooting video in HDR currently requires at least $1000 of camera, more usually $2000. I doubt those are capable of streaming the result easily, and running around with SD cards or SSDs doesn’t really work in this scenario.

I can’t imagine HDR hitting the CCTV/face recognition market for some time yet.
