* Posts by chelonautical

14 posts • joined 26 Oct 2015

UK banking was struck by one IT fail every day for most of 2018


The meaning of "major incident"

Interesting data, but I don't find these raw figures very helpful on their own. As the correspondent noted, there's no mention of the duration of the incident. Also, I don't get any sense of the severity of the impact as there's no detail about which bank functions were affected. Some of the things that banks do are more critical or time-sensitive than others.

As an example, the inability to process outgoing debit card payments for even a couple of minutes could be extremely inconvenient (not to mention embarrassing) for the customer. By contrast, if the system that handles my outgoing standing orders is down for (say) 20 hours but eventually manages to send out a payment slightly late on the same day it was already scheduled to be paid then I probably won't even notice that. The first outage is a real nuisance, the second far less so - although still serious from the bank's perspective, of course.

I was curious as to what "major incident" means so I went digging... The relevant regulation appears to be http://www.legislation.gov.uk/uksi/2017/752/regulation/99/made - it doesn't classify incidents, although regulation 98 says "... the payment service provider must establish and maintain effective incident management procedures, including for the detection and classification of major operational and security incidents". I interpret this to mean that each payment provider can have its own definition of "major incident", in which case comparing the numbers between different banks won't necessarily yield an apples-to-apples comparison. A more honest bank (if such a thing exists!) would look worse than its competitors, so without strong oversight of the criteria used, the likely effect of this kind of publicity is a race to the bottom to define "major incident" as narrowly as possible, thereby defeating the purpose of the regulations in highlighting such problems.

It's a pity because in principle I welcome greater transparency in this area; I'm just not sure it will work as intended.

New era for Japan, familiar problems: Microsoft withdraws crash-tastic patches


Re: People should stop using calendars...

> @Spazurtle re UNIX time. ... that's great ..... until 2038.

> Everything will stop working in 2038.

Only if using 32-bit time fields based on seconds since the epoch. A 64-bit time field would be fine.

I interpreted the statement about UNIX time as meaning an internal date/time representation based on the number of elapsed seconds (or smaller units if necessary) since a fixed reference point, which need not be the same representation shown to the user. However the timestamp is stored, rendering it for display is a separate step: someone with a Japanese locale might see different date output from someone in the US, for example.
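To make the 32-bit limit concrete, here's a minimal sketch (in Python, purely for illustration — nothing to do with Microsoft's actual internals) of why a signed 32-bit seconds-since-epoch field runs out in 2038 while a 64-bit one is fine:

```python
import datetime
import struct

# A signed 32-bit time_t counts seconds since the Unix epoch (1970-01-01 UTC)
# and tops out at 2**31 - 1 seconds: 03:14:07 UTC on 19 January 2038.
MAX_32BIT = 2**31 - 1
print(datetime.datetime.fromtimestamp(MAX_32BIT, tz=datetime.timezone.utc))
# -> 2038-01-19 03:14:07+00:00

# One second later, a signed 32-bit field wraps around to a negative value
# (same bit pattern, reinterpreted), which decodes as 13 December 1901.
wrapped = struct.unpack("<i", struct.pack("<I", MAX_32BIT + 1))[0]
print(wrapped)  # -> -2147483648

# A 64-bit field holds the same instant (and a few hundred billion years
# more) without any trouble.
print(datetime.datetime.fromtimestamp(MAX_32BIT + 1, tz=datetime.timezone.utc))
# -> 2038-01-19 03:14:08+00:00
```

The display step is then just formatting: the same 64-bit timestamp can be rendered with whatever calendar or locale the user expects.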

Of course I don't know how Microsoft are handling this internally so perhaps they're using a different approach. Or maybe a few different approaches have sneaked into their codebase over time and now they have to fix a load of different code paths to make them consistently handle the new era. I can only speculate as to why it seems to be tricky.

I see you're writing a résumé?!.. LinkedIn parked in MS Word


The rise of the machines

Well I'd heard that machines were going to steal lots of jobs. Maybe I was wrong about how they're going to do it. If they can watch enough humans apply for jobs then perhaps they can beat us at the application stage. Opt out of this - don't help the machines learn to replace us!

Unlocked: The hidden love note on the grave of America's first crypto power-couple


Re: Awwww...

Thanks for a really nice article. A good mixture of technology, history and some interesting people.

Also I find it fascinating to note how far back in time some of these ideas originate. I'd heard of the ancient Caesar cipher but Baconian ciphers had passed me by. I for one would welcome a few more articles about the history of cryptography (or indeed the history of other technologies for that matter).

UK parliamentary email compromised after 'sustained and determined cyber attack'


> How? They're all a collection of pompous, self-obsessed, talent

> free clowns, with no relevant education or experience

One of the core problems with politics is that democracy and psychology can combine to produce problematic outcomes.

Most people vote for politicians who appear the most confident and certain in their beliefs. In an increasingly complex world, confidence and certainty are reassuring characteristics in leaders. Therefore we end up with politicians who are above all else confident, regardless of their actual ability or knowledge. Most voters probably don't mean to choose a brash ignoramus to represent them (and not every MP is one); it's just frequently a side-effect of how the system works.

But the Dunning-Kruger effect means that many of our leaders are over-confident in their own abilities and understanding. If they aren't aware of a particular threat or problem then they don't see any point in doing anything about it and it wouldn't occur to them to ask anyone else because they are already convinced that they know everything (e.g. see Gove's remarks about experts). In the abstract they know that national security is very important, but most of them don't know how that translates into technical and administrative controls.

There are a few good smart people who manage to get into politics, people who listen to others and seek expert input before forming opinions and policies. They just happen to be the minority exception to the general rule.

For these reasons it's unlikely that lessons will be learned by everyone who needs to learn them. However, that doesn't mean we shouldn't try to educate our politicians to do better. Also we need skilled experts to design and implement better systems, in order to be less reliant on the knowledge and whims of the individuals concerned. I'm going to hope for the best, while still dreading the worst (as per usual really).

Sorry, Dave, I can't code that: AI's prejudice problem


Re: This is a known problem with a known solution

> If we ever achieve strong AI, that is artificial sapience, it's going

> to be as biased and stupid as we are. But it may be a lot faster

> at being so.

Yes, faster and also much more pervasive. It's likely that some of these systems (e.g. the larger, more famous, systems run by the likes of Facebook and Google) will be regularly consulted by most organisations due to the sheer volume of data they hold about many people. We could end up with a situation where a handful of AIs (or AI-like systems) have huge influence over people's ability to find credit, insurance, employment and more.

Biases, errors and omissions in these systems could have a detrimental effect on most aspects of people's lives and could follow them around inescapably. The combination of automated decision-making and widespread dependency on a limited number of "AI" providers could result in life-long automated and repeated problems.

> If the singularity ever happens, it's not going to end well for humans.

Yes indeed. Also it could go badly for many humans long before then.

Regulate This! Time to subject algorithms to our laws


Re: Not just computers

>The problem I see with these is not the "AI" or whatever,

>but the massive consolidation in many industries and the

>lack of competition. This may be via corporate consolidation

>or merely that all the corporates are running the same

>software. Either way, that is unhealthy and intervention

>may be required to stir things up and bring back competition.

Good point. Another consolidation concern is consolidation in the data sources that record people's lives. Companies like Google and Facebook hold such vast quantities of data about the general public that their datasets will probably end up being given a very high weighting in any decision-making process. These companies know so much about us, why wouldn't every employer, bank, insurance company or any other business use them to find out much more about our habits and risk profiles? There is a lot of power in a few hands.

As a result of this consolidation of personal data into a small number of internet services, any errors or unfavourable entries in the records of Google/Facebook/etc. could easily follow us around. If someone once said something foolish on Facebook or Googled something risky, these things could be added to their digital "permanent record" and result in life-long disadvantage in employment, credit, insurance and many more areas. This is already the case to some extent, as people can be found online, but it used to require a degree of manual effort and patience on the part of a human. The next human at a different organisation might not bother, or might search less thoroughly, so there was less chance of any past online embarrassment becoming permanently life-ruining.

What's new is that large-scale automated information sharing and deep AI-based analytics of people's life history is becoming possible such that organisations can automatically judge people's entire digital lives and reject them in a matter of milliseconds. As a simple example, imagine going to an online car insurance comparison website and being told "all companies declined to quote for you" without knowing why. It might be because you posted a couple of things about social drinking on Facebook so they have all wrongly concluded you are a drunk and therefore a bad risk. It might be for some other reason entirely. Will it be possible to find out? Will companies admit any responsibility or will they just pass the buck to their third-party "lifestyle analytics" provider in another jurisdiction off-shore? Will there be an ombudsman who will help you find the culprit? Ultimately, who can you sue for the damages caused by incorrect automated decisions?

These issues already occur today on a smaller scale (e.g. several times a call centre operator has told me "the computer says X" without being able to explain why), but on a small scale it's easier to handle or shop around elsewhere. The danger is that unexplained incorrect or biased decisions become automated and repeatable to the extent that you can never escape them in any aspect of your life. If everyone uses the same software and the same data sources that becomes increasingly likely.

Can you ethically suggest a woman pursue a career in tech?


In hiring decisions there is often an unconscious bias towards "people like us" ("us" being the managers responsible for choosing a candidate).

In areas already dominated by men that's likely to result in new hires being men, which is the focus of this article regarding certain kinds of IT job. However, where women call the shots, it's similarly possible for them to exhibit a bias in favour of female employees. It's likely that hiring bias perpetuates a lack of diversity in a number of industries.

There is currently a lot of research into this phenomenon in order to understand it better and figure out ways to counteract it. It's fair enough to point out that the bias can cut both ways and, at the same time, it's important to remember that male-dominated jobs tend to be better paid: exclusion of women from male-dominated jobs tends to disadvantage them more than it disadvantages the men excluded from female-dominated jobs. Not saying that hiring bias against men (or indeed anyone) is OK, just that the impact is not necessarily the same.

For some pay statistics, see here:


For example, an average female "IT user support technician" is paid 15.5% less than her male counterparts (women hold 26% of those jobs), whereas a male in a "secretarial & related occupation" is paid 7.5% less than his female counterparts (women hold 92% of those jobs). But no prizes for guessing which type of role has higher average hourly pay.

(I apologise: without knowing the specifics of the assistant role mentioned here, I have assumed it to be a secretarial or related role, but feel free to check the link for the specific role the Zimbabwean candidate applied for and let us know the comparison.)

Different types of discrimination also combine to affect people in multiple minority groups more severely (where "minority" can mean "minority within the employee population for that job role"). It sounds like the Zimbabwean candidate in this example most likely lost out due to his nationality and gender. Tackling these issues goes to the heart of some very deep-rooted assumptions and unconscious behaviours.

As a white British man with a degree, I am fortunate to have so many good career choices available to me here in the UK, so I don't find it very concerning that a small number of lower-paid jobs would be willing to overlook me: I am still in a privileged position overall. I'm not trying to brag, just pointing out that I'm somewhat aware of my own privilege and that a similar awareness would benefit some of my colleagues too, especially when considering what we can do to make a positive difference to our job culture.

To be clear, I would like men and women (all humans, really) to have the same opportunities to work in whichever job they prefer and to be paid according to their skills and ability to do the role.

Regarding the specifics of the article, I'd like to help make things better for women in IT and would appreciate some advice on how best to do so from within a technical role that doesn't involve hiring people. I've worked alongside a small number of brilliant women and enjoy working with a variety of people to get different ideas and perspectives. I'm not sure how things are going to improve from where we are now but it's a good debate to have and I enjoyed reading this article.

Hard numbers: The mathematical architectures of Artificial Intelligence


Re: Bah!

If that fails, there's also the classic follow-up question:

"Why anything?"

"Because everything."

That might work in a pinch.

Huzzah! Doctor Who comes to Playmoverse


Re: Amazing :)

I certainly hope so... I miss Lester Haines. :(

On the bright side, I notice a few other El Reg hacks seem to like Playmobil so I'm feeling hopeful that more Playmoverse antics will be forthcoming.

The Register's entirely serious New Year's resolutions for 2016


Pros and Cons - 2p from me

As a reader for the majority of El Reg's 21 years, I feel like I've grown up alongside the site. For what it's worth, as a software engineer, the things I like about The Register are:

1) Insightful analysis, especially where it points out the discrepancies between marketing and reality. Bringing in knowledgeable people to comment on issues and cut through the hype is very useful.

2) Light-hearted tone and lack of concern about upsetting corporations. Maybe I'm quite a cynical person, but I like my IT news a little bit on the sarcastic side: it's a welcome antidote to the reams of corporate BS I have to wade through on so many other sites. Certainly scepticism and humour are very good things. That said, I'm not so keen on adolescent humour (e.g. knob jokes) and sexism (unless ironic) - these are things I have outgrown and I don't mind if The Register grows out of them too.

3) Broad range of content. Although it probably lacks some of the very in-depth technical content you might find elsewhere, The Register provides a good balance of IT industry news. There are not many places I can find news on (for example) cryptographic algorithm flaws, food recipes, corporate mergers, economic theory (goodbye Mr Worstall!) and storage technologies. I like coming here because I feel the broader content helps to put some of the technical stories into a real-world context. Although there are times where I'd like a bit more technical detail in the articles, the links provided are usually good enough for my needs.

4) The comments section. It's not perfect but, unlike most other sites, every time I read the comments section here I learn something genuinely useful and interesting. The bottom half of the internet is usually so full of idiocy and trolling that it's often best avoided for the sake of my blood pressure. However, The Register does very well to maintain a comments section which is actually worth reading.

Things I'm not so keen on:

1) Articles which read like a press release without any added analysis (cynical or otherwise). Sometimes these sneak into the site, although mercifully not very often.

2) Large graphics at the top of each article. I'm more open-minded on the desktop site, but for mobile devices they hog limited bandwidth and screen space.

3) Some of the adverts have been rather screen-hogging and obnoxious. I know adverts pay for the site, and I realise that most people aren't weirdos like me who would gladly pay a subscription fee not to see them, but a few of the ads here have been too distracting.

I'm enjoying seeing what other people here like/dislike too. Hooray for the comments section!

What the Investigatory Powers Bill will mean for your internet use


Re: you've forgotten about something

Yes, Server Name Indication is visible on the wire in plaintext as part of the initial TLS Client Hello. I've seen it myself in Wireshark traces of HTTPS connections. Use of SNI is common nowadays since many web servers host multiple sites and need that information to present the correct server certificate.

Don't forget that the domain name of a web site could potentially reveal a great deal of personal information about the person accessing it, even if the individual pages and requests are hidden. Visiting a website for a divorce lawyer likely indicates a relationship in trouble, visiting certain adult sites may reveal sexual orientation or fetishes, visiting a payday loan company could reveal financial troubles, etc. For this reason we should still be concerned about government plans to keep lists of visited domains. Also, whilst "use HTTPS" is good advice as far as it goes, there's a danger that the mantra becomes a substitute for deeper understanding of the risks involved.
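To see just how readable SNI is, here's a sketch (in Python, hostname purely illustrative) of the server_name extension encoding from RFC 6066 — the hostname travels inside the ClientHello as bare ASCII bytes, before any encryption has been negotiated:

```python
import struct

def sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension (RFC 6066) for one hostname."""
    name = hostname.encode("ascii")
    # ServerName entry: name_type 0 (host_name) + 2-byte length + the name
    entry = b"\x00" + struct.pack(">H", len(name)) + name
    # ServerNameList: 2-byte total length + entries
    server_name_list = struct.pack(">H", len(entry)) + entry
    # Extension: type 0x0000 (server_name) + 2-byte length + the list
    return b"\x00\x00" + struct.pack(">H", len(server_name_list)) + server_name_list

ext = sni_extension("divorce-lawyers.example")
print(ext)
# The hostname sits in the handshake in cleartext, greppable by any
# on-path observer:
assert b"divorce-lawyers.example" in ext
```

Everything else in the connection may be encrypted, but this one field is enough to build exactly the kind of visited-domains list discussed above.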

Having said that, thanks for doing an article about internet security in simple language. I've been looking for something I can show friends and family about the topic in words they can grasp. I'm not ready to show it to them in its current form: many commenters have pointed out various flaws in the text as written. With a bit of redrafting, it could become a really nice starter article for those who want to improve their awareness.

Windows 10 is an antique (and you might be too) says Google man


Re: Dear Matias,

The definition of intuitive might vary from one person to the next.

Not everyone considers touch interfaces to be intuitive: touch interfaces often require the user to memorise arbitrary and unnatural gestures to accomplish basic tasks. For example, one big argument in favour of a mouse interface is that it enables "discoverability": users can see all the main objects on the screen and can even see pop-up context help as they roll the mouse pointer over each object, so it becomes possible to explore the interface and discover new features. Discoverability favours recognition over recall: human memory is much better at the former than the latter... it's much easier to recognise an icon or a named menu option previously seen than to drag it from the depths of the brain.

Also, the longer a familiar interface remains the same, the more entrenched the mental schema for replaying known actions becomes. A new interface that is "almost-but-not-quite-the-same-as-before" can be very disorientating (e.g. "Where's the start button?" etc.). Knowing a bit about how the human brain works can help to design better user interfaces for the majority of users.

Of course technology moves on... it's not convenient to carry a mouse around with our phones and also people will prefer cleaner less cluttered interfaces on devices with smaller screens. Some of the specific challenges we face will vary as the display and input technologies evolve, but designers must not forget that there's a human using the device and the fundamental human "hardware" has not changed. In my degree many years ago, we learned about researchers like George Miller and Ben Shneiderman: they taught us a great deal about human memory and cognition which can be used to inform interface design. Psychologists could definitely teach us a few things about how to design systems for regular people and for my money Shneiderman's 8 golden rules are just as valid today. So I agree with the point that engineers are not always the best people to design a UI, as we often lack these insights.

Having said all of that, I'm a bit suspicious of the current fad around interface design. I believe good design is important and it's great to see it becoming more of a discipline. At the same time, I've seen some very fiddly and unnatural professionally-designed interfaces which are difficult to use and violate many of the good human-based design principles I learned back in the day. And sadly a few people who call themselves designers are too arrogant to listen to criticism from actual humans (constructive or otherwise).

Innovation is great, but everyone involved in designing systems should remember the humans who have to use the end product and listen to their feedback. If the majority of users hate something about a UI even after having lots of time to adjust then their rejection of that feature should be heeded. That doesn't mean never changing the UI, but it does mean careful thought and research before doing so.

We can't all live by taking in each others' washing


Re: The last one?

Like many other people here, I have very much enjoyed Tim's columns. I won't pretend to have understood every last word and I haven't always agreed with what I did understand, but then that's part of the fun of a vigorous and intelligent debate. Overall, I've found the series to be both educational and challenging, so I just wanted to say thanks and that I'll miss these articles in future.


Biting the hand that feeds IT © 1998–2021