
Plea to Microsoft UI teams
Go do something else - solve hunger, find cures for disease, find a solution to the Middle East, etc. - but please don't tinker with the UI.
A fortnight ago my Apple Watch automatically updated to WatchOS 10 and ever since it's taken me twice as many taps to perform basic tasks like telling the device to stop tracking exercise sessions. I've grown used to the redesign but I can also feel in my gut that Steve Jobs would have critiqued it by pointedly asking "Why is …
This sort of thing is why the major antagonist in Roberts' You Can Be a Cyborg When You're Older [1] is pretty much a Clippy allegory. LLM-enhanced Clippy is just that much worse.
[1] Frankly one of the best cyberpunk novels I've read, perhaps only behind Effinger's When Gravity Fails.
> effectively removing the bar between an idea for an AI chatbot and its realization as a GPT
Great. Remove the barrier between a half-arsed "idea" for yet another chatbot and dumping the useless thing on the world.
Also, I note that in the glowing, uncritical approval of this article there is no consideration of how to check that your marvellous creation actually bears any resemblance to what you think you wanted: with no knowledge or understanding of the options available when creating a new GPT, the proud new chatbot owner has no idea what has been assembled in their name, even if they bother to examine it (are they even given the opportunity to read the set of chosen options?).
The Computer spat out a chatbot; it was put into service. By the time it has been running long enough to be demonstrably useless[1], the creative person has moved on to something else and forgotten what they even said when they "set up" their monstrosity.
Blindly trusting one LLM to create another! What a concept.
[1] and you will be lucky if it only turns out to be useless!
On first reading, this article seemed to be approving of this whole idea, and I reacted (somewhat negatively, cough) to that view.
But now I am thinking about that last paragraph:
> instead, we move from complexity into ambiguity
That *is* just a carefully worded subtle jibe, designed to turn around the whole meaning of the article and show the author's worry about and contempt for the concept, isn't it? Reassure me that this is the correct way to interpret that last paragraph.
Please, please, don't let it be the case that this last paragraph is meant to be taken at face value and that Mark Pesce wants us to accept that knowingly introducing ambiguity into an end-user's interface is merely something we have to shrug our shoulders about and accept as the price of improving things for - well, of improving things for the person who wants to foist a new chatbot on said end-user and do so without having to bother their pretty little creative head with anything as dull and dreary as a "developer", or an "analyst" or, worse yet, a "programmer".
My new LG TV is a PITA when it comes to changing the input device. I did it once and it took seventeen clicks and four menu traverses.
The old... make that ancient Sony that it replaced could do it in one click, a list traverse and a click.
I simply won't bother to do it in software in the future. I'll just swap the HDMI cables.
In general, I find that software designers are doing their best to make using their product a PITA.
Gone are the days when 'Ease of Use' was measured and, if it got too hard, the design was changed to make it easier.
I kinda miss those days.
The more awkward something is to use, the more likely you are to throw it away and buy a new one. It might not be from the same company, but that doesn't matter because they get the customer churn from all the other companies being abandoned.
1) Invent something
2) Make it impossible to use
3) $$$
Even basic monitors are as bad. The latest Dell ones have the power button on the back, where you can't see it when sitting in front of the monitor, and the menus are controlled by this weird little joystick thing (also on the back). Classic example of form over function - a row of buttons along the bottom edge at the front was far easier to use.
The monitor I'm typing this on turns off when the computer it's attached to does. Then it turns itself back on to display a cheery little message about how it no longer has an input, so it's going to go to sleep, and then finally it goes to sleep.
I swear you can't do that sort of idiocy by accident.
That's for putting in the drawer and never using again, as luckily they still supply a proper remote with lots of buttons on it.
HOWEVER, my Samsung Q90 puts up a message about 20 seconds after switching on asking why you aren't using the magic remote, and during the 10 seconds it is on screen you can't use the standard remote - WTF?
On the other hand, on the 3-year old LG TV that I gifted to my elderly parents it's altogether too easy to switch sources. Every few months I receive a call from them telling me "the TV has broken down" because my father inadvertently sat on the remote and it takes only a single butt-press to switch source from the TV aerial to the AV1 input, and now the TV only displays a "no input signal" message and they have no clue what it means...
Perhaps LG has made it more involved to switch sources because it was too easy to do inadvertently and with faded legends on the remote keys it wasn't obvious how to revert the change?
My Sony rear projection (which gives you an idea of the age of it) has a dedicated button just for that. Honestly, for that TV, I could manage with 4 buttons:
Power, Input, Volume up, Volume down.
Why, oh why, would they take AWAY such an easy, obvious interface?
(At least our TCL Roku TV makes it easy to pick an input when first powered on.)
I tend to think that these misfeatures are about making sure that the tick-box features you "have to have" (like multiple inputs) are present, but you don't use them - instead using the manufacturer's store/channel/service to give them (or their "partners") more moolah. In other words, it's a "nudge".
They can't monetise HDMI, but they can monetise viewing choices made through the TV UI, be it channel selection or streaming content viewing.
The problem with adding an ambiguous layer between you and what you want to do is whether you trust the AI (or rather the organisation implementing the AI) to do what is right for you or what is right for them. So if I ask it to call someone, will it choose the cheapest option or the one that makes them the most money?
Worked on a fiendishly complicated MCAD software product. It had options for *everything* and then some. By the time I left in 2005, there were over 220 options! The kicker was that some options were interrelated, i.e. changing one setting silently changed others.
We did one release where product management were trying to optimise an 'Out of the Box' configuration for the settings. These "optimal" settings were tweaked at least 3-4 times per day during the dev cycle, which showed that nobody really knew what was going on.
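That sort of silent coupling is easy to picture in code. A minimal sketch of the pattern (the option names here are purely hypothetical, nothing to do with the actual product):

```python
# Illustrative only: hypothetical option names, not the real product's settings.
class Options:
    def __init__(self):
        self._values = {"units": "metric", "tolerance": 0.01, "auto_regen": True}

    def set(self, name, value):
        self._values[name] = value
        # The kicker: some options are coupled, so changing one silently changes others.
        if name == "units":
            # Switching unit systems quietly rescales the tolerance too.
            self._values["tolerance"] = 0.01 if value == "metric" else 0.0005
        if name == "tolerance" and value < 0.001:
            # Tight tolerances silently disable automatic regeneration.
            self._values["auto_regen"] = False

    def get(self, name):
        return self._values[name]


opts = Options()
opts.set("units", "imperial")   # the user only asked to change units...
print(opts.get("tolerance"))    # ...but the tolerance was rescaled too: 0.0005
opts.set("tolerance", 0.0005)
print(opts.get("auto_regen"))   # ...and auto-regeneration is now silently off: False
```

Multiply that by 220-odd options, with the defaults being retuned several times a day, and it's no surprise nobody could say what the out-of-the-box configuration actually did.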
an AI chatbot to tell me what I already know
Keep
It
Simple
Stupid
And just remember that 95% of all work done with MS Office is done with 10% of the options.
And that's why I'll always ask users what they want on the interface... because they don't care how clever your multi-tasking, multi-threaded application is.
A button for the destination, a file selector and a big red 'Send' button, and they'll be happy as anything.