A bit over 20 years ago I started using Minolta SLR cameras, followed by DSLRs. My current camera from Sony (which has a compatible lens system) replaced the optical viewfinder with an electronic one. While being able to overlay graphics into the eyepiece is useful, it took some time to get used to its electronic version of reality, and I still like the purity of my earlier cameras’ optical systems.
Posts by Simon Harris
2764 publicly visible posts • joined 1 Mar 2007
Page:
Metaverse? Apple thinks $3,500 AR ski goggles are the betterverse
Air Force colonel 'misspoke' when he said an AI-drone 'killed' its human operator
Airline puts international passengers on the scales pre-flight
EU tells Twitter 'you can run but you can't hide' from disinformation policy
Intel mulls cutting ties to 16 and 32-bit support
Re: 8080 and 8086
I remember the 68000 did the same thing. As well as the standard clock signal there was a slow clock output at 1/10 the CPU clock speed (typically 0.8MHz for an 8MHz CPU), which was used to feed the E input of 6800-family IO devices, as Motorola were a bit slow getting 68000 IO devices out. If I remember correctly, there were also a few 68000 opcodes specifically designed to access 8-bit 6800-family devices that only used half the data bus.
Encoded 'alien message' will reach Earth today, but relax: It's just a drill
Sci-fi author 'writes' 97 AI-generated tales in nine months
Re: each hour’s earnings is about $2.95.
You have a good point - I was trying to find out about the economics of self-publishing, and came across an article suggesting many authors make more from giving talks about their writing than from the writing itself - the books are more a way in to doing the talks.
“Impressive returns”, or not.
Running the numbers, if I’ve understood it correctly:
97 ‘books’, each of which takes 6-8 hours to write (call it 7) means 679 hours of work.
$2000 income from that means each hour’s earnings is about $2.95.
At that productivity rate I might question whether it’s actually worth it!
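For what it’s worth, the back-of-the-envelope sum above checks out:

```python
# Back-of-the-envelope check of the figures quoted above.
books = 97
hours_per_book = 7            # midpoint of the quoted 6-8 hours
income_usd = 2000

total_hours = books * hours_per_book
hourly_rate = income_usd / total_hours

print(total_hours)                 # 679
print(f"${hourly_rate:.2f}/hour")  # $2.95/hour
```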
Professor freezes student grades after ChatGPT claimed AI wrote their papers
Large language models' surprise emergent behavior written off as 'a mirage'
That, detective, is the right question.
It seems that the only real intelligence in LLM AI comes from the user knowing how to prompt it to give the answer they want. Everything after that is just a souped-up word association game.
In the words of Dr Alfred Lanning’s hologram in I, Robot: “You must ask the right questions.”
Microsoft can't stop injecting Copilot AI into every corner of its app empire
Re: AI, AI, AI!
I think there are some AIs (or rather ‘applied statistics’ - I don’t believe there is any actual intelligence) that are yielding good results in some fields, and I wouldn’t dismiss it all - for example, the pancreatic cancer-finding AI.
But I think that is the problem - the more reliable AIs are trained on a limited dataset specific to the problem. When you have a very large language model of everything, and something that can answer general questions on anything, I think the noise in the system is going to make it unreliable.
Maybe what we need instead of general AI are multiple independent AIs trained for specific tasks - e.g. software development assistants trained on good software, medical assistants trained on medical sources, etc. - without them also being trained on the complete works of Shakespeare, all the nonsense on Reddit, and so on, so that what they produce is focussed on their ‘expertise’ without the noise of all the other stuff. Yes, it probably means they will have more limited conversational abilities and users will have to think more about how they query them, but I’d rather that, if it means they ‘know their stuff’, than plausible but wrong answers some of the time.
Dell reneges on remote work promise, tells staff to wear pants at least 3 days a week
Re: It's not for everyone...
Sure, not everyone can, and not everyone wants to (I prefer to go in to work).
But talking about Dell: yes, they produce physical products, but many of their production lines are outsourced to Asia, with some in Europe and South America too. Not everyone involved in the process of creating a physical product actually needs to be there to make the thing.
Cisco: Don't use 'blind spot' – and do use 'feed two birds with one scone'
DEF CON to set thousands of hackers loose on LLMs

Re: Better than hackers for red teaming an LLM
If, as other articles in this esteemed journal suggest, ChatGPT had mostly been taught using works of science fiction, possibly a bunch of hackers might be more familiar with the source material than academic linguistic professors.
Not suggesting of course that we’re all a bunch of nerds who prefer a rollicking space opera to the complete works of Jane Austen!
OpenAI's ChatGPT may face a copyright quagmire after 'memorizing' these books
Re: Stop anthropomorphizing computers. They hate that.
If it read everything known to humankind only once (and there were no quotations of other works within those, which excludes a lot of works) then it might be that it doesn’t have an internal representation of any one work.
However, if it has had the same document as input multiple times (e.g. it’s crawled the web and found multiple copies, or multiple copies of extracts) or there is something that is often quoted in other works, wouldn’t the model parameters be biased towards reproducing sequences found in those works?
Could that reinforcement then be considered ‘memorising’?
Here's how the data we feed AI determines the results
Suspension of disbelief.
When watching a movie (particularly a historical drama where the facts are known) and they get something very wrong about a subject I have a particular interest in, my suspension of disbelief vanishes: it sets me wondering what else they’ve got wrong in subjects I don’t know so much about.
I get a similar feeling with ChatGPT - I’ve seen enough examples of things I know about that it’s got wrong that I have trouble believing anything it comes out with.
OK, so it may be fine to use it as a springboard for ideas, but I wouldn’t trust it with facts.
ESA's Jupiter-bound Juice spacecraft has a sticky problem with its radar
If you don't get open source's trademark culture, expect bad language
ChatGPT creates mostly insecure code, but won't tell you unless you ask
Re: This is totally expected - it doesn't know what you're trying to do
I do actually think that ChatGPT might be beneficial, not because it produces (or doesn’t produce) any reasonable code, but that the ‘conversation’ you have with it along the way, correcting it and providing more information, might clarify in your own mind how to solve a particular problem.
“down to the training data”
With current AI generations, as I understand it, the training data was frozen about three years ago.
However, if future generations are continually learning, how ring-fenced will the training data be? And how easy would it be to deliberately introduce extra vulnerabilities into AI-generated code by poisoning the training data - posting multiple copies of intentionally vulnerable code and its associated keywords?
Re: What a future
“and ideally, we used to write and unit test our own code”
I thought the ideal was that someone else would write the unit tests based on the specification and interface of the unit. That way the test writer is less likely to make the same assumptions as the unit writer and more likely to pick up obscure fault conditions.
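As a sketch of that idea (the function and its spec here are hypothetical, purely for illustration): the test writer sees only the interface - “parse_port(text) returns an int in 0–65535, or raises ValueError” - not the implementation, so the boundary and failure cases come from the spec rather than from the implementer’s assumptions.

```python
import unittest

# Hypothetical unit under test - the tester works only from its spec:
# parse_port(text) -> int in 0..65535, raising ValueError otherwise.
def parse_port(text):
    value = int(text)  # raises ValueError for non-numeric input
    if not 0 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

class TestParsePort(unittest.TestCase):
    # Cases derived from the specification, not the implementation.
    def test_typical_value(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_boundaries(self):
        self.assertEqual(parse_port("0"), 0)
        self.assertEqual(parse_port("65535"), 65535)

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("65536")

    def test_not_a_number(self):
        with self.assertRaises(ValueError):
            parse_port("http")
```

Run with `python -m unittest` - the point being that a tester working from the spec alone is more likely to try the out-of-range and non-numeric cases the unit writer quietly assumed away.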
Thanks for fixing the computer lab. Now tell us why we shouldn’t expel you?
Firmware is on shaky ground – let's see what it's made of
Firmware used to have a consumer market.
I remember that back in the late 70s/early 80s RAM was one of the more expensive components in a computer (more so than ROM), and backing storage was slow or expensive. Various computers were therefore sold with the ability for the end user to buy extra firmware, either as a ROM to plug into the motherboard (e.g. the word processor, spreadsheet and graphics extension ROMs for the BBC Micro) or as cartridges, so whole applications were there instantly and didn’t eat valuable RAM.
I say ‘buy’, but more likely buy a blank EPROM, and borrow someone else’s firmware and an EPROM programmer.
Cisco Moscow trashed offices as it quit Putin's putrid pariah state
Plagiarism-sniffing Turnitin tries to find AI writing by students – with mixed grades

Fail
“the features now integrated into its existing anti-plagiarism products can detect AI-generated text with "98 percent confidence" – but has failed to provide any evidence of this.”
As someone who will soon be receiving a pile of student dissertations in his inbox to mark (which, incidentally, are screened by TurnItIn), if such a statement appeared in their dissertations without supporting evidence, that would be a fail from me.
Can ChatGPT bash together some data-stealing code? With the right prompts, sure

Easily fooled
I saw another article yesterday about ChatGPT being asked to generate Windows licence keys (which it did with a low success rate).
Ask it directly and it tells you it’s illegal and you should pay for it (actually as the demo was for Windows 95, it said to buy a newer one). Reformulate the question and it has a go.
If it’s so easy to hoodwink ChatGPT into doing things it’s specifically not supposed to do, I have an inheritance from a Nigerian Prince it might be interested in.
'Slow AI' needed to stop autonomous weapons making humans worse
Government by AI
I remember reading a novel quite a while ago where the process of government had become so automated and efficient that things were happening too fast and a government department existed solely to mess things up a bit and slow things down to a human speed again.
I have a feeling it might have been one of the Niven/Pournelle collaborations, but without going through my book collection I can’t be sure. Can anyone remember the book?
Parts of UK booted offline as Virgin Media suffers massive broadband outage
In the battle between Microsoft and Google, LLM is the weapon too deadly to use
The difference between the LLM that Microsoft is using and a traditional search engine is that a search engine will find pages and you decide how relevant each of the results is to your query. The LLM obfuscates that somewhat by boiling it down with a plausible description - unless it provides a valid list of references (and ChatGPT has been known to make them up), what are the sources it has used to provide that information, and has its prediction method validly come up with the answer?
An example - in a 6502 forum, one member asked ChatGPT what the best method was for breadboarding a 1MHz clock oscillator for a 6502.
ChatGPT suggested using a 555 - now, that’s a classic timer that has been used for decades as an astable oscillator, but 1MHz is pushing it to the limit (possibly past its limit). So ChatGPT had linked the concept of oscillator to the 555, which in many cases would be reasonable.
It then went on to suggest passive component values and what to connect to each pin of the 555. This uncovered two more problems: the components suggested gave a frequency nowhere near 1MHz, and while some pin connections were correct, others were completely wrong. It’s as if it had learned a pattern for how to describe component values, presumably from a range of online examples, and either the training data was wrong or it just didn’t have strong enough network connections to associate the correct components with the desired frequency; as for the pin descriptions, presumably it learned how to describe connections in general but the prediction mechanism couldn’t make any sense of the specifics. The whole thing was framed as a plausibly written set of instructions, though, and if you followed it (rather than looking in the readily available data sheet, easily found with any search engine, which has the actual circuit) you’d be scratching your head trying to work out why it didn’t work.
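For anyone curious, the data-sheet approximation for a 555 astable makes this sort of mismatch easy to check - the component values below are illustrative, not the ones ChatGPT suggested:

```python
# Classic 555 astable frequency approximation from the data sheet:
#   f = 1.44 / ((R1 + 2*R2) * C)
def astable_freq_hz(r1_ohms, r2_ohms, c_farads):
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# Even with aggressively small (illustrative) values...
f = astable_freq_hz(1_000, 220, 1e-9)   # R1=1k, R2=220R, C=1nF
print(f"{f / 1e6:.2f} MHz")             # 1.00 MHz - but only on paper

# ...a bipolar 555 typically tops out at a few hundred kHz in practice,
# so 1MHz is at or past its limit, as noted above.
```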
If you didn’t know it was bollocks you would probably think it was a reasonable answer. When you know about a subject and you see the answer is bollocks, you worry about how poor the answers might be on subjects you don’t know much about.
Education
One problem I think will emerge this year is that we don’t have a consistent approach to how LLMs affect education, or how educational establishments should respond, with some banning them outright, some embracing them, and some somewhere in between.
My own institution considers ChatGPT-generated essays (along with those bought from essay farms) a form of plagiarism - and as someone who will be marking project dissertations soon, we need some way to determine what is actually the students’ own work (apparently TurnItIn is working on a ChatGPT detector).
But at the wider level, we know that alongside the useful output from such systems there is also some right bollocks. When an LLM gives the impression of doing the thinking for us, it can present that bollocks as looking just as reasonable as the correct answer. We already have problems with people believing nonsense they read online - how do we educate people not to further subcontract their critical faculties to a machine?
Scientists speak their brains: Please don’t call us boffins
Re: Paging Ms. Streisand
I think there’s a bit of a difference.
A boffin, to me, is someone who actually does science.
A geek or nerd may do science, but the terms might also refer to someone who just has an intense interest in it, or in some other subject that isn’t necessarily scientific - for example, you could be a language nerd or a history geek. But I’ve never heard ‘boffin’ used outside of science.
RIP Gordon Moore: Intel co-founder dies, aged 94
Re: And I had just bought some more Xeons, too…
I had an orange plasma Elonex 386sx ‘laptop’ back then (about 1990) - I remember the screen did get pretty hot.
It took so much power they didn’t even bother with batteries, so it needed plugging in wherever you went, and being somewhat weighty it was more of a luggable than a laptop. I think it cost me a little over £1700 - the same price as the 286 machine I’d bought at the beginning of 1988 with an EGA monitor.