* Posts by Electronics'R'Us

556 publicly visible posts • joined 13 Jul 2018


Techie banned from client site for outage he didn’t cause

Electronics'R'Us
Devil

Unusual fuses

When I was fixing two way radios (and other stuff) in Florida, we also had the contract with most of the local police for their blue lights and sirens as well as the radios.

One day, a deputy came in and told us the blue lights on his cruiser needed fixing as they kept blowing fuses; he had put a temporary measure in place in the fuse holder, which was the clamp type.

The temporary measure was a bullet.

Electronics'R'Us
Holmes

Been there

Some time ago (decades now) I was writing test / diagnostics software for smart payphones (basically a microcontroller with supporting bits that acted like a payphone).

In this case, I had written the code running on the test machine and the target hardware.

One day, the director of operations came over to my desk screaming that my tests had all gone wrong and we had 100% failures at test at the build house (who had one of the testing systems).

I went to said build house (not a short journey) and looked at the failures and they were all failing one specific test: Escrow relay. For those who may not know, this is the beast that will either take or refund your money. In this specific design, it was powered by a static inverter that generated 120VDC and dumped the energy into the relay when required.

I took a few of these failed units back to the office and the first thing the director of engineering said was 'test it on <other engineer>'s test setup'. Those in hardware will know this is hardly a scientific test, but the relay did operate (although to me it seemed sluggish). <other engineer> declared the unit was fine.

I took said board to a test bench and hooked up a scope to the drive circuit and fired the circuit into a known load. Rather than seeing the expected 120V peak and discharge curve [1], the signal went up to around 80V and abruptly dropped back to 40V from where the normal discharge curve was observed.

A close inspection of a couple of capacitors (which were carefully desoldered) showed they were delaminated and were shorting out at around the 80V seen above. That meant this was a build issue. While I was doing those tests, the director of engineering, director of operations and <other engineer> were watching, and once they saw what I had found they were rather red faced and left without another word.

I talked to the build house and they found the build process was heating that area of the board well above the temperature it should reach; once that was corrected, the problem was solved.

In this case, my tests were working just fine.

Note 1. I was sampling this signal multiple times in this test to capture the energy under the curve.

Trump spectrum sale leaves airlines with $4.5B bill for altimeter do-over

Electronics'R'Us
Holmes

Older designs

A lot of avionics have a pedigree from between ten and twenty years ago and there are good reasons.

An update to anything safety critical (and radalts can come under that heading) is a time consuming process as the new kit has to be thoroughly qualified and there is a rule: Don't be the first with anything. We only want to use a proven basis, which is sensible in safety critical design.

When those earlier systems were designed, the band limit filtering was easily sufficient for the operational profile required. As noted by at least one other, brick wall filters are very difficult to achieve (and we prefer to not have any software if possible).

Circuit fundamentals haven't really changed over the years, although there are some newer modules that can help with the situation, but I suspect part of the solution will be to detect interference and remove it. There are a number of ways to do that depending on what the interference type actually is.

So it is not that we don't use filters - we do. It is just that we need to update them to account for a new RF threat that did not previously exist.

Aviation delays ease as airlines complete Airbus software rollback

Electronics'R'Us
Stop

Re: Flight control system design

Unfortunately, many AntiFuse FPGAs from US vendors come under ITAR and therefore will not be used in any civil design.

Electronics'R'Us
Holmes

Flight control system design

Having actually done some of these, here are the pretty standard design requirements.

Triplex design, galvanically isolated so that if one lane goes out, the other two are not electrically affected. The processors must be 3 completely different architectures to make the chance of a common microcode problem (spurious execution) so small as to be effectively zero, as all the processors will have completely different engines under the hood (in reality, the numbers are more like 10^-12 or so).

Memory interfaces: L1 = parity protection; L2 and higher = ECC (correct one, detect two).

All the above is in hardware.

Every relevant sensor is read (typically 20 times per second) and the 3 channels vote on their results, which will naturally allow some margin for the different physical sensors. That is usually done either in a processor or, quite commonly now, an FPGA [1]. If there is a disagreement, the two in agreement will reset the other channel.
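
To illustrate the shape of the voting (this is not anyone's actual flight code; the tolerance, names and reset hook are invented for the sketch):

    #include <math.h>
    #include <stdbool.h>

    #define AGREE_TOLERANCE 0.5f   /* engineering-unit margin between healthy channels (illustrative) */

    /* 2-of-3 vote: returns the voted value and flags any channel that disagrees
       with the other two so the caller can reset it, as described above. */
    static float vote_2oo3(const float ch[3], bool disagree[3])
    {
        bool agree01 = fabsf(ch[0] - ch[1]) <= AGREE_TOLERANCE;
        bool agree02 = fabsf(ch[0] - ch[2]) <= AGREE_TOLERANCE;
        bool agree12 = fabsf(ch[1] - ch[2]) <= AGREE_TOLERANCE;

        /* A channel is only marked bad when the other two agree with each other
           but not with it; the two in agreement then reset the third. */
        disagree[0] = agree12 && !agree01 && !agree02;
        disagree[1] = agree02 && !agree01 && !agree12;
        disagree[2] = agree01 && !agree02 && !agree12;

        /* The median of the three readings is a common choice for the voted output. */
        float lo = fminf(ch[0], fminf(ch[1], ch[2]));
        float hi = fmaxf(ch[0], fmaxf(ch[1], ch[2]));
        return ch[0] + ch[1] + ch[2] - lo - hi;
    }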

Ultimately it is software that commands control surface movement based on pilot input against the control laws, a specification given to the equipment manufacturer (by Airbus in this case). If Airbus are doing the top level themselves, or have replaced the third party vendor's model, it looks like the control laws may well not have been properly defined.

I don't know the current software stack in the A320 but I do know the process is very strict.

1. FPGAs in this context are designed to the requirements of DO-254 (or the Airbus equivalent, which has the same requirements), which is very similar in scope and effect to DO-178 for software. FPGAs with SRAM configuration data are (or at least were) a big no-no. The old Actel ProASIC flash based series were the go-to parts in this area for a long time. Flash is pretty much immune to free neutron hits.

Most of the atmospheric particles that cause these upsets are free neutrons, the flux of which increases with altitude (it is an air pressure effect and varies quite considerably across the world).

Soup king Campbell’s parts ways with IT VP after ‘3D-printed chicken’ remarks

Electronics'R'Us
Devil

Re: I have one question

Several years ago (decades now) I had a book of Punch magazine cartoons.

One of my favourites was set in an ad agency with two people and one saying "I really have to believe in a product before I can lie about it"

Windows boss defends 'agentic OS' push as users plead for reliability

Electronics'R'Us
Headmaster

Paint

Microsoft has turned the humble Windows Paint app from a basic bitmap wrangler into an AI-enhanced nightmare

So very true. The previous version was simple and lightweight with a reasonably easy to use interface. The new version is a horrible mess, with standard controls that used to be icons or plain text on the menu (crop comes to mind) now hidden away behind an extra right click.

So a tool that was great for a quick image markup has become infested with bits no-one wants (the copilot button is probably larger than any other but I have yet to find a use for it other than to show how crap this stuff has become).

Amazon's AI specs aim to stop delivery drivers getting lost between van and porch

Electronics'R'Us
Facepalm

Post codes

I live in a rural area, just outside the village proper.

My postcode covers an area of perhaps 0.7 miles by 0.4 miles. The centroid (depending on which map service you use) will either put you in the village to the west (which is not in this postcode at all) or close to 0.5 mile to the east as I am on the western edge of the postcode.

Many are the times a delivery driver has been unable to find my location (it is also not visible from the country lane until you are on top of it).

Even the energy company was sending my initial bills to the wrong postcode when I moved in.

As far as I can discern, postcodes are based on a number of dwellings.

UK calls up Armed Forces veterans for digital ID soft launch

Electronics'R'Us
Windows

Another NO

This veteran is not beaten, broken or conditioned to kneeling before authority and I wasn't even during active service.

I left the service over 44 years ago (and if you think the transition support is bad now, do I have some stories to tell, although I was given time off for interviews).

When I left I was on my own and I didn't really worry about it as that was the same for everyone (not sure if that was true for officers).

Having had an interest in infosec for a long time, I can only see this scheme as being about creeping control, as there is no need for it as far as I can tell. I have an NI number and proof of identity already. It won't stop illegal immigration (illegal workers are employed by people who simply ignore the rules anyway).

So they can take their digital ID (and driving licence for that matter) and shove it where the sun don't shine.

From what I see in the veteran community (such as it is) the vast majority are saying no.

Campaigners urge EU to mandate 15 years of OS updates

Electronics'R'Us
Holmes

Electronics reliability

Electronics is highly reliable. Far more so than spinning things or connectors [1].

Properly designed electronics [2] will last decades, so my view of the arbitrary hardware requirements from M$ is that they are just there to help their 'partner companies' sell more hardware, and the ecological effects (potentially several million fully functioning devices going to landfill) be damned.

I agree with the commentard who took a position that if software forces a device to be obsolete (but it is still perfectly functional otherwise) then full documentation for the device in question should be made public [3]. That way, the devices can be re-purposed and still be useful and not end up as e-waste. I would not put any specific time limit on this; just a requirement that if the device in question is still functional at the hardware level then the documentation should be released if the software vendor has made their code base unavailable thereby making the system obsolete.

1. Connectors that are mated just once will last a long time. They are usually rated for a number of mating cycles over which they will remain operational. Too many mating cycles wears the conductive layers and reduces the strength of the contact retention mechanism. This also depends on the type of connector and how it has been mated, assuming it has been properly rated for the level of current going through the contacts.

2. I have seen electronics in the high reliability sector that is well over 50 years old and still operational, albeit having had relatively minor repairs over the years. A lot depends on the environment as this is highly thermally dependent. The (very conservative) rule of thumb is that a temperature rise of 10C reduces device life expectancy by 50%; in practice the effect is usually quite a bit less. Most of the things I have designed over the last few decades should last (for the electronics at least) for 40 years or more.

3. Not as difficult as some might assume. GPUs go obsolete after 1 or 2 runs at the fabricator (those runs are typically in the 100k+ to millions of units). The internal secret sauce has been deprecated, so a 10 year old GPU should have no reason to be kept secret. The same goes for other high volume products such as CPUs (Intel always keeps theirs behind an NDA, as do others).

Half of tech firms plotting restructures as AI hype bites

Electronics'R'Us
Devil

Re: Impossible to Hide Indeed

It is impossible to hide from the impact of AI hype.

When the real use cases are studied (there are some but the scope is limited) all the hype (and the enormous sums of money that have been sunk into this) will disappear down the metaphorical financial black hole.

No more 'Sanity Checks.' Inclusive language guide bans problematic tech terms

Electronics'R'Us
Devil

Re: It's being done for hardware too...

Many years ago (late 80s to early 90s) National Semiconductor (now part of TI) had a 3 volume set of databooks (yes, real books!).

In the special functions book the LH0033 / LH0063 buffer amplifiers were featured. These were designed for video applications and the databook stated Fast and Damn Fast Buffers.

In the applications section for the parts where the physical PCB layout was being discussed it had the headline:

ACHTUNG!

This was because the output transition rates were very fast (for the time - we surpassed that many years ago) and needed careful consideration for the manufacturing capabilities of the day.

From a time when people didn't take offence at every silly little thing; besides, techies were known for their sense of humour, quite aside from their perceived social issues (which most of us don't actually have).

UK's Ministry of Defence pins hopes on AI to stop the next massive email blunder

Electronics'R'Us
Holmes

Leak?

Such a leak could be construed as an offence under the Official Secrets Act, although that may be open to question.

At the very least, the person who made the blunder should have had an interview with the security vetting people. That is not very common for SC but it has been known to occur.

Perhaps further training and tighter supervision as well.

Anyone who comes into contact with UK classified information as a standard part of the day job has signed a little piece of paper (or electronically nowadays) that specifically reminds them of their responsibilities under the act. Not having signed it does not mean you are not covered by it, but that little piece of paper / file means you have no excuses (I didn't know about that....) if you do something dumb.

Top spy says LinkedIn profiles that list defense work 'recklessly invite attention of foreign intelligence services'

Electronics'R'Us
Holmes

Re: You mean some folks profiles

Security clearances are a bit of a Catch-22.

Individuals cannot apply for them; it has to be sponsored by a company that has a reason to require the clearance, such as list X companies.

In many cases, it is perfectly acceptable to start with the lowest level clearance (BPSS) which takes days (a DBS check is required but that is pretty simple to do and some statements from the candidate).

In my case, I have started with the BPSS when I did not hold a current clearance. When you go beyond this (SC for example) is when the real scrutiny begins.

The last time I had to do the application, it took about 4 weeks or so after completing the questionnaire for the certificate to be issued to the company.

As to not advertising it (even to HR), most companies have security passes that have visual identifiers to indicate the level of clearance so if you know those, you know the clearance level (to a certain extent - one I worked for had the markers for 'is cleared to SC or above').

Electronics'R'Us
Holmes

Re: Did IQs drop

I joined the Royal Navy in 1970 and I must have been cleared, but like you I was never told what level.

I figured it out several years later when a Petty Officer declared bankruptcy and received shortly thereafter a 'Services No Longer Required' letter. Bankruptcy = Lose your secret or above clearance.

In those days (as I am sure you may recall), bankruptcy was one of the immediate 'no clearance for you' items for anything above restricted [1] and therefore we clearly all needed to have a minimum secret clearance (even though the next level of classification was Confidential, the next level of clearance was secret). On one amusing occasion, I had to go to the signals office to get one of the signals for the squadron CO because the duty officer did not have a high enough clearance to collect it.

I currently still work in the defence industry (and my LI profile shows the company so it is pretty easy to know that I am in the industry) but the details of what I do (in terms of projects) is never shown; I might list some types of tasks I do which is sufficient for recruiters to contact me, if they are looking for someone with a particular skill set. My level of clearance is never stated and I am not in the 'SC' or 'DV' groups which would rather give that information away.

1. Information is routinely overclassified by UK govt. The newspapers that came on to the base would be stamped 'Restricted'

Windows 11 leads as October looms, but millions still cling to Windows 10

Electronics'R'Us
Happy

Re: Abomination

This is not a religious view nor do I look either up or down at other operating systems and users. They are using what they wish and I am going to use something that meets my needs.

I have used Linux (and written a significant amount of code for it) on and off for over 25 years. I know it has its 'quirks' but generally it just gets things done, which is exactly what I need. I had to talk to SWMBO to make sure she wouldn't have an issue with it.

Electronics'R'Us
Stop

Abomination

My work laptop got updated (it is definitely not an upgrade) last week and it has convinced me to finally make the switch to one of the Linux distros for my personal devices. We have nothing M$ specific so it should be fairly painless.

I don't want this spyware ad-slinger that masquerades as an operating system on my home devices (I have managed to pretty much clean up the Win 10 installations).

The UI is horrendous, lots of settings are even further buried in obscure locations (even getting back a reasonable UI took 2 hours of digging), and some apps are totally different (Paint, for example, which has been infected by the AI kool-aid; as a simple tool for grabbing an image and marking it up, the old Paint did a good job - not so much the new version, where common commands such as crop are no longer on the toolbar at all).

The work laptop has to have some flavour of Windows as I have CAD tools that are only available for Windows and I am being paid to use it, so I will simply put up with the slowdown caused by having to make more selections to get a particular application up and running.

So in a way, the work experience was useful and tipped me over the edge of waving goodbye to Windows on my personal devices.

Britain's billion-pound F-35s not quite ready for, well, anything

Electronics'R'Us
Holmes

A lot to unpick

In fact, the UK Lightning Force faces "major personnel shortages across a range of roles," which the NAO says are not likely to be resolved for several years, although it notes the MoD is recruiting to fill some of these gaps.

There are multiple levels for maintenance that require (sometimes) different skill sets and it varies by trade.

Flight line / flight deck. Limited to LRU replacement (a part of the avionics kit in a bay, for instance). Other things such as refuelling, adding / removing payloads and turnaround checks.

Hangar. For most electronics, this is the same as flight line (replacement of LRUs) but also full system testing can be done. For airframe and engines it can be much more in depth but that depends on a number of things such as level of training and equipment availability. When I was doing this stuff (admittedly a long time ago), we could do engine changes in the hangar when at sea.

Third line support. Often with contractors either from the company or simply sending equipment back to the OEMs for repair. We used to have a pretty much sovereign repair capability on bases and ships for most avionics but that changed over the years to support contracts and is now mainly card replacement within a box. The turnaround time for some equipment can take several months especially if the kit is US made and sold with the aircraft under FMS (foreign military sales). Even UK made kit [1] (of which there is actually quite a bit) may still need to go back via Lockheed Martin (depending on the wording of the contracts of which I am ignorant).

For weapons, it is not just the aircraft software, but also the rather slow process at Boscombe Down to get weapons approved onto an aircraft.

So it is not a simple problem.

The day to day stuff is flight line and hangar and needs to be serving RAF / RN personnel, of which there is a known shortage for a number of reasons. Until that is addressed, the problem will continue.

Note 1. I have been involved in the design of some of it.

UK students flock to AI to help them cheat

Electronics'R'Us
Holmes

Re: Glorified calculators

One of the subjects I had was 'mental arithmetic'. No book, no slide rules. Just questions and answers.

When I took my (belated) mathematics GCE O level in the 1970s it had 3 papers.

Mensuration (no, not that). Evaluation of numeric questions. The hardest I remember was 27 to the power of 2/3. No calculation aids of any type were permitted.

Plane geometry. As the calculation phase here depends on knowing which trigonometric function to use, calculators were permitted.

Calculus. As with geometry, knowing how to solve for the answer is the biggest challenge. I recall one question along the lines of 'a bullet is fired at an angle of <x> with a velocity of <y>. Ignoring wind resistance, what would be the highest point of its trajectory? Take g to be 10 m/s^2'.

The last two required you to show your work.

Wanted: Junior cybersecurity staff with 10 years' experience and a PhD

Electronics'R'Us
Holmes

Silly job descriptions

As noted by others, this has been going on for a long time.

I recall seeing a job advertised on LI (I know, I know) for an electronics engineer.

They stated 'Typically 2 to 5 years of experience' so not quite entry level but not far above it.

They wanted:

Degree in electronics. [6]

Expert in analog and digital design [1]

Design for EMC [2]

Experienced with high end CAD tools [3]

Mixed signal design layout expertise [4]

High speed digital design [5]

among other things.

1. I have yet to meet a graduate from any university with any real knowledge of analog design, and that has been true for at least the last 30 years. It typically takes a couple of years of solid mentoring (if it even exists), or the grad studying it for that amount of time (more typically 4 to 5 years) and actually building the circuits because it can advance their career, so they would need their own (at least basic) lab equipment. There is also a whole lot more to digital design than it seems at first glance. Sometimes the person who did not go to university has a better grasp, depending on the career path.

2. This is still one of the darker arts although there is a lot of science but every design is different and has to be assessed with an experienced eye. To get good at this takes several years in its own right.

3. Depends on where they were previously; these tools are expensive although there are open source alternatives; those alternatives don't come with support, though, which most companies need or want. Even the relatively low end of the market (Altium) runs to almost £10k per seat per year. They all do (roughly) the same thing but each has its quirks, so there is always a learning curve.

4. Hahahahahahaha. This is possibly the most difficult of skills and can take literally decades to fully master.

5. For this, one needs to understand a lot of things from all the above and transmission line theory at a minimum. I started in RF so it really wasn't that big a deal for me.

6. Slight afterthought; some of the best engineers don't have one.

The real applicant for such a position would be very senior (probably a principal level) and would laugh at the offered salary.

Ship abandoned off Alaska after electric cars on board catch fire

Electronics'R'Us
Holmes

Li+ batteries

These generate their own oxygen after ignition, so starving the fire of oxygen (foam type firefighting) is not really going to achieve anything.

In this case, it really doesn't matter where the fire started (although my view is it was probably a Li+ battery in one of the said EVs, as there shouldn't be much fuel in the other types, although that remains a possibility) as once the fire gets going there is little that can be done. Very small fires can be dealt with using Aqueous Vermiculite Dispersion (AVD). Source: FPA.

Lots of distilled water would help (cooling only) as that is an insulator but most water on the planet contains some salts (the level depends on a lot of factors).

There are a lot of reasons Li+ can catch fire and although the rate is low, the consequences can be somewhat brutal.

In some older gas turbine engines that used cartridge start, the fuel used generated its own oxygen so it isn't as if we don't know the risk.

Please tell us Reg: Why are AI PC sales slower than expected?

Electronics'R'Us
Holmes

What use case?

I do quite a bit of CAD work with this laptop (Dell with a core i5) and it copes quite well (I had the installed RAM upped to 16GB as that is really the minimum for such things).

I have seen some frankly ridiculous claims on various sites about AI can do this or that (the latest fad is it can design your circuit and lay it out and another claiming you won't need to read the datasheets anymore).

I can't see this aggregated ignorance [1] providing any assistance in the core part of designing electronics (by the time I start actually using the tools I have already designed the thing at a decent level). One of the other things I do is troubleshooting, and given that each and every circuit is a bit (or a lot!) different, I am not sure how it will somehow miraculously find the solution to all the problems we can see.

I just can't see the value (and nor does my IT group, AFAIK) in paying for internal functionality that we neither want nor need (to say nothing of the slurpage going on).

The vendors are desperate but if they are that desperate maybe they should be selling what the buyers actually want. [2]

1. Brazenly stolen from a post here some time ago. Best description I have ever seen.

2. I know, I know...

Some signs of AI model collapse begin to reveal themselves

Electronics'R'Us
Devil

Snake oil

I see a lot of posts around where "AI" (aggregated ignorance) can produce a working piece of electronics.

Circuit diagrams.

Layouts.

We can check your design so you don't have to read your datasheets!

All things I have seen.

For trivial circuits that can be extracted from a datasheet or application note it is just regurgitation and as for checking your datasheets for you, it is simply checking pin types and recommendations that are already in the documentation (such as 'you should have a decoupling capacitor of x as close to the power pin as possible'). Pure snake oil. I can check things faster than any of these 'tools' (I use that term in the pejorative sense) can possibly do.

I can also check things it cannot possibly know how to check (things like why I have occasional memory problems, which is probably due to crosstalk or some other design issue).

I fear (well, I expect) that someone who fancies themselves as skilled will use one of these circuits (which will have all manner of unusual edge cases) and then try and flog it as 'AI generated! - it's the future!'

AI engineering? Don't know whether to puke or laugh. Maybe both but not necessarily at the same time.

Empire of office workers strikes back against RTO mandates

Electronics'R'Us
Holmes

For those who can

For those who can effectively work from home then it makes sense on many levels.

I am aware that not all jobs permit this (building stuff requires hands on at a plant of some description) but my job (primarily electronics design) can be effectively done from my home office just as well as the company office(s).

My $company has a pragmatic view on this; just about all the engineering staff (including management) are on a hybrid pattern with varying amounts of time in the office, driven by need. There are times I really need to be in the office / lab / production and I fairly regularly go to the main office for precisely that reason (it's a bit of a hike so I go there for a week). There is a (fairly) local office, but the lack of public transport in my neck of the woods means driving; I do go in, but when it adds value, not just to park my butt on a seat.

The company has a 'green' initiative and has an annual report on how much we are contributing to pollution (this report is increasingly being demanded by our customers) so it is a win for everyone.

I gain more flexibility (my commute is about 10 seconds) and the company gains more productivity (no interruptions or distracting noises from Joe in sales). Collaboration? We use online tools and there doesn't seem to be a problem with that.

The management isn't paranoid about seeing people at desks; they just want results.

To progress as an engineer career-wise, become a great communicator

Electronics'R'Us
Holmes

Good communications is essential

If you want to be long term successful, anyway.

Contrary to some comments, this doesn't mean you studied Chaucer or necessarily the classics (although that doesn't hurt).

I have had to effectively communicate with people at all levels of organisations and the interesting thing is that they all effectively speak different languages, even though (in most cases) it has been English.

It isn't just about fixing problems quickly, either. Can you explain how the problem occurred so it can be prevented in the future? In my experience, being able to clearly explain what a problem was / is and how to prevent it is a very important skill.

Senior management wants to know there is a solution. Engineering management wants to know the solution and how it might be implemented. Engineers want to understand the solution and the practical effect.

QA wants to know how this will be documented and applied.

If you have junior engineers who you are mentoring (you really should be) then it is perfectly possible you will need to be able to explain a concept in at least two different ways.

If you are helping with bid work then you need to be able to explain how your part of the bid will solve the customer's problem.

As far as getting jobs goes, the fact I really know my core subjects (and a lot of tangential ones) would not be very useful if I cannot explain clearly how and why I know those things.

The list goes on but that is the gist of the subject. Explaining how a fix affects the business to a senior manager is a lot different to explaining how that will affect day to day engineering to the line manager.

Dilettante dev wrote rubbish, left no logs, and had no idea why his app wasn't working

Electronics'R'Us
Holmes

No error returns

Back around 1999 / 2000 I was doing diagnostics for a video on demand system. Quite an impressive beast.

This newer system was based on a 3U compact PCI rack with our own designed cards apart from the control processor (a power architecture). It communicated with a host over sockets.

The initialisation was not particularly complex but did have to do PCI enumeration and then initialise all the cards in the rack.

One fine day when I was looking at this fairly new system, I was rather surprised to find that none of the initialisation code (including the enumeration phase!) tested for successful completion. After a (failed) initialisation that was totally silent, I would be met by an error message that the system could not be reached.

I set about rectifying this by adding a status word that cleared a bit for each part of the initialisation when it succeeded. At the end of the init, that status word was returned to the host. If it was non-zero, something had failed, and just what had failed was obvious from which bits were still set.
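
For anyone curious, the scheme was nothing fancy; something along these lines, though the names and the stage list here are invented for the sketch rather than being the original code:

    #include <stdint.h>

    /* One bit per initialisation stage; all bits start set and each stage clears
       its own bit only on success, so a non-zero result pinpoints the failures. */
    enum {
        INIT_PCI_ENUM   = 1u << 0,
        INIT_CARD_PROBE = 1u << 1,
        INIT_CARD_SETUP = 1u << 2,
        INIT_HOST_LINK  = 1u << 3,
    };

    static uint32_t init_status = INIT_PCI_ENUM | INIT_CARD_PROBE |
                                  INIT_CARD_SETUP | INIT_HOST_LINK;

    static void stage_done(uint32_t stage_bit) { init_status &= ~stage_bit; }

    /* Returned to the host at the end of init (over the socket in the real thing);
       zero means everything came up, anything else identifies the failed stages. */
    uint32_t init_result(void) { return init_status; }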

Took me about 4 hours or so [using ioctl() over a socket]. That was eventually added to the production version because the customers were complaining of silent failures.

Even back then, we had the 10 commandments for C programmers, many of which apply elsewhere - in particular number 6.

Sheer laziness imo.

Techie solved supposed software problem by waving his arms in the air

Electronics'R'Us
Holmes

Re: I was called in ...

That sounds suspiciously like a ground loop.

Seen quite a few over the years.

GCHQ intern took top secret spy tool home, now faces prison

Electronics'R'Us
Holmes

Re: Official Secrets Act?

The Official Secrets Act applies even if you haven't signed the piece of paper or online document.

The document we sign explicitly states the responsibilities for those likely to come into routine contact with classified information so there can be no doubt what the rules are.

UK govt data people not 'technical,' says ex-Downing St data science head

Electronics'R'Us
Devil

Re: Not technical?

Do you KNOW who I am?!

Response: You haven't asked me a much more important question.

WHAT?!

Whether I give a shit.

2 in 5 techies quit over inflexible workplace policies

Electronics'R'Us
Holmes

Re: Interesting

I am one of that earlier generation [1] and I am perfectly comfortable with most of my interactions being online (Teams, mainly). I am still doing full time electronics design and my home office is by far preferable to any open plan office (or just about any corporate setting for that matter).

I do go into the local office once a week (most weeks) as it helps to catch up with the integration team and see what issues they may have been having. I regularly go to the main office (where the team I primarily work with are based), but as that is a couple of hundred miles or so, I go there for a week at a time.

For those of us who really need peace and quiet to do our best, WFH with the occasional office day is superb.

My management is relaxed about it - I am measured on results, not where I happened to be when I did the latest new / updated piece of electronics. Don't get me wrong, the company is fast paced (not that big with a lot of growth and large orders incoming) so we regularly get new starters. I can just as effectively mentor them in the ways of electronics in an online chat as being in person (which, contrary to popular opinion, really hasn't changed that much over the years [2]). The key to those online interactions is getting it right.

I hover between introvert and extrovert; when I am in the middle of actually designing something, someone coming along for a 'casual' conversation is the last thing I need.

1. I haven't paid employee NI for some years which will give you an idea of my age.

2. Many things have evolved, particularly in PCB layout, and we now have functions contained within a few square mm that would previously have taken up a complete 3U board. The underlying principles really haven't changed, though. There are still times when I will use a circuit type from several decades ago as it is often still the best solution. We do have far better tools for many things (CAD, for example, and IDEs for FPGA development).

Court filing: DOGE aide broke Treasury policy by emailing unencrypted database

Electronics'R'Us
Holmes

Clearance?

Elez was granted an interim secret clearance on January 22

I must assume that this was 'encouraged' by the dynamic duo. There have been some real howlers in the USA from inappropriate clearances.

Having been through the security clearance process myself (for the umpteenth time), I can say the screening takes a few weeks in the UK, and although I don't know if they scrutinise antisocial media, they do check a lot of other things.

Would be interesting to know just how much of a background check was actually done.

Framework Desktop wows iFixit – even with the soldered RAM

Electronics'R'Us
Holmes

Soldered RAM

Not really a problem in most cases.

The footprints are standardised although the physical packages are often dominated by the die; whether it can be upgraded to more depends on that physical size and whether the memory controller supports the larger size.

Electronics'R'Us

Electrical signals travel at ~0.9C in copper

On most PCB materials, it is closer to 0.5C (6 inches per nanosecond).

This is a pretty simple (and well known) equation: v = c / sqrt(Er), where Er is the relative dielectric constant. Most PCB materials have an Er of about 4.
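
For anyone who wants to run the sum, a quick back-of-envelope version (Er of 4 assumed, roughly FR-4):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double c_mm_per_ns = 299.792458;   /* free-space speed of light */
        const double er = 4.0;                   /* relative dielectric constant, FR-4-ish */
        double v = c_mm_per_ns / sqrt(er);       /* ~150 mm/ns, i.e. roughly 0.5c */

        printf("v = %.0f mm/ns (%.1f in/ns), delay = %.0f ps per inch\n",
               v, v / 25.4, 25.4 / v * 1000.0);  /* about 5.9 in/ns, ~170 ps per inch */
        return 0;
    }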

Hey programmers – is AI making us dumber?

Electronics'R'Us
Mushroom

Re: Maybe I like the misery

Unfortunately, not all pilots did push back.

The amount of technology in a modern airliner is astounding and I have designed some of it.

Autopilot, fly by wire[1], auto-landing capability [2] and more.

The worst case of over reliance on technology was AF447 which crashed into the Atlantic some years ago. After it went missing, I looked at the telemetry the aircraft had sent (that was satellite based IIRC). The pieces I remember very clearly were:

Airspeed disagree. This means that the 3 computers, each with their own pitot tube, did not all agree on the airspeed. Not usually a major issue but it was a major contributing factor here.

Alternate laws. Flight control computers have what is known as 'control laws'; under ordinary circumstances the normal laws apply, but when the system goes from triplex to duplex those won't do the job, so we have alternate laws for that situation.

After all this, apparently all 3 pitot tubes iced up (they were in the middle of a massive thunderstorm which is a prime location for icing).

Without any reliable information on airspeed, the flight control system disengaged. Then the worst part happened - the pilots in the cockpit did not know how to manually fly the aircraft. They kept pulling the stick back which eventually led to an aerodynamic stall. The captain (who was on a scheduled rest break) arrived back in the cockpit but was unable to recover from this. Flat spins are one of the worst situations to be in.

Over reliance on technology is never a good thing.

Note 1. Fly by wire uses (as mentioned) flight control computers. The person at the controls inputs what they want the aircraft to do, but the electronics and the various motors and actuators do the task. For civil aviation, these are triple redundant with some interesting requirements, such as the processors in each 'lane' must be different architectures and each lane must be galvanically isolated. We also have FADECs - Full Authority Digital Engine Controllers - which are what actually adjust engine thrust.

Note 2. I have experienced this going into Salt Lake City almost 30 years ago. Smoothest landing (grease job) I have ever had.

Are Copilot+ PCs really the fastest Windows PCs? X and Copilot don't think so

Electronics'R'Us
WTF?

Intelligent?

the fastest, most intelligent Windows PCs ever

I was not aware that any PC (or electronics system with or without software) had any intelligence at all.

My current expansion for AI is Artificial Ignorance.

91% of polled Amazon staff unhappy with return-to-office, 3-in-4 want to jump ship

Electronics'R'Us

Productivity

Open plan offices are guaranteed to be less productive than a quiet office (be it at home or company office).

There are times I go to the office. I am encouraged to show my face in the relatively local office in Plymouth once a week, but I get little objection if I am busy with something that requires concentration (implying peace and quiet) as said office is open plan [1]. The marketeers are usually loud and I once just packed up after being in the office for an hour.

When asked why, I told them that I needed to concentrate and their constant loud conversations (from one desk to another) was preventing that.

I do go to the more distant office (not far from Gatwick) every now and then (very ad hoc usually, but sometimes there is new kit to test or problems to solve that can only be done hands on). Given that (by train) that office is the best part of a day's travel from where I live in Cornwall, I go over for a full week.

Those offices, although nominally open plan, are populated with various teams, so the area I get to hot desk at is full of engineers who truly appreciate a quiet atmosphere.

Back on the primary subject, a full RTO mandate whether it is appropriate or not is silly. If I am going to just work with the rest of the team to get a design solution (which happened 3 times this morning), it matters little where I physically sit.

There are those who really do need to be onsite, such as production teams, but working from where I do the best work just seems logical [2].

1. I have a really good manager who is also an engineer and understands this.

2. I am aware that many middle and senior managers are incapable of grasping this difficult concept </sarcasm>

AI to power the corporate Windows 11 refresh? Nobody's buying that

Electronics'R'Us
Holmes

Electronics lifecycle

You are completely correct on the 4 year comment.

Properly designed electronics hardware (ignoring the spinning things for now) should easily last decades. The biggest single hardware killer is excessive heat which is why we go to great lengths to get it out of the box.

The rule of thumb for silicon devices is that for each temperature rise of 10C, the life will be halved. This is based on the Arrhenius equation.
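
As a rough illustration of that rule of thumb (just the halve-per-10C shortcut, not the full Arrhenius calculation; the numbers in the comment are made up for the example):

    #include <math.h>

    /* Very conservative rule of thumb: life halves for every 10 C rise above
       the reference temperature. */
    double derated_life_hours(double base_life_hours, double ref_temp_c, double actual_temp_c)
    {
        return base_life_hours * pow(2.0, (ref_temp_c - actual_temp_c) / 10.0);
    }
    /* e.g. 200,000 hours quoted at 55 C becomes ~100,000 hours at 65 C and ~50,000 hours at 75 C. */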

I have some really old equipment (30+ years old) that is still going strong.

I am in the high reliability business so typical system life has to be measured in decades (some parts are allowed to fail after a few years which can be an obsolescence nightmare), but that said, some really old parts are still being manufactured such as the 741 (designed in the 1960s).

An office or home based system should easily last 10+ years. Pushing users to dump it just because the vendor no longer supports the OS or insists that a certain type of device be present that is really not necessary is, to me, a form of vandalism.

Another problem is that way too many parts are used for a given system. We could probably use 50% of the components that actually exist in equipment with no noticeable change in response. This does require a disciplined software approach: don't require 16GB of RAM for a simple app because it happens to run well on <framework that is downloaded in its entirety>.

Lebanon now hit with deadly walkie-talkie blasts as Israel declares ‘new phase’ of war

Electronics'R'Us
Holmes

Re: So yes, I *am* an expert on radios.

We have had 'tone controlled' receivers (within 2 way radios) for decades.

In 1981 I was working with such devices (used by an ambulance service in the UK). There were 6 tones (I think - see date!) and a specific 5 tone sequence would unmute the receiver. This was used so only the intended recipient ambulance would get the message (and corresponding workload). Each receiver was programmed to unmute only for a specific sequence which was different for every receiver. This also caused the transmitter to send a response for confirmation of receipt (but that is probably irrelevant here).

A tiny module in the output audio path was used for this. The tones were notched out (notch filters) and therefore never heard in the actual audio output.

Given the large number of available sequences, it would be quite easy to ensure that one particular sequence would trigger an 'event'. As noted above, this is well known (tried and tested) technology.
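
A sketch of the decode logic, assuming a hypothetical tone detector that reports one tone index per detection (the real kit of that era did all of this in analogue hardware and a tiny module, of course):

    #include <stdbool.h>
    #include <stddef.h>

    #define SEQ_LEN 5

    /* The sequence this particular receiver is programmed to answer to (illustrative values). */
    static const int my_sequence[SEQ_LEN] = { 3, 1, 4, 1, 5 };

    static size_t matched;

    /* Feed each detected tone index in here; returns true when the full
       five-tone sequence has been seen in order and the receiver should unmute
       (or whatever other 'event' the sequence is wired to trigger). */
    bool tone_detected(int tone)
    {
        if (tone == my_sequence[matched]) {
            if (++matched == SEQ_LEN) {
                matched = 0;
                return true;
            }
        } else {
            /* Wrong tone: start again. A real decoder also resets on timeout
               between tones and handles overlapping prefixes properly. */
            matched = (tone == my_sequence[0]) ? 1 : 0;
        }
        return false;
    }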

This is simply one of the ways such things are possible.

The case for handcrafted software in a mass-produced world

Electronics'R'Us
Holmes

Small...

In the murky world of microcontrollers and sensors there are often situations where an OS is highly undesirable. Bare metal applications abound in this (vast) world.

One of my projects was for a very precise 2 axis tilt sensor and the sensor was analog (MEMS devices drift a lot and quite quickly). The drive circuit was reasonably simple but to get the stability of readings some post processing of the data following the ADC conversions was necessary (think 64 tap or more FIR filtering).

The issue here is that the time between each conversion starting must be as close to identical as possible, and interrupts are the way to do it; the kicker is that nothing must get in the way of those interrupts and introduce timing jitter on the conversions (even a small amount of jitter can royally screw up the filtering because the mathematics depends on the time between samples being identical).

Add to that the issue of designing timing windows (sampling, processing, re-initialising data structures, reading the ADC data and hardware functions such as DMA to name a few) and it becomes clear that cycle accurate timing is required (for the sampling at least).

The best code in this situation is small and tight apart from the startup initialisation (where it doesn't really matter). No assembly was required (an accurate oscilloscope is your friend for this stuff). That means no layering of the code (or at least extremely little, but none is best). Goodbye, several-deep layers of function calls.

In that particular case, everything was done based on hardware timers and interrupts that were provably not interfering with each other and results communicated upstream to an application processor (from a DMA buffer).
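
The general shape of it, on a generic microcontroller (the peripheral call and the right shift for scaling are invented for the sketch; every vendor's API is different, and the real coefficients come from the filter design):

    #include <stdint.h>

    #define FIR_TAPS 64

    extern int32_t adc_read_result(void);         /* hypothetical: fetch the completed conversion */

    static const int32_t coeff[FIR_TAPS] = { 0 }; /* placeholder: real values from the filter design */
    static int32_t samples[FIR_TAPS];             /* circular buffer of raw readings */
    static uint32_t head;
    static volatile int32_t filtered_output;

    /* The hardware timer triggers the ADC conversion directly, so the sample
       instant itself has no software jitter; this ISR only collects the result
       and runs the FIR (tap ordering does not matter for a symmetric filter). */
    void timer_adc_isr(void)
    {
        samples[head] = adc_read_result();
        head = (head + 1u) % FIR_TAPS;

        int64_t acc = 0;
        for (uint32_t i = 0; i < FIR_TAPS; i++)
            acc += (int64_t)coeff[i] * samples[(head + i) % FIR_TAPS];

        filtered_output = (int32_t)(acc >> 16);   /* scale to suit the coefficient format */
    }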

It might surprise El Reg just how many of those types of system exist.

Sorry, Moxie. Blaming Agile for software stagnation puts the wrong villain in the wrong play

Electronics'R'Us
Holmes

I sort of agree with some of his points

And disagree with others.

In the original article, he railed against the use of frameworks [1].

I don't oppose frameworks in general but I do think they are overused or used when it is really not appropriate to do so. Do we really need an enormous framework such as electron for simple applications? Honest question.

When he was going on about understanding the underlying hardware (knowing what the computer is actually doing) I disagree. COBOL programmers didn't really need to understand what the underlying hardware was doing but when I am bringing up a single board computer, it is imperative that I thoroughly understand precisely what is actually going on.

Try initialising DDRx without understanding all the register settings (there are a lot) or initialising the DMA subsystem. That way lies not only madness but also a major loss of hair and perhaps the consumption of large quantities of 'refreshments'.

It is courses for horses, really. Not everyone writing code needs to understand the internal details of the parts we are using as it would have no measurable gain [2]. For those who do need to know, it is interesting that fully understanding a modern microcontroller from a family we have never used before can take several days or weeks depending on the architecture. There are typically 4,000 to 5,000 pages of documentation in total for a modern microcontroller.

1. The standard C library can be viewed as a framework (as can numerous others); they all provide an abstraction layer to a greater or lesser degree. Obviously C exposes far more of the hardware level than python libraries for example.

2. That does not mean I would not encourage them to look into what the machine is really doing, just that for many tasks it is unnecessary.

Google's ex-CEO U-turns after saying staff 'going home early' killed winning

Electronics'R'Us
Holmes

Office when it adds value

I have no objection to going into the office [1] when it adds some value.

A lot of my work is design and analysis (see handle) and generally I am far more productive at home [2].

When I do need to go in (as I did a couple of days ago to decide just where to add some fixes / mitigations for EMC failures) there needs to be a solid reason. A few weeks ago I spent the entire week at the EMC test house (living out of a hotel room) to run the most risky tests (RE-102 and CE-102 for those who may be interested) so that if (when) there were failures I could analyse them and have time to develop a solution. Travelled home on the weekend.

Most of the time I spend the majority of my week in my home office [3] with the occasional day at the relatively local office.

No commute means I am far more relaxed and ready to go when I start the day.

1. $COMPANY has 3 sites of which I regularly go to 2, one of which is full day worth of travel so I usually stay for a week. The local office is open plan (yuck) so it is not the place to get things done that require concentration.

2. Design and analysis requires concentration rather than hanging around a canteen / water cooler / <hangout of your choice> and I am quite ok in my own company when I am doing that. If I need to chat to other members of the team that is simple (Teams video call - it works well enough for that).

3. I don't work excessive hours as that can easily lead to errors. At the normal end of the working day I shut down the work laptop and close the door to the office. Occasional late work is fine but it should not be the norm. Being a bit disciplined in that is useful.

AI stole my job and my work, and the boss didn't know – or care

Electronics'R'Us
Devil

Re: Past parallels

After over 5 decades in the electronics and associated business (and still going strong), I have yet to find any form of automation that can replace a skilled, experienced professional [1].

I recently oversaw some EMC testing for some naval kit and, as always, there were some failures on the initial run. [2]. Understanding the root cause of those failures is a darker art than even being able to understand Intercal.

Some fixes are more obvious than others but the why and underpinning theory are often not easy to find. I might, just for a giggle, ask one of these 'miracle machines' what the answer may be considering it will have zero knowledge of the internals of this rather large, multiple box system. Oh - did I mention cable runs of up to 50 metres?

1. Many years ago (30 or so) I designed an automated test system for a product line. The rationale is quite simple in that touch time is more expensive than line time so it can make sense in some circumstances but you need to be doing sufficient volume to amortise the cost of the hardware and development.

2. Anyone that tells you their non-trivial system passed all EMC testing (particularly if you are testing against some of the MIL-STDs) first time is probably, let's see - stretching the truth.

LLM-driven C-to-Rust. Not just a good idea, a genie eager to escape

Electronics'R'Us
Devil

Re: Sometimes you want it to be slow

Several years ago (well over 30), before the widespread advent of microcontrollers with timers and built-in peripherals, I wrote quite a few bit-banged UARTs.

That means counting machine cycles for every instruction to ensure each bit was timed properly, with the judicious insertion of NOPs to maintain the timing. I think the type of tool suggested in the article would barf on those.
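
The transmit side of one of those, in rough outline (the port access and the delay are placeholders here; on the real parts the 'delay' was literally a counted sequence of instructions, NOPs included, and the loop overhead itself had to be counted, or the loop unrolled, to keep every bit cell the same number of machine cycles long):

    #include <stdint.h>

    extern void tx_pin_write(uint8_t level);   /* hypothetical port access */
    extern void one_bit_time(void);            /* cycle-counted delay for the baud rate */

    /* Bit-banged 8N1 transmit. */
    void uart_tx_byte(uint8_t byte)
    {
        tx_pin_write(0);                       /* start bit */
        one_bit_time();

        for (uint8_t i = 0; i < 8; i++) {      /* data bits, LSB first */
            tx_pin_write(byte & 1u);
            byte >>= 1;
            one_bit_time();
        }

        tx_pin_write(1);                       /* stop bit */
        one_bit_time();
    }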

Then there were the integer multiply and divide routines that had to have constant timing regardless of data (so run time was data independent); those got very interesting doing 32 bit stuff on an 8 bit machine.

So for situations where the timing really matters I am not sure this type of tool would be suitable (that's an understatement I suspect).

Car makers sold people's driving habits, location data for pennies, say US senators

Electronics'R'Us
Holmes

Number 4

Several years ago, I designed an interface to a vehicle CAN bus (through an FSM gateway, so it was read only).

This was for a large waste management company. It would alert on harsh braking and acceleration (and send that data to a server) but I understand the company would give the drivers an opportunity to explain why.

The rationale for the company was to reduce maintenance costs (HGV servicing is an expensive proposition and brake replacement even more so).

Company vehicle so this type of monitoring, which had a reasonable goal, was legal as far as I know.

A couple of drivers covered up the alert light and speaker, which made no difference as the data went to the server anyway.
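
The detection side of that is not complicated; a minimal sketch, assuming periodic road-speed readings from the gateway (the sample period and thresholds here are invented for illustration, not the values the real system used):

    #include <stdbool.h>

    #define SAMPLE_PERIOD_S  0.1
    #define HARSH_DECEL_MS2  4.0    /* roughly 0.4 g, purely illustrative */
    #define HARSH_ACCEL_MS2  3.0

    /* Flag a harsh braking or acceleration event from two consecutive speed samples. */
    bool harsh_event(double prev_speed_ms, double curr_speed_ms)
    {
        double accel = (curr_speed_ms - prev_speed_ms) / SAMPLE_PERIOD_S;
        return (accel <= -HARSH_DECEL_MS2) || (accel >= HARSH_ACCEL_MS2);
    }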

Inquiry hears UK government misled MPs over Post Office IT scandal

Electronics'R'Us
Stop

Please

Stop repeating incorrect history.

Horizon is an EPOS and backend finance system for thousands of Post Office branches around the UK, first implemented by ICL, a UK technology company later bought by Fujitsu.

The entire rollout of Horizon was very much driven by Fujitsu, who owned 80% of the company at that time. They even leaned on the British government to make veiled threats about economic problems should the Horizon rollout be delayed.

History of ICL: here [Silicon.co.uk]

Fujitsu leans on government: from Computer Weekly

Meta warns bit flips, other hardware faults cause AI errors

Electronics'R'Us

Re: I'm a bit out of touch with the hardware design

When I look at the probability of an error within, say, the ALU, I need to know how long the data will actually be there and likewise for a register. These are usually very short.

There is a statistical chance an error can occur due to various causes but it is way lower than the figures in the article. We deal with that by having redundant channels and voting (among other safeguards) in a safety critical application which they are clearly not doing here.

Now, if they are running the thing very hot then the chance of error goes up, usually due to timing violations within the device's various data pathways.

In communications theory, all data paths have a bit error rate; what that is depends on a lot of factors but for an internal datapath I would suspect it is in ppb (parts per billion) or lower provided it is being run at a temperature that does not violate the timing requirements.

Electronics'R'Us
Holmes

Re: I'm a bit out of touch with the hardware design

We have been dealing with this in avionics for decades.

The usual requirement for a safety critical system is:

L1 - Parity protected

L2 - ECC protected

L3 (if it exists) - ECC protected

Main memory - ECC protected

The ECC used is 'correct 1, detect 2'; a single bit error can and will be corrected, a double bit error will be detected and trapped (critical event handler).
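
For anyone who hasn't met SECDED before, here is a toy version over a 4-bit nibble (Hamming(7,4) plus an overall parity bit) just to show the correct-1 / detect-2 decision; real memory controllers do the same thing over 64-bit words entirely in hardware with more check bits:

    #include <stdint.h>

    typedef enum { ECC_OK, ECC_CORRECTED, ECC_UNCORRECTABLE } ecc_result;

    /* Encode: codeword positions 1..7 hold p1 p2 d0 p4 d1 d2 d3; bit 0 is
       overall (even) parity across the other seven bits. */
    uint8_t secded_encode(uint8_t nibble)
    {
        uint8_t d0 = nibble & 1, d1 = (nibble >> 1) & 1,
                d2 = (nibble >> 2) & 1, d3 = (nibble >> 3) & 1;
        uint8_t p1 = d0 ^ d1 ^ d3;            /* checks data in positions 3,5,7 */
        uint8_t p2 = d0 ^ d2 ^ d3;            /* checks data in positions 3,6,7 */
        uint8_t p4 = d1 ^ d2 ^ d3;            /* checks data in positions 5,6,7 */
        uint8_t cw = (uint8_t)((p1 << 1) | (p2 << 2) | (d0 << 3) | (p4 << 4) |
                               (d1 << 5) | (d2 << 6) | (d3 << 7));
        uint8_t overall = 0;
        for (int i = 1; i <= 7; i++) overall ^= (cw >> i) & 1;
        return cw | overall;
    }

    ecc_result secded_decode(uint8_t cw, uint8_t *nibble)
    {
        uint8_t b[8];
        for (int i = 0; i < 8; i++) b[i] = (cw >> i) & 1;

        uint8_t syndrome = (uint8_t)((b[1] ^ b[3] ^ b[5] ^ b[7]) |
                                     ((b[2] ^ b[3] ^ b[6] ^ b[7]) << 1) |
                                     ((b[4] ^ b[5] ^ b[6] ^ b[7]) << 2));
        uint8_t overall = 0;
        for (int i = 0; i < 8; i++) overall ^= b[i];   /* even parity: zero when clean */

        ecc_result res = ECC_OK;
        if (syndrome != 0 && overall != 0) {           /* single bit error: fix it */
            b[syndrome] ^= 1;
            res = ECC_CORRECTED;
        } else if (syndrome == 0 && overall != 0) {    /* the overall parity bit itself flipped */
            res = ECC_CORRECTED;
        } else if (syndrome != 0 && overall == 0) {    /* two bits gone: detect and trap upstream */
            res = ECC_UNCORRECTABLE;
        }
        *nibble = (uint8_t)(b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3));
        return res;
    }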

When it comes to free neutrons (the most common cause of single event upsets in avionics) the odds of more than one bit in a word being corrupted are extremely low (far lower than what Meta claims for an error rate). There are other causes but they can all be detected.

Thermal issues are well known as well. Transistor I/O structure characteristics change over temperature (the edge rates slow down as they heat up) and typical dynamic memory devices need to speed up the refresh rate at higher temperatures due to leakage. A temperature sensor and on-the-fly parameter adjustment is not difficult.

FPGAs with SRAM configuration are susceptible but there are solutions for that (redundant processing internally and partial reconfiguration is one such solution used in space applications).

So use systems that have the necessary error detection and correction. Job done.

There is a timing hit at startup as all memory to be used must be initialised with the ECC syndrome bits.

Asda IT staff shuffled off to TCS amid messy tech divorce from Walmart

Electronics'R'Us
Mushroom

They have already had a meltdown

A few months ago, Asda rolled out its 'new' payroll system (this was another move away from Walmart systems) and it was an utter train wreck.

Many of their staff (who are on the low end of the pay scale) were paid hundreds less than they should have been and were told, in typical tone deaf management style, that the missing money would be paid at the end of the month. That got rolled back pretty quickly after the rather large outcry that ensued.

So they have already had one (ongoing, apparently) software rollout that was not just bad but terrible, particularly as this can really cause major screw ups to people's tax status.

Now they are doing a full blown ERP implementation? On the cheap and rushed at that.

This is likely to be worthy of the corporate equivalent of a Darwin Award.

Apple says if you want to ship your own iOS browser engine in EU, you need to be there

Electronics'R'Us

Re: I absolutely adored my Mac Classic.

The closed ecosystem is nothing new for Apple.

This is from a humorous article from 1995.

Mac Airways:

The cashiers, flight attendants, and pilots all look the same, talk the same, and act the same. When you ask them questions about the flight, they reply that you don't want to know, don't need to know, and would you please return to your seat and watch the movie.

Dear Stack Overflow denizens, thanks for helping train OpenAI's billion-dollar LLMs

Electronics'R'Us
WTF?

So the problem will just get worse

A while ago (2019) a blog entry warned of vulnerabilities in code posted to SO.

SO Blog
