* Posts by Electronics'R'Us

533 publicly visible posts • joined 13 Jul 2018


To progress as an engineer career-wise, become a great communicator

Electronics'R'Us
Holmes

Good communication is essential

If you want to be successful long term, anyway.

Contrary to some comments, this doesn't mean you studied Chaucer or necessarily the classics (although that doesn't hurt).

I have had to communicate effectively with people at all levels of organisations, and the interesting thing is that they all speak different languages, even though (in most cases) it has been English.

It isn't just about fixing problems quickly, either. Can you explain how the problem occurred so it can be prevented in the future? In my experience, being able to clearly explain what a problem was / is and how to prevent it is a very important skill.

Senior management wants to know there is a solution. Engineering management wants to know the solution and how it might be implemented. Engineers want to understand the solution and the practical effect.

QA wants to know how this will be documented and applied.

If you have junior engineers who you are mentoring (you really should be) then it is perfectly possible you will need to be able to explain a concept in at least two different ways.

If you are helping with bid work then you need to be able to explain how your part of the bid will solve the customer's problem.

As far as getting jobs goes, the fact I really know my core subjects (and a lot of tangential ones) would not be very useful if I could not explain clearly how and why I know those things.

The list goes on but that is the gist of the subject. Explaining how a fix affects the business to a senior manager is a lot different to explaining how that will affect day to day engineering to the line manager.

Dilettante dev wrote rubbish, left no logs, and had no idea why his app wasn't working

Electronics'R'Us
Holmes

No error returns

Back around 1999 / 2000 I was doing diagnostics for a video on demand system. Quite an impressive beast.

This newer system was based on a 3U CompactPCI rack with our own designed cards apart from the control processor (a Power Architecture part). It communicated with a host over sockets.

The initialisation was not particularly complex but did have to do PCI enumeration and then initialise all the cards in the rack.

One fine day when I was looking at this fairly new system, I was rather surprised to find that none of the initialisation code (including the enumeration phase!) tested for successful completion. After a (failed) initialisation that was totally silent, I would be met by an error message that the system could not be reached.

I set about rectifying this by adding a status word that cleared a bit for each part of the initialisation when it succeeded. At the end of the init, that status word was returned to the host. If it was non-zero, something had failed, and just what had failed was obvious: it would be the first bit still set in the status word.

Took me about 4 hours or so [using ioctl() over a socket]. That was eventually added to the production version because the customers were complaining of silent failures.
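In sketch form, the idea looks something like this (the phase names and init functions are illustrative, not the original VoD code):

```c
#include <stdint.h>
#include <stdbool.h>

/* Each init phase owns one bit; the bit is cleared only on success, so
 * a non-zero result tells the host the first phase that failed. */
enum {
    INIT_PCI_ENUM   = 1u << 0,
    INIT_CARD_SETUP = 1u << 1,
    INIT_HOST_LINK  = 1u << 2,
};

/* Stubs standing in for the real per-phase init routines. */
static bool pci_enumerate(void)  { /* ... PCI enumeration ... */ return true; }
static bool cards_init(void)     { /* ... per-card setup ...  */ return true; }
static bool host_link_init(void) { /* ... socket setup ...    */ return true; }

uint32_t system_init(void)
{
    uint32_t status = INIT_PCI_ENUM | INIT_CARD_SETUP | INIT_HOST_LINK;

    if (pci_enumerate())   status &= ~(uint32_t)INIT_PCI_ENUM;
    if (cards_init())      status &= ~(uint32_t)INIT_CARD_SETUP;
    if (host_link_init())  status &= ~(uint32_t)INIT_HOST_LINK;

    return status;   /* returned to the host: zero means everything came up */
}
```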

Even back then, we had the Ten Commandments for C Programmers, many of which apply elsewhere, in particular number 6.

Sheer laziness imo.

Techie solved supposed software problem by waving his arms in the air

Electronics'R'Us
Holmes

Re: I was called in ...

That sounds suspiciously like a ground loop.

Seen quite a few over the years.

GCHQ intern took top secret spy tool home, now faces prison

Electronics'R'Us
Holmes

Re: Official Secrets Act?

The Official Secrets Act applies even if you haven't signed the piece of paper or online document.

The document we sign explicitly states the responsibilities for those likely to come into routine contact with classified information so there can be no doubt what the rules are.

UK govt data people not 'technical,' says ex-Downing St data science head

Electronics'R'Us
Devil

Re: Not technical?

Do you KNOW who I am?!

Response: You haven't asked me a much more important question.

WHAT?!

Whether I give a shit.

2 in 5 techies quit over inflexible workplace policies

Electronics'R'Us
Holmes

Re: Interesting

I am one of that earlier generation [1] and I am perfectly comfortable with most of my interactions being online (Teams mainly). I am still doing full time electronics design and my home office is by far preferable to any open plan office (or just about any corporate setting for that matter).

I do go into the local office once a week (most weeks) as it helps to catch up with the integration team and see what issues they may have been having. I regularly go to the main office (where the team I primarily work with are based), but as that is a couple of hundred miles or so, I go there for a week at a time.

For those of us who really need peace and quiet to do our best, WFH with the occasional office day is superb.

My management is relaxed about it - I am measured on results, not where I happened to be when I did the latest new / updated piece of electronics. Don't get me wrong, the company is fast paced (not that big with a lot of growth and large orders incoming) so we regularly get new starters. I can just as effectively mentor them in the ways of electronics in an online chat as being in person (which, contrary to popular opinion, really hasn't changed that much over the years [2]). The key to those online interactions is getting it right.

I hover between introvert and extrovert; when I am in the middle of actually designing something, someone coming along for a 'casual' conversation is the last thing I need.

1. I haven't paid employee NI for some years which will give you an idea of my age.

2. Many things have evolved, particularly in PCB layout, and we now have functions contained within a few square mm that would previously have taken up a complete 3U board. The underlying principles really haven't changed, though. There are still times when I will use a circuit type from several decades ago as it is often still the best solution. We do have far better tools for many things (CAD, for example, and IDEs for FPGA development).

Court filing: DOGE aide broke Treasury policy by emailing unencrypted database

Electronics'R'Us
Holmes

Clearance?

Elez was granted an interim secret clearance on January 22

I must assume that this was 'encouraged' by the dynamic duo. There have been some real howlers in the USA from inappropriate clearances.

Having a security clearance myself (for the umpteenth time), the screening takes a few weeks in the UK and although I don't know if they scrutinise antisocial media, they do check a lot of other things.

Would be interesting to know just how much of a background check was actually done.

Framework Desktop wows iFixit – even with the soldered RAM

Electronics'R'Us
Holmes

Soldered RAM

Not really a problem in most cases.

The footprints are standardised although the physical packages are often dominated by the die; whether it can be upgraded to more depends on that physical size and whether the memory controller supports the larger size.

Electronics'R'Us

Electrical signals travel at ~0.9C in copper

On most PCB materials, it is closer to 0.5C (6 inches per nanosecond).

This is a pretty simple (and well known) equation: v = c / sqrt(Er), where Er is the relative dielectric constant. Most PCB materials have an Er of about 4.
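A quick back-of-the-envelope check of that figure (an Er of 4 is assumed purely for illustration; check the laminate datasheet for the real value):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double c  = 299792458.0;   /* speed of light in vacuum, m/s */
    const double er = 4.0;           /* assumed relative dielectric constant */

    double v = c / sqrt(er);         /* ~1.5e8 m/s, i.e. about 0.5C */
    printf("v = %.3g m/s (~%.1f inches per nanosecond)\n",
           v, v * 1e-9 / 0.0254);    /* ~5.9 inches per nanosecond */
    return 0;
}
```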

Hey programmers – is AI making us dumber?

Electronics'R'Us
Mushroom

Re: Maybe I like the misery

Unfortunately, not all pilots did push back.

The amount of technology in a modern airliner is astounding and I have designed some of it.

Autopilot, fly by wire[1], auto-landing capability [2] and more.

The worst case of over reliance on technology was AF447 which crashed into the Atlantic some years ago. After it went missing, I looked at the telemetry the aircraft had sent (that was satellite based IIRC). The pieces I remember very clearly were:

Airspeed disagree. This means that the 3 computers, each with their own pitot tube, did not all agree on the airspeed. Not usually a major issue but it was a major contributing factor here.

Alternate laws. Flight control computers have what are known as 'control laws'; under ordinary circumstances the normal laws apply, but when the system goes from triplex to duplex those won't do the job, so we have alternate laws for that situation.

After all this, apparently all 3 pitot tubes iced up (they were in the middle of a massive thunderstorm which is a prime location for icing).

Without any reliable information on airspeed, the flight control system disengaged. Then the worst part happened - the pilots in the cockpit did not know how to manually fly the aircraft. They kept pulling the stick back which eventually led to an aerodynamic stall. The captain (who was on a scheduled rest break) arrived back in the cockpit but was unable to recover from this. Flat spins are one of the worst situations to be in.

Over reliance on technology is never a good thing.

Note 1. Fly by wire uses (as mentioned) flight control computers. The person at the controls inputs what they want the aircraft to do, but the electronics and various motors and actuators do the task. For civil aviation, these are triple redundant with some interesting requirements, such as the processors in each 'lane' must be different architectures and each lane must be galvanically isolated. We also have FADECs - Full Authority Digital Engine Controllers. These are what actually adjust engine thrust.

Note 2. I have experienced this going into Salt Lake City almost 30 years ago. Smoothest landing (grease job) I have ever had.

Are Copilot+ PCs really the fastest Windows PCs? X and Copilot don't think so

Electronics'R'Us
WTF?

Intelligent?

the fastest, most intelligent Windows PCs ever

I was not aware that any PC (or electronics system with or without software) had any intelligence at all.

My current expansion for AI is Artificial Ignorance.

91% of polled Amazon staff unhappy with return-to-office, 3-in-4 want to jump ship

Electronics'R'Us

Productivity

Open plan offices are guaranteed to be less productive than a quiet office (be it at home or company office).

There are times I go to the office. I am encouraged to show my face in the relatively local office in Plymouth once a week, but I get little objection if I am busy with something that requires concentration (implying peace and quiet) as said office is open plan [1]. The marketeers are usually loud and I once just packed up after being in the office for an hour.

When asked why, I told them that I needed to concentrate and their constant loud conversations (from one desk to another) were preventing that.

I do go to the more distant office (not far from Gatwick) every now and then (very ad hoc usually, but sometimes there is new kit to test or problems to solve that can only be done hands on). Given that (by train) that office is the best part of a day's travel from where I live in Cornwall, I go over for a full week.

Those offices, although nominally open plan, are populated with various teams, so the area I get to hot desk at is full of engineers who truly appreciate a quiet atmosphere.

Back on the primary subject, a full RTO mandate whether it is appropriate or not is silly. If I am going to just work with the rest of the team to get a design solution (which happened 3 times this morning), it matters little where I physically sit.

There are those who really do need to be onsite, such as production teams, but working from where I do the best work just seems logical [2].

1. I have a really good manager who is also an engineer and understands this.

2. I am aware that many middle and senior managers are incapable of grasping this difficult concept </sarcasm>

AI to power the corporate Windows 11 refresh? Nobody's buying that

Electronics'R'Us
Holmes

Electronics lifecycle

You are completely correct on the 4 year comment.

Properly designed electronics hardware (ignoring the spinning things for now) should easily last decades. The biggest single hardware killer is excessive heat which is why we go to great lengths to get it out of the box.

The rule of thumb for silicon devices is that for each temperature rise of 10C, the life will be halved. This is based on the Arrhenius equation.
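As a quick illustration of that rule of thumb (this is the simplified derating commonly quoted, not the full Arrhenius equation, and the numbers are made up):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double base_life_hours = 100000.0;  /* assumed life at the reference temperature */
    double delta_t = 20.0;              /* junction temperature rise above reference, C */

    /* life roughly halves for every 10C rise */
    double life = base_life_hours * pow(0.5, delta_t / 10.0);
    printf("Estimated life: %.0f hours\n", life);   /* 25000 for a +20C rise */
    return 0;
}
```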

I have some really old equipment (30+ years old) that is still going strong.

I am in the high reliability business so typical system life has to be measured in decades (some parts are allowed to fail after a few years which can be an obsolescence nightmare), but that said, some really old parts are still being manufactured such as the 741 (designed in the 1960s).

An office or home based system should easily last 10+ years. Pushing users to dump it just because the vendor no longer supports the OS or insists that a certain type of device be present that is really not necessary is, to me, a form of vandalism.

Another problem is that way too many parts are used for a given system. We could probably use 50% of the components that actually exist in equipment with no noticeable change in response. This does require a disciplined software approach: don't require 16G RAM for a simple app because it happens to run well on <framework that is downloaded in its entirety>.

Lebanon now hit with deadly walkie-talkie blasts as Israel declares ‘new phase’ of war

Electronics'R'Us
Holmes

Re: So yes, I *am* an expert on radios.

We have had 'tone controlled' receivers (within 2 way radios) for decades.

In 1981 I was working with such devices (used by an ambulance service in the UK). There were 6 tones (I think - see date!) and a specific 5 tone sequence would unmute the receiver. This was used so only the intended recipient ambulance would get the message (and corresponding workload). Each receiver was programmed to unmute only for a specific sequence which was different for every receiver. This also caused the transmitter to send a response for confirmation of receipt (but that is probably irrelevant here).

A tiny module in the output audio path was used for this. The tones were notched out (notch filters) and therefore never heard in the actual audio output.

Given the large number of available sequences, it would be quite easy to ensure that one particular sequence would trigger an 'event'. As noted above, this is well known (tried and tested) technology.
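For a flavour of how little code the decode side needs, here is a minimal sketch of a sequential tone decoder (the tone numbers and target sequence are made up; detecting the tones themselves, e.g. with Goertzel filters, is assumed to happen upstream):

```c
#include <stdbool.h>
#include <stddef.h>

#define SEQ_LEN 5
static const int expected[SEQ_LEN] = { 3, 1, 4, 2, 5 };  /* this receiver's code */
static size_t matched = 0;

/* Call once per detected tone; returns true when the full sequence has
 * arrived in order (unmute the audio path / trigger the event). */
bool tone_decoder_feed(int tone)
{
    if (tone == expected[matched]) {
        if (++matched == SEQ_LEN) {
            matched = 0;
            return true;
        }
    } else {
        /* wrong tone: start over, allowing this tone to begin a new attempt */
        matched = (tone == expected[0]) ? 1 : 0;
    }
    return false;
}
```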

This is simply one of the ways such things are possible.

The case for handcrafted software in a mass-produced world

Electronics'R'Us
Holmes

Small...

In the murky world of microcontrollers and sensors there are often situations where an OS is highly undesirable. Bare metal applications abound in this (vast) world.

One of my projects was for a very precise 2 axis tilt sensor and the sensor was analog (MEMS devices drift a lot and quite quickly). The drive circuit was reasonably simple but to get the stability of readings some post processing of the data following the ADC conversions was necessary (think 64 tap or more FIR filtering).

The issue here is that the time between each conversion starting must be as close to identical as possible, and interrupts are the way to do it; the kicker is that nothing must get in the way of those interrupts to introduce timing jitter on the conversions (even a small amount of jitter can royally screw up the filtering because the mathematics depends on the time between samples being identical).

Add to that the issue of designing timing windows (sampling, processing, re-initialising data structures, reading the ADC data and hardware functions such as DMA to name a few) and it becomes clear that cycle accurate timing is required (for the sampling at least).

The best code in this situation is small and tight apart from the startup initialisation (where it doesn't really matter). No assembly was required (an accurate oscilloscope is your friend for this stuff). That means no layering of the code (or at least extremely little; none is best). Goodbye, several-layers-deep function calls.

In that particular case, everything was done based on hardware timers and interrupts that were provably not interfering with each other and results communicated upstream to an application processor (from a DMA buffer).
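The rough shape of that arrangement, with the register-level detail stripped out (the names and sizes here are hypothetical; the real code is tied to a specific microcontroller and its timer/ADC/DMA plumbing):

```c
#include <stdint.h>

#define BLOCK    64     /* samples per DMA block */
#define FIR_TAPS 64

static volatile int16_t adc_buf[BLOCK];   /* filled by DMA, paced by a hardware timer */
static volatile int block_ready = 0;

static const int16_t fir_coeff[FIR_TAPS] = { 0 /* coefficients designed offline */ };
static int16_t history[FIR_TAPS];

/* DMA transfer-complete interrupt: flag the block and nothing else, so
 * no software ever adds jitter to the hardware-timed sampling. */
void dma_done_isr(void)
{
    block_ready = 1;
}

/* One FIR step, Q15 coefficients assumed. */
static int32_t fir_step(int16_t sample)
{
    int32_t acc = 0;
    for (int i = FIR_TAPS - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = sample;
    for (int i = 0; i < FIR_TAPS; i++)
        acc += (int32_t)fir_coeff[i] * history[i];
    return acc >> 15;
}

/* Background loop: run the filter only when a whole block is in. */
void process_if_ready(void)
{
    if (!block_ready)
        return;
    block_ready = 0;
    for (int i = 0; i < BLOCK; i++)
        (void)fir_step(adc_buf[i]);   /* results would go to the application processor */
}
```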

It might surprise El Reg just how many of those types of system exist.

Sorry, Moxie. Blaming Agile for software stagnation puts the wrong villain in the wrong play

Electronics'R'Us
Holmes

I sort of agree with some of his points

And disagree with others.

In the original article, he railed against the use of frameworks [1].

I don't oppose frameworks in general but I do think they are overused or used when it is really not appropriate to do so. Do we really need an enormous framework such as electron for simple applications? Honest question.

When he was going on about understanding the underlying hardware (knowing what the computer is actually doing) I disagree. COBOL programmers didn't really need to understand what the underlying hardware was doing but when I am bringing up a single board computer, it is imperative that I thoroughly understand precisely what is actually going on.

Try initialising DDRx without understanding all the register settings (there are a lot) or initialising the DMA subsystem. That way lies not only madness but also a major loss of hair and perhaps the consumption of large quantities of 'refreshments'.

It is horses for courses, really. Not everyone writing code needs to understand the internal details of the parts we are using as it would have no measurable gain [2]. For those who do need to know, it is interesting that fully understanding a modern microcontroller from a family we have never used before can take several days or weeks depending on the architecture. There are typically 4,000 to 5,000 pages of documentation in total for a modern microcontroller.

1. The standard C library can be viewed as a framework (as can numerous others); they all provide an abstraction layer to a greater or lesser degree. Obviously C exposes far more of the hardware level than python libraries for example.

2. That does not mean I would not encourage them to look into what the machine is really doing, just that for many tasks it is unnecessary.

Google's ex-CEO U-turns after saying staff 'going home early' killed winning

Electronics'R'Us
Holmes

Office when it adds value

I have no objection to going into the office [1] when it adds some value.

A lot of my work is design and analysis (see handle) and generally I am far more productive at home [2].

When I do need to go in (as I did a couple of days ago to decide just where to add some fixes / mitigations for EMC failures) there needs to be a solid reason. A few weeks ago I spent the entire week at the EMC test house (living out of a hotel room) to run the most risky tests (RE-102 and CE-102 for those who may be interested) so that if (when) there were failures I could analyse them and have time to develop a solution. Travelled home on the weekend.

Most of the time I spend the majority of my week in my home office [3] with the occasional day at the relatively local office.

No commute means I am far more relaxed and ready to go when I start the day.

1. $COMPANY has 3 sites of which I regularly go to 2, one of which is full day worth of travel so I usually stay for a week. The local office is open plan (yuck) so it is not the place to get things done that require concentration.

2. Design and analysis requires concentration rather than hanging around a canteen / water cooler / <hangout of your choice> and I am quite ok in my own company when I am doing that. If I need to chat to other members of the team that is simple (Teams video call - it works well enough for that).

3. I don't work excessive hours as that can easily lead to errors. At the normal end of the working day I shut down the work laptop and close the door to the office. Occasional late work is fine but it should not be the norm. Being a bit disciplined in that is useful.

AI stole my job and my work, and the boss didn't know – or care

Electronics'R'Us
Devil

Re: Past parallels

After over 5 decades in the electronics and associated business (and still going strong), I have yet to find any form of automation that can replace a skilled, experienced professional [1].

I recently oversaw some EMC testing for some naval kit and, as always, there were some failures on the initial run [2]. Understanding the root cause of those failures is a darker art than even being able to understand Intercal.

Some fixes are more obvious than others but the why and underpinning theory are often not easy to find. I might, just for a giggle, ask one of these 'miracle machines' what the answer may be considering it will have zero knowledge of the internals of this rather large, multiple box system. Oh - did I mention cable runs of up to 50 metres?

1. Many years ago (30 or so) I designed an automated test system for a product line. The rationale is quite simple in that touch time is more expensive than line time so it can make sense in some circumstances but you need to be doing sufficient volume to amortise the cost of the hardware and development.

2. Anyone that tells you their non-trivial system passed all EMC testing (particularly if you are testing against some of the MIL-STDs) first time is probably, let's see - stretching the truth.

LLM-driven C-to-Rust. Not just a good idea, a genie eager to escape

Electronics'R'Us
Devil

Re: Sometimes you want it to be slow

Several years ago (well over 30), before the widespread advent of microcontrollers with timers and built-in peripherals, I wrote quite a few bit-banged UARTs.

That means counting machine cycles for every instruction to ensure each bit was timed properly, with the judicious insertion of NOPs to maintain the timing. I think the type of tool suggested in the article would barf on those.
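The transmit side of such a UART is conceptually just this (TX_PIN_WRITE and delay_one_bit are placeholders for whatever port access and cycle-counted delay the target provides; on the real parts the delay is where the NOP counting happens):

```c
#include <stdint.h>

/* Placeholders for the hardware-specific bits. */
static void TX_PIN_WRITE(int level) { (void)level; /* write the GPIO here */ }
static void delay_one_bit(void)     { /* cycle-counted delay, padded with NOPs */ }

void uart_send_byte(uint8_t byte)
{
    TX_PIN_WRITE(0);                 /* start bit */
    delay_one_bit();

    for (int i = 0; i < 8; i++) {    /* data bits, LSB first */
        TX_PIN_WRITE((byte >> i) & 1);
        delay_one_bit();
    }

    TX_PIN_WRITE(1);                 /* stop bit */
    delay_one_bit();
}
```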

Then there were the integer multiply and divide routines that had to have constant timing regardless of data (so run time was data independent); those got very interesting doing 32 bit stuff on an 8 bit machine.

So for situations where the timing really matters I am not sure this type of tool would be suitable (that's an understatement I suspect).

Car makers sold people's driving habits, location data for pennies, say US senators

Electronics'R'Us
Holmes

Number 4

Several years ago, I designed an interface to a vehicle CAN bus (through an FSM gateway so it was read only).

This was for a large waste management company. It would alert on harsh braking and acceleration (and send that data to a server) but I understand the company would give the drivers an opportunity to explain why.

The rationale for the company was to reduce maintenance costs (HGV servicing is an expensive proposition and brake replacement even more so).

Company vehicle so this type of monitoring, which had a reasonable goal, was legal as far as I know.

A couple of drivers covered up the alert light and speaker, which made no difference as the data went to the server anyway.
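Conceptually the detection side is nothing more than differentiating the reported speed and comparing against a threshold; a sketch (the scaling, sample period and thresholds below are assumptions, not the production values):

```c
#include <stdbool.h>
#include <math.h>

#define HARSH_ACCEL_MS2  4.0    /* assumed threshold, m/s^2 */
#define SAMPLE_PERIOD_S  0.1    /* assumed period of the speed message */

static double last_speed_ms = 0.0;

/* Called for each speed frame taken off the (read-only) CAN gateway,
 * with the speed already scaled to m/s. Returns true on a harsh event;
 * *braking distinguishes harsh braking from harsh acceleration. */
bool check_harsh_event(double speed_ms, bool *braking)
{
    double accel = (speed_ms - last_speed_ms) / SAMPLE_PERIOD_S;
    last_speed_ms = speed_ms;

    if (fabs(accel) >= HARSH_ACCEL_MS2) {
        *braking = (accel < 0.0);
        return true;              /* alert locally and queue for the server */
    }
    return false;
}
```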

Inquiry hears UK government misled MPs over Post Office IT scandal

Electronics'R'Us
Stop

Please

Stop repeating incorrect history.

Horizon is an EPOS and backend finance system for thousands of Post Office branches around the UK, first implemented by ICL, a UK technology company later bought by Fujitsu.

The entire rollout of Horizon was very much driven by Fujitsu, who owned 80% of the company at that time. They even leaned on the British government to make veiled threats about economic problems should the Horizon rollout be delayed.

History of ICL: here [Silicon.co.uk]

Fujitsu leans on government: from Computer Weekly

Meta warns bit flips, other hardware faults cause AI errors

Electronics'R'Us

Re: I'm a bit out of touch with the hardware design

When I look at the probability of an error within, say, the ALU, I need to know how long the data will actually be there and likewise for a register. These are usually very short.

There is a statistical chance an error can occur due to various causes but it is way lower than the figures in the article. We deal with that by having redundant channels and voting (among other safeguards) in a safety critical application which they are clearly not doing here.

Now, if they are running the thing very hot, then the chance of error goes up, usually due to timing violations within the device's various data pathways.

In communications theory, all data paths have a bit error rate; what that is depends on a lot of factors but for an internal datapath I would suspect it is in ppb (parts per billion) or lower provided it is being run at a temperature that does not violate the timing requirements.

Electronics'R'Us
Holmes

Re: I'm a bit out of touch with the hardware design

We have been dealing with this in avionics for decades.

The usual requirement for a safety critical system is:

L1 - Parity protected

L2 - ECC protected

L3 (if it exists) - ECC protected

Main memory - ECC protected

The ECC used is 'correct 1, detect 2'; a single bit error can and will be corrected, a double bit error will be detected and trapped (critical event handler).
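A toy version of 'correct 1, detect 2' on a single nibble, just to show the mechanism (real memories use a wider code over 32 or 64 bit words; everything here is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits into an 8-bit codeword: bit 0 is an overall parity
 * bit, bits 1..7 are the classic Hamming(7,4) positions (parity at
 * 1, 2, 4; data at 3, 5, 6, 7). */
static uint8_t secded_encode(uint8_t nibble)
{
    uint8_t d0 = (nibble >> 0) & 1, d1 = (nibble >> 1) & 1;
    uint8_t d2 = (nibble >> 2) & 1, d3 = (nibble >> 3) & 1;

    uint8_t p1 = d0 ^ d1 ^ d3;    /* checks positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;    /* checks positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;    /* checks positions 4,5,6,7 */

    uint8_t cw = (uint8_t)((p1 << 1) | (p2 << 2) | (d0 << 3) |
                           (p4 << 4) | (d1 << 5) | (d2 << 6) | (d3 << 7));

    uint8_t p0 = 0;               /* overall (even) parity bit */
    for (int i = 1; i < 8; i++)
        p0 ^= (cw >> i) & 1;
    return cw | p0;
}

/* Decode: 0 = clean, 1 = single-bit error corrected, 2 = double-bit
 * error detected (this is what gets trapped by the event handler). */
static int secded_decode(uint8_t cw, uint8_t *nibble)
{
    uint8_t syndrome = 0;
    for (int k = 0; k < 3; k++) {              /* recompute the three checks */
        uint8_t chk = 0;
        for (int pos = 1; pos < 8; pos++)
            if (pos & (1 << k))
                chk ^= (cw >> pos) & 1;
        syndrome |= (uint8_t)(chk << k);
    }
    uint8_t overall = 0;                        /* parity over all 8 bits */
    for (int i = 0; i < 8; i++)
        overall ^= (cw >> i) & 1;

    int result = (syndrome || overall) ? 1 : 0;
    if (syndrome && !overall)
        return 2;                               /* two bits flipped: detect only */
    if (syndrome)
        cw ^= (uint8_t)(1u << syndrome);        /* flip the bad bit back */
    else if (overall)
        cw ^= 1;                                /* the overall parity bit itself */

    *nibble = (uint8_t)(((cw >> 3) & 1) | (((cw >> 5) & 1) << 1) |
                        (((cw >> 6) & 1) << 2) | (((cw >> 7) & 1) << 3));
    return result;
}

int main(void)
{
    uint8_t out, cw = secded_encode(0xA);
    cw ^= 1u << 5;                              /* inject a single-bit fault */
    int r = secded_decode(cw, &out);
    printf("result=%d data=0x%X\n", r, out);    /* result=1 data=0xA */
    return 0;
}
```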

When it comes to free neutrons (the most common cause of single event upsets in avionics) the odds of more than one bit in a word being corrupted are extremely low (far lower than what Meta claims for an error rate). There are other causes but they can all be detected.

Thermal issues are well known as well. Transistor I/O structure characteristics change over temperature (the edge rates slow down as they heat up) and typical dynamic memory devices need to speed up the refresh rate at higher temperatures due to leakage. A temperature sensor and on-the-fly parameter adjustment are not difficult to implement.

FPGAs with SRAM configuration are susceptible but there are solutions for that (redundant processing internally and partial reconfiguration is one such solution used in space applications).

So use systems that have the necessary error detection and correction. Job done.

There is a timing hit at startup as all memory to be used must be initialised with the ECC syndrome bits.

Asda IT staff shuffled off to TCS amid messy tech divorce from Walmart

Electronics'R'Us
Mushroom

They have already had a meltdown

A few months ago, Asda rolled out its 'new' payroll system (this was another move away from Walmart systems) and it was an utter train wreck.

Many of their staff (who are on the low end of the pay scale) were paid hundreds less than they should have been and were told, in typical tone deaf management style, that the missing money would be paid at the end of the month. That got rolled back pretty quick after the rather large outcry that ensued.

So they have already had one (ongoing, apparently) software rollout that was not just bad but terrible, particularly as this can really cause major screw ups to people's tax status.

Now they are doing a full blown ERP implementation? On the cheap and rushed at that.

This is likely to be worthy of the corporate equivalent of a Darwin Award.

Apple says if you want to ship your own iOS browser engine in EU, you need to be there

Electronics'R'Us

Re: I absolutely adored my Mac Classic.

The closed ecosystem is nothing new for Apple.

This is from a humorous article from 1995.

Mac Airways:

The cashiers, flight attendants, and pilots all look the same, talk the same, and act the same. When you ask them questions about the flight, they reply that you don't want to know, don't need to know, and would you please return to your seat and watch the movie.

Dear Stack Overflow denizens, thanks for helping train OpenAI's billion-dollar LLMs

Electronics'R'Us
WTF?

So the problem will just get worse

A while ago (2019) a blog entry warned of vulnerabilities in code posted to SO.

SO Blog

Novel vitrimer plastics promise greener PCBs

Electronics'R'Us

Re: "First, heat gun? "

Repairs are not done with heat guns (at least, not in a professional environment).

Depending on the repair, it might be a soldering iron or an IR hot plate. For larger components, preheating the board can be required.

Directed hot air feeds are definitely used, particularly for larger components and BGAs, but referring to them as heat guns is not accurate.

Electronics'R'Us
Holmes

Details?

For PCBs, the devil is in the details.

First, heat gun? That is perhaps true for hobbyists but the standard method for modern SMT PCBs is reflow. Peak reflow temperatures are in the range of 245C to 260C for a lead free process.

If there are some through hole components, then hand soldering or even selective soldering might be required.

Some more details that are rather important:

1. Coefficients of thermal expansion (CTE). The CTE below Tg (glass transition temperature) and above vary quite widely, particularly in the Z axis.

2. Tg. The actual glass transition temperature.

3. Dk. The dielectric constant. This varies with frequency - just how much depends on the particular variety of material being used (no, they are not all the same).

4. Moisture absorption. In a high reliability world, this matters (and no, you cannot perfectly seal a PCB that has mounting holes).

There are many other details but by now you get my drift.

UL94 V-0 compliance is also important.

Note that the term FR-4 does not have any meaning other than the material is flame retardant.

State-by-state is the best approach for right to repair, says advocacy leader

Electronics'R'Us
Holmes

Most laws in the USA are State laws

This is something that many who have not lived in the USA fail to understand. This is the way it was originally set up and continues to this day.

To see the effect on right to repair, Colorado is not imposing their will on any other state, because they cannot. The kicker is that in most states a company that wishes to do business there has to have a registered office within the state [1], regardless of the State of incorporation and that means the business within that state must comply with all state laws that pertain to the business.

This does, admittedly, lead to a hodge-podge of varying regulations but it does have an effect.

So if a manufacturer of something (farm equipment for example) wants to have their own company branded sales and service operation within the state then they must comply with the right to repair laws within that state. In most cases, as noted above, that requires a registered office within the state. A business that refuses to comply with the rules will have that registered office closed.

Such manufacturers are perfectly free to not do business in that state, of course, but then they lose sales and service revenue.

What the state by state approach does is to impose the will of the local legislature (which is arguably closer to ordinary people) to business that operates within the state.

Messy? Yes. Effective? Yes.

As a rather larger example of one area passing regulations that affect places far beyond, take the RoHS directive (often known as the lead free directive) within the EU. Device manufacturers now almost universally provide lead free components [2] even within the USA despite there being no such USA legislation.

Note 1. Most registered offices are a local attorney's address.

Note 2. There are some exceptions, mostly military and aerospace.

The chip that changed my world – and yours

Electronics'R'Us
Holmes

Re: It lasted 50 years, but history finally claimed it

The announcement stated that the supplier was unable to continue fabricating the part.

That is hardly surprising; it was on a 4 micron node (IIRC) and the fabrication equipment needs maintenance and spare parts which are likely to be totally unavailable now.

It is interesting how some parts live on where you least expect it.

The venerable 8051 is often found within a wide array of controllers (running at many times the clock speed of the original); I have seen them used widely to implement state machines in USB hosts. There are numerous 8051 devices in a weapon system designed in the late 70s / early 80s, incidentally. I suspect there are many such examples.

Many problems can be solved with an 8 bit device running at a few MHz, so it is hardly surprising (in that sense) that such parts are still widely available.

Australia’s spies and cops want ‘accountable encryption’ - aka access to backdoors

Electronics'R'Us
Holmes

They still don't get it (in one sense)

Burgess labelled encryption “clearly a good thing, a positive for our democracy and our economy” because it “protects privacy, it enables communications and transactions.”

But he noted it also provides criminals with anonymity, which is why Australia has laws that make it possible to access encrypted messages. Burgess said those laws aren’t working well because tech companies aren’t helping.

Technology of all types, in and of itself, is agnostic; it is neither good nor bad. The various use cases could be seen to be somewhere on that scale, though. This is not news.

Hint to all the authoritarians out there: Pandora's box is officially open.

On the subject of encryption; if anyone other than the sender and recipient know the key, it is, by definition, insecure. It is not a matter of 'tech companies aren't helping' - a properly crafted encryption system means they cannot help.

Zilog to end standalone sales of the legendary Z80 CPU

Electronics'R'Us
Holmes

Masks

I have never used Rochester for that (and I have used them a lot). I don't think they actually buy masks but buy up all available stock at LTB.

There are die banking services available where certain companies have relationships with the silicon vendor. This is very prevalent in avionics where kit has to be available for a lifespan of (often) 40 years or more.

Boeing top brass stand down amid safety turbulence

Electronics'R'Us
Holmes

Over 25 years of rot

The (reverse) takeover of Boeing by McDonnell Douglas (in 1997) is well known (especially to the denizens of these parts) to have been the true watershed moment.

The writing was already on the wall, and then when the company relocated HQ to Chicago and effectively put the bean counters in charge, it was clear Boeing was no longer a proper engineering company.

The 737MAX problems were a symptom of this rot that has been in place for a long time, and I am not sure how quickly it can be removed (and it needs to be removed, fast). There have been a lot of comments on the issues at Boeing over various articles and I am not going to rehash them.

It is not just a matter of recruiting new C-Suite execs; the entire corporate attitude must change to regain their reputation.

An old saying, but true: It takes years to build a reputation and seconds to lose it.

Uncle Sam, 15 US states launch antitrust war on Apple

Electronics'R'Us
Devil

Closed ecosystem

I remember this from the 90s; the closed system by Apple is nothing new (perhaps more extensive now).

Mac Airlines

All the stewards, stewardesses, captains, baggage handlers, and ticket agents look the same, act the same, and talk the same. Every time you ask questions about details, you are told you don’t need to know, don’t want to know, and would you please return to your seat and watch the movie.

Source

UN: E-waste is growing 5x faster than it can be recycled

Electronics'R'Us
Holmes

Surface mount parts

There are a lot of very good reasons we use surface mount parts (including BGAs with thousands of pins).

Among those reasons are signal integrity which is far simpler when you don't have a big hole in the board, let alone a component in a socket (which is, electrically, a bit of a nightmare in this context).

That said, a group of repair hobbyists (or perhaps a repair outfit) could get the necessary equipment to repair such things quite affordably.

One thing that needs to be done is to educate those people on the risks of ESD (electrostatic discharge) to modern electronics; a very small amount of it can damage a modern microprocessor or microcontroller (so small that we would not even notice that we actually had an ESD event).

A major problem is parts availability; this is where otherwise perfectly good electronics that has one defective part has to be scrapped because the replacement part is not sold through regular channels (or is not even produced any more). This is, to a certain extent, driven by the planned obsolescence model.

Most failed parts, though, are not the highly integrated bits, but are more usually the various support parts, which can be sourced quite simply in most cases.

I know of some companies that are indeed repairing expensive controller boards (and the OEMs are aghast!); when a board can cost upwards of £10k (some are much more expensive) then there is a financial incentive to repair a unit rather than buy a new item. There is at least one company that sells (and trains people in the use of) diagnostics and reverse engineering equipment to facilitate this.

The ability to repair needs to have an economic incentive, in my view.

Legal eagles demand $6B in Tesla stock after overturning Musk's mega pay package

Electronics'R'Us
Devil

Hilarious

"The lawyers who did nothing but damage Tesla want $6 billion. Criminal."

Pot, meet kettle.

FAA gives Boeing 90 days to fix serious safety shortcomings found in report

Electronics'R'Us
Holmes

Re: The Golden Years

The last Boeing commercial aircraft that was designed and managed by engineers is probably the 777 (not the latest iteration, about which horror stories of quality abound).

I know the principal designer of the flight control computers in that aircraft (and I was involved in the technology refresh some years ago). The Boeing 777 team was both difficult and excellent.

I mean 'difficult' in a good way as quality issues did not get past them.

The train wreck that is now Boeing took hold once the bean counters from MD took over; I think most of us in the industry saw the writing on the wall quite quickly and were confirmed in our view when the corporate headquarters was relocated to Chicago.

Mamas, don't let your babies grow up to be coders, Jensen Huang warns

Electronics'R'Us
Devil

Re: "Jensen Huang believes"

Well, I am an EE but I have done a lot of code for embedded systems of my (or my team) design.

That is an enormous market, incidentally.

Some of the skills are setting up the underlying hardware [1] which requires a rather solid knowledge of the internals of the microcontroller you happen to be using [2]. I don't see any 'AI' (which although artificial is anything but intelligent) being able to do that in the foreseeable future.

1. There are hardware abstraction libraries available, just about all of which are poorly written, very opaque and have horrible corner case problems. Train the model on those and hilarity will ensue. I had a library function (DMA initialisation in one case) that was several call levels deep and I replaced it with 3 lines of code. The same issue exists for just about every onboard peripheral library function provided by the vendor. For precision work, using callback functions is a major no-no but they proliferate in the various HAL libraries.

2. Modern devices have several thousand pages of documentation without considering the underlying core and assembly language and it can take a long time to fully understand all the functionality and just how to invoke their operation (even order of operations can make a significant difference).

Persistent memory to replace DRAM, but it could take a decade

Electronics'R'Us
Holmes

Use cases

There are a lot of use cases, but the majority of them are not in general purpose computing.

1. I have done a lot of avionics and the start up time requirements can be very stringent. Shortening that can be the difference between success and failure in getting the contract.

2. Edge / IoT. Many applications in this arena need to power down, and often only power back up on an event. Without needing to maintain power to the memory during power down, power is reduced. That can be a very important factor in the design of such things.

3. Where non volatile memory is required (there are so many use cases here such as 'cause of last shutdown'). In the past, one would use a Dallas semiconductor (now Maxim, which is now part of Analog Devices) device that had a small battery. Those things cost a lot, but with the newer non volatile devices, more mainstream approaches can be used.

Note that although these could use EEPROM / Flash, the write times (and erase for that matter) take quite a while; these newer devices operate at standard memory speed, so in an unexpected power-off event there is time to do an orderly shutdown, write the fact of the power failure to a log, and make sure the write enables are off before the voltages drop below their required levels. That prevents data corruption.

Will it make its way into mainstream computing? I have no idea.

There are plenty of places where this stuff is used.

Forgetting the history of Unix is coding us into a corner

Electronics'R'Us
Holmes

Also...

The Unix philosophy for the various parts was 'do one thing and do it well', which is why things such as SystemD are anathema to many.

Dell staff not alone in being squeezed to reduce remote work

Electronics'R'Us
Go

Hybrid

I work from home 4 days per week normally, and $EMPLOYER is perfectly OK with that.

There are 2 offices I can go to; one is fairly local (about a 35 minute drive) but that is not the team I primarily work with. The one thing of value in that office is the cross-site engineering meeting where we all get together in the offices (which connect over Teams). I find that particular meeting quite valuable because I am one of the design team and the local office is an integration team so we get a bit of a chat going.

The office where the team I primarily work with is about 250 miles away and I go there when it is necessary (new hardware commissioning, troubleshooting and the like) which happens between 4 and 10 weeks apart (it really is dependent on what is going on).

When I do go there I go for a week (all expenses paid) so I am in the office that entire week - that all seems to work very well. Quite a few of the team I work with are quite inexperienced and mentoring them is easier with a Teams chat than in an open plan office [1].

Some of the things I work on (apart from designing electronics) are making templates for the relatively new ECAD tool to automate the outputs; that demands peace and quiet so I can properly focus and try things out.

This is a relatively small company, so we get to be flexible, although that does mean wearing a number of hats but I am comfortable with that.

This works for everyone, so why change it?

Note 1. I am not going into the problems of open plan offices but I detest them with a passion as it can be impossible to concentrate.

Dumping us into ad tier of Prime Video when we paid for ad-free is 'unfair' – lawsuit

Electronics'R'Us
Devil

Re: Question is...

Several years ago (not sure when but it might be 80s) I had a book of Punch magazine [1] cartoons.

One I will never forget had two advertising people; one was saying to the other:

"I really have to believe in a product before I can lie about it"

1. Punch magazine, for the uninformed, was a wonderful somewhat satirical publication that included such things as Spy vs Spy and Let's parler Franglais.

Drowning in code: The ever-growing problem of ever-growing codebases

Electronics'R'Us
Headmaster

In the embedded world...

Most of my software is written for small microprocessors or microcontrollers. Typical maximum flash is around 1MB and perhaps 256k RAM.

Even here, bloat has taken hold with HALs (hardware abstraction libraries), although it is not extreme; they are, however, opaque.

One project (using an ARM Cortex M4) required me to do rather interesting things, such as responding to external events while the processor was asleep: do a bunch (1024) of ADC conversions and DMA the results to a buffer. Only at the end of the conversions / DMA was the core woken up.

The library DMA initialisation function was 6 layers deep; once I figured out what was being done, it was replaced with 3 lines of code using the same structure the library required. The library function for the ADC / DMA did not solve an issue where the DMA would just run at maximum speed (rather than once after each conversion). I made that work by forcing an arbitration after each DMA transfer, which was part of the initialisation code!
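To give a flavour of what those 'three lines' look like once you know which registers the HAL ultimately touches (the register names and addresses below are invented for the sketch and are NOT from a real vendor header; the reference manual for the actual part is the authority):

```c
#include <stdint.h>

/* Hypothetical register addresses - not from a real device. */
#define DMA_DESC_BASE_REG   (*(volatile uint32_t *)0x40010000u)
#define DMA_CH_TRIGSRC_REG  (*(volatile uint32_t *)0x40010004u)
#define DMA_CH_ENABLE_REG   (*(volatile uint32_t *)0x40010008u)

static uint32_t dma_descriptors[16];   /* descriptor block in RAM */

void dma_channel_init(uint32_t channel, uint32_t trig_source)
{
    DMA_DESC_BASE_REG  = (uint32_t)(uintptr_t)dma_descriptors; /* descriptor base */
    DMA_CH_TRIGSRC_REG = trig_source << (channel * 8);         /* per-channel trigger */
    DMA_CH_ENABLE_REG  = 1u << channel;                        /* enable the channel */
}
```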

My point is that most HALs are poorly written and inefficient. For ADC / DAC conversions, in particular, timing jitter will render any DSP functionality fubared so 'trusting' the HAL to get it right is simply not acceptable.

Several years ago (decades in fact) I was tasked with adding a test for an optional flash device that could be fitted in a socket (they were expensive at the time). The system had 32K (not a typo) of ROM, of which there were perhaps a few hundred bytes free. I managed to get a comprehensive test (using existing flash programming functionality with the walking 1s, walking 0s technique) into 33 bytes.
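In readable C, the walking 1s / walking 0s idea is just this (the original 33-byte version reused the existing flash programming routines; the flash write/erase handling is assumed to live elsewhere here):

```c
#include <stdint.h>
#include <stdbool.h>

/* Write and read back a walking 1 and a walking 0 at one location.
 * Returns false on the first mismatch. */
bool walking_bits_test(volatile uint8_t *addr)
{
    for (int i = 0; i < 8; i++) {
        uint8_t pattern = (uint8_t)(1u << i);     /* walking 1 */
        *addr = pattern;
        if (*addr != pattern)
            return false;

        pattern = (uint8_t)~(1u << i);            /* walking 0 */
        *addr = pattern;
        if (*addr != pattern)
            return false;
    }
    return true;
}
```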

I always look at 'frameworks' with a very sceptical eye.

Fujitsu finance chief says sorry for IT giant's role in Post Office Horizon scandal

Electronics'R'Us
FAIL

Investigator suggested dropping wording...

From a witness statement as it could 'damage' the Horizon brand (might have been a lot less than it is now).

https://www.bbc.co.uk/news/uk-wales-68172203

Return-to-office mandates boost company profits? Nope

Electronics'R'Us
Go

Just reduced my office time

When I joined $COMPANY it was agreed that any office time would be on the basis of adding value.

A couple of months later this was changed to 2 days a week at one of the company offices (notably my direct team lead does not work at that office - his office is over 200 miles away); the reason was that people in the office I use when required were perhaps a bit jealous. That was recently cut to one day.

Three things are interesting here.

1. As my handle implies, I design electronics (among a number of other things such as low level software and process improvement - by which I mean making it work for engineers), which requires large amounts of peace and quiet for concentration (something not found in an open plan office), and my actual productivity went down on office days. I can access the CAD licence and network from my home office (over a VPN) just as easily as I can in the company office.

2. In general, to talk to the team I work with I use Teams (I know...) even when I am in my local company office except for the occasion I go to that other office for a week.

3. The one day I go in actually does have value (an all site engineering update with the other sites on Teams).

Unsurprisingly, my overall productivity has gone up, I don't have the commute and the company can burnish its green credentials as my vehicle time is reduced (there is no public transport in the village).

The Post Office systems scandal demands a critical response

Electronics'R'Us
Facepalm

Re: It's still happening

The issue here is that a numerical overflow becomes possible.

'p' (the input variable) is probably ok, but when the compiler expands 2*p then it may not be ok for whatever type is actually being used.

Given that the function appears to be at the heart of the 'reverse transaction' process (which has been implicated in doubling the size of the transaction or otherwise not functioning correctly) such 'cleverness' is (as usual) not a good thing.
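A tiny illustration of why that matters (the type and values are illustrative, not Horizon's):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t p = 40000;                     /* fits comfortably in a uint16_t */
    uint16_t doubled = (uint16_t)(2 * p);   /* 80000 wraps to 14464 */

    printf("2 * %u = %u (expected 80000)\n", (unsigned)p, (unsigned)doubled);
    return 0;
}
```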

Electronics'R'Us
FAIL

Re: It's still happening

Have you seen some of the code?

Take a look.

Computer weekly has an article about it, naturally.

From that article: “Whoever wrote this code clearly has no understanding of elementary mathematics or the most basic rules of programming."

WTF? Potty-mouthed intern's obscene error message mostly amused manager

Electronics'R'Us
Devil

Code comments

Some years back (about 10) I was doing the base code for a microcontroller with GCC in an eclipse based IDE. Think bare metal and device drivers.

The default for that was live warnings (equivalent to -Wall), which can be useful, but when setting up DMA channels among other things (where pointers abound in the hardware) it can prove to be slightly annoying.

The DMA descriptors are in RAM, but the addresses of source and destination are in dedicated registers (as they are for many things - check out the memory map for any ARM Cortex device).

I would get implicit conversion warnings so the code was littered with <someregisteraddress> = (uint32_t*) myvariable where myvariable had been declared as a uint32_t (because what was loaded was a literal value).
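The pattern in question looks something like this (the register name and address are invented; the cast is what keeps -Wall quiet about the integer-to-pointer conversion):

```c
#include <stdint.h>

/* Hypothetical memory-mapped register that holds a DMA source address. */
#define DMA_SRC_ADDR_REG (*(volatile uint32_t **)0x40020000u)

void set_dma_source(uint32_t src_address)
{
    /* src_address is a plain uint32_t holding a literal address, so the
     * cast is needed to load it into the pointer-typed register without
     * the implicit-conversion warning (32-bit MCU assumed). */
    DMA_SRC_ADDR_REG = (uint32_t *)src_address;
}
```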

At some point I got frustrated and added a comment along the lines of

// cast to suppress GCCs verbal diarrhoea

My then boss was not impressed so I had to remove it.

In all fairness, this code was going with our hardware to a university for their algorithm development so he had a point.

Epic decision sees jury find Google's Play store is illegal monopoly

Electronics'R'Us
Holmes

Re: This is the mother of all Google trials

Many years ago when I lived in the USA, I had a girlfriend who was a lawyer.

When it comes to jury trials, she explained to me that the jury is the finder of fact and the judge is the finder of law.

For an appeal to overturn the jury finding, Google will have a very uphill battle as it will have to show that the evidence, as presented, was incorrect or lacking. She told me that overturning a jury decision is very difficult.

To appeal otherwise they must show that the findings of law were incorrect.

Interesting times indeed.

FTX crypto-villain Sam Bankman-Fried convicted on all charges

Electronics'R'Us
Holmes

Gold finish on PCBs

We have been using ENIG plating for decades.

The issue is the coplanarity of the finish. The previously popular HASL (hot air solder levelling) just isn't good enough for the vast majority of surface mount parts.

Some manufacturers (notably TI) have also provided parts where the lead finish is NiPdAu (Nickel Palladium Gold) for a long time; one major advantage is that this is compatible with both Tin Lead (SnPb) and lead free reflow profiles.

So it wasn't RoHS that pushed us into using gold on PCB surface finish - it was device geometry.
