* Posts by Electronics'R'Us

480 publicly visible posts • joined 13 Jul 2018


Dell reneges on remote work promise, tells staff to wear pants at least 3 days a week


No added value

The arbitrary 'n' days per week diktat is one of the most specious arguments out there.

I will be starting with a new company soon.

There will be times I need to go to an office (one reasonably close, the other one a couple of hundred miles away - expenses paid for that one); we have agreed that there has to be value in actually going to the office (hardware commissioning, for instance).

Where there is no added value (teams call, perhaps) then there is no point in doing a 40 mile round trip. Bonus: they get to boast of their environmentally friendly policies.

I have no problem at all going to the office when it makes sense. That might be every day for a week and then not for a month.

The 'people work better in person' argument is also silly; people work better when they collaborate might be true but it does not always require physical presence. It also ignores the fact that introverts don't actually like being in a roomful of loud people such as marketing 'rah rah' types.

This seems to be a pure micromanager issue.

Microsoft cries foul over UK gaming deal blocker but it's hard to feel sorry for them


Not just crocodile tears

I think there are some real tears being shed (and much gnashing of teeth) over the missed bonuses that several of the main people were expecting, which were probably measured in tens of $M and quite possibly far more.

Icon for Microsoft and Activision reaction.

Microsoft is busy rewriting core Windows code in memory-safe Rust


Re: "Oh no, not again!" said the potted petunia

"If you're building an application today that's either performance critical or low-level, then Rust is a no-brainer at that point."

Here we go again.

Depends on the definition of low level. I do a lot of bare metal stuff where dynamic memory allocation is very much a no-no [1] and, for clarity, C is still best for those applications. For arrays, it really doesn't matter whether they are dynamically allocated or not, as an overrun is an overrun regardless of how the array was declared.

I highly commend everyone to number 5 of The Ten Commandments for C Programmers, reproduced here:

5. Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.

I have picked up code directly driving hardware where no error checks were performed (see number 6 of the same list), written by supposedly excellent software engineers; in one case I added a status word to the code and, as each part of the initialisation completed successfully, I cleared the appropriate bit. If everything went ok I got a '0' at the end; more importantly, I could tell exactly where in the sequence things had gone wrong by finding the first non-cleared bit.
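The status word trick can be sketched in a few lines of C. The step names and init routines below are hypothetical stand-ins for real hardware setup code, not anything from a particular project:

```c
#include <stdint.h>

/* One bit per initialisation step; all bits start set and each is
   cleared only when its step completes successfully. */
enum {
    INIT_CLOCKS = 1u << 0,
    INIT_GPIO   = 1u << 1,
    INIT_UART   = 1u << 2,
    INIT_ALL    = INIT_CLOCKS | INIT_GPIO | INIT_UART
};

/* Hypothetical init routines standing in for real hardware setup;
   each returns 0 on success, non-zero on failure. */
static int clocks_init(void) { return 0; }
static int gpio_init(void)   { return 0; }
static int uart_init(void)   { return 0; }

/* Returns 0 when everything initialised; otherwise the lowest set
   bit identifies the first step that failed. */
uint32_t run_init(void)
{
    uint32_t status = INIT_ALL;

    if (clocks_init() != 0) return status;
    status &= ~(uint32_t)INIT_CLOCKS;

    if (gpio_init() != 0) return status;
    status &= ~(uint32_t)INIT_GPIO;

    if (uart_init() != 0) return status;
    status &= ~(uint32_t)INIT_UART;

    return status;
}
```

If gpio_init() were to fail, run_init() would return with the GPIO and UART bits still set; the lowest set bit points straight at the failing step.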

Rust makes sense for certain things (horses for courses and all that) but in many applications C will remain on top. It has its issues but they can all be dealt with.

1. On some occasions it is necessary to use malloc() to get the alignment correct. I had that requirement for a DMA descriptor setup.

How Arm aims to squeeze device makers for cash rather than pocket pennies for cores


Device agreements?

How that would work looks something like this, we're told: Arm would still license its processor designs to chipmakers, but under so-called development licenses that require the chips to only be used by manufacturers that have device agreements with Arm.

Many ARM microcontrollers are sold (as parts) via distribution (as are a lot of other architectures); if the above is true, that would no longer be permitted.

Example: I design test interface equipment, among other things. Some of it uses a Cortex M4 device and I have perhaps 4 units made once the design is solid. Would I (or $COMPANY) need a device agreement? It certainly seems so. How much would the royalties be? This stuff is for internal company use (never sold).

I realise Cortex M series might not be affected but the reputational damage from this would see me drop them.

What about the cores in Xilinx (now AMD) FPGAs? The Ultrascale parts have quite a few (some M series, some A series). I can't see them paying this rent.

Quite apart from the engineering nightmare, the administration of it becomes terrible.

There are plenty of alternatives that will do the task; one reason I currently use ARM is that free toolchains are widely available and parts are reasonably easy to come by.

When either of those is not true, when the interfaces get replaced a different family will be chosen as the codebase is abstract enough that only low level drivers would need to change.

I don't know how many controllers as a percentage of sales are sold this way but as a revenue stream it would get stopped in its tracks.

John Deere urged to surrender source code under GPL


Electronic Serial Numbers

The way JD (and others) lock 'customers' in is very probably by the use of ESNs.

These do have many legitimate uses [1] but what these companies do is a consequence of technology not being able to prevent bad actors from turning it to their advantage.

Here is how it can be done (which might give a clue on how to reverse engineer it).

Each sensor has an ID and an ESN (ESNs are usually 64 bit numbers)

The entire list of sensors (IDs) and corresponding ESNs are written to non-volatile memory (probably an EEPROM).

The list is read at powerup and compared with the enumerated devices - mismatch == you have to pay through the nose for someone to come out and...

Connect to onboard computer.

Get current list of enumerated devices (ensuring that only the ones that have been newly installed are permitted to be updated)

Write new list of devices to EEPROM.
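The powerup comparison can be sketched quite simply. The record layout below (a short device ID plus a 64 bit ESN) is illustrative, not JD's actual format:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sensor record: a device ID plus a 64 bit ESN. */
struct sensor {
    uint16_t id;
    uint64_t esn;
};

/* Compare the list stored in EEPROM against the freshly enumerated
   devices. Returns the index of the first mismatch, or -1 when the
   lists agree (n is the number of entries). */
int esn_check(const struct sensor *stored,
              const struct sensor *found, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (stored[i].id != found[i].id || stored[i].esn != found[i].esn)
            return (int)i;
    }
    return -1;
}
```

Any non -1 result is the "pay through the nose" path: a replaced sensor enumerates with a new ESN that no longer matches the EEPROM copy.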


If a kernel driver needed to be linked in for the device enumeration process then there might be interesting things that could be done. For at least the last 20 years, manufacturers of these devices have allowed equipment vendors to define their own IDs, ESNs and so on.

1. In safety critical avionics (as an example) the serial number of every board is tied to traceability information so that if faults occur more often than they should, the batch of components (and manufacturer) can be easily determined. There are many examples of reasonable use of the technology. These parts have been around in various forms for at least 30 years.

Most Londoners would quit before they give up working from home


WFH / Hybrid has advantages for more than just the WFH crowd

Some of the people I work with need to be at the office (production for example) and some prefer the office for practical reasons.

Young grads / apprentices who are either flat sharing or living with family come to mind.

One particular team I work with have a relatively limited space (it is a repair facility for electronics) and the WFH / collaboration space solution that $EMPLOYER has put in place means fewer people traverse their limited space which is a win for them.

I do go in occasionally for those times I really need to but the rest of the time I am working in my home office (which was designated as such when we moved here).

Right now I am in the middle of detailing the principles of operation of a piece of electronics I am designing; the process of typing that up without any interruptions or extraneous noise lets me concentrate better (especially when I discover a flaw that only appears once I type it up but I can fix it right away). The other piece (currently) is the actual layout, which can really require multiple sessions of several hours of uninterrupted work (typically 4 to 6 hours at a time) which is difficult, if not impossible, to achieve in an office (especially open plan).

I have a meeting with one of the electronics distributors FAE tomorrow for lunch so we are meeting at a nice little cafe about 10 minutes from here; that will probably be far more productive than sitting in the office.

I also do a lot of microcontroller development and the first parts of all those uses a development kit attached via USB; I can (again) get that done far more effectively here. Once I am ready to commit to a PCB I will go in to commission it, then bring one or two home (I have a nice little lab setup) to do much of the rest of the work. Final integration is at the office.

The flexibility is the key and employers who refuse to see that as an advantage (or perhaps are worried about their 'stature' as viewed by senior management) are already losing out.

I might go in 3 days or more in a row and then not see the office for 2 or 3 months. We have no problem with collaboration online (Teams, for all its faults, works quite well for that sort of thing). I just did a team call yesterday to explain the process of designing electronics (for a particular piece of kit) to 8 junior engineers / grads and that went extremely well.

Arbitrary 'you must be in the office coz reasons' is a good way to demotivate people when they get a lot more out of the job when they can just get on with it without a detailed plan for the day and just when they are mentally up to it.

New software sells new hardware – but a threat to that symbiosis is coming


Memory vs. Data size

There is no real hardware reason that a 32 bit processor must have a 32 bit address bus. 8 bit devices almost universally had a 16 bit address bus. One practical constraint is the size of the program counter (if it is 32 bit then the maximum size within a given space is 4GB) but that is an implementation detail.

There is no consensus on what determines whether a device is 16 bit, 32 bit or whatever. The two most commonly used definitions are internal register size [1] or ALU data width.

The PowerQuicc 3 series (from what was Motorola -> Freescale -> NXP) has an internal 36 / 40 bit address space (device dependent) and exposes a 32 bit address for non-SDRAM memory, but with multiple chip selects. Boot flash, static RAM and so forth come to mind. Peripheral mapping can also be done using that interface.

SDRAM of all flavours is a different type of beast, as the address space can be much bigger than might be expected because a row and a column value are latched on different parts of a memory transaction. The data interface can be 72 bits (64 + ECC) for those devices (that has been true for some 32 bit devices for over 20 years). SDRAM (DDRx included) interfaces are always separate from the main (physical) address bus due to the hardware requirements.
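The row / column / bank multiplexing can be illustrated with a toy address split. The field widths below are hypothetical; a real controller derives them from the geometry of the fitted devices:

```c
#include <stdint.h>

/* Illustrative decomposition of a 32 bit physical address into SDRAM
   bank, row and column fields. The widths here (10 column, 3 bank,
   16 row bits) are made up for the example. */
struct sdram_addr {
    uint32_t bank;
    uint32_t row;
    uint32_t col;
};

struct sdram_addr sdram_split(uint32_t phys)
{
    struct sdram_addr a;
    a.col  = phys & 0x3FFu;           /* 10 column bits */
    a.bank = (phys >> 10) & 0x7u;     /*  3 bank bits   */
    a.row  = (phys >> 13) & 0xFFFFu;  /* 16 row bits    */
    return a;
}
```

Because the row and column are presented on the same pins in separate phases (RAS then CAS), the device addresses far more locations than a flat bus of the same width could.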

It can also assign a 32 bit address space to PCI/PCI Express interfaces and so on, which can then map much larger spaces behind each device (depending on the device).

So it wasn't just memory space.

1. Register access width, anyway. Many internal special function registers, although accessed as 32 bit quantities, only expose a few bits with any meaning.

FTC floats rule to ban imposed non-compete agreements in US


I had a non compete

Several years ago - early to mid 90s, in fact. In Florida.

It was done properly (to start with). The parent company boss wanted to implement these, but I saw [1] the memo which clearly stated that existing employees (of which I was one) could not be bound unless something of value was offered. It was, in the form of a profit sharing arrangement.

When I left them (I got fired, as it happens) I went to a small outfit designing the same type of equipment (smart payphones) and I got a snottygram from a lawyer [2], which I asked a friendly labour law practitioner about.

It turned out that non competes were very limited (the judgment was in the Southern Law Review) and applied only to some very specific areas:

1. Specialised training or knowledge that the company provided that I would not have found elsewhere. No. I probably brought more knowledge in than they ever provided.

2. Customer lists / contacts that were made using company resources. Not applicable to me.

3. The use of internal IP. They had some but I wouldn't use it anyway for a number of reasons including my reputation.

They tried to intimate that a linear 2 wire to 4 wire conversion (necessary for remote updates using QAM modulation - FSK would work in an ordinary diode based circuit) was their trade secret.

So we wrote back explaining that the technology found in their products was (apart from an ASIC, the details of which I knew but was not using) decades old. At the time, semiconductor databooks (remember those?) were brimming with linear 2 wire to 4 wire converter circuits (the vendors wanted to sell their modem ICs), one of which I adapted. They had not provided anything special in the way of knowledge, I was not involved in direct customer contact, and they should basically back off.

Never heard from the lawyer again.

1. I don't think I was supposed to see it but it was left on the operations director's desk with the door wide open. I had reasons to go in there regularly (mainly to leave memos that he often didn't read but I could say had been delivered).

2. Said lawyer was a major investor in the company. I mentioned the conflict of interest in the response to them.

Man wrongly jailed by facial recognition, lawyer claims


Some questions

Let me get this straight. He was driving on a highway in Georgia.

A police department in Louisiana apparently had access to his driving licence photograph.

He claims he has never been to Louisiana, so it would not be a Louisiana driving licence.

From the article, it would seem that either Clearview AI or Morphotrak had this image (or the Georgia driving licence agency shared its database [1]).

So how could the Louisiana cops ID him from a different state's data?

1. Assuming it was a Georgia licence.

Corporate execs: Get back, get back, to the office where you once belonged


Re: Collaboration

I completely agree that the review process has to be more explicit. I do quite a lot of electronics for interfacing which can be 'interesting' on occasion.

Because of that we (the team) always ensure we have real requirements (which may include software) and there is an 'originator - checker - approver' process.

When onsite everybody is spread out over a large area so the remote method is not much different from what happened prior to the mass WFH.



"How do we get them working on things together? I mean, remote is great, but when you have new and difficult problems, putting people inside of rooms is absolutely critical," he added.

I call BS on this

I work as part of a team of various functions and we have found that a remote virtual meeting for problem solving is every bit as effective as a physical meeting and without the commute.

All the 'offices' have been converted to (bookable) hot desk collaboration areas and although we are 'encouraged' to go on site regularly what matters more is getting the issues sorted. There are times when it makes sense to go onsite (commissioning new hardware for example) but I can do design work far more effectively here.

If I need someone's opinion on a part of the design I can just call them.

Some people need to be onsite (production teams for example) but those who can work more effectively from home are encouraged to do so.

End of an era as the last 747 rolls off the production line


From a different era

When Boeing was an engineering company.

Everything up to the original 777 and derivatives were properly engineered aircraft. The 777 is a fly by wire aircraft (I have worked with the flight control computers).

The team on that aircraft were real sticklers for proper QA (as they should be).

Now they are just bean counters.

Just 22% of techies in UK aged 50 or older, says Chartered Institute for IT


Re: hmmmm

I am a 65+ something and I grew up with transistor radio kits (the original new shiny - 6 transistors! Woo hoo!)

I am currently working full time where my knowledge of the old equipment is invaluable, but just like many of my age group, I have seen fads come and go and the latest 'ultimate' components (there is always a catch here - the list is long).

I have designed hardware and software (6502 assembly was one of the first) and I remember the first FPGAs appearing. There is a major advantage to really understanding transistor theory even with new parts (they are still transistors inside, just a lot smaller).

One of the things I am currently working on is some new precision test equipment and interfaces and it is just about all surface mount parts. Layout has changed over the years for good reasons but I know why so I don't get caught out generally.

I remember someone mentioning that all that old stuff was not relevant anymore so I reminded the person that we have not yet repealed Ohm's Law (it is quite interesting how many faults can be tracked down just using that and some simple test kit).

It helps that I have my name on a couple of high-speed standards.

So yes, we understand what is going on in the otherwise magical innards of these new devices and writing init code for them on bare metal is a doddle when that level of knowledge is present (why yes, I do use C for that. How did you guess?).

The current $POSITION requires an in depth knowledge of RF and radar and although we have come a long way, the fundamentals of radar are still the same as they were when it was first invented. New (well, sorta new) types of signal generation (but signal generators nonetheless), power amps and receivers but I can figure out what everything is doing very quickly as there are a limited number of ways to implement a radar. The same goes for any number of other types of technology.

Can't adapt? I think many of the younger crowd are the ones with difficulty adapting.

IBM manager sues for $5m claiming postnatal demotion


Re: It really is a choice, honest...

For some years (from about '98 to 2004) I was a single dad and the rules where I lived (PA) were such that my son had to have adult supervision at all times.

The companies I worked for over that period were all supportive of the fact that I had to get home to care for him (pick him up from day care after school, get dinner, all the usual things) and that in the mornings I had to get him to pre-school day care / breakfast / etc. There were occasions, once I had fixed up dinner, bedtime and so on, when I would go back to the office for some things (some all nighters in there as well) but it worked out, and it worked for the companies as well.

Perhaps I was fortunate enough to work for enlightened companies, but there is nothing to prevent either parent from having a career.

Intel reveals pay-to-play Xeon features with software-defined silicon


This is quite common

In the test equipment industry.

I am not defending Intel here as they are in a (relatively) volume market compared to test equipment.

A fully featured, high end oscilloscope (capable of analysing / displaying multigigabit signals or analysing complex systems for timing requirements) can cost upwards of £200K when everything is included (particularly the probes, which can cost £25K each for the highest performance parts). That is but one example - there are many.

Now there are some who would like to have an upgradeable piece of equipment but simply don't have the budget for one of those all-shiny top of the line units [1]. For them, it can make sense to buy something that is capable of that level of performance when that performance will not be required for a year or more; they pay a lower price but still have an upgrade path.

There are many pieces of test equipment that are sold this way but the high end test equipment market does not have the volumes for economies of scale. Paying for an upgrade is less than buying the two pieces of equipment.

So it can make sense in some areas.

[1]. There are less expensive pieces of equipment, certainly, but those are generally not upgradeable, so they will always be limited to their given specifications.

Aviation regulators push for more automation so flights can be run by a single pilot


Re: Automation

Strictly speaking, Typhoon is a dual duplex system. There are 2 channels controlling one of the Canards and another dual channel that controls the other Canard and rudder.

Disclaimer - I have worked pretty extensively on that system for the design authority.



"As a computer, I find your trust in technology amusing".

That is a very old line but it is just as true today. Technology fails. Regularly.

For flight safety critical avionics, there are duplex (2) and triplex (3) architectures. The duplex approach requires the pilot to take over when the channels disagree (because with only 2, we don't really know, from an automation perspective, which one is incorrect).

Even triplex systems can fail [1] if multiple sensors fail, as was the case with AF447. I remember looking at the telemetry and there was 'airspeed disagree' (so at least one airspeed sensing channel was not operating properly) followed by 'alternate laws' [2], which gives the system extra flexibility in operation in a difficult environment. When the system could not get agreement on at least two channels (and therefore had no reasonable way of knowing the real airspeed) there was nothing it could do - it required the pilots to actually take over with manual control.
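The voting problem can be illustrated with a toy two-out-of-three monitor. This is a sketch assuming a simple agreement-within-tolerance scheme; real flight control voters are far more involved:

```c
#include <math.h>

/* Hypothetical triplex monitor: succeeds (returns 1) and outputs an
   averaged value when at least two of the three channels agree within
   'tol'; returns 0 (disagree - hand control to the crew) otherwise. */
int triplex_vote(double a, double b, double c, double tol, double *out)
{
    if (fabs(a - b) <= tol) { *out = (a + b) / 2.0; return 1; }
    if (fabs(a - c) <= tol) { *out = (a + c) / 2.0; return 1; }
    if (fabs(b - c) <= tol) { *out = (b + c) / 2.0; return 1; }
    return 0;   /* no two channels agree: nothing sensible to output */
}
```

With two good airspeed channels and one iced-up pitot the voter still produces a value; lose two and all it can do is report the disagreement, which is roughly the situation the AF447 telemetry showed.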

A lot of pilot error has to do with extremely confusing errors and warnings, especially at low level / low speed. The checklists are designed for use where there is plenty of time (and height) to get to the bottom of a problem and even then there is cross checking by the crew and a strict division of duties (one pilot goes through the checklist to identify the issue and the solution while the other pilot keeps an eye on actually flying the aircraft is one scenario).

Single pilot with a duplex system would have major overload as soon as something went wrong and a triplex system would be required for all safety critical systems (there are quite a lot of them in a modern aircraft).

Failure rate is a numbers game and the usual threshold is that the probability of catastrophic failure is less than 10^-9 per flight hour. That doesn't mean it cannot happen, just that it is very unlikely. The corollary to Murphy's law is 'Murphy was an optimist'.

All these systems have a lot of complexity (the channels have to be galvanically isolated, for example) so there is always a nagging suspicion that something has not been thoroughly analysed.

[1] The failure of multiple pitot tubes on AF447 actually highlighted a known human factors problem with automation - over-reliance on it. In that case, it was determined the pilots did not actually know how to fly the aircraft.

[2] The control laws are what are used by the automation to determine safe changes to the aircraft flying profile so it does not exceed the safe operational envelope.

UK forces Chinese-owned company to offload Newport Wafer Fab


Compound Semiconductors

They do GaN (Gallium Nitride) transistors, which are a Big Thing in power electronics as they can yield very high efficiency power supplies among other things.

They also do compound silicon products and photonics, and are part of an industry / academic consortium.

Blurb here

Twitter engineer calls out Elon Musk for technical BS in unusual career move


An old Dilbert

You need to straighten out the network cables now and then; the zeros are ok and can slip through but the ones can get stuck.

A regular one played on aircraft newbies was to send them off for a rotor blade woompah.

Go ahead, be rude. You don't know it now, but it will cost you $350,000


Slightly different

Quite a few years ago, I was the technical lead [0] for electronics at an aerospace and defence site of a $BIGCORP.

One of the big suppliers of power converters (among a great deal more) was asked if they had some devices rated for military temperatures; in this specific case the answer was 'No', but they said they could set up a screening programme (which would set us back about £100k in one-off NRE).

We were considering this, but then the vendor stated that after the screening was in place (which we would be paying for), they would sell the same parts, at the same price as ours, to anyone. Hmmm.... We decided that we could get by without the screening. We had other methods of achieving the same result, and at least the money would be with local suppliers rather than a large corporation.

Strike 1.

A year or so later, we were performing fault testing with some of the same vendor's point of load regulators (switching devices) and they were not meeting the datasheet; we would short the output (the devices had a statement that a short could be applied continuously and they would recover once the fault condition was removed). Problem was, that was not happening. The 5V output would not recover properly, and the -5V would recover to -15V! Much more weirdness abounded. Interestingly, for power inputs set to 4.2V [1] or below, the devices worked perfectly.

After we had spent several hundred engineering hours (may well have been more) on the issue to properly document it, we presented it to the vendor. After a few months, they eventually admitted it was a silicon error but they would not change the datasheet or the silicon.

My next action was to decree they (that specific division) were now 'persona non grata' in our designs as we could no longer trust them - they were also designed out of all existing designs [2].

Some weeks later, I had a call from our procurement saying the rep had contacted them asking why their volumes had dropped to almost zero. I explained the situation and thought no more of it until the sales manager called me directly demanding very loudly that they be reinstated and that I had overstepped my authority and he would call the director etc., at which point I politely explained that the decision had been made and I hung up. I did get a call from the engineering director (who was already fully briefed) to say they agreed with the decision to drop them and had told them so.

I understand that there was a bit of a 'reshuffle' in that department after that.

We were a 'key account' (as are all the big mil / aero outfits) and losing the entire site (we made a lot of different stuff) was rather a blow apparently.

We eventually started using that division again about 8 years later.

[0] I was not a manager - as the lead I had responsibilities for which vendors, which parts, etc.

[1]. That is the voltage of a fully charged Lithium Ion battery so the parts worked perfectly for consumer equipment although the datasheet stated otherwise.

[2] Fairly simple to do for equipment in the design / pre-qualification stage. Older equipment had been designed with very high reliability figures and we already had sufficient spares for the life of those contracts. Some contracts have a 'technology refresh' planned in and the designs were updated with a more customer focused vendor at that time.

Data loss prevention emergency tactic: keep your finger on the power button for the foreseeable future


American Spellings

I have no problem with them except for one, which is very important in engineering.

Metre for length.

A meter is a measuring instrument.

When I worked in the USA (which I did for over 20 years) I always used the proper spelling for length. It is, after all, a fundamental SI unit. Many of the 're' words are actually perfectly valid according to various (American) dictionaries, incidentally.

Linus Torvalds's faulty memory (RAM, not wetware) slows kernel development


It is easy

To implement an ECC memory system provided the processor itself supports it; I have done plenty of them.

What needs to be done at motherboard level is to expose the ECC signals (for classic ECC) from the processor to the memory devices. ECC just adds an extra memory device, as the actual ECC logic (again, classic ECC) lives in the memory controller. Correct one, detect two is the most common form. The extra memory device stores the ECC syndrome bits.

It is only a few connections (10 or 11 depending on details and version of the interface) and does not add significant cost to the design, but Intel et al. want people to think it is somehow magical and expensive - it isn't.
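'Correct one, detect two' is a Hamming-style SECDED code. A toy Hamming(7,4) version of the correction half, for flavour; real controllers run this over 64 data + 8 check bits in hardware, and add an overall parity bit for the double-error detect, which is omitted here for brevity:

```c
#include <stdint.h>

/* Encode a 4 bit nibble into 7 bits: positions 1..7 hold
   p1 p2 d1 p3 d2 d3 d4 (standard Hamming layout). */
uint8_t ham74_encode(uint8_t nibble)
{
    uint8_t d1 = (nibble >> 0) & 1, d2 = (nibble >> 1) & 1;
    uint8_t d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 | p2 << 1 | d1 << 2 | p3 << 3 |
                     d2 << 4 | d3 << 5 | d4 << 6);
}

/* Recompute the check bits; a non-zero syndrome is the (1-based)
   position of a single flipped bit, which is corrected in place. */
uint8_t ham74_decode(uint8_t code)
{
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (code >> (i - 1)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    int syndrome = s1 | s2 << 1 | s3 << 2;
    if (syndrome)
        b[syndrome] ^= 1;        /* flip the bad bit back */
    return (uint8_t)(b[3] | b[5] << 1 | b[6] << 2 | b[7] << 3);
}
```

The memory controller does exactly this (at much larger width): the stored check bits come from the extra device, the syndrome says which bit to flip, and a zero syndrome means the word is clean.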

There is a type of embedded ECC (I have seen it in SRAMs) where the checks are done within the device. The testing for this is often done at LANSCE as the majority of problems are due to free neutrons and there is a neutron beam facility to see just how robust the solution is.

Devices that start to fail within a short time (a few years) rarely do so due to age; more likely the device has heated up significantly (to the point that the refresh rate has to be increased - provided that has been enabled, which it probably is not).

Excessive heating of semiconductor devices shortens their life quite significantly - the rule of thumb is that the failure rate can double for each temperature rise of 10C (see the Arrhenius equation).

L1 parity and L2/L3/main memory ECC is required in flight safety critical avionics which is why Intel parts are not used in those applications.

People are coming out of retirement due to cost-of-living crisis


Haven't retired yet

Even though I am well past the state retirement age.

I walked into my current $EMPLOYER at the age of 66 and they are only too happy for a number of reasons.

1. Some of the stuff we work with is over 40 years old and designed accordingly. Discrete transistor circuits. Stripline inductors (a piece of metal on a PCB). Small scale integration logic (think quad gates in a single package) providing quite complex logic (so boolean algebra is a must). The list goes on.

2. Lots of RF and Radar, hardly mainstream subjects in colleges and university. I have been in that world, on and off, for over 50 years. Contrary to common belief, we have had functionality that is now commonly rendered using a digital setup (microcontrollers, microprocessors, DACs, ADCs and so forth) for several decades primarily designed in analogue circuits. The fact the new versions are done using newer techniques does not change the fundamentals of what those things are.

3. Designing to what you can get. Particularly true for precision circuits such as test equipment.

4. Small microcontrollers (mainly 8051 based) with the code in assembly language. Precision timed loops (so cycle counting was necessary to get the correct timing - that is how it was done).

5. I know the electronics ecosystem (be that contract PCB assemblers, vendors, reps, FAEs and so forth); if I need some assistance, parts, development kits (you name it) I know who to ask, knowing I have a good chance of getting it.

That said, I stay on top of all the new stuff (which is not always easy but if you don't you lose the edge).

I am not preventing someone younger getting a job (as many seem to think is the case for us older types) simply because in the vast majority of cases they simply cannot do the job(s) I do.


The crime against humanity that is the modern OS desktop, and how to kill it


Re: Skeuomorphism

The theory behind splitting the interface from the implementation was that the interface could remain stable while the implementation could be changed for improvement. Have +1 on me.

Decoupling the display from the data was (should still be) a fundamental rule for many of the diagnostic packages I have written. I have seen horrible code that mixes the two together and attempting to improve the underlying functionality was a hair pulling exercise as all the formatting assumed certain display elements would be of a very specific style.

My view, which was from the *nix mantra of 'do one thing and do it well', served me well and should be beaten into UX developers with a heavy object.

Decouple data / processing and interface. Simple really. A shame that this eminently sensible rule is not followed.

Amazon fails to overturn New York City union election


Clearly mis-transcribed...

"...and we don't believe it represents what the majority of our team executives and investors wants."

The International Space Station will deorbit in glory. How's your legacy tech doing?


Many technologies that we have...

Are as a direct or indirect result of space exploration.

Many years ago, the effective return on investment was calculated to be between $8 and $40 for every dollar spent on space exploration. The estimates vary widely for the good reason that some things are somewhat intangible - what is the value of an exoskeleton that lets an otherwise disabled person regain mobility? The science and engineering behind exoskeletons was originally developed from space exploration programmes.

That is just one application of one of the advances we have due to space exploration of which there are literally thousands.

NASA has a long write-up [nasa.gov] that goes into quite a few details.

There has long been a programme that returns things learned in space to the wider economy - can't remember the programme name at present but it was active in the 80s and 90s at least.

The experiments that are really only feasible in space have yielded amazing results over the decades.

The funding for space exploration is an investment and to quite a large extent from private investment now.

So please stop the 'it costs us money with no return' argument. The returns far outweigh the initial costs.

Microsoft finds critical hole in operating system that for once isn't Windows


From 'The 10 Commandments for C Programmers'

5 Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.

We have known for decades that out-of-bounds access is an issue, which is why bounded copies such as strncpy() are preferred over strcpy().

Also see the next commandment

6 If a function be advertised to return an error code in the event of difficulties, thou shalt check for that code, yea, even though the checks triple the size of thy code and produce aches in thy typing fingers, for if thou thinkest ``it cannot happen to me'', the gods shall surely punish thee for thy arrogance.

The Ten Commandments for C Programmers (Annotated Edition)

In a time before calculators, going the extra mile at work sometimes didn't add up


The GCE O level

I don't know for certain what is on a GCSE maths paper, but when I took my GCE O level [1] it was 3 papers.

Paper 1. Mensuration (numerical calculation). Everything up to and including powers - the question I remember distinctly was: evaluate 27^(2/3). No calculation aids at all except scratch paper and a pencil.

Paper 2. Algebra and plane geometry. You were allowed calculation aids (a calculator if you could afford the batteries). Simultaneous equations, quadratics (including finding the roots), ordinary geometry. I remember having to plot a rather complex function that had cubes, squares and more.

Paper 3. Calculus. Nothing too hairy (you didn't need to understand the chain rule, for instance) but still 'interesting'.

All papers needed a minimum of 50% for a pass, and a 'D' required an aggregate average of 65%.

[1] This was some years after I left school in the early 70s; I left when I was 15, which was legal at the time. The exams were provided and proctored at a Royal Naval Air Station.

Icon for what I had when the results came back.

This tiny Intel Xeon-toting PC board can take your Raspberry Pi any day


Re: Only with the Xeon CPU

In avionics (particularly mission critical and safety critical), ECC for main memory and L2 has been a requirement for decades. L1 parity is also required.

This is why you won't find much from Intel in those markets (I know of some 6U boards used for display but that's about it). It is not a particularly large market but it can still be quite lucrative.

Freescale (now part of NXP) and others have offered ECC for a long time. I designed an SBC for that market using an MPC8548E about 12 years ago that had an 85mm x 90mm footprint.

On the bit flip front, the chance of one happening rises with altitude (up to about 70,000 feet), and there was a study on servers in Denver where bit flips were regularly detected.

You really don't want the flight control computer(s) to have undetected errors.

Too little, too late: Intel's legacy is eroding


Re: "Diversity will destroy this company"

I have had to deal with this problem in the past.

At $COMPANY in the deep south in the 90s, the director of operations (and a lot of the local males) considered females to be inferior; I am sure you know the type. That is not my view as I have met many women in engineering and other professions who were clearly superior to many, if not all, of their male colleagues.

There was an opening for a supervisory position in the repair department which was a bit top heavy with females (low pay and all that) so I talked to the CEO (my direct boss at the time) and he agreed I could completely anonymise any applications.

Given that I was guiding the technical operation, I was rather well placed to know the strengths (and the areas in which they struggled) of each potential candidate.

The applications were all handed to me directly and I then made new applications with the titles Candidate A, Candidate B and so forth. The operations director was livid because he could not see the names but we had done an end run around that.

Each candidate interviewed with an outside expert in management (some management skills were necessary) and that was written up with scoring.

I did the technical scoring. It had been agreed that the candidate with the best overall score would get the position (somewhat weighted, as both technical and management skills were required - the solution was to multiply the scores). There were some other things such as attitudes to others and so forth - the last thing we wanted was a psychopath [1].

Now the entire process was explained to the candidates individually who all agreed it was fair and above board.

This objective method yielded a very clear winner who happened to be female. I am still amazed we managed to keep it all secret.

The operations director grumbled but there really wasn't anything he could do about it.

One of the male crew (who really wasn't that good) said he wouldn't work for a woman so he was told to not let the door hit his ass on the way out.

I am certain that had that process not been followed one of the male candidates would have been chosen.

The person chosen was the best choice from all the candidates, which is as it should be.

[1]. Certain psychopathic traits are actually a good thing. The Good Psychopath's Guide to Success is both fascinating and a pretty good read, although a lot of it is in the vernacular.

Engineers on the brink of extinction threaten entire tech ecosystems


So many reasons

There are a lot of reasons we don't see EEs that much (well, you do if you are in the right place).

When someone who fixes a washing machine using a fault finding handbook calls themselves an 'engineer' it rather colours the perception of young people. I had an amusing altercation when my washing machine broke (in warranty) some years ago.

Me: Washing machine pops the breaker

Them: We will arrange for an engineer to visit

Me: I need it fixed, not redesigned

Them: <silence>

That aside, universities don't give enough hands on experience and unless they get a decent mentor they are lost and drift away from the EE side.

Universities don't give analogue the attention it needs; it is an analogue world and if you want to interface sensors, you will very probably need skills in this arena. Want to do high speed (multi gigabit links)? You will need all manner of analogue skills. Mixed signal (fast digital and sensitive analogue) is challenging and needs in depth analogue skills.

At the physical layer, every signal is analogue (until you get to Planck quantities anyway) as EMC testing proves on a daily basis.

I have met many EE graduates who think EE is a matter of getting a RPi or Arduino and flashing LEDs.

A microcontroller is used more often than is strictly necessary, in my view (ymmv) and I blame the Universities for that.

A lack of 'getting them young'. The spark starts early and it has to be interesting enough to keep them engaged. Talking about engineering (of any description) only after they are 13 or 14 is way too late.

There is a lot of electronics design done and built in the UK where I have been a part of it. I can't trust the Chinese suppliers to use the correct grade of PCB material (the glass transition temperature matters a lot in some designs and when it comes to controlled impedance forget it).

I do electronics hardware and usually write my own software for embedded stuff (often bare metal) - RTOS's are overused and where you need deterministic behaviour they suck.

In "The Art of Electronics" the authors say that electronic design is "a few laws of physics, a few rules of thumb and a large bag of tricks" (might be paraphrased).

Sticking around long enough to learn those tricks and rules of thumb can be daunting when young grads don't have decent mentors or managers to say nothing of an inadequate education.

Software is often seen as the cool area, but software needs something to actually run on, and in many cases that is not a desktop or laptop. To echo someone else on this thread, digital sampling of signals is a science in its own right, totally different to continuous time (analogue) techniques, and it is not simple at all. Why that is not made clear is beyond me (unless the lecturers / professors don't know, of course).

I do not have a degree; I look at my career as a self-directed apprenticeship. It also helped that I went to the USA after leaving the Royal Navy, where the attitude (in stark contrast to the UK at the time, which could be very elitist) was 'can you do the job?'

Yes, there is a lot to learn, but that is part of the attraction for me; not sure how youngsters see it in a world where instant gratification and 'everyone should get a prize' seems to be the expectation.

Big Tech bosses call for computer science to be taught in all US schools



I am all for teaching problem solving skills but different people approach it differently - one size does not fit all.

That brings the question of aptitude into this mix; given that logical problem solving typically requires a particular type of mindset, are we going to saddle some children with a subject they will come to detest?

Case in point: when I was much younger (and dinosaurs roamed the earth) the teacher we had for logarithms (I'll never forget it) went on about the exponent and the mantissa endlessly, as if they were the answer to life, the universe and everything. While those are indeed both part of the notation, he never got to the point of actually explaining what a logarithm is. It didn't help that the teacher had a very strong Spanish accent (I have no problem with the Spanish, but a dialect closer to the target audience is usually better). Totally put me off the subject.

When I decided to look at it again, because so many of the things I was being taught in avionics involved logarithms, I realised that a logarithm can be defined in a way that takes one line:

If x = log_a(y), then a^x = y. The elegance of that appeals to me, incidentally.

There is an important part here; not everyone learns the same way or at the same rate so a 'standardised' approach is going to be a nightmare (as our education system already is) with two groups getting to detest the lessons:

1. The ones who pick things up very quickly. They will be bored to tears.

2. The ones who require at least 10 iterations to grasp a concept. They will find it such heavy going that they may just give up on it.

Having taught post secondary for some years, I can assure you that those groups exist, to a greater or lesser extent, in every class.

Apart from the base problem solving, this should be an elective.

FYI: BMW puts heated seats, other features behind paywall


In some markets...

This approach makes sense.

Test equipment (and I am not talking about cheap multimeters) can be very expensive. I specified a new oscilloscope about 10 years ago capable of measuring signals up to 12 GHz and the price (with $KeyAccount discount) was about £120K - the probes alone were over £20K.

Not everyone needs all the features that the hardware supports so some of the more esoteric features are not enabled - you need to buy a licence (or in some cases a small card that fits in a slot) to get those features.

For older kit, there would be empty slots within the chassis that you could populate to get added features.

The hardware in many high end test instruments is capable of a great deal but if you don't need those features you don't pay for them although the option to enable them is usually available.

I can buy an oscilloscope good to 200MHz for less than £200 but if I need a certified device (full calibration records) as I do in a great deal of my work that just won't cut it.

Lab grade precision multimeters can set you back over £10K.

Test equipment (particularly high end oscilloscopes) are hardly mass produced items so it makes sense for the manufacturers to have a couple of base designs where features are turned on and off with a licence key.

For this market, such a scheme makes sense. Cars not so much.

Systemd supremo Lennart Poettering leaves Red Hat for Microsoft


Re: I couldn't be happier ...

Many years ago (early to mid 80s) I was at Raytheon in Virginia Beach.

The chief engineer got a request from a government department for a reference for a person he had fired.

He wrote that 'the Commonwealth of Virginia and <name> deserve each other'.

I think the same sentiment applies here.

Intel to sell Massachusetts R&D site, once home to its only New England fab


DEC Ethernet devices

The DEC Ethernet PHY line of devices was owned by Intel in '98.

I was working with those devices and Intel issued a revised device that did not meet the PCI spec and which caused a great deal of pain for the company I was with at the time.

That was in the days prior to automatic PCN (product change notices) being issued. I found out after managing to find a blurb (via google when they were still a 'do no evil' company) that stated the problem.

That line was probably offloaded to Intel prior to the main company being sold.

Linus Torvalds says Rust is coming to the Linux kernel 'real soon now'


Re: Seriously, are programmers that bad?

I have been doing embedded C for over 30 years for various microcontrollers; the only real difference is the number of peripherals actually on chip. One or two back then, dozens or more now.

The first one I used with embedded peripherals (not my first microprocessor plus peripherals which was an 8080) was a 65121 from California Micro Devices which was based on the 65C02 which was admittedly written in assembly.

I completely agree that it is perfectly possible to write safe object oriented code in C (and probably any language we care to name, although there the question might be why).

Among other things some of my projects require true hard real time performance such as ADC and DAC interfacing (without that, the standard DSP equations break) so the concept of task and time windowing becomes very important. Deterministic performance is not difficult to get, but apparently some find it too difficult.

Sometimes I even use a small microcontroller that is dedicated to the task of actually doing the interfacing with DMA upstream to simplify the application layer.

I have used Rust a bit and I find the syntax a bit arcane but I can live with it.

I have seen plenty of C macro fu for device dependencies; those 'families' of devices aren't always as close as the vendors try to tell you.

Will optics ever replace copper interconnects? We asked this silicon photonics startup


Re: The medium is the messenger

The velocity of propagation for electromagnetic fields along copper interconnects is typically about 0.5c for most PCBs and about 0.67c in coax cables.

It is down to the relative permittivity of the materials involved.

Strictly speaking, the permeability (the magnetic part of this) should also be considered, but the relative permeability of copper is within parts per million of that of a vacuum.

Buoyant tech sector bucking the UK trend, says consultancy


New skillsets?

RSM UK economist Thomas Pugh said that labour shortages in the media and tech sectors are due to the relatively new skillsets the industries require.

In my experience, it is older skill sets that are in demand especially in electronics.

There is a worrying trend at some universities to minimise lab time and teaching analogue electronics, both of which are critical skills.

If someone is really skilled in analogue they can deal with digital circuitry quite easily [1] but the other way around not so much.

It is an analogue world (at least until you get to the Planck energy level) and it is necessary to interface to it, which has many pitfalls for the unwary.

[1] Designing a modern piece of kit with anything resembling fast edge rates (and that is a lot of stuff) requires a decent knowledge of transmission line theory which is as analogue as it gets.

UK opens national security probe into 2021 sale of local wafer fab to Chinese company


Re: Process nodes

If you look at what Newport are into, it is not an ordinary CMOS foundry.

They are making compound semiconductors, which are at the core of a lot of very new stuff (such as high power distribution in electric vehicles and RF power amps for 5G base stations) and are also the basis for some advanced photonics.

Totally different process and applications and definitely extremely useful to say nothing of being a very attractive IP target.

The world of electronics is far larger than just microprocessors and memory devices on sub 5nm processes.

Thumb Up

Re: More technical details

For microwave, GaAs (Gallium Arsenide), GaAlAs (Gallium Aluminium Arsenide) and SiGe (Silicon Germanium) are common compounds.

Photonics also uses compound semiconductors.

There are other areas where various compounds are displacing Silicon (or providing a capability Silicon cannot), such as GaN (Gallium Nitride - extremely efficient Si MOSFET replacements that enable very high efficiency switch mode power supplies, for instance) and Silicon Carbide (SiC), which excels at high voltages and is often used in multi-kV power.

So the post is totally correct.

Semiconductors are not just Silicon.


Process nodes

A common misconception, which is alluded to in the article, is that only the shiny 2, 5, 7nm nodes and so forth are the cutting edge.

The Newport fab are experts in power MOSFETs (a critical component in many applications) and are part of the Compound Semiconductor Consortium.

For every shiny new processor / ASIC / <new shiny du jour> there are dozens to hundreds of support components; the global supply chain problems in the automotive market are more for these support components than processors.

Compound semiconductors are (currently) somewhat niche but it is a very fast growing area.

Please remember that the very small geometry nodes are actually a relatively small part of the electronics industry.

Beware the fury of a database developer torn from tables and SQL


Literal translations

Many years ago I was a member of a Fleet Air Arm squadron (F-4 Phantom II aircraft) and the powers that were wanted to get 'up to date' and have the squadron motto in English rather than Latin (as many were at the time).

The motto was 'Strike Unseen', but the literal translation was 'Lash out blindly'.


Lost in translation

Many years ago, some of the Japanese semiconductor vendors would provide (translated) datasheets in English.

Those suffered (in an amusing way, although it could be frustrating) from precisely this effect; one that comes to mind is NJR (now part of Nisshinbo apparently).

I think they now employ properly trained translators.

Start your engines: Windows 11 ready for broad deployment


No local account is a killer

At $Company, there are some machines that will never be connected to a network so I am getting replacement kit with Win10 Pro for some really old laptops where I can still make a local account. They run dedicated tests (written in VB6) and have no need at all for network access. Besides, $Customer insists on no network access.

In the future, I might just migrate the tests over to Linux (must check to see if they will run under Wine) but if not, they are not that complex.

A little pain, but nowhere near the mess I would encounter with Win11.

GPL legal battle: Vizio told by judge it will have to answer breach-of-contract claims


Digital protocols

Digital protocols do not require a microcontroller to render them; it just makes it a bit easier.

It could conceivably be done in discrete hardware (not that I would, but it is certainly feasible).

I have designed video systems where the only reason for the processing was to switch sources, destination and select the appropriate decoder.

All that said, the low cost of modern electronics (current supply woes notwithstanding) means it is pretty simple to add all that in (mostly) general purpose hardware.

Open-source leaders' reputations as jerks is undeserved


Pretty broad brush

I know and work with some people on the autistic spectrum (they have been diagnosed by suitably qualified people and $COMPANY has a rather inclusive attitude as they are really good in some positions).

They are all pretty unique and I cannot state any single (or even multiple) traits that they share.

I am not on the spectrum but I can honestly say there have been times (many years ago) when I have sent a 'rocket' usually after having to explain for the N(th) time how something operates.

Had such a diagnosis been around when I was young, I would probably have been tagged as ADD / ADHD, which can show up as impatience with those who don't pick up concepts particularly quickly. (Apparently many people are diagnosed with it when in reality they are somewhat different; that doesn't mean it is not a real condition, though.)

I now follow the advice of my grandmother 'Honey catches more flies than vinegar' (although I am not sure I desire to catch flies, but you get the drift). Being decent to others is something we can all do in my view.

I am not convinced that such diagnoses are more common in tech than in other industries, though.

Cisco warns of premature DIMM failures



The article mentions that OS functionality can 'hide' the errors, so clearly these have ECC, which is usually correct-one, detect-two (and a detected double-bit error may cause the platform to go TITSUP).

Depending on various settings (SDRAM interfaces of all types have literally dozens of registers with hundreds of settings) the OS could trap the error and use the performance counters but whether that is done or not is up to the OS itself.

The problem is unlikely to actually be a silicon issue; far more likely is that a support component (resistor or capacitor) is acting up. It could even be the PCB itself for a few reasons.

FAA to airlines: 5G-sensitive radio altimeters have to go


RF levels

<Puts on RF designer hat>

Although the 5G transmitters are within their specification, RF radiation does not suddenly stop at an arbitrary frequency.

There will still be some 5G transmit energy in the radio altimeter band, and although that level is fairly low, most radio altimeters (regardless of the method used, either pulsed or frequency swept which are the primary techniques) have a receiver sensitivity of around -100dBm.

To put that in perspective: that is 100 femtowatts, which is not enough to warm a gnat's behind.

Adding filters won't really help the affected devices very much. All that will do is reduce the resolution of the device and, although that may be OK, it is a material change of specification of a safety critical item, which requires re-qualification to Level A (failure can result in catastrophic loss of life).

So whether a new piece of equipment or an update to existing equipment is done, there is an expensive piece of work to be done.

The 5G antenna could be adjusted to not point at the sky, of course, which would also solve the problem.

Heresy: Hare programming language an alternative to C


Array bounds

From The Ten Commandments for C Programmers:

5 Thou shalt check the array bounds of all strings (indeed, all arrays), for surely where thou typest ``foo'' someone someday shall type ``supercalifragilisticexpialidocious''.

I have never found it particularly difficult or onerous to check against the actual size of any container in C; admittedly it does require one to do it oneself.


Re: No moving targets

On a thread I watch on a $DifferentPlatform, one commentard believes that many of the more recent additions to C++ have been driven by mathematicians more interested in esoteric constructs rather than useful day to day enhancements.

Move semantics come to mind.