It's a Gartner prediction...
Take with a grain (bag? cargo container's worth?) of salt.
Gartner has asserted that lead times for new networking equipment will remain long until early in the year 2023, and thereafter display "slow incremental improvement over the course of months." The analyst firm offered that grim forecast last week in a document, obtained by The Register, titled "What Are My Options for Dealing …
The security-update 'End of Support' date dictates the lifespan of an awful lot of kit, now that regulators demand security lapses be reported.
Device management of multiple vendors' kit going back years is horribly complicated in any production environment; problem solving becomes a pain when something somewhere isn't working 'quite right' but hasn't actually failed yet. Moving kit around to optimise port counts isn't an option in most 24/7 production settings either: the network must be built to serve the organisation that pays for it, not the other way around.
These suggestions basically take us back to organically grown networks that save hardware pennies up front while loading up the long-term management costs. Will we see a Gartner report in 18 months extolling the virtues of network simplification to undo the damage?
While Gartner has a well-deserved reputation for publishing useless drivel, I expect supply chain issues in all areas to persist for at least a year. With the possibility of a widening war, it may even get worse. What I have seen is erratic availability of products in many of the areas I pay attention to. So to expect networking gear not to have these issues is rather stupid; I just don't know how bad it will really be, or what options might be available.
I think Gartner could have gone further, although I do understand that even suggesting some vendor independence runs contrary to the way enterprise IT works, and to some of Gartner's revenue.
Firstly, to warn of vendor lock-in via element manager software. If you'd chosen to do your switch port provisioning via generic tools -- say, Netbox and Ansible -- and monitoring via one of the good open SNMP platforms -- say LibreNMS -- then picking up whatever switching hardware is available doesn't raise massive integration and ongoing cost-of-management hassles.
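To make that concrete, here's a sketch of what vendor-neutral port provisioning might look like: the port *intent* lives as plain inventory data (typically pulled from NetBox's API rather than hand-written), and a per-vendor template renders it into whichever switch OS happens to be in stock. The hostnames, port names, and VLAN numbers below are all made up for illustration.

```yaml
# Illustrative Ansible host_vars fragment: port intent as generic data.
# A per-vendor Jinja2 template renders this into Cisco, Arista, Juniper,
# or whatever config syntax the available hardware speaks.
interfaces:
  - name: port1
    description: "rack A3 server uplink"
    vlan: 120
    enabled: true
  - name: port2
    description: "spare"
    vlan: 999
    enabled: false
```

The point is that none of this data mentions a vendor: swapping switch brands means swapping one template, not re-entering every port by hand in a new element manager.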
Secondly, to fill out the suggestion for x86. A lot of 'appliance' middleboxes which do packet manipulation are already x86 underneath, and there can be large savings in making that explicit. There's a spectrum of choices, from firewalls in VMs, to proprietary software in containers, to generic tools in containers. The state of the art here is Linux's XDP (eXpress Data Path) and fd.io's VPP (Vector Packet Processing). Both will do high-touch packet manipulation at high rates (over 10Gbps on a modest server), and both can be run in easy-to-manage containers with little performance hit by selecting network interface cards with the SR-IOV feature.
A real cost of moving is in the firewall rules, and again, avoiding firewall-specific element management can pay off. Most firewall element managers lack sufficiently powerful storage of firewall rules: no auditing of changes, many not even able to carry a JIRA issue ID to record why a rule exists, and none of the abilities of modern version control like Git, such as reverting a faulty rule from months ago without disturbing the changes made since. There's a lot to be said for maintaining the firewall rules off the firewall, in a YAML file, with symbolic names rather than IP addresses, then 'compiling' that down to the vendor's format via a continuous integration job which stages the change into Ansible.
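A minimal sketch of that 'compile the rules down' idea, with a plain Python dict standing in for the parsed YAML file; the hostnames, ticket IDs, and iptables-style output format are all purely illustrative:

```python
# Firewall rules kept as data with symbolic names, compiled down to a
# vendor-style format (iptables-like here, purely illustrative). In
# practice the rules would live in a YAML file under Git, and a CI job
# would run this compiler and hand the output to Ansible.

# Symbolic names -> addresses, maintained in one place.
HOSTS = {
    "web-frontend": "192.0.2.10",
    "db-primary":   "192.0.2.20",
    "office-lan":   "198.51.100.0/24",
}

# Rules reference symbolic names, never raw IPs; each carries a ticket
# ID so 'why does this rule exist?' is answerable from the data itself.
RULES = [
    {"src": "office-lan",   "dst": "web-frontend", "port": 443,  "ticket": "NET-101"},
    {"src": "web-frontend", "dst": "db-primary",   "port": 5432, "ticket": "NET-102"},
]

def compile_rules(hosts, rules):
    """Resolve symbolic names and emit one vendor-format line per rule."""
    lines = []
    for r in rules:
        src, dst = hosts[r["src"]], hosts[r["dst"]]
        lines.append(
            f"-A FORWARD -s {src} -d {dst} -p tcp --dport {r['port']} "
            f"-j ACCEPT  # {r['ticket']}"
        )
    return lines

if __name__ == "__main__":
    for line in compile_rules(HOSTS, RULES):
        print(line)
```

Reverting a bad rule is now just `git revert` on the data file followed by a re-run of the compiler, which is exactly the ability the element managers lack.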
Running a VPN server on x86 is another task best made explicit rather than using an appliance. By running the VPN server yourself you can divorce its authentication from corporate user authentication, replacing corporate passwords with corporate-issued tokens or keys. Then losing a phone or laptop doesn't leak that all-powerful password from some configuration file somewhere. A password loss via client VPN software appears to be the way Colonial Pipeline was hacked: that password didn't just allow access to the VPN the way a token might, but onto servers within the network too. Running your own infrastructure also allows multiple types of VPN -- say OpenVPN and Cisco IPsec -- which then allows the use of the VPN clients shipped with the device. Avoiding the installation of client VPN software is a substantial saving of helpdesk hassle. Back those VPN servers with a firewall and a Zeek instance to do intrusion detection and the result is as secure as the vendor offerings, easier to use, and runs much faster (because it can run on this year's x86, not one from 5 years ago packaged into an 'appliance').
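As an illustration of the certificate-instead-of-password point, a minimal OpenVPN server fragment along these lines (file names illustrative) accepts only clients holding a corporate-issued certificate, so no corporate password ever sits in a client config:

```
# Illustrative OpenVPN server fragment: clients authenticate with
# corporate-issued certificates, never with the corporate password.
port 1194
proto udp
dev tun
ca ca.crt          # corporate CA that issued the client certificates
cert server.crt
key server.key
dh dh.pem
# No auth-user-pass / PAM plugin configured: a lost laptop leaks only
# a revocable certificate, which the CRL below can invalidate.
crl-verify crl.pem
```

A stolen credential here is revoked by reissuing the CRL, with no password reset rippling through every other corporate system.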
Thirdly, these 'some assembly required' systems don't nickel-and-dime you for additional features. If you want active-active or routing protocol support then the questions are technical and managerial rather than financial: is resilience best done with load sharing, a proxy, or anycast? Is the added complexity of a routing protocol worthwhile? There are so many firewall pairs configured as active-passive which would be better configured as active-active with a routing protocol, but the clients can't afford that 'added value' solution from a vendor.
Gartner doesn't often acknowledge the strengths of these 'some assembly required' solutions, which is odd given their massive mission-critical use in the FAANG networks. I would speculate that those analyses don't look beyond the vendors who promote products to investigate the full range of heavily used software.