Re: VMS was where I started
Might I suggest a quick revisit to the OSI 7-layer network model (other similar concepts are available)?
X.25 sat down at the lower layers - LAPB at the Data Link layer, with its packet level up at the Network layer - and both TCP/IP and DECnet could use it to provide network services (slightly simplified, because both had parts of their protocol stacks that went down into and below this layer, but both could be run over X.25). As stated, X.25 was more prevalent in the UK and Europe. IIRC in North America the cost of point-to-point (T1) circuits was significantly lower than elsewhere, and later (I think) ATM services were more available and advanced there too as networks evolved away from T1 circuits. That's not to say the IP layer of TCP/IP wasn't used to deliver a similar capability to X.25 over T1/ATM.
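To make the layering point concrete, here's a toy sketch (entirely my own, in Python, with the field layouts heavily simplified - real X.25 carries P(S)/P(R) sequence numbers, M-bits, a proper LAPB FCS and so on) of what "running over X.25" actually means: the packet level doesn't know or care whether the payload handed down to it is DECnet or IP.

```python
# Toy illustration of the layering: the same X.25 packet level can carry
# either a DECnet payload or an IP datagram. Headers are simplified.

def x25_data_packet(lcn: int, payload: bytes) -> bytes:
    """Wrap a payload in a simplified X.25 packet-level header:
    GFI + 12-bit logical channel number, then a data packet octet
    (P(S)/P(R) both zero here; real packets sequence these)."""
    gfi_lcgn = 0x10 | ((lcn >> 8) & 0x0F)   # GFI for modulo-8 + high LCN bits
    return bytes([gfi_lcgn, lcn & 0xFF, 0x00]) + payload

def lapb_frame(packet: bytes) -> bytes:
    """Wrap the packet in a simplified LAPB data-link frame.
    Address/control octets and the real FCS are elided."""
    return b"\x7e" + packet + b"\x7e"

decnet_payload = b"<DECnet routing/NSP data>"   # placeholder, not real bits
ip_payload     = b"<IP datagram>"               # placeholder, not real bits

for payload in (decnet_payload, ip_payload):
    print(lapb_frame(x25_data_packet(lcn=42, payload=payload)).hex())
```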
I write as one of the DECnet gurus of a major global chemicals/pharmaceuticals company with its own extensive private X.25 network (running on GEC minis, IIRC). TBH, the overall implementation of X.25 (and the rest of the triple-X stack) wasn't all that brilliant and at times failed to deliver exactly what the customer base needed - and I'm talking about the mid-to-late 1980s and early 1990s here.
For running DECnet(-IV) the options were basically a backplane device or a network appliance configured as a single-purpose router (underneath, a modified PDP-11/24, which in various configurations ran X.25, an SNA Gateway and, IIRC, pure DDCMP routing). But there were limitations on the number and speed of the physical connections each box supported and, again IIRC, it was an OK solution for DECnet-to-DECnet, but if you wanted to use it as an "Access Gateway" service for "triple X" terminal connections (X.3/X.28/X.29 - from an X.29 PAD to a host service) it was actually quite poor and complex to set up and support.
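For anyone who never had the pleasure: X.3 defines the PAD's behaviour as a set of numbered parameters, and X.29 lets the host end read and set them over the call - something like this Python sketch of mine (heavily simplified, not wire-accurate, and the message code is from memory). Multiply that fiddling by every terminal type and application on the network and you can see why it was painful to support.

```python
# Simplified sketch of X.29 PAD parameter setting. The parameter numbers
# are the real X.3 ones; the message encoding is simplified, and the
# 'set' message code is IIRC - don't treat this as wire-accurate.

X3_PARAMS = {
    1: 1,    # X.3 param 1: PAD recall (escape to command state) enabled
    2: 1,    # X.3 param 2: local echo on
    3: 2,    # X.3 param 3: forward data on CR
    4: 0,    # X.3 param 4: idle timer off
}

def x29_set_message(params: dict[int, int]) -> bytes:
    """Build a simplified X.29 'set PAD parameters' message:
    a message code followed by parameter-number/value octet pairs."""
    body = bytearray([0x02])          # 'set' message code, IIRC
    for ref, val in params.items():
        body += bytes([ref, val])
    return bytes(body)

print(x29_set_message(X3_PARAMS).hex())
```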
For one application I worked on in this period, we were running highly customised ALL-IN-1 on a VAX-8650 participating in a mixed-use VAXcluster (i.e. there were other VAXen in the VAXcluster running different application sets, but sharing common infrastructure at the CI level). The users were the division's sales and marketing teams (and associated hangers-on) - many working "on the road" or in offices away from the major divisional offices in Welwyn and on Teesside (where the computer centre was). If not in an office, they accessed the system via dialup to a public network access point and some kind of PAD service, then over X.25-to-X.25 links into the company.

The X.25 Gateway solution didn't give enough throughput, and the backplane solution (can't recall the product, but it used a V.35 connection to the local X.25 switch) was, not to put too fine a point on it, unreliable, and took compute resource on an already underpowered host. Not only that: because the X.25 connection went directly to the 8650, if we had to take it out of service for maintenance (or whatever) there was no way to move the service to another system in the cluster. Even if that system had included its own X.25 card, that wouldn't have been a great deal of use either, as the X.25 address would have been wrong, and there was no straightforward way of load-balancing connections between multiple circuits behind the single X.25 port address.
What I wanted, and asked for several times without success, was a combination of a beefed-up X.25 gateway (probably MicroVAX-based) running the whole X.25 (etc.) stack within itself, then using LAT over Ethernet for connectivity from the gateway to the host. This would have avoided the CPU load problems on the ALL-IN-1 host, and would have given us flexibility and load-sharing had we expanded to run more than one dedicated ALL-IN-1 server.
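In rough pseudo-Python (everything here is hypothetical - the host names are made up and both network APIs are stubbed out), the idea was simply:

```python
# Rough sketch of the gateway I kept asking for: terminate X.25 in one
# box, then hand each incoming call to whichever cluster host is next,
# over LAT on the Ethernet. All names hypothetical; APIs stubbed.

import itertools

LAT_SERVICE_HOSTS = ["ALLIN1_A", "ALLIN1_B"]   # hypothetical cluster nodes
next_host = itertools.cycle(LAT_SERVICE_HOSTS)

def accept_x25_call():
    """Stub: block until the X.25 stack delivers an incoming call."""
    ...

def open_lat_session(host: str):
    """Stub: open a LAT session to `host` over the Ethernet."""
    ...

def relay(call, session):
    """Stub: shuttle bytes between the X.25 virtual circuit and LAT."""
    ...

def gateway_loop():
    while True:
        call = accept_x25_call()    # the whole X.25/triple-X stack ends here
        host = next(next_host)      # naive load-sharing across the cluster
        relay(call, open_lat_session(host))
```

One X.25 address on the gateway, any number of hosts behind it on the LAN, and any of them free to drop out for maintenance - exactly the decoupling the backplane card could never give us.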
Don't get me started on the later DECnet Phase V (DECNIS) routing/X.25 products... please (the nightmares haven't really gone away)! Thankfully, Cisco were emerging as the multi-protocol router (MPR) of choice by the mid-1990s - whatever we think of them today.