Summary: orders are being signed off by those who are out of touch with what's really happening.
Telcos find cloud migrations, security, are a pain in the IaaS
Telecom companies have consumed only 48 percent of the cloud they have committed to, yet seek to secure more, according to a report released late last week by Infosys. The outsourcing giant's "Cloud Radar – Telecom Industry Report" is compiled from responses from over 400 industry insiders, from mid-level management to execs and the C-suite. It …
COMMENTS
-
Tuesday 22nd October 2024 10:17 GMT Jellied Eel
Semantics
That's further complicated by cloud offering services that directly compete with the telecoms' own offerings – like edge computing, IoT platforms, and virtualized network functions (NFVs).
Cloud doesn't really compete in a lot of those offerings. If the intent is having compute close to users, then edge computing is better in latency terms than relying on a 'cloud', which might be somewhere like Ireland. A big telco might have a private 'cloud' instance from the usual suspects, otherwise cloud is hanging off the edge of the network. Which also makes stuff like NFV slower, and more dangerous, ie extending control plane functions to entities outside the telco's control.
I think a lot is down to the hype around what 'cloud' actually is, ie it's just a pile of servers and storage sitting in a datacentre somewhere. Replicating those functions in a rack in a PoP can do exactly the same thing, just with the challenges of designing, implementing and managing what is effectively a distributed, private 'cloud', along with the usual IT challenges of in-house staff, or trusting the out-house supplier to manage tin & software.
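The latency point is easy enough to demonstrate for yourself. A quick-and-dirty Python sketch along these lines (the hostnames are placeholders, not real endpoints) will usually show a PoP a few hops away beating a cloud region across the sea by a healthy margin:

```python
# Quick-and-dirty latency comparison: median TCP connect time to a nearby PoP
# versus a distant cloud region. Hostnames are placeholders, not real hosts.
import socket
import time

def connect_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

if __name__ == "__main__":
    for label, host in [("edge PoP", "edge-pop.example.net"),
                        ("cloud region (Ireland)", "eu-west.example.com")]:
        try:
            print(f"{label:25s} {connect_rtt_ms(host):6.1f} ms")
        except OSError as exc:
            print(f"{label:25s} unreachable ({exc})")
```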
-
Tuesday 22nd October 2024 13:52 GMT Xalran
Re: Semantics
Big Telcos have private clouds in their own datacenters... Built on equipment they bought from the usual $TELCO_EQUIPMENT_VENDOR suspects.
Said private cloud then hosts all the core network servers found in a 5G network (AUC, EPG, you name it; there are so many of them nowadays because the cloud extracted one function after the other from the monolithic equipment and put them on COTS servers).
With 10Gb fiber backhaul & low latency (2ns) routers from the RAN to the Core and eventually some edge equipment for the critical slices implemented to complete the datacenter.
-
Tuesday 22nd October 2024 17:23 GMT Jellied Eel
Re: Semantics
With 10Gb fiber backhaul & low latency (2ns) routers from the RAN to the Core and eventually some edge equipment for the critical slices implemented to complete the datacenter.
Exactly. Smart telcos realised that it made little sense, and cost a LOT of money, to buy tin from Cisco, Juniper etc when for most applications you could run all their functions using decent servers and switches instead. Deutsche Telekom was the first I came across who ran most of their core off *nix servers and gated instead. Especially after BGP (and general routing) tables started to bloat and the box shifters' solution was to buy a bigger box. Because despite a lot of routers being little more than PCs hanging loosely off a control plane, they were vastly more expensive and less expandable than decent commodity servers. Plus the OAM costs and overheads were a whole lot simpler than trying to deal with Cisco's licensing system.
Mobile operators generally got that clue early and skipped the telco vendors entirely. And with networks transitioning to Ethernet, so no longer needing to support 'legacy' protocols like SDH or SONET, why buy overpriced tin that is often just forwarding out of a few interfaces? And then advances like fast VRAM just made that decision all the easier. So don't buy Cisco or Juniper unless you absolutely have to. Sure, you might not be able to apply that 'Cisco Powered Network' label, but when world+dog has that already, it doesn't exactly differentiate. Might mean fewer t-shirts, but business is business.
Which is basically what this article/advertorial is about. Dear telcos, stop doing what you're doing, and give Infosys your money instead! Assuming the telcos haven't experienced Infosys's service levels already, of course.
-
Tuesday 22nd October 2024 19:55 GMT Jellied Eel
Re: Semantics
gated? eekk! having mid-1990s flashbacks of running gated on BSDI with X.21 sync cards as a BGP router...
Yep, I think Demon did this as well. Nothing really wrong with the approach, especially given JunOS was pretty much gated and you used to be able to run an emulator (Olive) until they nobbled that. Can't remember who provided Deutsche Telekom, but it was a company that provided support for the implementations as well. It's one of those odd bits of telco history though where, despite the Internet being based around 'open standards', vendors moved swiftly to close them with tricks like Cisco disallowing BGP on Cat switches, even though they were perfectly capable of running it. Cats were a lot cheaper, more convenient and had higher Ethernet port densities than routers could manage. Which made things like running the servers ISPs needed for DNS, mail, news etc a lot more expensive than it needed to be.
Now that things have pretty much moved to MPLS and plain forwarding, the 'BFRs' aren't really needed anywhere near as much. Spin up VMs, load those with VRFs, switch VLANs to instances acting as route reflectors or route servers, and call it good.
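Purely to illustrate how little 'a router' needs to be these days, here's a rough Python sketch that renders an FRR/IOS-style route-reflector config for one of those VMs. The AS number and addresses are invented, and exact statement placement varies between releases, so treat the output as a sketch rather than a drop-in config:

```python
# Minimal sketch: render a route-reflector config for a VM-based BGP speaker.
# AS number, router ID and client addresses are invented for illustration.

CLIENTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # hypothetical edge boxes
LOCAL_AS = 64512                                     # private AS, example only
ROUTER_ID = "10.0.0.1"

def render_rr_config(local_as: int, router_id: str, clients: list[str]) -> str:
    """Build an FRR/IOS-style config with every client as an iBGP RR client."""
    lines = [f"router bgp {local_as}",
             f" bgp router-id {router_id}",
             f" bgp cluster-id {router_id}"]
    for peer in clients:
        lines.append(f" neighbor {peer} remote-as {local_as}")
    lines.append(" address-family ipv4 unicast")
    for peer in clients:
        lines.append(f"  neighbor {peer} route-reflector-client")
    lines += [" exit-address-family", "!"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_rr_config(LOCAL_AS, ROUTER_ID, CLIENTS))
```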
-
Tuesday 22nd October 2024 23:25 GMT Anonymous Coward
Re: Semantics
"Yep, I think Demon did this as well."
Hmm, don't remember hearing that. They did start out using a pair of "large" (Unix running) PCs as their modem servers using multiport serial cards.
Actually I'm not sure when Demon started doing BGP, they didn't do so AFAIK when they first launched in (April? May?) 1992 using PIPEX as their connectivity provider, I'm not sure if they used BGP when they switched 12 months later (as PIPEX wouldn't renew the contract) to UKnet as their connectivity provider, but obviously they did use BGP 12 months after that when (AFAIK) UKnet wouldn't renew that contract and Demon paid for their T1 (1.544Mbps) connection to Sprintlink (Q2 1994).
Hmm, Wikipedia says "mid 1992" for PIPEX and "1 June 1992" for Demon; my recollection of when both services actually went live (rather than the preparation period leading up to that) is vague.
[I assume you know the stories of PIPEX's and Demon's launches and the arguments between them]
"tricks like Cisco disallowing BGP on Cat switches, even though they were perfectly capable running it."
We started out in early 1994 using a 3COM Netbuilder II with 3Com's 1st (and therefore minimal feature) BGP implementation, switching to the BSDI / gated combination a couple of years later. I remember gated being somewhat buggy at the time. The Netbuilder was a bit of overkill to run a single 64Kbps leased line but was the smallest 3Com kit to support BGP.
Back then in the early-mid 1990s ISPs tended to use Cisco AGS+ as their BGP routers.
-
Wednesday 23rd October 2024 09:24 GMT Jellied Eel
Re: Semantics
Hmm, don't remember hearing that. They did start out using a pair of "large" (Unix running) PCs as their modem servers using multiport serial cards.
Ah, those were the days. Plus the time some of those servers decided to relocate to a lower floor. Peter Holder of UKNet fame was collecting UK Internet history and ran some presentations at the UK NOF meetings.
Actually I'm not sure when Demon started doing BGP, they didn't do so AFAIK when they first launched in (April? May?) 1992 using PIPEX as their connectivity provider, I'm not sure if they used BGP when they switched 12 months later.
Yup. Back in the very early days, the Internet was mostly UUCP and academic, then starting the transition to TCP/IP and the Internet we know and love today. So mostly UUCP mail & Usenet, along with frequent dyno-rodding of mail servers to unblock queues. Then telnet-based services, and a mate getting all excited about something called a 'web browser'. Telnet in a GUI frame! Yey! Gopher in a GUI frame! Moar yey!
But I digress. BGP wasn't needed until ISPs could multi-home, which wasn't really an option at first.
[I assume you know the stories of PIPEX's and Demon's launches and the arguments between them]
Yep. But on the plus side, those arguments led to the creation of the LINX, which made multi-homing simpler, along with needing BGP for peering & transit. Plus the arguments continued, although this time over beers at LINX meetings. Looking back, it's pretty amazing how informal things were, like mailing Jon Postel and asking if we could have a Class B. Also, given there weren't many network engineers at the time and customers expected the 'net to be available 24/7, you got things like being drunk in charge of an autonomous system.
We started out in early 1994 using a 3COM Netbuilder II with 3Com's 1st (and therefore minimal feature) BGP implementation, switching to the BSDI / gated combination a couple of years later. I remember gated being somewhat buggy at the time. The Netbuilder was a bit of overkill to run a single 64Kbps leased line but was the smallest 3Com kit to support BGP.
Yep, we considered those, but ended up going the Ascend route for dial servers, with AGS for the core. I also remember 3COM having a bit of a quirk with their TCP/IP implementation... was that MTU size? I have flashbacks of support calls that often started with 'Are you using 3COM?', but I can't remember the fix to get those working.
Back then in the early-mid 1990s ISPs tended to use Cisco AGS+ as their BGP routers.
I used to have one of those as a coffee table until I donated it to a museum. Which means I can't take it apart to peek at its brain again. The early kit was very... basic, and often 'PC' based, ie Cisco 7x routers having PCI33/66 backplanes, which led to some fun remembering which slots were which, or which cards couldn't get the throughput. Then Junipers being basically an ATM switch with a 100Mbps Ethernet to its PC-based 'brain'. Which then led to design compromises, ie running things like Netflow could be useful, but not when it swamped the CPU.
That, I think, led to the slow death of the BFR. Vendors kept adding new features, but utilising those features often meant CPU, and if the CPU got swamped, things like BGP fell over. And the only way to get more CPU was a fork-lift upgrade to a bigger box. So then came pointed questions like: if we're just forwarding/switching traffic between Ethernets or VLANs with a bit of CPU, why spend $1m+ on a BFR and not $50k on decent servers and switches? Especially as services were becoming increasingly virtualised.
-
Wednesday 23rd October 2024 16:09 GMT Anonymous Coward
Re: Semantics
"But I digress. But BGP wasn't needed until ISPs could multi-home, which wasn't really an option at first.."
Not true regarding BGP not being "needed". PIPEX was using BGP from day one for their original UUnet/Alternet connection, I assume because it was a transit connection. My own ISP needed to use BGP from day one as we bought transit (not just bandwidth) from PIPEX for our PI address space, even though we were just "default routing" to PIPEX (as at the time either (a) there was no other UK-based transit provider, or (b) the only other was UKnet with a crazy price).
"Yep, we considered those, but ended up going the Ascend route for dial servers, with AGS for the core."
We went with the 3COM kit as it was far far cheaper than an AGS+.
You mention Ascend for dialup; I'm assuming you meant their PRI ISDN kit (whether with or without "modem cards") as I don't remember them doing analogue kit at all. We started out with a Telebit Netblazer, then moved to Livingston PM2s, later getting an Ascend for PRI ISDN - the Ascend stuff was horrible in my opinion, I remember having to run separate RADIUS servers (different software) just for the Ascend stuff as they relied upon Ascend-specific attributes (before the RADIUS spec added generic VSAs).
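For anyone spared that era, the sketch below packs a standard Vendor-Specific Attribute (RADIUS attribute 26, per RFC 2865), just to show the generic encoding that the Ascend-specific attributes predated. The vendor ID and values are made up for illustration:

```python
# Sketch of packing a RADIUS Vendor-Specific Attribute (type 26, RFC 2865).
# Vendor ID and the vendor attribute/value below are invented for illustration.
import struct

def pack_vsa(vendor_id: int, vendor_type: int, value: bytes) -> bytes:
    """Pack one VSA: Type (26), Length, Vendor-Id, then the vendor's own
    Type/Length/Value."""
    inner = struct.pack("!BB", vendor_type, 2 + len(value)) + value
    body = struct.pack("!I", vendor_id) + inner
    return struct.pack("!BB", 26, 2 + len(body)) + body

if __name__ == "__main__":
    # e.g. a made-up vendor attribute 7 carrying a 32-bit session limit
    vsa = pack_vsa(vendor_id=9999, vendor_type=7, value=struct.pack("!I", 42))
    print(vsa.hex(" "))
```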
-
Wednesday 23rd October 2024 17:57 GMT Jellied Eel
Re: Semantics
My own ISP needed to use BGP from day one as we bought transit (not just bandwidth) from PIPEX for our PI address space, even though we were just "default routing" to PIPEX (as at the time either (a) there was no other UK-based transit provider, or (b) the only other was UKnet with a crazy price).
Yeh, you didn't really need BGP for that given it was still a single-homed connection. But not necessarily a bad thing to do for future-proofing when other Pipex competitors started springing up. Might also be something Keith Mitchell insisted on as well given he had clue.
..later getting an Ascend for PRI ISDN - the Ascend stuff was horrible in my opinion, I remember having to run separate RADIUS servers (different software) just for the Ascend stuff as they relied upon Ascend-specific attributes (before the RADIUS spec added generic VSAs).
Yep, we share similar opinions. Like Ascends kinda ignoring 4.7kHz because everyone uses the same PSTN as the US... don't they? Or, when fully loaded, having a disturbing tendency to start smoking. Luckily we never managed to burn down Telehouse or any of Energis's PoPs.
-
Thursday 24th October 2024 15:36 GMT Anonymous Coward
Re: Semantics
"Yeh, you didn't really need BGP for that given it was still a single-homed connection."
My recollection is that RIPE back then *required* you to have an AS (and therefore intend to use BGP) in order to obtain a PI address space allocation. Therefore, in order to set up a somewhat independent ISP, you needed PI address space, which needed an AS and the use of BGP.
And from a PIPEX-contract perspective a transit connection was provided only via BGP (from memory).
-
This post has been deleted by its author
-
Friday 25th October 2024 09:09 GMT Jibberboy2000
But telcos are morons!!
I worked for a fair few years in FinTech, baffled at how backward, slow and reluctant to change banks and the financial businesses supporting them were. Then I found Telcos … wow the waste is mind blowing, no wonder the folk in the US have such bonkers monthly bills. Seeing how these companies operate behind the scenes is shocking. I worked on a project where I was sub-sub-contracted, and hearing that they needed 20Tb (yes, you read that right) of memory just to run one part of their overall billing system on a K8s cluster was jaw-dropping. No scaling was used, and they even had configs for "small" and "large" whose only real differences were longer timeouts and more memory. Now don't get me wrong, it's partly due to companies like Cognizant, Amdocs and TCS, who have their crappy software in there and have the telcos by the orbs, but still: grow a pair, telcos, and get some proper software in. It is over-complicated because the telcos have made it that way, driven by stupid offers and complex contracts marshalled by the marketing muppets!
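To give a flavour of the "small" vs "large" anti-pattern (the numbers here are invented, not the real billing system's values): the only knobs that change are memory and a timeout, with replicas pinned at one, so there's no actual scaling at all.

```python
# Illustration only: two deployment profiles whose sole differences are memory
# and a request timeout, with no horizontal scaling in between. All values are
# invented for the sake of the example.
import json

PROFILES = {
    "small": {"replicas": 1, "memory_gib": 16,  "request_timeout_s": 30},
    "large": {"replicas": 1, "memory_gib": 512, "request_timeout_s": 300},
}

def render_resources(profile: str) -> dict:
    """Return a container resources/env stanza for the chosen profile."""
    p = PROFILES[profile]
    return {
        "replicas": p["replicas"],                      # never scaled out
        "resources": {"requests": {"memory": f'{p["memory_gib"]}Gi'},
                      "limits": {"memory": f'{p["memory_gib"]}Gi'}},
        "env": [{"name": "REQUEST_TIMEOUT_SECONDS",
                 "value": str(p["request_timeout_s"])}],
    }

if __name__ == "__main__":
    for name in PROFILES:
        print(name, json.dumps(render_resources(name), indent=2))
```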
-
Sunday 27th October 2024 07:25 GMT Anonymous Coward
Re: But telcos are morons!!
The waste in the UK Altnet world was driven by the wall of investment money that headed their way for a few short years. TCS and their ilk milked that, but the main problem was the drive for RFS count, regardless of the likelihood of sign-up, or "stickiness".
Then the capital tap got turned off (thanks to Truss, but it was going to happen eventually anyway) and they hit the "oh, shit, we need to make a profit" moment.
Before they'd got their systems anywhere near properly scalable.
-
Sunday 27th October 2024 20:12 GMT Anonymous Coward
Re: But telcos are morons!!
"Then I found Telcos … wow the waste is mind blowing, no wonder the folk in the US have such bonkers monthly bills."
Having worked in Telco environments for many years I did indeed see lots of "waste" - I can think of at least 2 projects that were killed approximately *one* month before "Go Live" (so the OpCo had already paid for equipment and months of deployment & testing, etc). For one of those, I remember our Professional Services department was making "crazy money" (20-30 people on-site for extended periods) over a couple of years.
In another situation I remember a new Mobile OpCo being set up by (effectively) a marketing company. They knew nothing at all (and didn't want to know) about the technology; they simply paid a couple of vendors large amounts of money to set everything up and to have staff on-site 7x24, so that whenever any service alarm went off, all they cared about was which of the vendors they should scream at to fix it. They didn't mind paying large amounts of money to the vendors as it was still only a small percentage of the even larger amount of money they were making.
-