FTP is crusty and mostly dead, right? AWS just started supporting it anyway

AWS has just launched a cloudy FTP service. Yes, that FTP – the File Transfer Protocol that’s being burned out of Firefox and Chrome and dumped by the likes of Debian because it is insecure, crusty, and just not very fashionable. So why is AWS offering it as-a-service? The company’s explanation for the new service is that “ …

  1. Sam Liddicott

    Good. FTP doesn't require that either the file source or destination be the control client.

    Control client C can coordinate a transfer from server A to server B
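
    A minimal sketch of that three-corner arrangement - often called FXP - using Python's standard ftplib. The hostnames and credentials are placeholders, and many modern servers refuse PORT commands that point at a third party, so treat this as an illustration of the protocol rather than a recipe:

      import ftplib

      src = ftplib.FTP("server-a.example.com")   # holds the file
      dst = ftplib.FTP("server-b.example.com")   # receives the file
      src.login("user-a", "pass-a")
      dst.login("user-b", "pass-b")

      # Server A opens a passive data port; server B is told to connect
      # to that address instead of back to the client.
      host, port = ftplib.parse227(src.sendcmd("PASV"))
      dst.sendport(host, port)

      # Start the transfer on both ends; the data flows A -> B directly,
      # never touching the control client.
      dst.sendcmd("STOR bigfile.bin")   # B: accept and store incoming data
      src.sendcmd("RETR bigfile.bin")   # A: send the file
      dst.voidresp()                    # wait for B's "226 transfer complete"
      src.voidresp()                    # and A's
      src.quit()
      dst.quit()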

    1. Anonymous Coward

      Nor do you need to give SSH credentials to someone who doesn't need them. There is far less they can do with FTP-only ones.

    2. rmv

      FTP / SFTP

      Yeah, but most servers don't allow it these days. For some reason allowing people to get a server to send data to a random port on another server was being abused.

      It used to be good for bounce scans - proxying a port scan through the FTP server to scan IPs behind the firewall.

      Okay, there's less you can do directly with an FTP password - but anyone capturing network traffic can read it, and if it's also the user's login password then you're stuffed. And most FTP servers (proftpd, wu-ftpd and pure-ftpd for example) by default require that a user has a valid login shell.

      And that's not even considering all the goodies you often find by logging into an FTP server as "anonymous".

      The things you need to do to *properly* configure FTP are not significantly simpler than setting up SFTP and at least if you use SFTP you don't have to worry about people sniffing credentials.

      1. bombastic bob Silver badge
        Meh

        Re: FTP / SFTP

        FTP and SFTP give you a sort-of shell access, but you have to make sure you lock out SSH (for SFTP) when setting up, unless you WANT the user to have full shell access...

        You'd also want it to use a chroot'd environment to prevent access to the ENTIRE system. Last I checked, setting up FTP did this by default, i.e. no access above the FTP root. SFTP on the other hand is a bit more complicated from what I've seen (and gives you access to the entire file system by default).
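
        The usual fix on the SFTP side is a Match block in sshd_config - a minimal sketch, assuming a dedicated "sftponly" group and a /srv/sftp tree (both names invented here):

          # /etc/ssh/sshd_config
          Subsystem sftp internal-sftp

          Match Group sftponly
              ChrootDirectory /srv/sftp/%u
              ForceCommand internal-sftp
              AllowTcpForwarding no
              X11Forwarding no

        Note that sshd insists the chroot directory and every path above it be root-owned and not group- or world-writable, or it refuses the login.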

  2. andy 103

    It's used because it works

    The reason people still use it is simple. Because it just works. Without any fannying around.

    When I was working as a web developer I frequently heard this argument between developers who were for or against it - whether for deploying sites, or even for getting files from a local machine to a web server during development.

    One classic was when there was a really heated argument and one dev said to another, "FTP is outdated bullshit, you should have your files on GitHub then do X, Y, Z and deploy them over ssh". The other dev replied "yeah, but we have to get this live immediately and I've just done it in the time you've been ranting". Both had a valid point.

    1. Sgt_Oddball
      Coffee/keyboard

      Re: It's used because it works

      If I remember rightly, Notepad++ has a plug-in which allowed you to read and write files on FTP servers as if they were local storage. Always a useful bit for when you run everything live rather than pre-compile.

      That said, this was good when there's only a couple of you; any more and proper deploy procedures need to be in place for everyone's sanity (and for finger-pointing when the inevitable blamestorm happens. Any dev who's not cocked up either hasn't been working long enough, hard enough, or is too stubborn to notice). The trick is making sure you don't screw up badly.

      1. andy 103

        Re: It's used because it works

        @Sgt_Oddball absolutely right. I should have said this was years ago and it was literally a couple of developers in a small office. I wouldn't advocate using FTP for web development now even in that scenario. But...

        Although "proper" deployment processes have their advantages they often just create work elsewhere. Then try to claim that they are extremely efficient. The point that the FTP loving dev was making was that he could get something live pretty quickly because he didn't have to faff around with steps X, Y and Z that his opponent was suggesting. In my opinion that's one of the reasons FTP is still widely used.

        1. rg287

          Re: It's used because it works

          Absolutely - and not just in industry but in the consumer space.

          Good luck setting up github/x/y/z to deploy to some bog-standard cPanel or Plesk hosting (unless the admins have been uncommonly generous and enabled the appropriate plugins). I actually do do this with a Hugo site - commit files to a private repo, which triggers a GitHub Action to rebuild the site and then... FTPs the Public directory onto the hosting.
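
          The last leg of that pipeline - pushing Hugo's public/ output onto the hosting - is only a few lines of Python's ftplib. The host, credentials and remote directory below are made-up placeholders; a real job would use FTPS and pull secrets from the environment:

            import os
            import ftplib

            def upload_tree(ftp, local_dir, remote_dir):
                """Recursively mirror local_dir onto remote_dir over FTP."""
                for name in os.listdir(local_dir):
                    local_path = os.path.join(local_dir, name)
                    remote_path = remote_dir + "/" + name
                    if os.path.isdir(local_path):
                        try:
                            ftp.mkd(remote_path)          # may already exist
                        except ftplib.error_perm:
                            pass
                        upload_tree(ftp, local_path, remote_path)
                    else:
                        with open(local_path, "rb") as f:
                            ftp.storbinary("STOR " + remote_path, f)

            ftp = ftplib.FTP("ftp.example-host.test")
            ftp.login("siteuser", "sitepass")
            upload_tree(ftp, "public", "/httpdocs")
            ftp.quit()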

          But fundamentally, your options are login and use the web-based file manager or FTP(S) in. It's so easy your dad can use it.

          If you maintain your own servers and can configure your workflow just how you like it (or use cloud services with the latest workflow options) then great. For many consumers and indeed SMBs, FTP is the lowest/simplest common denominator, regardless of whether it's used directly or at the end of an automated testing/build pipeline.

          Along with RDP, which we're all told is prehistoric and "nobody uses RDP anymore" - oh yes they do!

        2. Joe Montana

          Re: It's used because it works

          What he did with FTP could have been done just as easily with thousands of other file copy methods too...

          SMB, NFS, SCP, RCP, RSYNC etc.

          1. Anonymous Coward

            @Joe Montana - Re: It's used because it works

            Actually TFTP rules them all.

          2. Zolko Silver badge

            Re: It's used because it works

            @Joe Montana: "What he did with FTP, could have been done just as easily with thousands of other file copy methods too... SMB, NFS, SCP, RCP, RSYNC etc."

            Maybe, or maybe not: we had programmed an FTP client in LabVIEW, so we could connect our experimental lab setup, running LabVIEW, to laptops running scientific languages like IDL or Matlab, live, over FTP. Still works like a charm.

            FTP connections are simple TCP sockets sending some text messages. Now, do that with SFTP!
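
            That point is easy to demonstrate - the whole control conversation is readable text over one TCP socket. A throwaway sketch (host and login are placeholders, and a real client would loop on recv rather than assume one read per reply):

              import socket

              s = socket.create_connection(("ftp.example.com", 21))

              def send(line):
                  s.sendall((line + "\r\n").encode("ascii"))
                  return s.recv(4096).decode("ascii", "replace")

              print(s.recv(4096).decode("ascii", "replace"))  # 220 greeting
              print(send("USER anonymous"))                   # 331 need password
              print(send("PASS guest@example.com"))           # 230 logged in
              print(send("SYST"))                             # 215 system type
              print(send("QUIT"))                             # 221 goodbye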

      2. Boothy

        Re: It's used because it works

        We used to use UltraEdit years ago, when I was in a support and development team (before DevOps became a thing!); it also had built-in FTP and SSH etc.

        At peak we were a team of 6, all sat in one bay, all on desktops (early 2000s, no laptops or option to WFH). Change control was basically "Anyone doing anything with file x on box y at the moment?". If no one said yes: "Okay, I'm deploying change 'z', should be live in two mins".

        Although I did eventually set up a cron job that automatically took hourly snapshots of all configurable files, only backing up those that changed, and created a diff log, so we could see in one place what changed, where and when.

        Was also quite handy being able to have things like log files open in a tab on your local machine, without having to open a terminal.
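
        That hourly snapshot-and-diff job is simple enough to sketch in Python - the watched paths below are invented for illustration, and cron would run it on the hour:

          import difflib
          import filecmp
          import os
          import shutil
          import time

          WATCHED = ["/etc/httpd/httpd.conf", "/opt/app/app.cfg"]  # example paths
          SNAPDIR = "/var/snapshots"
          LOGFILE = os.path.join(SNAPDIR, "changes.log")

          stamp = time.strftime("%Y-%m-%d %H:%M")
          for path in WATCHED:
              snap = os.path.join(SNAPDIR, path.lstrip("/").replace("/", "_"))
              if os.path.exists(snap) and filecmp.cmp(path, snap, shallow=False):
                  continue                                  # unchanged since last run
              old = open(snap).readlines() if os.path.exists(snap) else []
              new = open(path).readlines()
              with open(LOGFILE, "a") as log:
                  log.write("--- %s changed at %s ---\n" % (path, stamp))
                  log.writelines(difflib.unified_diff(old, new, "previous", "current"))
              shutil.copy2(path, snap)                      # becomes the new baseline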

      3. P. Lee

        Re: It's used because it works

        I used to use it with OpenOffice.

    2. Anonymous Coward
      Thumb Up

      Re: It's used because it works

      Absolutely. But many comments are missing the big picture. This isn't about developers moving code, it's about companies moving data. When you've set up file transfer systems that work and have worked for years (or even decades), you don't mess with them.

      1. rmacd

        Re: It's used because it works

        This is the key point. I remember some time ago being introduced to a company's SAP infrastructure and all the contracted devs / SAP support folk would speak, sotto voce, of these "interfaces" that all had special codes: "interface 17" and the likes. It wasn't for me to know anything more, I was just to know it was an "interface" and its number was "17".

        Fast forward, and I figured out that an "interface" was just a batch process that brought some files in via FTP. As became apparent to me, these were decrepit little cesspools of filth, where files would be dropped in a very specific structure with a very specific set of filenames before the next application could come along and read them off the box.

        It's probably still running in exactly the same way - I dare you to "improve" it.

  3. eqkosch
    Flame

    Psst hey you wanna try some of this?

    Clearly a gateway drug that will lead to much harder substances like K8.

    1. Roland6 Silver badge

      Re: Psst hey you wanna try some of this?

      K8? Don't you mean KA9Q?

  4. man_iii

    FISH with KDE3 and konqueror KIOslave!

    I used to run RHEL 3.x as a desktop back in the day, and one of the things I did right was using Konqueror and fish:// to access all the boxen.

  5. Joe Drunk

    Update it not kill it

    Would be nice if FTP were updated to support modern enterprise file transfer requirements yet stayed easy enough to set up on any TCP/IP-enabled device. I use FTP across my LAN to transfer files between all my Linux, Android and Windows devices. A breeze to set up, and I can turn the servers off and on easily.
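
    The "breeze to set up" claim holds: with the third-party pyftpdlib package (pip install pyftpdlib), a serviceable LAN server is a dozen lines of Python. The user, password and shared directory below are placeholders:

      from pyftpdlib.authorizers import DummyAuthorizer
      from pyftpdlib.handlers import FTPHandler
      from pyftpdlib.servers import FTPServer

      authorizer = DummyAuthorizer()
      # perm string grants listing, downloads, uploads, renames and deletes
      authorizer.add_user("user", "password", "/srv/share", perm="elradfmw")

      handler = FTPHandler
      handler.authorizer = authorizer

      # Listen on all interfaces; 2121 avoids needing root for port 21.
      FTPServer(("0.0.0.0", 2121), handler).serve_forever()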

    1. Joe Montana

      Re: Update it not kill it

      It does: there is FTPS, which is FTP over SSL...

      The problem is NAT.

      FTP uses separate ports for data transfer and control, and the benefit here is that you can remotely initiate transfers between 2 servers without the data having to touch your client (especially useful when you have slow or asymmetric connections)...

      But this doesn't play well with firewalls or NAT; the firewall doesn't know which ports to open or which address to translate them to. There are kludges for plain FTP where the firewall will watch for FTP control traffic and intercept the requests, but this won't work if the control channel is encrypted.

      There are also techniques like bounce scanning, where you can make an ftp server connect to arbitrary host/port combinations as a slow form of port scanning, so you can see what's reachable from the perspective of the FTP server.
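
      For the curious, a bounce probe is only a few lines with Python's ftplib - the addresses here are placeholders, and most servers written this century refuse PORT commands aimed at third parties:

        import ftplib

        ftp = ftplib.FTP("ftp.example.com")
        ftp.login("anonymous", "guest@example.com")

        def probe(target_ip, target_port):
            """Ask the server to open a data connection to target:port."""
            try:
                ftp.sendport(target_ip, target_port)  # PORT aimed at the target
                ftp.sendcmd("LIST")                   # server tries to connect
                ftp.voidresp()                        # drain the 226 on success
                return True                           # connected: port reachable
            except (ftplib.error_temp, ftplib.error_perm):
                return False                          # e.g. 425: can't connect

        for port in (22, 80, 443):
            print(port, "open" if probe("10.0.0.5", port) else "closed/filtered")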

      1. Joe Drunk

        Re: Update it not kill it

        I have done SFTP (FTPS' cousin) from my desktop PC on my home internet to my phone's 4G connection, so I am more than familiar with port forwarding/NAT. My original point was for an FTP/FTPS 2.0 which doesn't require static NAT/port forwarding and plays nicely with firewalls. I like FTP's simplicity of setup - username/password, directories, rwdx access. When servicing computers for friends and family it's FTP or removable media for transferring data from my laptop to theirs. I don't have to install anything on their computers; the server and client both run from a flash drive I keep with all kinds of portable apps.

      2. Anonymous Coward

        Re: Update it not kill it

        Have you ever tried VoIP? Same issues. You'll need to open the SIP and RTP ports.

        And even when you try to use HTTP only, it has to multiplex that single connection via websockets or the like to achieve anything useful - and you don't really know any more what travels on port 80, and even more so 443.

        NAT is going away as IPv6 becomes more and more necessary, with IPv4 addresses getting scarcer and scarcer. On firewalls you can configure FTP access on a range of ports for the required host, as small a range as you like - though of course that caps the number of concurrent connections. A good IDS will help keep an eye on them.

        Of course a lazy admin will hope to manage just 80 and 443 - but see above: today almost anything can go through them, so thinking you're safe just because you only have two ports open is quite naive.

        1. Roland6 Silver badge

          Re: Update it not kill it

          >NAT is going away as IPv6 becomes more and more needed

          You are joking!

          From having spent too much time in recent weeks playing around with UK 4G networks, and specifically setting up reliable inbound connections (IPv4 and v6) to a host on the 4G network, we will be needing NAT to get over the obstacles the mobile operators have placed in our way.

    2. Roland6 Silver badge

      Re: Update it not kill it

      >Would be nice if FTP would be updated to support modern enterprise file transfer requirements

      Well, there is NetBLT, which is much better than FTP for big files (and not-so-good comms links).

      But once you raise the question of updating TCP/IP protocols, you start asking questions about packet sizes and so on - i.e. whether the design assumptions of a protocol suite from an era when most transfers were ASCII text and "big" meant a few MB still hold today, when we typically transfer GBs of binary data.

    3. bombastic bob Silver badge
      Devil

      Re: Update it not kill it

      I use rsync for that kind of thing

  6. Pascal Monett Silver badge

    "insecure, crusty, and just not very fashionable"

    I have written scripts for my customers that use FTP internally several times a day. Some of them were written six years ago, some were written last week. They are only stopped when the functionality they support is deprecated by business choices.

    I'm pretty sure I'll be writing scripts with FTP routines this year and next year as well.

    FTP is far from dead. It may not be fashionable, but fashion has never dictated my working tools.
