Explained: The thinking behind the 32GB Windows Format limit on FAT32

There is at last a definitive answer to the question of why the Windows user interface slapped a 32GB limit on the formatting of FAT32 volumes and it's "because I said so," according to the engineer responsible. While many welcomed 2021 within the walls of their own home, retired Microsoft engineer Dave Plummer marked the end …

  1. bpfh
    Headmaster

    Future proofing size constraints

    I remember a discussion in the late 90s with a developer who was designing gaming software for casinos, and I remember listening incredulously as he told me that within 20 years we would have 2 TB of storage on something the size of a fingernail.

    Whelp, last year (dammit, year before last, we are in 2021 now) SanDisk came out with a 1 terabyte microSD card. I suppose two stacked on top of each other would meet that 2 TB prediction given to me in 1997, yet back when high tech was a 2 GB hard drive and 16 meg of RAM, 2 TB on a postage stamp was Star Trek science fiction.

    So I guess we should channel both Moore and his law, and Bill Gates and his "640K is enough for anyone", and not try to limit ourselves - storage is always going to get bigger, and it's going to arrive faster than we think. Flexibility is key, and if you must have absolutes, make sure they are actual technical ones - like 32 bits always being 2 or 4 billion depending on signing - and not arbitrary "let's set this limit at 10 million and call it a day". It will trip you up in database ID sequences just as badly as in file system design one day in the - possibly not so distant - future!
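
    A rough sketch of those two ceilings (Python, purely illustrative - the "ID sequence" idea is hypothetical):

      # Where a 32-bit ID sequence tops out, depending on signedness.
      SIGNED_MAX = 2**31 - 1      # 2,147,483,647 - the "2 billion" case
      UNSIGNED_MAX = 2**32 - 1    # 4,294,967,295 - the "4 billion" case
      print(f"signed cap:   {SIGNED_MAX:,}")
      print(f"unsigned cap: {UNSIGNED_MAX:,}")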

    1. Flak

      Re: Future proofing size constraints

      Adding to your Gordon Moore and Bill Gates, a certain Thomas Watson, president of IBM, who declared in 1943: "I think there is a world market for maybe five computers."

      Then again, hindsight is a wonderful thing.

      Thankfully there are alternatives to FAT32 which overcome the limitation.

      1. Martin Howe
        Unhappy

        Re: Future proofing size constraints

        Don't forget Stephen Morse with '1MB is a lot for 1976'; indeed it was, but not making 32-bit segment registers (even with the lower 4 bits forced to zero to begin with) a priority? Or making SI and DI 32 bits, the way some 8-bit CPUs had a couple of 16-bit index registers? That decision alone has cost us a hell of a lot of progress :(

        1. Nick Ryan Silver badge

          Re: Future proofing size constraints

          From memory this was also due to die size, possibly complexity too, and therefore also a significant cost saving.

      2. keith_w

        Re: Future proofing size constraints

        Leave us not forget Ken Olsen (DEC) who is said to have asked "Why would anyone want a computer in their house?"

        Also, I don't think it was Bill Gates who said the 640K quote, it was an IBM engineer working on the first IBM PCs.

        Also, I had assumed that the Format GUI was a simple front end for the DOS Format command.

        1. Version 1.0 Silver badge
          Happy

          Re: Future proofing size constraints

          I have a desktop-sized PDP-11/23 made by Plessey that runs RSX-11M. I'd have to fire it up to check, but I think it's got a 36MB hard disk drive. I used to run it in the living room - it beat the crap out of MS-DOS when I bought it.

          1. Missing Semicolon Silver badge
            Happy

            Re: Future proofing size constraints

            @Version1.0 As a room heater, or a computer?

            1. phuzz Silver badge
              Happy

              Re: Future proofing size constraints

              You know how well they built those early computers; they mean it could literally be used to beat the crap out of something without getting a scratch.

    2. Doctor Syntax Silver badge

      Re: Future proofing size constraints

      And it's always going to be tricky. Use bigger block sizes and you end up with wasted space for small files as described. Use small block sizes and increasing levels of Unix-style indirection and you end up with the performance hit of multiple indirection. It's never going to be easy, and the next use case can shatter your ingenious solution.

      1. John Brown (no body) Silver badge

        Re: Future proofing size constraints

        "And it's always going to be tricky. Use bigger block sizes and you end up with wasted space for small files as described."

        And especially not forgetting that back then, a "program" often came with many, many tiny "support" files not wrapped up in DLLs, e.g. multiple .ICO files. There was often an enormous amount of wasted space in the cluster "slack space". Windows itself had thousands of small support files, wasting significant space on large drives. I remember spending many hours optimising my own system at home, trying to get rid of as much of the smaller cruft as possible and trying to balance cluster size against speed, usability and waste.
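
        As a rough illustration (toy numbers, not measured ones), the slack from a pile of tiny files adds up quickly:

          # Toy slack-space estimate: each small file wastes whatever is left of its last cluster.
          cluster = 32 * 1024                      # 32KB clusters, as on a big FAT32 volume
          files = [300, 766, 1200, 4500, 900]      # made-up small-file sizes in bytes
          waste = sum(cluster - (size % cluster or cluster) for size in files)
          print(f"{waste // 1024} KB lost to slack across just {len(files)} files")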

      2. druck Silver badge

        Re: Future proofing size constraints

        RISC OS's Filecore format mitigates wasting an entire cluster (the Large File Allocation Unit, in Filecore terms) on a small file by allowing several small files to share the cluster with their parent directory.

    3. Alan Brown Silver badge

      Re: Future proofing size constraints

      I remember, as a callow youth in 1982, making similar calculations that we'd have systems with 1-4GB of RAM in 2000, and thinking it was impossibly huge.

      1. Tom 7

        Re: Future proofing size constraints

        I was doing chips for 400Mb transatlantic fibre-optic cables in the mid-80s. Never thought I'd have about 15 times that capacity in my own house for the kids' homework and games. I can get twice that during the day to the rest of the world, and haven't used a phone yet this year!

    4. Tom 7

      Re: Future proofing size constraints

      Nowadays it would probably be quite easy - have a look at how a few million disks are used and extrapolate from that. Back then we were veering all over the place and had no tools to collect the data required. I still remember saying the internet would never catch on at home as you really needed a network card and 10Mbps to make it faster than ordering a tape or some floppies over the phone. But as a chip designer I underestimated the power of porn in the hands of management!

    5. davepl

      Re: Future proofing size constraints

      Using this idealized approach, early MS-DOS wouldn't have had 32M limits either. It'd be unlimited, with IIDs for sectors and so on, unlimited length filenames, no file size limits. And it should finish booting in about six weeks from tomorrow... to an unusably slow state.

      Sometimes limits aren't failures of foresight; they are CONSTRAINTs arbitrarily and artificially put in place by designers or implementers in order to make an unbounded problem more tenable, or to make it possible to solve given the practical limits of the typical consumer's hardware.

      And if all that failed, you could always still pick NTFS, which had far fewer constraints, at the cost of a performance and RAM penalty.

    6. Roland6 Silver badge

      Re: Future proofing size constraints

      In the mid-1990s I remember attending a presentation by Peter Cochrane, where he illustrated a point by talking about a shirt-pocket 1PB storage device and a similarly high-performance wireless network connection capable of utilising that storage. Naturally, we were all at a loss to come up with any idea as to what you could use both that storage and connectivity for, other than something akin to Google Glass with continuous record enabled.

  2. Pascal Monett Silver badge

    "the age-old problem of the temporary solution becoming de-facto permanent"

    Hardly surprising. Everything in computing has been a continuous discovery, and why change what works?

    That is why file systems have evolved under different names, and will continue to do so. NTFS is much better than FAT32, but FAT32 has its uses.

    Not a mistake, a demonstration that computing has evolved and will continue to do so.

    You can't be perfect the first time around.

    1. AMBxx Silver badge
      Joke

      Re: "the age-old problem of the temporary solution becoming de-facto permanent"

      >>> You can't be perfect the first time around.

      Speak for yourself!!

    2. Anonymous Coward
      Anonymous Coward

      Re: "the age-old problem of the temporary solution becoming de-facto permanent"

      "Not a mistake, a demonstration that computing has evolved ans will continue to do so.

      Until the de facto changes, enter Electron.

      While all my opinion, there's a lot of backwards going on right now in computing. Cloud, flat wasteful interfaces, AIO runtimes, pop-ups Captcha, rental software, OS spying data gathering... if you stop to take a look, you might see the opposite of progression. One thing is for sure, faster computers are almost solely developed to build even faster computers to run all of the above, as if the human element is entirely ignored.

      1. Lomax
        Megaphone

        Re: "the age-old problem of the temporary solution becoming de-facto permanent"

        > there's a lot of backwards going on right now in computing

        I was just thinking about how barely anyone I know uses anything that looks like a computer any more; the programmers have grown up to middle management (and have forgotten all about code), the creatives have become YouTubers (and have forgotten all about creating), and most others work in hospitality (as in AirBnB), sales (as in B&Q), warehousing (as in Amazon) or driving (as in Uber). The black mirror they all carry in their pockets, and which studiously records their every breath, is what they use for anything which would otherwise require a computer. They give me strange looks when I pull out my laptop to check my emails ("why don't you use WhatsApp!?"). Also: none of them chat, email or write any more - the UI is optimised for voice & video and "typing" is reduced to the occasional "LOL" or smiley. Talk about going backwards - the technology which once promised to set us all free instead turned us into mindless serfs. The most brilliant tool ever invented is no longer used to make, only to consume.

        1. Lomax

          Re: "the age-old problem of the temporary solution becoming de-facto permanent"

          Side note: Until the End of the World, anyone?

    3. Arthur the cat Silver badge

      Re: "the age-old problem of the temporary solution becoming de-facto permanent"

      You can't be perfect the first time around.

      And then you hit second system syndrome.

  3. chivo243 Silver badge
    Windows

    What about FAT file transfer?

    3.96GB? Where did that limitation come from? Is this part of the bigger "Because I said so!" movement?

    1. Dan 55 Silver badge

      Re: What about FAT file transfer?

      File size is stored in 32 bits, giving you a 4 GiB limit.
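
      In other words (a back-of-the-envelope check, nothing more):

        # FAT directory entries hold the file size in a 32-bit field,
        # so the biggest representable file is 2**32 - 1 bytes.
        max_bytes = 2**32 - 1
        print(f"{max_bytes:,} bytes")             # 4,294,967,295
        print(round(max_bytes / 2**30, 3))        # 4.0 - just under 4 GiB
        print(round(max_bytes / 10**9, 2))        # 4.29 decimal GB - still short of a 4.7GB DVD image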

      1. John Brown (no body) Silver badge

        Re: What about FAT file transfer?

        And it hit most people when DVDs arrived. Building a 4.7GB ISO image wasn't possible. The only option was to "build and burn" and hope your computer didn't decide to "pause" while burning the disc. (Yes, there were "burn-proof" options and other methods, but most people probably had a few failures before "discovering" how to do it successfully.)

        1. Anonymous Coward
          Anonymous Coward

          Re: What about FAT file transfer?

          "And hit most people when DVDs arrived. Building a 4.7GB ISO image wasn't possible."

          Further aggrevation when the size of a Windows install .iso itself became larger than that.

          1. Woodnag

            Whaddabout CDs?

            Must have been interesting authoring a 650MB CD in the early 1980s...

            1. DuncanLarge Silver badge

              Re: Whaddabout CDs?

              > Must have been interesting authoring a 650MB CD in the early 1980s...

              You didn't. Orange Book didn't come out till 1990, and even though CD-R-like burners existed in 1988, they were washing-machine sized and cost $35,000.

              It wasn't till 1995 that CD burners costing less than $1,000 came about, and by then we were starting to get Win 95.

              As far as authoring a CD-DA disc goes, well, you stored the data on video tape as digital signals, much like barcodes. As the machines were used in both NTSC and PAL regions, it was found that a sample rate of 44,100Hz would work with both standards. This is why CD audio is 44.1kHz and not 48kHz as originally intended.

              Didn't matter: 44,100Hz, 16-bit still allowed perfect reproduction of the original signal. Shame about the audiophiles who think hi-rez is worth it (beyond getting an unabused version of a recording - and they won't give you that on CD because they want to cheat you).

              1. Archivist

                Re: Whaddabout CDs?

                Upvoted as mostly correct. The numbers for NTSC get you 44.056kHz, not 44.1kHz, because the field frequency is 59.94Hz, not 60Hz, so that's not the reason.

                44.1kHz was chosen as a compromise between frequency response (Nyquist) and maximising recording length on a CD.
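
                For anyone curious, the usual PCM-adaptor arithmetic behind both figures (textbook line counts - treat them as an assumption) is three stereo samples per usable video line:

                  # 3 samples per line x usable lines per field x field rate
                  print(3 * 294 * 50)      # PAL:  44100 Hz
                  print(3 * 245 * 60)      # NTSC at a nominal 60 Hz field rate: 44100 Hz
                  print(3 * 245 * 59.94)   # NTSC at the actual 59.94 Hz: ~44056 Hz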

                1. Woodnag

                  Re: Whaddabout CDs?

                  Thanks to both of you!

          2. tcmonkey

            Re: What about FAT file transfer?

            I still get occasional pains from the install.wim file being larger than a single file on FAT32 can be. Not often, mind you, but it does happen.

        2. Anonymous Coward
          Anonymous Coward

          Re: What about FAT file transfer?

          Yes, it was a huge pain for those attempting to rip a DVD on their lower-end system which couldn't run NT...

        3. DuncanLarge Silver badge

          Re: What about FAT file transfer?

          Yes, I never thought about that.

          These days burning while mastering is the norm, even for BD-R.

  4. Anonymous Coward
    Anonymous Coward

    MS, how about recognising EXT,HFS+ formats so they don't result in the format dialog box? FFS, 2021.

    I have to use 4 different OS's as part of my job. Yes, you can use exFAT and FAT32 across these, but it's about time Windows 10 20H2 could recognise these 'unrecognised file systems' and at least exit showing the format type, instead of offering a very dated modal format dialog box, which can accidentally result in formatting the bloody thing (USB drive) if you're not careful, as the focus is left dangerously on 'Format disk' as the default choice.

    Come on Microsoft, sort it - you have an underlying Linux shell in Windows 10 20H2, FFS; this has gone beyond a joke. I get it, it's aimed at keeping people within the Windows ecosystem, but it's a bloody annoying modal dialog box, and there has to be a better way of dealing with unrecognised formats.

    It's such poor interface design for 2021 - we all have to use different OS's as part of our day jobs.

    1. Nick Ryan Silver badge

      Re: MS, how about recognising EXT,HFS+ formats so it doesn't result in the format dialog box....

      That's standard Microsoft behaviour: support only their own technologies as far as possible; other systems can make themselves compatible with Microsoft's proprietary standards (often nothing standard about them, either).

    2. Stoneshop

      Re: MS, how about recognising EXT,HFS+ formats

      I have to use 4 different OS's as part of my job. Yes, you can use exFAT and FAT32 across these, but it's about time Windows 10 20H2 could recognise these 'unrecognised file systems'

      W7 throws up a BSOD if you offer it an ODS-2 formatted USB drive; haven't tried with W10 yet and at the moment I don't have that stick at hand (it's at the office, haven't been there since March and I doubt I'll be going there any time soon)

    3. doublelayer Silver badge

      Re: MS, how about recognising EXT,HFS+ formats so it doesn't result in the format dialog box

      For HFS+, you can get a driver from Apple's Boot Camp drivers collection which gives Windows read-only support for it. I haven't used it in a while, but I wouldn't be surprised to hear that it supports APFS now as well.

    4. J. Cook Silver badge

      Re: MS, how about recognising EXT,HFS+ formats... FFS 2021.

      .. that would require them to figure out how to support multiple filesystem drivers.

      And while we are at it: in addition to EXT and HFS+, let's throw in VMFS 5.0, ZFS, WAFL, and for the hell of it, native NFS support, like other grown-up operating systems are capable of. :)

      1. Stoneshop

        Re: MS, how about recognising EXT,HFS+ formats... FFS 2021.

        ZFS is not just a file system; it's also a RAID-like storage subsystem that has a few licensing tentacles on top, so I don't see that happening any time soon (just like the others you mention, only even less soon than those).

        Native NFS would probably be the first as it's available already, only not as an option in a default install.

  5. Ian Bush
    Headmaster

    "temporary solution becoming de-facto permanent"

    Like the increasingly common but incorrect hyphen in de facto, ab initio and similar?

    1. NightFox

      Re: "temporary solution becoming de-facto permanent"

      sim-ilar?

    2. Graham Dawson Silver badge
      Coat

      Re: "temporary solution becoming de-facto permanent"

      Ah yes, the hyphen ex nihilo. (actually I pulled it out of my pocket)

  6. David Roberts
    Paris Hilton

    Command line?

    My probably dumb assumption is that the dialogue box is merely a front end to save the dumb user from having to use the command line and remember the options.

    If so, it shouldn't (hah!) be a major development to update the dialogue box to allow a larger cluster size and larger volume.

    Unless I am missing something obvious.

    1. MatthewSt

      Re: Command line?

      I would imagine that it has to go through many layers of planning, because both the accessibility and the internationalisation would have to be updated, and any documentation for it would also need changing. Not impossible by any stretch, but I doubt it could be done with less than a person-week of resources, and they're too busy removing useful functionality from Control Panel to update legacy UI.

  7. Jaspa

    Weekend viewing

    Happened across this and the reasoning behind task mangler this weekend.

    I'd recommend his "tweaking" the source code for Tempest as well.

  8. Potemkine! Silver badge

    "Def-Pro"

    "Definitively Provisional"

    Reminds me of "provisional buildings" we were in at school, which were there for at least 20 years...

    1. chivo243 Silver badge

      Re: "Def-Pro"

      It's even better when the "provisional building" is deemed too small, and an extension is added on!

    2. Blofeld's Cat

      Re: "Def-Pro"

      "... provisional buildings ..."

      When Manchester Central railway station opened in 1880 it had a temporary wooden building housing the ticket offices etc.

      That temporary building was still in use when the station closed in 1969.

      1. phuzz Silver badge

        Re: "Def-Pro"

        Or indeed the Pacer. Intended in 1980 as a short-term (and, more importantly, cheap) train with a design life of only twenty years, and scheduled to be withdrawn in 2019, there are still some in service today.

    3. Alan Brown Silver badge

      Re: "Def-Pro"

      The primary school I was at had a number of such buildings added in the 1970s, intended to be removed by 1980. Several were dropped on playing areas (tennis courts, etc.).

      Come 2021, they're all still there - with extensions added.

      I just hope the coal-burning pot-belly stoves have been removed. Then again, the buildings were bloody cold in winter even with those.

      1. roytrubshaw
        Paris Hilton

        Re: "Def-Pro"

        I see your 50-(or so)-year-old buildings and raise you the old gym at my secondary school, built as a temporary expedient in the 1940s and - apparently - still there today, if Google Maps is anything to go by.

    4. NightFox

      Re: "Def-Pro"

      Was it just at my school that these temporary buildings were called 'Terrapins' or was that terminology more widespread? I guess that was the name of a brand or model.

      1. Peter Gathercole Silver badge

        Re: "Def-Pro"

        Unless we were both at school in Surrey, it was a common term. I think it was a type of prefab (or should that be pre-fab, or pre. fab., taking into account the previous comments) construction specifically for school classrooms.

        A room big enough for 30 students and a lobby area to hang coats, possibly with a walk-in cupboard. Two sides, two ends, an internal wall and two halves of a peaked roof. Delivered with heating and electric wiring already in place, and set on piles of bricks or breeze blocks. We had 8 delivered and built at my school in the space of two days in the middle of the 1970s. And I remember one in the late 1960s as well.

        1. irrelevant

          Re: "Def-Pro"

          Sounds like the temporary classrooms I started out in at my primary school in 1972. We moved out into a new purpose-designed open-plan school on the other side of the field a couple of years later. A new school took over the buildings, which my sister attended for a time, and a quick look on Street View suggests the same buildings are still there, 50 years on, albeit with extensions at the back, and now operating as a childcare facility. I hope they've moved on from the outside loos in the small outhouse to the side of the playground, though!

      2. Ian Entwistle

        Re: "Def-Pro"

        It was the "cardboard castle" at my school up in east Lancs.

    5. Tom 7

      Re: "Def-Pro"

      My school had Nissen huts that were built during WWII and still in use into the late 70s, when I left and never went back. They were actually more watertight than the late-60s-built secondary modern in town. But they took out the stoves, so it was seriously cold in winter for those in short trousers!

    6. Anonymous Coward
      Facepalm

      Re: "Def-Pro"

      Here across the pond, the pervasive school-crowding solution is trailers, which get towed in but never towed out.

      But my favorite example is our Smithsonian National Air and Space Museum, which was housed in a World War One (not Two) Army-surplus Quonset hut. And there it remained for 55 years.

  9. Anonymous Coward
    Anonymous Coward

    UID

    I work for yet another big startup with a 32-bit user ID throughout the system. Yes, tell me that "2 billion users would be a nice problem to have" while I browse through arcane code that grants permanent user IDs to non-paying customers with burner e-mail accounts.

    1. bpfh

      Re: UID

      2^32 covers 0 to roughly 4 billion - unless you are using signed integers, whereby you can count from negative 2 billion subscriptions to positive 2 billion subscriptions. It doesn't solve the overall problem, but it may push it off a bit.
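
      A toy illustration of the signed case (Python standing in for a real fixed-width integer, which would wrap silently):

        def wrap_i32(n):
            n &= 0xFFFFFFFF                 # keep only the low 32 bits
            return n - 2**32 if n >= 2**31 else n

        print(wrap_i32(2**31 - 1))          #  2147483647: the last positive signed ID
        print(wrap_i32(2**31))              # -2147483648: the next "subscription"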

      1. swm

        Re: UID

        "from negative 2 billion subscriptions to positive 2 billion subscriptions"

        A negative subscription is an interesting concept.

  10. Steve Channell
    Meh

    FAT fail

    The File Allocation Table is a terrible design for SSD devices because it requires that the first storage blocks (where the table is stored) are re-written over and over again, reducing the life of the device, and the table is a fixed size irrespective of the size or number of the files... but it has the single advantage of being simple (and standard - copied from CP/M). Things would be different had MS licensed the NTFS format, or had the Unix i-node file-system been open source... but we are where we are.
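
    To make the point concrete, a grossly simplified toy FAT (nothing like the real on-disk layout): every file you create or grow rewrites links in the same small table at the front of the volume.

      FREE, EOC = 0, -1
      fat = [FREE] * 16                 # toy table; real FAT32 keeps its 32-bit entries up front

      def append_cluster(last):
          new = fat.index(FREE)         # claim the first free cluster...
          fat[new] = EOC
          if last is not None:
              fat[last] = new           # ...and rewrite the previous link in the table
          return new

      c = append_cluster(None)          # create a file
      for _ in range(3):                # grow it: three more writes, all to the same table
          c = append_cluster(c)
      print(fat[:6])                    # [1, 2, 3, -1, 0, 0] - all the churn lands in one place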

    1. roytrubshaw
      Headmaster

      Re: FAT fail

      "File Allocation Table is terrible design for SSD devices because ...."

      It's a terrible design for discs too, causing unnecessary head movements, undue wear of the first few data tracks and so on and so forth.

      It's good for indexed tapes as you read the file-information first so you know where everything is.

      DECTape anyone? (I still have one or two in a box somewhere.)

      1. Peter Gathercole Silver badge

        Re: FAT fail

        I don't know that much about DECTape, other than it was a small reel block access tape device frequently mounted in the rack of a PDP-11, but I did use a DECTape II cartridge system, which was like a small QIC cartridge tape system.

        The one I used was attached to a PDP-11/34 using an RS422 or RS423 (can't remember which) synchronous serial interface, so it was too slow and too small to make it good for anything at all!

      2. J. Cook Silver badge

        Re: FAT fail

        Until he got out of the retail and home-business computer repair industry, a buddy of mine did hard drive recoveries. It seems that a lot of WD Blacks had a failure mode where the start of the drive became unreadable after a certain amount of time; he was able to recover them by having the recovery tool start at the end of the drive and work backwards.

    2. Peter Gathercole Silver badge

      Re: FAT fail

      It would not have helped to have the original UNIX UFS file system that was in use when DOS came out. In that, the first portion of the disk was allocated to a superblock and the i-node table (fixed at formatting time), both of which would have been regularly written to.

      It was only with the advent of the Berkeley Fast File System (and the derivative filesystems) that the i-nodes and block allocation maps were scattered throughout the filesystem. And even then, once they were in place, they did not move, and the block allocation maps would be written to very frequently.

      Most enterprise-grade SSDs do automatic block reallocation and renumbering as part of their wear-levelling algorithms to overcome this. And USB or SD storage devices are not really suited to high-change-volume filesystems, so you're risking quite a lot by using one for them.

    3. Norman Nescio Silver badge

      Re: FAT fail

      The File Allocation Table is a terrible design for SSD devices because it requires that the first storage blocks (where the table is stored) are re-written over and over again, reducing the life of the device, and the table is a fixed size irrespective of the size or number of the files... but it has the single advantage of being simple (and standard - copied from CP/M).

      Well, it would be if SSDs did not have wear-levelling algorithms which effectively put a Copy-on-write layer underneath whichever filesystem is layered on top. Yes, writing again and again to raw flash is a bad idea, but that was soon recognised. Essentially, every write to the FAT moves that block elsewhere on the SSD, which has pretty much constant seek times for any block read. Of course, write amplification then causes other problems, which TRIMming partially mitigates.

      NN
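
      Conceptually something like this toy remap (nowhere near a real flash translation layer): "the same" FAT block lands on a fresh physical block each time it is rewritten, so the wear gets spread around.

        mapping, erase_counts, next_free = {}, [0] * 8, 0

        def write(logical_block, data):
            global next_free
            mapping[logical_block] = next_free      # remap rather than overwrite in place
            erase_counts[next_free] += 1
            next_free = (next_free + 1) % len(erase_counts)

        for _ in range(8):
            write(0, b"FAT update")                 # hammer logical block 0 over and over...
        print(erase_counts)                         # [1, 1, 1, 1, 1, 1, 1, 1] - ...wear spread evenly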

  11. Chris Evans

    More size equals more wasted time!

    I'm not worried about wasting space but about wasting time. When you've got lots of spare capacity you don't archive off and delete things. The file server for my small business has >100,000 files on it, and whilst I can quickly find things that fit into well-defined categories, locating some files can be rather time-consuming.

    1. eldakka

      Re: More size equals more wasted time!

      Your data requirements will expand to fill the available space.

  12. js6898

    Would love to hear a similar story about whoever decided that the default on Windows would be to hide the file extension...

    1. bpfh
      Mushroom

      Being more like the Mac...

      Use the extension to identify the file association and the icon to use in the background, and just display "my file" with, say, a corresponding Word icon. It's something I deactivate on every Windows machine I use, but I will admit it has stopped a lot of "what is X extension?" questions - or our account managers deciding that all the files on the desktop looked ugly, so they took the extensions off the end, then, after diligently doing this for about 80 files, coming to complain that "all their files have gone" (though 80 "default" icons have taken their place), unable to figure out why, when they open "billing report" in Notepad (even though it was originally an Excel sheet), it suddenly displays hieroglyphs rather than their calculated bonuses, and wailing that "this computer is sh*t, it's always breaking, never does what it's supposed to..."

      Unfortunately they have got hold of forbidden literature and found out what PEBSAK means...

      I'll go take my pills now.

  13. Anonymous Coward
    Anonymous Coward

    Past proofing

    My OSI C8DF (? it's been too long) from early 1980 could take a 14" 74 MB hard drive. It had MS BASIC as one of the 2 BASICs it could run. I couldn't justify spending the $5,000 (or $10,000) for it, as the dual 8" floppies were big enough for my needs. So I don't know if it was one partition or not.

    But I never understood why MS thought 32MB was enough, because I knew they had seen bigger. But maybe it was from before they hit the big time, when it was just Bill G and his buddy doing all the work.

  14. Citizen99

    Interesting video. And I nodded in agreement with his remarks on the desirability of tactile knobs and switches in vehicles, so that you don't have to stop in order to safely change radio channel/volume or climate control.
