
Gone.

COMMENTS

This topic is closed for new posts.
  1. Shadow Systems

    Thin Clients suck.

    Unless you're sitting at a TC terminal within line of sight of the TC Server, connected via fibre optic (& a fat pipe at that), and you're the only one using the Server at the time, the latency between user action & the Server sending back the results is a massive PITA. The more users, the greater the distance between user & Server, the thinner the bandwidth pipe between them, and the weaker the Server hardware, the worse it gets for everyone involved. Your Server may have a TB of RAM & an exabyte of storage, but if it's serving a million users via 10-Base-T then nobody is going to Get Shit Done. You can have an exabyte-per-second FO connection to a single user, but it won't mean squat if the Server is running on an 8088 with 1 Meg of RAM & a single low-density, single-sided, improperly formatted floppy.

    You have to find the sweet spot of number of users, bandwidth pipe to serve them, & Server capacity in order to do it right, and all too often some bunghole in Manglement will cut corners, save money, & make themselves look good to the rest of Upper Manglement/BeanCounters at the expense of creating a working condition that doesn't allow anyone to Get Shit Done. Maybe with an unlimited budget & Managers With A Clue (do those exist?) you could deploy a TC network that lets folks GSD, but more often than not it's just a nightmare of users complaining that their terminal is too slow, the network is too slow, the server is too slow, and they're sick & fekkin tired of watching the screen display e.a.c.h. i.n.d.i.v.i.d.u.a.l. l.e.t.t.e.r. o.f. e.a.c.h. e.m.a.i.l...

    It's most obvious when everyone shows up at the office, turns on their clients, & tries to log on. If the Server goes from idle to suddenly trying to process 1,000+ users all at once, chances are good it's going to shite itself trying to GSD. If you configure the Server to handle the load at peak times, then the Bean Counters whine about it being under-utilized the REST of the time. If you make the users happy then the Bean Counters gripe; if you make the BCs happy then none of the employees will be (but the Share Holders will love it).

    It's easier to give each user their own full desktop machine with a standard image, lock it down so they can't do anything to it, & then use corporate/GPO rules to lock them out of the stuff they have no business fiddling with. You deploy updates & security patches from a central WSUS server, you know exactly how many licenses you're using (1 license : 1 machine), and you don't have to use an OC-768 just to handle the intra-office bandwidth between a TC Server & the local users.

    Toss Remote Users into the mix & you can kiss productivity goodbye. How do you install a TC in an employee's home via a hard-wired, secure, stable ComLink so you can trust that it's them logging on, it's an approved machine doing it, & there aren't any MITM attacks along the way? You can't. You need a VPN for that. And a full-on desktop or laptop. Sure, you could TRY to use a ChromeBook, but how do you make sure the remote user hasn't compromised the machine to allow them to BE the MITM vector? You can't. If you don't maintain physical control over the TC terminal, the network used to connect to the TC Server, & the data sent over that pipe, you can't pass any sort of security test for HIPAA, Government Security Clearance, or Regulatory Oversight requirements. So sure, a TC may sound all nice & cheap & easy to maintain, but what you save at the user's desk is more than eaten up by everything ELSE required to support them.
    Get a full desktop, hook it to the intra-office CAT6, link it to the local Server, and don't let anyone in the building without proper ID. Firewall off the Intranet from the Internet, and don't let anyone that doesn't *require* such cross-access have it. And no, tweeting on their FarceBook & watching cat porn videos on YouTube is NOT a requirement, even if it IS your boss demanding it. Remind him that Corporate Policy & TLA Laws require that the two shall never meet, & You Shall Not Pass.
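A rough back-of-the-envelope way to see the "sweet spot" point above in numbers. This is only an illustrative sketch; the thresholds (kbit/s per user, users per core, RAM per session) are made-up assumptions, not measurements from any real deployment:

```python
# Toy sizing check for a thin-client deployment: whichever of the three
# legs (users, pipe, server) is out of proportion becomes the bottleneck.
# All thresholds below are illustrative assumptions, not vendor figures.

def thin_client_bottlenecks(users, pipe_mbps, server_cores, server_ram_gb,
                            kbps_per_user=250, users_per_core=40,
                            mb_ram_per_session=300):
    problems = []
    if users * kbps_per_user / 1000 > pipe_mbps:
        problems.append("pipe saturates: nobody Gets Shit Done")
    if users > server_cores * users_per_core:
        problems.append("server CPU oversubscribed at the 9am logon storm")
    if users * mb_ram_per_session / 1024 > server_ram_gb:
        problems.append("server RAM exhausted: sessions start swapping")
    return problems or ["workable -- until someone cuts one of the three corners"]

# The pathological case from the comment: a million users over 10-Base-T.
print(thin_client_bottlenecks(users=1_000_000, pipe_mbps=10,
                              server_cores=64, server_ram_gb=1024))
```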

    *Cough*

    Sorry for the long-winded nature, but I had to deal with TC & Servers at a Global Communications Company, and even WITH an FO backbone *inside the office*, a Server with more capacity than most would believe possible, AND Manglement's permission to "make it work", it *ALWAYS* brought the users to frustration, us IT Drones to tears, & my supervisor to his knees dealing with the complaints. "It's too slow!", "I can't open email in under 5 seconds!", "I can watch the screen redraw the mouse pointer!". Really? Your TC terminal has the hardware to make it a full-on desktop powerhouse, and the ONLY "bottleneck" is the FO link. You've got lag in the low single-digit milliseconds, the Server shows it's able to respond to you in under 15ms, and yet you claim to watch the screen refresh? How about if we staple your tongue to a passing bus? GAH!

    *Sigh*

    TL;DR: Thin Clients Suck. Just give them a local desktop, lock it down, & save yourself the ulcer.

    1. Robert Pogson

      Thin Clients Don't suck.

      "the latency between user action & Server sending back the results is a massive PITA"

      This is an indication of a seriously flawed installation. Anyone familiar with that phenomenon hasn't used gigabit or 10 gigabit/s NICs on the server. A good server might have 4-6 such NICs, leaving almost no lag in the pipes for the usual point, click and gawk of typical desktop usage. Consider the Largo, FL setup. They have a few humongous servers each running one or a few services, like one for the browser, one for the session, etc. There's very little lag because every file needed to do anything is almost certainly cached in many GB of RAM on the servers. They literally have hundreds of users simultaneously on those machines and they are not maxed out in any way. The capital cost per user on the servers is of the order of $100, so it's very economical. All the resources are where they are needed rather than wasted sitting idle all over the system.
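For what it's worth, here is the bandwidth side of that claim as a quick sum. The NIC count and concurrent-user count are assumptions in the spirit of the comment ("4-6 such NICs", "hundreds of users"), not figures from Largo:

```python
# Per-user network headroom on a terminal server with several 10GbE NICs.
# nics and concurrent_users are assumed values for illustration only.

nics = 4
nic_gbps = 10
concurrent_users = 300

aggregate_mbps = nics * nic_gbps * 1000
per_user_mbps = aggregate_mbps / concurrent_users
print(f"~{per_user_mbps:.0f} Mbit/s available per user")   # ~133 Mbit/s

# A point-click-and-gawk remote session typically needs well under 1 Mbit/s,
# so on numbers like these the LAN is nowhere near the limiting factor.
print(f"headroom vs a ~1 Mbit/s session: ~{per_user_mbps:.0f}x")
```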

      If you really can't stand a few milliseconds waiting for a character to appear on the screen, learn to touch-type. Then you will know what will appear before it appears, a perfectly satisfactory situation.

      I've been using Linux Terminal Server Project for ages and it performs much better than That Other OS on a thick client. e.g. You log in... With XP, the hard drive seeks for 30s or so to load needed files and the desktop appears but may not be usable for another 30s or so. With LTSP, most of those files are already cached in RAM on the terminal server because umpteen other users are already logged in and it's just a few seconds to display the user's desktop. I once had a user fall off his chair because of this difference and his habit of leaning back in his chair waiting, please-waiting for XP to show up. We had a good laugh and carried on.

      I'll grant that full-screen video may suck on thin clients but there are many users who don't need that to maintain the database, deal with correspondence, plan the budget, ... It's not difficult to run a few local applications on the thin client if video is involved, unless you need to transcode/render/etc., in which case you almost certainly should have a thick client or better, a cluster of thick clients like the big boys and girls use.

      I'll give a prime example of where thin clients work and thick clients don't. I was in a room with 24 PCs operated by 24 humans and sweat was dripping down my nose. It was 40C in the room because the air-conditioning was not working, and each body put out ~100W of metabolic heat while each PC dumped ~150W of electrical power. Add that up... 2400W of human power and 3600W of electrical power = 6kW! Further, each thick client had 512MB of RAM and 40GB of hard drive totally wasted. The only reason we needed 24 copies of the OS was so M$ could make more money; one copy would do with GNU/Linux. Imagine what could be done on a terminal server with 12GB of RAM... We could have reduced the electrical power per seat to 50W or so, cutting the waste by 2400W, and done everything on a server drawing 400W with just a couple of hard drives. Capital cost could be cut in half easily and performance improved, with much less work than installing 24 copies of an OS.
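The arithmetic from that room, written out. The per-unit wattages (100W per body, 150W per thick client, 50W per thin-client seat, 400W for the server) are the comment's own rough estimates, not measurements:

```python
# Heat load in the 24-seat room, before and after a thin-client conversion.
# All wattages are the rough estimates quoted in the comment above.

seats = 24
human_w = 100        # metabolic heat per person
thick_w = 150        # electrical draw per thick client
thin_w = 50          # assumed draw per thin-client seat
server_w = 400       # one terminal server for the whole room

before_w = seats * (human_w + thick_w)               # 24 * 250 = 6000 W
after_w = seats * (human_w + thin_w) + server_w      # 24 * 150 + 400 = 4000 W

print(f"before: {before_w/1000:.1f} kW, after: {after_w/1000:.1f} kW")
# Client-side electrical saving is 24 * (150 - 50) = 2400 W; adding the
# 400 W server back in still leaves the room about 2 kW cooler.
print(f"net electrical saving: {seats*(thick_w - thin_w) - server_w} W")
```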

  2. Extra spicey vindaloo
    Thumb Down

    Are we back to this crap again?

    Every few years someone tries to save money by using virtualisation or remote computing. It never works, because you don't get the same experience as having a computer beside you. If you look at places where thin computing is implemented (e.g. airport check-ins), the experience is less than ideal for someone who knows how to use the system. Click, wait. Click, wait. Click, wait...

    Thin computing will be out of fashion again about the time you realise that it would have been cheaper to buy desktops than to pay the outlandish prices for support and hosting, plus the productive time lost to latency and outages. No thanks.

    1. Robert Pogson

      this crap again.

      Extra wrote, "Thin computing will be out of fashion again about the time you realise that it would have been cheaper to buy desktops than to pay the outlandish prices for support and hosting, plus the productive time lost to latency and outages. No thanks."

      Uh, ever heard of an Android/Linux smartphone? They are essentially thin clients when folks are searching using Google, moving images to/from FB, fact-checking via Wikipedia, navigating using Google Maps and browsing web servers out there. About a billion of these are sold each year and folks have them glued to their bodies. They aren't going away any time soon because they work for people. People love to have IT that works for them instead of having users/IT people constantly trying to fix what's wrong with their local thick client. Thin clients are simpler devices that leave the heavy lifting to servers somewhere else, anywhere else that doesn't trouble the user. Folks don't want heavy, tangled messes on their desktops blowing hot air at them. They want little slabs of silicon, plastic and glass that just work for them, the way thin clients do.

      Yes, it is possible to set up a system of thin clients that sucks but it's not a characteristic of thin clients but of the folks who don't understand them. I've repeatedly converted thick clients into thin clients and normal users are just amazed at how they get the performance of some newer machine or some server on that old piece of junk that now appears to beat the latest/newest/most powerful thick client. That's real to users. That's what thin clients can do. Real users don't want to be IT people. They want a box that's as reliable as a telephone and just keeps working no matter how fast people move. Thin clients usually have few moving parts and run cooler. That means they fail less often. People love that. People love that they can drop one thin client and pick up another and they are back in business in seconds. The last place I converted thick clients to thin, folks noticed the browser or word-processor springing to life in less than 2s while it took 7s on a thick client. Same for logging in. What's wrong with that? Nothing.

  3. Mike 16

    iPads and Chromebooks

    Exactly how do those differ substantially from "real" Thin Clients (tm)?

    1. ghettomaster

      Re: iPads and Chromebooks

      If you're referring to thin clients such as those offered by HP, the big difference is manageability: they are fit for purpose. With those devices you can easily:

      1. Configure them so they forget any changes made to the local machine when rebooted. As everything written to the drive during the session is thrown away at reboot, you don't need to worry about config changes, accidentally installed software, etc.

      2. Lock them down so all they can do is run the thin client software.

      They also typically run a cut-down version of MS Windows designed for this purpose, so they require little to no maintenance such as Windows Updates.

  4. Anonymous Coward
    FAIL

    Which part of 50 > 6 don't you understand?

    Six-question quiz? More like a 56-question quiz. A single question that asks you to rate 20 sub-items is twenty questions you have to think about.

    Sorry. I am fine helping the Reg earn money off their reader base and getting a little journalistic research into the bargain but a 6<->56 question bait and switch is too much.

    1. Robert Pogson

      understand?

      "I am fine helping the Reg earn money off their reader base and getting a little journalistic research into the bargain but a 6<->56 question bait and switch is too much."

      The Register provides a great service to humanity. Giving back a bit of time and sharing wisdom is the least we can do, literally. Further, explaining the benefits of thin clients to folks who may be misled into thinking they are "dumb terminals", "laggy" etc. makes my day. I noticed the questions multiplied like rabbits but they were almost always relevant/useful to someone, somewhere. Thin clients are good for the economy, good for the environment (going and coming and in use), good for human spaces, clean, quiet, compact, cheap, and, done right, faster than thick clients. What a lot of folks miss is that a puny thin client is like a frog that can climb stairs. There's a threshold of performance that can do wonders. The network and the servers are the staircase, a marvellous machine that's efficient and provides great leverage. We should tell the world about them.

  5. Anonymous Coward

    Hmm. No.

    My 2p.

    Thin client abstraction of the user session away from the user's device introduces more problems than it solves, and makes things considerably harder to manage than the fat client approach. It's a much more complex environment, and the complexity is between the user and their desktop.

    With the fat client, you automatically avoid contention between users as far as booting, starting applications etc is concerned - with VDI, you have a set of bottlenecks of your own making, which require extra effort to work around.

    The financial and green benefits of thin clients are overstated (the extra electricity, cooling and embodied cost of the servers, plus the thin clients, generally outweighs that of the fat client, given sensible power-saving policies).

    In short, as has been the case all along, it may be suitable for particular niches, but not for the general case.

    Great thing to hype up if you're trying to sell something though, which is presumably why it keeps coming round.

  6. MyffyW Silver badge

    A good few years ago I had an HP t5700 thin client. It ran Windows XP and had 256MB of RAM, back when that was a perfectly reasonable amount of memory for a desktop.

    After booting off the SLAX Linux distribution it ran quite well as a basic PC :-)

  7. enormous c word

    X-Terminals

    X-Terminals and Unix. Remember those? That's the only time thin clients ever really worked in a desirable way: put all the presentation stuff (window decoration, keyboard/mouse events) onto the local device and leave the application on the server, with an efficient comms protocol between the two, designed properly from the ground up. Everything else has been an ugly hack and sucks.

  8. Robert Pogson

    Affording Thin Clients

    Some anonymous coward wrote, "the extra electricity, cooling and embodied cost of the servers, plus the thin clients, generally outweighs that of the fat client, given sensible power-saving policies".

    Oh, the pain... DO THE MATHS...

    Compare 20 thick clients in a lab or office, each using 100W for LCD monitor and ATX box combined. Include the capital cost of a powerful processor, a few GB of RAM and many GB of storage. That's 2kW to start.

    Compare 20 thin clients accomplishing the same thing with a server with 16GB of RAM and 512GB of storage in a RAID array. The monitor may still use ~30W but the box may use only 10W these days. That's 800W for the clients. The server may run just fine on 200W, for a grand total of 1000W, half as much power. Power costs money. It matters especially if you have to pay to import it and then pay (in cooling) to get rid of it. You can save a few GB of RAM in the deal because so many files are cached, but server RAM may be more expensive, so that's about a wash. Storage, OTOH, is greatly reduced because you may need only two or three hard drives instead of 20. When you consider all these savings in operation and the increases in comfort and performance, there's just no reason to use thick clients unless some particular application requires it, like video editing etc. Lots of users just point, click and gawk all day long. They get improved performance with thin clients.
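As a worked example, the comparison above in a few lines. The wattages are the comment's own figures; the electricity tariff and hours are added assumptions purely for illustration:

```python
# 20-seat power comparison: thick clients vs thin clients plus one server.
# Per-seat wattages are the figures quoted above; tariff and hours are assumed.

seats = 20
thick_total_w = seats * 100                 # ~100 W per thick seat -> 2000 W
thin_total_w = seats * (30 + 10) + 200      # 30 W monitor + 10 W box + 200 W server -> 1000 W

print(f"thick: {thick_total_w} W, thin: {thin_total_w} W")

hours_per_day, tariff_per_kwh = 8, 0.15     # assumed working day and $/kWh
for name, watts in (("thick", thick_total_w), ("thin", thin_total_w)):
    cost = watts * hours_per_day / 1000 * tariff_per_kwh
    print(f"{name}: {cost:.2f} $/day in electricity")
```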

    On management: thin clients may require little or no management. I like to keep a list of MAC addresses and locations; they don't need much more than that. The most work I ever did for thin clients was unplugging the hard drive on converted PCs or setting the BIOS to boot PXE, just a minute or two per machine for the duration. New thin clients last indefinitely with no moving parts. Nobody would steal one, because most folks still don't use them and they don't do much without the server. "Fixing" one could be done by the office secretary swapping a unit. Yes, it is easier to maintain one copy of the OS and the users' files than 20 copies. Oh, wait. You normally keep the users' files on a server anyway. Why not keep a single copy of the OS there too? Live with that. I've used GNU/Linux thin clients for more than a decade and maintenance was practically zero compared to That Other OS on thick clients.

    1. Anonymous Coward

      Re: Affording Thin Clients

      "DO THE MATHS"

      Done the maths. Used the wattmeter too...

      The typical "fat" clients we're buying (i5, 8GB, SSD) draw 15-20W idle. It may peak to 60W if you thrash it with a benchmark or something, but in typical use consumption is around 20-25W. It's a fat client so intelligent client-side power management is easy, it can standby or hibernate if left logged in, or power-off if left at the login screen. It's an SSD so resuming is very fast. These states draw just enough for wake on lan - 1W or so. (The monitor draws the same for either client, so that cancels out.)

      We like this - it gets close to consuming power in proportion to what the user is doing.

      Last I looked, thin clients didn't have the ability to power-save in response to being idle, so you generally end up consuming 10W at a flat rate, 24/7. (When we were last looking at this we were looking at a trial in one of our areas that operates 24/7, although in practice overnight usage is pretty low. But more generally, turning devices off via centralised scripting is not viable for us - power saving has to be decided by the client device to be useful to us.)

      So, if we assume 8hrs usage per day, rounding up a bit to 30W when in use, we get 240Wh due to user activity, plus 16 hours at 1W when off/hibernated, giving 256Wh per client per weekday.

      The thin client, meanwhile, is drawing 10W flat rate, giving 240Wh per day. This doesn't leave much margin for any extra server infrastructure the thin client needs. And we haven't taken weekends into account for the fat client either.
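Those per-day figures, written out. The weekday numbers are the ones quoted above from the wattmeter; the weekend line assumes the fat client stays asleep at ~1W all day, which is the point about weekends not yet being counted:

```python
# Daily energy per seat, using the wattmeter figures quoted above.

fat_weekday_wh = 30 * 8 + 1 * 16     # 8 h in use at ~30 W + 16 h asleep at ~1 W = 256 Wh
thin_weekday_wh = 10 * 24            # 10 W flat rate, around the clock        = 240 Wh
print(f"weekday -- fat: {fat_weekday_wh} Wh, thin: {thin_weekday_wh} Wh")

# Weekends (assuming the fat client sleeps at ~1 W all day) flip the margin,
# before any server-side VDI consumption is added to the thin-client column.
fat_weekend_wh = 1 * 24              # 24 Wh
thin_weekend_wh = 10 * 24            # 240 Wh
print(f"weekend -- fat: {fat_weekend_wh} Wh, thin: {thin_weekend_wh} Wh")
```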

      Even if you tried to manage the power locally, you might be managing power on both the client device and the VM they're connecting to, and they may disagree on what constitutes 'idle'. If not, your VMs are running 24/7 as well, which also cranks up the electric bill.

      We don't like this - most of those 240Wh weren't for anything the user was doing, and none of the server-side power consumed by what the user actually does has been counted yet.

      I'm aware of one large site that was almost entirely thin client back in the P4 days, when a fat client really did draw 100W even when idle. They're not a thin client site any more, because the sums don't add up. A modern PC is a much more power efficient thing.

      Yes, the embodied cost is less, but for a thin client that you can offload video to, it's not by much.

      Performance improvements on the PC are slower than they used to be, so the expected lifetime of a fat client is longer than it used to be, and getting longer.

      You'll need some server infrastructure in either case - some storage for user data of various sorts, AD, maybe a couple of VMs for various management tasks. Those cancel out, because you need them whichever approach you take. What you don't need with fat clients is some hefty centralised VDI system able to cope with the bottlenecks caused by centralising something that was previously parallel. Any significant server infrastructure you need for that will crank up the embodied costs way over those of the fat-client approach, and push you way into the red on power consumption as well.

      "You normally keep the user's files on a server anyway. Why not keep a single copy of the OS there too?"

      We do, in a sense, in that that's where the standard image to reimage the clients comes from. Which is, in our environment, a straightforward process that can even be self-service if the user wants to do that. You don't need thin clients to take that approach.
