* Posts by Nigel Campbell

43 publicly visible posts • joined 26 Mar 2010

Roscosmos: An assembly error doomed our Soyuz, but we promise it won't happen again

Nigel Campbell

Re: I can't get the sensor to fit

The underlying problem is a culture that encourages middle management to treat line workers like that and not escalate issues that need fixing. Organisations will get politicised to the point that people start behaving that way if you don't keep an eye on the culture and values. The hard part is to avoid letting carpetbaggers into your management team. Once you get one manager with that mentality the toxicity will spread to everyone that person is in a position to put pressure on. It affects culture - by allowing a culture of undue pressure on workers - and hiring practices - A's hire A's, B's hire C's.

Keeping a culture alive where people are able to work on complex tasks in a zero-defect manner is not a trivial undertaking, and the vast majority of management teams are not capable of doing it. Certainly, British hiring practices for managers favour politically savvy 'people manager' types who aren't inclined to rock the boat, and the trend in management culture seems to be moving more in that direction in pretty much all industrialised countries. It happened at NASA with the O-ring fiasco, it's happened at Toyota (something to do with brakes, IIRC) and now it's happened at Roscosmos.

Party like it's 1989... SVGA code bug haunts VMware's house, lets guests flee to host OS

Nigel Campbell

Re: A standard dating back to 1987?

I used to play Doom quite happily on an ET-4000 based card - on a 386/20 with no VL bus, albeit at lower display fidelity and size settings. The ET-4000 (and its predecessor, an Artist ZX1) worked passably well for something with such constrained I/O. After that I upgraded to a 486 with a VL bus S3 video card and it really flew, even without the hardware acceleration on the S3.

Is this the worst Blockchain idea you've ever heard?

Nigel Campbell

Beetlejuice Beetlejuice Beetlejuice

https://www.engadget.com/2018/10/15/sony-tries-using-blockchain-tech-for-next-gen-drm/

Be careful what you wish for ...

Russian rocket goes BOOM again – this time with a crew on it

Nigel Campbell

> The limit is apparently seals and washers in the manoeuvering systems, the propellant used makes them start to degrade.

Gotta love Hydrazine - corrosive, highly toxic, carcinogenic, explosive across a wide range of vapour concentrations and hypergolic with a wide variety of industrial and domestic materials.

When you see folks talking about green propellants, what they really mean is 'anything but f-ing Hydrazine.'

IT Got me depressed

Nigel Campbell

Re: IT Got me depressed

> Working for a consultancy which I cannot mention.

And therein lies your problem. Get thy botty onto the contract market. Better pay, less pressure, fewer strings attached.

Now Microsoft ports Windows 10, Linux to homegrown CPU design

Nigel Campbell

Re: Itanic was wildly successful ...

They were quite popular in numerical computing for a while, and not the only VLIW architecture to get some traction in that space (look up the Multiflow TRACE, for example). VLIW also makes an appearance in DSPs, but it's not terribly efficient for general purpose computing. Now that modern CPUs can do dependency analysis on the fly, they get multiple-pipeline scheduling that adapts at runtime rather than being fixed by the compiler.

Nigel Campbell

Re: Computer says "No"

ARM was definitely faster than anything available from Intel at the time the Archimedes came out. BYTE published a series of Dhrystone benchmarks showing it was faster than anything but a Sun 3/80. This was before any of the workstation vendors such as Sun had moved off the M68K family to RISC architectures.

Sun, SGI, HP and various other parties brought out RISC architectures in the latter part of the 1980s, typically coming to market a couple of years before Intel brought out the 486. These machines were 2-3x the speed of a 386 on integer benchmarks, and typically had much faster graphics and I/O than mainstream PC architectures of the day.

RISC architectures were still appreciably faster than the 486 and Pentium until the Pentium Pro came out, although not the 2-3x advantage they used to enjoy. However, by the mid 1990's the majority of the I/O bottlenecks and other architectural baggage on the x86 family had been solved and Wintel/Lintel was a viable architecture for a general purpose workstation.

Linux and Windows NT made the ordinary PC a viable workstation platform, although NT didn't really mature as a platform until Windows 2000 came out. By 2000 or thereabouts the writing was on the wall for proprietary workstation architectures as they had stopped providing a compelling performance advantage over commodity PCs. RISC workstations hung on for a few more years, mostly running legacy unix software or meeting requirements for certified platforms.

Around 2005-6 AMD's Opteron brought 64 bit memory spaces to commodity hardware and removed the last major reason to stick with RISC workstations, which by then had ceased to provide any compelling performance advantage for the better part of a decade. The last RISC vendors stopped making workstations by about 2009 and by then most of their collateral was about running platforms certified for stuff like aerospace.

IBM's Power 8/9 are still significantly faster than Intel's offerings, although whether that translates into a compelling advantage is still up for debate. Raptor Systems is getting some press for their workstations, but their main schtick is security rather than performance.

Sysadmin sank IBM mainframe by going one VM too deep

Nigel Campbell

That was standard operating procedure in BBC Micro land. The boot script on a floppy was called !BOOT (pling-boot). I heard it called Pling long before I ever heard of a Bang-Path.

Sun billionaire Khosla discovers life's a beach after US Supreme Court refuses to hear him out

Nigel Campbell

Or crimson, or teal, or black, dark brown or several other colour schemes.

Sun boxes were mostly blue and purple during the 2000's.

NASA's Kepler telescope is sent back to sleep as scientists preserve fuel for the next data dump

Nigel Campbell

Re: Refueling Kepler

Any manned mission is going to cost a lot more than 50 million. Kepler wasn't designed for in-flight refuelling. That means sending astronauts out there with tools and tanks full of nasty chemicals like Hydrazine. Not to mention replacing the reaction wheels, calibrating the whole 3-ring circus and shaking it down for a test. There's also the question of whether you've got enough delta-V to actually rendezvous with Kepler in a reasonable length of time - which is a much bigger deal on a manned flight using life support supplies than with a robot spacecraft.

Maintenance of craft anywhere but low orbit is a knotty problem because of the time it takes a manned mission to rendezvous with the target. Fast burns tend to use a lot more delta-V (and therefore fuel) than a maximally efficient orbit. The exponential nature of the rocket equation makes piling delta-V onto a mission very expensive very quickly. Combine that with the size of a manned payload and in-situ maintenance becomes a very expensive proposition. By and large nobody bothers to do it with satellites.
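The exponential cost is easy to see from the Tsiolkovsky rocket equation; a quick sketch in Python (the exhaust velocity and delta-V figures below are illustrative, not real mission data):

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: m0/m1 = exp(delta_v / v_e)."""
    return math.exp(delta_v / exhaust_velocity)

VE = 4400.0  # m/s, roughly a hydrolox upper stage (illustrative)

# Each fixed increment of delta-V *multiplies* the required mass ratio,
# so propellant mass grows exponentially, not linearly.
for dv in (3000, 6000, 9000):
    print(f"dv={dv:5d} m/s -> mass ratio {mass_ratio(dv, VE):6.2f}")
```

Doubling the delta-V budget squares the mass ratio, which is why hauling a heavy manned payload out on a fast transfer gets so ruinously expensive.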

If you want to learn about orbital mechanics then I can't recommend Kerbal Space Program highly enough. You'll not see any better crash course in the practicalities of the subject anywhere - and crash you will.

https://xkcd.com/1356/

Building your own PC for AI is 10x cheaper than renting out GPUs on cloud, apparently

Nigel Campbell

Re: Old MacPro hardware

Or old HP Z series hardware. I have a secondhand gaming box made out of a Z420 with a Geforce card that cost £350. I added a SSD to supplement the 2TB spinny that it came with and a joystick for about another £150. It runs Kerbal Space Program swimmingly. Dell and Lenovo make similar kit that turns up on Ebay as well, but HP is really the major player in this space.

Developer goes rogue, shoots four colleagues at ERP code maker

Nigel Campbell

Re: A gun is involved in every single mass shooting.

'Regulated' in that context means trained - as in 'Regular Army'. Do you really want your militias going to organised training sessions?

Thought your data was safe outside America after the Microsoft ruling? Think again

Nigel Campbell

Re: America's increasing isolation

The assertion about not giving two hoots about customers' privacy isn't true if it becomes a due diligence issue for the customers. Google and Microsoft - along with various other players - are trying to sell much more lucrative cloud hosting services such as Azure. If USGOV manages to establish a precedent that they can compel data to be handed over without having to make a case to the government of the country where it's domiciled, it causes two major problems to anybody providing any international hosting services.

* First, any entity with an INFOSEC policy has to consider the implications of having commercially sensitive data picked up in overly broad search warrants. This makes U.S. domiciled companies very unattractive to any entity that has any INFOSEC policy.

* Secondly, this type of warrant puts companies providing such services in contravention of the laws of other countries. A significant part of Microsoft's defence is that it is impossible to comply with warrants of this nature without contravening the laws of the country the data is domiciled in.

Exposure to the overly broad U.S. search and seizure policies is already causing infrastructure vendors such as Cisco to lose significant market share, as it is now considered a significant risk that Cisco hardware may be backdoored. If these precedents are set in the U.S. then any U.S.-domiciled company providing hosted services will carry the same operational risk.

However, the major risk for multinational companies trading in the U.S. is that it becomes impossible to do business in both the U.S. and any jurisdiction that has data protection laws, as U.S. law could compel them to do things that would be criminal offences in the domicile the data is held in. This would effectively balkanise the market for hosting services to U.S. and non-U.S. companies; Google and Microsoft would lose a lot of money and just about all of their market influence.

MacBook killer? New Lenovo offering sexed up with XPoint booster

Nigel Campbell

Re: Yet another myopic article on laptops

> For that I find a machine costing £150-£200 from Tesco is entirely adequate. Running Linux of course. A machine that I won't cry much if/when I lose it or drop it under a bus.

You can also get ex-lease Thinkpads off Ebay pretty cheaply, which will definitely play nicely with Linux, and which you might have a fighting chance of getting parts for. Having said that, chances are that you won't need to do anything to them anyway.

Nigel Campbell

Re: It's no good

There already used to be an analogue equivalent: printed function key strips, q.v.

http://jonathanbluestone.deviantart.com/art/Acornsoft-ELITE-Function-Key-Strip-1984-524160400

Let's praise Surface, not bury it

Nigel Campbell

I could see the market eventually splitting into consumer devices descended from tablets and high-end workstations with the 'midrange' PC substantially dying out, except perhaps as a legacy system (with the caveat that legacy applications have a habit of lingering on for many years).

Android certainly has a critical mass of application support, and could (at least in principle) support office productivity or traditional business application style software if keyboards, mice and overlapping window managers became first class citizens on the platform. Think something along the lines of a laptop dock for tablets, or mobile kit that looks like an MS Surface. It's certainly technically feasible.

From this perspective it's not hard to see a trend where the 'midrange' PC splits into mobile devices at the bottom end, with PCs trending towards becoming workstations sitting in niche markets at the high-end. The 'midrange' corporate or consumer PC is essentially a legacy system at this point.

This dichotomy can be seen in the consumer market already. Consumer PC sales have been substantially displaced by mobile devices. Corporate I.T. has much more inertia due to their incumbent application portfolio, but there are plenty of I.T. departments rolling out mobile devices and mobile business apps are definitely a thing.

Traditionally Microsoft was hard to shift from the desktop because you couldn't easily insert yourself between a desktop system and the users in the way you could use terminal emulation software to front mainframe applications on a PC. However, thin client architectures such as Citrix or virtual desktops have made it possible to get a shim in between Windows and its punters. The tech certainly exists now to treat Windows as a legacy system if you feel that way inclined.

I could see this trend playing out at some point in the next decade. In practice, it is more likely to be some sort of inflection point where the tech quickly becomes perceived as a viable option by mainstream purchasers.

Nigel Campbell

CPU speed has been a non-issue on PCs for more than a decade now. A year or two ago I gave a couple of old HP workstations to a friend. I had bought these secondhand around 2007-2008, and they had been a current model around 2006 (XW9300s for anyone interested). They're now around 10 years old, and he's playing heavily modded Skyrim on them, albeit with a newer gaming card fitted. I don't think he has any short-term plans to replace his one.

I only bothered to replace the machines myself because I needed a laptop and bought a thinkpad that could do the job (fitted with a big ssd at vast expense at the time). If I hadn't been working out of town at the time I might well not have bothered to replace them.

Microsoft now awfully pushy with Windows 10 on Win 7, 8 PCs – Reg readers hit back

Nigel Campbell

Re: Am so pissed right now with this!

I have a theory that Windows peaked with NT5 (Windows 2000) and it's been pretty steadily downhill ever since.

Typewriters suck. Yet we're infinitely richer for those irritating machines

Nigel Campbell

Luxury - was Re: VW Beetle

You, at least, had a heater. For a number of years I had the joy of driving an ex-army land rover that didn't have any of your fancy luxuries like a heater.

Uphill both ways etc.

Benchmark bandit: Numascale unveils 10TB/sec monster

Nigel Campbell

Re: But why?

Not so much big data as applications that need a large shared memory space and don't lend themselves to being split up into the sort of isolated steps with discrete inputs and output that Hadoop and its ilk support. This encompasses a large set of traditional supercomputing applications that revolve around big dense matrix operations.

Examples of this include finite element models used in engineering, computational fluid dynamics, or certain types of signal processing applications (e.g. geophysics applications for oil exploration). Any number of scientific applications use matrix operations.

This type of application computes relationships between n entities by representing the relationships in an n x n matrix. If the relationships are dense enough (i.e. there is a non-zero connection between enough pairs of elements) then the most efficient way of doing this is through a two dimensional array held in memory. As memory usage is O(n^2), the data sets can get very large very quickly.
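As a back-of-the-envelope sketch of that O(n^2) growth, assuming 8-byte double-precision cells:

```python
def dense_matrix_bytes(n, cell_bytes=8):
    """Memory needed for a dense n x n matrix of fixed-size cells."""
    return n * n * cell_bytes

# Each doubling of n quadruples the memory footprint.
for n in (10_000, 20_000, 40_000):
    gib = dense_matrix_bytes(n) / 2**30
    print(f"n={n:6d} -> {gib:8.1f} GiB")
```

At n = 40,000 you're already past 11 GiB for a single matrix, which is why these workloads want a large shared memory space rather than a shared-nothing cluster.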

Nigel Campbell

Re: Old chip

Probably to do with Hypertransport, which has a history of being friendly to folks plugging other stuff into CPU sockets. For example, there are some FPGA products (e.g. Altera) that will go into an AMD socket, and AMD has been friendly to this sort of application since the mid 2000's when they brought out the Socket 940.

Although the article doesn't say so explicitly, it does talk about 3 CPUs, which suggests the fourth socket is being used for something else, probably the connectivity. I guess the choice of Opteron is because HT is more friendly to this sort of application than QuickPath.

7/7 memories: I was on a helpdesk that day and one of my users died

Nigel Campbell

I was on the Northern Line at the time, around Charing Cross

I was on the Underground at the time, coming back from a job interview - if I recall correctly around Charing Cross or Embankment Station. The train just stopped in the station for a while, and then there was an announcement about a Power Surge over the P.A. system. I guess that 'Power Surge' was a code word to alert staff about a bomb or suchlike. This was probably the source of the rumours about an electrical fire.

Anyway, the train just sat in the station for ages and in the end I got up and walked to Waterloo to get my train back out to Wimbledon, where I was living at the time. There were heaps of people walking to the station and it was pretty crowded.

I didn't hear what happened until I saw it later on the news.

Reg hack survives world's longest commercial flight

Nigel Campbell

I've done a Lufthansa flight that goes Frankfurt to Jakarta via Kuala Lumpur that has a 15 hour leg. That's pretty horrendous. Anything from New Zealand to civilisation is also pretty awful, but at least it's broken up by a stop-over in the middle.

Who wants a classic ThinkPad with whizzy new hardware? Lenovo would just love to know

Nigel Campbell

It's the ethos. A Thinkpad is a tool as opposed to a fashion accessory. Trad Thinkpads - up to the T420/T520/W520 generation - have a nice keyboard, a decent feature set and solid build quality. They're designed for road warrior types and the I.T. departments who have to support them. In the Thinkpad ethos, function has always been king and they've been pretty unrepentant about that.

IBM and Lenovo (who bought IBM's PC division about a decade ago) also have very good support for their machines, especially in comparison to certain other purveyors of PC hardware (Dude, you bought ...) and consumer electronics who really are dead set on selling you a new machine every 2-3 years.

Thus, they have good, reliable hardware and a pretty dedicated fan base. I have a friend who bought 3 Thinkpads over the last 15 years - and they all still work. My first introduction to the Thinkpad was when I got a W520 about 3 years ago. It was the first time I owned a laptop that I could truly view as a desktop replacement.

Put in context, I do a lot of typing - enough that there is quite pronounced visible wear on the keycaps. The machine is still trucking along just fine.

If you want a *real* thinkpad then look into getting a used T420 or something of that generation from ebay. T420s weigh about 2kg and should cost about £200-300 for an ex-lease one in good condition (note that the T420 was typically £1000+ when it was new). These are the last generation with the old school keyboards and they're pretty common on the ex-lease market.

They also have space for an mSATA SSD - you can get something like an Intel 525 or 530 off ebay and put it into the machine. This gives you a machine with separate system and data disks. Even though people bemoan the insidious influence of the 'thin and light' on the Thinkpad range, you can still get a trad one in good condition and fit it out with SSDs and suchlike.

Major London rail station reveals system passwords during TV documentary

Nigel Campbell
Coat

Open Look - joys of legacy systems

I've seen that application open at Waterloo and one or two other locations - it looks like a realtime display of the status of the points and signals. The buttons on the application have the fairly distinctive oval styling (rounded ends) of the Open Look intrinsics, which places the app at something like 20-25 years old, probably running on Solaris.

Australia finds $1 BEELLION to replace No-SQL DATABASE

Nigel Campbell

Actually, both Oracle and DB2 can be deployed on server clusters. Although the architecture is still shared disk, a billion AUD is still enough to buy a few SAN controllers to partition the storage across - and any hardware up to a whole Sysplex to run the DBMS.

Crikey - with that much they might even be able to afford Oracle's licensing.

Supermicro adorns servers with bright and shiny ULLtraDIMMs

Nigel Campbell

Re: Price has a lot to do with it

That's overstating it. While Linus isn't best pleased with NVidia, their binary drivers tend to work fairly well in practice.

Given that these SSDs are server items they will only have to support a much narrower range of hardware platforms - a handful of server chipsets. The testing workload should be much less than NVidia has to deal with.

These are server components - the market won't stand for instability. Either the vendor will get them right or crash and burn. I suspect that server vendors will want to make it work if they view this as a potentially strategic product. The lack of open-source drivers isn't going to be an issue to anybody but die-hard GPL supporters (apologies to RMS).

Dungeons & Dragons relaunches with 'freemium' version 5.0

Nigel Campbell

Traveller, being written in the mid 1970's, had some quaintly amusing notions on computer technology. If I could be arsed I would post this from my TL11 Samsung Galaxy hand computer.

Tech that we want (but they never seem to give us)

Nigel Campbell

Where do I start?

A decent semi-rugged workstation laptop with a 16:10 screen and 32GB+ of RAM. Put the battery in the side and the connectors on the back of the machine; some of us are left-handed. Bonus points (looking at you Lenovo) for bluetooth drivers that actually work on a bare-metal O/S install.

A smart phone that lets you hold the phone by its sides without triggering sh*t on the touch screen (looking at you Samsung).

A SAN and consolidation platform that is even slightly performant on data warehouse workloads. Slower than a 10 year old 32 bit server with U160 disks - srsly?

Privacy International probes GCHQ's mouse fetish

Nigel Campbell

Re: @ obnoxiousGit

The practice is still alive and well. Many SAN manufacturers offer 'standard' and 'high-performance' firmware for their controllers. No hardware updates needed. CPU manufacturers still make multiplier-locked chips.

EMC's DSSD rack flashers snub Fibre Channel for ... PCIe

Nigel Campbell

On the back of an envelope

I think you might just have invented infiniband. It's switchable and you could certainly implement a key-value store on top of RDMA or a channel.

Game of Thrones written on brutal medieval word processor and OS

Nigel Campbell

Re: Word bad, raw text editor good

My set of System V manuals took up about 3 feet of shelf space, although that doesn't hold a candle to the pallet (literally) of documentation that came with VMS. My alma mater had a room that contained nothing but shelving for VMS manuals.

Traditional RAID is outdated and dying on its feet

Nigel Campbell

Re: and another Eh?

Disks are getting bigger a lot more quickly than they are getting faster. To a first approximation, disk capacity grows with the inverse square of the bit size (areal density), whereas sequential read speed only grows linearly with it. A modern hard drive therefore takes a lot longer to fill up than a drive built in 1990.
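A toy calculation makes the gap concrete (idealised model: capacity goes with the inverse square of the bit size, sequential speed only linearly, everything else held constant):

```python
def scale_drive(bit_shrink):
    """Shrink the bit linearly by a factor of `bit_shrink` (2 = half size).
    Returns (capacity multiplier, speed multiplier, full-drive scan time
    multiplier) under the idealised scaling model above."""
    capacity_x = bit_shrink ** 2   # areal density: inverse square
    speed_x = bit_shrink           # bits per track at fixed rpm: linear
    fill_time_x = capacity_x / speed_x
    return capacity_x, speed_x, fill_time_x

# Halve the bit size: 4x the capacity but only 2x the speed,
# so filling (or RAID-rebuilding) the drive takes twice as long.
print(scale_drive(2))  # (4, 2, 2.0)
```

That growing rebuild window is exactly why traditional parity RAID gets uncomfortable on multi-terabyte drives.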

VMware hyper-converge means we don't need no STEENKIN' OS...

Nigel Campbell

The O/S services still have to be available to the applications

Whatever you run in VMs must still provide system services like I/O, memory management and higher level APIs to applications. You could run an app on a bare VM (which is how VM/CMS, the grand-daddy of them all, worked) but you would have to link in libraries providing those services.

For some apps such as DBMS platforms you could arguably make a case for running them with a minimal set of kernel services directly linked into the server. Oracle have been banging on about DB appliances built like this since at least the 1990s.

However, there aren't many applications for which this would be particularly beneficial - at least where it would be cost effective to do this. Developing applications for that type of architecture is much harder and more expensive than developing against a platform that provides a comprehensive set of system services. In short, there is little to recommend this type of platform outside of a handful of specialised applications.

Providing system services through the hypervisor simply turns it into an operating system with a very heavy context switching overhead, likely to be quite inefficient. Far better to provide an O/S kernel with paravirtualised drivers against raw I/O and memory services provided by the hypervisor, which is largely how it's done now.

If you wanted a minimal kernel it is certainly possible to strip Linux or one of the BSDs down to a minimal set of system services, but this may or may not be particularly useful. One possible benefit is improved runtime security by removing unnecessary services that could provide attack vectors - in which case you've really just re-invented OpenBSD.

One point to note is that we used to get on just fine without hypervisor sprawl, and that *nix or mainframe architectures could run enormous portfolios of applications without needing to split each one into its own VM. Modern VMs are largely a solution to problems originating from the Windows DLL Hell era, and that has been a substantially solved problem for a decade or more.

That is not to say that VMs aren't useful, but in many cases they are a sledgehammer to crack a walnut. With paravirtualisation, the O/S kernel has drivers for services that the hypervisor can provide reasonably efficiently, which allows the same O/S to run with or without a VM. The O/S provides the system services and the VM allows multiple O/S images to be consolidated onto a shared hardware platform. You can run your O/S and applications against a VM or against bare metal. Job done.

For Windows guest - KVM or XEN and which distro for host?

Nigel Campbell

Re: For Windows guest - KVM or XEN and which distro for host?

I agree with the posts suggesting it would be easier to use Windows as the host. What you might do is to benchmark a GCC or kernel build on a native Linux build (i.e. Linux installed on the bare metal) and on Linux running in a VM. See what the overhead really is - if it's no more than (say) 20% then the gains from running Linux as the host might not be worth the trouble.
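As a sketch of that 20% rule of thumb - the timings below are made up for illustration, not real measurements:

```python
def vm_overhead_pct(native_seconds, vm_seconds):
    """Relative slowdown of the virtualised build vs bare metal, as a %."""
    return (vm_seconds - native_seconds) / native_seconds * 100

# Hypothetical wall-clock times for the same kernel build:
native, in_vm = 300.0, 345.0
overhead = vm_overhead_pct(native, in_vm)
print(f"overhead: {overhead:.0f}%")  # 15% - under the 20% pain threshold
```

If the measured overhead comes in under your threshold, the convenience of keeping Windows as the host probably wins; if it's well over, that's your evidence for flipping the host and guest around.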

Look out ARM, Intel, here comes MIPS – again

Nigel Campbell

Quite a bit more than a decade

> MIPS is has been 64-bit for over a decade.

Actually, something like 22 years. MIPS was the first mainstream architecture to go 64 bit and the R4000 came out in 1991.

http://en.wikipedia.org/wiki/R4000

If anything, mature 64 bit MIPS architectures would be its key advantage over ARM, although that will be a transient window of opportunity. It's also got a much richer background in server architectures going back to the 1980s.

Also, you can already get MIPS based laptops if you want a native dev box.

We gave SQL Server 2012 one year to prove itself: What happened?

Nigel Campbell

Actually, there is an API for DQS

The docs do actually describe the API for programmatically controlling fuzzy match operations in DQS - it exposes a web service that's used in MDS integration. However, they also go to some trouble to emphasize that these APIs are not supported for application code.

Why might this be the case?

If you look at the feature comparison docs for SQL Server, you will find fuzzy matching components that do something very similar to the fuzzy matching functions in DQS. Catch is, these features are only available on Enterprise Edition. DQS is available with the (much, much cheaper) business intelligence edition.

I wonder just how late in the game their marketing wonks twigged on?

Hey, you, dev. What do you mean, storage is BORING?

Nigel Campbell
Unhappy

Storage has been the bane of my existence for some years.

I've been working in data warehousing (bringing intelligence to the business ...) for some years now and I've seen at least half a dozen sites where the infrastructure folks just didn't get storage. When combined with the 'every problem is a nail' mentality that consolidation environments foster, it leads to a great deal of pain.

First example: A couple of sites trying to shoehorn 1TB+ data warehouse applications onto consolidation environments - and wondering why the projected runtimes wouldn't fit into the batch window. In one case they were moving the system off the original crusty old 32 bit hardware it had been built on using SQL Server 2000 - machinery the better part of a decade old. Their new *cough*NetApp*cough* SAN cost them a cool half a million and was comfortably outperformed by an antediluvian direct attach SCSI array.

Second example: On more than one occasion I've been able to demonstrate the same ETL process running significantly faster on a desktop PC than the production server we were supposed to be deploying to (in one case half the runtime).

Third example: I got to know a couple of sales reps from a large storage vendor I used to work in relative proximity to. Off the record they would quite happily say that a lot of their DW customers used direct attach storage because it just wasn't feasible to get the performance out of a SAN.

One of the MS fast track data warehouse papers manages to obliquely refer to this, saying that an improperly tuned SAN (i.e. one tuned for a general purpose workload) is likely to need 2-3x the number of disks to achieve the same performance as one tuned for the application.

In an application domain where your canonical query is a single table scan, all the caching and tiered storage in the world get you no benefit when your workload has no temporal locality of reference. Direct attach storage is cheap and fast. 99% of Data warehouses don't need high availability. Put a HBA on your server and back the DBs up on the SAN.

It's a relatively straightforward concept - really it shouldn't take Einstein to work this out.

I could go on ...

</rant>

BOFH: Shove your project managementry up your mailbox!

Nigel Campbell

Percussive stakeholder management

Known as a 'governance stick' in some circles.

Vote now for the WORST movie EVER

Nigel Campbell

Worst vs. Most disappointing?

I think that one should separate the concept of 'worst' from the concept of 'most disappointing'. For example, Avatar was not the worst film ever made, but it is a candidate for 'most disappointing' because it could easily have been so much better. The cinematography and special effects were really pretty good, but the triteness of the script was enough to really spoil the film.

The Phantom Menace and The Matrix Reloaded were deeply mediocre films that followed a much better predecessor, but 'mediocre' is probably a better definition than 'crap' (OK Jar Jar Binks was actually crap and some of the antagonists in TMR were pretty rubbish as well). This makes them contenders for 'most disappointing', but not necessarily 'worst'.

I haven't seen Battlefield Earth, although I'm told it is really pretty terrible. Of the ones I've seen, I think Highlander 2 is comfortably the worst on that list in terms of absolute production values. It really was poo - crap script, crap production, crap acting, crap premise, and a tenuous, contrived link to the first film that completely bastardised several of the major tropes. Michael Bay would be proud.

Fedora 15: More than just a pretty interface

Nigel Campbell

Start menu was a knock-off

Technically, the Win95 start menu was a knock-off of the Apple menu from pre-OSX versions of Mac OS, with the minor proviso that the Apple menu sat in the top left corner.

Sat-spotters find secret payload launched by giant US rocket

Nigel Campbell

Aerospace does indeed still use pounds.

Historically, pounds were used as a standard in aerospace circles for stating such things as fuel loads, payloads and takeoff weights. It remains that way as a legacy system because it's too hard to change to metric (people might confuse metric and imperial weights).

In spite of this, there was one high profile instance a few years ago of a space mission getting bolloxed up because one team used metric units and the other team used imperial without converting between units.

'Switch to Century Gothic to save the planet'

Nigel Campbell

I think it would probably use more paper overall

If I copyfit a lorem ipsum passage of Century Gothic (which looks like a knock-off of Avant Garde Gothic) at 10 points, it takes up about 10% more space than the same thing in 10pt Arial (11 and a bit vs. 10 and a bit lines). At a guess I'd say that the 10% extra paper usage would substantially outweigh any savings in ink coverage.