The IT labs at our old Uni were famous for their little-known read-only network share of Quake 2, and the epic deathmatches that would take place of an evening in lieu of completing comp-sci programming assignments. Handily, the machines all had 8MB AGP video cards - essential for late-90's coding tasks, I hear...
Blockchain has a huge number of practical applications (indelible chain of evidence, distributed auditability of transactions etc). Burning electricity to generate a growing pile of speculative currency with no tangible backing of state-sanctioned promissory debt (which, ultimately, is what currencies are built on - the promise that the people of a country are collectively "good for it") is the absolute bloody worst use of the technology.
Rather typically though, that's the application that receives 99% of the attention and effort... Humanity eh?
Zuckerberg wants to create a make-believe world in which you can hide from all the damage Facebook has done
Now Nvidia's monster GeForce RTX 3090 cards snaffled up by bots, scalpers – if only there had been a warning
The 3090 is a pup - ludicrously overpriced but only 8-10% faster in actual gaming benchmarks than the (very capable) 3080. All that video memory and nothing to use it for, as practically nobody is legitimately gaming at 8K any time soon - and the few that are even running PC games at 4K@60Hz (rather than the generally more useful 1440p@144Hz) will be very well served by a "basic" 3080 - especially on titles with DLSS 2.1 turned on.
Combined with the fact that the drivers are intentionally hamstrung (SR-IOV disabled etc), artificially crippling performance in many of the high-end productivity use cases that could benefit from 24GB of fast VRAM, I fail to see who the market is for these cards. It's got to be very niche, and/or full of morons.
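To put the value argument in rough numbers, here's a back-of-envelope price-per-performance sketch. The £649/£1399 RRPs and the ~9% uplift are assumed round figures for illustration, not official pricing:

```python
# Back-of-envelope price-per-performance comparison. Prices are assumed
# launch RRPs and the ~9% uplift is a rough benchmark figure, not gospel.
def price_per_perf_unit(price_gbp, relative_perf):
    """Cost per unit of relative gaming performance (3080 = 1.0)."""
    return price_gbp / relative_perf

rtx3080 = price_per_perf_unit(649, 1.00)   # assumed £649 RRP
rtx3090 = price_per_perf_unit(1399, 1.09)  # assumed £1399 RRP, ~9% faster

print(f"3080: £{rtx3080:.0f} per perf unit")
print(f"3090: £{rtx3090:.0f} per perf unit")
print(f"3090 premium over 3080: {rtx3090 / rtx3080 - 1:.0%}")
```

On those assumptions you pay roughly double per frame for the 3090, which is the whole "pup" argument in one division.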
Paying over the odds for any of these cards is a mug's game. Supply will catch up with demand in relatively short order - and there are some ludicrous deals to be had on cut-price NOS or gently used 2080 Tis right now if you're really desperate for something, as they were never cost-effective for crypto mining (unlike the flood of run-ragged 1080 Tis that hit the market a while back).
Couple that with AMD chucking out something later in October that might compete or at least put pressure on the midrange market, and you'd have to be mental to throw more than RRP at a 30xx card right now.
Adobe Illustrator's open source rival Inkscape delivers v1.0.1 - with experimental Scribus PDF export
I'd encourage anyone to take a look at the Affinity tools lineup (Designer = Illustrator, Photo = Photoshop, Publisher = InDesign). They are dirt cheap at about £50 (frequently on sale for £25 each!), are bought as one-off purchases rather than on a subscription model, and the interface/usability/performance/stability are all exceptional.
I'm not knocking Inkscape - having a plucky open source option for those who need to draw a couple of vector boxes now and again and can't afford/don't want to pay anything at all is great - but even at the tiny one-off price (compared to a Creative Suite monthly sub!) Affinity is an absolute winner.
ZX Spectrum reboot promising – steady now – 28MHz of sizzling Speccy speed now boasts improved Wi-Fi
Sony reveals PlayStation 5 will offer heretical no-optical-disk option. And yes, it has an AMD CPU-GPU combo
Use of third-party external storage for game storage is unlikely, given all the hoo-ha they are making about the performance of Sony's extra-special SSD storage tech and how it is a requirement that all games utilise it.
That and the box is an eyesore, looking more like something Soulja Boy would whack his name against than a big-three console.
Hard pass from me until they release the inevitable revised version in two years...
MacOS on Arm talk intensifies: Just weeks from now, Apple to serve up quarantini with Kalamata golive, reportedly
Mystery cloud added 10,000 new AMD Epyc servers in under ten days to handle demand for you know what
Go fourth and multi-Pi: Raspberry Pi 4 lands today with quad 1.5GHz Arm Cortex-A72 CPU cores, up to 4GB RAM...
Huge Thin Client Potential
2 x 4K-capable video outputs, H.265 hardware decoding (not that any streaming protocols use it... yet) and FINALLY putting the ports on only 2 sides instead of 3. Add that to the USB-C power, a proper non-bus-speed-constrained GbE NIC and more power to peripherals, and the Pi 4 is the basis of a bloody amazing low-cost thin client platform.
Just in time for Windows Virtual Desktop to launch as well!
Hoping the various low-cost Linux thin client OS vendors (Thinlinx et al) jump on it soon.
Windows 8/10 Control panel
A great specific example. 95% of the settings you needed to tweak a Windows 7 desktop were only a couple of consistently pathed clicks deep within Control Panel, and anything more exotic you could spin up via MMC as required.
Under Windows 8 (and carried over to 10), the modern UI's approach to organising settings in particular is god-awful. Stuff is spread out in non-intuitive places, with loads of scrolling through radio buttons even though there is more than enough white space to include more information per view. A huge step backwards in usability design.
Re: Company Management
Have a google around for the "Spongebob Plan". That's almost certainly the way this is going. There are likely no secured creditors, so the last remaining director can basically walk away from the business, stop answering the phone etc, and wait for Companies House to strike the company off. If there is no money to pay liquidators, and no assets worth seizing to pay for the liquidators anyway, that's the way it will go.
Generally, though, if the last director dies or goes missing, the shareholders have to nominate a new director - but as I say, more likely they'll just mark the Ltd company dormant and walk away indefinitely. Sadly there seems to be little legal recourse - as any creditor that could be found (Indiegogo maybe, depending on their T&Cs?) would need to underwrite the cost of assigning liquidators, which is just throwing good money after bad given the business has clearly been fully asset-stripped.
I think the only option now is to take the £105 sunk cost on the chin, and make a stubborn point to flag up quickly and loudly any business venture involving this bunch of scam artists in future to try and prevent a repeat performance.
Re: Another product moving to pay forever
AFAIK Hybrid MDM needed an Intune subscription anyway, so actually no cost change there - just moving the management interface out of SCCM (which needs monthly patching for new MDM features) to the always up-to-date cloud management console. Pretty much par for the course.
5 quid says...
That they all shake hands and agree to chill out if Intel agrees not to block Qualcomm's Snapdragon 835 / Windows 10 x86 compatibility special sauce in Q4.
Also - remember that if Qualcomm can demonstrate decent x86 JIT compilation or whatever they are planning on the Windows platform, then you might find Apple suddenly showing an interest as a non-Intel desktop/laptop option that could maintain compatibility with x86 OS X apps for a time?
Is it still a tab if....
There's no indication that it is touch enabled? The video makes no suggestion that this is the case (or of motion controls for that matter).
Both plus points in my opinion - fewer gimmicks for developers to worry about, when they could be focussed on the core game content rather than trying to find a way to utilise a niche input mechanism (as far as console games go).
In terms of the Intel vs AMD argument - I don't really have a preference either way. As a pragmatic Scotsman I'll go with whatever works best for the task at hand... In truth that has meant Intel 99 times out of 100 for the last 10 years - but as soon as AMD have a competing solution worth deploying, I'll deploy it.
With a limit of 10 path-based rules per ALB, the need to proliferate a large number of ALBs (which come with a fixed hourly cost plus a more-complex-than-ELB unit-based usage charge) to cover any sizeable collection of microservices drives the cost up substantially, and makes routing down from the DNS level via R53 more complex as well.
It feels like the ALB product should have been pitched as a massively scalable rules engine, rather than a virtual device with a small number of rules configurable on it. For many containerised "microservices" (which the ALB is firmly aimed at fronting) the hourly standing charge for having an ALB configured could well massively outweigh the cost of the compute workload it sits in front of every month.
I'd rather they used a purely pay-on-consumption model and dropped the standing hourly charge entirely - then you could economically deploy an ALB per microservice (only paying for throughput), and it would still remain cost-effective for light microservice compute workloads.
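To illustrate why the standing charge dominates for light workloads, here's a toy monthly cost model. All prices are assumed round figures for illustration, not actual AWS list prices:

```python
# Toy monthly cost model: fronting N tiny microservices with dedicated
# ALBs vs. packing them under the 10-rule-per-ALB limit.
# Both rates below are illustrative assumptions, not AWS list prices.
ALB_HOURLY = 0.0225      # assumed standing charge, $/hr per ALB
LCU_HOURLY = 0.008       # assumed per-LCU usage charge, $/hr
HOURS_PER_MONTH = 730

def alb_monthly_cost(n_albs, avg_lcus_per_alb):
    standing = n_albs * ALB_HOURLY * HOURS_PER_MONTH
    usage = n_albs * avg_lcus_per_alb * LCU_HOURLY * HOURS_PER_MONTH
    return standing + usage

# 30 microservices, one ALB each, barely any traffic (0.1 LCU average):
dedicated = alb_monthly_cost(30, 0.1)
# Same services packed 10 path rules per ALB -> 3 shared ALBs:
shared = alb_monthly_cost(3, 1.0)
print(f"dedicated: ${dedicated:.2f}/mo, shared: ${shared:.2f}/mo")
```

On these assumed figures the ALB-per-service layout costs several times more, almost entirely in standing charges - which is exactly the consumption-only pricing argument.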
While I love the idea, I've seen far simpler and more technically advanced Kickstarters fall flat.
I'll wait until I see one actually working effectively on proper, wire-wool-esque, senior-developer levels of facial hair before I part with my cash.
Also, if it DOES work it'll be in consumers' hands damn quickly... in some cases before the Kickstarter backers get theirs.
Re: waste of time
Microsoft should be giving away embedded Windows licences and VDI usage permissions for free, as a mechanism to keep corporate users on a Windows desktop platform regardless of how they access it - so they can still sell the OS licences, AD infrastructure and Office tools that bring in all the real money.
With the ongoing shift to web-based enterprise SaaS applications (including tools being built for internal usage by major corporates), MS is increasing the risk that some big corporate CIOs (who aren't under Redmond's thumb in other areas) suddenly take a look at the cost of deploying and maintaining Windows desktop environments in general, decide "f**k it", and use the massive opex savings of binning MS at the desktop level as the justification to work through the pain of rejigging their remaining critical apps into web-based tools.
By the same token - developers building applications for remote delivery are only too aware that non-Windows OSes are now the norm (via iOS, Android), and that web apps are the only realistic way to deliver a platform-agnostic product, because who wants the hassle of designing and coding every version umpteen times? Chuck the UI in HTML5, do all the heavy lifting on a server backend, and be done with it.
If MS's goal was to smother VDI in the crib to protect its stranglehold on the desktop market, then they've largely succeeded - but the world doesn't care, as the need for end-to-end VDI is diminishing right in line with the demand for Windows desktops in general. With more and more users adopting non-MS devices (e.g. the fondleslab) as a primary device at home and work, developers are now building their tools to be inherently agnostic of the platform they end up running on.
MS are winning the battle with VDI, but losing the platform war. Once they lose that fight, they then lose the user-familiarity vector that currently allows them to push the 'money' products (Office, Windows etc) into the stack.
Those expensive CSAs are presumably targeted at enterprises who (for some compliance or political reason) aren't ready to adopt a distributed VSAN architecture? I'm going to go with the good old "financial institutions" piñata here, who are still in many cases using code and processes older than most of their customers.
I'm guessing that somewhere, carved on a stone tablet in the (presumably very heavy) dungeon master's acceptable-risk handbook, is a line that says "thou shalt buy only physical SANs with multiple controllers, and no less than six power supplies, and it shall sayeth EMC or NetApp on the front, for all else is witchcraft and heresy" or something.
I'm also guessing that CSAs are a nice workaround for virtualisation admins - they might be stuck with the stipulated backend storage (at least until the peasants rise up and stick a pitchfork through the dusty old storage manager who has complete control over that side of things) - but they can damn well fling a CSA in front of it, as that's in their domain god dammit, and one day they'll throw off the shackles, deploy a VSAN and cut the dark lords of storage out of the picture entirely. One day...
It makes sense for the big cloud players to start working on real 64 bit ARM options. The vast majority of AWS Linux micro instances could be capably serviced by a modern ARM CPU at greater densities than Intel can deliver.
HP have already shown off the density possibilities via Moonshot (albeit as "enterprise"-grade ARM kit, with a lot of unnecessary guff wrapped around it, and only on 32-bit). Give this to someone who builds their own hardware and platform by cutting unnecessary components (i.e. Amazon, Google, Facebook) and I bet they could cram a LOT of very usable low-end compute into a very DC-cost-effective footprint.
Red Hat have shown off RHEL running on ARM64 now, so ARM servers are certainly coming. I don't expect them to replace x86 at AWS or anywhere else any time soon for many users, but it'd be a great start to diversifying the platform. Good luck to them - working on diversifying one of the few areas (x86-based servers) in which there is currently no real alternative to the industry standard can only benefit us all in the long term.
I suspect it's a reference to average lines of code committed in changes or something equally skewed.
AWS isn't a magic bullet - the costs of hosting infrastructure on the platform versus leveraging private cloud on an in-house hardware setup are significant; Amazon aren't giving it away for free.
Also, there is a rising sense of concern that AWS is a closed-source platform - if you're not careful, it's very easy to paint yourself into a corner and make moving elsewhere a real challenge.
I doubt Amazon would be waxing so lyrical on the topic if they didn't regard OpenStack, CloudStack etc. as genuine fuel for competitors to their de facto dominance in the public and private cloud industries.
Re: Sounds good but...
The clever bit about the Atlantis software (when I looked at it in anger a year or so ago, after seeing it at VMworld) was that it did two things to take the load off the backend disk. Firstly, it did fully deduped RAM caching of disk blocks in the VM host RAM (for both read and write) - the dedupe making it very efficient, as most of the OS image hot blocks are common.
Secondly, it did dedupe, serialisation and compression of any writes back to disk, using an "inline" virtual NFS store - basically it shows your VM host an NFS share (which lives in memory), where you provision your VMDKs as normal, but that NFS share actually lives on an Atlantis VM on each host, which is then synced back to the underlying disk.
The idea is that you use the Atlantis-provisioned storage for your VDI boot images and working drives, and stick your "proper" data on more traditional storage (file server or NAS or whatever), giving you very fast desktops with reliable file storage.
There's a good writeup of the new release (with comments and input from the Atlantis guys) on Brianmadden.com if you want more detail. They've solved the persistent VDI image restriction, and it now supports multiple hypervisors etc. Nice that it's licensed per desktop, and I reckon a per-host version for server VM acceleration is forthcoming soon...
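For anyone curious what deduped block caching means in practice, here's a toy Python sketch of the general idea (content-addressed storage keyed by block hash). Purely illustrative, and in no way Atlantis's actual implementation:

```python
# Toy sketch of a content-addressed (deduplicated) block cache:
# identical disk blocks from many VM images are stored once, keyed
# by their hash. Illustrative only - not how any vendor does it.
import hashlib

class DedupedBlockCache:
    def __init__(self):
        self.store = {}   # block hash -> block bytes (stored once)
        self.index = {}   # (vm_id, lba) -> block hash

    def write(self, vm_id, lba, block: bytes):
        h = hashlib.sha256(block).hexdigest()
        self.store.setdefault(h, block)   # dedupe: keep one copy only
        self.index[(vm_id, lba)] = h

    def read(self, vm_id, lba) -> bytes:
        return self.store[self.index[(vm_id, lba)]]

cache = DedupedBlockCache()
os_block = b"\x00" * 4096          # a common "hot" OS image block
for vm in range(100):
    cache.write(vm, 0, os_block)   # 100 VMs write the same block
print(len(cache.store))            # -> 1 unique block actually cached
```

A hundred desktops booting the same golden image mostly touch the same blocks, so the cache holds one copy instead of a hundred - which is why the RAM caching is so effective.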
Re: RE: comparison with a PC and PC performance
My numbers comparison is just a casual abstraction to illustrate the problem, Mark, as is the example of hand coding in CPU assembler. I don't think anyone expects to dive in and hand code Uncharted 5 in assembler from start to finish (other than for some specific subroutines that genuinely require maximum performance tweaking, compiler-generated code is generally adequate and quicker) ;)
Perhaps a more accurate comparison would be to say that the compilers available for a stable hardware platform can be far more focussed and better optimised on a console than is achievable on a general-purpose PC. Additionally, game engines can be written to take full advantage of the high points, and avoid the pitfalls, of the hardware.
Right now, PC based programming is based on lowest common denominator optimisation routines (as software has to work with a wide variety of present and future hardware, and the easiest way to achieve that is to use computationally expensive abstraction layers) - fixed hardware platforms don't have this constraint, so code can be made to be far more efficient in less time than would be required to get it working half as well on an acceptable range of PC hardware to cover the market.
As the machine remains in the market longer, developers (of both game software and the APIs, compilers and engines used to build them) can focus their time on optimisation rather than rebuilding every time a new GPU generation is released, and wasting time on ensuring backward compatibility and scalable performance options to remain inclusive of users with older hardware. The old argument about how good games released at the end of a console's lifecycle look and perform in comparison to release-day titles illustrates this nicely.
My point is that anyone who assumes a developer is going to take the time to squeeze half the practical performance out of an equivalently specced Windows (or Linux) based white-box PC isn't taking into consideration the commercial challenges this would entail. Consoles have a 6-7 year shelf life nowadays, and it is a testament to the unique benefits of closed-platform optimisation that the Xbox 360 and PS3 can come reasonably close to delivering the gaming experience achievable on a modern PC costing 10 times the price, 7 years after they launched!
RE: comparison with a PC and PC performance
Cross porting may not be as easy as all that.
At present, every PC game relies on high-level APIs to interface with the underlying hardware - DirectX or OpenGL for 3D rendering. These calls are pretty damn inefficient at exposing the true power of the hardware. This is by necessity - the same APIs have to abstract a huge range of physical GPUs from a variety of manufacturers, so this loss of optimisation is to be expected.
You've then got the operating system layer, which operates as a go-between from the code to the hardware, again abstracting to cope with a wide range of hardware variants.
If everyone wrote their PC games in x86 assembler, and their graphics code in the AMD or Nvidia equivalent, we'd see performance an order of magnitude better than we do now. Of course, that's not realistic, as that code wouldn't be portable to the near-infinite number of hardware configuration variants found in the PC world, and I doubt x86 assembly is much fun to work in these days...
Even if the PS4 (and next Xbox) use x86-based CPUs and a variant of the Radeon GPU, they're going to be a single fixed part for the lifetime of the console - meaning Sony can provide bespoke APIs that are much more closely coupled to the hardware, or even provide direct access to the hardware for particularly performance-focussed developers to tinker with and squeeze out the maximum performance. Thus the "real world" performance of a game on the PS4 is going to be extremely good compared to its PC version running on basically the same hardware, which has much of its performance sapped by unavoidable inefficiencies at the API, driver and OS layers.
Speaking to the hardware in the PS4 is like two native English speakers having a chat - quick, simple and efficient.
Speaking to the hardware in a PC is like a guy who only speaks English wanting to talk to a guy who only speaks German - the problem being he has to use an English-to-French translator, then a French-to-Spanish translator, and then a Spanish-to-German translator to communicate every sentence.
Of course - early PS4 games will probably use some familiar, common API layers (OpenGL etc) until devs get time to get to grips with calling the hardware natively, so don't expect miracles from the first generation of third-party software!
Expensive... here's something to try instead.
Yikes - it's frightening that decent server RAM is currently so much cheaper per GB than an all-flash storage array...
Why not try this out - stick your VMware View golden images on an NFS share mounted on a free ZFS RAM drive, with block-level dedupe enabled (for space saving) and synced writeback to persistent storage (for resilience). Witness how many golden-image VMs you can boot from THAT bad boy!
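For a rough feel of why the dedupe matters here, some quick capacity arithmetic - with entirely assumed figures (RAM size, image size, and a guessed 10% unique-block fraction):

```python
# Rough capacity estimate for booting golden-image VMs from a deduped
# ramdisk. All three figures below are assumptions for illustration.
RAM_GB = 192                  # assumed spare server RAM for the ramdisk
IMAGE_GB = 20                 # assumed golden image size
UNIQUE_FRACTION = 0.1         # assume ~90% of blocks shared between images

def max_vms(ram_gb, image_gb, unique_fraction):
    # First image pays full cost; each extra image only its unique blocks.
    extra = (ram_gb - image_gb) / (image_gb * unique_fraction)
    return 1 + int(extra)

print(max_vms(RAM_GB, IMAGE_GB, UNIQUE_FRACTION))   # deduped
print(max_vms(RAM_GB, IMAGE_GB, 1.0))               # no dedupe, for contrast
```

On those guesses the same RAM goes from fitting a handful of images to nearly ninety - hence the "witness how many" challenge above.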
A step forward
Getting a fully functional EQL into a blade chassis is a real boon for Dell's blade lineup. Dual controllers and having the full lineup of disk options (SAS, SSD, NL, hybrid SSD+SAS) make it very flexible.
Regarding the accuracy of the comparison, a P4500 LeftHand pair would be the closest comparison in terms of performance positioning, but the LeftHand stuff lags way behind the EQL lineup in terms of performance, density and simplicity, plus it's rack-mounted - and this Dell-sponsored comparison is about blade-integrated storage options.
The HP storage blades do, however, offer you a way to put a dozen SSDs on the PCI bus of any adjacent blade for maximum IOPS and bus-speed latency - something Dell can't do at this point in time. This is great if you're running high-IO SQL or similar, and want ns (bus) rather than ms (iSCSI) disk access latency to your SSDs.
Practically, I expect that the EqualLogic blade will be a great fit for building out dense DC deployments in a physically resilient manner - but HP have the edge when it comes to building high-performance storage right into the chassis. It's a shame the VSA solution is so horrid. I'm hoping for a baby 3PAR blade for the c7000 at some point down the line.
PS - One VERY important point not highlighted above: the EqualLogic can be slid out of the rack while live to hot-swap dud RAID disks or controllers, as it has a cantilever arm and ribbon cable attaching it to the backplane - the HP blade storage VSA appliance needs to be taken offline to get at the disks in the event of a failure, meaning downtime for your storage just to replace a spinner... worth noting if uptime is important to you.
Re: No direct connect?
It's not surprising that they only bundle "demo" licences really - vSphere Enterprise/Plus licences for a fully loaded example of the (rather lovely) quarter-height dual-socket E5 + EqualLogic blade chassis at the bottom of the article would cost roughly twice as much at point of sale as the hardware itself - and the OEM discounts EMC do on vSphere are crap; there's no escaping the cost.
Now - given that release is slated for August, I would expect Dell to pull a stroke and focus on making the Server 2012 Hyper-V 3 integration nice and slick (they've always done a good job of supporting previous Hyper-V variants on their blades and EqualLogics compared to the competition).
Same capacity - equivalent capabilities (in HV3 anyway), a third of the all up cost (provided you have the skills and management tools to support HV3 etc etc). That's a pretty compelling alternative.
This is very different from the virtualised hardware GPU offered under RemoteFX, or the software 3D GPU offered in VMware View 5.
Essentially, VGX is a low-level instruction path and API that allows a vertical slice of the physical graphics card's resources to be routed through to a VM - by a method similar to VMware's DirectIO, for those who want a read. Basically, the VM has direct, non-abstracted access to the physical GPU, together with all that GPU's native abilities and driver calls - i.e. DirectX 11, OpenGL, OpenCL, CUDA... the lot.
The virtualised GPU in RemoteFX is an abstraction layer that presents a virtual GPU to the VM, with a very limited set of capabilities (DirectX 9-level calls, no hardware OpenGL, no general-purpose compute). Not only does this not fully leverage the capabilities of the GPU, but it is less efficient due to having to translate all virtual-to-physical GPU calls at the hypervisor level.
Contrary to some comments above, VGX is a real game changer for MANY industries - my only hope is that Nvidia don't strangle the market by A) vastly overcharging for a card that is essentially a £200 consumer GPU, or B) restricting competition by tying virtualisation vendors into a proprietary API to interface with the GPU, thus locking AMD out of the market, which is to the longer-term detriment of end users (e.g. CUDA vs OpenCL).
A highly scalable in-house app for data crunching (can't be more specific than that, I'm afraid...) - the important thing is I selected that hardware platform as it was the best fit for the task at hand. Bulldozer might be a lemon on the desktop atm (I don't think anyone could rationally argue otherwise), but I can assure you it was a real fight to get an initial stock allocation of the 6276 2.3-2.6GHz 16-core CPUs (the sweet spot for power draw/price/performance, it would seem), so they must be selling for AMD!
Big Bulldozer boxes
I've just deployed a fully populated blade chassis: 8 quad-socket blades of 16-core Opteron 6276's.
512 cores and 2TB of RAM in about 8U of rack space (up to 30A under load, admittedly).
Under their particular workload (heavily parallel, integer-based, memory-intensive), they absolutely scream when configured correctly. Each 2.6GHz (boosted) core is doing about 75% of the real-world work that a 3.4GHz (boosted) workstation Intel Sandy Bridge core was doing.
The key here is that for that level of density, an Intel solution was totally unfeasible - the cost to load up a blade with four 8- or 10-core Xeons was about 2.0-2.5x the price per blade, and would have delivered the same overall performance at best.
I see a lot of bashing of Bulldozer by people who aren't leveraging them at a decent scale - or who are comparing them thread for thread against Intel's desktop SKUs. The server/DC market is a very different beast, however. Intel Xeon prices (and the associated platform) scale much more steeply than AMD's current offering as you increase core density, so pound for pound the AMD kit is a very realistic option right now.
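The aggregate arithmetic behind that claim, using the rough 75% per-core figure from above plus an assumed quad-socket 8-core Xeon blade as the comparison point (not a quoted config):

```python
# Density arithmetic: individually slower cores can still win per blade.
# The 0.75 per-core figure is the rough one quoted above; the 4 x 8-core
# Xeon blade is an assumed comparison point for illustration.
OPTERON_CORES_PER_BLADE = 4 * 16   # quad-socket 16-core Opteron 6276
OPTERON_REL_PERF = 0.75            # per-core work vs a Sandy Bridge core

XEON_CORES_PER_BLADE = 4 * 8       # assumed quad-socket 8-core Xeon blade
XEON_REL_PERF = 1.0

opteron_throughput = OPTERON_CORES_PER_BLADE * OPTERON_REL_PERF  # 48.0
xeon_throughput = XEON_CORES_PER_BLADE * XEON_REL_PERF           # 32.0
print(f"AMD blade does {opteron_throughput / xeon_throughput:.1f}x the work")
```

So even at 75% per core, the AMD blade comes out ahead on raw throughput - and at roughly half the price per blade, the pound-for-pound argument writes itself.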
Judge it in 9 months...
It's nice to see Microsoft finally taking steps to unify the functionality and deployment of the System Center toolset. All those RCs and betas are no doubt there in anticipation of Server 8 being finalised.
As it stands, my VMware Enterprise Plus licencing (without any Ops Director-type bolt-on) costs me more over a 2-year cycle than the hefty hardware it runs on. I'm paying a fraction of this every month for my Windows DC per-socket licencing via SPLA anyway, and SPLA costs for the System Center suite are similarly minimal.
I'm certainly keen to see how Hyper-V 3 and the rest of the Server 8 ecosystem performs - key additions like a proper virtual switch, port aggregation, thin provisioning etc. mean Hyper-V now meets or exceeds the requirements of most ESX deployments.
The vSphere management interface is very good, but don't forget that System Center is now little more than a GUI wrapper for a whole new batch of PowerShell cmdlets - I don't expect it to be long until third parties start producing superior GUIs built on that fact.
Pound for Pound
I've just priced up some Supermicro AMD blades (Supermicro being very on the ball with getting new tech to market in their boxes).
Basically, I can buy a blade with four 16-core 2.3GHz Bulldozer 6276's for the price of a single (non E-series!) 10-core Xeon.
For workloads that benefit from lots of cores/threads, and which don't incur licensing that makes it worth spending top dollar to max out per-socket performance (some VMware or SQL situations, I expect), these chips will definitely be worth investigating carefully.
They're going to be awesome for VDI in general and cloud VSP reseller scenarios in particular - lots of 'fast-enough' cores you can allocate, and strong memory
It seems Bulldozer was always going to be a server chip. Makes you wonder why they bothered with a retail/consumer version at all!
I bought a new car the other day... it runs on diesel. Sadly, when I went to my filling station, they had run out of diesel, so I filled it up with good ol' unleaded - which my old (though admittedly now rusty, and comparatively unsafe) car ran GREAT on for YEARS.
Wouldn't you know it, my new car runs like a dog, and it's all the manufacturer's fault - why didn't they arrange with my local filling station to have plenty of diesel available for me?
Moral: if your hardware is so old and unsupported that you can't even get drivers for it, then either upgrade the kit, complain to the hardware vendor, or just stick with the old OS. How is any of that Microsoft's fault?
We all loved XP but, like Old Shep, it's time to let it go and stop living in the past.
Perhaps the precursor of a non-iPhone-based editing app in the works? If the underlying hardware and interface on the much-rumoured Apple tablet device is based on the iPhone UI, then who knows...
I can't see current iPhone hardware being much use even for casual video editing... (if Apple intended video creation functionality, then surely video recording would have been an option from launch, never mind 3.0?). The built-in camera is crap anyway, the processing hardware won't cope, and there is no external interface to import footage from the majority of other devices (Bluetooth- and WiFi-compatible HD cameras being a novelty).
Now, a multicore tablet doo-hicky capable of doing draft editing of footage in a director's hands in the field - now that would be cool.
Just speculation though. The artwork is probably totally unrelated to video editing.
Control Methods still a big block.
The three areas where the PC really excels over consoles (for the most part) are FPS, RTS, and MMOs.
Back in the bad ol' days, the PC was the only machine on the block powerful enough to do more than sprite shunting, plus it had lots of "added features" (internet access, network gaming etc). High resolutions and powerful 3D gaming were solely the remit of the beige box. Nowadays, consoles have caught up with the power of the PC (full HD graphics, network gaming, internet access) in these fields.
However, there are two areas that the PC still beats them in easily:-
1) Customisability - most competitively played PC games run custom rulesets, and are easily and widely modded by players (COD4, CSS etc) - given the closed-loop nature of console development, this is difficult if not impossible to implement at present on console versions (with an honourable mention to Bungie for the massive number of Halo gametype options).
2) Control method - why is there no WoW client for Xbox or PS3? Why are FPS games such a pain in the ass to play? Why are RTS games on consoles universally pie? Mouse and keyboard. There is simply no comparison between the most advanced joypad and the cheapest 5-quid Tesco Value keyboard/mouse combo when it comes to FPS, MMO, or RTS games.
Really, as soon as Microsoft and Sony do the decent thing and enforce standardised keyboard/mouse support in all relevant titles, I'll be buying all my software in console format - thus getting the best of both worlds. For the time being, the way I choose to play complex modern titles is still too heavily restricted on console platforms.