* Posts by toughluck

420 publicly visible posts • joined 22 Jun 2009


Tilera throws gauntlet at Intel's feet

toughluck
Stop

Why doesn't Tilera compare their CPU to SPARC?

Interestingly, Tilera didn't seem to show whether the merge-sort holds any advantage when compared to a SPARC T3 CPU. Since the T3 is also made on 40 nm, the comparison would make sense in that regard, especially since both CPUs are RISC designs.

Itanium's future: Users believe Intel, not Oracle

toughluck
WTF?

Amazing...

Back when Itanic was being actively backed by Intel and major software vendors, its notoriety as a triumph of marketing over CPU design, for the sole purpose of bringing the general-purpose CPU market under an Intel monopoly, was pointed out and rightly criticized by all major tech sites, el Reg included.

Then, one company after another stopped supporting it, including Microsoft and Red Hat, and Intel was ostensibly lukewarm towards continued development of Itanium, but all remained fine and well, and attacks on Itanic continued.

Now, when Oracle finally pulled the trigger and announced termination of development for Itanic, everyone is suddenly rushing to Itanium's defense (and to bashing Oracle)? Come on! I know that the general attitude of most sites is that Oracle is a greater evil than even Microsoft, not to mention hp or Intel (which, bafflingly, is still presented in a good light despite its uncovered monopolistic practices), but it's gotten ridiculous at this point. Neither Microsoft nor Red Hat was ever subjected to a fraction of the criticism that's being leveled at Oracle.

Itanium was never a good chip, plain and simple. Intel has shown an ambivalent attitude towards it in recent years, and I can hardly believe it will retain any edge over x86, much less a significant one. Coupled with dwindling market share, this was expected. But to defend Itanium all of a sudden? I'm baffled.

We really need an angelic/demonic Larry icon...

Ellison grilled on $4bn SAP 'theft' claim

toughluck

Let me clarify a few things

First of all, those customers did not migrate off of Siebel or Peoplesoft. The original assertion was that those customers migrated from Oracle support to SAP support, but did not change the software (which would be ridiculous otherwise and Oracle would have no case there).

If you are considering damages, it does matter how many customers have been lured away. However, the number of customers lured away is inconsequential (it could very well be zero) when it comes to determining whether SAP was guilty of IP theft or not.

Ellison did admit that the worst case scenario was averted, but you can't claim 300 customers (out of 300 thousand, give or take 299 thousand) is just a small bunch. The largest customers make up the bulk of the contract value.

Everyone but Oracle demands Java independence

toughluck
Stop

He does make a lot of sense

The anti-Oracle tone I see all over the press would be ridiculous if it wasn't so damaging to Oracle. Sun was going down and nobody seemed to care what would happen to the IP if it went down completely. IBM certainly didn't care, but Oracle did. They bought out Sun with cold, hard-earned cash, and it was obvious then and is as obvious now they want to extract as much value out of Sun as possible.

The first such burst of negative publicity was over OpenSolaris's demise. That was fair game, though, especially as OpenSolaris appears not to have lived up to its potential: most code donations came from within Sun, not from outside developers, save for small pieces.

Then, recently, it's about OpenOffice -- true, at least that was called out back in the Sun days, but the fork still appears to have been done haphazardly and certainly without proper funding. I doubt that without corporate backing (especially monetary), LibreOffice will get anywhere. They'll probably try to fly back under Oracle's wing before 2011 is over. I may be wrong, it depends on how hard they want to drive the point, but I'm fairly sure there will be a lot of stagnation in development in the meantime.

And now about Java -- what's wrong with the roadmap that Oracle laid out? Nothing, apparently, apart from the fact that Oracle was the one that laid it out and Oracle is against Google, which automatically makes Oracle evil and all their decisions null and void?

Is it really bad that Oracle tries to recover the money they spent on Sun?

Oracle hates discs, loves tape

toughluck
Thumb Up

He does make a lot of sense

1. When flash reaches high enough capacity for home use at low enough prices, the market will slowly abandon spinning drives. With less traditional drives sold, losing economies of scale will slowly hike the price of HDDs, closing the gap even further in a positive feedback loop. As consumer drives go up in price, so will enterprise drives. This does not affect tape, which was always niche compared to disk.

2. Bit density on tape still has ample room to grow. A T10K cartridge holds a tape surface area of about 75,000 cm^2. Compare this to about 456 cm^2 maximum for four-platter 3.5" disks (I'm assuming 3.5" platter diameter with a 1" diameter hub). The bit density of T10KB-formatted tape is about the same as that of a 6 GB disk, so there's ample room for growth. At the bit density of a four-platter 2 TB disk, a typical (4x5x1") cartridge could hold over 150 TB of data.
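The back-of-envelope numbers above can be sanity-checked with a short script. The areal figures (75,000 cm^2 of tape, 3.5" platters with a 1" hub) are the estimates from the text, not spec values, and I'm assuming the disk in question is a four-platter 2 TB drive. The naive capacity comes out well above 150 TB, so the comment's figure is conservative (real formatting overhead would eat into it).

```python
import math

# Rough geometry estimates from the text (assumptions, not spec values).
# 3.5" drive: 4 platters, 8 surfaces, ~3.5" platter diameter, 1" hub.
platter_r_cm = 3.5 * 2.54 / 2            # ~4.445 cm
hub_r_cm = 1.0 * 2.54 / 2                # ~1.27 cm
surface_cm2 = math.pi * (platter_r_cm**2 - hub_r_cm**2)
disk_cm2 = 8 * surface_cm2               # ~456 cm^2 over 4 platters

tape_cm2 = 75_000                        # estimated tape area in a T10K cart

# T10KB holds 1 TB native; its areal density matches a 6 GB disk.
t10kb_density = 1e12 / tape_cm2          # bytes per cm^2
disk_6gb_density = 6e9 / disk_cm2

# At the areal density of a four-platter 2 TB disk, the cartridge holds:
cart_capacity_tb = (2e12 / disk_cm2) * tape_cm2 / 1e12
```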

3. T10KB has 240 MB/s native throughput, not 120 as in the article (that's the throughput of the original T10K). A 20 TB cartridge would store data at 20 times the areal density. Assuming 144 tracks (compared to the 36 of T10KB), linear bit density is 5 times higher, so 1.2 GB/s throughput should be achievable. Assuming 100,000 slots means 10 connected SL8500 libraries with 64 drives each, the quoted 1,380 TB/hour translates to almost precisely 600 MB/s per drive (given rounding, the difference is insignificant).
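Both throughput claims check out numerically; a short script using the figures from the text (assumed, not vendor specs):

```python
# Throughput cross-check, using the figures quoted in the text.
t10kb_native = 240e6                      # bytes/s, T10KB native throughput
tracks_t10kb, tracks_future = 36, 144
density_gain = 20                         # 20 TB cart vs 1 TB = 20x areal density
# Areal gain = track-count gain * linear-density gain:
linear_gain = density_gain / (tracks_future / tracks_t10kb)   # 5x
future_throughput = t10kb_native * linear_gain                # 1.2 GB/s

# 100,000 slots ~ 10 SL8500 libraries x 64 drives = 640 drives.
drives = 10 * 64
per_drive = 1380e12 / 3600 / drives       # bytes/s implied by 1,380 TB/hour
```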

4. As opposed to LTO, Storagetek drives maintain backward and forward compatibility, with the same cartridges usable on various generations of equipment (based on the formatting), regardless of technology or format changes in between. It can be expected that the T10K cartridge will be usable on T10KC or T10KD drives, depending on their underlying technology. Of course, Fowler may have meant 20 TB compressed capacity, which makes it perfectly viable -- 10 terabytes native in 2015 seems almost like a breeze. Assuming a 2 TB T10KC is released before May 2011, and a 4-5 TB T10KD in 2013, a 10 TB T10KE is certainly possible in 2015. 20 terabytes native is significantly more involved and would probably require Storagetek to break backwards compatibility.

5. At some point flash may become significantly cheaper (although it's doubtful that progress would be notably faster than Moore's observation suggests, though 3-bit MLC could let flash overtake Moore's pace, as could the 3D cells suggested by some people), and tape storage would then be on the way out, possibly replaced by switched SATA/SAS in a MAID (zero spin-up time could make that practical). This of course assumes that the high-density storage is indeed cheaper to make, and that there will be people willing to pay for a lower tier of SSD storage (slower, but higher capacity and/or significantly cheaper).

Watchdog calls for Google break-up

toughluck
Thumb Down

That's just fantastic

I just love this double standard. On one hand, you are urged to be successful in what you do. On the other hand, if you're "too" successful, the likes of those watchdogs will want to punish you for trying.

This is ridiculous. Google built an empire of its own from scratch, pretty much without any competition. Now that their business model is proven and successful, freebooters want to simply copy them and get rich in the process. But one after another fails, and then blames Google (rather than their own ineptitude and lack of originality and distinguishing features) for that failure.

Dell flogs its 'zero client'

toughluck

So it's a Sun Ray

Only it consumes 3-6 times the power and requires expensive software to run?

AMD cuts to the core with 'Bulldozer' Opterons

toughluck

Hmmm..

Clouded Leopard... How... Prophetic.

Intel Larrabee letdown leaves HPC to Nvidia's Fermi

toughluck
WTF?

Oh sir, you kill me!

> The delayed entry of Intel's Larrabee and the dead-ending of IBM's Cell

> (at least on blade servers) gives AMD's Firestream GPUs a better chance

> against Nvidia's technically impressive Fermi family of Tesla 20 GPUs.

Technically, they're not impressive -- they don't exist (fake cards don't count, and 7 chips do not really make volume production).

As they don't exist, you can't really bench them.

http://sisoftware.co.uk/index.html?dir=qa&location=cpu_vs_gpu_proc&langx=en&a=

The 5870 was already benched by SiSoft to be 8.8 times faster in double-precision FP than the 260 GTX. Even assuming Fermi is 8 times faster than the 260 GTX, it is barely going to be on par with the 5870 (and more realistically, we can assume it will be 4-5 times faster than the previous generation).

Given that Fermi is going to be a huge part, it is going to have power issues as well, likely drawing even more than current Tesla cards, which already draw 10 times more than ATI's parts -- and the 5870 is rather frugal. Needless to say, this isn't going to earn them any top spots in the Green 500.

You need error correction? Run two 5870 cards beside each other (or one 5970) and compare the results. It's still going to be cheaper than Fermi.

> The Fermi chips will be available as graphics cards in the first quarter

> of next year and will be ready as co-processors and complete server

> appliances from Nvidia in the second quarter.

Oh, really? With the slips they've suffered over the last year, they'll be glad if they can put *anything* on the market before they run out of assets. Nvidia has nothing to compete with ATI in the GPU market; Fermi is a huge die and is going to be too expensive to interest gamers if they can get two Radeons for the price of one GeForce (unless Nvidia decides to shoot themselves in the foot and sell below their margins).

> And they will likely get dominant market share, too, particularly among

> supercomputer customers who want to have error correction on the GPUs

> - a feature that AMD's Firestream GPUs currently lack.

Assuming that they can actually put anything on the market, that is. While adding error correction is not a simple matter, I think AMD can do it within a reasonable time frame, and with Nvidia lagging behind, it would be foolish to think AMD has nothing on their roadmaps.

Apple wins attack of the clones

toughluck
Jobs Horns

Pretty much the same as happened in the 80s

When it was about PCs, back in the 80s and 90s, the courts blocked the PC clone industry and gave IBM the sole right to manufacture PCs. Therefore, I am writing this comment on a genuine IBM PC...

Hmm, wait...

Blade servers are hot!

toughluck

Sun

Sun offers up to 32 DIMM slots on their blades (4-socket Opteron X6440 and 2-socket T2+ T6320). With Nehalem, it's either 18 or 24 DIMM slots (4-socket X6275).

AMD desktop rejig: six-core 'Thuban' set for Q2 2010?

toughluck

What will AMD do

@Matt -- don't worry. People use home PCs for four things:

1. Basic office work

2. Web browsing

3. Entertainment

4. Limited content creation

Of course, number one hardly needs more than one core. Number two can benefit from two of them (browser + Flash), number three will benefit from more cores (more and more games are springing up to take advantage of multiple cores). And number four is going to take advantage of all the cores your PC can muster.

And multiple cores make for a future-proof investment -- most applications are written to take advantage of multithreading and it is the current paradigm. Even if a dual-core CPU is fine today, it may not be enough in a month or two. If the current HD movies can tax one core if you don't have hardware acceleration, future movies might tax four of them. Sucks to be you if you don't have multiple cores then.

And multiple cores help out a lot in normal usage, too. With more of them, you can browse, listen to music and run Flash, and nothing in the background (including, e.g., virus scanners) will disrupt your work.

Six might be overkill at the moment, but people will eventually find a use for them. And individual cores can idle quite nicely; the Windows 7 scheduler is supposedly aware of idling cores and will not assign a workload to one if doing so would hurt performance.

@h4rm0ny -- AMD wouldn't shoot themselves in the foot like that. They need a well-rounded lineup, and new parts should be forthcoming.

@Gary F -- I read online that AMD is well able to create their own i5/i7 equivalent, the problem being price given AMD's current market share. Magny-Cours and Sao Paulo are supposed to close the gap, and if Intel screws up -- 1) by focusing too much on the integrated GPU in Sandy Bridge, or 2) if 32 nm ends up too expensive to manufacture and as a result offers no tangible benefit at its price point* -- Bulldozer might very well end up faster than future chips from Intel.

*) And it's not far-fetched, either -- analysts point out that 32 nm might be too expensive to manufacture, especially at first.

West Antarctic ice loss overestimated by NASA sats

toughluck
Boffin

So, where's the *WEST* Antarctic?

I thought that was one big continent, centered on the South Pole and all the coastline was in fact facing NORTH.

While people can traverse East and West on the Antarctic, you won't make it to the shore if you move in those directions.

Desktops are seen as unimportant until...

toughluck
Paris Hilton

Thin clients are nice

And they work. They work especially well at discouraging users from browsing YouTube.

Of course, thin clients do wonders for the bills (power, but also air conditioning), and the savings scale very nicely with the number of users.

As for thick clients, you can always anticipate problems. Hence monitoring software. Most system vendors provide free tools which can log system events and forward them to a central repository (via SNMP traps or, more commonly, e-mail notifications).

So if any component starts operating only marginally, support people get a heads up on the problem.

Now, it's a wholly different problem to persuade management (or the beancounters) to actually okay system repair costs, so the support personnel can either risk their budget and run preemptive repairs, or wait until the part breaks -- at least then they'll magically know which part needs to be replaced.

Paris, because she knows the difference between thick and thin.

Oracle cuts database tags for Sparc T2+ servers

toughluck

Databases and IO

Yes, databases are IO-intensive, and that's where Sparcs shine. I know I simplified (maybe oversimplified) the issue, but it boils down to the same thing. Database queries are easily threadable. Sparcs can switch out of a stalled thread (regardless of what the thread is waiting for), and while it's switched out, other threads execute.

The thread does not have to wait for the IO and stall in the traditional sense (which would cause the CPU to idle); this mechanism lets other threads move forward, and the CPU switches back to the stalled thread once its data is available.

What I meant by a rarely accessed dataset is that there won't be threads able to move forward while other threads are stalling, so every CPU is going to depend completely on the IO.
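A toy model makes the switching argument concrete. This is an illustration of latency hiding in general, not a SPARC simulator: a single-issue core runs whichever thread has its data, and each unit of work is followed by a fixed memory wait (the function and parameter names are my own).

```python
# Toy model of latency hiding via hardware thread switching.
# Each thread does 1 cycle of work, then waits `latency` cycles on memory.
# The core stays busy as long as at least one thread has its data.
def utilization(threads: int, latency: int, cycles: int = 10_000) -> float:
    ready_at = [0] * threads   # cycle at which each thread's data arrives
    busy = 0
    for cycle in range(cycles):
        for t in range(threads):
            if ready_at[t] <= cycle:               # thread has its data: run it
                busy += 1
                ready_at[t] = cycle + 1 + latency  # issue next memory access
                break                              # single-issue core
    return busy / cycles
```

With a 50-cycle memory wait, one thread keeps the core about 2% busy, eight threads about 16%, and 64 threads essentially 100% -- the stalls are still there, they just stop wasting cycles.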

Now, I won't go into SPECint or SPECfp; I don't know the figures for pretty much any of the CPUs on the market, so I've got no idea what I could prove with them, or what you could.

toughluck

@Ian Michael Gumby

> Just because a core has 8 threads per core, it doesn't mean that

> the performance of Oracle on the chip will increase significantly or

> that it can be tuned to take advantage of the extra thread.

> The current round of database designs are not parallel enough to

> take advantage of these extra threads. While Sun wants to say that

> a core w 8 threads is really like 8 virtual cores or 4 virtual cores, that

> doesn't translate to 8 times or 4 times the performance boost over

> a core and a single thread/ double thread.

Ummm, actually it does. Databases are one of the few types of applications that scale almost linearly with the number of threads. Each query can be (and usually is) set up as a different independent thread.

Databases are also memory and storage dependent. As database queries are (usually) random, there is no way to avoid heavy memory use, and efficient use of available memory bandwidth is that much more important.

Sparcs really shine there.
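A minimal sketch of why this kind of workload parallelizes so well. The "queries" below are hypothetical stand-ins (a sleep for the I/O wait, a trivial computation for the result set), but the shape is the same: independent requests whose waits overlap almost perfectly across worker threads.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Each "query" is independent and spends its time waiting on I/O,
# so a pool of worker threads overlaps the waits almost perfectly.
def run_query(q: int) -> int:
    time.sleep(0.05)          # stand-in for disk/network wait
    return q * q              # stand-in for the result set

def run_all(queries, workers: int):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_query, queries))
    return results, time.perf_counter() - start
```

Sixteen such queries on sixteen workers finish in roughly the time of one query; run serially they would take sixteen times as long.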

> There would have to be a major overhaul of Oracle to really scale

> and take advantage of these cpu advantages. Until more of the major

> chip vendors move to a similar architecture, there is little incentive for

> a major RDBMS house to make the effort to change the infrastructure

> to take advantage of these chip advances.

Maybe, no and no.

Maybe an overhaul of Oracle is required.

No, chip vendors will not pick up Sparc, as they would need to divert resources from their other designs, nor is it actually necessary.

And no, Oracle will likely own Sun soon and this gives them incentive to provide any and all necessary improvements or enhancements.

> In short, you may be better off purchasing a cheaper cpu and

> bring down the cost per transaction, than spending $$$ for

> the additional horsepower you can't use.

Maybe, but only for small, maybe medium, databases. Sun T iron is not too expensive compared to the competition -- for what they are worth, benchmark results show it's vastly cheaper than POWER iron and more or less on par with x64 (when comparing bang per buck), and their running costs -- especially power and cooling -- are much lower than systems at comparable prices. Now you also have lower licensing costs. This all translates to much lower TCO for Sparcs and Oracle will not really lose anything on that.

> Maybe this is why they're cutting their prices?

They are cutting the prices to be more competitive. Using Sparcs for databases was, and still is, overlooked by most datacenter owners, even though the pace has been slowly picking up since the T2+ was introduced.

toughluck
Pint

They are taking competition quite seriously

> Itanium chips were originally at a 0.75 scaling factor, by the way,

> but were reduced at some point,

Well, they were, and reduced, too, because everybody in the business believed Itanium was going to be the next big thing rather than the Itanic.

Unfortunately, while Itanium is a nice all-round CPU, it isn't really good for database work, unless the database is a rather small, rarely accessed dataset (in which case it sucks just as much as any other CPU).

> and despite the large number of cores in modern x64 chips from

> Intel and AMD (four or six), Oracle has not been tempted to raise

> the scaling factor here. It will be interesting to see what Oracle does

> when AMD crams 12 cores in a socket and Intel starts cramming in

> eight cores.

Nothing will happen. I know AMD is going to make Magny-Cours a multi-chip module (MCM); is that also true of the 8-core Nehalem? I have read many conflicting reports on that.

Note that Oracle still has an MCM clause regarding IBM Power CPUs, where they are licensed at 2x the cost per socket (treated as the two CPUs they actually are rather than one package). I would expect Oracle to use that clause against AMD's and Intel's upcoming chips.

This might make T2+-based machines really nice Oracle boxes, given that they are already well suited to that kind of workload.

There were some interesting comments in the last round of SPARC-bashing in the linked article. I would just like to correct some statements made by Matt in that discussion:

1. Memory bandwidth does not make up for memory latency -- idle cycles are lost regardless of whether memory serves gigabytes or terabytes per second. Database queries are rarely larger than a few kilobytes, but the latency prevents that data from reaching the CPU quickly. If you have a few cores and all have to wait on a random query, they will stall. A Niagara will stall too, but instead of 8 or 16 threads to switch among, it has 64, so there is almost always another thread ready to run. Small cache has nothing to do with it: from the point of view of a single thread (assuming threads are switched out whenever they stall), the memory latency can appear to be as little as one cycle.

Oh, and the cache of the T2 was enlarged compared to the T1 only because you need to retain more data for more threads. That's quite elementary. If Matt's argument for more cache held any water, Sun's microarchitects would have had to at least double the cache to keep pace with the 2x increase in the number of handled threads; instead they increased it by a measly 33%, from 3 to 4 MB.

By the way, as for the bandwidth, a T2/T2+ chip has four DDR2 controllers on-die. That gives more bandwidth than two or three DDR2 controllers and only 33% less than three DDR3 controllers on-die, so the Niagara chips are definitely not starved for memory bandwidth.

2. DDR3 memory might not be faster than DDR2 memory in some workloads. DDR3 memory might have a CAS latency (CL) of 7 or 9 cycles, whereas typical DDR2 memory has a CL of 4 or 5. DDR2-800 CL4 is faster for small random queries than DDR3-1600 CL9, even though it has far less bandwidth.
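The arithmetic behind that claim, under the simplifying assumption that first-word latency is just CL cycles at the I/O clock (ignoring tRCD/tRP and the rest of the timing chain):

```python
# First-word latency for a random access: CL cycles at the I/O clock.
# Simplified model; ignores tRCD/tRP and other timing parameters.
def cas_latency_ns(transfer_rate_mt: float, cl: int) -> float:
    io_clock_mhz = transfer_rate_mt / 2   # DDR: two transfers per clock
    return cl / io_clock_mhz * 1000       # nanoseconds

ddr2 = cas_latency_ns(800, 4)    # DDR2-800 CL4  -> ~10 ns
ddr3 = cas_latency_ns(1600, 9)   # DDR3-1600 CL9 -> ~11.25 ns
```

So despite twice the bandwidth, the DDR3 part waits longer before the first word of a random access arrives.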

3. If your thread stalls, it doesn't matter whether you have a 64 MB cache or a 64 KB cache. The CPU does not work on large sets anyway -- two or three 64-bit operands at most per cycle, which with 64-bit instructions adds up to 256 bits, or 32 bytes. Some SIMD instructions will take more data, and some data may be larger and processed across multiple cycles, but a small cache is never a hindrance if the CPU is waiting on memory. When a random database access comes, the CPU will not have the data cached (by the very definition of random data). If the CPU waits, say, 50 nanoseconds for the data, it can either idle (as most CPUs do) or switch to a different thread (as Niagara, Nehalem and some NetBurst chips do). Nehalem and NetBurst cannot switch more than once, but Niagara can switch 14 more times, and when the data arrives it can switch back to the requesting thread in an instant, or cache the data and wait for the thread. After that random data is processed, it doesn't need to be kept in cache anyway.

4. As for Rock: while it's sad that Sun will not be releasing that CPU, they did not revise their roadmap as much as has been suggested. Rock was to stay on the market for only two or three years (which is ludicrously short for an enterprise CPU), and the improvements it introduced were to be incorporated into the new VT core of all future Sun CPUs rather than keeping Rock as a separate family.

To the best of my understanding, Sun has agreed with Fujitsu to not duplicate effort, leaving the general-purpose Sparcs to Fujitsu as their SPARC64 line.

Ubuntu's Karmic Koala opens its eyes

toughluck

Debian does hybrid suspend/hibernate

On Debian 5 (not sure about other distros, but Mandriva 2008 and earlier did not have this), there is s2both, which does what hibernation would (i.e. save state to disk), but instead of powering off, it suspends to RAM.

The downside is that the system takes its sweet time to go down (as long as hibernation would). The flipside is that it comes up as fast as it would from suspend-to-RAM, and if you lose power, the system does not do a full boot but returns from hibernation instead.

I have to say, that is the best of both worlds, isn't it?

Microsoft's web world shrinks

toughluck

There's 3.13% left

What would the other browsers be? Mozilla? It could be included in Firefox's market share. Netscape? Is there anybody still using it? lynx/links/derivatives? Hardly...

Shudders... Opera?

Opera Software reinvents complete irrelevance

toughluck
Thumb Down

OU is not meant to be a webhost, for f***'s sake

A lot of people here assume that people will use or try using Opera Unite to:

1. Host illegal content.

2. Make it available to a high number of people.

3. Run an advanced web server (i.e. dynamic pages).

4. Run services on a high-availability, high-traffic, 24x7 basis.

While I firmly believe that the majority will use it in order to:

1. Share personal pictures and videos.

2. Make it available to friends and family only.

3. Run basic content frontends.

4. Make their shares available while they're online only.

And as such it is an incredible idea. You can share pictures almost instantaneously (without posting them to Facebook or sending them via e-mail), one at a time (instead of sending an entire bundle).

Some say it's a ridiculous idea. Wanna bet how long it takes Firefox developers to start bundling an Apache-lite server along with their suite? It's going to be the fourth element in their happy circle of apps; a Groundhog, maybe.

@Jason Croghan

> As for this piece of dirt they're calling a revolution - have any of you knumbnutz actually

> considered leaving this stuff running on your parents/grandparents/daughters/sons

> machines constantly?

It's not meant to be running constantly. I assume people will share stuff when they're online and turn their machine off once they're done for the day.

> IIS comes with Windows yet none of you are using it, the question begs why not if this

> Opera bullcrap is getting you so aroused?

First, IIS comes only with the Professional (2000 and XP) and Premium (Vista) editions. Second, setting up your own IIS (or Apache, for that matter) server and adding a nice interface to it is difficult enough for most people. Setting this up with Opera is easy.

Plus, it's free (yeah, so is Apache; watch how many people actually download and run it, and refrain from jumping in to save them from their folly).

> You turn off the computer for a week while

> you go on vacation and all of a sudden you're New Zealand relatives can't see your

> cute puppy doing summersaults.

So what? Once you're back, they'll see it again.

> Then there's the biggest reason your mom shouldn't be sharing files:

> http://www.theregister.co.uk/2009/06/19/copyright_victory_rich/

Oh, I shudder at the thought that I will have to pay millions for sharing pictures and videos I shot myself. You think I should pull down my galleries, or risk running them for my family to see?
