This could be a rather big deal in oppressive regimes...
who like to spy on their citizens, such as Syria, China, Russia, or the United States.
Peter, I agree with your post entirely. In purely technical terms, the article is solid. However, it's light on the 'what does this really mean to me' angle.
I am a US TMo customer with a '4G' HSPA+ handset (HTC/TMo G2). I am also in a market receiving the 42Mbit light-up today. Prior to today, with a full-strength signal (bars -- I don't routinely look at signal dB), I would consistently get 3 to 4.5 Mbit, which is faster than the real-life speed of any 3G format in my city.
The speeds I ultimately see will no doubt be below the official 4G specification, but I think you were trying to say: so what?
If it's comparable to wired speeds, we're really on to something. Now, about those terribly low data caps that make it difficult to do a whole lot with this speed... Hrrmm.
Oracle 11g... yes indeed, for a while, unless Larry decides to double the license cost again. (Which conveniently doubles the already excessive maintenance, too.) Never mind that you won't get a version beyond 11g.
MS SQL IA64? Sure, if 2005 suits your fancy. Neat features you want in 2008 R2? Oh, drat.
Sybase... bwahaha. Sorry, I couldn't actually write a proper sentence about that.
DB2 it is, then. And why are you running DB2 on anything besides AIX??
Title says it all. I realize the number of places in the US where you can get 21Mbit HSPA+ is severely limited, but this is my question.
Why invest in LTE at all if the consumer experience already seems to be there with HSPA+? Presumably TMo built its HSPA+ capability for far less than a conversion to LTE would cost. Is gaining access to this technology AT&T's real goal, and if not, why shouldn't it be?
And I realize that 21Mbit isn't enough for 'tomorrow', but HSPA+ looks set to cruise to 84Mbit and beyond, supposedly to a theoretical max of 672Mbit. That sounds pretty good to me, despite LTE Advanced talking up some 3Gbit+ future state.
EMC wants to create/reinforce the idea that EMC is 'fast' and doesn't care that customers will purchase spinning-disk configurations that probably cannot deliver even a tiny fraction of this level of performance.
I repeat: they don't care.
It's the same as base-model and 'sport'-model cars. One of them is there to deliver high performance, in many cases well beyond what the owner/driver can extract and certainly beyond legal use of public roads, and the base model is there to capitalize on the image of the car at a lower price point.
EMC wants to sell VNXs, and it would ideally have customers who naively believe that they are 'better' and 'faster' than the competition. Look at the SP utilization on an NS-480 or -960 and you'll see why I don't think the VNX's massively more powerful processors are about crushing benchmarks with SSD. I think those powerful processors are completely necessary to be able to use thin provisioning, automatic storage tiering and, someday, block-level dedupe while running primarily spinning disk.
They should be set with 3PAR. However, Dell wisely (and obviously) sees Compellent as the best-of-the-rest. Pillar isn't going anywhere except maybe straight into Oracle. Xiotech would've been good to buy 5 or more years ago.
Who else besides Dell would bid for Compellent? They might be able to get it at a real bargain.
I'm curious if any custom functionality will result out of this partnership.
As it sits today, CV can use snapshots from most any array to get the job done. NetApp just happens to be very good at snapshots.
What's weird to me is that a customer who is hoping/trying to do array-based snapshots on a large scale for backups will probably figure out on their own that they should look at NetApp once they discover their current midrange array isn't up to the task of a few hundred concurrent snaps. No need to mention names? :)
So, in the absence of custom functionality... I rate this one a 'big deal'.
The EVA was indeed groundbreaking when Compaq invented it. Check your calendar; that was over a decade ago.
The current-generation EVA is way behind competitors in speeds & feeds, and last I checked it still doesn't have truly online firmware updates. (Asking customers to increase timeouts to the moon doesn't count.) The iSCSI implementation is horrid, and NFS and CIFS are non-existent. It is a thoroughly unimpressive midrange array and only fools are buying them. Even bigger fools are those who buy the XP.
1. Dump the XP line ASAP -- cede those who are locked in to HDS and sell everyone else 3PAR.
2. Get everyone in the highend and midrange on 3PAR-based arrays. They can't scale like an XP, but there are fewer customers for 'platinum' arrays every year. 3PAR will serve the market fine.
3. Lefthand is working fine. Don't screw it up. :-P
4. Buy DotHill. HP is selling a boatload of MSA 2000 arrays OEMed from DotHill, and it is a compelling low-cost SAN and DAS. So quit screwing around and just buy DotHill already.
Granted, we're lacking details here, but it also seems possible that this man has failed in his personal life to the same extent he failed in his professional one... in which case it may be well justified that he doesn't see his son with any regularity.
Sure, he could also be a former Dad-of-the-Year who has been wrongfully distanced from his progeny.
But if it walks like a duck, and quacks like a duck...
Fully expected a Matt Bryant post here and while I'm glad not to be disappointed, that wall of text is several times more than I'm interested in reading. Let me see if I can make my point a bit quicker...
I believe that HP is planning to release a 2-socket Nehalem-EX later in the year. I have no information as to why this is, but I am happy to speculate.
The Nehalem-EX is launching with low clock speeds relative to the Westmere-EP. It is a much more expensive processor, so HP could be waiting to make sure it is a commercial success before launching a 2-socket offering. A server that costs a lot more than a DL380 G7 but underperforms for most workloads will not be overly popular.
I want to like the AMD 6100, but the clock speeds are too low, and some early test reports seem to confirm exactly what I was concerned about: parallelism be damned, it's simply slower than the Intel 5600 series for most workloads, particularly those that only need 1-4 cores. The AMD might hit some sweet spots for databases that can utilize parallelism; it does have more memory slots per socket than the Intel 5600, which makes larger memory footprints more affordable.
But I'm betting on Intel to continue to own the 2-socket performance crown and to decisively capture 4-socket as well. I don't particularly like Intel, but I buy what works. Like I said previously, I'm not sure what the value proposition will be for the 2-socket Nehalem-EX. Once they get it to near 3GHz, then I'm interested. I've got too much stuff that needs one really damn fast thread. Sorry.
"It's not showfriends, it's showbusiness."
These analyst estimates seem extremely optimistic.
Most applications and databases cannot fully take advantage of the speed promised by flash-based SSD due to bottlenecking somewhere-- usually code efficiency but sometimes CPU or memory. (Memory bottlenecking is common for 32 bit apps.) It takes time to recode things, especially when you're waiting on a vendor to do it.
15k drives will be marginalized, but 7.2k SATA drives are going to be around for long after SSD goes mainstream.
The existing 60GB hot-plug 3Gb/s SATA drive goes for around $1,400, and $3k+ for the 120GB version.
Why are they so expensive? And why do they only have a 1-year warranty? There are no moving parts and a lot less heat generated, so you would think an SSD would fail a lot less often than a spinning hard drive.
Totally unimpressed by this piece of news.
..Yet you didn't even complain about the keyboard! I'm one of those must-have-a-physical-keyboard nutters, and I don't think I could live with a Droid due to the tiny keys with no detent and no spacing.
There's nothing you can do about #6 right now.. every smartphone on the market is a battery hog, unless it's dog-slow. And even then they're still total hogs compared to dumbphones. If you hear someone say they only charge their smartphone every 3 days, it's surely a Blackberry that never leaves the holster.
Rabid fanbois and ISVs are the only ones buying NVidia graphics cards right now. They don't have a single new product that compares favorably with ATI options at any price point.
Generally when you release a new product in a competitive field you should have the best product or best value for some period of time. Or at least best price, right?
I hope they know what they're doing with Tegra because their graphics card business shows no signs of recovery.
This is one image to manage. How much labor (or, more likely, software licenses plus labor) are you investing in managing a bunch of PCs with separate hard drives?
Building new machines, installing apps, patching, and so on.
I'm not saying this particular product is anything great, or more worthy of investment than Citrix or VMware, but that's the math you would actually use to help determine if it were.
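To make that concrete, here's a back-of-the-envelope sketch of the comparison I mean. Every figure below is invented purely for illustration -- substitute your own headcounts, rates, and license costs:

```python
# Hypothetical TCO comparison: one shared image vs. per-PC management.
# All numbers are made-up placeholders, not vendor figures.

num_pcs = 200
admin_hourly_rate = 60          # USD/hour, assumed

# Traditional PCs: each machine is built, patched, and repaired individually.
hours_per_pc_per_year = 6       # imaging, patching, break/fix (assumed)
traditional_cost = num_pcs * hours_per_pc_per_year * admin_hourly_rate

# Shared-image setup: labor concentrates on one golden image,
# plus per-seat licensing for whatever brokering software you choose.
hours_on_golden_image = 120     # assumed annual effort on the single image
license_per_seat = 50           # USD/seat/year, hypothetical
shared_cost = (hours_on_golden_image * admin_hourly_rate
               + num_pcs * license_per_seat)

print(f"Traditional PCs: ${traditional_cost:,}")   # $72,000
print(f"Shared image:    ${shared_cost:,}")        # $17,200
```

Whether the shared-image column actually comes out ahead depends entirely on what you plug in, which is exactly the point.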
They've absolutely, positively got to have a credible consumer-focused tablet available for the 2010 holiday season.
Whatever is out at that time will sell. iPads have a huge head start, no doubt, but that's primarily affluent early adopters-- Joe Mainstream will be buying tablets this Christmas.
"Balmer tightlipped on 'nano bomb plot'"
There's only one "Balmer" that most IT folks know of: Steve Balmer of Microsoft. But what's he doing with IBM?
"...prosecution spokeswoman Jeannette Balmer..."
Oh, her. Of course. Yes, she's exactly who we were all thinking of upon reading the headline.
The subtitle should be 'prosecution tight-lipped' or something similar.
Most likely you are correct: that is what the authorities will think, and that is what most normal people will believe as well.
Personally, I'm amazed that a ban on headgear and sunglasses is not already in place for all banks and places of commerce. YES, it would be inconvenient for some customers.
However, it's truly maddening how many times security cameras are foiled by con artists, forgers and the like by simply wearing a ballcap and looking down during the transaction. Even if you catch the person, the jury rightly throws out the positive ID because you can't see their face clearly.
Or maybe there's just a raft of check fraud and ID theft in the US...
Intel is trying hard not to let LightPeak die. They tried to convince everyone that LightPeak was the future, and no one was buying it.
They refused to produce USB 3.0 motherboards and the industry went around them. Now they've realized that they cannot hope to stop USB 3.0, so they are trying to position LightPeak as a successor.
Maybe, maybe not. USB 3.0 is just getting started. Let's not forget that eSata looks rather good right now as well, for storage devices only. I also expect eSata to make the jump to 6Gb/s SATA 3 when it's needed in 1-3 years.
Fiber optics are more expensive than copper in every implementation I have experience with (Toslink, fiber SC, fiber LC). They are more fragile as well, and they definitely cannot carry power. So I see this technology going nowhere in the face of USB 3.0 and continued progress with eSata.
This is exactly correct. HTML5 has been stuck for a long time because there is no credible open-source video codec available. Apple has paid for H.264 because they wanted to move on. Google wants to move on too, but they see that H.264 is a major problem because of the cost.
If this rumor is true, then all it means for now is that Google wants a unified HTML5 badly enough to pay $126 million for it.
Of course they monetize everything in some way. What fool doesn't? But Google makes free or inexpensive tech that people WANT to use... rather than boxing people in and forcing them to buy or renew.
And some of you have clearly missed the point. This isn't just another codec, it would be the only open source one. And it is less about controlling technology than it is about ensuring that technology marches on. Google wants to make products that use HTML5. No great mystery in that.
What TMS desperately needs is to add features common to mid-tier arrays such as no single points of failure and online firmware upgrades.
Their product has no moving parts other than fans and is presumably extremely reliable. But if a piece of silicon does go south inside it, no matter how you've provisioned a single unit, you're going to lose some data. The only solution at this point is to have mirrored arrays, and I believe you have to do the mirroring on your own.
Fully online upgrades have come to EMC, NetApp (with some caveats) and others. And setting device timeouts to 1000000 ms does not count.
The charts show that eSata was nearly as fast except for reads. The CPU utilization of USB 3.0 is completely relevant, as USB 2.0 alone can eat up some serious CPU time.
However, I still believe that USB 3.0 is a good path forward compared to that LightPeak garbage Intel was trying to propagate in its place. (Since when has fiber optic ever been resilient or cheap?)
Sorry, but I did not point my browser to Google (or Bing, or whatever) in order to find a craptastic list of would-be search results that usually:
- have minimal context
- are stuffed with ads
- do not have what I'm looking for
So honestly, I'd like to see all metasearch filtered right out. Google never asked me, but if their searches are as self-tuning as they like to tout-- then they already know and are listening. Right?
The time for that has come and gone. The PS3's cell is no longer compelling for compute clusters on a time versus results basis due to a variety of alternatives.
This is definitely about security just as Sony is claiming.
And I also agree with the poster saying it's no big deal. I ran Linux on my PS3 for less than a week. Terrible platform for it -- so severely memory-constrained it isn't funny. Much happier with Ubuntu on an Atom + Ion. No, it's not free, but neither is my time or satisfaction.
It makes sense for Intel to compare the fastest Westmere to the fastest Nehalem.
The x5570 is/was an expensive processor. A lot of people bought them anyway, and many more bought the x5550 or x5560 instead, to get most of the value at significant discounts.
Anyway, companies will buy the x5670. When you are piling dozens of VMs on there it's not a bad idea. If you're installing Oracle database enterprise edition, who cares about an extra $2,000 tied up in processors when your database license was a few hundred thousand?
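For scale, here's a rough sketch of that license math. The list price and core factor are my recollection of circa-2010 Oracle pricing, so treat them as assumptions and check the real price list and core factor table:

```python
# Why the CPU premium is noise next to the Oracle license.
# Figures below are assumed from memory of ~2010 pricing, not quoted.

cores_per_socket = 6        # Xeon X5670
sockets = 2
core_factor = 0.5           # Oracle's published factor for x86 at the time
ee_list_price = 47_500      # USD per processor license, Enterprise Edition (assumed)

licenses = sockets * cores_per_socket * core_factor      # 6.0 licenses
oracle_cost = licenses * ee_list_price                   # $285,000

cpu_premium = 2_000         # extra spend on the faster processors
print(f"Oracle EE license: ${oracle_cost:,.0f}")
print(f"CPU premium as a share of it: {cpu_premium / oracle_cost:.1%}")  # ~0.7%
```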
You can't allow your political or personal opinions of the injured party to influence your opinion of whether or not the accused is guilty of a crime.
Hacking, even through social means, is clearly illegal.
This person should be tried and, if convicted, should face some sort of penalty. I have no personal suggestion as to what is appropriate. But you can't sweep this whole matter under the rug; that's absurd.
One of the biggest problems most if not all of the SAN vendors are facing is scalability.
If you have a box like a CX480, made to hold more than 400 spinning disks, but you put in 2 trays of SSD and tap out both controllers, then you've paid rather a lot for your roughly 2.2 TB. Let's be kind to EMC and say that it took 4 trays. Still, that's a tiny amount of data to tap out a frame that would have held well over 100TB of disk. You can scale more easily in architectures that use a meshed star topology, true. Most midrange arrays aren't like that, and in this economy it seems that hardly anyone is ordering up a Symmetrix with a bunch of controllers and SSD.
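To put numbers on that, here's the arithmetic, with drive and tray sizes assumed as typical for the era rather than taken from any spec sheet:

```python
# Capacity math behind the complaint. Drive sizes are era-typical assumptions:
# 73 GB enterprise SSD, 300 GB 15k spinning disk, 15-slot shelves.

drives_per_tray = 15        # assumed shelf size
ssd_capacity_gb = 73        # assumed per-SSD capacity

ssd_total_tb = 2 * drives_per_tray * ssd_capacity_gb / 1000
print(f"2 trays of SSD: ~{ssd_total_tb:.1f} TB")       # ~2.2 TB

max_disks = 480
hdd_capacity_gb = 300       # assumed 15k disk capacity
hdd_total_tb = max_disks * hdd_capacity_gb / 1000
print(f"Full frame of disk: ~{hdd_total_tb:.0f} TB")   # ~144 TB
```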
Texas Memory Systems is the only maker of dedicated flash SSD arrays that I'm aware of -- and while they have perfect scalability (because they built it that way, rather than for spinning disk), they are nowhere near the features, redundancy or ease of administration of the conventional SANs of the world.
Fusion IO, TMS and others make PCI-E cards that go directly into a server. That's all well and good, but it doesn't work for a clustered system. (Those that support disparate / non-shared storage do so via a private bus, and using that would sacrifice all the speed of SSD.) If your business has the need for SSD speed and the money to buy it, you probably have clusters. That's not to say you can't find some way to use direct-attached SSD...
I expect the next generation of midrange arrays to have much more controller horsepower than they presently do. And soon enough someone will make an array that has controllers, dram cache memory, flash memory and some spinning disk all working in concert. SSD will be like a second layer of cache.
Just my opinions of course-- stay back, all you lawyers and storage vendor employees. :)
ECC was one of the last products moved under the Ionix name, and the only one my company is using presently. It doesn't seem to have any of the synergy described in this article as the reasons for the asset transfer... so is ECC now just regular ol' ECC once again, and still an EMC property?
It wouldn't make much sense to have a tool that reports and manages arrays be under the VMware flag instead of EMC. Although it does support non-EMC arrays.. hmm
Cisco is way more dominant in networking than HP is (or ever has been) in servers.
HP, I'm sure, is terrified of losing Cisco-branded products entirely, such as those used in its C-class blade chassis. Procurve may be a decent competitor, but when you have a Cisco shop, I assure you the network monkeys do not want any Procurve coming in and not supporting the Cisco proprietary bits. And you can be sure that any network outage will find its way to being blamed on the HP gear.
Cisco servers are laughable. They're more poorly engineered than Sun's AMD models, and that's saying something. Have a decent admin look over HP, Dell, IBM, Sun and Cisco, and they should rank roughly in that order. I think Cisco is so dominant in networking that they've begun to branch out because they have nothing else to do and want to see how much they can ram down the throats of their tremendous installed base. Even if only a small percentage antes up, there's a huge market forged by their networking.
I have a G1. I am very firmly in the camp of real keyboards, although I periodically try touchscreen ones and shake my head with dismay. (I do use my G1's touchscreen keyboard for very simple tasks.)
I've carried several generations of Blackberry handhelds for work. They are terrible phones and even worse web browsers, but the full-sized ones have great keyboards. (The half-keyboard / predictive text ones are garbage.)
But back to the Droid. The Droid's keyboard is nearly useless. You need small, pointy fingers and a healthy dollop of luck to touch-type with any speed or accuracy. Besides not being raised, it also has minimal detent when pressed. Bollocks.
I drool over the browsing speed Droid offers compared to my oh-so-quickly aging G1, but that keyboard is simply unworkable. I'm under contract and won't be switching for another 8 months, but even if I were free to choose I certainly wouldn't be carrying a Droid.
It's not the density. It is in fact the total energy. There's 7 TeV per proton and something like a few hundred trillion of them in the beam (around 10^11 per bunch, times a few thousand bunches).
Fortunately they've already done some cute calculations here and your .40 S&W doesn't quite compare: http://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/beam.htm
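If you want to check the scale yourself, here's a quick sketch using the commonly quoted LHC design parameters -- these are from memory, not from that page, so treat them as assumptions:

```python
# Total stored energy in one LHC beam at design parameters.
# Bunch count and population are the usual design figures (assumed).

ev_to_joule = 1.602e-19
energy_per_proton_ev = 7e12          # 7 TeV per proton
protons_per_bunch = 1.15e11          # nominal bunch population
bunches_per_beam = 2808              # nominal bunch count

protons = protons_per_bunch * bunches_per_beam        # ~3.2e14 protons
beam_energy_j = protons * energy_per_proton_ev * ev_to_joule

print(f"Protons per beam: {protons:.2e}")
print(f"Stored energy: ~{beam_energy_j / 1e6:.0f} MJ")  # ~360 MJ

# A .40 S&W round carries very roughly 500-700 J at the muzzle (assumed),
# so one beam is on the order of half a million rounds' worth.
```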
Personally, I favor the .45 ACP anyway. ;)
Gartner will not disclose the exact metrics they apply to determine placement within the quadrants, but if you look through their ranking criteria and read between the lines, you come up with this..
The chief difference between Visionaries and Leaders is 'ability to execute' -- which is analogous to sales and service revenue. If it were a linear chart, EMC would be higher on the vertical axis than the next nearest 2 competitors combined.
But they haven't done it that way, of course.
More on topic with this item: if you look closely at the features, HP's EVA doesn't compete well at all. The XP series also doesn't look great compared with its peers, but in this economy who is buying platinum tier / non-stop storage anyway? Fools and governments, that's who.
So, they're aiming to be the market leader in internal server hard disks. And that's a big market, I agree. But how big do 2.5" internal disks really need to be? If you want really big, surely you've got a DAS or SAN. And soon enough we'll probably be booting our servers from flash drives because the server probably doesn't need more than 60-120 GB internal, flash drives are way faster, and failing a lot less often sounds pretty good to all the server monkeys out there.
About the 'enterprise' bit--
I have seen exactly zero storage arrays that use 2.5" SFF drives. They all use 3.5" drives because the IOPS from a 15k drive are far superior and the 1-2TB from a 7.2k SATA drive can't be beaten. There are a few DAS enclosures that use 2.5" drives, but they're mostly a sideshow -- by far the biggest sellers for DAS are also 3.5" drives.
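For anyone who wants the arithmetic behind 'far superior', here's the classic rule-of-thumb estimate for random IOPS from a single spindle. The seek times are assumed from typical vendor specs, not measurements:

```python
# Rule-of-thumb random IOPS for one spinning drive:
#   IOPS ~= 1000 / (avg seek ms + avg rotational latency ms)
# Seek times below are typical spec-sheet values (assumed).

def drive_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = 60_000 / rpm / 2   # half a revolution on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(f"15k FC/SAS: ~{drive_iops(15_000, 3.5):.0f} IOPS")   # ~182
print(f"7.2k SATA: ~{drive_iops(7_200, 8.5):.0f} IOPS")     # ~79
```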
Everyone says that companies hate Flash because it dominates web audio/video content, and certainly it does, despite Silverlight and innumerable lesser wannabes.
But there's a genuine problem with Flash: it requires a tremendous amount of processing power, which makes it slow and battery-hungry. If Flash animations can bring old computers to a standstill and give netbooks pause, then they will surely clobber your average smartphone available today -- or in particular, the phones from 2-3 years ago when these decisions were first made.
That is the one technical, rather than administrative, reason that the Apples and Googles of the world won't give you a mobile version of Flash. It can't reasonably be done. I'm not sure this argument holds water when you start talking about spanking-new devices that have 1GHz ARM processors, and especially if they have hardware graphics acceleration. *ahem*
Pure speculation, please feel free to throw out your own ideas...
But I believe that Motorola and Verizon asked to be bumped way ahead of the standard rollout schedule for the Droid / Milestone. In fact, knowing what they spent on the Droid, I wouldn't be surprised if they *paid* Google for the right to do that. (More speculation! Ohnoes.) This gave Motorola a sizeable leg up on other Google phones, right up until the Nexus One came along.
Evidence for this comes in 2 additional ways:
-How very late Google was in getting the 2.0 SDK to developers
-How many phones are still released on 1.6
I believe that everyone else is still releasing Android 1.6 because if they have a 6-12 month development cycle and they were working on 1.6 before the Droid release date (19 Oct 2009 here in the US), they wouldn't have had time to develop and test based on 2.0+ code and get the product out the door.
Meanwhile, as a G1 user, I suffered through buggy updates as developers scrambled to get their apps up to 2.x snuff and now I'm sitting on my thumb waiting for T-Mobile to push 2.x to my phone. Before summer, perhaps?
The United States has very low prices for fixed and mobile broadband relative to Europe.
Europe has advantages in speed mostly because of population density and geographic size. It wouldn't surprise me if all of Sweden were on one SONET ring, and obviously it takes a lot more infrastructure to cover the US coast to coast, especially all those annoyingly low-population areas in the middle.
But sure, sign me up for 1Gbps. I currently use 6Mbit DSL because I see no need to pay $20-40 more a month to go faster.
I think I'm actually seeing Tweets and Facebook updates from people that I've followed with Buzz. And I absolutely, positively do not have a Twitter or Facebook account.
Interesting that this content is being parroted through Buzz. I may comment on one of them just to see if I can.
I agree this is chiefly about ads. Easy call there; a nice slow one right over the plate.