
I plead the 5th
Gotta love that Bill of Rights the Yanks drafted up, eh? Considering "Parliament" is mentioned, this is most definitely a UK thing, for the readership across the pond in the USA.
Spot on. As long as people have had to drive a very boring, uneventful route, people have found ways to make it less monotonous. Radios for one. Then people started rummaging around, reading maps, talking to the passenger, then talking on the phone, then texting, surfing, watching videos, etc. American highways (out West especially) are long, straight, and boring in a way most don't see in the UK. Driving in a (roughly) straight line at 120km/h for 30min can be quite mentally taxing (in the sense of trying to stay awake: look up "road hypnosis"). Once automated highway/freeway driving becomes safe/mainstream, we won't have to worry (as much) about all these drivers. Until then, buy yourself the larger-than-them vehicle. At least then, in a crash, you'll win (maybe).
This article is a good overview of options, with the BIG benefit of not focusing on, or trying to sell, a particular system. Good show :)
With the tape vs disk argument, very balanced. I think, however, the "30 years" for tape is a bit generous, considering there have been studies showing that even under ideal conditions tape degrades and leaves very little data (20% in one study) actually usable after even just 10 years. Combine that with requiring a tape drive (with interface!) that can still read the tape. The same argument can be made about hard drives, but ask yourself how many computers still have IDE interfaces, and then ask yourself how many have Ultra 160 SCSI interfaces (that still work). Not to mention you can get an external IDE USB enclosure, whereas a SCSI tape drive equivalent is rare (or non-existent). Basically, the new ($2000+!) tape drives using SAS as an interface will likely still be connectible in 10 years, but I'd be more willing to bet on a SATA protocol. I'd also be fairly confident a hard disk drive will outlast a tape over 10 years, if only for the robustness and impermeability of the enclosure.

For those thinking "flash drives would be awesome!" the answer is no, they wouldn't be. I believe a flash cell's charge leaks away in about 5 years. Of course, the wear-leveling algorithms of SSDs would move that cell's data long before the charge diminished to unreadability.

Backups are meant to be cheap anyway; that's why tapes have lingered for so long. With 1.5TB disks dipping to the $80 mark, though, and LTO5 tapes (1.5TB uncompressed, 3TB at the optimal 2:1 compression) at $70, it's almost a no-brainer to get the HDD, especially if your D2D software does compression too (which can do better than a measly 2:1, depending on the data).
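A quick back-of-the-envelope on those prices (the $80/1.5TB disk, $70/1.5TB LTO5 cartridge and ~$2000 drive figures above are assumptions, not current quotes) shows how much capacity you need before the cheaper tape media pays back the cost of the drive. Media cost isn't the whole story, but it frames the argument:

    # Rough break-even between external HDDs and LTO5 tape, using the
    # ballpark prices quoted above (assumptions, not current street prices).
    hdd_per_tb  = 80.0 / 1.5            # ~ $53/TB
    tape_per_tb = 70.0 / 1.5            # ~ $47/TB raw, ~ $23/TB at 2:1 compression
    drive_cost  = 2000.0                # one-off LTO5 drive purchase

    # Capacity at which the cheaper tape media has paid back the drive:
    breakeven_raw  = drive_cost / (hdd_per_tb - tape_per_tb)        # ~300 TB
    breakeven_2to1 = drive_cost / (hdd_per_tb - tape_per_tb / 2)    # ~67 TB
    print(f"break-even: ~{breakeven_raw:.0f} TB raw, ~{breakeven_2to1:.0f} TB at 2:1")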
As for types of backups, I personally use a synthetic-full-plus-incrementals-forever method on D2D. For our off-site archive, we dump a full from D2D (hence the need for synthetic fulls). Of course, the thing the article cautiously avoids is the overwhelming cost of some of these backup methods (such as a tape library). Many small businesses could get away with a rudimentary tape backup system, or better yet, the good ol' external HDD. Their DBs could be shut down at midnight for a full file-level backup (or SQL-dumped), and their file server copied. Perhaps an Exchange store exported. As far as dedup goes, though, only the file server would benefit from file-level dedup; the Exchange store and the DBs would benefit far more from block-level dedup. The more monolithic file stacks (think SharePoint, SQL Server, VMs, etc) you have, the more block-level dedup can help with your on-disk storage (rough sketch of the idea below).
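A minimal sketch of why block-level dedup wins on those monolithic files: two nightly copies of the same store differ in only a few blocks, so the second copy costs almost nothing physically. The chunk size and file names are made up for illustration; real products use variable-size chunking:

    import hashlib

    CHUNK = 4096  # fixed-size blocks, purely for illustration

    def dedup_store(paths):
        """Store each unique block once; return (logical_bytes, physical_bytes)."""
        store = {}                       # sha256 -> block
        logical = 0
        for path in paths:
            with open(path, "rb") as f:
                while block := f.read(CHUNK):
                    logical += len(block)
                    store.setdefault(hashlib.sha256(block).hexdigest(), block)
        physical = sum(len(b) for b in store.values())
        return logical, physical

    # e.g. logical, physical = dedup_store(["store_mon.edb", "store_tue.edb"])
    # (hypothetical Exchange store copies; physical comes out barely above one copy)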
I will say this: if the total size of your full backups is over 3TB, your benefit from tape backups significantly improves though :)
I think Microsoft was the first company I had heard of that went out and actually bought the IP from a movie (Minority Report's surface computing that turned into "Microsoft Surface" for those of you guessing). Granted, Minority Report producers found existing research tech to use rather than throwing in something that only works in CGI or in the imaginary-land of stage props....
Opera isn't targeted, likely due to obscurity, not because of any "safety" mechanism in the browser that prevents these kinds of things. That is, unless Opera no longer has a "click on the link and download a file" capability? Still does? Well, you're just as vulnerable then.
Also, I have a hard enough time finding people who would even notice whether the "popup" or whatever is associated with their browser program at all. I've seen Windows 7 users get the fake "Windows XP My Computer" scanning screen and think that it's their computer, even though it has green non-transparent bars and the other coloring-book design hints. Fail users. Having a Chrome icon isn't likely to trick them any better than simply saying "Your computer areinfected!!!" [space missing and "are" on purpose].
It's funny, because I had mused only last week in a comment that the fake websites should do a User-Agent meta check to target appropriately. Guess someone else finally got the clue too.
Since this response thread is getting a bit long, and has some incomplete comments, let's go over a couple of things:
iOS has multitasking, yes, but only for Apple apps, such as iTunes (plays music in the background) for instance. Android 3.0 (on the Xoom) has full multitasking (allows any 3rd party app to run in the background [such as your alternate favorite mp3 player] while surfing the web [on say, Firefox or Opera]). Android 2.x has had partial multitasking (similar to iOS) which allowed some apps such as the music player to run in the background.
As far as the iPad being some product that "came out of nowhere" and sold 15mil units, that's "shipped" 15mil units. Likely almost all will be sold (or returned/RMAed); Apple doesn't release its actual floor-sell numbers. Also, tablets have been around for ages. Most used a stylus or the like due to not having capacitive screens (at least at affordable prices) until recently, and resistive screens had a hard time on the uptake. It was pointed out recently that the first "iPad" actually appeared in some episodes of "The Tomorrow People," a show that aired several decades ago. Granted, it was just a stage prop, but it functioned the same as a current-gen iPad (fingers to gesture and interact with the screen, same case design even, but likely used USB to interface with :P). So no, not "out of nowhere," just a better take on what was already being offered (the iPod Touch).
As for "not being able to get Honeycomb or the Xoom," this is false. The Xoom is on the shelf of my Verizon store as of last week confirmed. Likely longer. It was sitting on the shelf doing its song and dance right next to the iPad1. The salesperson actually pointed out a funny incident about why the Xoom was better than the iPad: the websites used, by default, to do certain actions. She tried to use an iPad to look up a local chinese restaurant. It gave her a small handful on a map, which she could click on it it would take her to fullscreen website for the business (opens Safari to do so). On the Xoom, she showed me, the Google Maps came up with more eateries, and when touched, would provide an info bubble containing address, phone number, and a few links, one of which was their menu from allmenus.com. This would pull up in the browser, sure, but the MOBILE version, so it was clear to read and you didn't have to navigate around on the website. These little nuances are what is making Android a better platform. There are a TON, as I'm sure iOS has many as well. I just know that Android is likely going to have more over time, simply due to the nature of its driving force: Open Source and Google. Google does great for giving you the information you want as quickly and easily as possible (hence the embedding of allmenus.com in their business results). Apple has no such hooks (for better, likely worse).
Likely, the tablet market will tip to a ratio similar to what we currently see with Apple vs Microsoft, only with iOS vs Android. Android will proliferate merely because it costs less and supports more things. Apple will continue selling their products to those willing to pay the markups, and they'll be perfectly content with it. Why? Their markups. They were never a volume company. I doubt they know how to be, as shown by their marketplace (oh, "App Store," as they're trademarking...) that they've severely mismanaged. (Argue against this point, and I'll simply ask "then why do they have the DoJ sniffing around about monopolistic practices?")
As for why I won't be buying an iPad2:
No SD card.
Requires iTunes, which means it can only "sync" (receive files/music/etc) from one computer, and you can't pull the files back off the device onto a new computer if yours goes down, so even though the device holds a copy, it's not a "backup" copy. That is, unless you jailbreak/hack/etc, but those should be unnecessary....drag & drop please.
Really, those are the only two arguments (besides MAYBE cost) that would hold water, as arguments such as "functionality" and "true multitasking" go both ways. If you use an Apple piece of hardware, expect to be forced to use their Apps too. iBooks, The Daily, iTunes, et al. They're the only non-neutered, or "tax"-free options.
The specs on this iDevice look the same as the Viewsonic G-Tablet, which has been out since Nov 2010 or so. Well, granted, the Viewsonic tablet has a crappier screen. Sadly, the iPad still has only a 4:3 1024x768 screen. Such a shame. Not to mention no SD card support in sight. Hope that 16GB model can hold everything you want to do with it, or that you can shell out for the 32/64GB versions.
Yes, the G-Tablet has Android 2.2 as opposed to the 3.0 on the Xoom, and is also sans 3G/4G, making it more of a comparison to the WiFi iPad2 and putting the Xoom more up against the iPad2+3G (except the Xoom will have 4G shortly, and Verizon will exchange current 3G Xooms for the 4G version when it comes out).
I really wish Apple would have made this iPad2 a bit more competitive, rather than a spec-clone of devices that have been on the market for a month or more now.
The thing with magnetic media such as spinning disks is that even overwriting data won't stop someone from reading the old data. Why? The old data still has a residual magnetic signature remaining that can be detected. This is why the DoD and the like incorporate 7-pass (or better) random overwrites for the whole drive. This is also where the idea of a "Shred" delete came in: overwrite the file A LOT of times with random garbage in an effort to purge the latent signature of the old data.
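For illustration, a minimal sketch of the "shred"-style overwrite idea on a single file; purely a toy (and, per the next paragraph, largely pointless on an SSD, where the writes land elsewhere anyway):

    import os

    def shred(path, passes=7):
        """Overwrite a file in place with random data several times, then unlink it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))      # chunk this for large files
                f.flush()
                os.fsync(f.fileno())           # push each pass to the platters, not the cache
        os.remove(path)

    # shred("incriminating.xls")               # hypothetical file name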
Contrast this with flash, which doesn't have such a problem, since the data is represented by a contained charge rather than the polar position of a magnetic bit. Once a TRIM has been issued and the cells purged of their charge, one won't be able to determine whether there was a charge there to begin with. Issuing a "secure erase" on a drive may purge the allocation tables, and eventually the flash will get garbage-collected and purged, but the time it takes to accomplish this is indeterminate. That being said, I'd still rather have my virii and hax0r tools on a small garbage-collection-plus-TRIM-capable SSD with a panic-button script that would "secure erase" the drive and then start spewing junk to it in hopes of finishing off the actual data. With a UPS, SWAT would have to physically unplug the machine to stop it, and by then it's too late. At least with spindle drives, police had a chance of stopping the overwrites before much of the drive could be overwritten even once, let alone a full 7+ times. If you think the tools written by black hats are cool, you'd love their unreleased "protective measures."
"When you delete a file on your PC, the OS just updates the directories and FAT (or equivalent). There is no signal to the drive that the blocks which contained the file data are no longer needed"
Apparently someone hasn't been keeping up on what TRIM is all about...
"However, this article does suggest that overwriting your file blocks with zeros *might* actually have some value for flash drives"
Yes, smart SSDs will dedup the zero-padded blocks and thus not actually fill the flash with zeroes....hence the "can not be deleted through traditional means" bit of the other article...
When will commentards and "scientists" in general stop treating SSDs like traditional spinning disks and realize them for what they are? SSDs and their data are as much of a moving target as RAM under an OS using address randomization. Files aren't even stored in sequential blocks! Go read a wiki page at the very least.
Intelligent flash-caching is nice. Allows for a smaller, fully-utilized flash cache. However, there are 2 things that would be nice: support for any disk (and size), and a GUI enhancement for "right-click -> Add folder/file to flash cache"
Why the second feature if it has intelligent capabilities? Perhaps you wanted to simply cache your favorite game or two permanently? If they're not used as much as, say, Firefox (or Opera), your game components could get pushed out (depending on space available) due to lack of use. Or, even worse, have varying performance while the cache realizes it needs to cache more and more of the game as you move through it, leading to slow initial performance, and then continued performance hits each time through as the cache has to re-cache things that were dropped since last run.
There are cases for both. If your business is rich enough (cares enough about the IT department), you can afford to have a SAN for shared data. For those of us without such extra funding, or perhaps in the category of grandfathered into a sprawled DAS setup, redundancy on the storage level (replicating SANs for HA or the like) isn't very feasible. Hence the DBA jumping in saying "we can replicate it." Granted, I wouldn't replicate on the DB level for redundancy since end-users would have to point to the redundant DB if the primary goes down, unless you're using a DB gateway of sorts, at which point redundancy on the back-end does nothing if your gateway fails. Replication would be more useful for distributed load, specifically to target performance. Running that 15min report against your secondary DB server puts a lot less strain on your end-user experience than running such a report against your primary DB server.
An ideal world would have all of us running mini datacenters with a replicated SAN and fully redundant servers hosting a variety of <insert-vendor>Motion-enabled VMs on a fully redundant 10Gb+ network. But when IT is viewed as a no-returns expenditure, we make do with what we are given and provide the best reliability that we can. This just reinforces the "can't cookie-cutter servers" idea posited in the article.
So, while there are "simple" solutions for everything, sometimes only the "only I am clever enough" solutions fit within the economics of a business.
Yes, the 10Gbps is dual-link. However, there's not "another 10Gbps in both directions for DisplayPort." DisplayPort is just one of the two protocols that Thunderbolt supports (the other being PCIe). So no, you can't have 2x10Gbps to a monitor and 2x10Gbps to a daisy chain of devices at the same time.
As for RAID using the bandwidth, it would take quite the RAID0 to pull off 10Gbps. Likely something akin to 8xIntelSSDs in RAID0 actually. Not quite a portable device.
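Napkin math behind that guess, assuming roughly 250MB/s sustained reads for an Intel SSD of the day and some striping/protocol overhead (both figures are assumptions):

    # How many SSDs in RAID0 to saturate a 10Gbps Thunderbolt channel?
    link_GBps  = 10 / 8             # 10Gbps = 1.25 GB/s
    ssd_GBps   = 0.25               # ~250 MB/s sustained read (assumed)
    efficiency = 0.7                # striping/protocol overhead (assumed)
    print(round(link_GBps / (ssd_GBps * efficiency)))   # ~7 drives, in line with the 8 above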
Last I checked, my laptop came with 4xUSB (hand-picked over the 3xUSB counterparts). eSATA would have been a nice addition, but I had a certain ceiling for expenditure.
However, have no fear. I'm sure Apple intends on dropping those USB ports since Thunderbolt is likely to kill them as FireWire did....at least in Steve's reality....
"because it has always taken design as the key component of everything it has produced"
That statement, albeit true, should have been followed immediately by: "...while disregarding end-user usefulness and 3rd-party enhancements, and generally making money hand over fist on overpriced products."
My IP says I'm about 400km from where I actually live, so not likely. Find My Mac is perhaps just an interface to locate your /other/ lost iStuff. Wouldn't doubt it works a bit like most PC laptop trackers (basically, if it finds the internet, it phones the company with its current IP for police use).
I'm quite surprised to read about some of these features actually:
Autosave and Versions. Really? Sounds more like iProductivity app features to me. Last I checked, autosave has been in MS Office since at least 2003. Versions just means they stuck a front-end on RCS/CVS/etc.
FileVault: I applaud whole-disk encryption built into the OS. Linux has had it in Fedora since Fedora 11, I believe. Did the BSD kernel finally get updated to that point? Same for SSD support: Win7 was /released/ with TRIM. Does the BSD kernel devel team really move this slowly, or has it just taken this long for Apple to work on their front-end?
QuickView, if it works for /any/ "normal" file, would be fairly nifty. Currently, Windows only has "quick view" for pictures, Office files, Adobe PDF/PS files, and txt (I might be missing some). Granted, as Adobe has shown with Photoshop files, you can add a preview filter for any file type you please, but it's not baked into the OS.
Sadly though, the list of things Windows does out of the box is fairly limited, whereas OSX ships with a raft of iApps, and thus more things they can list as "OS Enhancements." Granted, the last time MS tried bundling something with their OS, they got sued from every angle (Internet Explorer). Imagine what would have happened had MS Security Essentials been installed and active by default with every copy of Win7. Yes, you can't deny it, even if you think MS SE is crap.
"Java never died, there are more people programming in Java than any other language. It really is everywhere, it's just become so much a part of the tech ecosystem that you don't even notice it anymore."
Just because CS majors are exposed only to Java during their education doesn't equate to "more people." As for "it's everywhere": last I checked, Call of Duty: Black Ops wasn't written in Java. Nor was Windows/Linux. Nor was Flash Player, or BigTable, Avast, PeachTree, etc etc. (various samplings of different programming fields). As a poster said before: Java is primarily used in business apps or online web games (think Bejeweled). It's just not practical for many other application types.
"Smartphones" have had this option of granularly setting application permissions (heck, for the masses, Facebook does this too), however that hasn't helped the situation one iota. Why? The software still asks for ALL permissions. What does a racing game want with internet access and contacts? Why does a Facebook game want access to my personal info? All these things are checked as "allowed" by default (because the game says it requires them), and all the user has to do is hit "OK." Therefore, as long as a user can hit "allow," there will still be a problem, and it will be for the same reason people still get infected by "websites" posing as My Computer antivirus scans.
"POP3 would allow you to backup your mail, but not restore it. Better than nothing, I suppose, but you'd never again have those messages available to you when you were away from home. That would seem to defeat one of the touted advantages of webmail."
POP3 clients have a checkbox option of "Leave messages on the server." Check that and your emails are not deleted from the server, leaving the messages online and accessible. Simples.
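Under the hood it's just the client retrieving without ever issuing a delete; a minimal sketch with Python's poplib (host, credentials and file names are placeholders):

    import poplib

    # Connect to a hypothetical provider; RETR downloads a copy, and because
    # we never call dele(), the messages stay on the server.
    conn = poplib.POP3_SSL("pop.example.com")
    conn.user("me@example.com")
    conn.pass_("hunter2")

    count, _ = conn.stat()
    for i in range(1, count + 1):
        _, lines, _ = conn.retr(i)               # fetch message i, leave it on the server
        with open(f"backup_{i:05d}.eml", "wb") as f:
            f.write(b"\r\n".join(lines))

    conn.quit()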
"Most Mac users I've come across tend not to be application hoarders, they use their beloved Macs quite respsonibly, so: Some photo editing, some Mac Office use, synch their Jesus phones, Fondleslabs and iPods and of course to surf the Interwebs"
So, what you're saying is that "most Mac users [you've] come across" pay a huge markup for cobbled-together hardware and do nothing more than use it as a $300 netbook?
Yes, it's a Trojan. However, you don't need to download warez or p0rn to get infected. There are plenty of sites out there that attempt to infect Windows users by landing them on a fake My Computer antivirus scan page. When you try to click on anything, or close the browser, etc, you get an auto-downloaded .exe asking if you want to run it. Unfortunately, most computer Sheeple click "yes," and then MS tries to hold their hand and ask AGAIN whether they're sure they know who sent them the .exe and that they shouldn't run it otherwise, and they hit "yes" again. Boom. Infected. They now have a Trojan. It's even classified as a Trojan. Why? It poses as something it's not (AV software in this case). Not warez or p0rn; security software.
Now, apply this scenario to Apple users who get a page that, instead of blindly throwing them onto a Windows landing page, actually uses the User-Agent header of their GET request and lands them on a Safari-targeting page that pops up the Mac equivalent. Perhaps even with a warning: "OSX has been the target of many new virus threats that the general public has been largely unaware of. Clean your computer now! Click here to remove these viruses"
Apple users are Sheeple too.
"How can it be a win? Surely a system breach of any kind that allows scumbags to access private data is a fail for all decent people, regardless of operating system? Have you heard of the phrase "have a day off you bell end"?"
It's a win because it points out the need for security software on ALL operating systems, not "just Windows." Mac users have claimed (somewhat correctly) for many years that "Macs don't have viruses" and that "antivirus software is useless" for them. Now we're approaching an era where Mac users will have to make the paradigm shift of knowing they need security products to prevent crap like this from getting on their systems. The only trouble now will be re-brainwashing the fruit-bearing mass(es) into being security conscious, and then having Apple explain to them why their system now runs slowly and occasionally doesn't work right....
I think you missed it. 800MB/s. Note "MB/s," not Gb/s. Yes, there's a typo in the article. The FAIL is because you typed out 800Gbps and called it gigabit Ethernet speed, which is 1Gbps. Although 20 bonded 40Gb Ethernet links would be a nice interface, or better yet 100Gb fibre interfaces...but that would still require 8 of THOSE bonded.
When you're on a local server system (non-cloud basically), all your eggs are still in one basket. HOWEVER, the difference is, you have full control over that basket. You can create a second basket at your secondary (or other) offices (or CIO/CTO's home if needed!). You can take copies of your "eggs" to a bank vault if desired. You can scatter your encrypted backups like salt to the CIO, CEO, and CFO if you like. Heck, take that and dump the highly encrypted year-end into a cloud backup service if you like. Your primary building burns down? Fine, you had last night's backup at your "secondary" location. Both the business and your "other" location got wiped out in a nuke/earthquake/random-act-of-God? You have that copy in cloud storage (perhaps). What makes it even easier is having all your "local" servers as VMs. Building burns down, you reload your most recent backups of the VMs, and restore last night's data (from your "other site") to them and you're good to go. Of course, wimpy outages such as single-server failure can be handled with some Xen/VMMotion setup, or lacking funds for that, a bit of downtime while you swap in the spare part.
The difficulty with the Cloud is really backups. How often do they do them, how long are they stored, and to what granularity? The most common form of data loss is a user deleting something within the last 24hrs, whether it was altered recently or not. Can you call your cloud service and get that file restored as it was, even as of last night, within 5 minutes? I've sat in hold queues for longer than that. And if your local server backups can't recover a single Word doc from last night's backup in under 5 minutes, rethink your backup strategy. No, it doesn't cost thousands of dollars to do it.
Back to cloudy thoughts, I really hope small businesses that can't afford a proper IT person jump on the cloud bandwagon. It will save them money in the long run and perhaps lower prices. Anyone large enough to have IT staff should look into a local setup (as long as their IT guy isn't some CS-degree drone that doesn't have the versatility to be solo in a biz). Really, it's that jack-of-all-trades skillset that is required by small & mid biz, but is commonly lacking in the workforce.
At the risk of turning this into a cable bashing thread.... "expensive monster cables" are pointless. You don't need to pay $200 for a Monster HDMI cable unless you're running 50+ meters (at which point, you'll have to send your data with a wish and a prayer anyway). For your 3-15ft runs, those $0.01+2.99S&H cables from Amazon work perfectly (unless you're unlucky and get the 1in20(ish) defective cable, as any production environment churns out the occasional lemon). Likely these will work fine for 15m runs too, just make sure you get the 1.4b-rated cables so, even with a slight defect, you'll still manage a full 1080p if not the 3D it's rated for. (yes, HDMI will auto-downgrade your quality based on the capability of your link. If the cable can't handle a full 6.4Gb/s, it will step down until it finds a speed that works.)
One wonders just where this extended battery goes, too. Is it the "replace the DVD drive" type? Or perhaps just an oversized 12/15-cell battery wart? The 30hrs most likely means 50% screen brightness, WiFi/Bluetooth turned off, no DVD player (in the machine at all, likely), idling at the desktop. Give me numbers looping a DivX or XviD at 100% brightness with WiFi turned on (even just idle WiFi is fine) and I might believe it more.
"Gallium nitride - ..... it's an extremely low-resistance material that simultaneously holds off large voltage."
Some info gleaned:
"Gallium nitride, on the other hand, is better [than silicon] at preventing leaks by holding onto the maximum voltage when it’s not delivering power"
Sounds like it will primarily be used to stop leakage current.
As for handling peak-to-peak voltages in the 400kV range:
"Transphorm’s first product will be in the 600-volt range and suitable for industrial operations such as data centers, solar panels and automotive drives, said Primit Parikh, president of Transphorm. The company is working on 900-volt designs, he added."
So no, the power grid wasn't the target of their product (yet).
Citations from:
http://gigaom.com/cleantech/transphorm-the-new-data-center-waste-power-slayer/
Try google. Great way to find Enlightenment.
"Beer coz I know where I can buy some unlike a lot of the Android tablet mythical beasts.(not those el cheapo ones running 1.6 thank you very much)"
Archos 70 and 101: they run Android 2.2, will shortly bump to 2.3, and there are promises of moving to 3.0. But even at 2.2, they're a very nice bundle of features (SD card slot! Take that, iPad).
And with the mention of iPad:
"With the iPad-2 just days away from launch, there is a lot of catching up still to do to make something a slick as the new fondleslab is likely to be."
Won't the world be a little disappointed when the new iPad2 launches with the usual 1280x900 or somesuch screen, HDMI, no USB (perhaps just USB charging), a wimpy 1.3 or even 0.3MP front-facing camera, a vanilla 5.0MP rear camera, a 1.2GHz single-core CPU, a bump to the RAM, and a bump to the internal flash (due solely to 25nm rather than any goodwill on Apple's part)? With only one dual-core ARM device out there, and that one being nVidia-based, Apple either got the scoop on a dual core from another ARM maker, or it will only be single core (with an off chance of them signing with nVidia).
What else could be missing? 4G support perhaps? Likely, the cell-data version will support Verizon's network in addition to AT&T-type networks. If Apple makes me eat crow on this off-the-cuff speculation on launch day, I say bring it on. It will do the market some good if they go all out and drop an iPad2 with nVidia Tegra tech, 1-2GB of RAM, 64-128GB flash, SD card support, USB connectors, DisplayPort/HDMI (&Thunderbolt?) and 1920x1080+ screen resolution. I might even get one at that point. Or perhaps the 'droid device that comes out to "top" it.
Fight Fight Fight!
"Those declarations are order-independent. Different ways of thinking can EASILY result in a different arrangement to an order-independent grouping because it's less a matter of objective logic and more a matter of subjective style."
As far as Unit Test code goes, I'm unsure, but the actual Java APIs are quite thoroughly documented, including private variables, etc., so one can extend them and use them properly in one's own Java code. The need for 100% compatibility with Java forces the Android developers to wholesale rip off the Java docs so that custom extensions can be supported. The easiest way to do this? Duplicate the Java API classes and member variables, then write code that utilizes them. As far as classes such as Array and Iterator are concerned, there's not much leeway in how to implement the code while utilizing only the member variables (and functions!) listed in the Java Docs. Given a very small set of pre-moulded Legos and told to build the same simple structure, it's no surprise programmers came to the same conclusion (code). Although, as a side note, they likely did just decompile the unit test code. What better way to test compliance and compatibility of their own Java build than to use the actual Java Unit Tests? Bad? Likely. Good for ensuring complete compatibility? Definitely.
"The new basis is explicitly drawn from video over WiFi"
The new Sandy Bridge chip has on-die video decoding (H.264 among others), allowing the CPU to effectively idle at ~3% load while playing high-def content. The power draw would be mostly from the WiFi, as even the HDD wouldn't have to spin up if it was streaming the content. With that in mind, 7 hours is still pretty decent considering how horrid WiFi is for battery life. Best way to get more life out of your laptop? Disable WiFi when not in use and cut the screen to 75% brightness or so. (Does the new battery life measurement run video at full 100% brightness, one wonders...)
Anyway, my Core2-based MBP only pulls 2.5hrs with WiFi and actual use (screen at 85% or so), so I don't know what magic sauce you guys are using.
"Intel says a high-definition movie can be transferred across it in less than 30 seconds (neglecting to tell us the size of the file)."
A quick napkin-math calc shows: 10Gb/s = 1.25GB/s. We'll chop off the 0.25 for overhead and other mythical roundings (basically assuming only 75% is useful throughput), leaving us with ~0.94GB/s. 30 sec transfer at ~0.94GB/s comes out to 28.2GB of data, so they likely considered it a full 30GB Blu-ray rip.
Having the capability to push PCIe out of the computer, in addition to a USB-type serial connection, will be very nice. However, I doubt a laptop with only one of these connectors will be useful. Quite likely laptops will still have USB ports, considering a laptop cooler, mouse, and USB stick eat 3 ports between them. I personally wouldn't want to carry around a Light Peak hub just to use more than one external device. I'm just hoping they allow AMD motherboard makers to license the tech.
A common tactic to justify higher prices is to have a shortage. Look at what Hurricane Katrina did in Texas (USA). The oil refineries being offline created a shortage, causing gas prices to spike. The oil companies, even when the refineries came back online, kept production lower to maintain a shortage, thus being able to justify keeping prices higher. (The government restricts oil companies from raising prices more than a certain amount, except when a shortage occurs.) Apple is doing the same thing: create a shortage to cause Sheeple to pre-order, ensuring a sell-out at a predictable number.
The other driving factor will be those that wanted a tablet, but bought one of the now readily available alternatives. With the scores of tablets coming out this year, Apple's market share (and perhaps quantity of units sold) will obviously reduce simply due to alternatives in the market.
The sad thing to speculate is that if they're cutting due to a "significantly upgraded" iPad3 coming out later in the year, that means they expected to get away with pushing out yet another sub-market-standard device at a premium. This is what Apple users don't realize: the hardware they are getting is only physically worth half of what they're being charged. The brand name and the "Oooo" factor make up the rest of the device's cost.
Unfortunately, with the "subscription-gate" FUD leaking into the rest of the world, thanks to the "your subscription services must be priced the same in-app as through other channels" line of the "agreement" likely inflating subscription prices globally, Apple can no longer be viewed as a self-contained fringe monopoly.
Proprietary code always seems to move slowly, and it's likely due to what open source was designed to overcome: many eyeballs on the code = more bugs found and fixed. Granted, the "lower quality" of open source programming prowess was sometimes blamed for creating more bugs than closed source. But now that programming is being outsourced along with everything else, to people of wildly varying programming ability and experience, I'd dare say FOSS has moved up a notch in comparison. Open source Internet Explorer. Let the fanbase make it better.
Actually, if you take a good look at Windows 98 (for which this "part of the OS" claim was made), Internet Explorer is indeed part of the OS. "Windows Explorer" uses the same GUI as "Internet Explorer." I wouldn't be too surprised to hear the file/folder browsing was generated HTML code, à la Konqueror. With that in mind, it's no wonder even their solution of "uninstalling" it only meant hiding the public face of it. You could very easily type http://www.google.com into the "location" bar and have Google pull up inside "Windows Explorer." Nowadays, doing so will cause IE (or your default browser) to pop up with your page request. Seems it's no longer hard-baked in.
"Apple won't missing out on their cut, the Publisher can't miss out and reduce it's margins so I'm left paying extra for something Steve Jobs hand no hand in!"
It's even worse than that: EVERYONE will have to pay more for it, not just Apple customers. The New York Times would have to bump its subscription cost roughly 42% to make that 30% tax disappear as far as its margins go. But since the in-app subscription is REQUIRED to be the same price as elsewhere, your Droid NYT reader now has to charge that 42% mark-up, making Droid users suffer the same fate as their poor-of-choice Apple counterparts. Likely, NYT and others will find a happy medium, perhaps a 30% sweet spot to mark everyone up to, taking a margin loss on their App Store subscriptions and a margin gain everywhere else (Google's subscription cut is only 10%, and it still allows other in-app payment methods rather than requiring Google's own), which in the end would net out at, or near, the margins from before the price bump.
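That ~42% figure is just the algebra of the cut; a quick sketch with an illustrative (made-up) price:

    # To keep the same net revenue per subscription after a 30% store cut,
    # the list price has to rise by 1/(1 - 0.30) - 1, i.e. roughly 43%.
    cut = 0.30
    old_price = 10.00                       # hypothetical monthly subscription
    new_price = old_price / (1 - cut)       # ~$14.29
    print(f"markup needed: {new_price / old_price - 1:.1%}")   # ~42.9%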
Now, with that global price fixing in mind, Apple simply has to undercut these inflated prices with its own services (The Daily, iBooks, and iTunes) and suddenly no one is buying New York Times subscriptions. Why? It's cheaper to get The Daily. Even if some rogue few get the NYT, Apple still gets a significant cut. Win Win for Apphole, Lose Lose for the rest of the subscription world (remember, publishers can't even offer a cheaper web-only subscription on their own site). Apple is merely trying to push competitors out of their marketplace. Frankly, in six months when Android tablets are more prolific (see the Archos 101 on Amazon for $300), content providers should simply dump Apple apps en masse and see what Apple does when their precious App Store has nothing in it besides $0.99 fart apps and flashlights.
Agreed that they likely didn't test with alternating sides of the head (left or right ears). Another thing they could have done was put the phone against unusual parts of the head (the top, or rear) as opposed to sticking it near the auditory regions. A simple wired headset attached would have accomplished this. If the affected regions shifted with the phone, be they circular or not (as a pole antenna produces a doughnut omni-directional broadcast rather than spherical), then I'd have more interest.
As it is, the mere anticipation of a phone call could have caused the spike. A "muted" phone would still produce EM radiation, to which the brain may well be sensitive, but most likely "trained" to, having used cell phones before. Ever have that sensation of knowing you're going to get a phone call just before your phone starts ringing? Your body is likely reacting to the EMF spike of the incoming call due to some form of subconscious training. Perhaps they should try this test on some South American Rainforest tribe members that have never used a cell phone to eliminate this fringe-but-possible reaction.
I doubt it's due to studio ownership, but more due to a different type of ownership: personal items.
Yes, Apple has been a long-time favorite of movie studios. They likely have dozens of bits of the kit lying around, from actors'/producers' laptops on set to full workstations for review, etc. Why nip down to the corner Apple Store to pick up an atrociously expensive piece of kit when you can just grab one from a nearby backpack or neighboring room? It's product placement due to convenience.
Last I checked, a full apt-get or yum update for a "fresh" Linux install ran into the 700MB range. Surely Linux is better written and wonderful too, right?
The ISOs for SP1-equipped versions of Windows 7 and Server 2008 have been on the MS licensing portal for the past week already, just as an FYI. :)
Data rates are huge rip-offs, definitely. Unless one is keen on using their tablet while driving or at a technophobe's home, I can't really see many other places that don't have WiFi. (Don't say work; you should be working anyway, right?)
My ideal tablet would be sans-3G/4G. I just can't justify the trickle of data. My email will update when I'm back in a WiFi spot TYVM.
The idea of hiding the URL is ludicrous at best. Sure, you get to see it (briefly) while the page is loading, but on a decent broadband connection that's all of 2 seconds, if not faster. Seeing the URL is an important anti-phishing check. Even on Google's own search results, the "URL" (the bit of text below the description) says one thing but pulls up something completely different (common for phishing/scareware sites). These pages load in less than a second in some cases too. They're made to look like part of your OS (like a window or popup), and having the "webpage" go pseudo-full-screen would just exacerbate the issue. Not to mention phishing sites that send you to "http://bankofamerica.corporateportal.tail.ru" or somesuch. By the time they get halfway across the URL, the page is loaded and the URL is gone, and that's for us keen-eyed types who actually glance at the URL during loads. Then there are the bait-n-switch URLs which start loading a page then quickly shuffle you off to some odd URL (even legitimately). I've caught many a website doing this, primarily for "referral" credit or somesuch, but the "store" I thought I was browsing ends up dumping me into a no-name marketplace.
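The "bankofamerica.corporateportal.tail.ru" trick is exactly why the URL matters; a minimal sketch of the check a reader (or a filter) is doing mentally, using a naive last-two-labels heuristic for the registered domain (real tools use the Public Suffix List):

    from urllib.parse import urlparse

    def registered_domain(url):
        host = urlparse(url).hostname or ""
        return ".".join(host.split(".")[-2:])   # naive: keep only the last two labels

    print(registered_domain("https://www.bankofamerica.com/login"))                 # bankofamerica.com
    print(registered_domain("http://bankofamerica.corporateportal.tail.ru/login"))  # tail.ru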
Leave the URL, even if it's as a status bar at the bottom of the screen. And leave it that way by default. People won't ever become security-aware if we keep abstracting stuff away from them as a default.
"and partitioned the SSD into drives C and D, with a clear demarcation between my Windows 7 system in drive C and my data in drive D"
And how did you do that? Pad the "space between" with "sectors"? Data is not stored sequentially on an SSD; that's hard-disk-drive territory. Common SSDs likely have 16 flash chips, with 5-10 "channels" to read/write to those chips. The data is more likely to be physically stored RAID0(ish) style as opposed to sequentially on one chip. That's not counting the fragmentation that will occur as the drive is used, due to the copy-on-write methods of wear-leveling. In the end, there is no "clear demarcation" except logically (in your head and in the OS). SSDs don't even have sectors; those 512-byte blocks are simply emulated, just like any other sector/track concept. That is why page and block alignment is so important for getting optimal performance out of your SSD.
Best thing to do? Use TrueCrypt whole-disk encryption from the start. Your data would never have been written to the drive unencrypted at any time, so you won't have to worry about it lingering around. The only things likely to be vulnerable are your network configuration and the bit of browser history it takes to get on the network and download TrueCrypt right after your OS install.
wear leveling "hidden space" is in fact extra storage. SSDs are a particular creature. Data tends to be a copy-on-write setup, so when you overwrite your file, the new copy (parts of it at least) end up elsewhere on the drive and the old data gets flagged as available (whether it gets used or not is based on the write-count). That "elsewhere on the drive" can be within the CONCEPTUAL capacity of the drive, or land in that over-provisioned wear-leveling space. The controller doesn't care, it just doesn't want you to fill up the full 120GB of your "100GB" drive, because it is greedy and wants to maintain peak performance with its wear-leveling.
Storage locations on an SSD are a moving target, and that is the point MANY people (including these researchers) seem to forget. They're talking about traditional HDD "shred" techniques and then getting shocked that not even a "defrag" overwrites the data. They tried sequential overwrite methods, which, with the above explanation of over-provisioning and a bit of brain power, one would realize are ineffective (toy model below).
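A toy model of why a logical overwrite can miss: the flash translation layer remaps every write to a fresh physical page, so the "overwritten" data is still sitting in its old page until garbage collection eventually gets around to it. Grossly simplified; every detail here is an assumption:

    # Toy flash translation layer: logical block -> physical page, copy-on-write.
    class ToyFTL:
        def __init__(self, physical_pages):
            self.flash = [None] * physical_pages   # physical pages, incl. over-provisioning
            self.map = {}                          # logical block -> physical page
            self.next_free = 0

        def write(self, lba, data):
            self.flash[self.next_free] = data      # new data lands on a fresh page
            self.map[lba] = self.next_free         # old page is merely marked stale
            self.next_free += 1

    ftl = ToyFTL(physical_pages=8)
    ftl.write(0, b"secret")
    ftl.write(0, b"\x00" * 6)                      # "overwrite" logical block 0 with zeroes
    print(ftl.flash[0])                            # b'secret' -- still physically present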
A bit of intelligence shines through when they say the best way to "sanitize data on SSDs was to use devices that encrypted their contents." Bingo. Many SSD /drives/ do this. Granted they have a point about purging the crypto keys, but that's easy enough by grinding the chippery, or the thermite option.
Lastly: "Furthermore, there is no way to verify that erasure has occurred (e.g., by dismantling the drive)." is utter bullocks, as I can tell straight away that a drive that's had its flash chips melted by thermite (or perhaps dissolved in an acid bath) is cleanly "erased" (in proper Ahhh-nold fashion via "Eraser").
Oh, and a P.S. for Mr. write-spam monkey: when an SSD "fails" due to write exhaustion, the last-written data remains in the cell, perfectly accessible to be read. If an entire block or even chip has failed in your SSD, unbeknownst to you, the data may still be accessible and no amount of continual rewriting will destroy it. Fail 4 u :)
"The main problem seems to be that they want the visibility of the app store but don't want to have to pay 30% of their income to Apple."
No, they don't want the visibility of the App Store. The problem is that they MUST use the App Store in order to distribute their app in the first place. So no, no App Store envy. Unless Apple allows other methods to get apps on their devices? Don't mention jailbreaking, because then you'd just be proving the point.
"If you can't support that 30% then write your app for something else"
Yes, they likely will just jump the sinking ship and go with Android. However, the problem we face isn't just having to pay the Tax; it's a monopoly. In dealing with foreign and domestic products, countries place import taxes and such on foreign goods to ensure domestic (read: Apple) products sell better/make money as opposed to "cheaper" foreign (read: competitor) products. You must consider that Apple now has a music store, a book store, and a newspaper. It's rumored that they'll have a music subscription service. Do all of these services have to inflate their prices or lose profit margin due to the tax? Nope. So, get The Daily for $5/mo or get the New York Times for $6.50/mo. The store-brand principle suggests people will buy the cheaper "similar" alternative when presented with both. This 30% hike is a way to drive higher sales to their home-brew services, while also gouging a cut off people that still use the alternatives. Apple is using their market control to gain more revenue. Sound like monopolistic practices? Yeah, I thought so too.
The compiler, true, does some guesswork at potential parallelism, but the programmer has to write his program threaded to begin with to take full advantage of parallel processing. The nice thing about threads: I can run 5 threads in one program on a single-core computer and get normal performance (minus overhead). But I can run the same 5-thread program on a quad core with HT, and the OS (key word) will assign each thread (hopefully) to the various cores available to it, essentially allowing each thread to run in parallel as opposed to time-slicing on a single core. This is why a recompile isn't necessary: the previous Itanium chips were multi-core already, thus the compiler already optimized for multi-core. More cores simply provide more places for the OS to assign threads. The program itself should (SHOULD!) be intelligent enough to determine the max number of true threads it can take advantage of (rough sketch below).
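The "same program, more cores" point in code: a minimal sketch where the worker count comes from the OS-visible core count, so the identical program just spreads wider on a bigger box. It uses processes rather than threads because of Python's GIL, but the scheduling idea is the same; the pool-size heuristic is an assumption:

    import os
    from concurrent.futures import ProcessPoolExecutor

    def crunch(n):
        return sum(i * i for i in range(n))        # stand-in for real work

    if __name__ == "__main__":
        jobs = [2_000_000] * 16
        # One worker per core the OS reports; on a single core this still runs,
        # just time-sliced instead of in parallel.
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            results = list(pool.map(crunch, jobs))
        print(len(results), "jobs done across", os.cpu_count(), "cores")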
I do agree with the concept of ARM in a server environment, though. It likely would have happened already if there weren't roadblocks. However, it will take a bit of engineering to stick ARM cores in an environment worthy of HPC. Perhaps they'll adopt HyperTransport for inter-core/CPU communications?
Welcome to the Apple sword-of-Damocles world, Facebookers :) Of course, the "Simple. Don't use Facebook" comment is utter rubbish, for the same reason "Simple. Don't use Apple" is: it won't stop anyone. Well, that and it's not entirely feasible anyway: both Apple (yes) and Facebook offer merit in their wares. Whether that merit is worth having said proverbial sword looming over oneself is another matter altogether.