So, what happens if the photons get lost on the way?
"I knew I shoulda taken that left turn at Albuquerque!"
... system hibernation to me.
Excerpted from the article, with relevant points emphasised w/ asterisks:
"The method for a quick boot process includes the steps of performing a power-on self test (POST) operation when a personal computer system is powered on or a reset button is pressed; performing a normal boot process after the POST operation;
***saving the contents of memory and the status of the attached devices to a hard disk***;
checking if a reboot is requested;
***restoring the saved boot configuration information from the hard disk, after POST is completed***
during the reboot process;
***checking whether or not an initial device configuration file and/or an automatic batch file were changed***;
and executing commands in the two files and
***saving a newly created boot configuration information to the hard disk***
for future boot."
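(Purely for illustration, here's my reading of that claimed flow, sketched in Python; the snapshot file name, the helper functions, and the guess that "the two files" are CONFIG.SYS and AUTOEXEC.BAT are mine, not the patent's:)

```python
# Illustrative sketch of the patent's claimed quick-boot flow. Everything
# here is a placeholder, not a real firmware API.
import os, pickle

SNAPSHOT = "bootsnap.pkl"                        # illustrative snapshot file
CONFIG_FILES = ["CONFIG.SYS", "AUTOEXEC.BAT"]    # my guess at "the two files"

def config_mtimes():
    """Fingerprint the config/batch files so we can tell if they changed."""
    return {f: os.path.getmtime(f) for f in CONFIG_FILES if os.path.exists(f)}

def save_snapshot(state):
    """Stand-in for 'save memory contents and device status to hard disk'."""
    with open(SNAPSHOT, "wb") as fh:
        pickle.dump({"state": state, "mtimes": config_mtimes()}, fh)

def boot():
    # POST would happen first in every case; it's elided here.
    if not os.path.exists(SNAPSHOT):             # first boot: the slow path
        state = "state built by a full, normal boot"
        save_snapshot(state)
        return state
    with open(SNAPSHOT, "rb") as fh:
        snap = pickle.load(fh)
    if snap["mtimes"] != config_mtimes():        # config/batch files changed?
        state = "state rebuilt by re-running the two files"
        save_snapshot(state)                     # save new config for next boot
        return state
    return snap["state"]                         # quick path: restore snapshot

print(boot())   # first run: the slow, normal path
print(boot())   # second run: quick boot from the snapshot
```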
Windows 98 supported ACPI (Advanced Configuration and Power Interface) -based hibernation (though not very well, unless every driver on the system was WDM-compliant), and even Windows 95 had a "pseudo-hibernation" feature known at the time as "Suspend-to-Disk," back in the ol' APM (Advanced Power Management) days...
Noted and agreed.
However, my comments were not meant to include legal "white-hat" cracking, in which the intruder is performing research on his/her own systems, or on systems he/she was contracted to maintain.
Many would say that cracking a system (that the intruder does not own or maintain), intentionally doing no damage, then notifying the owner that there's a problem with its security falls somewhere in between the "white-hat" and "gray-hat" camps, while others would argue that such behaviour is a fundamentally "gray-hat" activity, if not "dark gray."
I would also venture that most network security admins regard cracking a system and leaking its contents (or defacing a web page) -- regardless of motivation -- as a wholly "black-hat" operation.
Simple script-kiddie DDoS attacks are "black-hat" from a motivation standpoint, but in my mind hardly rate as a legitimate demonstration of "31337 $|<i112"...
Some people support what LulzSec/Project AntiSec are doing, and argue that making examples out of high-profile targets will help spur others to implement strong IT security policies and procedures.
Other people are against what LulzSec/Project AntiSec are doing, and indicate that it doesn't matter what the underlying motivation is; intruding upon a protected system and releasing the data it contains is trespass and theft at best, and cyberterrorism at worst.
Either way, one thing is for sure: If LulzSec/Project AntiSec keeps things up, eventually the good folks in the UK are going to see a return of Wacki Jacqui-esque rhetoric, with talk of IMP, massive data silos, mandatory ISP monitoring, and everything else that comes with it. People will demand that the Internet be made "safe," resulting in Parliamentary knee-jerk reactions that turn the UK into an Orwellian state.
The risk depends on the mode of attack, and the modulation and encoding used to transfer the data:
-- -- Wikipedia: Near Field Communications ("Security aspects" section)
-- -- -- -- http://en.wikipedia.org/wiki/Near_Field_Communication#Security_aspects
That said, no RF-based transaction system is immune to eavesdropping at a distance (Van Eck phreaking):
-- -- Wikipedia: Van Eck phreaking
-- -- -- -- http://en.wikipedia.org/wiki/Van_Eck_phreaking
Whether the collected data itself is useful is another matter: If strong encryption was **properly** used to secure the transaction, then the risk would be very low; however, we have seen fairly often how strong encryption is only properly implemented after something bad happens that gets people's attention.
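(For the record, "**properly** used" means, at minimum, authenticated encryption with a fresh nonce for every transaction. A minimal sketch of the idea in Python, using the pyca/cryptography library; the transaction payload and header are of course made up:)

```python
# Minimal sketch of "properly used" transaction encryption: AES-GCM with a
# fresh random nonce per message, so an eavesdropper captures only ciphertext
# and any tampering is detected on decryption. Not a full NFC protocol.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # provisioned out-of-band
aead = AESGCM(key)

nonce = os.urandom(12)                      # MUST be unique per transaction
plaintext = b"debit GBP 2.50, terminal 42"
ciphertext = aead.encrypt(nonce, plaintext, b"txn-header")  # header is bound

# Receiver side: raises InvalidTag if the ciphertext was altered in transit.
assert aead.decrypt(nonce, ciphertext, b"txn-header") == plaintext
```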
In the novel, Michael Crichton describes (with a certain amount of florid fanfare) the implications of just this situation:
-- Step 1. Make genes patentable, thereby making them "property" subject to Eminent Domain:
-- -- Eminent Domain in the United States:
-- -- -- -- http://en.wikipedia.org/wiki/Eminent_domain#United_States
-- Step 2. Allow State Universities to enact Eminent Domain to harvest genes from individuals of scientific interest (such as "HIV elite controllers" -- individuals who demonstrate a remarkable natural resistance to HIV, and do not develop AIDS after exposure to the virus).
-- Step 3. License the harvested genes and related University "research" to pharmaceutical companies for Fun and Profit.
Just for my edification: is an Imperial alcohol pint different from a Standard pint?
Over here, in the US, a (US Standard) pint is 16 ounces, with **four** pints equal to 64 ounces.
Also, is "proving sobriety to the barman" handled automatically, via breathalyzer, or do you have to do a heel-toe-walk for the guy behind the counter?
About the time device manufacturers started moving from Nickel Cadmium [NiCd] and (early) Nickel Metal Hydride [NiMH] to Lithium Ion and Lithium Polymer...
Lithium-based batteries have a significantly higher energy density per unit mass than the Nickel-based batteries, but they are also constructed from chemicals that are much more volatile, and so require active safety measures (such as charge control and safety circuits) to prevent criticality excursions...
... may not be an impossible result, if the chip controls (to a certain extent) the charging and battery safety circuitry, and can be hacked so voltage or current detection thresholds are skewed appropriately.
For example (and very simplistically), your typical, properly-maintained, not-worn-out lithium-ion battery cell is charged to around 4.2 volts. Once the 4.2 volt threshold is reached, charging current will begin to drop. When the charging current drops to about 3% of the nominal charging current, the charger will usually exit its continuous-charge mode, and will either wait until cell voltage drops to a certain level before starting a new charge cycle, or will trickle-charge the cell intermittently using a timer.
If the chip being discussed controls charging cycles and safety, and its detection thresholds can be overridden so that it (hypothetically) reads the 4.2 volt full-charge threshold as 3.9 volts, and tells the charger to keep pushing a 100% nominal charge current into the battery even though it is already fully charged, the battery **could** conceivably overheat, rupture, and catch fire from the abuse.
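(A toy simulation of that failure mode, for the curious; the numbers and the "cell model" are wildly simplified, and the skew parameter is purely hypothetical:)

```python
# Toy simulation of the CC/CV charge-termination logic described above.
# Real charge controllers are analog/firmware hybrids with many more
# safeguards than this; the point is only the effect of a skewed reading.

FULL_CHARGE_V = 4.2      # normal full-charge threshold (volts)
NOMINAL_I = 1.0          # nominal charge current (normalised)
CUTOFF_FRACTION = 0.03   # leave CV mode when current falls to ~3% of nominal
ABUSE_V = 4.5            # toy stand-in for "cell is now being abused"

def charge(read_voltage_skew=0.0):
    """Return the cell voltage at which the charger stops pushing current."""
    v, i = 3.0, NOMINAL_I
    while v < ABUSE_V:
        measured_v = v - read_voltage_skew    # a hacked chip under-reports
        if measured_v < FULL_CHARGE_V:
            v = round(v + 0.05, 2)            # constant-current phase
        else:
            i *= 0.5                          # constant-voltage phase: taper
            if i <= CUTOFF_FRACTION * NOMINAL_I:
                return v                      # charger terminates normally
    return v                                  # never saw "full": overcharged

print(charge(0.0))   # 4.2 -- terminates at the proper threshold
print(charge(0.3))   # 4.5 -- 4.2 V reads as 3.9 V; cell driven to abuse level
```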
Not something I'd like to encounter, if I have a habit of actually using my laptop on my lap, such as on the train while I'm commuting to/from work...
... I get along great with my (1st generation) Palm Pre (Sprint/Nextel), and really love webOS.
However, when it comes to tablets, the lack of an SDHC or microSDHC card slot is a deal-breaker for me. (Curiously, for phones, not having a microSDHC card slot doesn't bother me... Mostly because I actually tend to use my phone more as a phone than a portable media/Internet consumption device.)
I had an almost Pavlovian response when the Samsung Galaxy Tab 10.1 was finally announced, but then I heard that it didn't offer an SDHC or microSDHC slot. The Motorola Xoom does offer such capability, but seems to be a "rushed" product, design-wise, and fails to hold my interest due to a lack of polish and (perhaps mistakenly) perceived stability problems.
My concern with regard to this scheme is that since a key pair is linked to a specific email address, miscreants and/or tyrants can sift through traffic data, access logs, and other information to mount correlation attacks and gather evidence establishing patterns of behaviour: Every time my BrowserID is used to log in to a service, my public key is retrieved from a third-party server and used to verify a signature generated with the private key held by my browser (the private key itself never crosses the wire). By correlating the two events in time, an interested party can easily determine when and where my computer is used to access the service being monitored. It should be noted that this interested party can mount correlation attacks against existing "enter your password" systems as well.
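(Roughly, the cryptographic step looks like the following sketch, in Python with the pyca/cryptography library; I've swapped in a plain challenge-signature exchange and omitted BrowserID's actual certificate/assertion format:)

```python
# Minimal sketch of the verify step: the browser proves possession of the
# private key by signing a challenge; only the signature crosses the wire.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Browser side: the private key never leaves the machine.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()        # published for user@example.com

challenge = os.urandom(32)                   # issued by the relying site
signature = private_key.sign(challenge)      # sent back instead of a password

# Site side: fetch the public key for the email address, then verify.
public_key.verify(signature, challenge)      # raises InvalidSignature on forgery
```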
Also, since my private key is "stored with the browser," the scheme only provides as much security as supplied by the physical environment surrounding my computer, and, if used, any folder/file, keychain database, or full-disk encryption that has been implemented within my computer itself.
Thus, on the whole, it looks like that the BrowserID scheme doesn't really do all that much to enhance security and provide anonymity: The system is still just as vulnerable to various time-related attacks, and still depends on a well-protected physical environment to be secure as an authentication method. What it **does** do, however, is make it less cumbersome to **manage** my authentication info, which means that as Joe/Jane User, I may be more likely to use it in the first place.
IBM, after having stiffed the ASF by not backing Harmony in addition to OpenJDK, is now attempting to return to^H^H^H^H^H^H^H^H^H^H bribe its way back into Apache.org's Good Graces by donating Lotus Symphony?
Don't get me wrong... IBM has probably done more than just about anyone to protect Open Source projects (except maybe pre-Attachmate Novell, which heroically defended Linux from attack by SCO), and to foster their collective development into enterprise-grade software. And for that, IBM deserves quite a bit of praise.
However, IBM really let the F/LOSS (Free/Libre Open Source Software) community down when it threw 100 percent of its weight behind Oracle in the Java Wars. This is troubling, because it was hoped that Apache Harmony could eventually be shipped pre-configured with Tomcat as a "fully certified" JSP servlet platform. Since Oracle won't provide Apache with a TCK license under terms suitable for Apache's own use, it is unlikely that a certified pre-packaged Harmony/Tomcat servlet platform will ever be released (which would have gone a long way toward making Tomcat a lot more stable and easy to use).
I feel about the same as you do.
The Shuttle was a technological marvel for its time, and can do things that no other spacecraft can do,... But it's hugely expensive to operate and maintain, has proven to be rather finicky, and turned out in some ways to be a lot more fragile than anticipated.
Even so, I'll be sad to see it go...
I still believe that the world needs a reusable space-plane-like vehicle for ferrying cargo and people to and from LEO; the idea of throwing away a perfectly good booster stack -- engines and all -- every time you want to climb up the [gravity] well seems to be a wasteful way of doing things.
Elon Musk's Falcon 9 booster (launch) stage is intended (eventually) to be recoverable and re-usable:
-- Wikipedia: Falcon 9 (Section: Reusability)
-- -- -- -- http://en.wikipedia.org/wiki/Falcon_9#Reusability
... and if SpaceX can deliver on the necessary engineering, then they'll have a Good Thing going, at least as far as wasted material is concerned.
However, I am not sure that recovering and re-using a liquid-fueled stage is practical, given the amount of refurbishment and testing required to ensure galvanic corrosion and salt-water contamination didn't compromise the components after splashdown...
(Grabbing my jacket; gotta take a walk around the Vehicle Assembly Building before they shut out the lights...)
I bet they would...
Yes, sir, I'm sure they'd love to have my help in getting them a multi-million-dollar "cost of representation" (legal fees) award, while I get a, what, coupon for a twenty-five or fifty dollar widget of some kind.
I'm not against class action suits, per se; I believe that companies should be held to account for shoddy and/or negligently dangerous product design... But I also believe that it makes no sense to bring a class-action suit when the aggrieved parties know in advance that the bulk of any damages awarded (if any) will go to the trial lawyers, and not the individuals impacted by the defective product(s).
Class-action suits also tend to take much longer than first-party suits to adjudicate, resulting in an over-burdened civil court system and costly delays.
(Possible exceptions to this axiom are those actions brought against corporations in the Chancery Court of the US State of Delaware, which is noted for having a highly business-oriented legal system, and is one of the reasons a multitude of large US companies choose to incorporate there, even if their practical headquarters are located elsewhere. Too bad the suit was [had to be?] filed at the Federal level in New York; I expect the legal wrangling in this case could take at least a couple/three years to put to bed...)
... for illicit financial market and trading software is, unfortunately, a correct one:
**** Matt Asay's original take on the matter:
-- -- http://www.theregister.co.uk/2010/09/24/piracy_open_source_bsa/
-- -- -- Scrolling about 1/2-way down:
-- -- -- "While the BSA is concerned with paid-for, proprietary software, most of the world's software is not written by proprietary software firms, but instead by enterprises whose primary business is not software, but rather finance, pharmaceutical and so on. The software written by Morgan Stanley for Morgan Stanley simply isn't going to be pirated."
**** My response in the Comments to the above article:
-- -- http://forums.theregister.co.uk/forum/1/2010/09/24/piracy_open_source_bsa/
-- -- -- Again, about 1/2-way down:
-- -- -- "To use Morgan Stanley as an example: A slightly-off-center firm could "buy" a chunk of code from a disgruntled Morgan Stanley IT wonk, reverse-engineer the code to gain insight into Morgan Stanley's trading algorithms, and look for routines related to arbitrage transactions**. They could then design more efficient, lower-latency routines that take better advantage of price difference windows, thereby gaining a competitive advantage with regard to automated trades.
Never underestimate the power of (successful) industrial espionage."
**** Then on March 18, 2011, Dan Goodin of El Reg reported that a Goldman Sachs programmer got sent to the Big House for code theft:
-- -- http://www.theregister.co.uk/2011/03/18/programmer_sentenced/
... and now we have this guy (Chunlai Yang).
It's good that the Law Enforcement community is taking prompt action to keep such sensitive information from falling into the wrong hands. However, the questions I have regarding these affairs aren't "What were the alleged perpetrators trying to steal/sell?" or "To whom were they trying to deliver the stolen code?" but rather "What were their fundamental motivations?" and -- most importantly -- "What did they leave behind [in the systems they compromised]?"
Since so many of the world's major market Exchanges (and economies in general) are so software-driven, it really makes one wonder how easily an incident like the Flash Crash of 2010:
-- -- http://en.wikipedia.org/wiki/2010_Flash_Crash
could be triggered intentionally, for either personal gain or, more sinisterly, large-scale economic sabotage.
(Wasn't sure whether I should use Battle Stations, or Black Helicopters. Flipped a coin; Battle Stations won.)
Population of UK:
-- -- 61,840,000 (approximate)
-- -- -- -- Source: World Bank, World Development Indicators
Snoop Requests:
-- -- 552,550
-- -- -- -- Source: Sir Paul Kennedy
Doing the math, presuming One Snoop Request Per Person:
-- -- 61,840,000 / 552,550 = 111.92 (approximate)
This means that if authorities are requesting just one "snoop request" per person (which may be the case, if UK law allows for "open-ended" requests; I don't know, because I do not live in the UK), government minders have their collective eyeballs watching approximately 1 out of every 112 residents (about 0.9%).
If multiple snoop requests are initiated per individual, say 5 per person on average, then that still means at least one out of every 560 people is on the snoops' radar.
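(The sums, in Python, for anyone who wants to check my arithmetic:)

```python
# The arithmetic above, spelled out.
population, requests = 61_840_000, 552_550
print(population / requests)           # ~111.92 residents per request
print(requests / population * 100)     # ~0.89% of residents, at 1 request each
print(population / (requests / 5))     # one in ~560, at 5 requests per person
```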
Buggers the imagination, that does...
Any geodynamics folk out there who can explain how the central peak is formed from an impact event?
I had always presumed that the central peak was caused by a form of induced elastic compression and rebound generated by the tremendous forces in play, but the relevant Wikipedia article says otherwise:
-- -- Wikipedia: Complex crater
-- -- -- -- http://en.wikipedia.org/wiki/Complex_crater
and indicates the center cone (or cone ring, for very large impacts) is created by "a process in which a material with little or no strength attempts to return to a state of gravitational equilibrium."
Anyone care to elaborate?
... is the relative silence from Anonymous regarding LulzSec's recent forays.
Some accusations have been made (such as by Branndon Pike):
-- -- Fox News: Group Claims It Was 'Paid to Hack PBS...'
-- -- -- http://www.foxnews.com/scitech/2011/06/02/man-denies-paying-group-to-hack-pbsorg/
that LulzSec is a "splinter group" or otherwise affiliated with Anonymous.
Usually, when such pronouncements are made, Anonymous is fairly quick to file a response (in either confirmation or denial), such as it did with the original Sony PSN breach (in that case, a denial).
But ever since LulzSec appeared on the scene, it seems that Anonymous has intentionally "faded into the background," so to speak. I don't think it's a move to guard against "guilt by association," though; it's more tactical than that...
On a percentage of aggregate flight-hours basis, maybe.
But after the Akron, Macon, and Hindenburg incidents, interest in LTA aircraft dropped precipitously, before the technology even had a chance to mature. Then World War II came along, which fostered little in the way of LTA vehicle development, except maybe the creation of tethered barrage balloons to protect against nap-of-the-earth attacks. By the end of World War II, heavier-than-air aircraft became the standard method of transporting people and cargo, except in a few niche markets (such as advertising: the Goodyear, FujiFilm, Zurich, and MetLife blimps come quickly to mind).
So one could argue that lighter-than-air aircraft never got a proper chance to establish themselves as a practical means of transportation, and with such a small relative "sample size," such a comparison may not be scientifically valid.
It is unfortunate when any aviation accident takes lives and causes damage; I hope the passengers are OK, and offer condolences to the pilot's family.
Another unfortunate aspect of this accident is that, because blimp and airship accidents are so few and far between (from an absolute-numbers standpoint, since there are far fewer blimps/airships in service than heavier-than-air aircraft), people will likely compare this incident with other notable airship accidents, such as the crashes of the USS Akron (US Navy ZRS-4), USS Macon (US Navy ZRS-5), and, yes, the Hindenburg, thus reinforcing the notion that lighter-than-air craft are inherently more dangerous than their heavier-than-air counterparts.
How's this for a strategy to protect my data privacy and security:
For *all* organisations (Commercial, Governmental, Telcos, and Landline ISPs):
-- -- 1. Don't track my browsing activity with persistent Cookies/Flash LSOs/DOM storage.
-- -- 2. Don't store *any* of my account info in an unencrypted format.
-- -- 3. Don't require me to opt-out (as opposed to opt-in).
-- -- 4. Don't accept data from client web browsers without sending it through a string-scrubber first (see the sketch after this list).
-- -- 5. Don't use unencrypted sessions to perform *any* sensitive transactions (not just financial).
-- -- 6. Don't send GPS or other location data upstream without asking first.
For Landline ISPs:
-- -- 7. Don't perform deep-packet inspection to target advertising and/or manage traffic; respect the sanctity of my packets.**
For Governments:
-- -- 8. Don't snoop on what I do without a legitimate court order supported by concrete evidence.
There... Was that so hard?
** (General traffic management without packet sniffing, such as "pay $XX/month for YY Mbits/sec bandwidth" is OK by me. The more I pay, the more I get. How I use it is *my* business.)
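(Regarding point 4, a minimal sketch of the sort of "string-scrubbing" I have in mind, in Python; real sites should lean on parameterised queries and a vetted sanitisation library rather than anything hand-rolled:)

```python
# Toy illustration of point 4: never trust client-supplied strings. Escape
# HTML metacharacters before echoing input back, and use parameterised SQL
# instead of string concatenation.
import html
import sqlite3

def scrub_for_html(user_input: str) -> str:
    return html.escape(user_input)            # & < > " ' become entities

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

evil = "<script>alert(1)</script>'; DROP TABLE users; --"
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))  # parameterised
print(scrub_for_html(evil))                   # now safe to render in a page
```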
> "There. Fixed that for you."
Actually, Apple didn't license the technology from Xerox, and after Apple sued Microsoft for copying the GUI paradigm, Xerox sued Apple, claiming it was first (which it was):
-- -- University of Texas archive of the NYT: Xerox vs. Apple: Standard 'Dashboard' Is at Issue
-- -- December 20, 1989:
-- -- -- -- http://www.me.utexas.edu/~me179/topics/copyright/case2articles/case2article6.html#suit
Unfortunately for Xerox, the suit was dismissed on Statute of Limitations grounds:
-- -- Wikipedia: Xerox Star
-- -- -- -- http://en.wikipedia.org/wiki/Xerox_Star#Legacy
-- -- -- -- "Xerox did go to trial to protect the Star user interface. In 1989, after Apple sued Microsoft for copyright infringement of its Macintosh user interface in Windows, Xerox filed a similar lawsuit against Apple; however, it was thrown out because a three year statute of limitations had passed. (Apple eventually lost its lawsuit in 1994, losing all claims to the user interface)."
... is the sincerest form of flattery.
But in this case, I'm not so sure.
This feels like the "Apple Mac OS vs. MS Windows GUI War" all over again, except now, both companies can afford a lot more (and a lot more expensive) lawyers.
Way back when, Apple was ahead in user-friendliness, having swiped the mouse-meets-GUI paradigm from Xerox PARC's Alto workstation. Microsoft soon jumped on the bandwagon, and answered with Windows. By making its platform more hardware-agnostic, MS took advantage of PC vs. PC competition, which drove down commodity hardware prices, and Windows ended up outselling Apple by a wide margin. (Apple then got its knickers in a twist, and sued Microsoft, claiming MS copied the overall look-and-feel from Apple. Which it did, never mind that Apple got the idea from PARC first.)
Flash-forward to today. Apple has again gained the lead in a vital, high-growth market, by focusing on the nascent techno-hipster demographic. It learned that "The User Experience" is key, and set the craftsmanship bar very high with the original iPhone (or iPod, depending on how far back you peg the beginning of the "Apple Renaissance"), product packaging included. It didn't matter that the iPod/iPhone/iPad ecosystem was a walled garden. The devices looked cool, and were fun to use. And that's what mattered.

Microsoft is again answering by aping the Cupertino Fruit. Only this time, I think, MS is floundering. Microsoft, being the stodgy old geezer, didn't get it, so now it's trying to play catch-up by copying Apple's game. Except that Microsoft started about two years too late, and has little in the way of inspirational leadership and/or product vision. Microsoft's strategy of "imitate when you can, innovate when you can't," which used to work back when things evolved more slowly, doesn't cut it in today's fast-moving mobile market.
Which is a pity, because I started to see some flashes -- some faint, tiny twinkles -- of hope once Windows 7 was released, because it really is (from a usability and stability standpoint) a very good general-purpose desktop/laptop OS. Its security and process isolation model still needs some improvement, but it's orders of magnitude better than XP. And that's saying something, given the fact that I'm a hard-core GNU/Linux user, and don't have any Microsoft-based OS installed at home.
... how will they know that I've given my password to someone else?
If I have a NetFlix client on both my phone and laptop, then it is entirely possible that I can have multiple NetFlix sessions open at the same time, and from different external (outside-the-home) subnet addresses. For example, my laptop could be hooked into NetFlix through my broadband ISP, while my phone could be connected to NetFlix through my mobile provider's network.
So, if I have my laptop at a friend's house, and we're watching a movie over there, and my wife/husband/child/significant other is watching a movie at home through a set-top box, is that Theft of Service?
From a technological perspective, the solution to the problem is simple: allow only one active session per account. Period. There's no reason to enact yet another "clap 'em in irons" law.
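(The policy is almost embarrassingly small in code; a hypothetical sketch, with all names mine:)

```python
# Hypothetical sketch of "one active streaming session per account": starting
# a new session simply invalidates the previous one.
active_sessions: dict[str, str] = {}    # account -> current session token

def start_session(account: str, token: str) -> None:
    active_sessions[account] = token    # any earlier session is superseded

def is_session_valid(account: str, token: str) -> bool:
    return active_sessions.get(account) == token

start_session("alice", "laptop-1")
start_session("alice", "settop-2")             # laptop session is now dead
print(is_session_valid("alice", "laptop-1"))   # False
print(is_session_valid("alice", "settop-2"))   # True
```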
Great Deity, what a political football the Internet has become...
I think both Intel and tablet/slate aficionados are missing the mark here. It's not a question of "tablets vs. laptops," but one of convergence.
Laptops are old hat, and while they are still the preferred method of computing for mobile businesspeople, for many individuals, a tablet/slate is suitable for everyday tasks. For example, do you really need a device with a full keyboard and hardware pointing device (touchpad/pencil-eraser joystick/bluetooth mouse) to read an email and type "Yes, go ahead, place the order" in response, or to look up market data on your favourite financial website?
I believe it's generally understood that taking a laptop and making it into a tablet has probably failed, insofar as market penetration is concerned (although tablet-convertibles are quite popular in certain niche markets, such as the medical office records industry). Convertible laptop/tablet hybrids are too heavy to be held in a comfortable reading position for long periods of time, and their swivel/flip hinges aren't known for long-term reliability if the device is frequently changed from one mode to the other.
However, taking a native tablet and turning it into a laptop, by plugging it into a dock with full keyboard/mouse/external video support, shows a lot more promise. This model gives you the best of both worlds: the ultra-mobility of a device that lets you perform basic business tasks while travelling, yet can become a full-fledged computer when you return to "home base." In addition, lightweight "mobile docks" could be carried in your luggage. This way, you can still have the convenience of using a full laptop-like computer in your hotel room, while still carrying just the tablet to that meeting at the customer's office. The first device falling into this category is likely to be the Asus Eee Pad Transformer:
-- -- ASUSTek: Eee Pad Transformer TF101:
-- -- -- -- http://www.asus.com/Eee/Eee_Pad/Eee_Pad_Transformer_TF101/
-- -- Engadget: ASUS Eee Pad Slider and Transformer arrive for those that can't imagine using a tablet without a physical keyboard:
-- -- -- -- http://www.engadget.com/2011/01/04/asus-eee-pad-slider-and-transformer-are-here-for-those-that-can/
Some may also include the Motorola Atrix in the "dockable tablet" category, even though it is designed as a smartphone:
-- -- Dvice: This Motorola smartphone dock is secretly a full-fledged netbook
-- -- -- -- http://dvice.com/archives/2011/01/this-motorola-s.php
That's **exactly** what I was thinking, and is a pretty elegant way of describing one of my "cornerstones" of GUI-friendliness: I should be able to do anything I need to do to manipulate the desktop, without having to set down my cup of coffee (or mug of ale) and use both hands to do it...
Definitely drink to that one...!
:-)
... and lands in Playmobil City Life / Fisher-Price Play Family Village / Weebleville.
I've tried Gnome Shell, and have tried to like it... But I just can't. It seems to me that GNOME.org has decided that the whole "desktop" metaphor is broken in some fundamental way, and that the best way to fix it is to completely ditch established Human Interface Guidelines developed through years (or even decades) of ergonomic and semiotic research.
Sure, Gnome Shell **looks** slick, but it lacks flexibility, and takes away much more than it brings to the computing world: It has no customisable panel(s) (it does have a top panel-like bar, but you can't do much with it) or panel applets; no native, always-visible panel-based task switcher (the Gnome Shell "Dock" extension doesn't count; there are fundamental differences of behaviour between docks and task switchers); is too rigidly designed around a "one instance per app" paradigm; and (in my experience) has flaky multi-monitor support.
Unity has, IMHO, more promise, but still isn't ready for prime-time (Canonical should have waited another six months to one year before releasing it as the default desktop for Ubuntu), and also lacks a certain amount of flexibility and customisability. Things may get better if/when Canonical rewrites Unity to use GTK+ 3.x, because GTK+ 3.x is quite a bit cleaner and more modular than GTK+ 2.x. (Right now, the "standard" Unity interface is written as a Compiz plugin and uses GTK+ 2.x, while the "non-compositing/non-accelerated" Unity interface uses Qt 4.x.)
So, where to from here, then? For me, unless the "Classic Gnome" desktop is forked and/or re-written under GTK+ 3.x, I'll probably be migrating to Xfce. I've tried KDE SC 4.x (since version 4.3, it has been quite stable), and it has a lot of cool features, but is still a bit too resource-heavy for my tastes, and has a little too much of a "cartoonish" look to it (especially with regard to its native window decoration and icon sets).
(A note for completeness: "GNOME Shell" does not equal "GNOME 3." "GNOME 3" is the third-generation GNOME/GTK+ framework. "GNOME Shell" is a user interface and window management system based on GNOME 3 and GTK+ 3. Canonical has indicated that in the near future, Unity's codebase will be migrated to GNOME 3 and GTK+ 3, to take better advantage of GTK+ 3's modularity.)
I'm surprised Lotus Notes/Domino wasn't mentioned in the article, at least where "document-oriented" database storage models are concerned.
The Lotus Notes NSF ("Notes Storage Format," or "Notes Storage Facility," depending on whom you ask) database architecture, where records are stored as variable-length document objects that support OOP-like features such as object inheritance, has only been around for, what, almost 18 years now? (This time span is based on the year in which Lotus Notes R3 -- which is often regarded as the first "modern" incarnation of Lotus Notes -- was released: 1994.)
OK, so maybe Notes/Domino NSF databases won't scale to the limits envisioned by the creators of MongoDB and CouchDB, but the root ideas behind the technology are anything but new...
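(For anyone who never met Notes, a rough sketch of the "variable-length document with inheritable items" idea, in Python; the item names and the fallback rule are illustrative, and bear no resemblance to the actual NSF on-disk format:)

```python
# Rough sketch of a document-oriented record with form-level "inheritance":
# each document is a variable-length bag of items, and unset items fall back
# to values inherited from a parent form, OOP-style.
from collections import ChainMap

form_defaults = {"Form": "Memo", "Priority": "Normal", "Encrypt": "No"}

# A document stores only the items actually set on it...
doc = {"Subject": "Q3 numbers", "Body": "See attached.", "Priority": "High"}

# ...and lookups fall through to the parent form.
view = ChainMap(doc, form_defaults)
print(view["Priority"])   # "High" (set on the document)
print(view["Encrypt"])    # "No"   (inherited from the form)
```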
I concede that the word "melt" may be a misnomer, the way I used it above, but I was speaking in generalities.
That said, the basic premise still holds: Embed an SMM in a phase-change substrate that limits the mobility and fixes the orientation of the SMM when the area surrounding it IS NOT being illuminated with the laser, and that allows freedom of (re-)orientation when it IS illuminated.
Thank you, also, for bringing HAMR to my attention. That one somehow slipped by me, but I've been focusing on Spintronics-based memory technologies...
Umm... So who's gonna swap that dead server hard disk drive, or replace that failing power supply, or open the telecoms closet for BT when the line goes down, then?
I get it... You'll re-assign a few of your customer-facing people to be back-office wonks!
Oh, wait...
I would say that the people are the legitimate representatives of the will of the people.
Most modern Western-style democracies and republics are based on "Rule By Consent of The People" (At least, in theory... In practice, maybe not so much.)
However, it would be (from a practicality standpoint) very, very difficult to regulate the Internet into submission.
Technology evolves too quickly for legislation to stay abreast; even the Great Firewall of China is riddled with easily exploitable holes, the 09F9 AACS encryption key controversy demonstrated the futility of battling the Streisand Effect, the "CTB Super-Injunctions" have been effectively nullified by Twitterers @large, and WikiLeaks is still in operation, despite attempts to limit its operations through various means. In short: "Can't stop the signal, Mal..."
The best way to "regulate" the Internet is to educate people, starting with Primary education, on how to use it responsibly and safely, show them how to protect their privacy and personal information while using online services, and demonstrate the dangers of carelessness.
I wasn't referring to China; I was referring to us here in the West.
That said, there can be no doubt that China's government has embraced (an admittedly very corrupt and twisted form of) capitalism... Otherwise, it wouldn't be the economic powerhouse that it has become.