"...tube travellers might be concerned about the one pressed against them..."
There's a flap for that:
-- http://www.difrwear.com/
My coat's the one with the copper microwire skein lining...
395 publicly visible posts • joined 5 Sep 2008
Disclaimer: This post is not intended to either defend or prosecute WikiLeaks, Julian Assange, and/or Bradley Manning.
In a post on an earlier article:
-- -- Comments to "Pro-Wikileaks Hacktivistas in DDoS Dustup with Patriot Contras"
-- -- http://forums.theregister.co.uk/forum/1/2010/12/08/wikileaks_assange_ddos_dustup/
-- -- "Flashback to the '70s"
I indicated that there are two very important words that need to be taken into consideration regarding any attempt to prosecute Julian Assange and/or WikiLeaks for their action(s) in publishing the diplomatic cables (allegedly) provided by Bradley Manning:
-- -- Pentagon Papers
-- -- http://en.wikipedia.org/wiki/Pentagon_Papers
For those too young to have lived through what is arguably the most famous of Cablegate's foreshadowing scandals, here's a quick recap:
In 1969, during a time of political upheaval fairly similar to what we're seeing today, a guy named Daniel Ellsberg copied a 4,100 page document (small by WikiLeaks' standards, but I digress) called
-- -- "United States–Vietnam Relations, 1945–1967: A Study Prepared by the Department of Defense"
while working at the RAND Corporation, a non-profit think-tank with high-level FedGov connections.
After sitting on the study for a while, Ellsberg contacted Neil Sheehan, a reporter at the New York Times, and handed over the kit in February 1971. The NYT tossed the issue around internally for a bit, debating the legalities of publishing the info, then started printing excerpts on June 13, 1971.
President Nixon was rather unconcerned, since the incidents described in the study occurred during the terms of his predecessors. However, the upper ranks of his Administration weren't so laissez-faire. After National Security Advisor Kissinger convinced Nixon to change his stance, Attorney General Mitchell used the Espionage Act of 1917 to obtain an injunction against the publication of further excerpts by the New York Times. The NYT appealed, and the case quickly climbed the judicial ladder and landed itself in front of the US Supreme Court:
-- -- New York Times Co. v. United States (403 U.S. 713)
-- -- http://en.wikipedia.org/wiki/New_York_Times_Co._v._United_States
The Supreme Court ruled in the New York Times' favour, saying that material provided in the Public Interest to a news organisation cannot be censored by Prior Restraint.
It was a landmark case, strengthening the foundations of the United States' Freedom of the Press. Daniel Ellsberg and the New York Times were hailed as heroes by some, and vilified by others, much the same as with Manning and Assange, today.
A quick comparison of the legal ramifications of Cablegate (Now) and the Pentagon Papers Affair (Then) provides the upshot to all this: In the United States, at least, Julian Assange, WikiLeaks, and the news organisations working with them are legally in the clear. By legal precedent of the United States' highest Court, the barriers against pre-publication censorship are very high indeed, and the Government needs the Mother of All Ladders to climb over them.
In a reply to my earlier post, one commenter said that (basically) Daniel Ellsberg and Julian Assange are nothing alike, and that Assange is "all talk and no walk," indicating that Ellsberg had a lot more backbone. The problem with this assessment, in my opinion, is that the commenter is (in my words) "comparing apples and oranges."
If you take a look at the parts being played by the various Cablegate cast of characters, Bradley Manning is Daniel Ellsberg, and Julian Assange/WikiLeaks is the New York Times. Assange can't be equated with Ellsberg, because Assange didn't steal anything. He was **given** the documents by another party, in much the same way that the New York Times received the Pentagon Papers from Daniel Ellsberg. Assange is a reporter, and WikiLeaks is his media outlet. If anybody "walked the walk" in this affair, it's Bradley Manning, should it be proven that he is the person responsible for collecting and leaking the documents to Assange.
Whether Manning and Assange/WikiLeaks are heroes or villains is not the point of my ramblings. Rather, my point is that by legal precedent, there's nothing to prosecute. In the United States, Prior Restraint against publication in the Public Interest is unconstitutional.
"THIS is why he's [Daniel Ellsberg] considered a hero. He walked the walk.
So far, Assange, et al, are only talking the talk."
That doesn't hold, because you're comparing apples and oranges.
**Semantically**, Daniel Ellsberg equates to Bradley Manning. Julian Assange equates to the New York Times.
In the Pentagon Papers affair, Ellsberg is the one who obtained access to the documents, the same way that Manning obtained access to the diplomatic comms in Cablegate. By the same token, Ellsberg **gave** the papers to the NYT, the same way that Manning **gave** the cables to WikiLeaks.
My comments are related to legal precedent regarding a journalist's right and privilege to publish material provided to them in the Public Interest, not as to whether a crime was committed by the person who (allegedly wrongfully) provided the materials to the journalist in the first place.
WikiLeaks is most certainly, by modern definition, a publisher of journalistic information. And Bradley Manning IS considered by some to be a Hero.
Disclaimer: This post is not intended to either defend or prosecute WikiLeaks or Julian Assange.
There are just two words I'd like to offer here, amid all the blathering about Assange's, WikiLeaks', and/or Mass Media's supposed guilt or innocence:
-- Pentagon Papers
What's that? Never heard of 'em? Well, to hear the way our politicians, news anchors, and other pundits go on about things, neither have they.
So I'll provide a quick recap, for the uninitiated:
A long time ago (1969), in a political climate pretty similar to today, a dude by the name of Daniel Ellsberg copied a 4,100 page document (small by WikiLeaks' standards, but I digress) called
-- "United States–Vietnam Relations, 1945–1967: A Study Prepared by the Department of Defense"
while working at the RAND Corporation, a non-profit think-tank with high-level FedGov connections.
After sitting on the study for a while, Ellsberg contacted Neil Sheehan, a reporter at the New York Times, and handed over the kit in February 1971. The NYT tossed the issue around internally for a spell, debating the legal ramifications of publishing the info, then started printing excerpts on June 13, 1971.
President Nixon was rather unconcerned, since the incidents described in the study occurred during the terms of his predecessors, but his Advisors and Cabinet weren't so laissez-faire. After Kissinger convinced Nixon to change his stance, Attorney General Mitchell used the Espionage Act of 1917 to obtain an injunction against the publication of further excerpts by the New York Times. The NYT appealed, and the case quickly climbed the judicial ladder and landed itself in front of the US Supreme Court:
-- New York Times Co. v. United States (403 U.S. 713)
The Supreme Court ruled in the New York Times' favour, saying that material provided in the Public Interest to a news organisation cannot be censored by Prior Restraint.
A quick comparison of the legal ramifications of Cablegate (Now) and the Pentagon Papers affair (Then) provides the upshot to all this: In the United States, at least, Julian Assange, WikiLeaks, and the news organisations working with them are legally in the clear. By legal precedent of the United States' highest Court, the barriers against pre-publication censorship are very tall indeed, and the Government needs the mother of all ladders to climb over them.
It was a landmark case, and was a foundation of the United States' Freedom of the Press.
So what's the problem, you ask? Simple:
*** In all of the news coverage being bandied about by the talking heads on Fox News, CNN, MSNBC, BBC News, C-SPAN, and other 24-hour news outlets, about the whole WikiLeaks thing, I haven't heard a SINGLE anchor or politician mention the Pentagon Papers and its subsequent legal precedent. ***
Amazing.
Some people say, "Those who don't study history are doomed to repeat it." I disagree. The saying really should be, "Those who do study history are doomed to ignore it."
Love to stay and talk more, but I gotta grab my coat, and head out to teach a History class...
That is one bizarre looking surface formation.
It looks more liquid than solid, the way that water's surface tension causes it to "fold" when a high-velocity, laminar wind blows across a large pool of water toward a gently-sloping shoreline.
According to the article's footnote, the winds are coming from the "north" (relative to the picture's orientation; the top of the picture may not actually be "true north"). But by the way the dunes are constructed, I would expect that the winds were coming in from the relative "east" (right side of the photo). If you take a close look at snow drifts here on good ol' Earth, the windward side of a snow drift is often banked more steeply than the leeward side. The Odyssey photo in the article shows dunes that are more steeply banked on the right than the left.
Any geoscience folks out there who could enlighten us?
Its selling point is that the system (i.e., the Netbook/OS as an integrated unit) does one thing and does it well: it provides access to content over the Internet.
And while I would be apprehensive about running Google Native Client code on my general-purpose desktop/laptop -- compiling a webapp down to x86 so it can run directly on the hardware gives me the shivers** -- the advantages provided by near-native execution speeds would probably be necessary to ensure adequate performance in webapps that actually do something useful.
Since the world's moving back to the mainframe paradigm anyway (i.e., centralised back-end, remote user interface), if properly implemented, I would venture that a ChromeOS netbook could be a very cost-effective way to provide access to in-house compute facilities to a roaming user base. For example, I would not be at all surprised if Panasonic came out with a ChromeOS-based Toughbook oriented toward the needs of industrial equipment technicians, field service engineers, and public utility linemen...
**Google's design scheme notwithstanding: The Google Native Client builds a sandboxed environment that uses three methods to enforce good code behaviour: privileged instruction trapping, x86 memory segmentation, and block-aligned branch targets.
-- -- The first method uses a code verification facility to filter and/or trap all "dangerous" instructions.
-- -- The second method makes use of a legacy x86 memory management facility that was once the mainstay of DOS-based memory managers: A block of memory is carved out of the available memory pool, which is then subdivided into segments. In x86 Real Mode or Virtual 8086 Mode, the memory manager allows the creation of segments fixed at 64KiB in size within the bounds of a 20-bit virtual address block. Code within a given virtual block can normally jump only to code within the same block, and can only access data in segments within the same block. In x86 Protected Mode, the memory manager allows for the creation of a contiguous virtual address space constructed from chunks of non-contiguous RAM. Protected mode segments can vary in length. As with RM or V86M, code can normally only jump to instructions and/or access data within its own virtual address space.
-- -- The third method forces all indirect jumps and branches (read: pointers) to land at the start of a 32-byte aligned block. The environment is also constructed in such a way that instructions cannot straddle the 32-byte alignment boundary. This is intended to ensure that code can't jump to an unsafe/unallowed instruction intentionally buried within an otherwise safe multi-byte instruction.
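The third method above can be sketched in a few lines. This is a toy model of the bundle-alignment idea (not the real NaCl verifier): instructions may not straddle a 32-byte boundary, and computed jump targets are forced onto a boundary by masking, the way NaCl sandboxes an indirect jump with an AND instruction before the JMP.

```python
BUNDLE = 32  # NaCl's block-alignment granularity

def instruction_ok(offset, length):
    """An instruction is valid only if it fits entirely within one
    32-byte bundle (it may not straddle the alignment boundary)."""
    return (offset // BUNDLE) == ((offset + length - 1) // BUNDLE)

def jump_target_ok(target):
    """Indirect jumps must land at the start of a bundle."""
    return target % BUNDLE == 0

def sandbox_target(target):
    """Force a computed target onto a bundle boundary by masking the
    low bits (analogous to `AND $0xffffffe0, %eax` before `JMP *%eax`)."""
    return target & ~(BUNDLE - 1)

print(instruction_ok(30, 4))   # a 4-byte instruction at offset 30 straddles byte 32
print(sandbox_target(70))      # a wild target of 70 is clamped back to 64
```

Because every legal landing site is bundle-aligned and no instruction crosses a bundle edge, a masked jump can never enter the middle of a multi-byte instruction.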
Even so, the x86 platform has a long and storied history when it comes to exploits, and it may be only a matter of time before the Native Client's shell (no pun intended) is cracked as well.
Not a fan of rolling releases, meself.
Updates tend to break too many things, because it becomes too difficult to test each app's/library's interaction with every possible "other version" of the apps/libraries installed on a user's system since the last major release "code freeze" (i.e., the point at which an ISO image is released).
What would be preferable, IMNSHO, is to make it a lot easier for users to add/subtract updates provided by LaunchPad PPAs through Ubuntu Software Center.
This way, users can test "bleeding edge" packages and send feedback, but still be able to revert to a consistent baseline, should they so choose...
The X-33 programme was cancelled when its fuel tanks kept fracturing when filled with cryogenic hydrogen and/or oxygen. The fuel tanks were constructed from composite materials to help reduce lift-off mass.
Apparently, the composites that were tried became exceedingly brittle at cryotemps, and their lack of elasticity at very low temperatures made it difficult to isolate the tanks from vibration transmitted from the engines by the fuel/oxidiser feed lines.
This vibration, combined with the lack of required elasticity, would cause the tanks to become friable along the composites' "grain", much the same way that it's easier to split a log into firewood along the wood-grain, as opposed to across it...
If the mobile companies are in charge of providing infrastructure for and managing near-field comm payments, then they'll have another revenue stream: transaction fees.
Whereas, if the credit card companies have control, and the NFC transactions are simply "bits passed upstream to the Internet," then the carriers won't be getting nearly as big a chunk of that pie...
NFC is bound to be a boondoggle, anyway... Cybercrooks have already figured out how to skim all manner of magstripe and RFID payments systems (no-name ATMs and point-of-sale terminals in corner shops/convenience stores come to mind); it boggles the mind that we'd create yet another avenue for ID theft.
Cash may be clumsy and inconvenient, but it's identity-agnostic, and doesn't rat out your account numbers when it's stolen...
Not sure I agree with that statement.
For one thing, cache, by definition, is a staging area; stale data isn't expected to hang around very long; i.e., once a particular chunk of data isn't considered "useful" any more (by whatever criteria the array controller uses to make such a determination), it's supposed to be flushed to disk. In so doing, one can keep the size of the cache -- which is expensive -- small (on a relative basis, compared to overall persistent storage capacity).
So let's say the server(s) connected to the controller keep reading and updating a very large sequential data set. The data set could be anything, perhaps a chunk of petroleum geo-prospecting data. This data set is large enough, in fact, to use almost the whole cache. Since the servers crunching the data keep updating the set, the chunk never gets flushed to disk; it is constantly held in cache. Since almost the entire cache is occupied by a single data set, I/O for other data sets ends up being heavily constrained. This might not be a problem, if the array is dedicated to a server cluster oriented toward a single task.
However, in the HPC/supercomputing arena, one is often dealing with multiple large data sets being processed in parallel, and competing for processor time and storage resources. In combination, these data sets could dwarf the size of the cache. Your cache becomes a bottleneck, because it isn't large enough to handle the competing storage access demands.
Better to build the whole array out of Flash, and (if necessary) use on-controller RAM for cache, methinks, than to use a flash cache with mechanicals hanging off the tail-end. Much more expensive, sure, but probably a lot less of a performance bottleneck for large sequential storage requests.
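The bottleneck described above is easy to demonstrate with a toy LRU cache (the block counts and workload mix here are made-up numbers for illustration, not real array-controller behaviour). One "hot" data set nearly as large as the cache, re-read in a loop, combines with a second workload to thrash the cache: between them they exceed capacity, so blocks keep getting evicted just before they're needed again.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache; capacity is counted in 'blocks'."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.store:
            self.store.move_to_end(block)   # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.store[block] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least-recently-used

cache = LRUCache(capacity=1000)

# A large sequential data set (e.g. geo-prospecting blocks) re-read in a
# loop, interleaved with a smaller competing workload.
for pass_no in range(3):
    for block in range(950):          # hot set: blocks 0..949
        cache.access(("hot", block))
    for block in range(200):          # competing workload
        cache.access(("other", block))

print(cache.hits, cache.misses)
```

With the two working sets totalling 1150 blocks against 1000 of cache, each sequential pass evicts blocks just ahead of their reuse, so misses dominate: the cache, not the disks, sets the pace.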
Sure.
If Iran can't enrich its own uranium, it would be forced to buy enriched uranium from someone else.
What better way to ensure a customer's loyalty, than to sabotage said customer's quest for market independence?
I suppose that Stuxnet, Zeus, Conficker, BredoLab, etc. are just slightly-annoying nuisances, then?
What sticks out like a sore thumb to me is that while the simulated attacks were designed to knock "critical services" (read: large Banking/Communications/Government/Industrial organisations) off-line, most of the criminal and economic damage comes from attacks on personal (i.e., Home/Small Business) infrastructure.
Large businesses with well-established IT shops and actual IT security budgets are harder to attack, whereas the unwitting Mr. and Mrs. Smith, who are often ill-prepared to deal with cybercrime, tend to be much easier targets.
It's death by a thousand (or in the case of society as a whole, many millions of) cuts. The economic hardship caused to the average citizen victim of cybercrime may be small (a few thousand dollars/pounds/euros per person, on average), but the cumulative effect is enormous.
Until we make computer security and safe Internet/personal computing practices a priority component of our primary/elementary school curriculum, very little is going to change. People need to be better educated about how to conduct themselves in an Internet-connected society.
No amount of "Internet snooping" legislation can fix this problem. The solution begins in school and at home.
... Across the stratosphere, a final message: "Give my wife my love..."
Then nothing more.
Far beneath the ship, the world is mourning.
They don't realise he's alive.
No one understands, but Major Tom sees...
"Now the Light commands."
This is my home, I'm coming home...
Not sure I like the idea of near-field comm payments.
Card skimming is already on the rise, with counterfeit readers attached to all manner of devices, from ATMs to video rental kiosks.
But at least with physical magstripe readers, there's a chance that you'll notice something isn't quite right about the device (i.e., the keys "look wrong" or the card slot "grabber" sticks too far out of the front of the machine).
With the security of many popular RF-based transit cards (MIFARE Classic, etc.) already in doubt, adding NFC credit/debit payments to the mix just doesn't seem like a good idea. Side-channel attacks are too difficult to spot; a directional, tuned, high-gain antenna can be mounted almost literally "anywhere" in the surrounding environment, making RF-based attacks very difficult to detect, even for experts.
> "Very little push would be necessary to bring it to the thirsty satellites that we rely on for comms, navigation, observation and so on."
No, but you'd need a pretty big push to slow it down once it got here, so it didn't:
1. Get snagged by the atmosphere and burn up
2. Plummet to the surface like a jumbo hailstone
--- --- or --- ---
3. Zip on past on a cruise to a less useful part of the Solar System.
The kinetic energy that must be shed to bring an object's relative velocity to zero is proportional to the square of that velocity (KE = ½mv²).** Even a small ice tug (say, 100 tons, give-or-take) moving just a few kilometers per second would need a pretty big reaction engine to slow the cargo down enough for insertion into a stable orbit.
** For purposes of discussion, zero relative to the orbital receiving station, at the point along the ice tug's path tangent to the receiving station's orbit.
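To put rough numbers on it (the 100-tonne tug mass is the assumed figure from above), here's the energy that has to be dumped at a few plausible approach speeds:

```python
def kinetic_energy_joules(mass_kg, velocity_m_s):
    """KE = 1/2 * m * v^2 -- the energy that must be shed to come to
    rest relative to the receiving station."""
    return 0.5 * mass_kg * velocity_m_s ** 2

mass = 100_000.0  # a 100-tonne ice tug (assumed figure)
for v_km_s in (1, 3, 5):
    ke = kinetic_energy_joules(mass, v_km_s * 1000.0)
    print(f"{v_km_s} km/s -> {ke:.2e} J")
# 1 km/s -> 5.00e+10 J
# 3 km/s -> 4.50e+11 J
# 5 km/s -> 1.25e+12 J
```

Tripling the approach speed means nine times the energy to shed, which is why the capture burn, not the initial nudge, dominates the propulsion budget.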
Wanna bet?
I expect that there are a bunch of financial market and commodities trading firms in less-regulated countries who would LOVE to get their hands on Morgan Stanley's programmed-trading innards.
To use Morgan Stanley as an example: A slightly-off-center firm could "buy" a chunk of code from a disgruntled Morgan Stanley IT wonk, reverse-engineer the code to gain insight into Morgan Stanley's trading algorithms, and look for routines related to arbitrage transactions**. They could then design more efficient, lower-latency routines that take better advantage of price difference windows, thereby gaining a competitive advantage with regard to automated trades.
Never underestimate the power of (successful) industrial espionage.
**For a description of what arbitrage is and how it relates to financial markets, here's the Wikipedia article:
-- -- http://en.wikipedia.org/wiki/Arbitrage
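The price-difference-window idea is simple enough to sketch (hypothetical quotes and function names; real programmed-trading code is vastly more involved): if one venue's asking price is below what another venue's buyers will pay, buying on the first and selling on the second locks in the spread.

```python
def arbitrage_profit(bid_b, ask_a, quantity):
    """If venue A will sell (ask) below what venue B will pay (bid),
    buying on A and selling on B captures the spread; otherwise no trade."""
    spread = bid_b - ask_a
    return spread * quantity if spread > 0 else 0.0

# Hypothetical simultaneous quotes for the same instrument:
profit = arbitrage_profit(bid_b=100.12, ask_a=100.05, quantity=10_000)
print(f"{profit:.2f}")  # 700.00
```

The windows are tiny and short-lived, which is why shaving latency out of routines like this one is worth stealing a competitor's code for.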
My jacket's the one with the smuggled flash-drive in the pocket... * wink *
A cursory examination of the code seems to indicate that disabling many (almost all?) of the browser features that Evercookie uses to store its persistent data would have the side-effect of breaking basic interfaces that would be necessary for the proper functioning of most modern web sites (especially AJAX/Web2.0 pages).
The use of CSS to embed the cookie into your history cache (line 580 and sundry, evercookie.js) -- if I'm interpreting things correctly -- is both ingenious and disturbing. Storing your browser history as data linked to a custom CSS attribute (line 785 and there-abouts, evercookie.js) is just as twisted.
Looks like the dev's got all the bases covered.
No doubt about it, this code is EVIL.
Big Brother, indeed...
Howdy,
The internationally accepted beginning of space, for most intents and purposes (especially Guinness Book of World Records-style record-keeping) is known as the Kármán line, which has been set at 100 kilometers (approximately 62 statute miles, or 54 nautical miles) above sea level by the Fédération Aéronautique Internationale.
The FAI's official definition of "space-based activities" can be found in the FAI Sporting Code handbook (PDF), on Glossary page 3 (PDF page 52 of 53):
-- http://www.fai.org/system/files/GS_2010.pdf
However, this definition may run afoul of the territorial proclivities of certain twitchy governments.
I would expect that Russia, China, and the United States probably fall into the "twitchy" category (being the three nations with anything close to "superpower" status in the post-Cold War world), along with North Korea, Iran, Pakistan, and India (which are countries with known aerospace and/or nuclear deterrent ambitions). Israel and South Africa may also belong to this group, depending on who you ask...
> Just because the data and code memory sections do not overlap does not mean it is impossible to attack a Harvard architecture machine.
I never indicated that attacking a Harvard architecture machine was impossible. I indicated that Harvard machines may be considered "inherently safe," depending on implementation.
Hence the " **may** be considered 'inherently safe' -- if properly implemented " bit.
It should be kept in mind that some of the more modern Harvard architecture machines are microcontrollers in embedded systems, which are (generally) more readily hardened against code infiltration and code rewriting attacks, by virtue of requiring the chip containing the software to be physically removed and mounted in an EEPROM programmer to modify the programs stored within.
However, even this level of protection only gets you so far. Not even a Harvard machine will stand up to a highly sophisticated attack, if the attacker can gain access to the processor's address and data buses, divert the bitstreams to an external system for processing and analysis, then inject specially-crafted data back into the system.
A given system can therefore only be considered "inherently safe" with regard to the purpose for which it is designed. A microcontroller implementation which is suitable for telco use, and considered "inherently safe" by the telecom industry, may not be suitable for running a nuclear missile launch control system. The device is thus "inherently safe" for one application, but not another.
> You're assuming that IBM/SunOracle by default would be "safe" by virtue of being different...
In this case, yes. Given an examination of the code, there seem to be a considerable number of explicit references to x86-32-specific and x86-64-specific registers and features (%rax, %eax, %ecx, %edx, %rsp, etc.), which would obviously not exist on POWER or SPARC architecture CPUs.
> The "heterogeneous target" excuse is also faulty, for example, the 10 year old bug in SPARC Solaris that allowed direct root login using telnet was re-introduced into Solaris 10
I disagree on this point. telnet is an application, and so is not inherently architecture-dependent at the source-code level.
The x86-32/x86-64 call stack translation subsystem, being a hardware-oriented component of the Linux kernel specific to the x86 architecture, **is** architecture-dependent at the source-code level. This makes it very likely that this bug would not be reproduced in Linux kernels compiled against other CPU types.
To clarify my earlier position: That is not to say that general-purpose computing systems with inherent safety don't exist. They do; they're just not very common.
One general-purpose computer design that **may** be considered "inherently safe" -- if properly implemented -- is a so-called "Harvard architecture" machine. This type of computer has physically separate data and code (program) buses and memories, so data can't grow to overwrite code, and code can't grow to overwrite data. The IBM ASCC/Harvard Mark I was the basis for this type of design. Modern examples of this kind of architecture include embedded systems based around AVR (Atmel Corp) and PIC (Microchip Technology, Inc.) microcontrollers.
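The "data can't overwrite code" property can be illustrated with a toy model (a deliberately simplified machine, not a real AVR or PIC): because program memory and data memory are separate address spaces, no data write, however wild, can alias an instruction.

```python
class HarvardMachine:
    """Toy model: program memory and data memory are physically separate
    address spaces, so no data write can alter an instruction."""
    def __init__(self, program, data_size):
        self.program = tuple(program)    # immutable code store
        self.data = bytearray(data_size)

    def write_data(self, addr, value):
        # Data addresses wrap within data memory; they can never
        # reach the program store.
        self.data[addr % len(self.data)] = value

machine = HarvardMachine(program=[0x90, 0x90, 0xC3], data_size=64)
for addr in range(1024):            # even a massive buffer overrun...
    machine.write_data(addr, 0xCC)
assert machine.program == (0x90, 0x90, 0xC3)  # ...leaves code untouched
```

On a von Neumann machine the equivalent overrun could scribble over the instruction stream itself, which is exactly the opening many code-injection attacks exploit.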
> The system is inherently safe anyway.
Huh?
Not sure what you mean by that... No computer operating system, application, or platform can be called "inherently safe" unless it was specifically designed for safety from the ground-up. Very few general-purpose, consumer- and commercial-grade operating systems and platforms fall into this category.
Telco-grade Class 4 (4ESS) and Class 5 (5ESS) circuit switching equipment, certain automated railroad signalling systems, some types of industrial control equipment, and various medical device control systems may fall into the "inherently safe" category, but your home or office PC, even if it runs Linux, most assuredly does not.
I'm an ardent GNU/Linux supporter, and have been using it almost exclusively as my OS of choice for the better part of 10 years now (none of my home PCs run Windows or Mac OS X). Even so, I would be foolhardy if I trusted it to be "inherently safe."
While I do believe that GNU/Linux-based operating systems are **safer** in many ways than Windows and Mac OS X, I have seen my share of GNU/Linux boxes crash-and-burn (figuratively) because of poor configuration, lackadaisical patching, improper oversight, and yes, even the not-so-occasional bugs (both new and regressed).
This is what happens when you develop an operating system that purports to work "the same" across multiple architectures: It doesn't.
Looking at the code for both the bug and the exploit, each appears to be heavily x86-32/x86-64 dependent.
I would venture that there is a high degree of probability that this bug/exploit combination does not exist, for example, in versions of the Linux kernel developed and compiled for IBM's midrange (AS/400 / System i / System p) and mainframe (S/390 / System z) iron, in versions of the kernel developed to work on Cell-based parallel processing systems, or in versions compiled against SPARC machines.
This leads me to believe that it is perhaps the x86-32/x86-64 architecture that is at fault, at some lower level, for not properly securing access to 32-bit facilities provided by 64-bit processors. This kind of bug could **conceivably** be used to compromise Linux-based x86 hypervisors, by allowing an intruder to context-switch out of the virtual machine and into the host OS.
Granted, the fact that a regression of this magnitude was re-introduced into the Linux kernel is regrettable, but it isn't difficult to understand how such a mistake can be made, given the kernel's rather heterogeneous target audience. No one person, or group of persons, can be an expert on all of the different processor architectures supported by the Linux kernel.
> It would probably be unnecessary to test it at both low temp and low pressure, if it passes the low temp test separately.
Not necessarily...
The amorphous, polymeric structure of the rubber can become brittle at low temperatures, making it much less elastic, and thus more prone to fracture. This brittleness, combined with the higher pressure contained within the expansion tube (relatively, as the outside pressure drops as the device gains altitude), can cause the rubber to crack at the inner/outer "accordion" edges. The problem is that the rubber's amorphous structure makes it difficult to predict its failure modes with a high degree of precision. The only way to know for sure how the device is going to react to the low temperature environment is to test multiple samples under stress, take a weighted average, add a certain amount of safety margin, and then build the production device accordingly.
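The "test multiple samples, take a weighted average, add a safety margin" procedure above can be sketched numerically (the failure stresses, weights, and safety factor here are invented for illustration, not real materials data):

```python
def design_limit(failure_stresses, weights, safety_factor=1.5):
    """Weighted mean of observed failure stresses, derated by a
    safety factor to set the allowable design stress."""
    wmean = sum(s * w for s, w in zip(failure_stresses, weights)) / sum(weights)
    return wmean / safety_factor

# Hypothetical cryo-test failure stresses (MPa); heavier weight on the
# sample tested closest to flight conditions:
samples = [52.0, 48.0, 50.0, 45.0]
weights = [1.0, 1.0, 2.0, 1.0]
limit = design_limit(samples, weights)
print(f"allowable design stress: {limit:.1f} MPa")
```

The safety factor exists precisely because the amorphous structure makes the scatter in those samples large and the failure modes hard to predict.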
And the temperatures don't even have to be all that low, either: In 1986, a bunch of bigwigs at NASA and Morton Thiokol were convinced that the rubber O-Rings in the Space Shuttle Challenger's Solid Rocket Boosters would remain elastic and flexible, and thus respond gracefully to the high pressures in the SRB fuel casing, at a mere 34 degrees Fahrenheit (1.1 C). They decided to go ahead with launch, even though a bunch of in-the-trenches engineers indicated that launching at such low temperatures would be very risky. Furthermore, it has been suggested that overnight temperatures at the pad could have fallen to below 20 degrees Fahrenheit (-6.7 C). (During the pre-launch inspection undertaken that morning, a record low temperature of 8 degrees Fahrenheit (-13.3 C) was reported by an infrared camera pointed toward a location on the right SRB, near the joint that failed. The inspectors decided that the reading was "erroneous.")
The word "Oracle" (or "oracle"), as used in the context of this article, relates to the cryptologic principle known as the "random oracle," which is an abstract/mathematical construct that responds in a truly random fashion to each possible input, with the constraint that if a given input is duplicated, the resulting output is also duplicated. More detailed information on cryptologic random oracles can be found on Wikipedia, here:
-- http://en.wikipedia.org/wiki/Random_oracle
More generally, an "oracle machine" is a mathematical "black box" which is used to study whether a given input to a construct will map to a given output, in an effort to learn the operational functions or behaviours contained within the construct. More detailed information on general oracle machines can also be read here:
-- http://en.wikipedia.org/wiki/Oracle_machine
Both concepts are related to decision theory:
-- http://en.wikipedia.org/wiki/Decision_problem
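A random oracle's defining constraint -- fresh randomness for every new input, but identical output for a repeated input -- is easy to model with lazy memoisation (a conceptual sketch; real proofs instantiate the oracle with a hash function):

```python
import os

class RandomOracle:
    """Lazily-sampled random oracle: a fresh random output for each
    new input, but a repeated input always gets the same output back."""
    def __init__(self, out_len=16):
        self.out_len = out_len
        self.table = {}   # memoised responses

    def query(self, message):
        if message not in self.table:
            self.table[message] = os.urandom(self.out_len)
        return self.table[message]

oracle = RandomOracle()
a = oracle.query(b"hello")
b = oracle.query(b"hello")
assert a == b   # duplicated input -> duplicated output
```

The memo table is what makes it an oracle rather than a mere random number generator: consistency across repeated queries is the property cryptographic proofs lean on.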
The constant "c" (as in "just 'c' ") is the speed of light **in a vacuum**.
The speed of light can and does vary, depending on the material (or lack thereof) through which the light moves.
Thus: c_Rb != c_vacuum
It is even possible for physical charged particles (electrons and/or protons) to **exceed** the speed of light **in a given moderating material**, if the particles have enough momentum. For a more detailed explanation, look up Cherenkov radiation:
-- http://en.wikipedia.org/wiki/Cherenkov_radiation
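The arithmetic behind this is just v = c/n. Taking water (n ≈ 1.33) as the moderating material, light there travels at roughly 0.75c, so an electron at 0.9c outruns it -- the Cherenkov condition:

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def phase_velocity(refractive_index):
    """Speed of light in a medium with refractive index n: v = c / n."""
    return C_VACUUM / refractive_index

v_water = phase_velocity(1.33)        # ~0.75c in water
electron_speed = 0.9 * C_VACUUM       # a relativistic electron

# Cherenkov radiation is emitted when a charged particle moves faster
# than the local (in-medium) speed of light:
emits_cherenkov = electron_speed > v_water
print(emits_cherenkov)
```

Nothing here violates relativity: the electron stays below c_vacuum; it only beats light's slowed pace through the medium, producing the characteristic blue glow of reactor pools.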
... as you think.
Heisenberg tells us that you can't measure something without changing it, or more (less?) precisely, you can't know everything about something in full detail.
Therefore, if the Feds read your mail before you send it, then the copy you send won't be the copy they've read... :-)
Yup... A tasty, yet lightning-fast, Aflac-Duck-esque Thanksgiving gobbler thwacking Agent Smith with its wings-and-webbys.
Smith answers with his viral replication routine, infusing the fowl with his program essence, exploiting a back-door into a hapless individual's manifest self via his/her digestive tract.
All tastes like chicken, anyway. Think I'll have another round...
> "... like Microsoft, arguably pays only lip service to the open source cause."
Let's be fair: While it may be true that one side of Microsoft is a capital-oriented, patent-wielding, operating system/office suite/search/advertising company, the other side of Microsoft (the fluffy, warm-fuzzies side) does contribute quite a bit to the FLOSS community, if one looks beyond contributed lines of code. **
This is especially true where Hyper-V is concerned, because Microsoft got caught with its pants down over virtualisation, and needs to "play nice" with the other kids in the sandbox if it wants to catch up.
This corporate schizophrenia is being brought about by its efforts to remain relevant in a highly mobile world, where chaining people to desks and providing them with a word processor and spreadsheet application isn't enough to get the job done any more.
So while Microsoft is undoubtedly embracing (to use the word loosely) Open Source as a means to further its own ends -- not to mention its continued existence -- I would say the company is paying a lot more than mere "lip service" to the cause.
**Disclaimer: I'm an avid GNU/Linux/FLOSS fan, and normally eschew MS products. Even so, I can still find legitimate value in a select portion of its product offerings (and no, Xbox/Xbox 360 don't count), and have been known to support the company on occasion.
(Paris, because we all know about her lip-service skills, now, don't we...?)
Sensationalised article here:
-- Neolithic Windows security hole alive and well in Windows 7
-- http://www.itworld.com/security/93442/neolithic-windows-security-hole-alive-and-well-windows-7
Well-written technical write-up here:
-- [Full-disclosure] Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack
-- http://archives.neohapsis.com/archives/fulldisclosure/2010-01/0346.html
Last I checked, 17 > 5...
Be that as it may, pride has oft been the downfall of many an OS/kernel developer. Even Linus himself has been called out on occasion:
-- "That does not look like a kernel problem to me at all," Linus Torvalds is quoted as saying in an email message. "He's running a setuid program that allows the user to specify its own modules. And then you people are surprised he gets local root?"
-- Excerpted From: On Bugs, Viruses, Malware and Linux
-- http://www.linuxinsider.com/story/67818.html?wlc=1282255849
No operating system is perfectly secure. In fact, I would venture to say that anything more complicated than an 8080/8085 executing:
-- 0100 NOP
-- 0101 JMP 0100
is probably in some way "insecure."
> ...you can lean over and hit the power button on the front, reboot with only a bash shell as you kernel, remount the filesystem input/output, and reset the root password.
Ummm... Not quite. The "bash shell" is not a kernel; it does not provide access to any core, system-level functionality on its own. It needs at least a very minimal kernel loaded and running first before it can do its thing; the trick you're describing normally works by passing init=/bin/bash to the kernel from the boot loader, so that bash is launched as the first userspace process in place of init.
> In fact, thinking about it, if you had physical access to the machine, and wanted to cause it harm, you could just hit it with a big axe.
Probably. Unfortunately, most people who would go through the trouble of obtaining physical access to the machine would probably find it to be a much more valuable item in working condition.
But yes, in principle, the kind of attack you describe will work, provided the mass storage device(s) used by the target machine isn't (aren't) encrypted.
However, a hardware keystroke logger interposed between the target machine's keyboard and the machine itself can easily help you get around the encryption issue.
Which is why I recommend that anyone who uses the latest Ubuntu-flavoured versions of GNU/Linux follow the instructions presented here:
-- Ubuntu Lucid Lynx 10.04 Full Disk Encryption with USB Key Authentication
-- http://lfde.org/wiki/index.php/Ubuntu_Lucid_Lynx_10.04_Full_Disk_Encryption_with_USB_Key_Authentication
... especially for laptop users (no warranties expressed or implied, and I didn't write the article at the link provided above), and periodically check your keyboards/mice to make sure they aren't being sniffed in some way.