Largest EVE Online Battle
2010/10/30: 3,110 ships at LXQ2-T:
-- -- Massively@Joystiq:
-- -- -- -- http://massively.joystiq.com/2010/10/30/the-largest-battle-ever-held-in-eve-online-is-going-on-right-now/
395 publicly visible posts • joined 5 Sep 2008
The full-blown PC (or Mac) desktop/laptop won't die completely, at least not in the medium term, because tablets and smartphones are by-and-large content **consumption** devices, but still aren't very good as content **creation** devices (relative to a full-blown desktop/laptop).
(Although, to be fair, there has been quite a bit of progress in this area lately. For example, tablets can now do relatively simple photo and video editing, and are making inroads into the DJ and live performance markets as audio mixing board system controllers.)
I do think, however, that you will start seeing the average "no-longer-in-school-so-I-don't-need-a-computer-for-term-papers" consumer shift away from full-blown desktops/laptops for home use, since for many people tablets, smartphones, and the new generation of "media portal" DVR boxes, Blu-Ray players and TVs can provide all of their home electronic media needs.
That would be the worst thing SpaceX could do at this juncture...
I think one of the reasons that access to space is currently so expensive is because we rely too much on big, behemoth-esque, publicly-traded aerospace companies with inefficient labour structures, and whose shareholders demand cushy dividend payments. This forces an emphasis on market performance as opposed to engineering performance, with contracts written so governments are forced to pay millions (perhaps even billions) in contract cancellation fees when projects are cancelled before reaching their goals. All of these things combine to drive up the cost/unit mass to climb the gravity well.
The only way to make access to space affordable to the masses is to allow open-market competition, and to encourage companies to make engineering investments that deliver "keep-it-simple-stupid-we've-already-built-it-and-proven-it-works" products for sale.
It still is. Well, maybe the "expensive" and "highly inaccurate" parts...
Never pay market prices for a good service when you can spend three times as much to do it yourself and get substandard results in the bargain.
...wanting to stick with "pesky graphite controllers."
From an ergonomics standpoint, I actually think the touch-based electronic volume control is a pretty cool idea. However, there are some real safety issues related to this particular innovation.
Anything that causes a moving change in the sensed capacitance (or resistance, depending on the touch-sensing technology being used) of the touch device in a proper direction can cause the volume to change. Depending on the sensitivity of the touch sensing device, this can present an opportunity for a practical joker to seriously damage a person's hearing; the miscreant's light and barely-noticeable touch could crank the volume up to 11 before the user even realises the device is being manipulated.
A physical lock-out switch would require a certain amount of increased pressure to be applied to the headphones in such a way that the user would almost certainly know that someone was fiddling with the controls.
(Bullhorn, since we're talking about volume being cranked to 11.)
Not sure that's a good idea... Merry pranksters could seriously damage one's hearing by making use of this particular feature without notice.
Also, there are bound to be instances where certain fabrics, etc. could act like a finger, and cause unexpected (as well as painful) increases in volume.
At a minimum, the phones should have a physical lock-out switch that prevents the local volume control from activating. Miscreants would then have to physically interact with the headphones at a much more noticeable level, thereby letting the user know that unwanted tomfoolery is occurring.
... when our most venerated telescope finally closes its eyes on the heavens.
When its time comes, we should find a way to move it into a stable parking orbit, so a future vehicle suitable for satellite recovery can bring it home intact, rather than let it be consumed by re-entry.
If any device deserves to be preserved in the Smithsonian, this is it...
SSD manufacturers should be focusing on bringing the advantages of SLC to mass affordability, as opposed to increasing the number of bits stored per cell.
IMHO, MLC flash already trades too much reliability and write performance for storage density as it is.
I would much rather see flash fabs work on shrinking SLC unit cell size, and develop three dimensional "stacked SLC" dies, than try to cram more bits into an individual MLC unit cell.
... but governments will be the biggest exploiters of it.
For example, here in that country West of the Big Pond, there's been talk recently -- at the Federal level -- of introducing mandatory technology that would render most cell phones' functions inoperable when in a moving vehicle:
-- -- United States National Transportation Safety Board
-- -- Gray Summit, Missouri Board Meeting (section "Conclusions," item #6):
-- -- -- -- http://www.ntsb.gov/news/events/2011/gray_summit_mo/index.html
Here's the excerpt:
"#6 Manufacturers and providers of portable electronic devices known to be frequently used while driving should reduce the potential of these devices to distract drivers by developing features that discourage their use or that limit their nondriving- or nonemergency-related functionality while a vehicle is in operation."
The problem with these proposals is that they're (from a technology standpoint) non-discriminatory. For example, how does the vehicle's cell phone inhibition system determine that it's MY phone (since I am the driver) that needs to be deactivated, and not my passenger's phone? What if my passenger wants to borrow my cell phone to make a call while I am driving?
Granted, distracted driving is a major problem, and has killed and injured quite a few people over the years, but no technology can be flexible enough to provide for every eventuality and exception.
... in case Grunt decides to land on my house/car/pet/family member/significant other? I'm not sure I can trust government bureaucrats to handle my claim in a timely manner:
-- -- El' Wiki: Space Liability Convention (section "Claims between states only")
-- -- -- -- http://en.wikipedia.org/wiki/Space_Liability_Convention#Claims_between_states_only
... of low-thrust, high-specific-impulse ion thrusters.
If wimpy xenon thrusters can help move a satellite from low-/mid-Earth orbit to geosync, running continuously for as long as they did, then there should be no problem with using ion engines like VASIMR to cross the Earth-Mars expanse.
Presuming, of course, we can crack the electric power nut. VASIMR takes quite a bit of power, which means any manned spacecraft we send to Mars with a VASIMR engine would almost certainly need a nuclear-fuel based power plant, such as a battery of radioisotope thermoelectric generators (which is likely to be unpalatable to various envirofactions).
Solar panels could -- in theory -- be used to provide the needed electricity, but are vulnerable to damage from interplanetary particulate matter and servo-mechanical failure.*
* "Servo-mechanical failure" refers to the failure of the mechanical equipment used to deploy/retract the solar panels and maintain their orientation.
El' Wiki has a pretty concise write-up on such boffinry:
-- -- Wikipedia: Nitrous oxide (Section "Rocket motors"):
-- -- -- -- http://en.wikipedia.org/wiki/Nitrous_oxide#Rocket_motors
My thoughts lean toward the N2O/petrofuel blend idea. If you increase the dimensions of the glider, you could probably carry quite a bit of fuel. Add a small Arduino-based microcontroller and a solenoid-controlled throttle valve, and the University chaps should be able to bodge together a fairly practical liquid-fuel engine...
... sites like "Angie's List" could never survive in the UK.
The Honourable Judge's restrictions against the publication of personal identifying data are understandable; people should be able to rely on some measure of separation between their business and private affairs.
However, the restrictions against allowing the Public at Large to post negative critiques of businesses with whom they've had dealings have serious Freedom of Speech implications. As long as such negative opinions are expressed in a sensible, forthright manner and do not contain threatening remarks (i.e., they're not out-and-out "flame" attacks, and do not contain any substantive harassing phraseology), the website, its owner, and its users should be in the clear.
I've been thinking about this as well.
I guess if you know the checksum digit generation algorithm used, there would be only 10^7 combinations, but if you didn't know the formula for generating the checksum digit, then there would be 10^8 uniques, since you would have to test each last digit along with the other seven.
The same goes for the "upper-quad-then-lower-quad" PIN-probe attack described by the article (and my example, above): If you know the checksum algorithm, then the complexity is
-- -- 10^4 + 10^3 (11,000 guesses required)
but if you do not know how the checksum digit is calculated, then the complexity increases to
-- -- 10^4 + 10^4 (20,000 guesses required)
which is still a whole lot less than any threshold that can reasonably be considered "secure."
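To make the arithmetic concrete, here's the comparison as a quick sketch (the figures are the ones from the discussion above, nothing new):

```python
# Search-space sizes for the split "upper-quad-then-lower-quad" attack.
# If the checksum algorithm is known, the eighth digit comes for free;
# if not, it must be brute-forced along with the rest of the lower quad.
full_pin_known_checksum = 10**7        # 10,000,000 uniques
full_pin_unknown_checksum = 10**8      # 100,000,000 uniques

split_known_checksum = 10**4 + 10**3   # 11,000 guesses
split_unknown_checksum = 10**4 + 10**4 # 20,000 guesses

print(split_known_checksum, split_unknown_checksum)  # 11000 20000
```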
...here's a (very simplistic) visualisation of how researchers arrived at the "10^4 + 10^3 = 11,000 attempts" figure:
First off, you send PIN association packets to the Wi-Fi router, starting with
-- -- "0000 0000" (space added between quads for clarity)
and increment the upper quad by one, like so:
-- -- "0000 0000"
-- -- "0001 0000"
-- -- "0002 0000"
-- -- "0003 0000"
Each time the "probe PIN" is sent, the router replies with a message that tells the device if the upper quad (the first four digits) is incorrect. Since the upper quad is four digits long, you only need to send at most ten thousand (10^4) "probe PINs" -- from "0000 0000" to "9999 0000" -- to determine what the first four digits of the real PIN actually are.
For purposes of this discussion, we will say the correct upper quad is "4976." This presumably took us 4,977 guesses, if we started at "0000 0000," and tested the upper quad sequentially.
Once you know the first four digits, you only need to guess the first three digits of the lower quad -- from "000" to "999," or one thousand (10^3) combinations -- to find the rest of the PIN. The last digit is deterministic, since it's calculated mathematically from the first seven digits, and used as a checksum:
-- -- "4976 000[checksum]"
-- -- "4976 001[checksum]"
-- -- "4976 002[checksum]"
-- -- "4976 003[checksum]"
Again, for purposes of discussion, we'll presume that the correct first three digits of the lower quad are "387," with the calculated checksum appended at the end.
Thus, given an upper quad of "4976" and a correct lower quad of "387[checksum]," we should be able to find our association PIN in
-- -- 4,977 + 388 = 5,365
guesses.
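The walkthrough above can be condensed into a little Python sketch. The target quads "4976" and "387" are the made-up example values, and the router's EAP-NACK reply is reduced to a bare equality test:

```python
def crack_wps_pin(target_upper, target_lower3):
    """Count the guesses needed by the two-phase attack described above.

    Phase one stops when the upper quad matches; phase two stops when the
    first three digits of the lower quad match (the eighth digit is a
    checksum, so it never has to be guessed).
    """
    guesses = 0
    for upper in range(10_000):   # at most 10^4 probes
        guesses += 1
        if upper == target_upper:
            break
    for lower in range(1_000):    # at most 10^3 probes
        guesses += 1
        if lower == target_lower3:
            break
    return guesses

print(crack_wps_pin(4976, 387))   # 4977 + 388 = 5365
print(crack_wps_pin(9999, 999))   # worst case: 10000 + 1000 = 11000
```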
...a "direct" flight around sensible security measures.
I am not sure that expanding the Wi-Fi protocols to allow Wi-Fi devices to connect to each other willy-nilly is the way to go.
Wi-Fi Protected Setup is a case in point: Although the protocol was designed to make it easy for the layperson to attach his/her Wi-Fi enabled computer to his/her wireless router, weaknesses in the protocol allow for efficient brute-force attacks against association PINs. Specifically, many Wi-Fi routers that use the PIN method for device association reply with an EAP-NACK message that lets an eavesdropper know if the first four digits of the 8-digit PIN are correct. Also, the last digit of the PIN is used as a checksum.
This results in an effective reduction of complexity of the PIN:
-- -- from 10^8 (100,000,000 uniques)
-- -- to 10^4 + 10^3 (11,000 uniques)
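For the curious, the checksum itself is just a weighted digit sum; here's a sketch of the algorithm as documented in Viehböck's write-up (illustration only, not production code):

```python
def wps_checksum(first7: int) -> int:
    """Derive the eighth (checksum) digit of a WPS PIN from the first seven.

    Alternating digits are weighted 3 and 1, summed, and the checksum is
    whatever digit brings the total up to a multiple of ten.
    """
    accum = 0
    while first7:
        accum += 3 * (first7 % 10)   # odd-position digit (from the right)
        first7 //= 10
        accum += first7 % 10         # even-position digit
        first7 //= 10
    return (10 - accum % 10) % 10

# The infamous factory-default PIN "12345670" really does check out:
print(wps_checksum(1234567))  # 0
```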
A US-CERT notification on the Wi-Fi Protected Setup vulnerability can be found here:
-- -- US Cert: Vulnerability Note VU#723755
-- -- -- -- http://www.kb.cert.org/vuls/id/723755
and a more in-depth technical description can be found here (PDF):
-- -- Stefan Viehböck: Brute forcing Wi-Fi Protected Setup
-- -- -- -- http://sviehb.files.wordpress.com/2011/12/viehboeck_wps.pdf
I'm all for technology that makes it easier for users to manage their devices, but wireless interface and device association protocol designers need to spend more effort to make sure that their implementations are secure and don't take unnecessary shortcuts to accomplish that goal.
Yup.
A long time ago, in a county (borough) not too far from where I live, I installed a shiny new file and print server that ran NetWare 3.12/3.20.
A couple years later I paid the agency a visit, because it needed to be migrated to a Win2K Server; the soon-to-be-released "upgrade" to the agency's insurance policy management software couldn't be hosted under NetWare.
Imagine how disappointing it was to shut down and decommission a NetWare file and print server that, according to the System Console, had accrued *** 782 DAYS *** of straight uptime. (Having the server plugged into a beefy, true-sine-wave UPS with proper grounding also helped.)
At the time, Win2K Server couldn't even come close to NetWare's reliability and security. Win2K3 Server was better (a lot more stable than Win2K), but still had its share of security issues. Win2K8 Server is better still, and from a technology standpoint has probably eclipsed NetWare in certain aspects of deployment and manageability.
But I still have fond memories of NetWare...
They already make those... They're called "feature phones" and "laptops," respectively... :-)
Although I do agree with you - in the main - on OS selection. I think it would be really cool to be able to combine a full-featured GNU/Linux-based OS with a slate form-factor machine.
I have an HP tx2z-1300 "convert-a-tablet" which runs Linux Mint quite happily, but isn't really practical when used in its tablet mode, except to take hand-written notes in Xournal, with the unit sitting firmly on a desk.
Yeah, this NEVER happens on an Apple iThingy... Unless, of course, you include "bricked devices" as a subset of "broken apps," because, mathematically, if the entire phone is broken, then logically all the apps on the phone are broken, too (from a usability standpoint, anyways):
-- -- Information Week: iOS 5 Launch Becomes Update Disaster
-- -- -- -- http://www.informationweek.com/news/mobility/smart_phones/231900684
I seem to recall OS-update app breakage not being limited to just Android.
(For the record, I do not own any device that runs Android, BlackBerry OS, iOS, or Windows Phone. I do have a first-generation Palm Pre - which serves me just fine - but that's the extent of my smart device-iness.)
Well-stocked auto mechanic shops qualified to do air conditioning repairs have vacuum pumps that are used to evacuate the air conditioning system prior to refilling the system with refrigerant.
You may also be able to convince a local HVAC (Heating, Ventilation, and Air Conditioning) contractor to come out with an A/C service truck, and use some engineering tomfoolery to connect its A/C evacuation pump up to REHAB.
You may even be able to pay for the job by offering to run an advert for the company providing the equipment on LOHAN's side...
The important thing to take from this statement is that it ** can ** -- from an engineering and architectural standpoint -- be done.
We already have supercomputers on this planet, like the Fujitsu K:
-- -- Wikipedia: K computer
-- -- -- -- http://en.wikipedia.org/wiki/K_computer
that consume electricity at similar scales (9.8 MW for the Fujitsu K, 15.3 MW for a full-blown human brain emulator using these synapse chips), and interconnecting 133 million components, while admittedly a very daunting engineering challenge, is not impossible.
The key factors against building a device like this are cost and time. Current geopolitical sentiment is leaning away from "big science" endeavours, especially those that appear to be wholly theoretical, and thus be of little immediate commercial, military, or consumer benefit.
Which means we're still probably at least a few decades away from building -- and then having to live with -- a Forbin-esque computer.
Exactly. But that's the crux of the issue: While ** I ** may not have a problem with adverts being skewed toward my interests, I may have a problem with other people being able to deduce hidden proclivities and unearth buried skeletons based on the advertising content served to my browser.
People can learn a lot about your interests by paying attention to the ads that appear when you do web searches, especially if you let your wife/husband/girlfriend/boyfriend/significant other/visiting family member use your computer (or another "shared" computer in your home).
For example: Suppose you're a heavy pr0n user, and have done a lot of targeted searching to satisfy your, ahem, urges while travelling internationally. Then your human-rights minded cousin visits for a day or two before heading to his/her next rights convention, and asks to use your browser to look up news on the latest military crackdown in Bangladesh.
Imagine what they would think if, on every search page they scan, ads appear in the sidebar offering "sex tourism" junkets to Bangladesh, even though your cousin is searching for news on a completely different topic. All the while, you've been judicious about keeping your system clean of malware, so the computer itself is running well and does not appear to be obviously infected with invasive, browser-hijacking, pop-up pr0nware, so you can't even use "must be a virus" as an excuse.
Thus, the problem isn't what advertisers think about me; it's what other people exposed to the ads served to me on a consistent basis think about me.
I actually took some time to read the draft standard, and on the whole it seems to be fairly well thought-out: DOM-compliant, HTTP 1.1 compatible, easily implementable (or should be, on the client side, at least), and reasonably elegant.
However, the main thing that concerns me is that there doesn't seem to be any mechanism for enforcement of the DNT (Do Not Track) preference even if the web server in communication with the browser does not honour the request.
For example, I would think that a browser whose DNT preference is set to "1" (true/on) should automatically set/override any Web Storage / Web SQL DB / Indexed DB storage preferences to "off" or "session only," but a perusal of the listed Issues (in the draft specification document) implies the topic hasn't been brought up for discussion...
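On the client side, at least, expressing the preference is trivial; it's just one extra request header. A minimal sketch using Python's standard library (example.com is a placeholder host):

```python
import urllib.request

# A browser (or any HTTP client) signals Do Not Track by sending "DNT: 1".
# Nothing in the request itself forces the server to honour the preference,
# which is exactly the enforcement gap described above.
req = urllib.request.Request("http://example.com/", headers={"DNT": "1"})
print(req.headers)  # urllib capitalises header names: {'Dnt': '1'}
```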
Unfortunately, Apple's iRDF (iProduct Reality Distortion Field) would have counteracted any attempt to engage an iOS engineer in a useful fashion.
There have been multiple instances over the recent years where app developers and users have reported verifiable, repeatable problems to Apple engineers, either directly or via forums, only to be told by Apple that either (1.) they're "doing it wrong," or (2.) they're not welcome any more.
Not that Apple is the only organisation guilty of this; it happens at Microsoft and in the FLOSS (Free/Libre Open Source Software) world as well. It's probably not uncommon for programmers at the top of any operating system or API development pyramid to exhibit a certain amount of hubris and/or "ostrich-puts-head-in-the-sand" behaviour...
Many modern airships have multiple independent bladders enclosed by the main envelope, to allow for things like attitude control. For example, if the nose of the airship is pitching upward, you could pump out (or, if price is no object, release outright) helium from the forward bladder to return the airship to level flight.
I would expect that the hypothesised mile-wide balloon-spaceship would be constructed in a similar fashion, to allow for such eventualities.