* Posts by Hungry Sean

244 publicly visible posts • joined 11 Aug 2009

Apple lifts iPhone code ban (for chosen few)

Hungry Sean

interesting explanation from his Steveness

As much as I disagree with how Apple locks down the iPhone in so many ways, I actually think Steve's explanation makes a lot of sense-- look at what happened with Sony and the PS3. They have undoubtedly the best hardware in the console market, but games that come out for both PS3 and Xbox don't show off that hardware advantage. From a developer's perspective, delivering a mediocre product to a broad market may well be a better decision than an amazing product for a portion of the market. I think Steve's goal is to create more Objective-C/Cocoa developers (which will benefit Apple across all its markets) and to use the App Store as an enormous carrot. Flash apps for the iPhone do nothing for the broader Apple ecosystem.

I think Apple is looking long-term and making a very carefully reasoned strategic maneuver.

Nerd alert: First Lucid Lynx Ubuntu beta fun

Hungry Sean
Unhappy

sigh

I can see how Ubuntu is fine for casual users, but I'm not personally a big fan of it for development use, mainly because, unlike other distros, there's no option to control what gets pulled in during the install process, and the biggest headache that gives me is that everything then gets tied to Metacity. I'd rather be able to select Window Maker from the get-go and bring in all the development packages I know I'll need, rather than hunting them down one at a time later. Like a lot of operating systems, it needs an "I'm a professional, damnit" button. SuSE treats me better in that regard, while for casual use I prefer OS X (and no, I don't think Ubuntu will ever be nicer). Unfortunately, Ubuntu's reputation as an "easy" Linux distro is making it more common as a development platform, and I can't really avoid it.

Navy, NASA 'committed' to restoring Silicon Valley Colossus

Hungry Sean
Pint

good idea with naming rights, wrong sponsor

The Hangar One Hangar One would be fantastic, and the distillery is a local company too.

Beer, I guess as a chaser?

US comp-boffins claim fix for multicore 'concurrency bugs'

Hungry Sean
Dead Vulture

where's the beef?

The University of Washington has a very strong CS and computer architecture program, so I'm sure this group is doing something cool, but there's absolutely no detail in this article about what that is. We all know that multi-threaded code is hard, and we all know that writing it correctly is becoming more important as we move towards tens of cores on a chip. What exactly are they doing to help detect concurrency bugs? This is a hard problem, and any novel approaches are certainly interesting.
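For anyone who hasn't been bitten by one, the canonical example of the kind of bug such tools chase is a plain unsynchronized counter-- two threads doing a read-modify-write on the same variable. A minimal sketch of my own (nothing to do with the UW work itself):

/* Classic data race: two threads increment a shared counter with no lock.
 * counter++ compiles to load/add/store, so updates can interleave and get
 * lost -- the final value is often well short of 2,000,000.
 * Build: gcc -O2 -pthread race.c */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared, unprotected */

static void *bump(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read-modify-write, not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

The nasty part, of course, is that a race like this can pass every test you throw at it and only bite in production, which is presumably the detection problem they're attacking.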

TSA worker tried to sabotage terror database, feds say

Hungry Sean
Paris Hilton

warning?

Why on earth would you give someone with administrator access to a confidential database a month's warning before terminating them? I thought standard procedure was to announce termination effective immediately, precisely to avoid these kinds of shenanigans.

IBM closer to chips with frickin' laser beams

Hungry Sean

on the other hand

on the other hand, your point about size is excellent, and there isn't much of a way to get around that. 8 microns minimum diameter is pretty huge and would limit the benefit significantly.

Hungry Sean
Boffin

disagree

I think you're underestimating the potential value of this-- I'm guessing you know some circuits and comp arch as well, but I'm going to break it down a bit more than you need for the benefit of others.

You're obviously right that something like this has little or no value for local interconnect, but global interconnect is a major pain in the ass that does not scale or improve with process technology, and this could be a big help in that context. Pipelining global interconnect is most certainly not a panacea-- basically, by adding in more stages, you increase the number of cycles that pass before it's possible to determine whether or not a branch instruction was predicted correctly. When you guess wrong, therefore, you've wasted more time following a bad path, which brings down the average number of instructions that get completed per cycle. Intel tried this strategy with P4-- their thought was that they would use techniques like wire pipelining to reduce the cycle time so far that the drop in efficiency would be outweighed by blazing speed. It didn't really work. By slashing global interconnect latency in half, you could potentially have much larger cores without resorting to these shenanigans (allowing larger branch predictors for example, and possibly helping with sequential execution speed), or you could facilitate communication between cores.

For global interconnect, optical communication has a lot of attractive features even beyond the roughly 2x reduction in latency. In a traditional bus, you have multiple long wires in parallel. The metal layers used for global interconnect tend to be relatively tall and thin-- the thinness is to get density, and the tallness is to compensate for the effects of that thinness on resistance. As a result, you have large plates close to one another, and you develop significant capacitance, which means that the relative voltage of the neighboring lines will tend to stay the same. So, if one line moves from high to low voltage, it will push its neighbor down as well. This is called cross-coupling, and it can do some very nasty things.

Suppose that one line is driven at a constant low voltage (we'll call this the victim), while the other line starts off at high voltage and transitions low (the attacker). The attacker pushes the victim down below 0 volts. Suppose that the victim is driving a latch (memory element) which is not enabled. In a typical latch, there are cross-coupled inverters and an nmos transistor acting as a gate, with the input at the drain and the output at the source. When there is a positive voltage difference between the gate and the source, electrons flow from the source to the drain. Normally, with 0 volts on the gate, you expect nothing to flow through the device, and your state will not be written. But if the victim gets dragged below 0, you potentially have enough of a voltage difference between the source and the gate to turn the transistor on, allowing a write to your memory when the latch is supposed to be disabled. This is seriously ugly, it's transient, and it's very difficult to catch.

Circuit designers sometimes use techniques called shielding and half-shielding to reduce these problems. Shielding involves inserting lines tied to ground into a bus, either after every signal wire or after every two signal wires (half shielding). As you can imagine, this uses up a lot of area. There are other issues from cross-coupling as well (burning more power, for example, when neighbors transition in opposite directions)-- the hard-core analog side of things is not really my cup of tea-- but pretty much all of this crap should go away with optical interconnect.

Also, with all the capacitance in global interconnect, you can blow a lot of power charging and discharging the lines, and to get them to go fast you need large, power-hungry transistors, and probably repeaters every so often, which burn still more power and add extra latency (now you have gate delays on top of your wire delay).
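To put some very rough numbers on all that-- and these are ballpark figures I've plugged in myself, not anything from IBM-- here's the back-of-envelope comparison:

/* Back-of-envelope: repeated global copper wire vs on-chip optics.
 * Illustrative ballpark figures I've assumed myself, not IBM's numbers.
 * The ratio is very sensitive to the repeated-wire figure and to the
 * electro-optical conversion overhead; pessimistic conversion numbers
 * pull the win back toward the ~2x mentioned above. */
#include <stdio.h>

int main(void)
{
    double length_mm      = 20.0;  /* one trip across a large die         */
    double wire_ps_per_mm = 60.0;  /* optimally repeated global Cu wire   */
    double n_group        = 3.5;   /* rough group index of a Si waveguide */
    double c_mm_per_ps    = 0.3;   /* speed of light: ~0.3 mm per ps      */
    double eo_overhead_ps = 100.0; /* guess at modulator + detector cost  */

    double t_wire  = length_mm * wire_ps_per_mm;
    double t_optic = length_mm * n_group / c_mm_per_ps + eo_overhead_ps;

    printf("electrical: ~%.0f ps  optical: ~%.0f ps  ratio: %.1fx\n",
           t_wire, t_optic, t_wire / t_optic);
    return 0;
}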

In short, if it's fast enough to convert between the electrical and optical domains, and the pitch of optical interconnects is fine enough, and the interconnects can be forked (one driver multiple receivers), this could be a big winner (faster, lower power, more reliable, what's not to like?). I do agree with you that they've been talking about this kind of thing for years and nothing's come out of it yet, but that's not to say the hurdles will never be circumvented, and there are obviously some fine minds working on this stuff, so I feel it's a bad idea to dismiss the possibilities out of hand.

Chimps don't like short measures: Official

Hungry Sean
Pint

volume perception

Seems like the shape of the glasses would have a large effect on the decision process. For example, with the opaque containers, the chimps could presumably look in from the top and compare the depth, or, if the glasses were tapered, the cross-section, and judge on that basis. I think it would be interesting to see whether chimps actually have a handle on volume perception, something humans are easily fooled by (given two equal volumes in cylindrical containers, we will judge the narrower, taller one to be larger).

Open source - the once and future dream

Hungry Sean
WTF?

open source as a business

I feel like it's worth differentiating between commercial open source projects (such as Red Hat, JBoss, JasperReports) and non-commercial projects (e.g. CPAN, gcc, GIMP, Open Office). My relatively limited experience with the commercial side of things has been a tad strange. My former employer (a small business) wanted to deploy a business intelligence stack, and we selected an open source product, partly because my employer had a moral commitment to open source and partly because the commercial offerings in this space would have been far beyond our budget. I will leave out the name of the product we used because, while I have some negative things to say about it, I believe they're doing the best they can with the resources they have, and I don't want to badmouth what was, all in all, a reasonably useful product at a good price.

Initially we elected to use the "community edition" of the product, but because of limited documentation and a relatively immature product that required far more technical expertise than is really appropriate for a product targeting business users, we eventually caved and shelled out for the enterprise edition, which was basically the community edition with support and some nice closed source goodies to simplify life. We also had promises of great documentation which didn't really materialize. Shortly after this, the platform received a major overhaul which required us to redo most of our reports, re-install completely, and generally suffer. Without the support, it would have been too much given the size of our technical staff and the poor documentation. From that perspective, at least, it was well worth buying the support (bug reports also managed to get responded to, rather than ignored once we were paying people, surprise surprise). Given the total cost of support however, the offering was still far cheaper than any of the major vendors.

Here's where I get confused: If this company had a more mature product with better documentation and less upheaval, we would probably never have shelled out for a support contract. What happens to these guys when their product matures to the point where a handful of engineers can sit down with some manuals and get a working and reasonable system up on their own? How about when the product is friendly enough that business people with no technical background can sit down and use it? Do they hope to be like Red Hat and generate their income primarily from very large and complicated installations? At that point they'd be doing battle with Oracle, SAP, and Microsoft, and I have to believe they'd lose that fight.

Look at Open Office-- it's a good solid product that basically does what it says on the tin, and partly because of that, I have an awfully hard time imagining anyone paying for a support contract for it. Are these sorts of open source projects essentially self-strangulating, or is there a way out that I am missing?

UK universities being broken by border control measures

Hungry Sean
Unhappy

sigh

Having spent several months in the UK and fallen in love with it, I'm really sad to hear this sort of thing happening. As a yank, I'd like to imagine my own government holds a monopoly on xenophobia, small mindedness, and short sightedness, but I guess we're just taking after our parents.

The part I find most bizarre is the statement that they want to make sure students are studying, and not doing other things like, *gasp* working. Shouldn't people who are talented enough to get admitted to a British university also be an asset to the workforce? I'm not sure how affordable your schools are relative to ours in the states, but a lot of students here need to work part time to afford their bills and lodging, so it's not unreasonable for a foreign student to want to do the same. I would think that it would be more in the UK's interest to say something along the lines of, "here is a combined work/study visa, contingent upon maintaining good academic standing and graduating within 6 years. Visa will be good for one year after graduation or 6 years, whichever is longer." Giving a bit of extra time after studies would be helpful for bringing these people who have studied at your universities into your work pool.

Engineering may be a strange field, but I've worked with a huge number of brilliant Indian, Chinese, and Taiwanese engineers, two Iranians who were among the best (and nicest) professors I've had, not to mention handfuls of people from Romania, Switzerland, Spain, Nigeria, and Latin America. My education, and the US tech sector would both be a lot poorer without these migrants. I suspect your green and pleasant land is the same.

Women face 'glass cliff' after breaking glass ceiling

Hungry Sean
Badgers

@Graham

Betraying some of your own prejudice? You start off your argument with the notion that getting more women involved in engineering entails bringing in women who could barely muster a D in art. The fact of the matter is that in recent years high school women have been beating the guys in math. Women are also doing fine in medicine and pure science, so the issue is certainly not one of capability. And the old argument that "girls aren't interested in computers" simply doesn't hold since women now use digital electronics, particularly mobile devices, at a higher rate than men.

I don't have a great solution to offer, but I can throw out a few interesting examples.

1) At my undergraduate institution, around the time I entered, the computer science faculty decided that there were not enough women enrolling and made some changes to their selection criteria. By the time I graduated, computer science was relatively gender-balanced. Over in computer engineering, this was certainly not the case; in fact, computer engineering was worse than mechanical engineering.

2) A friend of mine at graduate school was a very good computer engineer and exceedingly sharp with her math. During her undergraduate, she met with her advisor to tell him that she intended to go on and get a PhD. He told her that she probably wouldn't be able to get into a graduate school and not to bother.

3) Another friend of mine from grad school had a paper rejected on the basis of "I read the introduction to your paper, it's crap, and I didn't bother reading the rest of it." Now, I'd seen my friend present her work and it was absolutely top flight. I can't say for sure, but the odds are good that the reviewer saw a female name at the top of the paper, and mentally shut everything else out.

I could go on, but honestly, just look at the first 20 posts on this article, and reflect on the fact that those posts are coming from people in this field who believe this sort of stuff is perfectly ok to spew. Sometimes it only takes one bozo to turn someone away (or interest them), and after running into this crap from lower education (girls don't do math) through university (girls don't do computers) and onwards, it's no surprise that very few women persevere to the end, when those who are capable of doing it have plenty of other options for their talents-- if engineering is full of dicks, might as well become a lawyer, make five times as much money, and know your rights.

Hungry Sean
Flame

what the hell

Why is it that articles like this bring out the most backwards, sexist, boneheaded bastards, who have the nerve to complain-- while putting down the fairer sex in puerile language-- that they've got it rough and the women are to blame? This sort of crap is widespread-- during my graduate education, for example, there were about three women in my field, compared with about 50 men. When women stepped into the building, guys looked at them and said, "fnar, fnar, she must be lost, hurr hurr." Three of my four recent job interviews had no women at all in the department. My significant other just got back from a conference where one of the keynote speakers asked the audience to forgive his friend who couldn't make it because "his wife was crotch-fruiting." Obviously all these things are unconnected, and the lack of women in engineering is really just because women have a menstrual cycle and would rather be home playing with dolls, being barefoot and pregnant, and cooking dinner.

Speaking as a man, I wish Sarah would moderate you all off the face of the map. FOAD

Big Blue demos 100GHz chip

Hungry Sean
Boffin

not quite, but good thinking

First off, you're comparing the switch speed of an individual transistor against the switching speed of a full pipe stage (probably 10-20 transistors deep, plus some pain from capacitive load and wire delay). The key bit is in the article where they say it is 250% the speed (2.5x) of a similarly sized silicon gate, so presumably the prototype is slower than a 40 or 32nm silicon gate. The 100GHz number is attention-grabbing, but not relevant to architectural speed.

Now, your idea about a simpler architecture is not a bad one if the actual issue were a big fast gate vs a small slower gate, but there are two major problems. First, if you could make a significantly smaller core through the right choice of simplifying architectural decisions, then with silicon gates you could have more of those cores, or they could have wider execution paths (this was the driving principle behind the Cell's architecture). The other problem is that one of the major limitations on core speed is propagation delay along wires, particularly global interconnect. So, even if you have a very fast transistor, it still takes the same time for a signal to get from one side of the die to the other. This will be a killer if you can only fit a single core on your die. The more likely use case for big fast gates (assuming the processes were compatible) would be to mix them in sparingly along critical paths, and indeed the circuit guys already play all sorts of games with gate sizing (mainly for driving big capacitive loads quickly).

Hungry Sean
Pint

oi

Further to that, they are talking about the switching speed of a single transistor, presumably driving no load (they don't say whether this is nmos or pmos either). A very, very heavily pipelined processor might have logic about 8 fanout-of-four (FO4) inverter delays deep, and pulse-latched registers would bring you to roughly 10 gate delays per stage, so if the 100 GHz were for the digital regime of the transistor, the system clock would be maybe 10 GHz, probably less once the effect of load is included. I'm not good with the analog side of things, so I'll bow to joeuro above and conclude that this should be cut by 10 again, so maybe a 1GHz system clock with the current generation. That jibes with the statement that these are 4x as fast as conventional transistors with a similar gate length (40nm refers to the gate length after some depletion, so this is about 5-6 times as big as the bleeding edge and, by rough calculation, about half the speed).
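For anyone who wants to poke at the assumptions, the arithmetic behind that estimate goes roughly like this (the numbers are guesses, as stated above):

/* Rough arithmetic behind the estimate above: a raw device switching figure
 * divided down by logic depth per pipe stage, then derated for loading.
 * All the inputs are guesses, as stated in the comment. */
#include <stdio.h>

int main(void)
{
    double f_device_ghz    = 100.0; /* headline single-transistor figure   */
    double gates_per_stage = 10.0;  /* ~8 FO4 of logic plus latch overhead */
    double load_derating   = 10.0;  /* wire + fanout loading, per joeuro   */

    double f_stage = f_device_ghz / gates_per_stage; /* ~10 GHz unloaded */
    double f_clock = f_stage / load_derating;        /* ~1 GHz realistic */

    printf("unloaded pipe stage: ~%.0f GHz, derated system clock: ~%.0f GHz\n",
           f_stage, f_clock);
    return 0;
}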

All that said, this seems like some very cool technology-- IBM's hardware is always very impressive. I hope they succeed in bringing it down in size.

Beers to joeuro for quick thinking, and beers to the brainy boffins.

Men at Work swiped Down Under riff

Hungry Sean
Unhappy

agreed

I'm surprised that there's not anything equivalent to a statute of limitations on copyright violation. I'd be very surprised if the 40-60% of the royalties these guys are claiming is even there to take anymore. Looks like the original author died a few years ago and her publisher got greedy.

But then, just another argument to limit copyright to a reasonable period like 10 years. . .

Vote, vote, vote for Barbie the computer engineer

Hungry Sean
Badgers

view from across the pond

Why on earth do you Brits think a computer engineer is someone who spends their days mucking about with wires and providing tech support to lusers? Here in the states, computer engineering includes disciplines such as digital circuits, embedded systems, computer architecture, distributed systems, networking, and DSP. CE Barbie obviously spends her days in a cubicle covered with Matrix and LoTR posters, squinting at tiny fonts, diagrams, and possibly waveforms.

In her cube, she needs a stuffed Tux, some old Dilbert books, xkcd comics, a dual-head monitor setup, and a giant coffee mug. As fashion accessories, she should come with removable carpal tunnel support gloves, a carabiner loaded with keys, a bottle opener, a mini Leatherman, etc., an Android phone, and little or no social life.

Hmm, on second thought, it'd be nice to get more women entering this field. Maybe honesty isn't the best policy.

Free postcoders bang on Ordnance Survey door

Hungry Sean
Pint

disagree

I know that the UK's postcodes are impressively accurate (often down to a couple of buildings), whereas our ZIP codes are much larger, but ZIP+4 should be pretty darn close (of course, no one seems to remember the last four digits).

Cheers from across the pond.

More MIDs with ARM than Atom by 2013

Hungry Sean

patience!

I'm guessing that 2011 will be the break-out year for ARM MIDs-- NVIDIA's Tegra 2 sounds damn sweet, and it's just coming out (dual-core 1GHz Cortex-A9 SMP plus hi-res graphics). I'm guessing that it'll be another year before we start seeing products using Tegra, and maybe another year before it starts moving into more netbookie things. On another front, Apple did snap up PA Semi a few years back, so it's possible that they are developing something crazy as well. Add in increases in SSD density and reductions in cost, and you have the potential for some very fast, very low power little devices.

The interesting thing will be seeing how Microsoft responds: if Linux/ARM netbooks start gobbling up the market, they're going to need to port Windows 7 over; I don't think Win CE / Win mobile will really cut it. So, if we imagine 2011 for first ARM/Linux netbooks, and Microsoft gets worried, it seems like they might be able to port Windows 7 in two years, which would put us at 2013, as per the article. Maybe a tad aggressive, but not unreasonable.

RockYou hack reveals easy-to-crack passwords

Hungry Sean
Flame

write 'em down

I feel like the IT community really brought a lot of this on itself by, at some point, getting the idea into users' heads that passwords should never be written down. Really, it's all down to your threat model-- which is more probable? Li'l Johnny Hax0r running a script from his bedroom and incidentally hammering my bank account, or someone mugging me/breaking into my house and deciding, of all things, to abscond with my notebook of passwords? Which is easier for me to detect and react to (e.g. by resetting all my passwords)? And if my password is written down in ink, rather than entered into a spreadsheet on my computer, then I don't need to worry about spyware finding and stealing it.

What I'd like to see is websites with a strong password generator built into the password creation page: you click, it gives you one, and it tells you to write it down in a notebook. We humans are bad at creating random data (amanfrommars may be superior to us in this respect), so why put the onus on the user when we have great algorithms to let the computer do it?
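Something along these lines would do the trick-- a minimal standalone sketch of the generation step, pulling bytes from /dev/urandom, where the length and character set are arbitrary choices of mine:

/* Minimal generator: read bytes from the kernel's entropy pool and map them
 * onto a character set, rejecting values that would bias the distribution.
 * Length and alphabet are arbitrary choices.  Build: gcc pwgen.c */
#include <stdio.h>

int main(void)
{
    enum { LEN = 16 };                       /* password length, arbitrary */
    const char set[] = "abcdefghijkmnopqrstuvwxyz"
                       "ABCDEFGHJKLMNPQRSTUVWXYZ"
                       "23456789!@#%^&*";    /* easily confused chars left out */
    const int setlen = sizeof(set) - 1;

    FILE *rng = fopen("/dev/urandom", "rb");
    if (!rng) { perror("/dev/urandom"); return 1; }

    char pw[LEN + 1];
    for (int i = 0; i < LEN; ) {
        unsigned char b;
        if (fread(&b, 1, 1, rng) != 1) { fclose(rng); return 1; }
        if (b >= 256 - (256 % setlen))       /* rejection sampling: no bias */
            continue;
        pw[i++] = set[b % setlen];
    }
    pw[LEN] = '\0';
    fclose(rng);

    printf("%s\n", pw);
    return 0;
}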

Why an embedded OS is like a mammal

Hungry Sean

not sure about conclusion

El Reg concludes that general application developers could learn a lot from the hard-core embedded world, but I'm not so convinced-- the problem domains are different. In application development, you're generally trying to pump out features, and bugs, while regrettable, are a necessary evil in the process. You don't have time to get everything right, and I've heard stories of companies where when the bug list gets 80% addressed, the product ships. By contrast, it's damn hard to upgrade a program once it's embedded in someone's anti-lock brakes. I've heard it said (and agree) that the best way to debug an embedded system is to write bug-free code. That entails a slower, more deliberate approach, and (hopefully) precludes the use of code-monkeys. Both considerations combine to increase the cost of the code and time to market.

Similarly, performance in the embedded world is a very binary thing-- if you can meet all your deadlines under the worst possible conditions, and you have enough memory, life is good. Once your code fits, there's no point in further optimization (except maybe to conserve power), and you generally need to throw away CPU cycles to make those guarantees. Application software is entirely different-- all else being equal, code with faster average execution time is better, and if your quicksort occasionally goes quadratic, so be it. You can also come close to 100% utilization.
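One concrete illustration of throwing away cycles (my example, not something from the article): the classic Liu and Layland rate-monotonic bound, which says how much of the CPU you can commit to periodic tasks and still guarantee every deadline under fixed-priority scheduling.

/* Liu & Layland rate-monotonic bound: n periodic tasks are guaranteed
 * schedulable if total CPU utilization <= n * (2^(1/n) - 1).  As n grows
 * the bound falls toward ln 2 (about 69%), i.e. you deliberately leave
 * roughly 30% of the CPU idle to make the worst-case guarantee.
 * Build: gcc rms.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    for (int n = 1; n <= 8; n++)
        printf("%d tasks: guaranteed up to %.1f%% utilization\n",
               n, 100.0 * n * (pow(2.0, 1.0 / n) - 1.0));
    printf("limit as n grows: %.1f%%\n", 100.0 * log(2.0));
    return 0;
}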

Different problem domains require different strategies, and while customers may think otherwise, the highest quality solution is not always the best. As much as I dislike it, a broken piece of crud that's delivered quickly is often more useful than a work of beauty that arrives years later. The applications guys generally know what they are doing, realize the limitations of their approach, and overall do a pretty good job of hitting the tradeoff between time to market and reliability.

Dell crowned Bad Santa computer maker by angry customers

Hungry Sean

Dell's business stuff's not so bad.

I had a bad experience with a shoddily made Dell home desktop years ago and they earned my wrath. I've seen a lot of Dell laptops crap out over the years too. But at my previous job we had Dell business desktops which I was very impressed with. They had some nicely designed cases that made it easy to open things up and swap parts, and they generally held up pretty well. We also bought four or five used Dell Xeon 1U servers which, for the price, were excellent-- they had dual power supplies, hardware RAID, good network cards, ECC RAM, and were generally pretty spiffy. Just my experience.

US employers slash 85,000 jobs in December

Hungry Sean

laid off for Christmas

Count me as one of the 85,000 unlucky stiffs, and a computer engineer/software developer at that. I got laid off at the start of December and applied all over the place with very few responses, all negative. It's a bad time to be out of work because: 1) the holidays mess with the schedules of recruiters and hiring managers, and 2) new hires add to an already depleted yearly budget-- better to wait a month and push them onto next year's total. Since the new year, things have been looking brighter, so I'm crossing my fingers and hoping I'll find something good. It's ugly out there.

Controversy rages over robot vasectomy reversal in Florida

Hungry Sean
Pint

Thanks Lewis

First Reg article I've seen that appropriately named our various American universities. Other commenters and I have griped about things like "Pennsylvania Uni" in the past, so thanks for listening. One minor nit-- I think the last school in question should be the Medical College of Wisconsin.

Beers as a peace offering and because we'll all need a few before the robots start manipulating our delicate bits.

FTC whacks Intel with anticompetition complaint

Hungry Sean
Big Brother

reaping what they sow

Glad to hear that Intel's settlement with AMD didn't end the matter.

@frankg778: victims include everyone who couldn't get AMD gear through their normal supplier during the years when AMD's products were faster and cheaper; everyone who hasn't been able to get Ion-based netbooks, both because of Intel's threats to break discount agreements and because of Intel's practice of selling the Atom + chipset combination cheaper than the Atom alone (go figure); and everyone who has been denied high-resolution, high-performance netbooks because Intel has used its market position to define the netbook segment as not capable of high-def.

This is a pretty clear case of a repeat offender using their market clout to hold back innovation on all fronts. I just hope that the FTC's action will help clean things up. Big brother, 'cause he's got it right for once.

IBM chums with Swiss to build 3D brain-density processors

Hungry Sean
Boffin

sharp enough to cut yourself

You raise some good points, but a few of them seem to be based on slightly outdated information.

1) Problems re wafer scale integration: We're now at the point where very few large dies are defect free. Most chip houses already have technology that allows them to blow fuses and disable whichever cores/caches have defects. Then they bin the parts and set prices accordingly. Potentially 3-d technology would allow the use of smaller dies, helping improve the relative proportion of perfect dies, which could possibly even simplify matters.

2) Energy consumption and clocks: Clock gating is now fairly established technology. Switching power is actually becoming less important compared to leakage power, and there are strides being made there too (e.g. via silicon-on-insulator technology and power/ground gating). Again, 3-d technology might help here. If the same sort of die is stacked vertically, circuit designers might be able to take advantage of the vertically stacked units to reduce the size of their clock tree relative to a chip with units side-by-side (e.g. use a single-layer clock grid for the entire chip, then have vertical taps down to every register-- I realize it's probably not exactly that simple). If the total wire length of the clock tree can be reduced, its capacitance should drop, and with it, the energy consumption.

3) Asynchronous processors and lack of adoption: The use of clock rate as a marketing metric has been steadily phasing out over the last several years. The real reason we're not seeing asynchronous processors is that they are very, very difficult to design and hard to test. Given the complexity of a billion-plus transistor system and the number of engineers required, this is a killer. I guarantee that if someone is able to develop methodologies that simplify asynchronous design and test to the same level as synchronous design, the advantages in power consumption and performance will win the day.

4) Parallel processing and limitations on scaling: It all depends on what you want to do. GPUs are massively parallel, and because of the problem domain, can take advantage of it. I would suggest that as we increasingly store and record multimedia content and expect our devices to deal with it in an intelligent manner (e.g. "Computer, find me the picture of me 'n' Ted down the bar last Friday."), our problem domain becomes more parallel. This sort of behavior will require a wide variety of very intense but computationally different and largely independent tasks. In that scenario, more processors -> more processing per unit time (possibly human reaction time for a hand-held, or maybe a few weeks for scientific applications), rather than more processors -> same processing, less time.
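A quick way to see the difference, with numbers I've made up: Amdahl's law for the "same work, less time" case versus plain throughput scaling for independent tasks.

/* One task made faster vs more independent tasks per unit time.
 * Amdahl: speedup = 1 / (s + (1 - s)/p) for serial fraction s on p cores;
 * independent tasks (the "find me that photo" case) scale roughly linearly. */
#include <stdio.h>

int main(void)
{
    double s = 0.10; /* assume 10% of a single task is inherently serial */

    for (int p = 1; p <= 64; p *= 2) {
        double amdahl = 1.0 / (s + (1.0 - s) / p);
        printf("%2d cores: one task %4.1fx faster, %2d independent tasks per unit time\n",
               p, amdahl, p);
    }
    return 0;
}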

Aren't computers cool?

Free software lawyers hit Best Buy et al with GPL 'violation' claim

Hungry Sean
Pint

@copsewood

That was really informative. It's always a pleasure learning something from comments. Beer's on me.

Eggheads solve England penalty-shootout crapness riddle

Hungry Sean

hmm

If part of the problem is that high stakes change player behavior, seems like managers might be able to get better penalty practice by putting something on the line. E.g. whichever player scores the fewest penalties in training has to buy beer for the other players and wear a ridiculous outfit. Or, split the players into teams and have a shootout. Whichever team loses, the players on the losing team who didn't score stand and watch while their team-mates do a ton of pushups (I think this would be a worse threat than making the non-scorers do pushups themselves, because of peer resentment). Then again, for all I know they already do this kind of thing.

Hungry Sean
Badgers

hitting what you look at

Similarly, I've heard that cops get shot in the hands far more than would be expected. People running away look at the gun in the cop's hands, and the bullets cluster in that region. Not sure if this applies to Blighty-- you've traditionally been much smarter than us Yanks about the level of armament you entrust your police with, though I gather that is changing.

A Deadlock Holiday

Hungry Sean
Pint

hear, hear

This was obviously a fun article, but it also reminded me to get off my ass and brush up on my locking. The second link on mutexes and semaphores (feabhas.com) is the first in a fantastic three-part series, and I wouldn't have found it otherwise. Thanks Verity, I learned something.
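In the spirit of the article, here's the textbook deadly embrace in a couple dozen lines-- a contrived sketch of my own, obviously, not anything from Verity's piece:

/* Classic lock-ordering deadlock: thread 1 takes A then B, thread 2 takes
 * B then A.  Run it a few times and it will eventually hang with each
 * thread holding one mutex and waiting on the other.
 * Build: gcc -pthread deadlock.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);      /* waits forever if t2 already holds B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *t2(void *arg)
{
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);      /* waits forever if t1 already holds A */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);
    pthread_join(y, NULL);
    puts("no deadlock this time; the fix is to always take A before B");
    return 0;
}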

Linux devs exterminate security bugs from kernel

Hungry Sean
Unhappy

yuck. . .

Maybe I shouldn't be surprised that Ubuntu and Fedora are doing that, but I am. Someone please correct me if I'm wrong, but my impression is that ext2 remains the best bet for Linux filesystems, unless you have a real need for filesystem performance. Ext2 is stable and well tested, and Linux has plenty of recovery tools for it. Even other mature filesystems like XFS, ReiserFS, and JFS seem to lack the same level of support on Linux.

If you run into a strong need, you can always upgrade ext2 to ext3 or ext4, but I don't think you can go backwards. Given Ubuntu's target demographic, do the performance benefits of ext4 really outweigh the risks, or is this just a case of some hacker geeks who are happy to have the latest and greatest on their own machines sharing the joy with everyone else? As I get old and boring, I increasingly lean towards avoiding unnecessary risk.

Intel Larrabee letdown leaves HPC to Nvidia's Fermi

Hungry Sean
Boffin

well said. . .

You also need to include the fact that "running two side by side" requires all sorts of overhead to support checkpointing, comparison, and roll-back, and is generally non-trivial. Even detecting an error is exceedingly expensive, because you need to compare all of the memory in your program at each checkpoint. And what happens if memory corruption prevents roll-back? The point being, there are a lot of very hard problems in achieving reliable computation through redundancy.
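To make the overhead concrete, here's the bare skeleton of run-twice-compare-and-rollback-- a toy sketch of my own; real lockstep and checkpointing machinery is vastly more involved, not least because you have to checkpoint and compare all the memory the step touched:

/* Toy dual execution with checkpoint, compare, and retry.  In this toy the
 * step is deterministic, so it always commits; the structure (and the extra
 * work per step) is the point. */
#include <stdio.h>
#include <string.h>

struct state { double x[4]; };

static void step(const struct state *in, struct state *out)
{
    for (int i = 0; i < 4; i++)
        out->x[i] = in->x[i] * 1.0001 + i;   /* stand-in for real work */
}

int main(void)
{
    struct state checkpoint = { { 1, 2, 3, 4 } }, a, b;

    for (int attempt = 0; attempt < 3; attempt++) {
        step(&checkpoint, &a);               /* copy 1 */
        step(&checkpoint, &b);               /* copy 2 */
        if (memcmp(&a, &b, sizeof a) == 0) { /* results agree */
            checkpoint = a;                  /* commit new checkpoint */
            printf("step committed\n");
            return 0;
        }
        printf("mismatch, rolling back and retrying\n");
    }
    printf("persistent mismatch: give up\n");
    return 1;
}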

There was once a company called Tandem that was able to charge lots of cash precisely because they were able to do this right. The finance industry appreciated their efforts. I don't know that AMD has that same level of experience, or frankly, the motivation to roll it out for niche science applications that may not have the same level of financial backing.

Ellison compromises on MySQL control?

Hungry Sean
Welcome

what me, worry?

I don't understand all the fear in the MySQL community about Oracle assuming control. Honestly, the makers of the world's premier database engine could only make the hack that is MySQL better. I use and appreciate MySQL, but damned if it doesn't occasionally feel like banging stones together.

Time series data? Your choice of the equally, but subtly differently, broken DATETIME and TIMESTAMP types. Want fine-grained access control? Go ahead, use a view, but it'll choke because it can't use any underlying indexes. Foreign keys? Yeah, you can declare them, just don't expect them to, you know, enforce a constraint.

I understand the EU's concern about limiting competition, and they probably have a good point, but I have a hard time seeing how this could possibly result in a worse product for MySQL's users.

Why probe Google for antitrust? It 'does no evil'

Hungry Sean

not so black and white

In the last year alone, three new search engines were announced (Cuil, Bing, Wolfram Alpha). None of these claimed to be ready for primetime, but I think the mere existence of a full-fledged Google engine, indexing billions of web pages and refreshing its information at an amazing rate, has pretty much starved all of them, regardless of technical innovation. Although Google can't be said to be actively using its market share to freeze out new competition in search, maybe that's because it doesn't need to, given its enormous scale. In the US we certainly have had companies broken up not for any particular wrongdoing, but simply because their market position was too strong and stifled competition.

Swedish cyborg gets haptic hand

Hungry Sean
Pint

skins?

Very cool achievement indeed, but straight metal on a robo-hand seems a bit intimidating. I'm curious whether there are thin silicone covers for the limb to make it look more natural. Or maybe users would want something a little swankier, like cool blue, two-tone tangerine/white, or maybe camo? Is anyone making designer skins for iLimbs?

In all seriousness, well done to the researchers. I'm sure this will lead to vast improvements in quality of life for amputees, and the technology is amazing. Beers all around!

Is this the world's dirtiest PC?

Hungry Sean

air quality measure

Several years ago, I owned a Dell P3 while going to school in Pittsburgh. About four years into its life, the graphics card started to act funky, so I opened the bastard up, thinking maybe things were getting hot inside and it needed a good cleaning. It wasn't unusually dirty, all things considered, but what really got me was that the dirt itself was pitch black. I guess even all these years after the mills shut down there's still a ton of coal particulates in beautiful Pittsburgh, and they adhered to my poor machine. Made me wonder what was happening to my lungs.

Pirates get extra seat in Euro Parliament

Hungry Sean

abolish patents, eh?

My knee-jerk reaction was, "That's pretty stupid." But then I thought about it a bit. Getting a patent in the US costs, I believe, tens of thousands of dollars a pop once you include the patent lawyer needed to make sure things get written up properly, and that doesn't cover the cost of monitoring and enforcement. Clearly, this is beyond the scope of your average individual who has a bright idea, so patents are only of possible benefit to corporations.

Ok, so with that as our starting point, what do patents really achieve for said corporations? If a company has a really good idea that is hard to reproduce, they will generally prefer to keep it as a trade secret (e.g. Coca-Cola has not patented their formula; it's a secret, and therefore, as long as no one else can reverse engineer it, they maintain a monopoly over it). If, on the other hand, there is a chance of a competitor arriving at the same idea, or being able to recreate the method after some analysis, patents are the way to go. A great example of this would be the famous race to the patent office over the telephone: if I recall correctly, Bell barely beat out another inventor whose telephone was much more sophisticated. It would seem to me that in many senses, patents actually impede innovation and technical progress.

I'm still not convinced abolishing patents outright is the best solution, but neither is it a preposterous one. I would be very interested to hear a clear description of ways in which patents actually spur technology (and not a lame appeal that companies need the possibility of an additional monopoly in order to innovate-- there are plenty of other forces towards innovation in the absence of patents). I'm having a hard time thinking of anything myself.

Boffins working on biodegradable flexi LED implants

Hungry Sean
Dead Vulture

tread carefully

Dear El Reg,

Pennsylvania Uni is developing this very cool tech, you say. Which institution exactly would that be?

University of Pennsylvania (aka U. Penn)

Pennsylvania State University (aka Penn State or PSU)

U. Penn is a member of the Ivy League (Harvard, Yale, Princeton. . . )

Penn State is a bit more blue-collar.

Confusing the two generally causes a good deal of indignation.

I know that other American readers have commented on these shenanigans in the past-- our institutions of higher learning don't generally lend themselves to idle re-ordering (e.g. Miami University of Ohio really should not be listed as Ohio University). I also know that you earn your buck by slinging a breezy, whimsical writing style. You could make everyone happy by employing the existing (and unambiguous) short-hand for our Yankiversities-- a quick Google or a gander at Wikipedia should help clear things up.

Student boffins take on iTunes' not-so-smart Genius

Hungry Sean
Pint

damn cool

Signals guys do some very neat stuff, but the math was a bit over my head. Anyway, this sounds very impressive-- it'd be cool to see a slightly more technical description of the techniques they're using for the analysis-- and I hope this comes out soon so I can play with it :-).

Microsoft drops Family Guy like a hot deaf guy joke

Hungry Sean
Gates Halo

the problem with Family Guy

is that it's shit. Formulaic and offensive with no heart. American Dad is basically the same pap with slightly different drawings. It's not funny either, sorry. Much as I don't like to say it, Microsoft got this one right.

Guardian in hot water over activist face flash

Hungry Sean
Megaphone

Argument does not hold water

The pictures on the police cards are not necessarily pictures taken at a rally; they could just as easily be pictures obtained by police infiltrating activist groups, surveillance of homes and businesses, or shots taken during detention. What people do on their own time is their own business, but by publishing the faces and names of a handful of people in an internationally viewed publication, the Guardian has pretty obviously made the views of those people the business of their employers, particularly for those who have jobs with public exposure. Not to mention that this information will live forever in the bowels of Google, where any potential future employer will see that the applicant has been a "person of interest" to the police, regardless of conviction or arrest.

While I support the Guardian's efforts to fight these police actions, I feel they really should have gotten releases from those activists who are not concerned about being identified as members of the police list (e.g. labor union activists) and blurred the images of those who did not explicitly consent.

Bullhorn, because the human voice is only so loud.

Coin-sized nuclear isotope battery minted

Hungry Sean
Boffin

what's the use case?

I have a hard time seeing these finding their way into cell phones and flashlights. It seems like they would not be rechargeable, would not store well (Richard Pennington above says the half-life is only 60 days), and would possibly be even more of a pain to dispose of legally than current batteries. But a power unit thinner than a human hair seems like it could have all sorts of cool futuristic applications-- possibly even woven into e-textiles? Smart dust? Active RFID?

By the way, what do you Brits call Popsicles?

-sean

NASA enlists schoolkids in Moonbase piss-recycler push

Hungry Sean
Grenade

re AC@15:45

Way to miss the point. Obviously a bunch of young kids and teachers are unlikely to engineer a complete solution for waste recycling in a lunar environment.

But, this sounds like it could be a really great project for getting kids excited about science and engineering while providing a lot of opportunities for them to learn real physics and chemistry. For example, a teacher might ask the class to suggest means of purifying the water-- obvious answers are boiling and filtering (chemical and mechanical). The teacher could talk about the temperature on the moon as well as the low pressure and how that might affect an approach based on boiling, as well as the fact that different substances boil at varying temperatures. How do they get energy (solar panels?). There's a lot of really cool stuff involved here. If this project helps teachers spice up their lessons and inspires some kids to go into the so-called STEM fields, I think NASA will have accomplished their goal here.

Shame on you for being so bloody negative.

Apple's move to kill Hackintosher suit denied

Hungry Sean
Badgers

@richard102

"Apple has a copyright on OS X and its software. In other words, they have a right on every copy and how it is used."

This is not strictly correct. Copyright only covers the right of others to modify, distribute, and produce copies of a work. Copyright does not allow any control over the use of a work and I believe that extends to resale (hence ability to sell used books, software, etc. without asking permission). This is why Apple, Microsoft, etc. generally bring in an additional license agreement to control the usage of the product once in the purchaser's hands (EULA).

In this case, Apple is not suing Psystar for copyright violation (they have legally obtained all copies they are distributing), but for violating the EULA by misapplication of the software. I believe Apple was earlier making some noise about attacking them with a DMCA violation for reverse engineering the EFI code, but I don't think that's going to court.

I think the legal status of EULAs is murky (possibly because the user is badgered into accepting it after having given worst-buy their money?), but I'd be happy to be corrected by someone who knows better.

Vulture 1: Calling all electronics wizards

Hungry Sean

micro controllers

Signed up specifically to vote against PICs-- I've done way too much development with them in the past. Their performance is worthless, their compilers are frequently buggy, and the errata lists are enormous. You'd be much better served by an ARM or an Atmel AVR (both are supported by GCC, so you won't need to shell out extra for a compiler). Though I'm a worthless Yank, I'd plump for some sort of ARM, this being a British space effort.
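For a flavor of what "supported by GCC" buys you, here's about the smallest useful bare-metal AVR program-- the part, pin, and clock rate are placeholders of mine, so adapt for whatever you end up choosing:

/* Smallest useful AVR program: toggle one pin forever.  Part, pin, and
 * clock rate are placeholders -- adjust for whichever device you pick.
 * Build: avr-gcc -mmcu=atmega328p -DF_CPU=8000000UL -Os blink.c -o blink.elf */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= _BV(DDB0);              /* PB0 as output */
    for (;;) {
        PORTB ^= _BV(PB0);          /* toggle the pin */
        _delay_ms(500);             /* busy-wait half a second */
    }
}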

Depending upon budget and resources, you'll also need to make a decision on surface mount vs. DIP (DIP is bigger, heavier, but easy for hobbyists and prototyping). There used to be a "tiny arm" DIP with a surface mount ARM and supporting hardware in a 40 pin package which was quite nice, and I also seem to remember liking the AT-mega 70 series (interrupts available on every pin).

You'll also need to browse a lot of spec sheets to find a part that can tolerate extreme cold-- you may have some success looking for military-spec parts. Spraying the whole assembly in insulating foam to hold in the heat might help too.
