Sure whatever, keep on voting man ;)
That article is full of bullshit.
There was a time when I could tell you exactly when I was going to receive an SMS or a call, and that was 100% accurate, with no false positives and more than 60% of the events detected.
If that's possible, it means the human body can sense signals at that wavelength (and we have more than one mechanism for that, since 2.4GHz is a frequency that inevitably has an effect on water).
From there, if your belief is that those waves are harmful, you will associate negative feelings with sensing those waves, thereby creating the stress-like condition described in the study.
Therefore, removing the associated negative feelings through therapy will inevitably lower the stress-level.
That in itself does not prove that they're harmful; you could lower the stress level of rape victims through therapy too (and luckily it's done for many of them). It doesn't mean they haven't suffered, it only goes to show that most of the suffering is related to the associated negative feelings rather than the event itself (how else would sex slaves in less developed countries keep their sanity?).
In short, your study is not scientific; it just takes scientific studies and tries to package and present them in a way that suits the author, but that is in no way implied by the measured facts.
Whether all those wireless frequencies are dangerous or not, is something that will only be known in a hundred years, when all financial interest to hide it is gone.
Right now, the wireless industry is far too big not to keep that stuff canned, just like the oil industry is still strong enough to prevent serious alternatives from emerging.
So maybe it doesn't hurt; either way you won't know until later, and if you really care you should Faraday-cage your house and quit whining.
A long time ago, many PSUs did exactly what you're talking about for G5 towers.
It was not powered from the desktop PSU, simply wired into it.
This wasn't such a good idea either, considering that the power cable requirements would vary wildly with a PSU burning from 100 to 700 watts and more, and a screen that could do the same, depending on your pick.
20V is bad because it implies yet another power converter to bring it back down to 12V; if you want to go above 12V, at least be smart enough to use a multiple of 12V for future simplicity of build.
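Back-of-envelope, the cable argument looks like this (the 0.01-ohm cable resistance and 700W load below are made-up illustrative figures, not from any spec):

```python
# Resistive loss in a supply cable: I = P/V, loss = I^2 * R.
# Cable resistance and load are illustrative guesses.
def cable_loss_watts(load_watts, rail_volts, cable_ohms=0.01):
    current_amps = load_watts / rail_volts
    return current_amps ** 2 * cable_ohms

loss_12v = cable_loss_watts(700, 12)   # ~34 W wasted in the cable
loss_24v = cable_loss_watts(700, 24)   # ~8.5 W: doubling V quarters the loss
```

Higher rail voltage means thinner cables either way; the point about multiples of 12V is that 24V avoids an awkward conversion ratio back down to the 12V the components want.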
FinFET is 3D by comparison to a 2D flat transistor.
The fact that the switching happens on a VERTICAL plane instead of the HORIZONTAL plane where everything else sits gives it two non-parallel planes, enough to define a volume, which is 3D, mkay?
You're the meaningless hyperbolic suffix du jour I suppose.
The only good part about ARM in super computing is that ARM cores cost nothing and are good enough to do the very small CPU part that GPUs cannot do.
ARM does not have the ability to design new CPUs much faster. They only have a basic toolkit to help you build things with their designs; you'd still be in a lot of trouble if you wanted to create a 16-core ARM Cortex-A9, because that's not part of the ARM designs (cf. Tilera, who took a while to get all 100 cores plus interconnect onto the chip).
You're both missing the point
The big advantage of ARM is not architecture, as there are other RISC processors out there that can beat it at its own game.
The big advantage is the ecosystem, where one can buy an ARM chip from just about anyone, and anyone can join and be fabless too.
That drives the prices down, turning the market into something Intel does not want to be a part of, as they have demonstrated in the past leaving markets when margins became too low.
Your comparison between a Core iN and an ARM core is not adequate; the x86 core in the Intel CPU has a bajillion more features (x86 decode, SSE, AVX, dedicated decode/encode functions, etc.) and wider compute units, on top of a lot more cache and 64-bit support. And they're way more than twice as fast: even if pure DMIPS differs by about a factor of 2 between a quad-core Core i5 and a quad-core ARM Cortex-A9 like the iPad CPU, you have to remember that's only integer performance and hardly representative of overall performance.
The big reason the Intel cores win that round is that they're much wider and consequently have more IPC than the ARM core, which is really, really tiny. For benchmarking purposes you could include a wider integer unit in the ARM core and get exactly the same DMIPS/clock as the Intel part, but the CPU would still be quite inadequate.
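A sketch of the DMIPS point, with purely illustrative numbers (none of these are measured figures for any real core):

```python
# Dhrystone throughput scales roughly as integer IPC x clock, so widening the
# integer unit lifts DMIPS without fixing anything else about the core.
# The IPC, clock, and DMIPS/MHz figures here are illustrative assumptions.
def dmips(int_ipc, clock_mhz, dmips_per_mhz_per_ipc=1.25):
    return int_ipc * clock_mhz * dmips_per_mhz_per_ipc

narrow_core = dmips(2, 1000)   # small, narrow core
wide_core   = dmips(4, 1000)   # same clock, doubled integer width -> 2x DMIPS
```

Doubling the integer width doubles the Dhrystone score at the same clock, which is exactly why a synthetic integer benchmark says nothing about caches, vector units, or memory bandwidth.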
Either way, there is NO competition between 22nm and 32nm chips, since the former costs MUCH more to produce at the moment, and Intel's process is still far from maturity, as the first Ivy Bridge launch demonstrated. Let's wait until they get it right and we'll see exactly how far behind TSMC is.
Plus, as the number of bleeding-edge-capable foundries shrinks, I wouldn't be surprised to see TSMC emerge as a winner and end up with more fab capital than Intel, which, it's worth remembering, is only worth about 20 new fabs right now (as the costs rise with every additional process, at the moment at least, Intel could very well end up facing too high a capital cost to remain one node ahead).
Yet again, another stupid benchmark from one of the big players.
A Violin Mem V6000 costs about $500k
A 2-socket server can be had from $5k
And a Cisco 2S server using a Violin Mem V6000 scored good points, lol. Given the price, they should at least try connecting 100 servers to it for a comparison that isn't completely absurd.
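For scale, using the figures above (rough list prices quoted in the comment, not street prices):

```python
# How many commodity 2-socket servers one Violin V6000 buys,
# using the prices quoted above.
violin_v6000_usd = 500_000
two_socket_server_usd = 5_000

servers_per_array = violin_v6000_usd // two_socket_server_usd  # 100 servers
```

So a fair benchmark would pit the array against the hundred servers its price tag buys, not against one of them.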
Re: Not all that useful...
A dedicated firewall is a Linux box running iptables, or a proprietary piece of crap that does the same but with more bandwidth limitations and no additional services.
Do you really want to put it behind yet another firewall? Because it's a Linux box, so, you know, according to your Senior advice...
There's no sexism in the dev culture, most of us geeks would kill to meet a woman who's in IT (and is not ugly of course) and good at it.
Unfortunate reality is most women are in the fake jobs, such as HR, accounting, sales and whatnot, where actually having a brain is not exactly required.
Also there are probably quite a few gay coders so bigdiiccs is probably in use somewhere, soon to be defaced.
Re: 'statutory rape'
No, actually it's bullshit.
He's accused of having sex with a woman while she was asleep.
Just as so many others, men and women alike, could be accused over being drunk and having sex with another drunk person.
There's no rape in that story. None. There are only corner cases of a stupid Swedish law which considers that if your boyfriend jumps on you during the night, it's rape; that if you say "put on a condom" and he doesn't, but you still let him proceed, it's rape.
You want a world where those laws are respected ?
Do you even know where that kind of reckless oppression of the male sex will lead the world ?
The one thing I don't understand is how ultra-feminists managed to come up with the most sexist system ever - one that is more one-sided than the old days ever were.
Re: Nothing to do with Modern Workloads
No I think it's a perfect way to introduce it.
All those cloud services are used by people who don't understand hardware, and are ready to pay to not understand it and have a running service.
Now, what they call "modern workloads" is mostly the kind of bullshit startup pitchers love and are indeed going to buy, since who except a startup would want to pay 20 times the hardware cost just to be "in da cloud baby"?
Nothing modern, nothing even huge there, just Amazon waking up and selling SSD-equipped kit. Good to know, but at $125K over 5 years they can stuff it down someone else's throat.
GG FusionIO, straight to bankruptcy
Fusion IO is putting itself out of business, product by product, news by news.
Flash always was memory; every flash controller out there uses it exactly as memory, and FusionIO is striving to make everyone use flash without a controller.
The good news is, no one will need them once it's done: they don't bake flash, and once the integration is complete you'll be buying flash boards from the RAM makers themselves.
Who the F could think Intel is ahead of AMD in the co-processor arena? Intel is afraid of releasing their Xeon Phi because even at 22nm it can't compete with half a 7970 by AMD, which isn't even a compute offering.
The truth here is that Intel is so afraid of losing the CPU part of HPC to more open licenses that they're ready to sink billions into a failed architecture just to say they're in the market AND get vendor support to lock customers into crap tech once more.
Fact of the matter is, a Pentium core sucks as a co-processor; it can never be interesting to do vector processing and pure math on a multipurpose CPU. It does not make sense at all.
On the other hand, this article was most likely posted just to flamebait people into commenting - way to go el reg.
Re: Not sure it is game-changing
It is game changing because the game is the market, not the product.
However, there is absolutely zero new technology in their product (as with most IT products anyway), but it's a working ultrafast mux, and the companies buying out zeptwhatever won't care if a working product line costs a few million.
When the big fish join, they always do it by buying out; they don't have the in-house capability for innovation, only mass-market-efficient production-to-sales capabilities (which make them so much money they can simply buy out the innovators).
So what will be interesting is not Zeptonics, as anyone can buy it at some price, but the next company that sells itself for much more because a competitor needs to fill that market niche fast in order to remain full-stack competitive.
Anyway, it might be another few years before that happens, as big fish tend to let small fish kill each other before buying the right one (look at the flash market).
Who the f is that guy ?
Does he even know you can get an 8GB DDR3 LRDIMM for $70 while SLC is still over $12/GB in existing products?
The only thing that's cheap right now is MLC, and it can't stand being overwritten too many times.
Good thing he's a CEO and not a CTO.
And as for his company... everyone has PCIe flash; only a few make it happen (PCIe networking and all).
Re: "Average human response time"
You're talking pretty much nonsense. Trained individuals react MUCH faster than untrained ones, as can be seen EVERY DAY in anything reflex-sensitive such as driving, combat flying, FPS or RTS games.
The advantage is NOT just anticipation, and that has been widely verified. I personally used to have a reaction time of about 0.14s (back when I gamed a lot); a mate who was a pro player in AoK and EE was around 0.11s, etc. That was on a stupid JS test unrelated to our respective training.
And he was, like all non-Koreans, absolutely no match for those overtrained monsters, some of whom no doubt reach 50ms reaction times in the most competitive games.
Re: often stated and pretty consistently wrong
Optical interconnects are NOT better... light travels only marginally faster than electrical signals, AND it requires two electrical-to-optical converters to work. It is highly unlikely this will be used anytime soon INSIDE a CPU or even a motherboard.
Don't forget graphene and frequency scaling; there's a lot of room there still.
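To put a number on "marginally faster" (the 30cm trace length and the velocity factors below are rough assumptions, not measurements of any real board):

```python
# Signal transit time over a 30 cm board-scale distance.
# Assumed velocity factors: ~0.5c for a copper trace on FR-4, ~0.67c in fiber.
C = 299_792_458  # speed of light in vacuum, m/s

def transit_ns(distance_m, velocity_factor):
    return distance_m / (velocity_factor * C) * 1e9

copper_ns = transit_ns(0.30, 0.5)    # ~2 ns
fiber_ns  = transit_ns(0.30, 0.67)   # ~1.5 ns, before any conversion overhead
```

Half a nanosecond saved over 30cm, minus the electrical-optical-electrical conversion latency at each end, is why this only pays off at rack scale and beyond.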
Re: We all don't remember
That's mostly an illusion: the only military tech you know of is public knowledge, i.e. those weapons that were shown and that governments could not hide because they used them extensively.
However, there has been ongoing military research and no real war to force the new tech onto the field for at least 50 years; anyone with half a brain understands there has to be a ton of stuff in store for the unlikely event of another real war.
Re: Women are drawn to Pinterest the way men are to pornography.
It cannot be cost-effective; Amazon's EC2 cloud and its ilk cost more than 10 times their physical counterparts over a three-year cycle.
The "fraction of potential" issue is inevitable and is part of amazon's pricing - the only way you could go around that would be to have some heavy folding activities or w/e to fill the gaps - good luck finding someone to pay for it.
Amazon's piece of crap cloud... If you take the cheapest compute platform (compute xlarge), you get worse-than-single-2500K performance for $17,870.40 over three years. If we start talking servers, that's about 6 dual-socket Sandy Bridge-E Xeon boxes that will have way more RAM (full ECC 8GB slots) and also deliver 6 * 2 * (I don't have a conversion factor from an E5-2630 to an i5-2500K, but it should be close to 1.2) = 14.4x the performance, and 16x+ the RAM.
And you get to tweak your IOPS as you please, etc. You pay for a rack and bandwidth of course, but when it's 15x cheaper, why not?
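Redoing that arithmetic as a sketch (every input is the comment's own estimate: circa-2012 instance pricing and a guessed per-socket conversion factor, not current figures):

```python
# The comment's EC2-vs-owned-hardware arithmetic, spelled out.
ec2_3yr_usd = 17870.40        # quoted 3-year cost of one compute-xlarge instance
servers_for_same_money = 6    # dual-socket Xeon boxes, per the comment's estimate
e5_vs_2500k = 1.2             # assumed per-socket speed ratio vs the EC2 baseline

# 6 servers x 2 sockets x 1.2 -> the 14.4x figure quoted above.
total_speedup = servers_for_same_money * 2 * e5_vs_2500k
```

So even granting the cloud instance a full socket's worth of performance, the same money buys roughly an order of magnitude more compute outright, before counting the RAM and IOPS differences.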
Re: And how much does 410TB of storage on S3 cost?
Dude... they don't have that kind of performance or reliability on Amazon, and a box for 410TB is going to use dedupe heavily (either way, going homegrown would mean ZFS with dedupe and flash cache). In the end it really is just another startup taking the easy, risk-free road.
You have to remember the Amazon EC2 cloud is weak and crappy, with 8-core instances lagging behind old 4-core Opterons, meh I/O, etc.
Overall, their own infrastructure would cost much less than Amazon over one refresh cycle, but it would pull them away from their core business, so it's a no-no for a startup, just like optimizing anything that isn't POC-critical.
Re: not an issue for me
... you do realize there are countless reports of that "issue", and you do realize that any such ARM + GPU SoC put under stress is going to go beyond 40°C whatever you do?
I didn't say it was a huge issue; I'm just saying it's there and it's a design flaw.
And whatever Ars Technica reviewed is of no importance when the rest of the world says it gets friggin' hot with intensive use.
Sure. Like Microsoft doesn't often fail?
Or SAP? <-- stockpile of fail right here
Or Oracle?
Come on ;)
IMHO it's worse in the professional territory, with AutoCAD that can't even use most (or multiple) GPUs, etc., and slick marketing does the trick even better. There, take another slice of cloud SaaS big-data business-driven buzzword pie.
I think what people are rambling about here is just sales to uninformed customers, i.e. sales.
Like Apple buyers would ...
return those iPhones ?
They're social status symbols; most of the people buying them don't give a crap if the antenna has an issue, and less than 1% returns can also be a case of the usual laziness:
Out of 100 phones,
60 have faulty antennas
40 of those don't even notice
15 don't care
4 can't be arsed (because returning a product is hell; even with the best customer service you lose a lot of time for nothing)
Considering the iPhone product and its target market, I would be surprised if a major technical flaw (like half the signal strength of another phone) caused more than 1% returns, given that the people praising and buying these devices care more about the shiny and magical than the technical.
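The funnel above, as arithmetic (the 60/40/15/4 split is the comment's own hypothetical, not real return data):

```python
# How a widespread flaw can still yield ~1% returns, per the comment's funnel.
phones = 100
faulty = 60
dont_notice, dont_care, cant_be_arsed = 40, 15, 4

returns = faulty - dont_notice - dont_care - cant_be_arsed  # 1 phone returned
return_rate = returns / phones                              # 1%
```

The point being that a low return rate measures customer behaviour, not defect rate.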
Re: "1+1=5 because I want it to"
It could very well be more sensational than substantial - not really interesting anyway till we see the real deal and the benchmarks and the price wars :)
I don't see why anyone would be desperate for ARM to be in servers, except to drastically lower the cost of computing of course. But the sure thing is that ARM is going to become the main cloud platform (everything virtualized, really, with or without buzzwords) before 2020; there is no way around that.