We had a similar issue with Eclipse, and we weren't torrenting much, just a lot of RDP. Our connection used to get about 25 kB/s max, yet we could sync at 7 Mb/s. After moaning at Eclipse for months and upgrading to Option 4, they simply said:
"that is the performance we expect for your package"
What can you say to that?
If we assume that we want to heat 10 g of popcorn from 20°C to 150°C, then by my calculations:
Using Specific Heat Capacity:
q = mcΔt
c = 4200 J/(kg·K) (treating popcorn as water), Δt = 130 K, m = 0.01 kg
Therefore:
q = 4200 * 130 * 0.01 = 5460 J
Now, power is joules per second:
p = q / t
p = 2.5 W (5 V × 0.5 A at full USB power), q = 5460 J
Therefore, in order to raise the temperature of 10 g of popcorn to 150°C at maximum USB power, it would take:
t = 5460 / 2.5 = 2,184 seconds ≈ 36 minutes
This assumes that heat transfer is 100% efficient and there is no heat loss.
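For anyone who wants to check the arithmetic, here is the same calculation as a quick Python sketch; treating the popcorn as water and the 20°C starting temperature are my assumptions:

```python
# A quick check of the figures above. Assumptions (mine): popcorn
# treated as water (c = 4200 J/(kg*K)), starting from 20 C room
# temperature, and USB delivering the full 500 mA at 5 V.

mass = 0.01             # kg (10 g of popcorn)
c = 4200                # J/(kg*K), specific heat capacity of water
delta_t = 150 - 20      # K, temperature rise

q = mass * c * delta_t  # energy required: 5460 J
power = 5.0 * 0.5       # W: 5 V at 500 mA = 2.5 W

t = q / power           # seconds at full USB power
print(f"q = {q:.0f} J, t = {t:.0f} s (~{t / 60:.0f} minutes)")
```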
Now, provided that the heat input into the system is greater than the heat loss, there should eventually
be some popcorn.
I think the more interesting question is how long it took.
Another thing to note is that the USB bus does not supply 500 mA unless the device requests it.
Therefore it is likely that in this situation the USB bus is only providing 100 mA, i.e. 0.5 W, in which case the same 5460 J would take 5460 / 0.5 = 10,920 seconds, nearly three hours.
But that is just my view.
How long before a security patch is released to close the holes used to install itself? Or are the police going to force terror suspects to install it?
After all, what stops a malicious attacker using the same exploits, or, even better, exploiting the spyware itself?
I doubt it has a Linux variant, and if it did, I'm sure the security hole would be closed immediately. Therefore, if you're a terrorist, use Linux.
I'd really like to see this chip compared with the new VIA Nano processor; 60 W is seriously high for these kinds of systems.
From what I've seen in the tests it looks like the VIA Nano could very easily beat this processor on performance, which would be interesting.
The VIA Nano is already beating the 1.6GHz Intel M parts on performance per watt.
Having spent the last year doing research into server power efficiency I am both interested and skeptical.
A 10% saving is pitiful, though: the system I designed used around a third of the power of a dual-core, dual-CPU Opteron server (77 W instead of 226 W), leading to a significant saving in operational costs.
A 10% power saving would equate to around 30 W on an average server. The question is how this is achieved; most server processors do not really exhibit this kind of saving. Secondly, is this an averaged power rating or an instantaneous one? For example, if the OS uses less processor time when idle, then over 24 hours this may be possible; however, if the processor is under constant full load, the power consumption under both OSes should be within 1% or so, since the CPU consumes around 80% of a server's power. If the power saving is instantaneous, then more facts about the test conditions are needed.
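To put that in perspective, here is a rough sketch of what 30 W is worth over a year; the 300 W baseline draw and the electricity tariff are my illustrative assumptions, not figures from the paper:

```python
# Rough annual value of a 10% saving on an "average" server. The
# 300 W baseline and the tariff are illustrative assumptions.

baseline_w = 300                # W, assumed average server draw
saving_w = 0.10 * baseline_w    # 30 W, the figure quoted above

hours_per_year = 24 * 365
kwh_saved = saving_w * hours_per_year / 1000  # ~263 kWh per year

tariff = 0.10                   # assumed cost per kWh
print(f"{kwh_saved:.0f} kWh saved per year, "
      f"worth ~{kwh_saved * tariff:.2f} per server at the assumed tariff")
```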
Certainly the power management features of an OS can reduce power consumption, so it would be very interesting to see Linux vs Server 2008. I expect Linux will beat Server 2008 hands down.
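As a concrete example of the kind of OS power management I mean, on Linux the CPU frequency governor can be switched from userspace; a minimal sketch, assuming the standard cpufreq sysfs interface and root privileges:

```python
# Switch every CPU core to the "ondemand" cpufreq governor, which
# clocks the CPU down when idle. Requires root and a kernel built
# with cpufreq support; the sysfs paths are the standard layout.
import glob

for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
    with open(path, "w") as f:
        f.write("ondemand")
```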
This comment in the article I find highly amusing:
"Hyper-V can still throttle the amount of voltage to the CPU based on load – which is something VMware and Xen can NOT do today,"
This amuses me for two reasons: firstly, altering the core voltage of a CPU is highly likely to cause it to crash, or worse; secondly, there is no standard system for manipulating the CPU core voltage on most server motherboards, as it requires direct control of the core voltage VRMs.
Is there anywhere I can get a copy of this paper? I'm quite interested to see the latest propaganda from Microsoft.
I had resisted joining Facebook until last night, mainly on privacy grounds. I must admit that after using it for about two hours, it is painful, to say the least.
The website is poorly designed and sluggish at the best of times. It is difficult to use, certainly for finding people. Best of all, trying to hide your personal info is complicated and irritating. I found the best solution was not to add any info at all.
I've been a long-time Mandriva user, so I am completely biased. The Mandriva configuration tools and the entire distro feel more rounded compared with Ubuntu; however, this is my PERSONAL PREFERENCE.
Rather than the open-source community continually bitching about different distros, why not admit they all have their differences and in the end are fairly similar? Get over it, find the one YOU like, and get on with it.
Both MSIE and FF are at fault!
Any program should validate its input data before doing anything else with it; thus MSIE should validate the data, then pass it to FF, which validates it again. It is a common failing of ALL programs that they never validate input data.
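To illustrate the point, a hypothetical sketch (not the actual IE/Firefox handler code) of what validating a URI before handing it to an external program might look like; the scheme whitelist and the checks are my own:

```python
# Hypothetical sketch: validate a URI before passing it to an
# external handler, instead of trusting whatever the caller sent.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https", "ftp"}

def validate_uri(uri: str) -> str:
    parsed = urlparse(uri)
    if parsed.scheme.lower() not in ALLOWED_SCHEMES:
        raise ValueError(f"refusing scheme: {parsed.scheme!r}")
    if '"' in uri or any(ord(ch) < 0x20 for ch in uri):
        raise ValueError("control or quote characters in URI")
    return uri

# Only a validated URI ever reaches the external program.
print(validate_uri("http://example.com/page"))    # passes
# validate_uri('evil://" --malicious-flag')       # raises ValueError
```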
Browsers are not the only insecure programs; in fact, it is hard to think of many programs which are secure. It is a generalised failure within the software industry, which breeds bad programmers and bad software. The focus is always on cost and not quality.
Shutting down a phone once it has been stolen is possible. Using appropriate techniques, it is possible to make a microprocessor impossible to reprogram. There is also no reason why a shutdown command could not permanently disable the phone; that should be relatively mundane.
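For what it's worth, a minimal sketch of how the handset side of such a kill command might look; this is entirely hypothetical, the key and message format are invented, and a real phone would do this in firmware with a one-time-programmable fuse rather than in Python:

```python
# Hypothetical handset-side sketch: verify a signed "kill" command
# and set a write-once flag. In real hardware this would live in
# firmware and blow an OTP fuse; the shared key is invented here.
import hmac
import hashlib

OPERATOR_KEY = b"provisioned-at-manufacture"  # assumed shared secret
kill_flag = False                             # stands in for an OTP fuse

def handle_command(payload: bytes, signature: bytes) -> None:
    global kill_flag
    expected = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return            # ignore forged or corrupted commands
    if payload == b"KILL":
        kill_flag = True  # permanent: the bootloader would refuse to boot
```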
However, the only problem: is there a need for a solution?