1x Dairy Milk Bar
1x fishing rod
1x treadmill with dynamo
1x 30-something single woman
Not all that portable admittedly, but I've got a patent pending on a nationwide network of charging stations :)
ARM's employee number 16 has witnessed a steady stream of technological advances since he joined that chip-design company in 1991, but he now sees major turbulence on the horizon. "I don't think the future is going to be quite like the past," Simon Segars, EVP and head of ARM's Physical IP Division, told his keynote audience …
I'm not sure if Moore's law was/is really applicable to mobile processor performance. I would like to benchmark an Axim x30 with Intel's ARMv5 XScale running at 624MHz from seven years ago against the latest smartphone around.
If you look at the desktop world and how vast an advance in architecture and clock speed we've seen since 2004, well, no comparison really.
As an aside, is Intel not kicking itself for selling XScale?
"The number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years."
Note there's nothing specifically performance-related there. Yes, in the desktop world those advances were often used to increase performance.
But in the mobile sector, they've been used as much or more for miniaturization, power efficiency, or adding functionality, which is why today's smartphones are smaller, and run longer, than an Axim x30, even though they have to give some of their battery life and space to the relative hog of the 3G/3G+ radio (not to mention the wi-fi, bluetooth, GPS, accelerometer, etc.)
Re your aside, I certainly hope so.
Isn't this rather obvious? The microprocessor exemplifies the concept of jack of all trades, master of none. Frankly the only reason my netbook is capable of showing me animé is because there is enough grunt power to decode the video data in real time. But then my PVR with a very slow ARM processor can do much the same as it pushes the difficult stuff to the on-chip DSP.
Likewise the older generation of MP3 players were essentially a Z80 core hooked to a small DSP, all capable of extracting ten hours out of a single AAA cell.
Go back even further, the Psion 3a was practically built upon this concept. Bloody great ASIC held an x86 clone (V30) and sound, display, interfacing, etc. Things were only powered up as they were actually required. In this way, a handheld device not unlike an original XT in spec could run for ages on a pair of double-As.
As the guy said, batteries are crap. Anybody who uses their smartphone like it's their new best friend will know that daily recharging is the norm, plus a car charger if using sat-nav. So with this in mind, it makes sense to have the main processor "capable" without being stunning, and push off complicated stuff to dedicated hardware better suited for the task, that can be turned off when not needed. Well, at least until we can run our shiny goodness on chocolatey goodness!
The problem with dedicated hardware for task x is about where to draw the line. Having purpose built chips for every task soon stacks up to be a lot of chips in one device, and ramps the costs up too. Not to mention the design costs for a hardware solution plus the inability to upgrade it later.
Besides, the same problem still applies - he's comparing the cost of a 2G modem with a 4G modem as an example. Even specialised hardware is still going to be more energy intensive - the scale still exists.
ARM chips are RISC processors, specialized towards flow control operations and simple arithmetic. They use pipelining to push a lot more operations through per cycle than the CISC chip you get in your desktop.
But they are rubbish at the kind of high-throughput mathematics that is required for video decode, and even wireless networking these days. CISC chips have massive instruction sets, giving access to a combination of DSP hardware and optimised microcode for vector maths. It's not as extreme as a vector engine, but it's there.
For me, this proposal that packages should contain a range of semi-specialized hardware to carry out different types of generic computing task is a migration back to CISC, a surrender of the RISC concept that has dominated mobile devices.
Backwards compatibility has crippled desktop CISC, and I hope that the new specialist CISCs will be a bit more pragmatic rather than being shaped by the migration from previous hardware. A nice way to achieve this would be for SoC vendors to offer a huge base of C++ libraries, with the proviso that the instruction set was prone to change between devices, and using it directly was asking for trouble...
It isn't the nineties anymore. Get with the times, granddad. Whilst you're there, learn about the difference between a DSP and a general purpose microprocessor. Compare and contrast with the sort of highly parallel simple processing units used in modern graphics cards. The semiconductor world is not a simple place, especially when it comes to mobile device SOC cores such as those designed by ARM.
When you're done, I invite you to take a look at the 'crippled' processors of today, and have a quick think about how monstrously powerful they are. A new non-backwards compatible instruction set would make everything sweetness and light, you say? Hello x64! You're not suggesting anything new, or even useful.
Is there actually a continuing market for slightly faster kit at higher cost in the current climate? IMHO most kit has been running fast enough for the last couple of years, despite constant efforts to force us to buy more CPU to support the same functionality.
Extreme gamers can link a few GPUs together, data warehousers can add terabytes of SSD, and the rest of us can upgrade to Linux or Windows XP running LibreOffice ;-)
This article suggests it's time for software to catch up with the hardware.
Maybe the way to make these devices run faster is to tighten up the code. After all, we've been getting rather a lot of bloat whilst Moore's Law has applied. Back in the '70s, when processor time cost money, shaving cycles off your code had a distinct advantage, and they didn't have cut'n'paste coders in that era.
I'd predict a trimming back of all those functions that don't get used unless it's the 5th Tuesday in February, to make what does get used rather a lot quicker.
Tux - possibly the home of better software.
There's not much of a market for faster kit, but there is certainly a market for kit that is as fast, but consumes less power in the process. Moore's law benefits this too: smaller features require less power to switch on or off. This is why you can run them faster, but it also means that, speed-for-speed, the smaller part consumes less power than a larger one.
From mobile phones to data centres, power consumption is now the number one enemy. It's only really the desktop market that gets a free ride for this; but even here, large corporate buyers are waking up to just how much of their annual electricity bill is spent generating 3d images of pipes through the night, and it's having an effect on buying decisions.
The future must surely be 3D where layers of a chip are sandwiched together. That also allows cheaper production as faulty layers can be checked for and rejected before the final sandwich assembly is completed. So it becomes a question of finding ways to increase mass production of layers (which can be improved), not a question of reducing the geometries of layers which cannot continue.
That would even work with older larger geometries so older Fabs would still be very useful. (Plus older Fabs are still very useful for a lot of smaller more dedicated chips which is a very big market).
@"since the Intel 4004 appeared 40 years ago this November"
I hope that historic anniversary is recognised by the world's press. Technology has after all totally changed the world in the past 40 years and we all owe a lot to that historically important work.
It's interesting to think that if there was any *major* advancement in battery technology in the near future (and we hear about new "breakthroughs" every other month), ARM could be wiped out in the mobile space as there would be no need (or at least, far less of a need) for their power efficient hardware...
A real-world 10-fold increase in battery storage density (naturally involving nanotechnology of some kind) is probably the kind of breakthrough that Intel dreams of, and ARM has nightmares about.
Until then, go ARM!
1. Portable computing devices (e.g. smartphones, tablets, laptops): even with a 10-fold increase in battery capacity, this allows for either:
a) An increase in performance with no degradation in battery life.
b) A huge increase in battery life (becoming increasingly important for many users).
c) A balance of the two.
If you can halve the power consumption of the chip you can use a smaller battery for the same job, making the device cheaper and lighter.
2. Data and processing centres are among the largest consumers of CPUs, and their operating costs are mainly power consumption. Batteries are not going to affect this, and many studies have recently shown that a 10% reduction in CPU and memory power consumption has huge implications for their operating costs, as much less heat is generated, with resulting reductions in cooling requirements.
You may see as much as a 30-40% operating cost reduction for an equivalent setup.
I'd say ARM would be quite safe in the event of a breakthrough in battery technology because they'd extend that life too. Which would you rather have, an Intel-based device that needs charging every 2 days or an ARM-based one that needs recharging every 5 days? I know which one I'd pick ;)
(Note: Numbers plucked out of rectal sphinctor as a means of giving an example, any resemblence to reality is purely coincedental. And yes, my spelling sucks :P )
Even if a major breakthrough came out tomorrow that made batteries 10× more efficient, that would make my current mobile with an ARM chip need charging about every 10 days. With a current Intel mobile CPU I'd probably only get half that run time and no real other benefit (other than being able to run full-fat Windows on my phone, which I would have no interest in doing).
Also, most phone manufacturers have no interest in putting an Intel CPU in their phones, as they can get ARM chips from several sources much cheaper than Intel CPUs. Look at the recent article about how much Intel want to charge for CPUs for Ultrabooks to see how expensive Intel are compared to ARM.
It looks like the ARM vet is saying we need to go back to the design days of the Amiga 500, which had a relatively low-powered CPU but lots of custom chips for handling other tasks, which, coupled with well-written software, made it seem much faster than PCs costing the same price. Maybe if Commodore's management hadn't been so useless, running the company into the ground, they might have been a major player in today's mobile scene.
If battery storage density increased tenfold, the smartphone manufacturers would just fit smaller batteries of the same capacity.
My first phone had six AA NiCads which took up 40% of the volume of the case. My current phone is powered by what appears to be an After-Eight mint which takes up less than 10% of the case.
Yeah, that's the way it goes.
Remember back when the phones had keypads where you could press keys with your finger instead of a toothpick?
These days, I have to pat my pockets to find where my phone is. A few years ago, you could feel at least a little bit of weight.
........beverage in my case is coffee) is extraordinarily attractive. That was in fact my reaction to the article. It was indeed interesting and instructive, and I felt refreshed after having read it instead of totally wound up and ready to bite someone's head off. Obviously El Reg made a big mistake and it won't happen again. No doubt somebody will be disciplined so that this type of error is not repeated.
Thumbs up for mentioning Ars.
Also, the CPU experts at Ars have been telling their readership for ages that ARM does not have any magic dust they can sprinkle over their chip designs to make them more efficient. The only reason they consume less power is because they are _way_ less powerful than x86 chips. The day they start approaching them in computing power, they will consume pretty much the same. Also, to the CISC vs. RISC debate people: Ars again mentions that instruction decoding into micro-ops today comprises a very, very small percentage of the CPU time, and therefore the ABI or instruction set is mostly irrelevant these days when talking of powerful chippery.
Again, just quoting Jon Stokes and the other Ars experts, but they do seem to know their stuff.
He talks about a move to dedicated hardware; if designers can achieve this balancing act between CPUs and GPUs as he suggests, moving to a device like an FPGA may also be possible.
For dedicated tasks that overload CPUs and GPUs, ASICs can't be beaten; not even FPGAs can match them for size and power usage.
However, ASICs by their very nature are inflexible. He mentions putting encryption hardware on the chip, yet how many times a week do we see on this site: "Some team, somewhere, has cracked x algorithm or technique"?
An FPGA would allow the majority of the performance of the ASIC, but allow for an update to a new algorithm or technique.
As stated in the article, one of the major costs is verification, and once the silicon is set it can't be reworked. In an FPGA it can: certain issues can be fixed with firmware updates.
Also, not all features are used at once, so the device can be reconfigured on the fly to serve the purpose in use at a specific time, reducing silicon area and as a result power consumption, without compromising functionality.
When compared with pure custom silicon, yes and no. Pure custom silicon in a high volume product is in principle cheaper per unit than an FPGA to do the same job, but the custom silicon has a much bigger up front cost (and in the event of a hiccup, reworking costs a lot more than changing the FPGA program). For low volumes the FPGA wins. In the middle, there's a discussion to have.
You can get ARM-capable FPGAs too.
"We want bigger batteries so we can burn more power"
Power => heat. How hot do you want your phone to be? I wouldn't fancy holding a running POWER7 CPU in my hand, even if it had a dirty great heatsink and fan.
Dedicated hardware (yes, this is quite expensive) and highly optimised, clever software (yes, this is also expensive and difficult to get right). Good luck with that.
Uh, wrong - unless you have some vague meaning of "power" in mind. By the simplest analysis, heat and power are simply two words for the same thing. Perhaps you meant to say that heat <> temperature, except that that would allow you to say, as I can, that I wouldn't mind holding any running CPU in my hand, provided that it was attached to a large enough heatsink.
In the simplest layman's terms they may equate, but let's try some simple analysis:
Power = rate of energy conversion, i.e. energy / time.
Heat = a form of energy.
Over time, the energy stored in the battery is converted to heat.
The more power the processor requires, the more heat is produced in a shorter period of time.
Therefore increased processor power consumption = more heat to dissipate over the area of the phone = hotter phone.
I really would prefer increased run time over higher performance. My phone does everything required as long as I remember to charge it every day.
"ARM could be wiped out in the mobile space as there would be no need (or at least, far less of a need) for their power efficient hardware..."
It would take a while.
ARM don't build chips, ARM licensees do. These companies/people have years of experience in designing and building system-on-chip designs for specific markets. SoC designs that have all the important components a system needs (often including a DSP or two if the ARM's existing DSP capabilities aren't appropriate for the task at hand).
ARM-specific features not found on alleged competitors also bring excellent code density (less memory for the same workload, ie cheaper) and other stuff along similar lines.
Yer man from ARM raises an interesting point re the economics of chip manufacture at ever smaller geometries with an ever smaller number of customers (one solution to which is presumably for Intel to buy outfits like ASML), but I'd have been interested in some hard info on chip and wafer volumes currently being built at specific geometries. I'd be amazed if something tried, tested, proven and relatively cheap around 100nm didn't dominate the market - but I could be wrong. No one except a select few *needs* ~20nm technology.
Today's Intel have nothing in their bag of tricks once they run out of process enhancements; they've not had a technical success outside of x86 enhancements for decades.
Obviously the legacy commercial muscle of the Wintel empire is not to be sneezed at, but the future is in System on Chip design.
It's what your right ARM's for.
By setting fire to the box it came in. But like the chocolate bar, the only way that works is to consume large amounts of oxygen and produce waste products. It takes a little longer to 'recharge' your chocolate bar if you consider the complete carbon cycle. So it's a little facetious to compare its energy density with a sealed battery.
Fuel cells (although AFAIK not chocolate powered ones)
Seriously it's just a device to show how poor the energy density of batteries is compared with (say) the equivalent weight of hydrocarbon. It's only the same as electric car range 50 miles/ diesel car 600 miles
Particularly the foundry side of the business.
Makes me wonder, where are all these miracle technologies and fabricating methods, new materials et al we read about that are going to revolutionise computing. Are they all dead in the water, or are there just no takers because everyone's enjoying silicon?
It's not the end of anything. Pretty much, this article summarizes what I have been observing for years: that CPUs are pretty much farted out of the fab full of bugs, and with barely any more optimization than what the smaller geometry inherently gives to the processor.
Intel/AMD and the like just spit things out.
Well, I for one welcome the end of Moore's law and the beginning of the build-a-good-product law.
What this means is that if they cannot cram more transistors into less space, they will have to start thinking of ways to make the same number of transistors do more, more efficiently.
Perhaps that will also put an end to the huge cooling monsters modern CPUs require.
The holy grail of mobile power:
Insert bar of chocolate into phone (with 35x energy density that's about one a month at a quid apiece - who needs chargers?)
Extract and consume energy (=calories) from chocolate - leaving just the flavour.
Lick phone (OK, so this bit needs work).
Organic and fully renewable.
So many commentards seem to be falling into the trap of comparing the ARM with the x86 that I feel moved to write an educational piece.
The problem revolves around the term "CPU". Both an ARM and an x86 can be classified as Central Processing Units, but what exactly do we mean by "processing" and where exactly is "central"? I blame the OED, which should have a ruthless and notoriously effective hit-squad, silently removing from the gene pool all those who wilfully introduce ambiguity into the language.
Some history: The RISC versus CISC debate came to a head when the world's most successful microprocessors were the Z80 (about 500 different machine instructions) and the 6502 (about 150), but petered out when Intel introduced the 486, which used a RISC-like core with microcode to implement the CISC instruction set, with simple instructions being passed straight through to the core for rapid execution. This made the simplest instructions execute at near-RISC speed, whilst retaining all the advantages of CISC, such as hardware iteration.
Almost all the ARMs sold today are not physical devices, but licences to use the ARM architecture in a custom SOC (system-on-a-chip). The x86 is _already_ an SOC - one that Intel have been developing for 20 years. People design their own SOCs around the ARM core because their applications are nothing like a PC, so the x86 would be a poor choice. If your application _is_ similar to a PC - particularly, if you want it to run PC software, or to port PC software to it - then an x86 is the obvious choice, unless you think you can design something better, given that Intel have a 20 year head start.
The point is, that to compete with an x86, you need to bolt a lot of stuff like multi-parallel pipelining, speculative pre-processing etc. onto the ARM, and to compete with the ARM, you need to cut the x86 open, extract the RISC-like core, and then bolt your own bits onto it. To draw an analogy or two, an ARM CPU is like C, and an x86 is like C++. We'd much rather have the simple elegance of the former, but sometimes tractors are better than Ferraris.
I hope this helps somebody.
I must admit to being a little out of date with the recent intel x86s, but they now have SOCs?
No need for a "chipset"? The processor connects directly to the disk, ethernet, USB, sound, PCI, etc.? The processor includes real-time clock and interrupt controllers, perhaps DMA too?
If you look at the design of a system using an ARM SoC, there is normally very little else needed, whereas when you look at a design using an x86, it still needs at least a Southbridge chip plus several others.
I'm interested in what you are comparing with what.
ARM silicon started appearing at about the same time (give or take a year) as the 80386, and IIRC, ARM systems actually fared relatively well in benchmarks compared to the i386, even though they were clocked at much lower clock speeds.
So although Intel had all those years of 8086 development under their belt (which, incidentally, was less than 10 years), as 32-bit architectures you can consider ARM and the first 32-bit x86 processors as being of the same generation, which actually makes the ARM a more 'mature' processor than the 'great leap forward' of the i486.
It's interesting you mention the Z80 vs 6502 years... I remember looking at the instruction sets for each, and the clock cycles required for their instructions. The 6502 could do most things way faster than the Z80, even when the Z80 was running at a higher clock speed. So it came as no surprise that ARM, a company that spawned out of Acorn (makers of 6502-based machines such as the Atom, BBC Micro and Electron), was focused on small but efficient instruction sets.
I was a 6502 man... In my book you can't call yourself a machine code programmer if you can't do multiplication with nothing more than bit shifts and add instructions!
Mine's the one with several Rodnay Zaks books in the pockets.
Ah, yes - Rodnay Zaks of the improbable name, author of "Programming the 6502" and "Programming the Z80" which duplicated about 25% of their content and dedicated an entire page of mostly white space to each instruction. The NOP pages were especially helpful.
Zaks' explanation of multiplication (8-bit processors didn't have hardware multiply) however, stood me in good stead many years later when I needed to multiply a 17-bit number and a 23-bit number (from 2 A to D converters) on an 8051. I could do it with a 32-bit C maths library, but that involved lots of redundant shifts and adds because of all those leading zeroes, and I was running out of realtime. So I did the multiplication using shifts and adds in C and halved the run time. I checked the assembly-code output, and it was exactly the same as I would have written myself, so I left it in C.
I too preferred the 6502 over the Z80, but in the days before instruction caching, pipelining etc, the Z80's better code density amounted to faster execution speed for many routines. And how could you not like DJNZ, for example?
Z80 DJNZ e - 13 T states if branch taken, or 3.25 microseconds at 4MHz
6502 DEY ; BNE e - 5 clock cycles if branch taken, or 2.5 microseconds at 2MHz
OK, it's one more byte (3 rather than 2), but your assertion that code density == speed is completely wrong when considering 8-bit microprocessors, because there was no overlap in instruction fetching, decoding and execution. The time of any instruction on either a Z80 or a 6502 is exactly what it says, from fetching the instruction and arguments to completion. From the end of the last instruction to the end of the next is an absolute time, and is easy to determine.
Many Z80 instructions run to 15-20 T states, meaning that there are some situations where it is quicker to run several simple instructions than one complex one, even in Z80 machine code.
ARM being a design company, of course one would expect them to say that the future of microprocessor will depend on design innovations, not on continued process technology innovation.
I'm sure technology-focused Intel will make a case (if not at Hot Chips, then at IEDM, at ISSCC, or at their developer forum) that Moore's law is alive and well.
The death of CMOS technology scaling has been predicted by many in the past, and all of them had to eat their words. Segars makes many good points, but calling the scaling 'brick wall' is a risky business.
Small correction: it was not TJ Rodgers but Jerry Sanders of AMD who once said that “real men have fabs.” All of this author’s comments about lithography challenges are well known. Intel has stated plans down to about 10nm with lots of conviction. If one shifts the Moore’s Law rule to one of simply system speed – from transistor doubling every two years, with implied speed – then things like interconnects, stacking, architectural heterogeneity and new materials such as those related to III-V can offer very large functional and processing-speed benefits. But only the largest companies will be best positioned to capitalize on such inventions. And just for fun: suppose a “wild duck” use for some of those billions of transistors was some clever energy harvesting - then what happens to the battery life issue? So, what companies are best positioned?
Where can I find an SoC with a decent performance core with DSP extensions, hardware-accelerated cryptography, two ready-to-go Ethernet interfaces, a PCI interface, controllers for DRAM and Flash ROM, and power consumption well under 2W? And more.
Intel, that's where.
It's their IXP42x.
Or rather, it was, back in 2006 or so, because it's an Intel XScale (StrongARM) SoC, not an Intel x86 SoC, and it dates back to when Intel actually had a StrongARM business (which they sold off to Marvell around that time).
If anyone wanted such an SoC today they could pick from a number of ARM licensees.
If Intel have such an SoC product today based on x86, they've kept it very quiet. What Intel do have today is no more an SoC than stuff from the Motorola 68K era twenty years ago. A handful of companies may have genuine x86 SoCs but they have no unique selling point (well, one, see below) why anyone would pick them over an ARM SoC.
Meanwhile, in the years since 2006 the various ARM licensees have introduced various other capabilities to the ARM range, capabilities which leave any likely x86 SoC literally years behind.
"If your application _is_ similar to a PC - particularly, if you want it to run PC software, or to port PC software to it - then an x86 is the obvious choice"
Watson, I do believe he's got it! If you're Microsoft-dependent you're x86-dependent too. Otherwise, you're not, as any mobile phone, router, TV, printer, or other mass-market cost-sensitive SoHo equipment will prove.
Still, if you want to call a random x86 chip an SoC, despite the amount of motherboard glue it needs around it which an ARM licencee puts **on the chip itself**, feel free. But the industry in general has a rather more useful definition.
Damn, beat me to it!
That page is a bit out of date now. The latest is the Vortex86mx+ which integrates the RAM and video memory, so you don't need to provide both externally, just RAM, and then the onboard VGA interface can "borrow" some of it.
Got the slightly older 86mx here; runs at a smidge under 1GHz, with onboard sound, VGA, 2 serials, Ethernet, USBs etc. The box I have also has an SD card slot. Power consumption? Well, I can run it off a USB port, so total system consumption is under 2.5 watts. Obviously I can't do this if I start hanging hard drives off the USB ports though.
Quite happily running Linux, complete with Gnome desktop with an SD friendly zero swap file.
This fortnight's Economist has a fascinating article on Moore's "Law" and where things may be going. You may want to take a read of http://www.economist.com/node/21526322 which directly covers the fact that architectural changes at the wafer level may well keep Moore's "Law" going for the next few years.
Even if Moore's original scaling is dead, the assertion certainly is NOT!
As the Transputer guy said just the other day, when will Americans get over the fact that you can't have a hundred cores all with the same view of a single shared memory system? ARM, you're going to screw your architecture if you kow-tow to this nonsense. Cell does just fine without coherent memory. Just put in fine-grained partial cache flushes and async notification of when they complete, and you're golden. Coherency costs - you spend silicon on it, you decimate your design space to achieve it, and then you use Watts to maintain it. Spend that silicon on something useful, and save the power.
That goes double for hardware that's going to be specialized to radio transmission using a given protocol anyway, and can be programmed once for all users.
> Cell does just fine without coherent memory
Funny, the marketplace said otherwise. And as anybody who has run any kind of science app on a Cell for any amount of time knows, the SPUs often become unresponsive and require a reboot. You can make your point, but using a Sony/IBM implementation as your example probably wasn't the best choice.
You make some good points; however, I wouldn't say the x86 is an SoC in the conventional sense. It's just that everybody expects the CPU to provide more features through the years as part of the basic 'brain'. For example, early on a hardware co-processor was developed/offered for floating-point maths. Now it's a standard part of x86 and most CPUs.
The ARM is well advanced on this due to its small size and the variety of developers all wanting different features. An ARM SoC is likely to have a GPU, custom DSPs, and most of the associated buswork etc. Basically a full chipset, which Intel has only just started fabbing for x86.
On a related note, and in response to some other comments: as Intel has learned to their great cost (in dollars), it is far, far easier to build up from a very efficient chip, adding stuff as required and keeping power consumption down by optimizing along the way, than to attempt the reverse. This is constantly keeping ARM ahead, with faster, cleaner development etc., and I suspect it will do for some time. Secondly, variety is the spice of life, as they say. With Nvidia, Qualcomm, Samsung etc. all competing now better than ever (dual core/quad core etc.), it's pushing speeds along even faster.
The power source of the future he is looking for is called the "fuel cell". They used them on the Apollo moon missions. Electrical power at chemical fuel energy densities.
Admittedly, they're more suitable to luggables than pocketables... but if people can refuel cigarette lighters, then I suppose a hydrogen port on a cell phone is also possible.
Or that's it - it might not be great for global warming, but why not a cigarette lighter that supplies heat to run a tiny steam engine? Powering a little generator, naturally.
Biting the hand that feeds IT © 1998–2021