Power Management in SuperSpeed USB


USB has become the most successful PC peripheral interconnect ever defined, with over 10 billion USB 2.0 products installed today. Still, despite its convenience, USB has never been either the fastest or the lowest-power interconnect protocol out there. USB 3.0 seriously attempted to address both of those problems.

Facing competition from other high-speed interconnect protocols like 400- and 800-Mbps IEEE-1394 (FireWire) and HDMI—both of which targeted high-data-rate streaming of video—in 2008 the USB Implementers Forum (USB-IF) formalized the specification for USB 3.0, which promises a “SuperSpeed” data rate of 5 Gb/s, a 10x improvement over USB 2.0, while at the same time reducing power consumption.

How can they do that you ask?

For starters, by eliminating polling. A USB 2.0 host continuously polls all peripheral devices to see if they have data to send to the host controller. All devices must therefore be on at all times, which not only wastes power but adds unnecessary traffic to the bus. In USB 3.0 polling is replaced by asynchronous notification. The host waits until an application tells it that there is a peripheral with data it needs to send to the host. The host then contacts that peripheral and requests that it send the data. When both are ready, the data is transferred.

USB 2.0 is inherently a broadcast protocol. USB 3.0 uses directed data transfer to and from the host and only the target peripheral. Only that peripheral turns on its transceiver, while others on the bus remain in powered-down mode. This results in less bus traffic and a considerably lower power profile.

SuperSpeed USB enables considerable power savings by allowing both upstream and downstream ports to initiate lower power states on the link. In addition, multiple link power states are defined, enabling local power management control and therefore improved power usage efficiency. Eliminating polling and broadcasting also went a long way toward reducing power requirements. Finally, the increased speed and efficiency of the USB 3.0 bus—combined with the ability to use data streaming for bulk transfers—further reduces the power profile of these devices. Typically the faster a data transfer completes, the faster system components can return to a low-power state. The USB-IF estimates the system power necessary to complete a 20 MB SuperSpeed data transfer will be 25% lower than is possible using USB 2.0.

The SuperSpeed specification brings over Link Power Management (LPM) from USB 2.0. LPM was first introduced in the Enhanced Host Controller Interface (EHCI) to accommodate high-speed PCI-based USB interfaces. Because of the difficulty of implementing it, LPM was slow to appear in USB 2.0 devices. It’s now required in USB 3.0 and for SuperSpeed devices supporting legacy high-speed peripherals. LPM is an adaptive power management model that uses link-state awareness to reduce power usage.

LPM defines a fast host transition from an enabled state to L1 Sleep (~10 µs) or L2 Suspend (after 3 ms of inactivity). Return from L1 sleep varies from ~70 µs to 1 ms; return from L2 Suspend mode is OS dependent. The fast transitions and close control of power at the link level enables LPM to manage power consumption in SuperSpeed systems with greater precision than was previously possible.

Link Power Management

Link power management enables a link to be placed into a lower power state when the link partners are idle. The longer a pair of link partners remain idle, the deeper the power savings that can be achieved by progressing from U0 (link active) to U1 (link standby with fast exit), to U2 (link standby with slower exit), and finally to U3 (suspend). The table below summarizes the logical link states.

Link State | Description | Key Characteristics | Device Clock | Exit Latency
U0 | Link active | | On | N/A
U1 | Link idle, fast exit | RX & TX quiesced | On or off | µs
U2 | Link idle, slow exit | Clock-generation circuit also quiesced | On or off | µs-ms
U3 | Suspend | Portions of device power removed | Off | ms

Most SuperSpeed devices, sensing inactivity on the link, will automatically reduce power to the PHY and transition from U0 to U1. Further inactivity will cause these devices to progressively lower power. The host or devices may then further idle the link (U2), or the host may even suspend it (U3).

Both devices and downstream ports can initiate U1 and U2 entry. Downstream ports have inactivity timers used to initiate U1 and U2 entry; these timeouts are programmed by system software. Devices may have additional information available that they can use to decide to initiate U1 or U2 entry more aggressively than the inactivity timers would. Devices can save significant power by initiating U1 or U2 early rather than waiting for the downstream port inactivity timeouts.
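To make the policy concrete, here is a minimal sketch—with made-up timeout values, not figures from the specification—of how a port driven by inactivity timers might pick progressively deeper link states. The real protocol involves a request/acceptance handshake between link partners; this only illustrates the "longer idle, deeper state" idea.

```python
# Hypothetical inactivity thresholds (illustrative only; real downstream-port
# timers are programmed by system software and defined by the USB 3.0 spec).
U1_TIMEOUT_US = 50          # assumed threshold for entering U1
U2_TIMEOUT_US = 500         # assumed threshold for entering U2
U3_TIMEOUT_US = 3_000_000   # assumed suspend threshold (3 ms in the text refers to USB 2.0 L2)

def link_state(idle_time_us: float) -> str:
    """Return the deepest allowable link state for a given idle time."""
    if idle_time_us >= U3_TIMEOUT_US:
        return "U3"  # suspend: portions of device power removed
    if idle_time_us >= U2_TIMEOUT_US:
        return "U2"  # clock-generation circuitry also quiesced
    if idle_time_us >= U1_TIMEOUT_US:
        return "U1"  # RX/TX quiesced, fast exit
    return "U0"      # link active

for t in (0, 60, 800, 5_000_000):
    print(t, "->", link_state(t))
```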

Power Over USB

The USB Power Delivery (PD) Specification (2012) recognized the importance of delivering power over USB, increasing the maximum power to 10W at 5V, 36W at 12V, and 60W—now 100W—at 20V. Equipment with large power requirements such as laptops can now be powered over USB. Since power can now be delivered bidirectionally, this should do away with the ubiquitous power “bricks” and the countless proprietary connectors they require.

The USB PD Specification defines how USB devices may negotiate for more current and/or higher or lower voltages over the USB cable (VBUS) than are defined in the USB 2.0 and USB 3.1 specifications. It allows devices with greater power requirements than today’s specifications can meet to negotiate with external power sources (e.g., wall adapters) and draw the power they need from VBUS. In addition, it allows a Source and Sink to swap roles, so that a device could supply power to the host. For example, a display could supply power to a notebook to charge its battery.
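As a thought experiment, here is a toy model of that negotiation: the source advertises a set of voltage/current profiles and the sink picks the lowest-power one that still covers its load. The profile values and function names are invented for illustration; the real PD protocol exchanges structured messages over the cable, not Python objects.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    voltage_v: float
    max_current_a: float

    @property
    def max_power_w(self) -> float:
        return self.voltage_v * self.max_current_a

def negotiate(source_profiles, required_power_w):
    """Pick the lowest-power source profile that still covers the sink's load."""
    candidates = [p for p in source_profiles if p.max_power_w >= required_power_w]
    if not candidates:
        return None  # sink falls back to plain 5 V bus power
    return min(candidates, key=lambda p: p.max_power_w)

# Profiles loosely based on the power levels quoted above (10 W @ 5 V, 36 W @ 12 V, 100 W @ 20 V)
source = [Profile(5, 2.0), Profile(12, 3.0), Profile(20, 5.0)]
laptop = negotiate(source, required_power_w=45)
print(laptop)  # Profile(voltage_v=20, max_current_a=5.0): the 100 W contract covers a 45 W load
```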

Backward Compatibility

While the advantages of SuperSpeed USB are impressive, these devices are just beginning to appear in a world dominated by USB 2.0. For backward compatibility SuperSpeed devices must support both USB 2.0 and 3.0 link speeds, maintaining separate controllers and PHYs for full-speed, high-speed and SuperSpeed links. By maintaining a parallel system to support legacy devices, SuperSpeed’s designers accepted higher cost and complexity as a price worth paying to avoid compromising the speed advantage of their new architecture.

Posted in Power management, USB

Electric Flight–The Ultimate Energy Efficiency Challenge

If you think electric cars are impressive, how about an electric 747? On a smaller scale, that flight of fancy is becoming a reality.

Two years ago in Santa Rosa, CA, an electric-powered 4-seat light plane won the NASA/Google Green Flight Challenge by flying over 200 miles non-stop at over 100 MPH while achieving 403.5 passenger miles per gallon (mpg) using the equivalent of less than one gallon of gasoline. Compare that to the Chevy Volt—the current state of the art in electric (land-based) vehicles—which gets the equivalent of 112 mpg in all-electric mode while driving slowly over flat roads. And even with the benefit of wheels and a 435 lb. battery, the Volt can only keep that up for 35 miles, at which point it reverts to its gas engine, which gets 37 mpg.

The winner of the $1.65 million prize was Team Pipistrel from Penn State, flying a Taurus G4 manufactured in Slovenia. The G4 is a four-seat, twin-engine plane with a wingspan of 69’2” and weighing 2,490 lb, slightly less than a Volkswagen Beetle. The two 145 kW (194 hp) motors can drive the Pipistrel to about 114 mph, so it won the Challenge race running almost flat out.

Detailed data on the custom-built G4 is hard to come by, but not for the production model Taurus Electro G2. The body is a composite of epoxy resin, fiberglass, carbon fibers and Kevlar in a honeycomb structure. The motor is a high-performance synchronous 3-phase outrunner with permanent magnets, delivering 40 kW on takeoff and 30 kW continuous. The best glide ratio is 1:41, which really qualifies it as a powered glider. To put it in perspective, the typical glide ratio for a two-seat general aviation plane is about 1:10. Aside from getting unimpressive mileage, you really don’t want to run out of gas while flying your Piper Cub. Or in a 747 for that matter.

Electric gliders have been around for a while. The first commercial one was the AE-1 Silent, which first flew in 1997. Weighing a mere 430 lb., the AE-1 is easily powered by its 13 kW (17 hp) electric motor, which in turn works from a 4.1 kWh/77 lb. Li-ion battery. If you’re so inclined the AE-1 is FAA certified as an ultralight aircraft, and it’s still being produced.

More high powered is the Antares 20E from Lange Aviation GmbH, in production since 2004. The 20E is powered by a 42 kW (56 hp) BLDC electric motor weighing 64 lb. Energy storage consists of 72 Li-ion cells, each rated at 44 Ah at 3.7V, for a combined capacity of 12 kWh @ 266V. With a wingspan of 65 ft. and weighing in at 1,455 lb, this is a serious airplane—though still a one-seater. The 20E can self-launch, climb to 3,300 ft. in four minutes, and continue up to 10,000 ft., where it can fly for 1.5 hours. Assuming you’ve covered 93 miles by that point, gliding back down from 10,000 ft. (roughly two miles of altitude) at a maximum glide ratio of 1:56 (!) brings the maximum range to 93 + (2 × 56) ≈ 205 miles.

Now let’s figure the mileage for just the powered portion of the flight. Assuming your flight fully depleted the 12 kWh batteries, that works out to 12 kWh/93 miles or 12.9 kWh/100 miles. Using the same formula the EPA applied to the Chevy Volt—where 36 kWh/100 miles = 93 mpg-e—the Antares comes in 2.8x better at 260 mpg equivalent! That’s a pretty energy efficient way to travel.
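If you want to check the arithmetic yourself, here is a quick back-of-the-envelope calculation using only the figures quoted above (the glide-range estimate shifts slightly depending on whether you round 10,000 ft. up to two miles):

```python
# Sanity check of the Antares 20E numbers quoted above. All inputs come from the
# text; the 33.7/36 kWh-per-gallon factors are the standard EPA mpg-e conversions.

battery_kwh   = 72 * 44 * 3.7 / 1000      # 72 cells x 44 Ah x 3.7 V ~= 11.7 kWh
powered_miles = 93                        # distance covered on battery power
glide_miles   = (10_000 / 5_280) * 56     # 10,000 ft of altitude at a 1:56 glide ratio
print(round(battery_kwh, 1), "kWh pack")              # ~11.7 (rounded to 12 in the text)
print(round(powered_miles + glide_miles), "mi range")  # ~199 (the text rounds 10,000 ft to 2 mi, giving 205)

kwh_per_100mi = 12 / powered_miles * 100               # ~12.9 kWh per 100 miles on battery alone
volt_kwh_per_100mi, volt_mpge = 36, 93                 # EPA figures cited for the Chevy Volt
antares_mpge = volt_mpge * (volt_kwh_per_100mi / kwh_per_100mi)
print(round(kwh_per_100mi, 1), "kWh/100 mi ->", round(antares_mpge), "mpg-e")  # ~12.9 -> ~260
```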

In an interesting twist Lange is now producing the Antares DLR-H2, which is powered by hydrogen fuel cells, with the tanks slung in pods under the wings. The actual motive force is a 42 kW BLDC motor. The 130 lb. fuel cells can generate 20 kW continuously, twice the 10 kW required for level flight. The DLR-H2 can attain a height of 12,000 ft and has a top speed of 105 mph and a range of 1,240 miles.

Using solar cells to recharge your batteries while in flight can greatly extend your range. In 1990 the solar powered plane Sunseeker flew across the U.S. powered by a 250W array of thin-film solar cells. Since solar cells obviously don’t work at night, it took two weeks to accomplish this task.

The first solar powered plane to complete a 24 hour flight was Solar Impulse. Claiming to have “the wingspan of an Airbus [208 ft.]…the weight of a family car [3,500 lb.]…and the power of a scooter [40 hp],” its designers plan to fly it around the world in 2012. The solar cells on the wings of Solar Impulse cover 650 sq. ft. and can generate 6 kW (8.2 hp), with the surplus stored in Li-ion cells for flying through the night. All else being equal, this should be enough to keep the 1.6 ton plane aloft day and night while traveling at just over 40 mph.

Even electric commercial airliners are in the works. In Europe EADS, Airbus’ parent company, has proposed the VoltAir ducted fan engine that would power commercial airliners. To achieve the energy density required to move such a massive aircraft, the VoltAir motor would be constructed of high-temperature superconducting (HTS) materials, cooled by liquid nitrogen. HTS motors are expected to reach power densities of 7-8 kW/kg, comparable to 7 kW/kg for today’s turboshaft engines. The batteries will still be Li-Ion, which EADS hopes will become more efficient, or Li-Air should it become commercially viable by then.

Coming to an Airport Near You

While electric flight is both fun and interesting—especially to engineers—it may impact you sooner than you think. Every major city and most smaller ones have general aviation airports. The Taurus G2 and numerous others like it would make quiet, inexpensive air taxis practical. Not only are the planes inexpensive—about the cost of a high-end car—they’re extremely inexpensive to operate, highly reliable, quiet, and essentially non-polluting. Instead of fighting the traffic between New York and Boston or San Jose and Sacramento you would be able to hop a quick, cheap flight there and gaze smugly down at the congestion below.

So there you have it. Electric boats and cars—been there, done that. Stay tuned for electric aircraft. You hopefully won’t have to stay tuned for long, and it will be worth the wait.

Posted in Clean energy, Electric flight, Electric vehicles, Energy Efficiency

Storing Volts

While electric vehicles have been around since the late 19th century, they only became practical with the development of energy storage systems that offer a much better energy-to-weight ratio than bulky lead-acid batteries.

By the mid-90’s automakers had pretty much given up on being able to go very far on batteries alone, which led Toyota to introduce the Prius—the first commercial hybrid—in Japan in 1997. In EV mode the Prius is powered by a sealed 38-module 6.5 Ah/274V NiMH battery pack weighing 53.3 kg. That works out to 1.78 kWh total capacity. According to the EPA’s formula, one gallon of gasoline is equivalent to 33.7 kWh—almost 20x what the Prius’ battery alone can deliver. So it’s hardly surprising that the Prius relies primarily on its internal combustion engine for propulsion.

Volt battery pack

The Chevrolet Volt features a much larger battery with a considerably higher energy density than the Prius. The Volt uses a 16 kWh (197 kg) manganese spinel lithium-polymer prismatic battery pack, which alone can power the Volt for 35 miles (56 km). The Volt’s lithium-ion battery is roughly 2.4x better in gravimetric energy density than the Prius’ NiMH battery (0.0812 vs. 0.0334 kWh/kg). Considering that the volumetric energy density of lithium-ion is typically only about 2x that of NiMH—140-300 Wh/liter for NiMH vs. 250-620 Wh/liter for lithium-ion—that’s at the high end of what you would expect.
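For the curious, the pack math works out like this, using only the numbers quoted above:

```python
# Quick arithmetic behind the Prius vs. Volt pack comparison (figures from the text).

prius_pack_kwh = 6.5 * 274 / 1000          # 6.5 Ah x 274 V ~= 1.78 kWh
prius_density  = prius_pack_kwh / 53.3     # kWh/kg for the 53.3 kg NiMH pack
volt_density   = 16 / 197                  # kWh/kg for the 16 kWh, 197 kg Li-ion pack

print(round(prius_pack_kwh, 2), "kWh")              # 1.78
print(round(prius_density, 4), "kWh/kg (Prius)")    # ~0.0334
print(round(volt_density, 4), "kWh/kg (Volt)")      # ~0.0812
print(round(volt_density / prius_density, 1), "x")  # ~2.4x
print(round(33.7 / prius_pack_kwh), "x")            # one gallon of gasoline ~= 19x the Prius pack
```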

In addition to having a greater energy density than NiMH—in terms of both weight and volume—lithium-ion batteries also display a much lower self-discharge rate; a greater maximum number of charge/discharge cycles (i.e., they last longer); a more linear discharge rate, which enables more accurate prediction of remaining capacity; and they perform better at low temperatures.

As far as durability goes, both battery types are about the same: NiMH batteries can be discharged and recharged 500-1000 times, with Li-ion batteries being good for 400-1200 cycles. Since replacing an EV battery pack can be a very expensive proposition—currently about $8,000 for the Volt—manufacturers typically guarantee them for an extended period. GM guarantees the Volt’s battery bank for 100,000 miles or eight years.

Not Your Dad’s Li-Ion Battery

Li-ion battery

OK, assuming your Dad had Li-ion batteries, the ones in the Volt are better. The Volt’s battery design is based on technology developed at Argonne National Laboratory. The Lab used x-ray absorption spectroscopy to study new cathode compositions. They came up with a manganese-rich cathode that resulted in a dramatic increase in the battery’s energy storage capacity while at the same time making it less likely to overheat, and therefore safer and easier to maintain. To complete the trifecta, the new cathode material is also cheaper to manufacture.

Even if there isn’t much beyond Li-ion in terms of energy density—unless you’re comfortable with a thorium-based energy source—there’s still room for improvement. According to Khalil Amine, an Argonne senior materials scientist, “Based on our data, the next generation of batteries will last twice as long as current models.” Chances are your car would give out long before your battery does.

Recycling

When your Volt battery bank finally sends you an End of Life notice, what can you do with it? For one thing you could keep it and use it to help recharge your new Volt battery. Or you might rig it to an inverter bank as a backup source of electricity during power outages or at least peak billing times.

If GM gives you a credit for turning in your old battery on a new one, what can they do with it? The EPA claims that rechargeable batteries are not an environmental hazard if they’re not dumped in landfills; European governments aren’t quite so sanguine, since Li-ion isn’t exactly something you’d like to wind up in your water supply. Both the cathode and anode material can be recycled, which is what most jurisdictions require.

In the end the Volt’s energy storage system turns out to be as high-tech as the rest of the car. Considering how much more reliable electric motors are than internal combustion engines, Volt owners could wind up owning their cars for a very long time.

[This post was originally part of a series of articles on the Chevy Volt for the UBM/Avnet series Drive for Innovation.]

Posted in Automotive, Batteries, Electric vehicles

Are You Ready To Be An Internet Node?

I read an interesting article in this month’s IEEE Communications on the impact that 5G wireless communications will supposedly have on us. Call me old fashioned but I still remember when mobile phones were used for making phone calls. Well that was then, this is now.

According to the authors, “Instead of the consumers going to the Internet, the Internet will come to them, and in fact we will become nodes on the Internet.” We will become “both the source of valuable information and the sink for highly personalized information and content.” For this to happen, “all people and their information on the context of their environment need to be continuously available to one another.”

How about you? Are you ready to be an Internet node?

Lead me around by the node

First and second generation mobile phones simply handled analog and digital voice and text messaging. Then they invented camera phones and Netflix and all hell broke loose. The tsunami of video data swamped cellular networks and forced telcos to erect new basestations at a breakneck pace, which continues today. Third-generation (3G) phones introduced mobile broadband, yet consumers kept griping about slow data rates. So bring on 4G and start planning for 5G. But when they arrive I guarantee you that consumers will still gripe that they’re too slow. We want it all. Now.

So far mobile phones have been a good thing—they let you stay connected (when you want to) and call people and not just places. They let you access a world of knowledge that would otherwise be locked away. But the Google-served personalized ads have gone viral—less like a YouTube video than something your kids brought home from school.

Take for example what happens when you click a link in your browser to go to a new web page. The page generates an ad impression, which is forwarded to an ad server. Using information gathered from Internet cookies left on your computer while visiting other web sites—plus information gleaned by robots scanning your Facebook, Twitter, LinkedIn, and other social media postings—the server knows your interests, location, age, and a lot more about your personal life. The server then tries to match what it knows about you against an inventory of pre-sold ads. If it’s a match, then bingo, up pops an ad specially designed to catch your attention.

If no match is made in a fraction of a second the server forwards your profile to an international ad exchange, where a network of ad servers bid for the ad slot in real time. The winner then pops up a banner ad for the book/car/shampoo/local Chinese restaurant you were looking for 10 minutes ago.
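Strictly for illustration, the whole flow can be caricatured in a few lines of Python. The profile fields, inventory, and bid values are all invented; real exchanges settle these auctions over dedicated protocols in milliseconds.

```python
# Toy sketch of the ad-serving flow described above: match the visitor's profile
# against pre-sold inventory, and if nothing matches, auction the impression.

import random

presold_inventory = [
    {"interest": "cars",    "ad": "SUV lease offer"},      # hypothetical pre-sold slots
    {"interest": "cooking", "ad": "Chef's knife sale"},
]

def serve_ad(profile: dict) -> str:
    # 1. Try to match against pre-sold ads.
    for slot in presold_inventory:
        if slot["interest"] in profile["interests"]:
            return slot["ad"]
    # 2. No match: put the impression out to mock real-time bidders.
    bids = {bidder: random.uniform(0.10, 2.50)              # CPM bids in dollars, invented
            for bidder in ("exchange_A", "exchange_B", "exchange_C")}
    winner = max(bids, key=bids.get)
    return f"banner from {winner} (bid ${bids[winner]:.2f})"

visitor = {"interests": ["hiking", "chinese food"], "location": "shopping mall"}
print(serve_ad(visitor))
```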

Things get even more personal if you’re walking through a shopping mall. Getting a very accurate GPS fix on your phone’s location, the shop you’re walking by may pop up an ad on your phone with a special offer good only for the next hour. If this hasn’t happened to you yet, it will. My daughter the shopper may love this stuff, but it creeps me out to have my location be publicly trackable within a few feet.

To see this in action open a map on your cell phone; switch to satellite view; find yourself on the map; set it for maximum resolution (a combination of GPS plus cell tower and Wi-Fi triangulation); and watch the map as you walk around. Those GPS satellites are in medium Earth orbit, roughly 12,500 miles up, but I can watch the little location dot on the map (superimposed on a satellite photo of my house) move as I walk from one side of my small office to the other.

Back to the future—or pulling back from the future?

In the brave new world the authors foresee, will it be a crime to go “off the grid”? Could you be cited or fined for creating a cyber blind spot in your little corner of the world?

And here you thought Facebook was playing fast and loose with your personal information (well, that and it’s being trolled by bots). If this is the future I’m hardly a Luddite but I am so not ready for it.

Posted in Cell phones, RF/Wireless, Wireless

When Low-Power Design Gets Personal

I lost my hearing in Hong Kong in 1996. Well, everything much over 1 kHz, that is. By all rights I should have lost it during rock concerts back in the ‘60s, but I guess the crowds made it hard to get too close to the speakers. Getting too close to pile drivers turned out to be a big mistake.

In Hong Kong I lived for a few years on Lantau Island and took the ferry to work every day to my office in Central. They were upgrading the Central ferry piers, which included spending several weeks driving huge steel I-beams directly through asphalt down to bedrock right next to where I got off the ferry. The piercing sound of a pile driver banging a steel I-beam into bedrock could be heard all around the harbor. When you’re walking next to it for two blocks it’s extremely painful and can do permanent damage to your auditory nerves, which according to the doctor is what happened to me. I noted with pained amusement that this all happened right outside the offices of the Occupational Deafness Compensation Board.

Many years ago I worked as a stereo technician and could easily hear notes above 15 kHz. Suddenly my hearing was down 20 dB (100x) at 5 kHz vs. 1 kHz—the chart resembles an expert-level ski slope. Since speech intelligibility depends heavily on higher frequencies, this was a serious problem. I could easily converse with one or two other people in a quiet environment, but as soon as the noise level would rise—or even if the television was on in the background—I’d lose the thread. Holding a conversation in a noisy restaurant or bar was completely out of the question.

I got fitted for a couple of the hot, new (1996) “completely in (ear) canal” (CIC) hearing aids, which looked and felt like chewed peanuts. They used 6-channel DSPs to cover the range from 500 Hz to 5 kHz. No programming was involved, just a one-time frequency compensation made by the audiologist. AGC was primitive, and they were of only limited help in noisy environments. As soon as I stepped out in the street after getting them fitted, I was greeted by two jackhammers, which caused them to completely shut down. I popped open the battery compartments and they made great earplugs. This technology was not ready for Hong Kong.

Low-Power Wireless State of the Art

That was then, this is now. I’m now wearing a pair of sub-miniature, 16-channel wireless hearing aids. These little puppies are awesome.

The Phonak Audéo SMART hearing aids sit behind your ears, with an almost invisible wire connecting to a tiny transducer that fits in your ear canal. Unlike my old ‘chewed peanut’ CIC devices, these allow unamplified sound to enter around them, so you can hear low-frequencies directly, with—in my case anyway—only the highs boosted. These come with 8/16/32 DSP channels and a number of programs that adjust automatically for different acoustic environments, including the aforementioned noisy restaurants and bars, where I’m happy to report they work superbly.

They’re also wireless. After initially placing them in my ears, the audiologist tuned each device up individually from his computer across the room. No wires, no “Stick your head in this acoustic box.” At the click of a mouse he showed me different programs for different listening environments. Cool!

Each device has two microphones, one pointing forward and one behind you. They communicate with each other to focus on a 45 degree cone in front of you; any loud sounds outside of that cone are attenuated; they can even notch out a single point source 45 degrees behind you to the left. You can tap either earpiece to select different programs to suit a wide range of acoustic environments, ranging from listening to a flute solo to sitting in the front row at a Metallica concert. I’ve found that the automatic setting can handle everything to my satisfaction, though Metallica might be ill advised.

The Audéo’s wireless technology has a transfer rate of 300 kbit/s using continuous-phase frequency shift keying. The transmission frequency is 10.6 MHz with a bandwidth of 300 kHz. This frequency was chosen to be able to support the transfer of complex broadband data with virtually no interference.

The magnetic field intensity needed for hearing-instrument wireless communication is low, since the devices sit on the head in close proximity to each other. The measured field strength for the Audéo hearing aids is 3 mV/m at 1 m, which equates to 0.18 picowatts; the magnetic field strength is less than -62 dBµA/m at 10 m. The Specific Absorption Rate (SAR) of the Audéo hearing aids is under 0.001 W/kg, more than three orders of magnitude less than what the FCC allows for cell phones. Don’t expect them to warm your brain up first thing in the morning—that’s what coffee is for.

If you’re an iPod addict, you can buy an optional Bluetooth device to connect your hearing aids to your iPod or to replace the headset on your computer-based VoIP phone. The thin palm-sized Bluetooth gadget also lets you redirect your hearing pattern in any direction, including to the right or left in the car if your spouse is driving.

The only limitation I’ve found is that pure sine waves—such as the “Put on your seatbelt!” signal from my car—are distinctly choppy. This doesn’t seem to be an AGC problem but more likely the result of slow data conversion. You can only run DSPs so fast if you want the tiny zinc-air hearing aid batteries to last 7-10 days. I don’t hear any choppiness or distortion when listening to music, but then I can’t do a meaningful comparison since the portions of the spectrum that the Audéos boost I can’t hear very well without them. I’m sorely tempted to do a teardown, but I doubt that the warranty would cover it.

I’ve been writing for years about low-power wireless and experimenting with new technologies as they came along. These little gadgets are the most impressive use of low-power wireless that I’ve seen to date. They’ve brought home to me in a personal way just how much technology can contribute to your personal well being.

Posted in Low-power design

Powering Down

Ever since Intel hit the Power Wall in 2004—when the Pentium 4 drew 150W and approached 1000 pins—low-power design has come into its own. Over the past decade smart engineers have come up with a seemingly endless number of innovative tricks to stave off the frequently predicted death of Moore’s Law, which was supposed to happen first at 90 nm, then 65 nm, then 40 nm, etc. Still, when gate doping variations of several atoms can cause a transistor to fail, the laws of physics are finally asserting themselves. As one wit observed recently about Moore’s Law, the party isn’t over but the police have arrived and the volume has been turned way down.

On one level better process technologies have gone a long way toward enabling low-power design. Smaller geometries enable lower-voltage cores, which helps quadratically on the power front, since dynamic power scales with the square of the supply voltage. Strained silicon, silicon-on-insulator, high-K metal gates and other clever process innovations have all enabled the continuing push to smaller geometries and more energy efficient designs.

On the system level design engineers have developed a long succession of power management techniques. Modern microcontrollers (MCUs) typically rely on power gating, clock gating, and more recently dynamic (even adaptive) voltage and frequency scaling to minimize power consumption in both active and inactive modes. With the number of sleep modes and voltage islands proliferating, fine-grained power management becomes so complex that most CPUs now rely on separate power management ICs (PMICs). Since MCUs are more self-contained, much of the power management burden is shifted from the embedded developer back to the chip designer.
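The payoff of voltage and frequency scaling is easy to see with a toy energy model: dynamic energy per cycle scales with CV², so finishing a fixed workload at a lower voltage point (and then sleeping) costs quadratically less. The capacitance, sleep power, and operating points below are made-up numbers, not any particular MCU's.

```python
# Illustrative-only DVFS arithmetic: energy = C*V^2 per cycle plus sleep energy.

C_EFF = 1e-9   # assumed effective switched capacitance per cycle, farads

def energy_for_task(cycles, voltage, freq_hz, sleep_power_w=1e-6, deadline_s=1.0):
    """Energy to run `cycles` of work, then sleep until the deadline."""
    active_time = cycles / freq_hz
    active_power = C_EFF * voltage**2 * freq_hz
    sleep_time = max(deadline_s - active_time, 0.0)
    return active_power * active_time + sleep_power_w * sleep_time

task_cycles = 5_000_000
fast = energy_for_task(task_cycles, voltage=3.0, freq_hz=32e6)   # race to sleep at 3.0 V
slow = energy_for_task(task_cycles, voltage=1.8, freq_hz=8e6)    # stretch the work at 1.8 V
print(f"3.0 V / 32 MHz: {fast*1e3:.1f} mJ   1.8 V / 8 MHz: {slow*1e3:.1f} mJ")  # ~45 vs ~16 mJ
```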

Low Power → Ultra-Low Power

Chips aside, the ‘race to the bottom’—in terms of power—between MCU vendors is getting heated. With the numbers they’re hitting, it’s hard to argue that newer MCUs are not indeed ‘ultra-low power’.

Renesas claims their 16-bit RL78/G13 delivers “the lowest power consumption in its class.” With up to 512 KB of flash and 32 KB of RAM, the RL78/G13 can deliver 41 DMIPS of performance (at 32 MHz) while consuming 66 µA/MHz. In Halt mode it consumes as little as 0.57 µA (RTC+LCD), or 0.23 µA in Stop mode (RAM retention).

TI promotes its 16-bit RISC ‘ultra-low power’ MSP430 line in a wide range of applications, including a wireless sensor circuit that can operate from a single coin cell for up to five years (thanks in part to a very short duty cycle). The MSP430C1101—with 1kB of ROM, 128B RAM, and an analog comparator—draws 160 µA at 1 MHz/2.2V in active mode, 0.7 µA in standby mode, and 0.1 µA in off mode.
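A rough power budget shows why the duty cycle matters so much. Assuming a CR2032-class coin cell of about 220 mAh (my assumption, not TI's figure), the average current that drains it in five years is only about 5 µA:

```python
# Back-of-the-envelope budget behind a "five years from a coin cell" claim.
# The 220 mAh capacity and 5-year target are assumptions; the 0.7 µA standby
# figure is the MSP430 number quoted in the text.

cell_mah   = 220.0
years      = 5.0
standby_ua = 0.7

budget_ua = cell_mah * 1000 / (years * 8760)   # average current that empties the cell in 5 years
print(f"average-current budget: {budget_ua:.1f} µA")                      # ~5.0 µA
print(f"left for sensing/radio bursts: {budget_ua - standby_ua:.1f} µA")  # ~4.3 µA average
```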

Microchip’s answer to the MSP430 is its eXtreme Low Power PIC Microcontrollers with XLP Technology. XLP processors include 16 to 40 MIPS PIC24 MCU and dsPIC DSC families with up to 256 KB of memory and a variety of I/O options. On its web site Microchip emphasizes how low power its devices are in deep sleep mode, comparing the PIC24F16KA102 favorably to the MSP430F2252 in LPM3 at 3V. Comparing power in active modes is considerably more complex, being highly application dependent. That’s what evaluation kits are for.

Silicon Labs claims that its C8051F9xx ultra-low-power product family includes “the most power-efficient MCUs in the industry,” with both the lowest active and sleep mode power consumption (160 µA/MHz and 50 nA, respectively, for the C8051F90x/91x) compared to “competitive devices.” Comparing data sheets is often an exercise in “apples and oranges,” but the numbers do justify the impression that ‘ultra-low power’ is a lot more than marketing hype.

NXP is definitely into green MCUs with its GreenChip ICs that “improve energy efficiency and reduce carbon emissions.” NXP’s recently announced LPC11U00—a Cortex-M0-based MCU—is decidedly low power, but this one focuses more on connectivity, incorporating a USB 2.0 controller, two synchronous serial port (SSP) interfaces, I2C, a USART, a smart card interface, and up to 40 GPIO pins.

STMicroelectronics features 8- and 32-bit families of ultra-low-power MCUs, apparently skipping over the 16-bit migration path that Microchip needed to fill. The 8-bit STM8L15xx CISC devices can run up to 16 MIPS at 16 MHz but still only draw 200 µA/MHz in active mode and 5.9 µA down to 400 nA in various sleep modes. Like NXP, ST is into connectivity, including a wide range of options on different devices.

Connectivity and flexibility are the main selling points for Cypress’ programmable system-on-chip, or PSoC. PSoC 5 is based on a 32-bit Cortex-M3 core running up to 80 MHz. Incorporating a programmable, PLD-based logic fabric, the CY8C54 PSoC family can handle dozens of different data acquisition channels and analog inputs on every GPIO pin. The chip draws 2 mA in active mode at 6 MHz, 2 µA in sleep mode (with RTC), and 330 nA in hibernate with RAM retention.

While the MCU landscape is constantly changing, the specs of low-power processors are increasingly impressive–the payoff of a decade of innovative chip design that shows no signs of letting up. Moore’s Law may be reaching the point of diminishing returns, but my money’s on creative engineers continuing to drive down the power curve for many years to come.

Posted in Low-power design, semiconductors

The RF Challenge in Portable Designs

In simpler times most designs were digital. Add a few converters to handle I/O and you could ship the product. Consumer electronics—and cell phones in particular—changed all that. Now there are few consumer designs that don’t involve a large analog/mixed-signal component as well as multiple RF chains. Adding a few ADCs and DACs to the signal path isn’t enough; the three worlds are now heavily intertwined.

Digital and analog designs start with some basic differences. Digital designs tend to focus on the time domain, whereas analog designs are more concerned with the frequency domain. Digital designers worry about time delays; analog designers worry about the accuracy of their components, which they can’t change by editing a few lines of code. For RF designers there are no simple components; every resistor has stray capacitance and inductance, and every trace is an antenna. Parasitic extraction hits a whole new level of complexity in RF designs. RF integration is the single biggest challenge for SoC designers and a major headache at the board level, too.

Designing the RF front end for a cell phone involves some serious tradeoffs. The power amplifier (PA) is second only to the display as an energy hog in handsets. Modern handset receivers typically have a sensitivity in the range of -106 dBm; they also need to be able to reject a 60 dB out-of-band signal without flattening the front end. The obvious solution is to crank up the power to the front end, since bandwidth and power are directly related—a tough tradeoff in a portable device.

In handsets you’ll also need to provide multiple RF chains that operate on different frequency bands for cellular, Bluetooth, Wi-Fi, UMTS, Mobile WiMAX, GPS and more. Oh, and you want DTV, DAB and FM with that, too? Just finding room on a tiny PC board for a combination of these protocols, each with different antennas operating at different frequencies—or MIMO antennas with multiple data streams—is problematic enough. Keeping them from interacting or radiating spurious signals back into the analog sections of the board is a serious headache. Integrating RF components on silicon alongside analog mixers, filters and LNAs is trickier still.

One way to ease the pain of RF integration is to go digital as quickly as possible. So-called “digital RF” doesn’t really replace a UHF sine wave with a string of bits, but it comes close. On the receive side, direct-conversion receivers combine direct RF sampling with discrete-time signal processing. The RF signal is sampled at the Nyquist rate, converted into discrete-time samples, filtered, down-converted and fed to the baseband processor. The transmit PA, in one configuration, is a series of digital NMOS switches that feed a matching network. On-chip capacitors smooth the square waves into an RF sine wave that is then fed to the antenna. This approach can cut PA power consumption in half.
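Here is a scaled-down illustration of the receive side in Python/NumPy. The frequencies are far below UHF to keep the arrays small, and a real direct-sampling receiver does this in dedicated hardware, but the structure—direct sampling, digital mixing, decimation/filtering—is the same.

```python
import numpy as np

fs  = 1_000_000          # sample rate (Hz), assumed for illustration
f_c = 100_000            # "RF" carrier (Hz), scaled way down
f_m = 1_000              # message tone (Hz)

t  = np.arange(0, 0.01, 1 / fs)
rf = (1 + 0.5 * np.cos(2 * np.pi * f_m * t)) * np.cos(2 * np.pi * f_c * t)  # AM test signal

# Discrete-time quadrature down-conversion (digital LO at the carrier frequency)
lo_i = np.cos(2 * np.pi * f_c * t)
lo_q = -np.sin(2 * np.pi * f_c * t)
baseband = (rf * lo_i) + 1j * (rf * lo_q)

# Crude low-pass filter / decimation: average blocks of 50 samples
decim = 50
bb = baseband[: len(baseband) // decim * decim].reshape(-1, decim).mean(axis=1)

print(len(bb), "baseband samples; recovered envelope ~", np.round(np.abs(bb)[:5], 2))
```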

The tools to enable designers to simulate and verify an RF/mixed-signal design have only recently started to appear. Traditionally analog designers have used SPICE models while their digital colleagues used VHDL or Verilog; rationalizing the results was at best time consuming. Now we’re starting to see SystemC models that include concurrency, bit accuracy, timing and hierarchy, enabling designers working at the architectural level to do hardware/software co-design, synthesizing and verifying a design down to the silicon. We’re still not to the point where you can go smoothly from algorithmic exploration to net lists, but we’re getting there.

Someday soon analog and RF will no longer be the exclusive turf of grumpy greybeards in corner cubes. They’ll be just two more tools in every designer’s toolkit.

 

Posted in Cell phones, RF/Wireless

How Green Is Your Prius?

Electric vehicles (EVs) would seem to have everything going for them: Aside from being quiet and cool, they’re also environmentally friendly and cheap to operate. But are they really? In this month’s issue of IEEE Spectrum (“Unclean at Any Speed”) Ozzie Zehner dares to challenge those assumptions with a mountain of research. While it’s fun to zip smugly past gas stations, the inconvenient truth is that when you look at the vehicle’s entire life cycle it’s not a pretty picture.

Charge It!

Part of the problem has to do with the sources of energy needed to charge EV batteries. Burning natural gas to produce electricity produces CO2, undercutting one of the key arguments for EVs. Most electricity in the U.S. (and almost all of it in China) is still produced by highly polluting coal-fired power plants. And nuclear power plants? Don’t even ask.

Relying on alternative energy sources doesn’t get us off the hook, either. Solar cells contain heavy metals, and manufacturing them releases some highly potent greenhouse gases such as sulfur hexafluoride, which has 23,000 times as much global warming potential as CO2. Plus fossil fuels are burned in extracting the materials used in solar cells and wind turbines, not to mention the lithium, copper, and nickel used in EV batteries. None of these substances are easily recycled.

Zehner cites an extensive 2010 study by the National Academy of Sciences, “Hidden Costs of Energy: Unpriced Consequences of Energy Production and Use.” Taking a holistic approach, the study drew together the effects of vehicle construction, fuel extraction, refining, emissions, and other factors. The researchers found that, of course, EVs produce no pollution while you’re driving them; however, they concluded that the vehicles’ lifetime health and environmental damages (excluding long-term climatic effects) are actually greater than those of gasoline-powered cars. Adding insult to injury, the lifetime difference in greenhouse gas emissions between EVs and vehicles powered by low-sulfur diesel was negligible.

Show Me the Money

While the politics of EVs vs. gas engines are attractive, the economics aren’t nearly as compelling.

Replacing a gas tank with a battery bank involves a huge downsizing of available energy. The table below demonstrates the dramatic advantage that gasoline has over even the most efficient batteries. Gasoline has an energy density of about 46 megajoules per kilogram (MJ/kg)—100 times greater than a lithium-ion battery; this in turn translates into 100x more Wh/kg. Batteries are also both heavy and expensive. The battery bank in the Tesla Roadster, for example, accounts for over a third of the weight of the vehicle. The battery bank is also the main reason that EVs are considerably more expensive than comparable gas-powered vehicles.

[Table: energy density of gasoline vs. batteries]

Furthermore—assuming that gasoline can deliver 36.6 kWh/U.S. gallon and that a gallon costs $3.50—it costs a mere $0.10/kWh, almost 50x cheaper per watt-hour than Li-ion. Those numbers involve comparing the capital expenditure on an automotive Li-ion battery bank—amortized over the life of the batteries—with the cost of an equivalent unit of energy derived from gasoline. From the consumer’s perspective, let’s say you drive your EV 100 miles and recharge it at a cost of under $5. Driving the same distance using your gas engine is likely to cost in the range of $15 (25 mpg @ $3.50/gallon)—3x as much. To the EV driver recharging seems to be a trivial expense compared to pumping $50 worth of gas into your tank. That illusion disappears when it comes time to spend several thousand dollars to replace the battery bank.
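The underlying arithmetic, using only the figures above, looks like this:

```python
# Cost comparison from the numbers quoted in the text.

gas_kwh_per_gal = 36.6
gas_price       = 3.50
print(f"gasoline: ${gas_price / gas_kwh_per_gal:.3f} per kWh")   # ~$0.096/kWh

# Cost to drive 100 miles
ev_cost  = 5.00                       # "recharge ... at a cost of under $5"
gas_cost = 100 / 25 * gas_price       # 25 mpg at $3.50/gallon
print(f"gas engine: ${gas_cost:.2f} per 100 mi vs. EV ~${ev_cost:.2f}")  # ~$14 vs. ~$5, roughly 3x
```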

Short of some unforeseen, dramatic breakthrough batteries will never be in the same ballpark with gasoline as a power source for cars, though hybrid electric vehicles (HEVs) are an attractive way to split the difference.

Driving Forward

In characteristically downbeat fashion Zehner concludes, “Upon closer consideration, moving from petroleum-fueled vehicles to electric cars begins to look more and more like shifting from one brand of cigarettes to another.” His conclusion: “Perhaps we should look beyond the shiny gadgets now being offered and revisit some less sexy but potent options—smog reduction, bike lanes, energy taxes, and land-use changes to start.”

All good suggestions, but let’s not write off the patient just yet. Addressing the power generation question, natural gas—widely used for backup power generation—is increasingly replacing coal fired power plants; it’s hardly non-polluting, but it’s cheap, plentiful, and an order of magnitude cleaner than coal, presenting a practical partial solution.

Similarly wind power is a non-polluting and highly viable power source, with huge wind farms in West Texas and along the Gulf Coast supplying much of my state’s electricity. Wind power can be used in conjunction with hydroelectric sources—another non-polluting source—by pumping water up into reservoirs while the wind is blowing and running it back down through turbine generators when it isn’t.

On the street level the name of the game is creating more energy efficient electric vehicles. That’s a challenge on which a lot of smart engineers are working, and they’ve made dramatic progress in recent years.

Don’t give up on your Prius quite yet.

Posted in Automotive, Batteries, Clean energy, Electric vehicles, Global warming

Build Your Own Personal Drone


When I was a boy I loved flying model airplanes. I’d laboriously build them from balsawood kits; cover them with tissue; and add a noisy .049 gas engine. Then I’d go to the neighborhood schoolyard and get dizzy flying them in endless circles at the end of control cables. Today for under $100 you can buy a Styrofoam plane with a battery-powered engine and wireless remote control—a cheap radio-controlled (RC) aircraft.

I bought one recently and took it to the neighborhood schoolyard where my son and I had a lot of fun with it. The noisy gas engine had been replaced by a small MCU-controlled BLDC motor running off a 7.2V/1000 mAh NiMH battery. I could easily add a small camera and transmitter and our inexpensive model airplane would suddenly become an unmanned aerial vehicle (UAV)—also known as a drone!

It turns out that a lot of engineering creativity is going into these things. The web site DIYDrones claims to be “the leading community for personal UAVs.” A very active site, it’s sort of a cross between SourceForge and Home Depot. You can download and/or buy just about all the hardware and software you’d ever need to create your own backyard drone.

The tiniest of the lot is the CrazyFlie Nano Quadcopter that can sit in the palm of your hand but zoom around like a crazed hummingbird, controlled from your PC or Android phone. The CrazyFlie is controlled by a 32-bit STMicro MCU and includes a 3-axis MEMS gyro, 3-axis accelerometer, an altimeter, sensors for heading measurement, and a 0 dBm (1 mW) 2.4 GHz transceiver. Weighing in at just 19 g it can only carry a payload of 10 g, so it would be hard pressed to carry a camera—though it’s possible—but it can pack an array of LEDs so you can chase the cat around in the dark. If you want to get creative the software is open source, and expansion headers let you trick out the hardware, too. Priced at $179 with radio.

If you want the real deal the ArduCopter 3.0—built around the venerable Arduino platform—claims to be “more than your average quadcopter” (whatever that might be). It’s an open-source multi-rotor UAV. This bad dog includes:

  • Automatic takeoff and landing
  • Auto-level and auto-altitude control
  • ArduPilot, which can automatically pilot the copter to up to 35 waypoints and return it to the launch point (GPS module required, of course)
  • A complete Robot Operating System that can enable multi-UAV swarming (hopefully that’s an option you can turn off)
  • MissionPlanner software, which lets you click on waypoints on a map, to which the Arducopter will then fly
  • Fully scriptable camera controls that can be preset for each waypoint—or you can control them in real time

Since the ArduCopter is a kit, it can take a number of configurations, with lots of options. If you want one ready to fly, the base price is $600—though it goes up quickly from there.

If fixed wing is your cup of tea—and you don’t care about keeping a camera pointed at one place—there’s the ArduPlane, which won the 2012 Outback Challenge UAV competition. Base price is $550, but the extra goodies can add up.

Finally, if you already have an RC plane you can buy the APM 2.5 autopilot with GPS ($179)—well, and maybe an optional telemetry kit ($75)—and convert your RC airplane into a fully autonomous UAV. But don’t forget the 5.8 GHz video transmitter and receiver ($190). Suddenly my $95 plastic plane costs 5x as much as I first put into it.

Maybe I’m not that interested in seeing what’s in my neighbor’s yard after all.

 

Posted in Uncategorized

Where is the next factor of 10 in energy reduction coming from?

Over the last decade chip engineers have come up with a large number of techniques to reduce power consumption: clock gating; power gating; multi-VDD; dynamic, even adaptive voltage and frequency scaling; multiple power-down modes; and of course scaling to ever smaller geometries. However according to U.C. Berkeley’s Jan Rabaey, “Technology scaling is slowing down, leakage has made our lives miserable, and the architectural tricks are all being used.”

If all of the tricks have already been applied, then where is the next factor of 10 in energy reduction coming from? Basically it’s a system-level problem with a number of components:

  1. Continue voltage scaling. As processor geometries keep shrinking, so too do core voltages—to a point. Sub-threshold bias voltages have been the subject of a great deal of research, and the results are promising. Sub-threshold operation leads to minimum energy per operation; the problem is it’s slow. Leakage is an issue, as is variability. But you can operate at multiple MHz at sub-threshold voltages. Worst case, when you need speed you can always temporarily increase the voltage. But before that, look to parallelism.
  2. Use truly energy-proportional systems. It’s very rare that any system runs at maximum utilization all the time. If you don’t do anything you should not consume anything. This is mostly a software problem. Manage the components you have effectively, but make sure that the processor has the buttons you need to power down.
  3. Use always-optimal systems. Such system modules are adaptively biased to adjust to operating, manufacturing, and environmental conditions. Use sensors to adjust parameters for optimal operation. Employ closed-loop feedback. This is a design paradigm shift: always-optimal systems utilize sensors and a built-in controller.
  4. Focus on aggressive deployment. Design for “better than worst-case”—the worst case is rarely encountered. Operate circuits at lower voltages and deal with the consequences.
  5. Use self-timing when possible. This reduces overall power consumption by not burning cycles waiting for a clock edge.
  6. Think beyond Turing. Computation does NOT have to be deterministic. Design a probabilistic Turing machine. “If it’s close enough, it’s good enough.” In statistical computing the I/O consists of stochastic variables; errors just add noise. This doesn’t change the results as long as you stay within boundaries. Software should incorporate Algorithmic Noise Tolerance (ANT). Processors can then consist of a main block designed for the average case and a cheap estimator block that takes over when the main block is in error (a toy sketch of this idea follows the list).
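Here is a toy sketch of the ANT idea from point 6: a main block that occasionally produces large errors (say, from aggressive voltage scaling) is paired with a cheap, always-roughly-right estimator, and the estimator's answer is used whenever the two disagree by too much. The error model, rounding scheme, and threshold are invented for illustration.

```python
import random

def exact_mac(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

def main_block(xs, ws, error_rate=0.05):
    """Fast/low-voltage multiply-accumulate that sometimes fails catastrophically."""
    y = exact_mac(xs, ws)
    if random.random() < error_rate:
        y += random.choice([-1, 1]) * 100.0    # rare large error (e.g., a timing violation)
    return y

def estimator_block(xs, ws):
    """Cheap estimator: low-precision (rounded) arithmetic, never wildly wrong."""
    return sum(round(x, 1) * round(w, 1) for x, w in zip(xs, ws))

def ant_mac(xs, ws, threshold=10.0):
    y_main = main_block(xs, ws)
    y_est = estimator_block(xs, ws)
    return y_main if abs(y_main - y_est) < threshold else y_est

xs = [random.uniform(-1, 1) for _ in range(64)]
ws = [random.uniform(-1, 1) for _ in range(64)]
print("exact:", round(exact_mac(xs, ws), 3), " ANT:", round(ant_mac(xs, ws), 3))
```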

In his keynote at Cadence’s Low Power Technology Summit last October Rabaey emphasized several points that bear repeating:

  • Major reductions in energy/operation are not evident in the near future;
  • Major reductions in design margins are an interesting proposition;
  • Computational platforms should be dynamically self-adapting and include self-regulating feedback systems;
  • Most applications do not need high resolution or deterministic outcomes;
  • The challenge is rethinking applications, algorithms, architectures, platforms, and metrics. This requires inspiration.

What does all of this mean for design methodology? For one thing, “The time of deterministic ‘design time’ optimization is long gone!” How do you specify, model, analyze and verify systems that dynamically adapt? You can’t expect to successfully take a static approach to a dynamic system.

So what can you do? You can start using probabilistic engines in your designs, using statistical models of components; input descriptions that capture intended statistical behavior; and outputs that are determined by inputs that fall within statistically meaningful parameters. Algorithmic optimization and software generation (aka compilers) need to be designed so that the intended behavior is obtained.

For a model of the computer of the future Rabaey pointed to the best known “statistical engine”—the human brain. The brain has a memory capacity of 100K terabytes and consumes about 20 W—about 20% of total body dissipation and 2% of its weight. It has a power density of ~15 mW/cm³ and can perform 10^15 computations per second using only 1-2 fJ per computation—orders of magnitude better than we can do in silicon today.

So if we use our brains to design computers that resemble our brains perhaps we can avoid the cosmic catastrophe alluded to earlier. Sounds like a good idea to me.

Posted in Energy Efficiency, Power management