Storing Volts

While electric vehicles have been around since the late 19th century, they only became practical with the development of energy storage systems offering a far better energy-to-weight ratio than bulky lead-acid batteries.

By the mid-90’s automakers had pretty much given up on being able to go very far on batteries alone, which led Toyota to introduce the Prius—the first commercial hybrid—in Japan in 1997. In EV mode the Prius is powered by a sealed 38-module 6.5 Ah/274V NiMH battery pack weighing 53.3 kg. That works out to 1.78 kWh total capacity. According to the EPA’s formula, one gallon of gasoline is equivalent to 33.7 kWh—almost 20x what the Prius’ battery alone can deliver. So it’s hardly surprising that the Prius relies primarily on its internal combustion engine for propulsion.

[Image: Chevrolet Volt battery pack]

The Chevrolet Volt features a much larger battery with considerably higher energy density than the Prius’. The Volt uses a 16 kWh (197 kg) manganese spinel lithium-polymer prismatic battery pack, which alone can power the Volt for 35 miles (56 km). The Volt’s lithium-ion battery has 2.5x the energy density of the Prius’ NiMH pack (0.0812 vs. 0.0319 kWh/kg). Considering that the volumetric energy density of lithium-ion is typically less than 2x that of NiMH—140-300 Wh/liter for NiMH vs. 250-620 Wh/liter for lithium-ion—that’s well on the high side of what you would expect.
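
If you want to check the arithmetic, a few lines of Python will do it. This is just a back-of-the-envelope sketch using the pack figures quoted above and the EPA’s 33.7 kWh/gallon equivalence; small differences from the numbers in the text come down to rounding of the pack specs:

    # Back-of-the-envelope battery math using the figures quoted above.

    prius_ah, prius_v, prius_kg = 6.5, 274.0, 53.3   # Prius NiMH pack
    volt_kwh, volt_kg = 16.0, 197.0                  # Volt Li-ion pack
    gasoline_kwh_per_gal = 33.7                      # EPA gasoline equivalence

    prius_kwh = prius_ah * prius_v / 1000.0          # 6.5 Ah x 274 V ~= 1.78 kWh
    print(f"Prius pack capacity: {prius_kwh:.2f} kWh")
    print(f"Gasoline advantage:  {gasoline_kwh_per_gal / prius_kwh:.0f}x per gallon")

    prius_density = prius_kwh / prius_kg             # ~0.033 kWh/kg
    volt_density  = volt_kwh / volt_kg               # ~0.081 kWh/kg
    print(f"NiMH pack density:   {prius_density:.3f} kWh/kg")
    print(f"Li-ion pack density: {volt_density:.3f} kWh/kg")
    print(f"Ratio:               {volt_density / prius_density:.1f}x")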

In addition to having a greater energy density than NiMH—in terms of both weight and volume—lithium-ion batteries also offer a much lower self-discharge rate; a greater number of charge/discharge cycles (i.e., they last longer); a more linear discharge curve, which enables more accurate prediction of remaining capacity; and better performance at low temperatures.

As far as durability goes, both battery types are about the same: NiMH batteries can be discharged and recharged 500-1000 times, with Li-ion batteries being good for 400-1200 cycles. Since replacing an EV battery pack can be a very expensive proposition—currently about $8,000 for the Volt—manufacturers typically guarantee them for an extended period. GM guarantees the Volt’s battery bank for 100,000 miles or eight years.

Not Your Dad’s Li-Ion Battery

[Image: Li-ion battery]

OK, assuming your Dad had Li-ion batteries, the ones in the Volt are better. The Volt’s battery design is based on technology developed at Argonne National Laboratory. The Lab used x-ray absorption spectroscopy to study new cathode compositions. They came up with a manganese-rich cathode that resulted in a dramatic increase in the battery’s energy storage capacity while at the same time making it less likely to overheat, and therefore safer and easier to maintain. To complete the trifecta, the new cathode material is also cheaper to manufacture.

Even if there isn’t much beyond Li-ion in terms of energy density—unless you’re comfortable with a thorium-based energy source—there’s still room for improvement. According to Khalil Amine, an Argonne senior materials scientist, “Based on our data, the next generation of batteries will last twice as long as current models.” Chances are your car would give out long before your battery does.

Recycling

When your Volt battery bank finally sends you an End of Life notice, what can you do with it? For one thing, you could keep it and use it to help recharge your new Volt battery. Or you might rig it to an inverter bank as a backup source of electricity during power outages, or at least during peak billing periods.

If GM gives you a credit for turning in your old battery on a new one, what can they do with it? The EPA claims that rechargeable batteries are not an environmental hazard as long as they’re not dumped in landfills; European governments aren’t quite so sanguine, since the contents of a Li-ion cell aren’t exactly something you’d want winding up in your water supply. Both the cathode and anode materials can be recycled, which is what most jurisdictions require.

In the end the Volt’s energy storage system turns out to be as high-tech as the rest of the car. Considering how much more reliable electric motors are than internal combustion engines, Volt owners could wind up owning their cars for a very long time.

[This post was originally part of a series of articles on the Chevy Volt for the UBM/Avnet series Drive for Innovation.]

Posted in Automotive, Batteries, Electric vehicles | Leave a comment

Are You Ready To Be An Internet Node?

I read an interesting article in this month’s IEEE Communications on the impact that 5G wireless communications will supposedly have on us. Call me old fashioned but I still remember when mobile phones were used for making phone calls. Well that was then, this is now.

According to the authors, “Instead of the consumers going to the Internet, the Internet will come to them, and in fact we will become nodes on the Internet.” We will become “both the source of valuable information and the sink for highly personalized information and content.” For this to happen, “all people and their information on the context of their environment need to be continuously available to one another.”

How about you? Are you ready to be an Internet node?

Lead me around by the node

First and second generation mobile phones simply handled analog and digital voice and text messaging. Then they invented camera phones and Netflix and all hell broke loose. The tsunami of video data swamped cellular networks and forced telcos to erect new basestations at a breakneck pace, which continues today. Third-generation (3G) phones introduced mobile broadband, yet consumers kept griping about slow data rates. So bring on 4G and start planning for 5G. But when they arrive I guarantee you that consumers will still gripe that they’re too slow. We want it all. Now.

So far mobile phones have been a good thing—they let you stay connected (when you want to) and call people and not just places. They let you access a world of knowledge that would otherwise be locked away. But the Google-served personalized ads have gone viral—less like a YouTube video than something your kids brought home from school.

Take, for example, what happens when you click a link and your browser loads a new web page. The page generates an ad impression, which is forwarded to an ad server. Using information gathered from Internet cookies left on your computer while visiting other web sites—plus information gleaned by robots scanning your Facebook, Twitter, LinkedIn, and other social media postings—the server knows your interests, location, age, and a lot more about your personal life. The server then tries to match what it knows about you against an inventory of pre-sold ads. If it’s a match, then bingo, up pops an ad specially designed to catch your attention.

If no match is made, the server forwards your profile—all in a fraction of a second—to an international ad exchange, where a network of ad servers bids for the ad slot in real time. The winner then pops up a banner ad for the book/car/shampoo/local Chinese restaurant you were looking for 10 minutes ago.
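
The whole dance is easier to see as pseudocode. Here is a deliberately toy sketch in Python; the function and campaign names are made up for illustration and don’t correspond to any real ad network’s API:

    # Toy model of ad selection: pre-sold inventory first, then a real-time auction.

    def serve_ad(profile, presold_inventory, exchange_bidders):
        # Stage 1: try to match the visitor's profile against pre-sold campaigns.
        for campaign in presold_inventory:
            if campaign["target"].items() <= profile.items():  # all targeting keys match
                return campaign["creative"]

        # Stage 2: no match, so auction the impression on the exchange in real time.
        bids = [(bidder(profile), name) for name, bidder in exchange_bidders.items()]
        price, winner = max(bids)
        return f"banner from {winner} (bid ${price:.4f})"

    profile = {"location": "Austin", "age_band": "35-44", "interest": "hiking"}
    presold = [{"target": {"interest": "golf"}, "creative": "golf resort banner"}]
    bidders = {"shoe_dsp": lambda p: 0.0021 if p["interest"] == "hiking" else 0.0002,
               "car_dsp":  lambda p: 0.0009}

    print(serve_ad(profile, presold, bidders))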

Things get even more personal if you’re walking through a shopping mall. With a very accurate fix on your phone’s location, the shop you’re walking past can pop up an ad on your phone with a special offer good only for the next hour. If this hasn’t happened to you yet, it will. My daughter the shopper may love this stuff, but it creeps me out to have my location be publicly trackable to within a few feet.

To see this in action open a map on your cell phone; switch to satellite view; find yourself on the map; set it for maximum resolution (the location fix combines GPS with cell-tower and Wi-Fi triangulation); and watch the map as you walk around. GPS satellites orbit roughly 12,500 miles up, but I can watch the little location dot on the map (superimposed on a satellite photo of my house) move as I walk from one side of my small office to the other.

Back to the future—or pulling back from the future?

In the brave new world the authors foresee, will it be a crime to go “off the grid”? Could you be cited or fined for creating a cyber blind spot in your little corner of the world?

And here you thought Facebook was playing fast and loose with your personal information (well, that and being trolled by bots). I’m hardly a Luddite, but if this is the future, I am so not ready for it.

Posted in Cell phones, RF/Wireless, Wireless | Leave a comment

When Low-Power Design Gets Personal

I lost my hearing in Hong Kong in 1996. Well, everything much over 1 kHz, that is. By all rights I should have lost it during rock concerts back in the ‘60s, but I guess the crowds made it hard to get too close to the speakers. Getting too close to pile drivers turned out to be a big mistake.

In Hong Kong I lived for a few years on Lantau Island and took the ferry to work every day to my office in Central. They were upgrading the Central ferry piers, which included spending several weeks driving huge steel I-beams directly through asphalt down to bedrock right next to where I got off the ferry. The piercing sound of a pile driver banging a steel I-beam into bedrock could be heard all around the harbor. When you’re walking next to it for two blocks it’s extremely painful and can do permanent damage to your auditory nerves, which according to the doctor is what happened to me. I noted with pained amusement that this all happened right outside the offices of the Occupational Deafness Compensation Board.

Many years ago I worked as a stereo technician and could easily hear notes above 15 kHz. Suddenly my hearing was down 20 dB (a factor of 100 in intensity) at 5 kHz vs. 1 kHz—the chart resembles an expert-level ski slope. Since speech intelligibility depends heavily on higher frequencies, this was a serious problem. I could easily converse with one or two other people in a quiet environment, but as soon as the noise level rose—or even if the television was on in the background—I’d lose the thread. Holding a conversation in a noisy restaurant or bar was completely out of the question.

I got fitted for a couple of the hot, new (1996) “completely in (ear) canal” (CIC) hearing aids, which looked and felt like chewed peanuts. They used 6-channel DSPs to cover the range from 500 Hz to 5 kHz. No programming was involved, just a one-time frequency compensation made by the audiologist. AGC was primitive, and they were of only limited help in noisy environments. As soon as I stepped out in the street after getting them fitted, I was greeted by two jackhammers, which caused them to completely shut down. I popped open the battery compartments and they made great earplugs. This technology was not ready for Hong Kong.

Low-Power Wireless State of the Art

That was then, this is now. I’m now wearing a pair of sub-miniature, 16-channel wireless hearing aids. These little puppies are awesome.

The Phonak Audéo SMART hearing aids sit behind your ears, with an almost invisible wire connecting to a tiny transducer that fits in your ear canal. Unlike my old ‘chewed peanut’ CIC devices, these allow unamplified sound to enter around them, so you can hear low frequencies directly, with—in my case anyway—only the highs boosted. These come with 8/16/32 DSP channels and a number of programs that adjust automatically for different acoustic environments, including the aforementioned noisy restaurants and bars, where I’m happy to report they work superbly.

They’re also wireless. After initially placing them in my ears, the audiologist tuned each device up individually from his computer across the room. No wires, no “Stick your head in this acoustic box.” At the click of a mouse he showed me different programs for different listening environments. Cool!

Each device has two microphones, one facing forward and one to the rear. They communicate with each other to focus on a 45-degree cone in front of you; any loud sounds outside of that cone are attenuated; they can even notch out a single point source 45 degrees behind you to the left. You can tap either earpiece to select different programs to suit a wide range of acoustic environments, ranging from listening to a flute solo to sitting in the front row at a Metallica concert. I’ve found that the automatic setting can handle everything to my satisfaction, though Metallica might be ill-advised.
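
The forward-facing pickup pattern is a classic two-microphone differential (delay-and-subtract) trick. The snippet below is only a minimal numpy illustration of that principle, assuming an idealized 1 cm mic spacing and plane-wave sources; it is not Phonak’s actual algorithm:

    import numpy as np

    fs = 16_000                      # sample rate (Hz), assumed
    c, d = 343.0, 0.01               # speed of sound (m/s), assumed mic spacing (m)
    tau = d / c                      # front-to-rear propagation delay (s)
    t = np.arange(0, 0.05, 1 / fs)

    def frac_delay(x, delay_s):
        """Delay signal x by delay_s seconds using linear interpolation."""
        n = np.arange(len(x))
        return np.interp(n - delay_s * fs, n, x, left=0.0)

    def beamform(src, from_front):
        """Simulate front/rear mics for a plane wave, then delay-and-subtract."""
        front_mic = src if from_front else frac_delay(src, tau)
        rear_mic  = frac_delay(src, tau) if from_front else src
        return front_mic - frac_delay(rear_mic, tau)   # cardioid: null at 180 degrees

    tone = np.sin(2 * np.pi * 1000 * t)                # 1 kHz test tone
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    print(f"front source rms: {rms(beamform(tone, True)):.4f}")
    print(f"rear  source rms: {rms(beamform(tone, False)):.6f}")   # ~0: rear is nulled

A real hearing aid would also equalize the gentle high-pass tilt that the subtraction introduces and adapt the delay to steer the null, but the directional idea is the same.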

The Audéo’s wireless link has a transfer rate of 300 kbit/s using continuous-phase frequency shift keying. The transmission frequency is 10.6 MHz with a bandwidth of 300 kHz. This frequency was chosen to support the transfer of complex broadband data with virtually no interference.

The magnetic field intensity needed for hearing-instrument wireless communication is low, since the two devices sit on the head in close proximity to each other. The measured field strength for the Audéo hearing aids is 3 mV/m at 1 m, which equates to 0.18 picowatts; the magnetic field strength is less than -62 dBµA/m at 10 m. The Specific Absorption Rate (SAR) of the Audéo hearing aids is under 0.001 W/kg, more than three orders of magnitude below what the FCC allows for cell phones. Don’t expect them to warm your brain up first thing in the morning—that’s what coffee is for.

If you’re an iPod addict, you can buy an optional Bluetooth device to connect your hearing aids to your iPod or to replace the headset on your computer-based VoIP phone. The thin palm-sized Bluetooth gadget also lets you redirect your hearing pattern in any direction, including to the right or left in the car if your spouse is driving.

The only limitation I’ve found is that pure sine waves—such as the “Put on your seatbelt!” signal from my car—are distinctly choppy. This doesn’t seem to be an AGC problem but more likely the result of slow data conversion. You can only run DSPs so fast if you want the tiny zinc-air hearing aid batteries to last 7-10 days. I don’t hear any choppiness or distortion when listening to music, but then I can’t do a meaningful comparison since the portions of the spectrum that the Audéos boost I can’t hear very well without them. I’m sorely tempted to do a teardown, but I doubt that the warranty would cover it.

I’ve been writing for years about low-power wireless and experimenting with new technologies as they came along. These little gadgets are the most impressive use of low-power wireless that I’ve seen to date. They’ve brought home to me in a personal way just how much technology can contribute to your personal well being.

Posted in Low-power design | Leave a comment

Powering Down

Ever since Intel hit the Power Wall in 2004—when the Pentium 4 drew 150W and approached 1000 pins—low-power design has come into its own. Over the past decade smart engineers have come up with a seemingly endless number of innovative tricks to stave off the frequently predicted death of Moore’s Law, which was supposed to happen first at 90 nm, then 65 nm, then 40 nm, etc. Still, when gate doping variations of several atoms can cause a transistor to fail, the laws of physics are finally asserting themselves. As one wit observed recently about Moore’s Law, the party isn’t over but the police have arrived and the volume has been turned way down.

On one level better process technologies have gone a long way toward enabling low-power design. Smaller geometries enable lower-voltage cores, which helps enormously on the power front, since dynamic power scales with the square of the supply voltage. Strained silicon, silicon-on-insulator, high-k metal gates and other clever process innovations have all enabled the continuing push to smaller geometries and more energy-efficient designs.

On the system level design engineers have developed a long succession of power management techniques. Modern microcontrollers (MCUs) typically rely on power gating, clock gating, and more recently dynamic (even adaptive) voltage and frequency scaling to minimize power consumption in both active and inactive modes. With the number of sleep modes and voltage islands proliferating, fine-grained power management becomes so complex that most CPUs now rely on separate power management ICs (PMICs). Since MCUs are more self-contained, much of the power management burden is shifted from the embedded developer back to the chip designer.
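
The payoff from voltage and frequency scaling follows from the familiar dynamic-power relation P = αCV²f. Here is a rough sketch; the activity factor, switched capacitance, and cycle count are made-up numbers for illustration:

    # Dynamic power P = alpha * C * V^2 * f, and energy per task under DVFS.

    def dynamic_power(alpha, c_farads, v_volts, f_hz):
        return alpha * c_farads * v_volts ** 2 * f_hz

    ALPHA, C = 0.1, 1e-9          # illustrative activity factor and switched capacitance
    TASK_CYCLES = 1e6             # cycles needed to finish one unit of work

    for v, f in [(1.2, 100e6), (0.9, 50e6)]:
        p = dynamic_power(ALPHA, C, v, f)
        e = p * (TASK_CYCLES / f)                 # energy = power x execution time
        print(f"{v:.1f} V @ {f/1e6:>5.0f} MHz: {p*1e3:.2f} mW, {e*1e6:.1f} uJ per task")

Note that the energy per task depends only on the square of the voltage (the frequency cancels out of the energy term), which is why running slower at a lower voltage and then sleeping is such a win.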

Low Power → Ultra-Low Power

The ‘race to the bottom’—in terms of power, if not price—between MCU vendors is getting heated. With the numbers they’re hitting, it’s hard to argue that newer MCUs are not indeed ‘ultra-low power’.

Renesas claims their 16-bit RL78/G13 delivers “the lowest power consumption in its class.” With up to 512 KB of Flash and 32 KB of RAM the RL78/G13 can deliver 41 DMIPS (at 32 MHz) while consuming 66 µA/MHz. In Halt mode it consumes as little as 0.57 µA (RTC + LCD), or 0.23 µA in Stop mode (RAM retention).

TI promotes its 16-bit RISC ‘ultra-low power’ MSP430 line in a wide range of applications, including a wireless sensor circuit that can operate from a single coin cell for up to five years (thanks in part to a very short duty cycle). The MSP430C1101—with 1kB of ROM, 128B RAM, and an analog comparator—draws 160 µA at 1 MHz/2.2V in active mode, 0.7 µA in standby mode, and 0.1 µA in off mode.
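
That “five years from a coin cell” figure is all about duty cycle. A rough estimate, assuming a 225 mAh CR2032 cell, the MSP430 currents quoted above, and a 1% duty cycle (self-discharge and peak-current effects ignored):

    # Battery-life estimate for a duty-cycled sensor node (figures are illustrative).

    CELL_MAH  = 225.0     # typical CR2032 coin cell capacity (assumed)
    I_ACTIVE  = 160.0     # uA, active @ 1 MHz / 2.2 V
    I_STANDBY = 0.7       # uA, standby current
    DUTY      = 0.01      # fraction of time spent active (1%)

    i_avg_ua = DUTY * I_ACTIVE + (1 - DUTY) * I_STANDBY
    hours    = (CELL_MAH * 1000.0) / i_avg_ua
    print(f"average current: {i_avg_ua:.2f} uA")
    print(f"estimated life:  {hours:,.0f} h (~{hours/8760:.1f} years)")

In practice the cell’s own shelf life and peak-current limits cap the answer well before the arithmetic does, but the point stands: keep the duty cycle tiny and the sleep current dominates.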

Microchip’s answer to the MSP430 is its eXtreme Low Power PIC microcontrollers with XLP Technology. XLP processors include 16 to 40 MIPS PIC24 MCU and dsPIC DSC families with up to 256 KB of memory and a variety of I/O options. On its web site Microchip emphasizes how low power its devices are in deep sleep mode, comparing the PIC24F16KA102 favorably to the MSP430F2252 in LPM3 mode at 3 V. Comparing power in active modes is considerably more complex, being highly application dependent. That’s what evaluation kits are for.

Silicon Labs claims that its C8051F9xx ultra-low-power product family includes “the most power-efficient MCUs in the industry,” with both the lowest active and sleep mode power consumption (160 µA/MHz and 50 nA for the C8051F90x/91x) compared to “competitive devices.” Comparing data sheets is often an exercise in “apples and oranges,” but the numbers do justify the impression that ‘ultra-low power’ is a lot more than marketing hype.

NXP is definitely into green MCUs with its GreenChip ICs that “improve energy efficiency and reduce carbon emissions.” NXP’s recently announced LPC11U00, a Cortex-M0-based MCU, is decidedly low power, but this one focuses more on connectivity, incorporating a USB 2.0 controller, two synchronous serial port (SSP) interfaces, I2C, a USART, a smart card interface, and up to 40 GPIO pins.

STMicroelectronics features 8- and 32-bit families of ultra-low-power MCUs, apparently skipping over the 16-bit migration path that Microchip needed to fill. The 8-bit STM8L15xx CISC devices can run up to 16 MIPS at 16 MHz but still only draw 200 µA/MHz in active mode and 5.9 µA down to 400 nA in various sleep modes. Like NXP, ST is into connectivity, including a wide range of options on different devices.

Connectivity and flexibility are the main selling points for Cypress’ programmable system-on-chip, or PSoC. PSoC 5 is based on a 32-bit Cortex-M3 core running up to 80 MHz. Incorporating a programmable, PLD-based logic fabric, the CY8C54 PSoC family can handle dozens of different data acquisition channels and analog inputs on every GPIO pin. The chip draws 2 mA in active mode at 6 MHz, 2 µA in sleep mode (with RTC) and 330 nA in hibernate with RAM retention.

While the MCU landscape is constantly changing, the specs of low-power processors are increasingly impressive: the payoff of a decade of innovative chip design that shows no signs of letting up. Moore’s Law may be reaching the point of diminishing returns, but my money’s on creative engineers continuing to drive down the power curve for many years to come.

Posted in Low-power design, semiconductors | Leave a comment

The RF Challenge in Portable Designs

In simpler times most designs were digital. Add a few converters to handle I/O and you could ship the product. Consumer electronics—and cell phones in particular—changed all that. Now there are few consumer designs that don’t involve a large analog/mixed-signal component as well as multiple RF chains. Adding a few ADCs and DACs to the signal path isn’t enough; the three worlds are now heavily intertwined.

Digital and analog designs start with some basic differences. Digital designs tend to focus on the time domain, whereas analog designs are more concerned with the frequency domain. Digital designers worry about time delays; analog designers worry about the accuracy of their components, which they can’t change by editing a few lines of code. For RF designers there are no simple components; every resistor has stray capacitance and inductance, and every trace is an antenna. Parasitic extraction hits a whole new level of complexity in RF designs. RF integration is the single biggest challenge for SoC designers and a major headache at the board level, too.

Designing the RF front end for a cell phone involves some serious tradeoffs. The power amplifier (PA) is second only to the display as an energy hog in handsets. Modern handset receivers typically have a sensitivity in the range of -106 dBm; they also need to be able to reject out-of-band signals 60 dB stronger without flattening the front end. The obvious solution is to crank up the power to the front end, since dynamic range and power are directly related—a tough tradeoff in a portable device.
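
Where does a number like -106 dBm come from? It falls out of the thermal noise floor plus the receiver’s noise figure and the SNR the demodulator needs. A quick sketch; the 200 kHz bandwidth, 8 dB noise figure, and 7 dB SNR are typical GSM-class assumptions rather than any particular handset’s spec:

    import math

    # Receiver sensitivity = thermal noise floor + noise figure + required SNR.
    k_t_dbm_hz = -174.0        # thermal noise density at room temperature (dBm/Hz)
    bw_hz      = 200e3         # assumed channel bandwidth
    nf_db      = 8.0           # assumed receiver noise figure
    snr_db     = 7.0           # assumed SNR needed for demodulation

    noise_floor = k_t_dbm_hz + 10 * math.log10(bw_hz)
    sensitivity = noise_floor + nf_db + snr_db
    watts = 10 ** (sensitivity / 10) / 1000          # convert dBm to watts

    print(f"noise floor: {noise_floor:.1f} dBm")
    print(f"sensitivity: {sensitivity:.1f} dBm ({watts:.2e} W)")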

In handsets you’ll also need to provide multiple RF chains that operate on different frequency bands for cellular, Bluetooth, Wi-Fi, UMTS, Mobile WiMAX, GPS and more. Oh, and you want DTV, DAB and FM with that, too? Just finding room on a tiny PC board for a combination of these protocols, each with different antennas operating at different frequencies—or MIMO antennas with multiple data streams—is problematic enough. Keeping them from interacting or radiating spurious signals back into the analog sections of the board is a serious headache. Integrating RF components on silicon alongside analog mixers, filters and LNAs is trickier still.

One way to ease the pain of RF integration is to go digital as quickly as possible. So-called “digital RF” doesn’t really replace a UHF sine wave with a string of bits, but it comes close. On the receive side, direct-conversion receivers combine direct RF sampling with discrete-time signal processing. The RF signal is sampled at the Nyquist rate, converted into packets, filtered, down-converted and fed to the baseband processor. The transmit PA, in one configuration, is a series of digital NMOS switches that feed a matching network. On-chip capacitors smooth the square waves into an RF sine wave that is then fed to the antenna. This approach can cut PA power consumption in half.

The tools to enable designers to simulate and verify an RF/mixed-signal design have only recently started to appear. Traditionally analog designers have used SPICE models while their digital colleagues used VHDL or Verilog; rationalizing the results was at best time-consuming. Now we’re starting to see SystemC models that include concurrency, bit accuracy, timing and hierarchy, enabling designers working at the architectural level to do hardware/software co-design, synthesizing and verifying a design down to the silicon. We’re still not to the point where you can go smoothly from algorithmic exploration to netlists, but we’re getting there.

Someday soon analog and RF will no longer be the exclusive turf of grumpy greybeards in corner cubes. They’ll be just two more tools in every designer’s toolkit.

 

Posted in Cell phones, RF/Wireless | Leave a comment

How Green Is Your Prius?

Electric vehicles (EVs) would seem to have everything going for them: Aside from being quiet and cool, they’re also environmentally friendly and cheap to operate. But are they really? In this month’s issue of IEEE Spectrum (Unclean at Any Speed) Ozzie Zehner dares to challenge those assumptions with a mountain of research. While it’s fun to zip smugly past gas stations, the inconvenient truth is that when you look at the vehicle’s entire life cycle it’s not a pretty picture.

Charge It!

Part of the problem has to do with the sources of energy needed to charge EV batteries. Burning natural gas to produce electricity produces CO2, undercutting one of the key arguments for EVs. A large share of the electricity in the U.S. (and almost all of it in China) is still produced by highly polluting coal-fired power plants. And nuclear power plants? Don’t even ask.

Relying on alternative energy sources doesn’t get us off the hook, either. Solar cells contain heavy metals, and manufacturing them releases some highly potent greenhouse gases such as sulfur hexafluoride, which has 23,000 times as much global warming potential as CO2. Plus fossil fuels are burned in extracting the materials used in solar cells and wind turbines, not to mention the lithium, copper, and nickel used in EV batteries. None of these substances is easily recycled.

Zehner cites an extensive 2010 study by the National Academy of Sciences, “Hidden Costs of Energy: Unpriced Consequences of Energy Production and Use.” Taking a holistic approach, the study drew together the effects of vehicle construction, fuel extraction, refining, emissions, and other factors. The researchers found that EVs, of course, produce no pollution while you’re driving them. However, it concluded that the vehicles’ lifetime health and environmental damages (excluding long-term climatic effects) are actually greater than those of gasoline-powered cars. Adding insult to injury, the lifetime difference in greenhouse gas emissions between EVs and vehicles powered by low-sulfur diesel was negligible.

Show Me the Money

While the politics of EVs vs. gas engines are attractive, the economics aren’t nearly as compelling.

Replacing a gas tank with a battery bank involves a huge downsizing of available energy. The table below demonstrates the dramatic advantage that gasoline has over even the most efficient batteries. Gasoline has an energy density of about 46 megajoules per kilogram (MJ/kg)—100 times greater than a lithium-ion battery; this in turn translates into 100x more Wh/kg. Batteries are also both heavy and expensive. The battery bank in the Tesla Roadster, for example, accounts for over a third of the weight of the vehicle. The battery bank is also the main reason that EVs are considerably more expensive than comparable gas-powered vehicles.

[Table: energy density of gasoline vs. battery technologies]

Furthermore—assuming that gasoline can deliver 36.6 kWh/U.S. gallon and that a gallon costs $3.50—gasoline energy costs a mere $0.10/kWh, almost 50x cheaper per kilowatt-hour than Li-ion. Those numbers involve comparing the capital expenditure on an automotive Li-ion battery bank—amortized over the life of the batteries—with the cost of an equivalent unit of energy derived from gasoline. From the consumer’s perspective, let’s say you drive your EV 100 miles and recharge it at a cost of under $5. Driving the same distance using your gas engine is likely to cost in the range of $15 (25 mpg @ $3.50/gallon)—roughly 3x as much. To the EV driver recharging seems to be a trivial expense compared to pumping $50 worth of gas into your tank. That illusion disappears when it comes time to spend several thousand dollars to replace the battery bank.
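
The per-mile arithmetic is worth spelling out. A quick sketch with round numbers; the 0.35 kWh/mile consumption and $0.13/kWh electricity price are my assumptions, not figures from the article:

    # Rough cost to drive 100 miles: gasoline vs. grid electricity.

    MILES        = 100.0
    MPG, GAS_USD = 25.0, 3.50          # gas car assumptions from the text
    KWH_PER_MILE = 0.35                # assumed EV consumption (wall-to-wheels)
    ELEC_USD_KWH = 0.13                # assumed residential electricity price

    gas_cost = MILES / MPG * GAS_USD
    ev_cost  = MILES * KWH_PER_MILE * ELEC_USD_KWH
    print(f"gasoline:    ${gas_cost:.2f}")
    print(f"electricity: ${ev_cost:.2f}")
    print(f"ratio:       {gas_cost / ev_cost:.1f}x")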

Short of some unforeseen, dramatic breakthrough, batteries will never be in the same ballpark as gasoline as an energy source for cars, though hybrid electric vehicles (HEVs) are an attractive way to split the difference.

Driving Forward

In characteristically downbeat fashion Zehner concludes, “Upon closer consideration, moving from petroleum-fueled vehicles to electric cars begins to look more and more like shifting from one brand of cigarettes to another.” His conclusion: “Perhaps we should look beyond the shiny gadgets now being offered and revisit some less sexy but potent options—smog reduction, bike lanes, energy taxes, and land-use changes to start.”

All good suggestions, but let’s not write off the patient just yet. Addressing the power generation question, natural gas—widely used for peaking power generation—is increasingly replacing coal-fired power plants; it’s hardly non-polluting, but it’s cheap, plentiful, and considerably cleaner than coal, presenting a practical partial solution.

Similarly wind power is a non-polluting and highly viable power source, with huge wind farms in West Texas and along the Gulf Coast supplying much of my state’s electricity. Wind power can be used in conjunction with hydroelectric sources—another non-polluting source—by pumping water up into reservoirs while the wind is blowing and running it back down through turbine generators when it isn’t.

On the street level the name of the game is creating more energy efficient electric vehicles. That’s a challenge on which a lot of smart engineers are working, and they’ve made dramatic progress in recent years.

Don’t give up on your Prius quite yet.

Posted in Automotive, Batteries, Clean energy, Electric vehicles, Global warming | Leave a comment

Build Your Own Personal Drone

[Image: ArduCopter]

When I was a boy I loved flying model airplanes. I’d laboriously build them from balsa-wood kits, cover them with tissue, and add a noisy .049 gas engine. Then I’d go to the neighborhood schoolyard and get dizzy flying them in endless circles at the end of control cables. Today for under $100 you can buy a Styrofoam plane with a battery-powered motor and wireless remote control—a cheap radio-controlled (RC) aircraft.

I bought one recently and took it to the neighborhood schoolyard where my son and I had a lot of fun with it. The noisy gas engine had been replaced by a small MCU-controlled BLDC motor running off a 7.2V/1000 mAh NiMH battery. I could easily add a small camera and transmitter and our inexpensive model airplane would suddenly become an unmanned aerial vehicle (UAV)—also known as a drone!

It turns out that a lot of engineering creativity is going into these things. The web site DIYDrones claims to be “the leading community for personal UAVs.” A very active site, it’s sort of a cross between SourceForge and Home Depot. You can download and/or buy just about all the hardware and software you’d ever need to create your own backyard drone.

The tiniest of the lot is the CrazyFlie Nano Quadcopter that can sit in the palm of your hand but zoom around like a crazed hummingbird, controlled from your PC or Android phone. The CrazyFlie is controlled by a 32-bit STMicro MCU and includes a 3-axis MEMS gyro, 3-axis accelerometer, an altimeter, sensors for heading measurement, and a 0 dBm (1 mW) 2.4 GHz transceiver. Weighing in at just 19 gm it can only carry a payload of 10 gm, so it would be hard pressed to carry a camera—though it’s possible—but it can pack an array of LEDs so you can chase the cat around in the dark. If you want to get creative the software is open source, and expansion headers let you trick out the hardware, too. Priced at $179 with radio.

If you want the real deal, the ArduCopter 3.0—built around the venerable Arduino platform—claims to be “more than your average quadcopter” (whatever that might be). It’s an open-source multi-rotor UAV. This bad dog includes:

  • Automatic takeoff and landing
  • Auto-level and auto-altitude control
  • ArduPilot, which can automatically pilot the copter to up to 35 waypoints and return it to the launch point (GPS module required, of course)
  • A complete Robot Operating System that can enable multi-UAV swarming (hopefully that’s an option you can turn off)
  • MissionPlanner software, which lets you click on waypoints on a map, to which the Arducopter will then fly
  • Fully scriptable camera controls that can be preset for each waypoint—or you can control them in real time

Since the ArduCopter is a kit, it can take a number of configurations, with lots of options. If you want one ready to fly, the base price is $600—though it goes up quickly from there.

If fixed wing is your cup of tea—and you don’t care about keeping a camera pointed at one place—there’s the ArduPlane, which won the 2012 Outback Challenge UAV competition. Base price is $550, but the extra goodies can add up.

Finally, if you already have an RC plane you can buy APM 2.5 autopilot with GPS ($179)—well, and maybe an optional telemetry kit ($75)—and convert your RC airplane into a fully autonomous UAV. But don’t forget the 5.8 GHz video transmitter and receiver ($190). Suddenly my $95 plastic plane costs 5x as much as I first put into it.

Maybe I’m not that interested in seeing what’s in my neighbor’s yard after all.

 

Posted in Uncategorized | Leave a comment

Where is the next factor of 10 in energy reduction coming from?

Over the last decade chip engineers have come up with a large number of techniques to reduce power consumption: clock gating; power gating; multi-VDD; dynamic, even adaptive voltage and frequency scaling; multiple power-down modes; and of course scaling to ever smaller geometries. However according to U.C. Berkeley’s Jan Rabaey, “Technology scaling is slowing down, leakage has made our lives miserable, and the architectural tricks are all being used.”

If all of the tricks have already been applied, then where is the next factor of 10 in energy reduction coming from? Basically it’s a system-level problem with a number of components:

  1. Continue voltage scaling. As processor geometries keep shrinking, so too do core voltages—to a point. Sub-threshold bias voltages have been the subject of a great deal of research, and the results are promising. Sub-threshold operation leads to minimum energy per operation; the problem is that it’s slow. Leakage is an issue, as is variability. But you can operate at multiple MHz at sub-threshold voltages. Worst case, when you need speed you can always temporarily increase the voltage. But before that, look to parallelism.
  2. Use truly energy-proportional systems. It’s very rare that any system runs at maximum utilization all the time. If you don’t do anything you should not consume anything. This is mostly a software problem. Manage the components you have effectively, but make sure that the processor has the buttons you need to power down.
  3. Use always-optimal systems. Such system modules are adaptively biased to adjust to operating, manufacturing, and environmental conditions. Use sensors to adjust parameters for optimal operation. Employ closed-loop feedback. This is a design paradigm shift: always-optimal systems utilize sensors and a built-in controller.
  4. Focus on aggressive deployment. Design for “better than worst-case”—the worst case is rarely encountered. Operate circuits at lower voltages and deal with the consequences.
  5. Use self-timing when possible. This reduces overall power consumption by not burning cycles waiting for a clock edge.
  6. Think beyond Turing. Computation does NOT have to be deterministic. Design a probabilistic Turing machine. “If it’s close enough, it’s good enough.” In statistical computing, inputs and outputs are stochastic variables; errors just add noise. This doesn’t change the results as long as you stay within bounds. Software should incorporate Algorithmic Noise Tolerance (ANT). Processors can then consist of a main block designed for the average case and a cheap estimator block that takes over when the main block is in error.
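
Here is a toy illustration of the algorithmic noise tolerance idea in item 6. It is purely a sketch, not Rabaey’s formulation: a ‘main’ computation running on error-prone hardware is shadowed by a cheap, coarse estimator, and the estimator’s result is substituted whenever the main result strays too far from it:

    import random

    def main_block(x):
        """Precise computation on aggressively scaled (error-prone) hardware."""
        y = x * x
        if random.random() < 0.05:          # rare gross error, e.g. a timing violation
            y += random.choice([-1, 1]) * 1000.0
        return y

    def estimator_block(x):
        """Cheap, always-correct-but-coarse estimate (here: reduced precision)."""
        return round(x) ** 2

    def ant(x, threshold=50.0):
        y, y_est = main_block(x), estimator_block(x)
        return y if abs(y - y_est) < threshold else y_est   # fall back on gross error

    random.seed(1)
    xs = [i / 10 for i in range(100)]
    max_err = max(abs(ant(x) - x * x) for x in xs)
    print(f"worst-case error with ANT: {max_err:.2f}")      # bounded by the estimator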

In his keynote at Cadence’s Low Power Technology Summit last October Rabaey emphasized several points that bear repeating:

  • Major reductions in energy/operation are not evident in the near future;
  • Major reductions in design margins are an interesting proposition;
  • Computational platforms should be dynamically self-adapting and include self-regulating feedback systems;
  • Most applications do not need high resolution or deterministic outcomes;
  • The challenge is rethinking applications, algorithms, architectures, platforms, and metrics. This requires inspiration.

What does all of this mean for design methodology? For one thing, “The time of deterministic ‘design time’ optimization is long gone!” How do you specify, model, analyze and verify systems that dynamically adapt? You can’t expect to successfully take a static approach to a dynamic system.

So what can you do? You can start using probabilistic engines in your designs, using statistical models of components; input descriptions that capture intended statistical behavior; and outputs that are determined by inputs that fall within statistically meaningful parameters. Algorithmic optimization and software generation (aka compilers) need to be designed so that the intended behavior is obtained.

For a model of the computer of the future Rabaey pointed to the best known “statistical engine”—the human brain. The brain has a memory capacity of 100K terabytes and consumes about 20 W—about 20% of total body dissipation and 2% of its weight. It has a power density of ~15 mW/cm³ and can perform on the order of 10^15 computations per second using only 1-2 fJ per computation—orders of magnitude better than we can do in silicon today.

So if we use our brains to design computers that resemble our brains perhaps we can avoid the cosmic catastrophe alluded to earlier. Sounds like a good idea to me.

Posted in Energy Efficiency, Power management | Leave a comment

Weightless Weighs In

Now that PCs are old news and seemingly everyone on earth has a cell phone, the Next Big Thing promises to be machine-to-machine (M2M) communication, giving rise to the Internet of Things (IoT)—presumably a parallel universe to the Internet of People (IoP).

Whether you believe AT&T’s prediction of 50 billion connected devices by the year 2020 or IBM’s of 1 trillion devices by 2015, the numbers are huge. Every vendor with a vested interest is arguing that their wireless solution is the best way to connect these devices, at least in certain applications. Now suddenly there’s a new entrant in the race—Weightless. As one wag joked, “Weightless is not 1G, 2G, 3G or even 4G – it is ZERO G!”

Weightless is a new low cost, low power, long-range wireless protocol designed for M2M communications. The design is the brainchild of Professor William Webb, co-founder of Neul Ltd., CEO of the Weightless SIG, and author of Understanding Weightless. First announced in 2011, Weightless has picked up some serious backers, including ARM, CSR, and Cable & Wireless. The goal is to make it the first global standard for M2M communications.

Its proponents claim Weightless has a number of advantages vs. other protocols:

  • Cost—Module cost is comparable to Bluetooth, at less than $2. Also the cost of the infrastructure would be a lot less than for cellular, since the protocol can reach up to 10 km—all things being equal—meaning you need a lot fewer base stations. Finally, Weightless reuses the unlicensed white space between TV channels, so there’s no massive upfront investment in spectrum (unless the FCC decides to put it up for auction).
  • Power consumption—Weightless devices are designed for a minimum of 10 years of battery life, since remote wireless sensors aren’t amenable to frequent battery replacement. This is possible in part because Weightless is a very lightweight protocol that spends little time in active mode. Also, it uses spread spectrum technology, which minimizes output power. Finally, Weightless devices have allocated time slots, so they aren’t constantly listening to the network and can stay asleep most of the time. Weightless basestations only page connected devices every 15 minutes, varying the symbol rate based on signal strength.
  • Range—Using sub-GHz frequencies, Weightless devices have very good propagation and penetration characteristics vs. Wi-Fi, Bluetooth, and other protocols that utilize the 2.4 GHz ISM band.

How does it work?

Weightless is designed to work in the so-called “white spaces” previously occupied by analog TV signals; typically this is in part of the UHF band spanning approximately 470-790 MHz, depending on the country. The FCC has ruled that these bands can be used by unlicensed devices, but only if they can detect the presence of other users and not interfere with them. That pretty much rules out Wi-Fi, which comes up short on interference detection and frequency agility.

Weightless uses time division duplexing (TDD), so both the uplink and downlink occupy the same channel. Since Weightless devices are assigned a particular time slot, they can spend most of their time asleep and needn’t constantly poll the channel, waking up to transmit only at preset intervals.

Weightless uses either phase shift keying or quadrature amplitude modulation (QAM) depending on signal strength and the amount of interference. It also utilizes a “whitening” algorithm to spread the signal and make it appear more like white noise, thus reducing interference. The data rates for the downlink range from 2.5 Kbps to 16 Mbps.

[Table: Weightless modulation and spreading schemes]

As the table indicates, Weightless uses a spreading algorithm to create a longer data sequence when signal levels are weak. It reduces the data rate and shifts to a simpler modulation scheme in order to reduce the error rate or to gain additional range.
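
Spreading is conceptually simple: each data bit is multiplied by a longer pseudo-random chip sequence, and the receiver correlates against the same sequence to recover the bit, trading data rate for processing gain. A minimal sketch; the 16-chip sequence is arbitrary, not the actual Weightless code:

    import random

    CHIPS = [1, -1, 1, 1, -1, 1, -1, -1, -1, 1, 1, -1, 1, -1, -1, 1]   # arbitrary PN code

    def spread(bits):
        return [b * c for b in bits for c in CHIPS]          # 16 chips per bit

    def despread(chips):
        bits = []
        for i in range(0, len(chips), len(CHIPS)):
            corr = sum(x * c for x, c in zip(chips[i:i + len(CHIPS)], CHIPS))
            bits.append(1 if corr > 0 else -1)
        return bits

    random.seed(0)
    data = [random.choice([1, -1]) for _ in range(8)]
    noisy = [x + random.gauss(0, 1.0) for x in spread(data)]  # add channel noise
    errors = sum(a != b for a, b in zip(despread(noisy), data))
    print("bit errors after despreading:", errors)            # almost certainly 0 here
    print("rate penalty: data rate is 1/%d of the chip rate" % len(CHIPS))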

Despite relying on a TDD scheme Weightless also implements frequency hopping in order to reduce interference and maximize the data rate. This also helps to reduce the effects of Rayleigh fading. In its preferred implementation Weightless makes use of narrowband uplink channels in order to balance the link budget with relatively high power base stations and low-power terminals.

Weightless utilizes root raised cosine pulse shaping to convert the square waves from the digital baseband into sine waves that can be fed to the RF PA. This would typically be handled by a DAC, but Weightless provides a software approach should you choose to go direct from baseband to antenna.

Weightless systems operate in master-slave mode, with the basestation as the master and the terminals as slaves. Basestations have separate IP addresses and backhaul capability. When they go live they contact a master database, which knows their location, power, and estimated coverage radius. When a terminal within the coverage range of a basestation announces its presence, the basestation queries the database for a clear frequency, which it then assigns to that terminal. When the terminal starts transmitting the basestation sends that information back to the database server along with signal levels.

As conditions change the basestation negotiates changes in frequency and modulation with the terminal as needed. The central database—with a few already in place in the U.S. and the U.K.—is key to making this all work, since sub-GHz signals can travel over the horizon, causing interference of which the transmitting station is unaware because intervening mountains prevent it from hearing the distant station. If this happens the database server will be aware of the problem and instruct the basestation to shift to another frequency.
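
In outline, the registration dance looks something like the sketch below. The class and method names are hypothetical, and the real Weightless database protocol is considerably more involved:

    # Hypothetical sketch of white-space channel assignment via a central database.

    class WhiteSpaceDatabase:
        def __init__(self, channels):
            self.free = set(channels)            # channels currently clear in this area
            self.assignments = {}                # terminal_id -> channel

        def request_channel(self, basestation_id, terminal_id):
            channel = min(self.free)             # pick any clear channel
            self.free.remove(channel)
            self.assignments[terminal_id] = channel
            return channel

        def report_interference(self, terminal_id):
            # Move the terminal to a different clear channel and free the old one.
            old = self.assignments[terminal_id]
            new = self.request_channel(None, terminal_id)
            self.free.add(old)
            return new

    db = WhiteSpaceDatabase(channels=[21, 22, 24, 27])      # illustrative UHF channels
    print("assigned channel:", db.request_channel("bs-001", "meter-42"))
    print("re-assigned after interference:", db.report_interference("meter-42"))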

Launching an Open Standard

Webb and the Weightless SIG folks argue that only wireless protocols that have been standardized will be really successful, since the two ends of a wireless link will often come from different vendors, and without a universal standard that connection isn’t likely to work. However, seeing an immediate market opportunity, the Weightless SIG chose to develop its own standard rather than wait years for the IEEE or ETSI to hash one out. This was the approach taken by the Bluetooth SIG, and that’s worked out.

[Image: Weightless chip]

In late 2011 Neul, a founding member of the Weightless SIG and a member of the Weightless Promoter Group, presented v0.6 of the Weightless specification to a small group of companies so that ongoing development work could commence. The Weightless SIG currently has a draft specification (version 0.9) under review by its members, and it plans to formally release version 1.0 on April 3, 2013. The specification will be open to all but with licensing arrangements that are yet to be formalized. Once the specification is published the Weightless SIG proposes to pass it to ETSI for consideration as a formal standard; presumably an IEEE specification would follow at some point.

Weightless moved beyond the concept stage last month when Neul announced first silicon of Iceni, which it bills as “the world’s first TV White Space ASIC.” Iceni operates over the entire TV white space frequency range from 470 MHz to 790 MHz, supporting both 6 MHz and 8 MHz channel bandwidths. It features the adaptive modulation schemes listed above; data encryption; programmable I/Os for controlling an external RF front end; an on-board, low-power MCU; and a memory-mapped parallel bus interface and discrete interrupt lines for waking an applications processor.

Will the M2M Future be Weightless?

The danger is that the Weightless SIG is basically a startup, with only an early specification and limited vendor support. That is changing rapidly, with over 500 members registering in the last 12 months. But then again there is no lack of capable standardized protocols that never gained much market traction, including—fairly or not—ultra-wideband (UWB), HiperLAN, 802.22 (WRAN), and WiMAX. Technical success doesn’t ensure market success, and it’s too early to judge either in this case.

Weightless seems to be a well designed protocol for M2M communications, though it’s not without competition; and standardized or not it’s not necessarily the obvious choice for all applications. However as an alternative to cellular it makes a lot of sense. Far from being weightless in that domain, it may well turn out to be a heavyweight. But that will take time, and only time will tell. Still, the Weightless SIG is off to a good start, and we wish them well.

 

Posted in ARM, RF/Wireless, semiconductors, Spectrum, Wireless | Leave a comment

Mmm—Raspberry Pi!

Having had great fun playing with Beagle ($125) and Panda ($175) boards, I was happily surprised when my backordered Raspberry Pi suddenly arrived a few days ago. At $35 for a tricked-out, credit-card-sized single-board computer, it was too good a deal to pass up. Despite its diminutive size and price, the little puppy stacks up well against its heftier competitors.

Actually “competitors” may be the wrong choice of words, since the Beagle and Panda are pitched to embedded developers and the Raspberry Pi’s purpose is to teach young students about computers. The computer was developed by the Raspberry Pi Foundation, a charitable organization set up by engineers and academics from the University of Cambridge to develop a tiny, cheap, programmable computer for kids.

The Raspberry Pi is built around a Broadcom BCM2835 SoC, which contains an ARM1176JZF-S core, with floating point, running at 700 MHz, and a VideoCore 4 GPU. The GPU is capable of Blu-ray-quality playback, using H.264 at 40 Mbits/s. It has a fast 3D core accessed using the supplied OpenGL ES 2.0 and OpenVG libraries. There are two versions of the board available: the Model A ($25), which has 256 MB of RAM and no Ethernet connection; and the Model B ($35) with 512 MB of RAM, two USB ports, and an Ethernet port. Both are available exclusively through Newark/element14 and (outside of America) RS Components. In addition there are lots of accessories, including cases, cables, expansion cards, and starter bundles. I bought the Model B board and a clear plastic box ($7.35) into which it neatly snapped.

Baking the Pi

Before you can boot the board you need to pre-load the operating system onto an SD card. The operating system is Raspbian, a Pi-optimized version of Debian Linux. To get started you need to download an image file with the uninspiring (though possibly appropriate) name of “wheezy.” Once you download the zip file you need to run a checksum program (sha1sum) to verify that the checksum of the downloaded file corresponds to the one shown on the download site. After spending 23 minutes downloading the 482 MB zip file I wasn’t happy that the checksums didn’t match. I did a second download and they still didn’t match. I wasn’t prepared to try it a third time, so I figured what the hell, let’s try it as is.
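
For anyone else chasing checksum mismatches, you can verify the image from Python just as easily as with sha1sum. A small sketch; the file name and expected digest below are placeholders:

    import hashlib

    def sha1_of(path, chunk_size=1 << 20):
        """Compute the SHA-1 digest of a file without loading it all into memory."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0123456789abcdef0123456789abcdef01234567"   # digest from the download page
    actual = sha1_of("wheezy-raspbian.zip")                  # placeholder file name
    print("match" if actual == expected else f"MISMATCH: {actual}")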

Since I did the download on my Windows PC I next had to download, unzip, and run Win32DiskImager. Then I was able to copy the wheezy image file to the SD card and transfer it to the Pi. I connected the board via a USB cable to an external power supply—since it can draw up to 750 mA, powering it from a PC’s USB port isn’t possible—and connected a monitor to the HDMI port. After rebooting, logging in, and starting the desktop, “You will find yourself in a familiar-but-different desktop environment,” according to the Getting Started guide. Right on both counts.

The Pi comes with a minimal set of programs and games, though through the Pi Store you can get a lot of different programs, most of them free. Of course you first need to get online, which I was able to do pretty easily by plugging an 802.11n Wi-Fi adapter into one of the two USB ports (a wireless keyboard went into the other). But where was the browser? The only likely candidate was this funny looking green thing in the upper right hand corner of the screen, which I double clicked. That didn’t explain much since all the labels and explanations were in Arabic (or possibly Devanagari)! The same was true for the labels in most of the other programs, though fortunately the Pi Store and the Debian Reference were in English. Those mismatched checksums were starting to worry me. I’ve never encountered a Linux virus before, but that doesn’t mean it can’t happen.

Once I hacked at the mystery program I figured out that it was the browser, and typing a URL in the appropriate place took me where I wanted to go. That worked well, if slowly—well, at least by an unfair comparison to my 2.5 GHz PC. The Pi is supposed to be able to display HD video, so I went to the Amazon site to run a couple of movie trailers to check it out. When I tried to run one, I was told I needed to install the latest Adobe Flash player. I downloaded the Linux (YUM) 32-bit Flash player, but when it didn’t automatically install I opened the downloaded file—only to find all the instructions in Arabic/Devanagari. Aargh!! Time to bite the bullet, download Wheezy from another source, and try it again—later.

Buggy downloads notwithstanding, the Raspberry Pi is a great little computer with a large network of developers finding a lot of new consumer and even commercial uses for it—it is a general purpose computer after all. It doesn’t have the ecosystem of Arduino, for example, but that’s only so far. There are limits to what you can do with the Pi—you’re stuck with the 512 MB of memory, for example—but hey, you can overclock it and run some pretty cool games. Not bad for 35 bucks!

Posted in ARM, Raspberry Pi | 2 Comments