Friday 20 March 2015

Linux’s worst-case scenario: Windows 10 makes Secure Boot mandatory, locks out other operating systems 



Microsoft unveiled new information about Windows 10 at its WinHEC conference in China today, and the news is deeply concerning to anyone who values the ability to run non-Microsoft operating systems on their own hardware. Like Windows 8, Windows 10 will ship with support for the UEFI Secure Boot standard — but this time, the off switch (previously mandatory) is now optional.
Let’s back up and review what Secure Boot is. As the name implies, Secure Boot is a security measure that’s meant to protect PCs from certain types of malware that are typically loaded before the OS boot process has begun. With Secure Boot active, the UEFI checks the cryptographic signature of any program that it’s told to load, including the OS bootloader.
The image above shows the conventional boot process compared with the Secure Boot process. There’s nothing intrinsically wrong with Secure Boot, and multiple Linux distros support the capability. The problem is, Microsoft mandates that Secure Boot ships enabled. This caused panic in the open source community back in 2011, since the firmware is configured with a list of signed, acceptable keys when the user receives the system. If an alternative OS bootloader isn’t signed with an appropriate key on a Secure Boot-enabled system, the UEFI will refuse to boot the drive.
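In pseudocode terms, the check the firmware performs is simple. Below is a minimal Python sketch of that allow-list logic, assuming a hypothetical list of OEM-enrolled RSA public keys; real UEFI firmware is C code with a more involved key hierarchy (PK/KEK/db/dbx), so this only illustrates the core idea.

# A minimal sketch of the Secure Boot allow-list check, assuming a hypothetical
# list of OEM-enrolled RSA public keys. Real firmware is far more involved;
# this only shows the core "is this bootloader signed by someone we trust" test.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def firmware_allows_boot(bootloader_image: bytes, signature: bytes, enrolled_keys) -> bool:
    """Return True only if a key the OEM shipped in firmware signed this bootloader."""
    for public_key in enrolled_keys:
        try:
            public_key.verify(signature, bootloader_image,
                              padding.PKCS1v15(), hashes.SHA256())
            return True      # signed by a trusted vendor: proceed with boot
        except InvalidSignature:
            continue         # try the next enrolled key
    return False             # unsigned or unknown loader: refuse to boot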
Microsoft defused the situation back then by mandating that all x86 systems ship with the ability to disable Secure Boot, and by partnering with VeriSign to create a method of signing third-party binaries in exchange for a $99 fee. With Windows 10, the situation is changing.

How Windows 10 changes things

OEMs are still required to ship Secure Boot, but the previously mandatory disable switch is now optional, as Ars Technica reports. With Windows 8, Microsoft split the requirement by CPU architecture — x86 systems had to offer a disable switch, but ARM devices didn’t. Now the split is between desktop and mobile: desktop OEMs can choose whether to offer the option, while mobile devices must leave Secure Boot locked on.
Image courtesy of Ars Technica
What this means for the future of Linux and alternative OSes is unclear at best. Those who build their own desktops will retain the ability to disable Secure Boot, since Asus or MSI doesn’t know what kind of operating system you’re going to load on the board. But laptops are a different story. Some laptop vendors will undoubtedly continue to ship a “Disable” option on Secure Boot, but vendors like HP and Dell may simply decide that closing the attack vector is more important than user freedom, particularly when the margin on PCs is so low to begin with. When every support call is measured against the handful of dollars an OEM makes on each machine, eliminating the need for such interaction is extremely attractive.
It’s not clear, as of this writing, whether Linux and BSD distro developers will be able to sign their software and install it on a Windows 10 system with Secure Boot enabled. Regardless, it’s difficult not to see this as another step along the long, slow journey of locking down PC hardware and making it more difficult for end users to control their own software. Psychological research has long confirmed the power of default settings — ship something enabled (or disabled), and the vast majority of users will never change the option. Given that Windows machines were already required to enable Secure Boot by default, where’s the security benefit in making the kill switch optional?
As far as we can tell, there isn’t one.

Wednesday 18 March 2015

The cyber warrior 'princess' who guards Google

Tabriz's biggest concern now is the people who find bugs in Google's software, and sell the information to governments or criminals.
To combat this, the company has set up a Vulnerability Rewards Program, paying anywhere from $100 to $20,000 for reported glitches.
"What we've seen in the last couple of years is what we suspect to be governments trying to intercept communications," said Tabriz. "In one case, there were Iranian-region Gmail users whose connection was being intercepted."
"These incidents are especially scary since they seem to be carried out by large, well-funded organizations or governments," she added.

Women warriors

It's a world away from Tabriz's computer-free childhood home in Chicago. The daughter of an Iranian-American doctor and a Polish-American nurse, Tabriz had little contact with computers until she started studying engineering at college.
Gaze across a line-up of Google security staff today and you'll find women like Tabriz are few and far between -- though in the last few years she has hired more female tech whizzes.
She admits there's an obvious gender imbalance in Silicon Valley, but for once is stumped as to the cause.
"Clearly the numbers make you think 'what is the problem that there aren't more women working in security, that there aren't more women working in technology?" she said.
"And it does make me think what is the problem here? Is it the culture or the atmosphere?"

Thinking outside the screen

Funnily enough, during training sessions Tabriz first asks new recruits to hack not a computer, but a vending machine.
"There's this idea that you need to be a super genius computer geek to be a hacker. But in reality, I think anybody can be a hacker in the real world -- just think of all the non-software examples," said Tabriz.
"A lot of people ask me what's the best answer I've been given to the vending machine problem, and the real answer is there is none. Some people think about how they'd steal their favorite snack; some people figure out how to steal the entire machine of snacks; and some people figure out how they could add some sort of functionality to the machine that wasn't there before"
Tabriz's job is as much about technological know-how as it is about understanding the psychology of attackers.
"Anybody who's working in defense -- police officers, security, or law enforcement -- has to stop and think 'what is the enemy or the attacker going to do?'" she said.
"Because you always want to stay one steahead of them."

Don’t edit the human germ line? Why not?




Victor Hugo once observed, “there is nothing more powerful than an idea whose time has come.” Not long ago, the world asked whether it could have read privileges to view its own genetic file of life. The answer, wrested from regulating bodies and crusty institutions by the expanding clientele of companies like 23andMe, was a resounding yes. Rapid advances in the ability to make edits under this file system have now forced the hand of researchers around the world into penning a moratorium, a temporary ban on germ-line gene editing. Once again, the world asks, if not now, then when?
The answer no longer comes exclusively from funding bodies picking winners and losers, or from journals holding sway over any knowledge they can sequester and trickle out as they see fit. The question is simply too rich. We need look no further than Nature magazine to see that the tables have turned. The first reference in their widely read commentary on the issue is not to an article in another peer-reviewed journal, but rather to an article from the people, an article in the popular science publication MIT Tech Review.
The article notes that while some countries have responded to the argument over who can do what to whose genome, and at which positions, with an indefinite ban, other countries will simply do. In fact, they already have: in monkeys, and in human embryos beset with genetic predispositions for ovarian or breast cancer. The gene editing techniques that can now be used to police our entire genome, potentially in any cell of the body, can also hit you right in the family jewels — the germ cells. The techniques have names like zinc finger nucleases or TALENs, but the one that has caused the biggest stir is called CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats).
The primary reason for much of the commotion is that CRISPR isn’t all that hard. Its RNA tag can target specific DNA sequences with decent accuracy, and its onboard protein nuclease can cut out the offending region and prepare the wound for the cell’s repair systems to act on. The main problems right now are that it doesn’t always do its job, it doesn’t always do it only where it’s supposed to, and it takes finite time to do it. If used not just in static cells, or germ cells in waiting, but in the rapidly dividing cells of, say, a developing embryo, then all bets are off. It can still work, but if it catches a dividing cell in the act, when its pants are down so to speak, there is much less predictability.
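For a rough feel for how that RNA-guided targeting works, here is a toy Python sketch that scans a DNA string for a 20-letter protospacer matching the guide, immediately followed by the NGG PAM motif Cas9 requires. The sequences and cut-site offset are illustrative only, not a bioinformatics tool.

# Toy illustration of RNA-guided targeting: find a 20-nt protospacer matching
# the guide, immediately followed by an NGG PAM, and report where Cas9 would
# cut (~3 bp upstream of the PAM). Sequences are made up, and this ignores
# mismatches, strandedness, and off-target scoring entirely.
import re

def find_cut_sites(genome: str, guide: str) -> list:
    pattern = re.compile(guide + "[ACGT]GG")          # protospacer + NGG PAM
    return [m.start() + len(guide) - 3                # blunt cut ~3 bp before the PAM
            for m in pattern.finditer(genome)]

guide = "GATTACAGATTACAGATTAC"                        # hypothetical 20-nt guide
genome = "TTACGGATCC" + guide + "TGG" + "CCGTAAGCT"   # target site present once
print(find_cut_sites(genome, guide))                  # -> [27]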
What is a bit curious, disturbing actually, is that amid all the fuss over editing a little part of one protein in the singular nuclear genome of a cell, places like Britain are in the process of bankrolling related but much more reckless procedures under the guise of fertility — namely, the mitochondrial transfer procedures that generate what is essentially a three-parent embryo. Against the normal background that is our potluck of genetic recombination, many people who potentially stand to benefit from things like CRISPR are asking, what exactly is the problem?
While it is illegal in Britain to modify even a single base pair in human gametes (eggs or sperm), as could conceivably be done in the creation of an IVF embryo, you might now knock yourself out restocking your egg with whatever mitochondria you want. Never mind that empowering the egg in this way potentially introduces 16.5 kilobase pairs of new DNA (as compared with the roughly 3.4 gigabase pairs of nuclear DNA), albeit with ample redundancy.
To better understand some of the issues involved in this kind of germ modification, I would suggest availing yourself of the two articles linked in the next sentence. They highlight some concerns with mitochondrial mutations, heteroplasmy (different brands of mitochondria in the same cell or organism), and potential pitfalls in the elective crafting of artisanal mitochondrial children. At the center of this issue is a new technique being made available by a company called OvaScience. Their ‘Augment’ procedure takes mitochondria not from a stranger’s egg, or even from somatic cells of the husband, but from supporting cells right next to the egg within the mother’s own defaulting ovaries.
It remains to be seen whether the mitochondrial DNA from these cells is of sufficiently better quality than that in the neighboring eggs. In particular, it is unclear whether these cells are privy to the selective genetic bottlenecks that the egg is subjected to in vetting its mitochondrial suitors, or whether that very bottleneck is the root cause of the issue. The founders of the company have made some intriguing discoveries regarding these cells, not least dispelling the myth that a woman is born with all the eggs she will ever have. In mentioning new work at OvaScience (and other places), what the Tech Review article, like many others, misses is that the ability to edit mitochondrial genomes as we would the nuclear genome is now coming into full view.
Instead of talking about ongoing work at places like OvaScience to do things analogous to CRISPR in stem cells — cells which could be turned into eggs (and might begin to skirt some issues that fall under the rubric of ‘germ cell law’) — we should probably be talking about editing single points in mitochondria, especially if we have already green-lighted editing the entire mitochondrial genome all at once through complete transfer. One researcher now looking at these issues is Juan Carlos Izpisua Belmonte from the Salk Institute in California. He is evaluating gene-editing techniques to modify the mitochondria in unfertilized eggs to be used later in IVF. If successful, we will soon have concerns even more immediate than CRISPR in germ cells.
At the heart of the issue is the fact that the proteins that make up the respiratory chain that powers our cells are mosaics. In other words, as researcher Nick Lane would say, mitochondria are mosaics. They are built from two genomes: their own DNA and the nuclear DNA, which re-apportions proteins (many once upon a time their own) back to them. Getting this mix right is the premier issue in fertility and any subsequent development of the organism. When negative mutations occur in the subunits making up these respiratory proteins, something predictable happens: they don’t fit together as closely anymore, and the electrons that need to be transported through them have a more difficult time tunneling through the reaction centers attempting to squeeze out every last drop of energy.
Mr. Lane passes down another quote to us in his forthcoming new book ‘The Vital Question,’ a book which makes much of this discussion a whole lot clearer. It comes from famous biophysicist Albert Szent-Györgyi, and it is a fitting conclusion to our remarks here on tinkering with the file system of life: “Life is nothing but an electron looking for a place to rest.”

Tuesday 17 March 2015



With the launch of the Apple MacBook and Google’s Chromebook Pixel, USB-C (also called USB Type-C) and the accompanying USB 3.1 standard are both hitting market somewhat earlier than we initially expected. If you’re curious about the two standards and how they interact, we’ve dusted off and updated our guide to the upcoming technology. The situation is more nuanced than it’s been with previous USB standard updates — USB 3.1 and USB Type-C connectors may be arriving together on the new machines, but they aren’t joined at the hip the way you might think.

USB Type-C: Fixing an age-old problem

The near-universal frustration over attempts to connect USB devices to computers has been a staple of nerd humor and lampooned in various ways until Intel finally found a way to take the joke quantum.
Super-positioned USB
USB Type-C promises to solve this problem with a universal connector that’s also capable of twice the theoretical throughput of USB 3.0 and can provide far more power. That’s why Apple is pairing up Type-C and USB 3.1 to eliminate the power connector on the MacBook. It’s a goal we agree with, even if we’re less thrilled with the company’s decision to dump USB ports altogether with that single exception. Google’s approach, in providing two USB-C and two regular USB 3.0 ports, is obviously preferable, even though it adds a bit of bulk to the machine.
Type-C connectors will be supported by a variety of passive adapters (an earlier version of this story erroneously asserted that such cables would not be available; ExtremeTech regrets the error). The spec provides for passive adapters with USB 3.0 / 3.1 on one end and USB Type-C on the other.

USB-C, USB 3.1 not always hooked together

The Type-C plug can be used with previous versions of the USB standard, which means manufacturers don’t automatically have to adopt expensive USB 3.1 hardware if they want to include the connector in mobile devices. Apple, to be clear, is offering USB 3.1 on the new MacBook, though the company hasn’t disclosed which third-party vendor is providing the actual chipset support.
A USB Type-C port next to USB 3.0.
The disconnect between USB 3.1’s performance standard and the USB Type-C connector is going to inevitably cause confusion. One reason the shift from USB 2.0 to 3.0 was relatively painless is that coloring both the cables and plugs bright blue made it impossible to mistake one type of port for the other.
The upside to decoupling USB 3.1 from USB-C, however, is that companies can deploy the technology on mobile phones and tablets without needing to opt for interfaces that inevitably consume more power. Then again, some might argue that this would be a moot point — the USB controller can be powered down when it isn’t active, and when it is active, the device should be drawing power off the PC or charging port anyway. Heat dissipation could theoretically remain a concern — higher bandwidth inevitably means higher heat, and in devices built to 3-4W specifications, every tenth of a watt matters.

If I had to bet, I’d bet that the 100W power envelope on USB 3.1 will actually be of more practical value than the 10Gbps bandwidth capability. While it’s true that USB 3.1 will give external SSD enclosures more room to stretch their legs, the existing standard still allows conventional mechanical drives to run at full speed, while SSDs can hit about 80% of peak performance for desktop workloads. It might not be quite as good, but it’s a far cry from the days when using USB 2.0 for an external hard drive was achingly slow compared to SATA. Signal overhead is also expected to drop significantly, thanks to a switch to 128b/132b encoding, similar to the scheme used in PCI-Express 3.0.
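The arithmetic behind that overhead claim is straightforward, assuming the standard raw line rates for each generation.

# Back-of-the-envelope payload bandwidth, assuming the standard raw line rates:
# USB 3.0 signals at 5 Gbps with 8b/10b encoding (20% overhead), while USB 3.1
# Gen 2 signals at 10 Gbps with 128b/132b encoding (~3% overhead).
usb30_payload = 5e9 * 8 / 10        # ~4.0 Gbps actually available for data
usb31_payload = 10e9 * 128 / 132    # ~9.7 Gbps actually available for data

print(f"USB 3.0: {usb30_payload / 1e9:.1f} Gbps usable")
print(f"USB 3.1: {usb31_payload / 1e9:.1f} Gbps usable")
print(f"Ratio:   {usb31_payload / usb30_payload:.1f}x")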
The ability to provide 100W of power, as opposed to 10W, however, means that nearly every manufacturer could ditch clunky power bricks. There would still be concerns about ensuring that connection points were sufficiently reinforced, but provided such concerns can be addressed, the vast majority of laptops could switch over to the new standard. Hard drives and other external peripherals could all be powered by single wires, as could USB hubs for multiple devices.
The higher bandwidth is nice, and a major selling point, but the flippable connector and the power provisioning will likely make more difference in day-to-day life. As for competition with Intel’s Thunderbolt, USB 3.1 will continue to lag Intel’s high-speed standard, but as bandwidth rises this gap becomes increasingly academic. At this point, it’s the features USB doesn’t allow, like RAID and TRIM, that matter more than raw bandwidth does in most cases.
Apple’s MacBook will be first out the door with USB 3.1 and USB-C support, with other vendors scurrying to match the company on both counts. LaCie has announced a new revision of its Porsche Design Mobile Drive that takes advantage of the Type-C connector, but it only offers USB 3.0. It’s going to take time for the 3.1 spec to really show up on peripheral devices, even those that adopt the USB-C connector. Motherboard support outside the Apple MacBook is probably 4-5 months away, though the first peripheral cables should be available well before that point.

NASA’s Curiosity rover up and running again [UPDATED]


Update (3/13/2015): According to NASA scientists, Curiosity is once again moving under its own power, having successfully transferred the material its robotic arm was holding into the appropriate analysis device. The team at NASA hasn’t restored the arm to full function yet, but has determined that the rover is safe to drive. Curiosity is now headed up the slopes of Mt. Sharp as it analyzes the mountain’s geology and hunts for telltale signs that life exists — or once existed — on the Red Planet. (The original story continues below.)
Curiosity’s long-term scientific mission has been on hold since February 27, when a short-circuit froze the robotic arm and put the kibosh on further research. Scientists and engineers working at NASA believe they have isolated the problem, however, and hope to have the rover back online and fully functional by early next week. On February 27, Curiosity was transferring sample powder from its robotic arm to other instruments when it detected what NASA characterizes as a “transient short circuit.” While this lasted less than 1/100 of a second, it was enough to trip the circuit breakers in the rover.
Curiosity self-portrait, with Mount Sharp in the background
Since February 27, the rover has been in partial shutdown while engineers tested various facets of the design to find the problem. On Thursday, the problem reoccurred — it appears to be within the subsystem that operates the drill’s percussive action. “The rover team plans further testing to characterize the intermittent short before the arm is moved from its present position, in case the short does not appear when the orientation is different,” NASA officials wrote in a statement. “After those tests, the team expects to finish processing the sample powder that the arm currently holds and then to deliver portions of the sample to onboard laboratory instruments.”
Curiosity’s drill isn’t just a rotating bit — it includes a percussive element to literally hammer into rock as well. A short in this subsystem could prevent the drill from operating at peak efficiency or restrict its operation to specific kinds of material. It might also mean that the drill can only be operated when the robotic arm is in certain positions. The long-term impact on Curiosity’s operation is still unclear. Much of Curiosity’s work at Mount Sharp, including its detection of organic chemicals and of a rise and fall in local methane levels, has relied on the operation of its drill. NASA’s ability to squeeze additional performance out of failing equipment is legendary, however, and this short circuit is not expected to prevent Curiosity from continuing to explore the Red Planet.
Opportunity will be pausing to study these “odd rocks”.
In related Mars news, Opportunity’s memory reformat appears to have gone smoothly. We haven’t checked in with Curiosity’s older, smaller cousin since last September, when we covered the news that NASA would attempt to reformat the rover to solve a creeping reboot problem believed to be caused by bad flash memory. Opportunity doesn’t get the same press as Curiosity, but the little rover recently found some “odd rocks” (NASA’s term) that it’s paused to investigate. The big, dark-gray rocks are apparently unusual for this area, and Opportunity will check them out before resuming its trek.

Monday 16 March 2015

Google begins developing its own quantum computer chips, to prepare for the future

UCSB/Google, five-qubit array
Google's artificial intelligence team, not content with merely sharing a D-Wave kinda-quantum computer with NASA, has announced that it will now be designing and building its own quantum computer chips. Rather than start from scratch, Google will absorb UC Santa Barbara’s quantum computing group, which recently created a superconducting five-qubit array that shows promise for scaling up to larger, commercial systems. Google, probably just behind IBM, now appears to be one of quantum computing’s largest commercial interests.
As you may know, Google has been researching potential applications of quantum computing since at least May 2013, when it bought a D-Wave quantum annealing computer with NASA. The Vesuvius chip inside the D-Wave system is kind of quantum, but not truly quantum in the sense that most scientists and physicists would use to describe a quantum computer. Benchmarks have shown that the D-Wave system only provides small speed-ups under very specific workloads — and in some cases, a standard desktop PC might be faster than the D-Wave. We’re not saying that Google was hoodwinked, but I don’t think it’s a coincidence that it’s now investing in a very different area of quantum computing.
Enter John Martinis who, in the words of Google’s Hartmut Neven, is “the world’s authority on superconducting qubits.” Martinis used to be at UC Santa Barbara, but it seems he and his entire research team are joining Google’s Quantum AI laboratory. Way back in October 2013 Martinis gave a talk at Google about his work in superconducting qubits (embedded below) — and then in April, he and his team published their latest research in Nature. At some point Neven (who runs the Quantum AI lab) was evidently impressed enough with the research to pick up the entire team. Presumably some money was involved. I wonder what kind of compensation UCSB gets.
The latest work by Martinis’ team, which will presumably be inherited by Google as it works towards realizing a computer capable of quantum AI, consists of a reliable five-qubit array. In the image at the top of the story, the five crosses are the qubits (called Xmons internally), and the squiggly lines are the readout resonators (for checking what value is stored in the qubit). The whole thing is superconducting — i.e. kept at cryogenic temperatures — but that isn’t really unusual, given that qubits are finicky beasts that very rapidly lose coherence at higher temperatures.
The main breakthrough of this recent work seems to be reliability. Because of its very nature, hardware that operates at a quantum level is unreliable and prone to errors — which leads to untrustworthy results, and to having to run a calculation hundreds of times to make sure you have the right answer. The superconducting five-qubit array has a fidelity of over 99%, which is good — but to make it “commercially viable”, the team says it will need to push the error rate down to just “1 in 1,000.”
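A rough way to see why roughly 99% fidelity isn't good enough: per-operation errors compound across a circuit. The sketch below is illustrative arithmetic only and ignores error correction entirely.

# Illustrative arithmetic only (no error correction modeled): with a per-gate
# error rate of 1 in 100, the chance of a long circuit finishing without a
# single error collapses quickly; at 1 in 1,000 it holds up far longer.
def circuit_success_probability(gate_fidelity, gate_count):
    return gate_fidelity ** gate_count

for fidelity in (0.99, 0.999):
    for gates in (100, 1000):
        p = circuit_success_probability(fidelity, gates)
        print(f"fidelity {fidelity}, {gates} gates: ~{p:.1%} chance of an error-free run")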
D-Wave’s new 512-qubit Vesuvius chip
If you’re looking for more details on UCSB’s Xmons, there’s a slide deck created by Martinis [PDF] that goes into the structure of the qubits, and how they made them so reliable.
For more information on why Google is even investing in quantum computing in the first place, the video below is pretty good. It focuses on the D-Wave (the video was made last year), but all the general ideas are the same. In short, though, Google just wants to make sure it’s ready for the future, when classical computers simply might not have enough oomph to handle all of the data and calculations required by advanced AI, self-driving vehicles, robots, and so on.

Massive new solid rocket booster successfully test fired, could eventually send humans to Mars


NASA has been grounded since the Space Shuttle program was ended a few years ago. The agency has partnered with private companies like SpaceX and Boeing to develop low-Earth orbit launch and resupply vehicles, but NASA wants to go beyond orbit. The Space Launch System (SLS) will be used to send the Orion capsule to more distant places in the solar system. To get off the ground, SLS will need new rockets, and one of them was just tested in Utah. The solid SLS rocket booster fired by Orbital ATK is the largest and most powerful ever built.
Rockets like the Falcon 9 carry liquid fuel reservoirs, but solid rocket boosters are different. A rocket motor powered by liquid oxygen, refined kerosene, or liquid hydrogen can be turned on and off and can provide variable thrust. A solid rocket booster, once fired, cannot be shut down — it simply burns all the way through. SRBs have been used on a variety of larger launch vehicles over the decades because they provide very high thrust and don’t require refrigerated fuels. For example, the Space Shuttle had two SRBs mounted on either side of the main orange fuel tank.
NASA’s new booster is a more advanced version of the one used to get the shuttle into orbit. The SLS qualification motor (QM-1) can put out 3.6 million pounds of thrust, which is roughly equal to 14 Boeing 747s at maximum power. The QM-1 uses many parts from past shuttle missions, but it has an extra segment that allows it to hold 25% more fuel than NASA’s old SRB. That brings the total height to 177 feet. There will be two solid rocket boosters on the SLS at launch, along with the four main engines, which are also being adapted from the shuttle program. However, the SRBs will provide about 75% of the thrust needed to escape Earth’s gravity. They’ll be jettisoned after use.
This first full-scale test of the SLS booster design went off without a hitch. The booster burned through 1.3 million pounds of propellant in a little more than two minutes (5.5 tons per second). This was a ground test, meaning the booster was bolted down so it didn’t go anywhere. Engineers will evaluate the booster itself and the mountain of data gathered during the test to see how it performed.
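For the curious, the quoted burn rate lands in the right ballpark as simple arithmetic; the exact burn time here is an assumption, which is why the result comes out slightly under the 5.5 tons per second figure above.

# Rough sanity check on the quoted burn rate; the exact burn time is assumed.
propellant_lbs = 1.3e6                    # propellant consumed during the test
burn_time_s = 126                         # "a little more than two minutes" (assumption)
lbs_per_s = propellant_lbs / burn_time_s
tons_per_s = lbs_per_s / 2000             # US short tons
print(f"~{lbs_per_s:,.0f} lbs/s, about {tons_per_s:.1f} tons of propellant per second")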
Another test of the SLS booster will take place in early 2016 with the booster (QM-2) cooled to 40 degrees F, which is the low end of the ignition range. The QM-1 test was actually conducted with the booster conditioned to 90 degrees Fahrenheit to test performance at the top of the propellant temperature range.
It will still be several years before engineers complete the final design for the SLS system. This ambitious program aims to land humans on an asteroid in the 2020s and on Mars a decade later. NASA is already well into the testing phase of the Orion crew capsule, which had a successful test flight in December.

New vanadium-flow battery delivers 250kW of liquid energy storage

Imergy Power Systems announced a new, mega-sized version of their vanadium flow battery technology today. The EPS250 series will deliver up to 250kW of power with a 1MWh capacity. We’ve talked about a number of different battery chemistries and designs at ET, from nanobatteries to metal-air, to various lithium-ion approaches, but we’ve not said much about flow batteries — and since this new announcement is a major expansion for the company (their previous battery was a 30kW unit), it’s an opportunity to take a look at the underlying technology.

Feel the flow

A flow battery can be thought of as a type of rechargeable fuel cell. The electrolyte fuel, in this case, is kept in large external tanks and pumped through a reactor. One of the defining characteristics of a flow battery is that energy storage is decoupled from power output: the size of the reactor determines how much power can be delivered at once, while the size of the storage tanks determines how much total energy can be stored.
This, in turn, makes it theoretically much easier to expand a flow battery installation than a lithium-ion one. Doubling your battery’s capacity is as simple as doubling the size of the storage tank. Flow batteries can also charge and discharge rapidly — refilling the tank with “charged” electrolyte can be as simple as opening a nozzle and pumping in replacement fluid while the original electrolyte is recharged in a separate container.
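To make that decoupling concrete, here is a toy Python model. The electrolyte energy density and tank volumes are assumed figures chosen so the base case resembles a 250kW / 1MWh unit; they are not Imergy's specifications.

# Toy model of the power/energy decoupling. All numbers are assumptions for
# illustration: the stack (reactor) sets peak power, the tanks set stored energy.
class FlowBattery:
    def __init__(self, stack_power_kw, tank_liters, wh_per_liter=25.0):
        self.stack_power_kw = stack_power_kw   # reactor/stack size sets peak power
        self.tank_liters = tank_liters         # tank size sets stored energy
        self.wh_per_liter = wh_per_liter       # assumed electrolyte energy density

    @property
    def capacity_kwh(self):
        return self.tank_liters * self.wh_per_liter / 1000.0

    @property
    def hours_at_full_power(self):
        return self.capacity_kwh / self.stack_power_kw

base = FlowBattery(stack_power_kw=250, tank_liters=40_000)      # ~1 MWh, 4 h at 250 kW
doubled = FlowBattery(stack_power_kw=250, tank_liters=80_000)   # same stack, 2x the tanks
print(base.capacity_kwh, base.hours_at_full_power)              # 1000.0 kWh, 4.0 h
print(doubled.capacity_kwh, doubled.hours_at_full_power)        # 2000.0 kWh, 8.0 h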
There are different types of flow batteries and multiple compatible battery chemistries, but Imergy’s designs all use vanadium for both electroactive elements. The ability to fill both ‘sides’ of the equation is an unusual property of vanadium and it simplifies certain aspects of the reactor design. Vanadium flow batteries are extremely stable — leaving the battery in a discharged state causes no damage, and the battery has an estimated lifespan of 30-50 years and supports thousands to tens of thousands of discharge cycles — far more than lithium-ion can manage.
The disadvantages of flow batteries are the relatively low energy density of the electrolyte solution and the complexity of the storage and pumping mechanisms. Research into improving vanadium’s energy density is underway; a team at the Pacific Northwest National Laboratory has found a way to boost the energy density of vanadium batteries by up to 70% by switching to a different electrolyte formulation.

The long-term market

Much of the debate over the long-term usefulness of battery technology in the US centers on whether batteries can be combined with solar and wind power while still matching the cost of existing natural gas, coal, and nuclear plants. What’s often ignored is that these equations look very different in other parts of the world, particularly in Africa or Indonesia, where import costs are high, infrastructure is limited (or nonexistent), and natural deposits of fossil fuels are scarce.
Africa also has enormous renewable energy potential — it receives huge amounts of solar power, its hydropower generating capability is largely untapped, and its geothermal and wave power are both abundant. The East African Rift in particular has high potential as a long-term geothermal power source.
Vanadium flow batteries could potentially augment renewable power in many areas across the continent, and Imergy is focusing its efforts on both the developing and the developed world. The company claims it can deliver power for a levelized cost as low as $300 per kWh, which would put it in competition with lithium-ion costs — including, possibly, in competition with Tesla as that company scales up its own industrial battery efforts.

Battery power used alone to track Android devices

The phone you carry around all day has a myriad of different sensors that measure everything from location to barometric pressure. Apps usually have to adhere to the permission control system built into platforms like iOS and Android to get that information, but a team of researchers at Stanford University has devised a way to collect location information without talking to the GPS hardware. All they need to figure out where you’ve been is access to the battery levels.
Android (specifically a Nexus 4) was used to test this location-tracking scheme, but it would work equally well on any mobile device that offers network access and battery stats. Tracking location based on battery activity is predicated on the assumption that the farther a device is from a cell tower, the more power it uses to maintain a connection. The same is true when it’s inside a building or otherwise obscured by structures.
The researchers call their proof-of-concept application PowerSpy. Before the app can be useful, it must first build a battery power map of a route. In the same way GPS systems use satellites as a point of reference, PowerSpy needs to know what battery performance to expect at different points along a journey and pre-associate that with a GPS location. Then, as you take the bugged phone around town, the PowerSpy app can watch the amount of juice being drawn and fit that to the map.
So what about all the other things that cause a phone to draw power? Using apps, placing calls, and simply having the screen on will drain a lot of battery. Indeed, this does increase noise in the data. According to the researchers, the algorithm used to track location is not attuned to these short-term fluctuations, but rather to a measurement period of several minutes. This allows the system to filter most of the battery usage that isn’t related to location and the cellular radio.
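Conceptually, the matching step might look something like the sketch below: smooth the observed drain trace over a multi-minute window to suppress screen and app noise, then pick the pre-recorded route profile it correlates with best. This is a hypothetical illustration of the idea, not the researchers' actual algorithm.

# Hypothetical sketch of PowerSpy-style matching: average away short-term noise,
# then choose the reference route whose power profile correlates best with the
# observed trace. Function names and the window length are illustrative choices.
import numpy as np

def smooth(trace, window=180):            # e.g. 180 one-second samples = 3 minutes
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(trace, dtype=float), kernel, mode="valid")

def best_matching_route(observed_drain, route_profiles):
    observed = smooth(observed_drain)
    scores = {}
    for name, profile in route_profiles.items():
        reference = smooth(profile)
        n = min(len(observed), len(reference))          # crude length alignment
        scores[name] = np.corrcoef(observed[:n], reference[:n])[0, 1]
    return max(scores, key=scores.get)                  # route with highest correlation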
To test PowerSpy, the team mapped out a number of different routes between two points. The goal was to determine which route the phone (and the human carrying it) took based entirely on battery drain. With few apps running, PowerSpy could identify the exact route two-thirds of the time, and its overall distance error was 150m on average. That’s not much worse than the coarse location permission on Android. With a larger suite of apps running in the background, including Facebook, Twitter, and Waze, the effectiveness dropped to 20% for exact route fits, but the average distance error was still only 400m.
It’s far from perfect, but rather impressive for using only battery drain. The study says PowerSpy could be improved further if it accounted for the battery usage of individual apps and services, as well as wakelock state. This data is all exposed by Android to some degree without root access. An app like PowerSpy could be used with malicious purposes in mind, but the requirement that you have a map of battery usage for various routes currently limits its real-world utility. If that data were acquired and associated with GPS coordinates, this technology could be more useful, and also potentially worrisome.

Flexible nanogenerator harvests muscle movement to power mobile devices

The consumer world is becoming powered by mobile devices, but those devices are still powered by being tethered to a wall or a reserve power pack. What if you could generate power for your mobile devices simply by moving your body, and the power source was almost unnoticeable? A new device developed at the National University of Singapore aims to fulfill both of those requirements.
The flexible nanogenerator resembles a small, stamp-sized patch that attaches to your skin. It uses your skin as a source of static electricity, and converts it to electrical energy — reportedly enough to power a small electronic device, like a wearable. The device, presented at the MEMS 2015 conference last week, can generate 90 volts of open-circuit voltage when tapped by a finger. The researchers presented the patch as a self-powered device that can track the wearer’s motion.
The power is generated via the triboelectric effect, in which certain materials become electrically charged through contact and friction with another material — in this case, the patch gains its charge through friction with human skin. When the two materials are pulled apart, they generate a current that can be harvested. An electrode is needed to harvest that current, so the research team added a 50nm-thick gold film to do the job. The gold film sits below a silicone rubber layer composed of thousands of tiny pillars that create more surface area for skin contact, which in turn creates more friction.
Thanks to the triboelectric effect, creating the device is easier as well — the skin is one of the triboelectric layers that helps produce the effect, so that layer doesn’t need to be built into the device itself, saving time, money, and materials. It also removes something that can go wrong with the device — having one less layer built in means that’s one less part that can break.
In the researchers’ test, a finger-tap on the device was able to generate enough current to power 12 commercial LEDs.
Aside from the obvious benefit of being able to, in theory, indefinitely power a device so long as you keep moving, this type of generator could remove the need for batteries in certain mobile devices — your smartwatch or fitness tracker could be made even thinner and lighter. Who knows — one day this type of generator could even generate enough energy to power your smartphone, perhaps even removing the battery entirely, which is one of the biggest constraints to smartphone development and design.