Friday, December 31, 2010

Solar powered microchips put batteries in the shade



In a new, more efficient approach to solar powered microelectronics, researchers have produced a microchip that directly integrates photovoltaic cells. While harnessing sunlight to power microelectronics isn't new, conventional set-ups use a separate solar cell and battery. What sets this device apart is that high-efficiency solar cells are placed straight onto the electronics, producing self-sufficient, low-power devices that are highly suitable for industrial serial production and can even operate indoors.
The autonomous microsystem was developed by the Semiconductor Components group at the University of Twente's MESA+ Institute for Nanotechnology led by Professor Jurriaan Schmitz. The researchers collaborated with colleagues from Nankai University in Tianjin, China and the Debye Institute of Utrecht University. The research was made possible by the STW Technology Foundation.
Instead of manufacturing the solar cell separately, the design sees the chip used as a base and the solar cell applied to it layer by layer. According to the UT release, this results in a more efficient production process, uses fewer materials and ultimately performs better.
The production process has not been trouble-free, with the researchers finding that the fragile electronics can easily be damaged. For this reason it was decided to use amorphous silicon or CIGS (copper indium gallium selenide) solar cells. The manufacturing of these cells does not affect the electronics, and these types of solar cells also produce sufficient power to allow the microprocessors to operate in low light or indoors. There is a catch though – the chip's power consumption must be well below 1 milliwatt.
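To put that budget in perspective, a rough back-of-the-envelope estimate shows what an on-chip cell can deliver; the irradiance, area and efficiency figures below are illustrative assumptions, not values from the research:

```python
# Rough power budget for an on-chip solar cell (illustrative figures only).

def available_power_mw(irradiance_w_m2, cell_area_mm2, efficiency):
    """Electrical power (mW) delivered by a cell of the given area."""
    area_m2 = cell_area_mm2 * 1e-6
    return irradiance_w_m2 * area_m2 * efficiency * 1e3

# Assume a 10 mm^2 amorphous-silicon cell at ~5% efficiency.
# Full sun delivers ~1000 W/m^2; office lighting roughly 1 W/m^2.
sunny = available_power_mw(1000.0, 10, 0.05)   # about half a milliwatt
indoor = available_power_mw(1.0, 10, 0.05)     # microwatt territory
```

Even in full sun such a cell yields only about half a milliwatt, and indoors orders of magnitude less – hence the hard requirement that the chip draw well under 1 mW.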
Tests have shown that the electronics and the solar cells function properly, and the manufacturing process is also highly suitable for industrial serial production with the use of standard processes.
The paper Above-CMOS a-Si and CIGS Solar Cells for Powering Autonomous Microsystems by J. Lu, W. Liu, C.H.M. van der Werf, A.Y. Kovalgin, Y. Sun, R.E.I. Schropp and J. Schmitz was presented at the International Electron Device Meeting in San Francisco in December.

Samsung to unveil next-gen flexible and transparent AMOLED displays at CES 2011


There’s bound to be all manner of display technologies vying for eyeballs at CES 2011 when it kicks off in Las Vegas next week and two prototype AMOLED displays from Samsung Mobile Display (SMD) will definitely be high on our list of things to check out. The first is a 4.5-inch 800 x 480 (WVGA) resolution flexible AMOLED display concept prototype for mobile devices, while the second is the world’s largest transparent AMOLED display prototype for use in PC monitors and TVs.

Flexible AMOLED display prototype

SMD’s 4.5-inch flexible AMOLED display is two millimeters (0.08-in) thick and can be rolled down to a radius of one centimeter (0.39-in). The concept prototype’s 800 x 480 resolution, which Samsung claims is four times that of any previous flexible AMOLED prototype, comes courtesy of a new plastic substrate that can withstand the 450-500 degree Celsius temperatures required in the manufacturing process.
Because this overcomes the problem of previous plastic materials melting during manufacturing – an obstacle that made commercialization of such devices difficult – Samsung says the concept display on show marks a major step on the road to mass production for the next-gen display, which is aimed at smartphones and tablet PCs.

Transparent AMOLED display prototype

The second prototype display to be unveiled is aimed at larger screen applications such as TVs and PC monitors. The 19-inch transparent AMOLED display prototype sports a qFHD (quad Full High Definition) resolution. This is a non-standard resolution of 3840 x 2160 pixels arranged in a 16:9 aspect ratio that gets its name from being four times the resolution of 1080p.
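The arithmetic behind the name checks out:

```python
from fractions import Fraction

qfhd, full_hd = (3840, 2160), (1920, 1080)

def pixels(resolution):
    width, height = resolution
    return width * height

assert pixels(qfhd) == 4 * pixels(full_hd)      # four times 1080p
assert Fraction(3840, 2160) == Fraction(16, 9)  # 16:9 aspect ratio
```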
The prototype display is the world’s first large transparent AMOLED display prototype and, while the average amount of transparency previously achieved has been below 10 percent, SMD’s display maintains up to 30 percent transparency whether it is turned on or off. Samsung says this will allow the technology to be used for surfing the internet while watching TV or even watching TV on windows – and by that it means the glass kind, including car windows, not the operating system.
As well as the 19-inch prototype, SMD will also be exhibiting a 14-inch qFHD transparent AMOLED display designed for notebooks.

IBM researchers bring Racetrack memory another step closer to reality



Racetrack memory is an experimental form of memory that looks to combine the best attributes of magnetic hard disk drives (low cost) and solid state memory (speed) to enable devices to store much more information, while using much less energy than current memory technologies. Researchers at IBM have been working on the development of Racetrack memory for six years and have now announced the discovery of a previously unknown aspect of key physics inside the new technology that brings it another step closer to becoming a reality.
Instead of making computers seek out the stored data they need, as is the case with traditional computing systems, Racetrack memory automatically moves data to where it can be used by sliding magnetic bits back and forth along nanowire “racetracks.” The researchers say that, because the data is stored as magnetic patterns – also known as domains – in racetracks just a few tens of nanometers wide, the technology would allow for portable devices to be created that could store all the movies produced within a given year with room to spare.
IBM has already proven that domains can act as nano-sized data keepers that can store at least 100 times more information than today’s techniques and can also be accessed at much greater speeds. The domain walls are moved at speeds of hundreds of miles per hour and stopped precisely at the position needed by controlling electrical impulses in the device, thereby allowing massive amounts of stored information to be accessed in less than a billionth of a second.
Now, for the first time, the researchers have been able to measure the time and distance of domain wall acceleration and deceleration in response to electric current pulses, which allows the precise control of the placement of the domains.
“We discovered that domain walls don't hit peak acceleration as soon as the current is turned on, and that it takes them exactly the same time and distance to hit peak acceleration as it does to decelerate and eventually come to a stop,” said Dr. Stuart Parkin, an IBM Fellow at IBM Research – Almaden.
“This was previously undiscovered in part because it was not clear whether the domain walls actually had mass, and how the effects of acceleration and deceleration could exactly compensate one another. Now we know domain walls can be positioned precisely along the racetracks simply by varying the length of the current pulses even though the walls have mass,” Parkin added.
This surprised the scientists because previous experiments had shown no evidence for acceleration and deceleration for domain walls driven along smooth racetracks with current.
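Parkin's observation – that the acceleration and deceleration transients exactly compensate – is what makes positioning by pulse length possible. A toy classical model (a wall of mass m pushed by force F against viscous damping g; purely illustrative, not IBM's micromagnetic model) shows the effect: the distance lost while accelerating equals the distance gained while coasting to a stop, so the final position is exactly proportional to the pulse duration.

```python
# Toy domain-wall model: m*dv/dt = F - g*v while the current pulse is on,
# then m*dv/dt = -g*v as the wall coasts to a stop. Illustrative only.

def final_position(pulse_ns, dt=0.001, m=1.0, F=1.0, g=1.0, settle_ns=20.0):
    x = v = t = 0.0
    while t < pulse_ns + settle_ns:
        drive = F if t < pulse_ns else 0.0
        v += (drive - g * v) / m * dt
        x += v * dt
        t += dt
    return x

# Doubling the pulse length doubles the final displacement.
p1, p2 = final_position(5.0), final_position(10.0)
```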
The scientists say that, aside from giving them an unprecedented understanding and control over the magnetic movements inside these devices, the discovery also brings Racetrack memory closer to marketplace viability. It is also likely to be of interest to other researchers working on the technology.

Thursday, December 30, 2010

IBM's annual list of five innovations set to change our lives in the next five years

IBM has announced its fifth annual Next Five in Five – a list of five technologies that the company believes “have the potential to change the way people work, live and play over the next five years.” While there are no flying cars or robot servants on the list, there are holographic friends, air-powered batteries, personal environmental sensors, customized commutes and building-heating computers.

3D telepresence

It may not be a flying car, but it’s definitely one we’ve seen in sci-fi movies before – the ability to converse with a life-size holographic image of another person in real time. The futurists at IBM point to recent advances in 3D cameras and movies, predicting that holography chat (aka 3D telepresence) can’t be all that far behind. Already, the University of Arizona has unveiled a system that can transmit holographic images in near-real-time.
It is also predicted that 3D visualization could be applied to data, allowing researchers to “step inside” software programs (wasn’t that just in a movie?), computer models, or pretty much anything else that is limited by a simple 2D screen. IBM compares it to the way in which the Earth appears undistorted when we experience it first-hand in three dimensions, yet it appears pinched at the top and bottom when we see it on a two-dimensional world map.

Air-powered or non-existent batteries

Lithium-air batteries are already in the works, and IBM predicts that batteries “that use the air we breathe to react with energy-dense metal” will result in smaller, lighter rechargeable batteries that last ten times longer than today’s lithium-ion variety. While such batteries could be used in everything from cars to home appliances, it is also suggested that small items such as mobile phones might not need batteries at all. IBM is trying to reduce the amount of power required for such devices to less than 0.5 volts per transistor. At those levels, it is claimed, they could be powered via “energy scavenging” – like already-existing kinetic wrist watches that get their power from the user’s arm movements, or experimental piezoelectric devices.

Personal sensors creating “citizen scientists”

As it currently stands, most scientific data must be gathered by scientists, who have to go out in the field and set up sensors or other data recording devices. Within five years, however, a lot of that data could be gathered and transmitted by sensors in our phones, cars, wallets, computers, or just about anything else that is subjected to the real world. Such sensors could be used to create massive data sets used for everything from fighting global warming to tracking invasive species. IBM also sees custom scientific smartphone apps playing a part in “citizen science,” and has already launched an app called Creek Watch that allows us regular folks to update the local water authority on creek conditions.

Customized commutes

Invaluable as Mapquest and other online mapping services have become to many of us, apparently they're just the tip of the iceberg. In the not-so-distant future, says IBM, sensors and other data sources (such as the aforementioned citizen scientists, perhaps?) will provide a continuous stream of information on traffic conditions, road construction, public transit schedules, and other factors that could affect your commute. When you inquire about the quickest way of getting from A to B, computer systems will do more than simply consult a map – they will also take into account all the variables unique to that day and time, combine them with mathematical models and predictive analytics technologies, and advise a route accordingly. It is also possible that, utilizing such data, traffic management systems could learn traffic patterns and adjust themselves to minimize congestion.
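The core idea can be sketched as an ordinary shortest-path search whose edge weights are scaled by live congestion data; the road network and multipliers below are hypothetical, and real systems would layer prediction on top.

```python
import heapq

def quickest_route(graph, start, goal, congestion):
    """Dijkstra's shortest path where each edge's baseline minutes are
    scaled by a live congestion multiplier (1.0 = free-flowing)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, minutes in graph.get(node, {}).items():
            nd = d + minutes * congestion.get((node, nxt), 1.0)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical network: the highway (A->B->D) is usually faster, but a
# rush-hour slowdown on A->B makes the side route (A->C->D) win today.
roads = {"A": {"B": 10, "C": 15}, "B": {"D": 10}, "C": {"D": 12}}
rush_hour = {("A", "B"): 3.0}   # live sensor data: 3x slower than usual
path, minutes = quickest_route(roads, "A", "D", rush_hour)
```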

Harvesting computer heat

It is estimated that half of the energy consumed by data centers goes toward cooling computer processors, with most of the removed hot air simply being blown into the atmosphere. Instead, IBM sees that heat being captured to warm the air in other areas of the building, to heat water, or to be converted into electricity. The company has already developed an on-chip water-cooling system for computer clusters, which is being demonstrated on the Swiss Aquasar supercomputer. It utilizes a network of microfluidic capillaries inside a heat sink, attached to the surface of each chip. Water flows within a few microns of the semiconductor material, picks up heat from it, then pipes the warm water to a heat exchanger – from there, the cooled water returns to the computers, within a closed loop system.
As with last year’s list, given that all of these technologies are already in experimental use, it’s a pretty good bet that they will indeed one day find their way into our lives. Whether that day is within the next five years, however, is another question.

Tuesday, December 28, 2010

Scientists successfully manipulate qubits with electrical fields



Until now, the common practice for manipulating the electron spin of quantum bits, or qubits – the building blocks of future super-fast quantum computers – has been through the use of magnetic fields. Unfortunately, these magnetic fields are extremely difficult to generate on a chip, but now Dutch scientists have found a way to manipulate qubits with electrical rather than magnetic fields. The breakthrough marks yet another important step in the quest for future quantum computers, which would far outstrip current computers in terms of speed.
Just like a normal computer bit, a qubit can adopt the states ‘0’ and ‘1’. One way to make a qubit is to trap a single electron in semiconductor material. Its state can be set by using the spin of the electron, which can be pictured as the electron spinning on its axis. As it can spin in two directions, one direction represents the ‘0’ state, while the opposite direction represents the ‘1’ state.
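One thing the ‘0’/‘1’ description leaves out is that, unlike a classical bit, a qubit can also occupy a superposition of both states at once. A minimal sketch of how such a spin state is written down (standard quantum mechanics, not specific to the Delft device):

```python
import cmath, math

# A qubit is a pair of complex amplitudes over the basis states |0>
# (spin one way) and |1> (spin the other); measuring yields 0 with
# probability |a0|^2 and 1 with probability |a1|^2.

def qubit(theta, phi):
    """Bloch-sphere state: cos(theta/2)|0> + e^(i*phi)*sin(theta/2)|1>."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def prob_zero(state):
    a0, _ = state
    return abs(a0) ** 2

spin_up = qubit(0.0, 0.0)          # definitely '0'
halfway = qubit(math.pi / 2, 0.0)  # equal superposition of '0' and '1'
```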
Until now, the spin of an electron has been controlled by magnetic fields but the scientists from the Kavli Institute of Nanoscience at Delft University of Technology and Eindhoven University of Technology have now succeeded in controlling the electron spin in a qubit with a charge or an electric field.
According to Leo Kouwenhoven, scientist at the Kavli Institute of Nanoscience at TU Delft this form of control has major advantages. "These spin-orbit qubits combine the best of both worlds. They employ the advantages of both electronic control and information storage in the electron spin," he said.
In another important quantum computing development, the scientists have also been able to embed these qubits into semiconductor nanowires. They embedded two qubits in indium arsenide nanowires measuring just nanometers in diameter and micrometers in length.
"These nanowires are being increasingly used as convenient building blocks in nanoelectronics. Nanowires are an excellent platform for quantum information processing, among other applications," said Kouwenhoven.
The scientists’ findings appear in the current issue of the journal Nature.

Students design electronic device that indicates safe drinking water



The worldwide shortage of clean drinking water is a serious problem, although in many cases there’s a relatively simple solution – just leave the tainted water outside in clear plastic bottles, and let the sun’s heat and ultraviolet rays purify it. This approach is known as SODIS (SOlar DISinfection of water in plastic bottles), and it removes 99.9 percent of bacteria and viruses – results similar to those obtained by chlorine. Unfortunately, however, there’s been no reliable way of knowing when the water has reached a safe level of purity. Now, four engineering students from the University of Washington have created a simple, inexpensive device that does just that... and they won US$40,000 in the process.
The UW students took part in a contest promoted by InnoCentive Inc., a company that hosts a website where organizations post technical challenges, and anyone can send in their solutions for a chance to win cash prizes. In this case, the nonprofit GlobalGiving Foundation had asked other nonprofits around the world to submit their water-related challenges, from which it chose five to post on InnoCentive – the Rockefeller Foundation supplied the prize money. The challenge the students took up had been submitted by the Bolivia-based Fundación SODIS, a nonprofit group that promotes the use of SODIS in Latin America.
The students, Chin Jung Cheng, Charlie Matlack, Penny Huang and Jacqueline Linnes, developed a simple device using parts from a keychain that blinks when exposed to light. When attached to a water bottle, it monitors how much light is passing through the water. An indicator light blinks on and off as long as particulates are still obstructing the light flow, and stops blinking once the water is safe to drink. It is also able to tell when a bottle of water is present in front of it, so it’s not trying to measure data when nothing’s there.
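The behavior described above amounts to a few threshold comparisons on the light-sensor reading. A sketch with invented threshold values (the team's actual calibration isn't published):

```python
# Indicator logic: no bottle -> idle; turbid water -> blink; clear -> steady.
BOTTLE_PRESENT_MIN = 5   # below this reading, nothing is in front of the sensor
SAFE_TRANSMISSION = 80   # above this, particulates no longer obstruct the light

def indicator_state(light_level):
    if light_level < BOTTLE_PRESENT_MIN:
        return "idle"
    if light_level < SAFE_TRANSMISSION:
        return "blinking"   # still disinfecting
    return "safe"           # water is ready to drink

states = [indicator_state(r) for r in (2, 30, 55, 85)]
```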



It is estimated that parts for each device would cost about US$3.40, although bulk buying should push that figure even lower. Matlack described it as containing “all the same components that you'd find inside a dirt-cheap solar calculator, except programmed differently.”
Fundación SODIS now holds a non-exclusive license to develop the technology, although a donor from the foundation has offered Matlack US$16,000 to do so himself. Along with Linnes and another student, he is now setting up a nonprofit business called PotaVida to produce and promote the water bottle indicator, and is looking for industry partners to add their expertise.
“We're at a point where we recognize the need for work on this beyond engineering,” he said. “Ultimately, the hardest part is going to be to get people to use it.”

Wednesday, December 22, 2010

Word Lens app turns your phone into a real-time translator


Word Lens translates printed words in real time on your iPhone. Can our jet packs be far behind? Developed by Quest Visual, Word Lens is an augmented-reality translation app that uses your phone's camera to view printed words and translate them into another language as you watch. If you’re traveling for business or on vacation and need to read a street sign or a menu, point your phone and Word Lens instantly translates it, maintaining the color and font as it goes.
Word Lens is currently available for the iPhone via iTunes. The app is free, with languages available for in-app purchase at US$5 each. So far, only Spanish-to-English and English-to-Spanish are available, but Quest Visual has plans to offer additional languages soon. After you purchase the languages, they are downloaded to your phone so you do not need a network connection to use Word Lens.
Quest Visual says the app is as easy to use as taking a picture with your phone. The app offers a zoom feature so you can crop out extraneous details, and a flashlight feature to light up the text if necessary. In addition, you can translate words by typing them in. Word Lens works best on clearly printed text, and does not work with decorative fonts or handwriting.
The app uses optical character recognition (OCR) technology to analyze the image and translate the words it finds. You can test out the Word Lens OCR capability in the free app using two cute features that will spell all words in the image backwards, or digitally erase all the words in the image. Like most translators and translation software, Word Lens is not perfect, but Quest Visual promises that you can at least get the general meaning of the text.
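Conceptually, the pipeline is OCR followed by word-for-word dictionary lookup. A heavily simplified sketch – the recognize() stub and the tiny dictionary are hypothetical stand-ins, and the real app also re-renders the translation in the original color and font:

```python
SPANISH_TO_ENGLISH = {"salida": "exit", "cerrado": "closed", "calle": "street"}

def recognize(image):
    """Stand-in for the OCR stage; here the 'image' is already text."""
    return image.lower().split()

def translate_frame(image, dictionary):
    # Unknown words pass through untranslated -- the aim is the general
    # meaning of the text, not a perfect translation.
    return " ".join(dictionary.get(word, word) for word in recognize(image))

result = translate_frame("SALIDA cerrado", SPANISH_TO_ENGLISH)
```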
Word Lens works with the iPhone 4, iPhone 3GS, and iPod Touch with camera, and requires iOS 4.0 or later (The iPhone 3GS and iPod Touch do not support the zoom and flashlight features). There is no official word on an Android version yet.
Quest Visual seems to have a hit on its hands. In its first day of release on iTunes, Word Lens quickly climbed into the top 40 app chart. Hopefully we can look forward to its continued development, with more languages and improved translations.

Friday, December 17, 2010

WWF introduces new PDF-like file format to stop you from printing



The World Wide Fund for Nature (WWF) says that an area of forest the size of Greece is cleared every year and that a significant proportion of that wood is pulped to make paper. In an effort to curb the needless printing of documents, the German branch of the organization has teamed up with Jung von Matt to introduce a new PDF-like digital file format that actually prevents a user from sending documents to the printer.


Even in these enlightened days of digital documents and in spite of various public, business and government efforts to reduce and recycle, trees are still being cut down to make paper. Ahead of next year being designated International Year of the Forests by the United Nations, WWF Germany has developed a new WWF file format to focus our attention on such issues every time we save a digital document.
Any document that doesn't need to be printed can be saved with a WWF file extension and when it's subsequently opened in a reader, the print option is blocked. WWF says that a WWF file can be viewed by most software that's able to read PDF documents – and I can confirm that this is certainly true of readers from Foxit and Adobe, although neither company has indicated official support for the development.
"We think the PDF ISO standard and Acrobat have tremendous potential to help customers with their efforts to go green," Adobe's Senior Director of Product Management for Acrobat Solutions told Gizmag. "Adobe Acrobat allows customers to create PDF with a range of security permissions, including the ability to disallow printing. The WWF format is based on the PDF standard and it is great to see WWF leveraging PDF in creative ways. At this point, we don't intend to support the .wwf file extension. We do participate with the ISO standards groups to further improve PDF and helping customers better leverage PDF for efficient and eco-friendly document sharing and printing is an important part of that effort."
Anyone wishing to support the new WWF document format will first need to download some free conversion software developed by the Jung von Matt advertising agency (currently compatible only with Mac OS X systems; a Windows version is on the way). Once installed, a new "Save As WWF" option will appear as an extra print option or be available via the application dock.
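The converter's internals aren't published, but the PDF standard it builds on already defines how printing is restricted: the /P entry of a document's encryption dictionary is a bit field, with bit 3 governing printing. A conceptual sketch of such a permission mask:

```python
# PDF user-access permission bits (per the PDF specification). A reader
# that honors the standard refuses operations whose bit is not set.
PRINT    = 1 << 2   # bit 3: print the document
MODIFY   = 1 << 3   # bit 4: modify contents
COPY     = 1 << 4   # bit 5: copy text and graphics
ANNOTATE = 1 << 5   # bit 6: add or modify annotations

def permission_mask(*allowed):
    mask = 0
    for flag in allowed:
        mask |= flag
    return mask

# A WWF-style document would simply withhold the print bit:
no_print = permission_mask(COPY, ANNOTATE)
assert not no_print & PRINT
```

As Adobe's comments suggest, protection of this kind relies entirely on readers honoring the flag – it is a deterrent, not cryptographic enforcement.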
WWF Germany says that the campaign is meant to be viral, and an extra page tagged onto each new-format document will help introduce new users to the campaign and encourage awareness about how we use paper in our digital lives. If you don't want this extra page added to catalogs, official documents, CVs and so on, then you'll need to choose another method of saving files and run the risk that such things may end up in a print queue somewhere.
More information on the campaign, together with a link to the conversion software, is available from the Save As WWF website.

Thursday, December 2, 2010

New IBM chip technology integrates electrical and optical devices on the same piece of silicon

IBM has announced another breakthrough in its long term research goal to harness the low power consumption and incredible speed promised by optical computing. Following on from the Germanium Avalanche Photodetector – a component able to receive optical information signals at 40 Gb/sec and multiply them tenfold using a mere 1.5V supply – the company has now unveiled a new chip technology that integrates electrical and optical devices on the same piece of silicon. So how far can this technology take us? Eventually, IBM hopes, all the way to the Exascale – that's one million trillion calculations per second.
IBM says the new technology, called CMOS (Complementary Metal-Oxide Semiconductor) Integrated Silicon Nanophotonics, will revolutionize the way chips communicate and, by integrating optical devices and functions onto a silicon chip, enable integration density more than 10 times greater than is feasible with current manufacturing techniques. This is possible because IBM’s new technology sees a single transceiver channel, with all accompanying optical and electrical circuitry, occupying only 0.5 mm² – ten times smaller than previous efforts. This means it should be possible to manufacture single-chip transceivers as small as 4 x 4 mm that can receive and transmit over a trillion bits (a terabit) per second.
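IBM's figures are easy to sanity-check, assuming each channel runs at the 40 Gb/s demonstrated by the Germanium Avalanche Photodetector mentioned above (an assumption – IBM doesn't state the per-channel rate here):

```python
chip_area_mm2 = 4 * 4        # a 4 x 4 mm transceiver die
channel_area_mm2 = 0.5       # one channel with all its circuitry
channels = int(chip_area_mm2 / channel_area_mm2)

per_channel_gbps = 40        # assumed, from IBM's earlier photodetector
aggregate_gbps = channels * per_channel_gbps
assert aggregate_gbps > 1000  # comfortably over a terabit per second
```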

Standard CMOS foundry manufacture

In addition to combining electrical and optical devices on a single chip, IBM says its new technology can be produced on the front-end of a standard CMOS manufacturing line without the need for any new or special tooling. This approach allows silicon transistors to share the same silicon layer with silicon nanophotonics devices and, to make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are scaled down to the diffraction limit – the smallest size that dielectric optics can afford.
IBM says single-chip optical communications transceivers can now be manufactured in a standard CMOS foundry, rather than assembled from multiple parts made with expensive compound semiconductor technology. This is made possible through the addition of a few more processing modules to a standard CMOS fabrication flow and enables a variety of silicon nanophotonics components, such as: modulators, germanium photodetectors and ultra-compact wavelength-division multiplexers, to be integrated with high-performance analog and digital CMOS circuitry.

Shooting for an Exaflop

By dramatically increasing the speed and performance between chips, IBM expects the new technology to further its ambitious Exascale computing program, which is aimed at developing a supercomputer that can perform one million trillion calculations – or an Exaflop – in a single second. Such a supercomputer would be around one thousand times faster than the fastest machine existing today.
“The development of the Silicon Nanophotonics technology brings the vision of on-chip optical interconnections much closer to reality,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “With optical communications embedded into the processor chips, the prospect of building power-efficient computer systems with performance at the Exaflop level is one step closer to reality.”
The details of IBM’s research effort were presented at the major international semiconductor industry conference SEMICON, held in Tokyo on December 1, 2010.

Monitor blood pressure while scrolling and clicking with the MDMouse


Monitoring blood pressure at home is recommended by the American Heart Association for the estimated 74.5 million American adults suffering from hypertension. CalHealth has created a blood pressure monitor that's housed in a computer mouse. After a user pushes a finger into the cuff monitor, the device sends readings to software on a PC for analysis, or to send on to doctors via email.
CalHealth's MDMouse is a fully functional USB optical mouse with a sphygmomanometer payload. The blood pressure meter extends on a rotating arm out of the body of the mouse. The user inserts a finger, and an air pump expands an air bag inside the tube around the digit. A pressure sensor stops the pump when it detects that the right amount of pressure has been applied and the user sets the monitoring to start via the computer software.
The pressure on the finger is initially increased beyond cutoff and then slowly decreased until arterial vessel pulsation is detected. CalHealth says that "the corresponding cuff pressure at this point will be substantially equal to systolic blood pressure which is the pressure when the heart is pumping."
The decrease of pressure continues until the device no longer registers arterial pulsation where, according to the company, "the pressure of the cuff at this point will be substantially equal to diastolic blood pressure."
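The two quoted rules translate directly into a scan over the deflation sweep: systolic pressure is where pulsation first appears, diastolic where it disappears. A toy version with invented numbers (not CalHealth's algorithm):

```python
def read_pressures(samples, pulse_threshold=2.0):
    """samples: (cuff_pressure_mmHg, pulsation_amplitude) pairs,
    ordered from high cuff pressure down to low during deflation."""
    systolic = diastolic = None
    for pressure, amplitude in samples:
        pulsing = amplitude >= pulse_threshold
        if pulsing and systolic is None:
            systolic = pressure        # pulsation onset
        if systolic is not None and not pulsing and diastolic is None:
            diastolic = pressure       # pulsation has vanished
    return systolic, diastolic

# Synthetic deflation sweep: pulsation appears at 120 and fades by 75.
sweep = [(160, 0.5), (140, 0.8), (120, 3.1), (100, 4.0), (80, 2.5), (75, 0.9)]
sys_bp, dia_bp = read_pressures(sweep)
```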
The readings are then interpreted by the software and displayed for the user. The software can also store data from previous tests and present the user with graphs for onward email transmission to medical personnel.
There's a release valve to let the air out after each test, so there's no fear of the experience turning into some Jigsaw nightmare where the device starts to menacingly crush the trapped finger.
However, there has been some doubt cast on the accuracy of finger-based monitors. The American Heart Association recommends an automatic, cuff-style, upper-arm monitor: "Wrist and finger monitors are not recommended because they yield less reliable readings."
Any home monitoring device should be checked for accuracy by medical practitioners.

Latest LHC experiments show early universe behaved like a liquid



Physicists from the ALICE detector team have been colliding lead nuclei together at CERN's Large Hadron Collider (LHC) in an attempt to recreate the conditions in the first few microseconds after the Big Bang. Early results show that the quark-gluon plasma created at these energies does not form a gas as predicted, but instead behaves like a hot liquid – suggesting the very early universe did too.
The Large Hadron Collider enables physicists to smash together sub-atomic particles at incredibly high-energies, providing new insights into the conditions present at the beginning of the universe.
ALICE (an acronym for A Large Ion Collider Experiment) researchers have been colliding lead nuclei to generate incredibly dense sub-atomic fireballs – mini Big Bangs at temperatures of over ten trillion degrees.
Previous research at lower energies had suggested the hot fireballs produced in nuclei collisions behaved like a liquid, yet many still expected the quark-gluon plasma to behave like a gas at these much higher energies.
Additionally, it has been found that more sub-atomic particles are produced in the collision than some theoretical models suggested.
“Although it is very early days we are already learning more about the early Universe,” said Dr David Evans, from the University of Birmingham’s School of Physics and Astronomy, and UK lead investigator at the ALICE experiment. “These first results would seem to suggest that the Universe would have behaved like a super-hot liquid immediately after the Big Bang.”
The ALICE experiment aims to study the properties of the state of matter called a quark-gluon plasma. The ALICE Collaboration comprises around 1,000 physicists and engineers from around 100 institutes in 30 countries. During collisions of lead nuclei, ALICE will record data to disk at a rate of 1.2 gigabytes (GB) per second – equivalent to two CDs every second – and will write over two petabytes (two million GB) of data to disk. This is equivalent to more than three million CDs, or a stack of CDs without boxes several miles high!
To process this data, ALICE will need 50,000 top-of-the-range PCs, from all over the world, running 24 hours a day.

Dynamic Eye sunglasses use moving LCD spot to reduce glare



Chris Mullin from Pittsburgh has designed a pair of smart electronic sunglasses that pinpoint and reduce glare using a moving liquid crystal display spot inside the lens. Dubbed "Dynamic Eye", the sunglasses dim direct sunlight or other hot spots without dimming everything else in view, so you no longer have to worry about driving home with the sun streaming directly into your line of vision.
Mullin came up with the electronic sunglasses after completing his PhD in physics at the University of California at Berkeley. The idea behind the design sprang from the general inability of most sunglasses, even those with polarized lenses, to cut out direct sunlight glare while keeping a clear picture of everything else.
Using the two polarizers in the liquid crystal display, the glasses are able to darken the area between your pupil and the glare source. Half the light passes through the first polarizer, and the liquid crystal in the middle determines whether the light will be absorbed by, or pass through, the second polarizer. If the sun moves, then so does the liquid crystal spot, and if there is no glare, there is no spot.
 
“The problem with the sun is that it’s ten thousand times brighter than everything else you’re looking at, and your eyes can’t handle the difference. You squint, pull down the shade, put your hand, or do anything to get rid of the sun,” explained Mullin on Kickstarter.com. “With our glasses, you can relax, because the sun is dimmed down to an acceptable level. You can still see it, as well as any silhouettes that come in front, and because the glare is blocked, you can see a lot more of what’s near the sun.”
Glare-reducing night glasses, which use a special transparent LCD developed by researchers at Kent State University, are also in development.

Tuesday, November 23, 2010

Pulse Phone app has its finger on the pulse



Instead of relying on the iPhone’s microphone or extra hardware to measure a user’s heart rate like most other heart rate apps, Antimodular Inc.’s Pulse Phone does so by using the iPhone’s built-in camera. When the user places their finger over the iPhone camera, the app detects the changes in the intensity of light passing through the finger, which changes as blood pulses through the veins.
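The underlying technique is known as photoplethysmography: average the brightness of each camera frame, strip out the slow baseline, and count the remaining pulses. A minimal sketch of the idea (an illustrative helper, not Antimodular's code):

```python
import math

def heart_rate_bpm(brightness, fps=30.0):
    """Estimate pulse rate from per-frame average brightness of the
    finger-covered camera image (photoplethysmography)."""
    n = len(brightness)
    half = int(fps) // 2  # roughly 1-second moving-average window
    # Subtract the slow baseline so only the pulse oscillation remains
    resid = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend = sum(brightness[lo:hi]) / (hi - lo)
        resid.append(brightness[i] - trend)
    # Each upward zero crossing of the residual is one heartbeat
    beats = sum(1 for a, b in zip(resid, resid[1:]) if a < 0 <= b)
    return beats / (n / fps) * 60.0

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 frames per second
signal = [math.sin(2 * math.pi * 1.2 * i / 30.0) for i in range(300)]
```

Real apps must also cope with camera noise and changing finger pressure, which is presumably where Lozano-Hemmer's image analysis earns its keep.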
The app’s use of the iPhone’s camera by way of an (at the time) unauthorized “hack” of Apple’s code for the camera meant that the app was rejected when it was initially submitted to Apple over a year and a half ago. However, the recent relaxing of restrictions means Pulse Phone has now been approved. The use of the camera means that users can measure their pulse in noisy environments where microphone-based apps may struggle.
Pulse Phone works on older iPhones and will handle a variety of ambient light conditions, although bright light yields the best results. However, with the use of the iPhone 4’s built-in flash, the app is even able to function in complete darkness – if measuring your pulse in complete darkness is your thing. Users can also save their readings and email them from within the app to keep track of their heart rate over time.
Although there are other camera-based heart rate apps available, Pulse Phone developer Rafael Lozano-Hemmer believes his app’s image analysis is superior. Even so, a warning that appears when the app loads cautions against using the app for medical purposes.
Pulse Phone is available on the iTunes App Store for US$1.99.

500 Hz remote eye tracker watches what you watch


SensoMotoric Instruments (SMI) of Germany has launched its latest gaze and eye tracking system called the RED500. Eye tracking is a key research technique for many types of scientific, marketing, and design studies. Billed as the world’s first high-performance and high-speed remote eye tracker, the RED500 features a “scientific grade” 500 Hz sampling rate, binocular tracking, and a portable all-in-one design.
Gaze and eye tracking studies measure and plot the movement of the human eye. In the neuroscience and psychological fields, eye tracking can be used to analyze how we process visual information and to help detect neuro-degenerative diseases or comprehension disorders such as dyslexia. Marketers and designers can use eye tracking to pre-test designs and actually measure what their audience sees and where they focus. Eye tracking has also been used in sports and professional training to improve performance, and there are even security applications.
Eye tracking follows the subject's gaze as they perform a visual task such as reading or interacting with a web site. Far from being a smooth path, a human’s gaze is made up of many quick, minute eye movements called saccades. Saccades are measured in degrees of movement per second, and can reach speeds of up to 1000 degrees per second. The RED500’s 500 Hz sampling rate allows it to capture more saccades, and provide a high resolution measurement of eye movement.
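At 500 samples per second, saccades can be pulled out of a gaze trace with a simple velocity threshold – the standard I-VT classification, sketched below (illustrative only, not SMI's algorithm; the 100 deg/s threshold is a commonly used value, not one from the RED500):

```python
def detect_saccades(angles_deg, rate_hz=500.0, threshold=100.0):
    """Velocity-threshold (I-VT) classification: contiguous runs of
    samples whose angular velocity exceeds the threshold (deg/s)
    are reported as saccades, as (start, end) sample indices."""
    dt = 1.0 / rate_hz
    vel = [abs(b - a) / dt for a, b in zip(angles_deg, angles_deg[1:])]
    saccades, start = [], None
    for i, v in enumerate(vel):
        if v > threshold and start is None:
            start = i                       # saccade onset
        elif v <= threshold and start is not None:
            saccades.append((start, i))     # saccade offset
            start = None
    if start is not None:
        saccades.append((start, len(vel)))
    return saccades

# Fixation, a 250 deg/s sweep from 0 to 10 degrees, then fixation again
trace = [0.0] * 100 + [i * 0.5 for i in range(1, 21)] + [10.0] * 100
```

The higher the sampling rate, the shorter the saccades this kind of classifier can resolve, which is why a 500 Hz tracker captures movements a 60 Hz one misses.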

Like previous SMI RED products, the RED500 looks something like a Microsoft Kinect game unit. But instead of connecting to your Xbox, the RED500 can be integrated with a workstation, most EEG systems for medical studies, plus MATLAB or other scientific hardware and software. The RED500 is "remote" in that it does not need to be attached to the subject to measure their eye movements.
The RED500 can be used with computer monitors, TVs, and projector systems. SMI also offers an all-in one version built into a 22-inch display. SMI also offers a free RED API to allow for integration with custom applications. Building on earlier RED models, SMI designed the RED500 with new, faster hardware and algorithms to achieve its higher level of performance.
The RED500 measures eye data such as the eye’s position on a surface and changes in the size of the pupil. Eye color, contacts, and glasses do not affect the measurements. Data can be exported in a variety of formats including SPSS and Excel. Software such as SMI’s Experiment Suite 360 and BeGaze allow you to manage your experiments, identify areas of interest, track the scan path, or visualize a “heat map” showing the most viewed areas.
No information yet on pricing or availability for the RED500, but more product details are available at the SMI web site.

Saturday, November 13, 2010

Electronic explosive-detecting sensor out-sniffs sniffer dogs


The recent Yemeni bomb threat has only highlighted the need for quick, accurate ways of detecting explosives. With their excellent sense of smell and the ability to discern individual scents, even when they’re combined or masked by other odors, this task is usually given to man’s best friend. But training these animals can be expensive and good sniffer dogs can be hard to find. Scientists have now developed an electronic sensor they say is more sensitive and more reliable at detecting explosives than any sniffer dog.
The new sensor, developed by scientists at Tel Aviv University, is able to detect multiple kinds of explosives and is especially effective at detecting TNT – an explosive whose detection currently requires equipment that is costly, bulky, slow to deliver results and dependent on expert analysis.
"There is a need for a small, inexpensive, handheld instrument capable of detecting explosives quickly, reliably and efficiently," says lead researcher Prof. Fernando Patolsky of Tel Aviv University's Raymond and Beverly Sackler School of Chemistry.
The device is made from an array of silicon nanowires, coated with a compound that binds to explosives to form a nanotransistor. To enhance its sensitivity, each device carries 200 individual sensors that work together to detect different kinds of explosives with what the researchers say is an unprecedented degree of reliability, efficiency and speed.
In addition to being portable, the device is also capable of detecting explosives at a distance. This means it can be mounted on a wall, with no need to bring it into contact with the item being checked. Also, unlike other explosives sensors, the device provides a definitive identification of the explosive that it has detected. Its developers say that, to date, the device has not produced a single detection error.

Friday, November 12, 2010

The Kno digital textbook


Remember the Kno digital textbook for students? After much development and student input, the devices are now ready for shipping. In addition to the 14.1-inch dual-screen version, the developers have also created a single screen edition that offers similar functionality to its bigger cousin but in a now familiar tablet format. Students can now also browse through an online textbook store, which is to include tens of thousands of titles from top publishers.
Printed textbooks can be a heavy and cumbersome affair which can also make the wallet feel much, much lighter. The Kno digital textbook was developed to provide a relatively lightweight solution to carrying volumes of information around at a fraction of the cost. The development process has involved the targeted users - students - at every stage and after a round of beta testing, the device has now been priced and an availability window announced.
It was originally developed as a huge 14.1-inch dual touchscreen device where each display was hinged down one side so that they folded in on each other, just like a printed book. However, there are now two options on offer. The dual-screen option has been joined by a single-display, 14.1-inch tablet model. As previously announced, the Kno benefits from an LED-backlit 1440 x 900 WXGA multi-touch screen, a 1GHz Tegra T200 dual-core processor and wireless connectivity courtesy of 802.11b/g Wi-Fi and Bluetooth 2.0 with EDR.
Both versions will now be offered in 16GB and 32GB storage capacities and should be capable of up to six hours of "normal campus use" before the Li-polymer battery pack needs to be charged. Although not immediately available, full 1080p video playback will be available shortly after shipment via a software update.
Students can browse through the company's textbook store where popular titles and supplementary content will number in the tens of thousands. Reference material from publishers like McGraw Hill, Macmillan, Freeman & Worth, Random House and a large number of the University Presses will be on offer, which will "typically cost between 30 and 50 percent less than physical textbooks."
The company – which has calculated that "the Kno actually pays for itself in three terms" – is now accepting a limited number of pre-orders for an initial end-of-year shipment date. The single screen Kno is priced at US$599 for the 16GB flavor and US$699 for the 32GB option. The 16GB dual-screen version will cost US$899, with the 32GB model costing US$999.



Four ways to harvest solar heat from roads

Walk barefoot on an asphalt road and you'll soon realize how good the substance is at storing solar heat – the heat-storing qualities of roadways has even been put forward as an explanation as to why cities tend to be warmer than surrounding rural areas. Not content to see all that heat going to waste, researchers from the University of Rhode Island (URI) want to put it to use in a system that harvests solar heat from the road to melt ice, heat buildings, or to create electricity.
“We have mile after mile of asphalt pavement around the country, and in the summer it absorbs a great deal of heat, warming the roads up to 140 degrees or more,” said Prof. K. Wayne Lee, leader of the URI project. “If we can harvest that heat, we can use it for our daily use, save on fossil fuels, and reduce global warming.”



The research team has four main ideas for how that harvesting could be performed.

Cells on barriers

A relatively simple method of harnessing the sunlight shining on the road, if not the heat stored in it, is to wrap flexible photovoltaic cells around the top of the Jersey barriers on divided highways (Jersey barriers are those long rectangular concrete slabs). These cells could also be embedded in the asphalt between the barriers and the adjacent rumble strips. The electricity generated by the cells could be used to power streetlights and illuminate road signs.

Water pipes in the road

Another approach would be to install water-containing pipes within the asphalt. As the road heated up, so would the water, which could then be piped underneath a bridge deck to reduce icing, used to heat or provide hot water for nearby buildings, or even turned to steam at a power plant. URI grad student Andrew Correia has created a prototype for such a system, which he hopes will demonstrate how it could actually work in the real world.

Thermo-electricity

A small amount of electricity can be created by connecting two semiconductors to form a circuit linking a hot and a cold area. If those semiconductors were embedded in the road at different depths, or in sunny and shady areas, then the difference in temperature between them could conceivably be used to generate electricity. If enough of them were used together, their electrical output could be used for purposes such as defrosting roadways. URI’s Prof. Sze Yang proposes that instead of traditional semiconductors, inexpensive plastic sheet organic polymeric semiconductors could be used.
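The output of such a couple follows the Seebeck relation V = S·ΔT. A back-of-the-envelope sketch of what one couple could deliver (the 200 µV/K coefficient and 1 Ω resistances are typical illustrative values, not figures from the URI team):

```python
def thermoelectric_output(seebeck_uv_per_k, delta_t_k, r_load, r_internal):
    """Open-circuit Seebeck voltage V = S * dT, plus the power
    actually delivered into a resistive load."""
    v_open = seebeck_uv_per_k * 1e-6 * delta_t_k   # volts
    current = v_open / (r_load + r_internal)       # amps
    return v_open, current ** 2 * r_load           # (volts, watts)

# One couple, 200 uV/K, a 30 K difference between sunny and shaded
# asphalt, matched 1-ohm load and internal resistance
v, p = thermoelectric_output(200.0, 30.0, 1.0, 1.0)
```

The microwatt-scale result per couple makes clear why the researchers talk about using many of them together, and why cheap polymeric semiconductors matter for making the economics work.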

Electronic block roadways

In what the researchers admit would be the most costly option, asphalt roads could be replaced with roads made from clear-yet-durable electronic blocks. These would contain photovoltaic cells, LED lights and sensors, and could generate electricity, display changeable lane markings, and display illuminated warning messages. Idaho’s Solar Roadways has been working on just such a system, although according to Lee, a driveway made with the blocks cost US$100,000 to create. He believes that such technology may first show up in corporate parking lots, before decreased costs allow it to be used for public roads.

Wednesday, November 3, 2010

Agloves give full 10-finger gloved touchscreen functionality


With capacitive screens the technology of choice on the majority of touchscreen devices hitting the market, people have been coming up with all kinds of interesting ways to interact with their devices when the winter chill sets in and gloves become a necessity. Many South Koreans apparently turned to using sausages as a stylus but if you’d prefer not to be hassled by dogs as you type a text there are less meat product-based solutions, such as the North Face Etip gloves. Now there’s another glove-based solution in the form of Agloves, which provide even greater touchscreen friendly surface area for your hands.
Whereas the Etip gloves feature a conductive material known as X-Static fabric on the tips of the thumb and index finger, the Agloves are made with silver coated nylon to make the entire glove conductive. With silver boasting particularly high electrical conductivity it allows the Agloves to better transfer the skin’s bioelectrical charge through the gloves to the screen.
The Boulder-based company behind the Agloves says that since the whole glove is knitted with its unique silver yarn they are able to work even if the wearer’s fingertips lose conductivity, when they are too cold or dry for example. In such cases the rest of the hand is able to pick up the slack and allow the bioelectricity to travel from other areas on the hand, through the glove to the fingertips to maintain a connection.
Also, because the Agloves provide full 10-finger functionality, users are able to type using full QWERTY onscreen keyboards like that found on the iPad, or do four-finger swipes. Oh, and they should also keep your hands warm.
The Agloves are available now for US$17.99 a pair.

Wednesday, October 27, 2010


NASA’s Solar Shield to mitigate damage to power grid from severe solar storms

The solar storms that cause the stunning aurora borealis and aurora australis (or northern and southern polar lights) also have the potential to knock out telecommunications equipment and navigational systems and cause blackouts of electrical grids. With the frequency of the sun’s flares following an 11-year cycle of solar activity and the next solar maximum expected around 2013, scientists are bracing for an overdue, once-in-100 year event that could cause widespread power blackouts and cripple electricity grids around the world. It sounds like an insurmountable problem but a new NASA project called “Solar Shield” is working to develop a forecasting system that can mitigate the impacts of such events and keep the electrons flowing.

In 1859 the most powerful solar storm in recorded history, known as the Solar Superstorm, or the Carrington Event, caused telegraph systems all over Europe and North America to fail. Today, the effects of severe solar storms are much more noticeable with the total length of high-voltage power lines crisscrossing North America increasing nearly tenfold since the 1950s. This has turned power grids into giant antennas for the geomagnetically induced currents (GICs) – the ground level manifestation of space weather that can overload circuits, trip breakers, and in extreme cases melt the windings of heavy-duty transformers, causing permanent damage.

Just such an event occurred in Quebec on March 13, 1989, when a geomagnetic storm much less severe than the Carrington Event knocked out power across the entire province for more than nine hours. In addition to Quebec, the storm damaged transformers in New Jersey and Great Britain and caused more than 200 power anomalies across the continental U.S.

Although many utility companies have taken steps to reinforce their grids, with demand for power growing faster than the grids themselves, modern networks are stressed to the limit and vulnerable to the effects of a severe geomagnetic storm. With long-lasting, large-scale blackouts a real possibility due to widespread transformer damage, Solar Shield project leader Antti Pulkkinen believes the project can “zero in on specific transformers and predict which of them are going to be hit hardest by a severe space weather event.”
How it works

When a massive burst of solar wind, known as a coronal mass ejection (CME), is detected rising from the sun’s surface and headed for Earth, images from SOHO and NASA's twin STEREO spacecraft would allow a 3D model of the CME to be created and predict when it will arrive. While the CME is making its way to Earth – a trip that usually takes 24 to 48 hours (although the Carrington Event CME took just 18 hours as an earlier CME had cleared the way) – the Solar Shield team would prepare to calculate ground currents.

About 30 minutes before impact the CME would sweep past ACE, a spacecraft stationed 1.5 million km upstream from Earth. Sensors aboard ACE would make in situ measurements of the CME’s speed, density and magnetic field and transmit this data to the Solar Shield team at the Community Coordinated Modeling Center (CCMC) at NASA's Goddard Space Flight Center.

"We quickly feed the data into CCMC computers," says Pulkkinen. "Our models predict fields and currents in Earth's upper atmosphere and propagate these currents down to the ground." With less than 30 minutes to go, Solar Shield can issue an alert to utilities with detailed information about GICs.
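That roughly 30-minute window falls straight out of the geometry: ACE sits about 1.5 million km upstream, so the lead time is simply that distance divided by the measured CME speed (a hypothetical helper, with an assumed 800 km/s speed for a fairly fast CME):

```python
def warning_minutes(cme_speed_km_s, ace_distance_km=1.5e6):
    """Minutes of warning between a CME sweeping past the ACE
    spacecraft and its arrival at Earth."""
    return ace_distance_km / cme_speed_km_s / 60.0

# An 800 km/s CME gives roughly half an hour of warning
lead = warning_minutes(800.0)
```

Faster, more dangerous CMEs compress this window further, which is why the CCMC models have to run in near real time.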

Solar Shield is still only experimental and hasn’t yet been field-tested during a severe geomagnetic storm. A few utility companies have installed current monitors at key locations in the power grid to help the team check their predictions but, with more data allowing the team to more quickly test and improve the system, they are hoping more power companies join the effort. A few good solar storms would help too with the sun being mostly quiet during the past year – something like the calm before the storm.

Wednesday, October 20, 2010

App allows users to view electrocardiograms on smartphones

Gone are the days when we simply used our mobile phones for calling people – now we can conduct our own ECGs. We’ve already seen iPhone and Android applications that can create ultrasound images and that measure air pollution. Now tech companies IMEC and the Holst Center, together with TASS software professionals, have released a new heart rate monitoring application.
The IMEC/Holst Center application is designed for Android and it uses small monitoring sensors which are placed on the user’s body. The sensors are connected to a necklace that will wirelessly transmit the heart rate data to your Android phone.
Within minutes you will receive your ECG (electrocardiogram) heart rate monitoring report, that can easily be stored or emailed to your doctor. The sensors are unobtrusive and can remain on the user’s body all day if constant monitoring is required. The application would be suitable for athletes, patients wishing to be monitored from home, and heart disease sufferers.
The small Android interface uses low power and is based on the Linux kernel, and is thus easily compatible with other Linux-based devices, such as PDAs or laptops. It also has the ability to integrate with all the features available on Google’s operating system, such as SMS, e-mail and data transmission over the Internet.

Friday, October 15, 2010

Father and son launch video camera into outer space

 It’s an inspiring story that reminds you how the wonders of scientific exploration aren’t just limited to research institutions with big budgets... in August of this year, Luke Geissbuhler and his seven year-old son Max attached an HD video camera to a weather balloon and set it loose. They proceeded to obtain footage of the blackness of outer space, 19 miles (30 km) above the surface of the earth. Needless to say, there was a little more to it than just tying a piece of string around a camcorder.

Luke and Max created a miniature space capsule for their Brooklyn Space Program experiment, using a food take-out container. It contained the camera (with a peep hole for its lens), hand warmers to keep its battery warm, a “please return if you find this” note, and an iPhone, so that they could use its GPS to locate the capsule once it landed. The whole thing was coated in foam, to absorb the energy of a high-speed landing, and attached to a parachute.
The pair launched the balloon from Newburgh, New York, near their home in Brooklyn. Over the next 72 minutes, it proceeded to climb to over 100,000 feet (30,480 meters), encountering 100 mph (161 km/h) winds and temperatures of -60 F (-51 C) along the way. Due to the lack of pressure at such high altitudes, the balloon eventually expanded beyond its capacity and burst, sending the capsule on a 150 mph (241 km/h) parachute-assisted fall back to earth.
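Those figures imply a climb rate in line with typical weather balloons, as a quick check shows (a hypothetical helper, just rearranging the numbers in the story):

```python
def ascent_rate_m_s(altitude_ft, minutes):
    """Average climb rate implied by peak altitude and time to reach it."""
    return altitude_ft * 0.3048 / (minutes * 60.0)

# 100,000 ft in 72 minutes works out to roughly 7 m/s
rate = ascent_rate_m_s(100_000, 72)
```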
Amazingly, it landed just 30 miles (48 km) from its lift-off point, in the middle of the night. Using its external LED lamp to locate it visually, the Geissbuhlers found the capsule hanging from its parachute in a tree.
The project involved eight months of research and testing, but as you can see in the video below, the results were well worth the effort.

Saturday, October 2, 2010

Space tourism takes another leap forward with plans for commercial space station/hotel



Out of financial necessity, Russia was one of the innovators when it came to the burgeoning field of space tourism, with American businessman and former JPL scientist Dennis Tito becoming the first space tourist in mid-2001 when he spent nearly eight days in orbit on the Russian Soyuz TM-32, the International Space Station (ISS), and Soyuz TM-31. Following Russia’s halting of orbital space tourism earlier this year due to an increase in the ISS crew size, private Russian company, Orbital Technologies, has now announced plans to build, launch and operate what could be the world’s first commercial space station (CSS). It envisions the station will be used by professional crews and corporate researchers to conduct scientific experiments, as well as private citizens looking for an out of this world holiday destination.


To be built by Russian spacecraft manufacturer RSC Energia, the CSS would be man-tended, with a crew capability of up to seven people, with the capability to expand the crew size over time. It would be serviced by the Russian Soyuz and Progress spacecraft, as well as other human and cargo spacecraft that are expected to be in operation in the next decade. Orbital Technologies says such adaptability will be possible through the station’s unified docking system that will be compatible with any commercial crew and cargo capability developed in the U.S., Europe and China.
The CSS will be placed within 100 km (62 miles) of the ISS in order to minimize the energy required to transfer crew and cargo between the two stations and maximize the opportunities for commerce and cooperation. Its proximity will also allow the CSS to serve as an emergency refuge for the ISS crew if necessary.
"There is a possibility for the ISS crew to leave their station for several days. For example, if a required maintenance procedure or a real emergency were to occur, without the return of the ISS crew to Earth, habitants could use the CSS as a safe haven,” said Alexey Krasnov, Head of Manned Spaceflight Department, Federal Space Agency of the Russian Federation.
The first module of the CSS will measure just 20 cubic meters (706 cubic feet) and will comprise four cabins. Despite the tight fit, the planned module will offer more comforts than the ISS and will feature large portholes providing a view of Earth that would be hard to beat.
Aside from being aimed at well-to-do individuals and people working for private companies wanting to conduct research in space, the CSS is also designed to serve as a staging outpost for human space flight missions beyond low earth orbit.
Orbital Technologies isn’t the first company to announce plans for a commercial space station designed to serve as a hotel. In 2007, Galactic Suite Design announced its plans to develop an “orbital hotel chain” starting with a luxury space resort that was due for completion in 2012. Although the company has already taken bookings, no hardware has yet been built or tested and critics have voiced skepticism about the viability of the project.
Although the CSS is still at the design and development stage, Orbital Technologies has already signed cooperation agreements with RSC Energia and the Russian Federal Space Agency (Roscosmos) for the project. It also claims that funding for the development and deployment of the CSS is already in place and is therefore proceeding “on an expeditious schedule for the initiation of station operations” and plans to launch the first module of the CSS in 2015-2016. It hasn’t announced any potential pricing but if you’re interested in booking a room you might want to start saving those millions now.

First “potentially habitable” exoplanet discovered

If you’re looking to get away from it all then Gliese 581g might just fit the bill. But be prepared to pack enough for a trip that, even on a rocket traveling at 30,000 km per second (18,640 miles per second), would take 200 years. Gliese 581g is the first exoplanet discovered that sits in an area where water could exist on the planet’s surface. If confirmed, this would make it the most Earth-like exoplanet yet discovered and the first strong case for a “potentially habitable” one.
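The 200-year figure checks out: 30,000 km/s is about a tenth of the speed of light, so the ship covers roughly a tenth of a light year per year. As a quick arithmetic check:

```python
def trip_years(distance_ly, speed_km_s):
    """Travel time in years: one light year per year at light speed,
    scaled by how far below light speed the ship is."""
    c_km_s = 299_792.458  # speed of light
    return distance_ly * c_km_s / speed_km_s

# 20 light years at 30,000 km/s is just under two centuries
years = trip_years(20.0, 30_000.0)
```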
Gliese 581g is located 20 light years from Earth, orbiting the nearby (in astronomical terms) red dwarf star Gliese 581. It, along with the discovery of another new planet, brings the total number of known planets around this star to six – the most yet discovered in a planetary system outside our own. Like our solar system, the planets around Gliese 581 have a nearly-circular orbit.

Full of potential

But perhaps hold off on packing your bags just yet. To astronomers, a “potentially habitable” planet isn’t necessarily one where humans would thrive. Rather, it refers to a planet that could sustain life. Actual habitability depends on many factors, but having liquid water and an atmosphere are among the most important.
With a mass three to four times that of Earth, Gliese 581g orbits its star in just under 37 days. Its mass indicates that it is probably a rocky planet with a definite surface and enough gravity to hold onto an atmosphere.
However, the planet is tidally locked to the star, meaning that one side is always facing the star, while the other side is in perpetual darkness. This means that the most habitable zone on the planet’s surface would be the line between shadow and light known as “the terminator”.

A long time coming

Gliese 581g’s discovery by a team of planet hunters from the University of California (UC) Santa Cruz and the Carnegie Institution of Washington was the result of more than a decade of observations using the W. M. Keck Observatory in Hawaii, one of the world’s largest optical telescopes.
Using the HIRES spectrometer on the Keck I Telescope, the team was able to precisely measure the star’s motion along the line of sight from Earth, and detect the new planet using the radial velocity method. This is when the gravitational tug of an orbiting planet causes periodic changes in the radial velocity of the host star.
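The idea can be sketched as a least-squares period search over the velocity time series: at each trial period, project the measurements onto a sine and cosine and keep the period with the largest recovered amplitude. This is a toy version under assumed numbers (a 3 m/s wobble near Gliese 581g's roughly 37-day orbit); real pipelines use Lomb-Scargle periodograms and full Keplerian fits:

```python
import math

def best_period(times, rvs, candidate_periods):
    """Return the trial period whose sine/cosine projection recovers
    the largest radial-velocity amplitude, plus that amplitude."""
    n = len(times)
    best_p, best_amp = None, -1.0
    for p in candidate_periods:
        a = sum(v * math.sin(2 * math.pi * t / p) for t, v in zip(times, rvs)) * 2 / n
        b = sum(v * math.cos(2 * math.pi * t / p) for t, v in zip(times, rvs)) * 2 / n
        amp = math.hypot(a, b)  # amplitude of the best-fit sinusoid
        if amp > best_amp:
            best_p, best_amp = p, amp
    return best_p, best_amp

# Fake star: a 3 m/s wobble with a 37-day period, sampled daily
times = list(range(370))
rvs = [3.0 * math.sin(2 * math.pi * t / 37.0) for t in times]
```

Meter-per-second wobbles against kilometer-per-second stellar motions are what make a decade of HIRES-grade precision measurements necessary.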
"Our findings offer a very compelling case for a potentially habitable planet," said Steven Vogt, professor of astronomy and astrophysics at UC Santa Cruz. "The fact that we were able to detect this planet so quickly and so nearby tells us that planets like this must be really common."
Two previously detected planets orbiting Gliese 581 lie at the edges of the habitable zone. One, Gliese 581c, is on the hot side and the other, Gliese 581d, is on the cold side. While some astronomers still think that planet d may be habitable if it has a thick atmosphere with a strong greenhouse effect to warm it up, others are skeptical. The newly discovered planet g, however, lies right in the middle of the habitable zone.
Sponsored by NASA and the National Science Foundation (NSF), the team’s new findings are reported in a paper published in the Astrophysical Journal.

Wednesday, September 22, 2010

'Intelligent clothing' could stop boats when fishermen fall overboard

Working as a commercial fisherman is consistently ranked as one of the world’s most dangerous jobs. There are numerous ways in which they can end up in the water, with their shipmates (if they even have any) not noticing until it’s too late. That, or their boat can simply sink. In any case, fishermen need all the help they can get when it comes to safety, so a 14-group research consortium is developing “intelligent clothing” for them to wear at sea.
The three-year, 4 million Euro (approx. US$5,225,000) Safe@Sea project is being coordinated by Norway’s SINTEF research group, with Norwegian textile manufacturer Helly Hansen Pro as project manager. Other groups taking part in the project come from Denmark, Finland, Sweden, Belgium, Spain, Italy and the UK.
European fishermen have already expressed their needs to Safe@Sea, and the group is now working on addressing them. One of the most noteworthy features of the workwear is a proposed built-in wireless “dead man’s handle.” This will detect when its wearer has fallen overboard, and automatically kill the boat’s engine and activate a locator beacon – an essential feature for fishermen who work alone. Such devices are already available, although they have to be manually attached to clothing, so they could be forgotten or just not used.
Once in the water, the clothing could double as a flotation device. This could either be through solid slabs of buoyant materials, or via “lungs” that automatically inflate when immersed.
Of course, it will all count for nothing if nobody wants to wear the stuff. To that end, the researchers are also working on making it impervious to staining from fish blood and guts, while at the same time trying to keep it soft and breathable. They are also looking into the possibility of self-repairing material that glues up small rips in itself, to make sure it remains watertight.
At this point it’s hard to say how much of the proposed technology will make it into the final product, but the research itself is still valuable. “If we don’t manage to develop such textiles in the course of this three-year project, we can at least hope to create a basis for other materials that will be of value in the future,” said SINTEF’s project coordinator Hilde Færevik.

Mobile phones charged by the power of speech

In the search for alternative energy sources there's one form of energy you don't hear much about, which is ironic because I'm referring to sound energy. Sound energy is the energy produced by sound vibrations as they travel through a specific medium. Speakers use electricity to generate sound waves and now scientists from Korea have used zinc oxide, the main ingredient of calamine lotion, to do the reverse – convert sound waves into electricity. They hope ultimately the technology could be used to convert ambient noise to power a mobile phone or generate energy for the national grid from rush hour traffic.
Piezoelectrics are materials capable of turning mechanical energy into electricity, and can be substances as simple as cane sugar, bone, or quartz. Much research in this field has focused on transforming the movement of a person running, or even the impact of a bullet, into a small electrical current. Although those more exotic applications have yet to reach consumer products, scientists have been using piezoelectric materials in environmental sensors and speakers for years.
The Korean researchers were interested in reversing this process however. "Just as speakers transform electric signals into sound, the opposite process – of turning sound into a source of electrical power – is possible," said Young Jun Park and Sang-Woo Kim, authors of the article in journal Advanced Materials.
Because piezoelectrics generate an electrical charge under stress, the researchers fabricated the zinc oxide into a field of nanowires sandwiched between two electrodes; passing sound waves flex the wires, producing a charge. When the sandwich was subjected to sound waves at 100 decibels, it generated an electrical output of about 50 millivolts.


A mobile phone operates at a few volts, and a normal conversation is conducted at about 60-70 decibels, so the technology clearly falls some way short of being genuinely useful yet – but the researchers are optimistic that, given time, they can improve the electrical yield. They hope future applications could include charging a mobile phone from conversations, or sound-insulating walls near highways that feed the national grid with energy generated from rush-hour traffic noise. However, with the increasing popularity of near-silent electric vehicles, the window of opportunity for that particular application may be closing.

Monday, September 20, 2010

More realistic pet robots that recognize and respond to human emotions

Sony’s Aibo may be discontinued, but robotic pets of all shapes and sizes continue to stake a claim in the hearts of people around the world. Despite the apparent intelligence of some of these robot pets, their behavior and actions are usually nothing more than pre-programmed responses to stimuli – being patted in a particular location or responding to a voice command, for example. Real flesh and blood pets are much more complex in this regard, even discerning and responding to a person’s emotional state. Robotic pets could be headed in that direction, with researchers in Taiwan turning to neural networks to help them break the cycle of repetitive behavior in robot toys and endow them with almost emotional responses to interactions.
Building fully autonomous artificial creatures with intelligence akin to humans is a very long-term goal of robot design and computer science. Along the way, home entertainment and utility devices have been developed, from "Tamagotchi" digital pets to domestic toy robots such as Aibo the robotic dog, and even the Roomba robotic vacuum cleaner. At the same time, popular science fiction has raised consumer expectations.
In an effort to provide entertaining and realistic gadgets that respond to human interaction in ever more nuanced ways – mimicking the behavior of real pets or even people – researchers in Taiwan are now looking at a new design paradigm that could see the development of a robot vision module able, one day, to recognize human facial expressions and respond appropriately.
"With current technologies in computing and electronics and knowledge in ethology, neuroscience and cognition, it is now possible to create embodied prototypes of artificial living toys acting in the physical world," Wei-Po Lee and colleagues at the National Sun Yat-sen University (NSYSU), Kaohsiung, explain.
There are three major issues to be considered in robot design, the team explains. The first is to construct an appropriate control architecture by which the robot can behave coherently. The second is to develop natural ways for the robot to interact with a person. The third is to embed emotional responses and behavior into the robot's computer.
The researchers hope to address all three issues by adopting a behavior-based architecture built on a neural network, which could allow the owner of a robot pet to reconfigure the device to "learn", or evolve, new behavior while ensuring that the robot pet functions properly in real time.
The team has evaluated their framework by building robot controllers to achieve various tasks successfully. They, and other research teams across the globe, are currently working on vision modules for robots. The technique is not yet fully mature, but ultimately they hope to be able to build a robot pet that could recognize its owner's facial expressions and perhaps respond accordingly. Such a development has major implications for a range of interactive devices, computers and functional robots of the future.
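As an illustration of the general idea – and only that; this is not the NSYSU team's actual architecture, and the sensor and behavior names below are invented – a behavior-based controller can be as small as a single-layer network mapping sensor readings to behavior activations. "Learning" then amounts to adjusting the weights, which is exactly the part an owner-driven reconfiguration scheme would touch:

```python
import math
import random

def sigmoid(x):
    """Squash a weighted sum into a (0, 1) activation."""
    return 1.0 / (1.0 + math.exp(-x))

class TinyController:
    """Minimal single-layer controller: sensors in, behavior activations out.

    Illustrative sketch only; a real robot-pet controller would be larger
    and trained/evolved rather than randomly initialized.
    """
    def __init__(self, n_sensors, n_behaviors, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-1, 1) for _ in range(n_sensors)]
                  for _ in range(n_behaviors)]
        self.b = [rng.uniform(-1, 1) for _ in range(n_behaviors)]

    def act(self, sensors):
        """Return one activation per behavior; the highest one wins."""
        return [sigmoid(sum(wi * si for wi, si in zip(row, sensors)) + bi)
                for row, bi in zip(self.w, self.b)]

# Invented example: three sensors (touch, sound level, light) and
# three candidate behaviors.
behaviors = ["wag_tail", "bark", "sleep"]
pet = TinyController(n_sensors=3, n_behaviors=3)
activations = pet.act([1.0, 0.2, 0.0])  # patted, quiet room, dark
print(behaviors[activations.index(max(activations))])
```

The appeal of this structure for the "three issues" above is that one mechanism covers all of them: coherent behavior falls out of the winner-take-all selection, interaction feeds in through the sensor vector, and emotional nuance lives in the learned weights.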