Friday, October 28, 2011

Apple patent suggests gestures used to control devices from a distance


While there are video editing tools for iOS devices, the fact of the matter is that it's a bit hard to edit videos on such small screens, and touching the screen can sometimes cause unintentional edits, which quickly leads to frustration. A new Apple patent suggests a way for iOS users to edit videos on their devices through gestures, without having to touch the device itself.
These gestures could be picked up via infrared sensors, optical sensors or other methods, and the patent even suggests that they could be used in conjunction with, or as a replacement for, traditional touchscreen gestures, meaning the technique could extend beyond video editing.
For instance, an iPhone or iPad could be used as a "control device" for a remote camera: an iPhone records a video, which is then sent wirelessly to an iPad where the user can view the footage in real time and make adjustments accordingly. In a way, we could liken this method to Microsoft's Kinect for the Xbox, whereby gestures are used to control the game or make selections, but let's hope that Apple won't be stepping on anyone's toes here.

LED by LITE aims to brighten up night-time cycling



The arrival of high-intensity LEDs has certainly made a huge difference to the brightness of bicycle headlights. Some people, however, are now looking at using the bulbs not just as a means of lighting the cyclist's way, but of making their bicycles more visible to motorists. A couple of examples include the Aura and Revolights systems, both of which incorporate LEDs into a bike's wheel rims. Another system, which looks like it might be considerably less involved yet still effective, is called LED by LITE.







Developed by Utah father and son team Rick and Brandon Smith, LED by LITE consists of four strips of silicone-encased LED bulbs. Two of those strips (containing white bulbs) mount on the bicycle's front forks, while the other two (with red bulbs) go on the seat stays. All four strips are waterproof, can be removed and reinstalled by the cyclist in a matter of seconds, and receive their power from a rechargeable 12-volt lithium-ion battery pack.
The LEDs are bright enough to both light the road ahead of the rider, and to make the bicycle stand out to motorists.
What makes the system particularly interesting, however, is its Dashboard. Mounted on the handlebars, this wireless unit features left- and right-turn buttons: press the left-turn button and the left front and rear light strips flash on and off; press the right, and ... you get the idea. It also allows users to switch the running lights between continuous and modulated (flashing) modes.
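For the curious, the turn-signal behavior described above can be sketched as a tiny state machine. This is purely illustrative Python - the actual Dashboard firmware hasn't been published, and the strip names, tick rate and cancel-on-second-press behavior are all assumptions:

```python
# Illustrative model of the Dashboard's signaling logic (hypothetical -
# the real firmware is not public). Strips are identified by name, and
# each tick() advances the flash pattern by one step.

class Dashboard:
    def __init__(self):
        self.mode = "continuous"   # running lights: "continuous" or "modulated"
        self.signal = None         # None, "left" or "right"
        self.ticks = 0

    def press(self, button):
        # Assumed behavior: pressing the active turn button again cancels it.
        self.signal = None if self.signal == button else button

    def tick(self):
        """Return the set of strips lit during this step."""
        self.ticks += 1
        blink_on = self.ticks % 2 == 0
        lit = set()
        for side in ("left", "right"):
            strips = {side + "_front", side + "_rear"}
            if self.signal == side:
                if blink_on:                      # signaling side flashes
                    lit |= strips
            elif self.mode == "continuous" or blink_on:
                lit |= strips                     # running lights
        return lit

dash = Dashboard()
dash.press("left")
dash.tick()  # first tick: left strips dark (flash phase), right strips steady
```

With mode set to "modulated", the non-signaling strips would flash in step with the blink phase instead of staying lit.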
Rick and Brandon are currently raising funds from prospective customers to commercially produce LED by LITE. A system with a total of 36 bulbs is planned to retail for US$175, although a pledge of US$125 will get you one once they're ready to go. Versions with 24 and 48 bulbs are also available.

Thursday, October 27, 2011

Nokia Lumia 800 Preview


I’ve had some play time with the Nokia Lumia 800, and I was very curious about the new design (which is very close to the Nokia N9's, with minor differences). Upon getting the Lumia 800 in your hand, the first (and most important) thing you notice is that new design. I did not have the opportunity to play with the N9, so this was my first contact with Nokia's new design, based on a seamless, injection-molded body into which all the components are added, with the display closing on top of everything else. The Lumia 800's body is amazingly smooth and -surprisingly- doesn't feel like "plastic", although it is. The Lumia 800 has more of a "premium feel" than one could ever imagine by just looking at it. Nokia really needs to put this in people's hands.

Polycarbonate body

At 125g or so, it is noticeably lighter than the iPhone 4/4S, but a bit heavier than the Samsung Galaxy S2. I have used both of those phones, and the best way to describe it is that the Nokia Lumia 800 feels like the iPhone 4S in shape and volume, but without any of the rougher edges typical of the iPhone 4S design. Also, I'm not sure how the 800's polycarbonate skin will hold up to scratches, but I think that it would be less prone to small damage and drops (!) than the iPhone 4S. The Galaxy S2 feels bigger -because it is- and some users may find it more "comfy" to use, as the 3.7″ display of the Lumia 800 is closer (in size) to the iPhone 4S' 3.5″ display. Personally, I wouldn't mind having a 4.3″ display version :)

Display (beautiful!)

Speaking of the display, the AMOLED panel and the curved Gorilla Glass do wonders. The display looks great and feels great to the touch. I think that Nokia could have gotten away with using a standard flat glass, but I'm glad that they chose to feature a curved one. In terms of image quality, the AMOLED does very well too: the contrast is amazing (isn't that always the case with AMOLED?) and the colors seemed a bit closer to reality than on Samsung phones, but I will need to check this again with a side-by-side comparison - we were not in the best setting for judging color.

Nokia-specific software

I won't go over all the new features of Windows Phone 7.5 -aka Mango- here, but Mango is a great WP7 update, and I was particularly interested in Nokia Drive (a free app), a personal navigation (PN) application that is unique in the Windows Phone world. It is a real personal navigation app that provides clear maps in which all the street names are visible most of the time (which is not the case with Bing Maps, or with much navigation software). Plus, map scrolling and the refresh rate are better than in Bing Maps (or Google Maps). The maps are completely vectorized (no bitmaps) and stored locally, which is great for map performance, and world travelers who don't always have access to 3G data will appreciate it even more.
When you're not driving, you can choose between Bing Maps and Nokia Maps (the latter was not actually on the device, but it should come very soon).
Nokia also announced a free and unlimited radio service that would not require an account or log-in. The service would also let users stream or download music. Unfortunately, Nokia is still negotiating the music rights for the USA, and each country will require its own set of deals.

System performance

Finally, you may have noticed that Nokia is using the Qualcomm Snapdragon S2 processor, which is a single-core product. It is obvious that Nokia is going to take some heat for that, but the fact is that Windows Phone 7 is not multi-core ready, yet.
That said, Windows Phone 7 has also consistently been the platform where the user interface is extremely fluid, and that's true of the Lumia 800 too. For most users, the experience will be very smooth, and performance should not be an issue, except for games and fancy photo processing. While some will undoubtedly complain about the processing power, I don't feel that it will make or break the Lumia 800. On the other hand, the single-core chip may yield better battery life…

Conclusion (must-see)

I've teased Nokia about their "thick" designs for years, and I'm honestly impressed with what they have come up with. The Lumia 800 design is truly original, seamless and just well thought out. If this doesn't work for Nokia, I don't know what will.
For Microsoft, I’ve recently said that Windows Phone 7 needs a “sexy design”, and this is pretty much it. This is the best shot that both companies have to take significant market share in 2012, but will they? It’s too early to tell.
Finally, the Nokia Lumia 800 will be available in the US only in early 2012, but it will be available in Europe sometime next month. It has been confirmed to us that both CDMA/LTE and HSPA+ versions will be available. Nokia is not commenting on carrier deals, but I would expect Verizon, AT&T and T-Mobile to get the devices (if they want them).
What do you think?

Launch your own nanosatellites into space for a few hundred bucks



Do you wanna buy a satellite? No, really - do you? Well, Zac Manchester would like to sell you one. Not only that, but he claims that the thing could be built and launched into orbit for just a few hundred dollars. For that price, however, you're not going to be getting a big satellite. Manchester's Sprite spacecraft are actually about the size of a couple of postage stamps, but they have tiny versions of all the basic equipment that the big ones have.

Zac is a graduate student in Aerospace Engineering at Cornell University, and was part of the team that originally designed the Sprites for use as space probes - the idea being that they could ride the solar wind, like cosmic dust, traveling deep into space without the need for fuel. Three of the one-square-inch spacecraft were delivered to the International Space Station this May, to see how well they could stand up to the rigors of outer space. They are currently still mounted on the outside of the station, and are due to be brought back to Earth in a couple of years.
Each Sprite incorporates a Texas Instruments MSP430 microcontroller, a radio transceiver, solar cells, capacitors and an antenna. In their current incarnation, they can't do much more than transmit simple bits of data, but Manchester says that future versions could easily include sensors such as thermometers or cameras.
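To give a feel for what "simple bits of data" might look like, here is a hypothetical beacon-frame sketch in Python. The real Sprite firmware (which runs on the MSP430) isn't public, so the sync byte, checksum, frame layout and the "KD2B" tag below are all invented for illustration only:

```python
# Hypothetical beacon frame for a Sprite-style transmitter. The real
# MSP430 firmware is not public; sync byte, layout and checksum are
# invented to illustrate "simple bits of data" like a 4-character tag.

def make_beacon_frame(tag: str) -> bytes:
    """Build one frame: sync byte, four ASCII characters, checksum."""
    if len(tag) != 4 or not tag.isascii():
        raise ValueError("tag must be exactly 4 ASCII characters")
    payload = tag.encode("ascii")
    checksum = sum(payload) % 256      # trivial integrity check
    return bytes([0x7E]) + payload + bytes([checksum])

frame = make_beacon_frame("KD2B")     # "KD2B" is a made-up example tag
# Flight code would hand `frame` to the radio transceiver on a loop.
```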
A small box-like satellite - or CubeSat - would be used to carry the Sprites into low-altitude orbit. It could contain hundreds or even thousands of spring-loaded Sprites, which would shoot out as soon as the CubeSat's lid was opened from ground control via a radio signal.
Once in orbit, the Sprites' transmitted radio signals would be monitored by a network of amateur ground-based tracking stations. The aim would be to demonstrate their communications capabilities, while also observing how long they remained in orbit, and how well they were able to perform in outer space. They should all burn up when they re-enter the earth's atmosphere, within a few days or weeks of their release.
Manchester is now raising funds for the demonstration project via the Kickstarter fund-raising website. Depending on how much money is raised, his team will either have to wait for a spot on one of several free launch programs, or be able to purchase a commercial launch of their own. If they are able to purchase a launch, he is hoping for the Sprites to be in orbit by early 2013.
Yes, but how do you get to call one of them your own? By contributing US$300 via Kickstarter, that's how. Such a donation will allow you to name one of the Sprites, specify the four-character message that it will transmit (such as your initials), and track it on the KickSat website. A donation of $1,000 or more will put an actual physical Sprite in your hands, and provide you with the source code and programming tools to write its custom flight code. Should you wish to set up your own ground station, you will also be instructed on how to receive and interpret its radio signal.

Groups are also encouraged to sponsor fleets of Sprites, each one of which will bear that group's logo on its lilliputian surface.

ITRI develops re-writable, bendy, and electricity-free e-paper



Taiwan's Industrial Technology Research Institute (ITRI) has developed a highly flexible electronic paper that's both re-writable and re-usable, and like the Boogie Board electronic memo pads, the technology doesn't need electricity to retain the screen image. The institute is currently in licensing talks with manufacturers at home and in the U.S., and has taken first prize in the Materials and Basic Science and Technology category of the Wall Street Journal's Technology Innovation Awards.
The nonprofit research and development institute describes its i2R e-Paper technology as a flexible cholesteric liquid crystal panel, so named because the material has a structure similar to biological cholesterol molecules. The reflective display technology uses ambient light to display 16-gray-level images, so it doesn't require the kind of backlighting used in LCD screens. The displayed 300 dpi text and images are transferred and stored using heat - in a similar way to an old-style thermal fax machine.
A thermal printer fitted with a print head running at 86°C (186.8°F) and drawing just 37 W heats the liquid-crystal layer, turning molecules light or dark. Running an already-printed i2R e-Paper sheet through such a printer wipes the existing content and replaces it with something else. ITRI estimates that its bendable, thin plastic development - which can be produced in a number of different sizes - is re-writable up to 260 times before needing to be replaced, although technicians are continually building on this achievement and have already managed rewrites hundreds of times beyond that limit.
Once the end of its useful life is reached, the plastic PET substrate, high molecular liquid crystal material, nano pigment absorption layer material and silver electrode are all recyclable.
The institute says that water-based marker pens can be used to annotate the published content, and the notes simply wash off after use, like a roll-up digital whiteboard. Red, green and blue colors can be produced by adding spherical composite ion-exchangers of different pitches during the process, making the i2R e-Paper useful for future applications in color e-books, magazines and newspapers.
ITRI sees the technology being able to immediately replace paper for short-lived items like advertising banners, corporate visitor ID badges, transit passes, and museum or parking lot tickets.
"It's a fact that a significant portion of daily office printed papers will be discarded in days or weeks after use," said Dr. Janglin Chen, general director of ITRI's Display Technology Center. "i2R e-Paper's re-cycle and re-use capabilities, positive effects on the environment and low cost of production are paving the way for mass acceptance of green e-paper technologies."
The i2R e-Paper technology has already been licensed to Taiwan's ChangChun Plastics, which plans to begin trial mass production next year. It's also led to ITRI emerging victorious in both the Wall Street Journal's Technology Innovation Awards (for the third year in a row - last year taking Gold for its FlexUPD paper-thin, flexible AMOLED display technology) and the R&D 100 Awards.

Tuesday, October 25, 2011

Microsoft HoloDesk lets users handle virtual 3D objects



Does anyone remember the animated version of Star Trek from the 1970s? The Emmy-Award-winning series was the very first outing for the now familiar Holodeck, although it was called the recreation room back then. Despite some landmark advances in holographic technology in the years since - such as the University of Tokyo's Airborne Ultrasound Tactile Display - nothing has come close to offering the kind of physical interactivity with virtual objects in a 3D environment promised by the collective imaginations of sci-fi writers of the past. While we're not at the Holodeck level just yet, members of the Sensors and Devices group at Microsoft Research have developed a new system called HoloDesk that allows users to pick up, move and even shoot virtual 3D objects. The system also recognizes and responds to the presence of inanimate real-world objects, like a sheet of paper or an upturned cup.
Unfortunately, the research team hasn't revealed too much about how its new natural user interface system works, but here's what we do know. It's about the size of a filing cabinet and is made up of an overhead screen that projects a 2D image through a half-silvered beam splitter into a viewing area beneath. A Kinect camera keeps tabs on a user's hand position within the 3D virtual environment, a webcam tracks the user's face to help with placement accuracy, and custom algorithms bring everything together in (something very close to) real time.

The user looks down through a transparent display into the viewing area where holographic objects can be picked up and stacked on top of real-world ones, and real hands can juggle virtual balls or shoot them at targets, or play with a non-existent smartphone. The researchers also seem to have included the ability to remotely collaborate on shared multi-user virtual projects. Interestingly, objects in the virtual world still appear to obey the laws of real-world physics, but that doesn't mean that they have to - the beauty of a virtual world is surely that anything is possible.
As you can see from the following proof-of-concept Microsoft Research video, the development does suffer from some jerkiness and image dilution when real-world objects enter the viewing area, and there are also a few placement and tracking issues, but it's a major step forward, and in its current stage of development might find immediate use in gaming, education and design.

Doxie Go standalone portable scanner syncs with iOS devices



Apparent is updating its line of portable scanners with the Doxie Go, a lightweight, standalone unit with enough onboard storage for up to 600 pages or 2,400 photos, and the ability to scan directly to an external drive or sync scans to an iPad or iPhone without the need for a computer escort.

While Doxie's previous effort - the Doxie U portable scanner - came decorated with pink love hearts that would sit nicely next to a poster of Justin Bieber, the Go takes on a more executive look with its streamlined white finish.
Measuring 10.5" x 1.7" x 2.2" (26.7 cm x 4.3 cm x 5.6 cm) and weighing in at 14.2 oz (403 g), the Go has a rechargeable lithium-ion battery that's good for about 100 documents before it needs recharging, and scan speed is specced at eight seconds per full-color page, so it's not lightning quick.

The company's included Doxie 2.0 software automatically syncs scans to your computer when connected via USB and is set up for simple transfer to Evernote, Dropbox, Flickr and Google Docs. There's also automatic recognition, cropping, rotation and contrast adjustment, a "software stapler" tool for creating multi-page documents and ABBYY® OCR (Optical Character Recognition) technology which is designed to create easily searchable PDFs that can be copied and pasted.
JPEG and PNG files can also be created, while an SD slot adds to the Doxie Go's storage expansion options.

Recharging is via USB and a full recharge takes about 2 hours, but if you need to get on the road again in a hurry there's a charging kit available for US$19.
The Doxie Go is slated to hit shelves in late November at a price of US$199.
There's little detail at this stage on the iPhone/iPad Sync Kit, except that it enables you to upload scanned files directly into your camera roll. The kit will be released in December and will set you back a further US$39. The ABBYY® OCR functionality will also be available in December, as a software upgrade.

Monday, October 24, 2011

New touchscreen tech recognizes different parts of the finger



Small touchscreen devices such as smartphones certainly have their attractions, but they also have one drawback - there isn't much room on their little screens for touch-sensitive features. This means that users will sometimes have to go into sub-menus instead, or make do with jabbing their fingers at tiny controls. Researchers at Carnegie Mellon University's Human-Computer Interaction Institute, however, are working on an alternative. Their prototype TapSense system can differentiate between screen taps from different parts of the finger, and will perform different tasks accordingly.



TapSense works by analyzing the sound of objects hitting the glass of the touchscreen display. Using an inexpensive microphone attached to the device (the built-in mics aren't up to the task), the system can tell the difference between taps delivered by a finger's pad, nail, knuckle or tip. It can also differentiate between taps from styluses made out of different types of inert materials, such as wood, acrylic and polystyrene foam.
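As a rough illustration of the approach - not the researchers' actual classifier or features - a tap type can be picked by comparing a new tap's acoustic features against per-type reference vectors. The feature choices and numbers below are entirely made up:

```python
import math

# Toy nearest-centroid tap classifier. The reference vectors below are
# invented (peak amplitude, spectral centroid in kHz) - TapSense's real
# features and training data are described in the paper, not reproduced here.
CENTROIDS = {
    "pad":     (0.30, 1.0),   # soft, low-frequency thud
    "nail":    (0.80, 6.0),   # sharp, high-frequency click
    "knuckle": (0.90, 2.0),   # loud, mid-frequency knock
    "tip":     (0.50, 3.5),
}

def classify_tap(features):
    """Return the tap type whose reference vector is nearest."""
    return min(CENTROIDS, key=lambda c: math.dist(features, CENTROIDS[c]))

print(classify_tap((0.78, 5.6)))  # nearest to the "nail" centroid
```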
An app running on a smartphone or other touchscreen device equipped with TapSense could perform different functions based on which part of the finger its operator used. This means that users could stay on one screen with fairly large controls, changing modes or accessing features by alternating between various types of taps - not unlike using a right mouse button.
The researchers demonstrated one possible application of the technology, in which a knuckle tap on an email heading brought up a list of options, instead of opening the message. In another example, a paint app allowed users to draw freehand when using the pad of their finger, but inserted straight lines when they used their tip. In yet another, users could access alternate characters on a virtual keyboard app by typing with the tip of their finger (as opposed to the pad), and backspace by nail-tapping.
Because TapSense can tell the difference between styluses made from different materials, this means that multiple people could use the screen simultaneously and the device would be able to identify which user was which, based on what the point of their stylus was made of. Different functions could also be assigned to different materials, making possible a stylus with a "pen" on one end and an "eraser" on the other.

Thursday, October 20, 2011

OmniTouch turns any surface into a touchscreen interface



Had Shakespeare been born several centuries later, he might have said "All the world's an interface," especially if he'd had a chance to play with the recently-developed, wearable OmniTouch system. While interactive interface projectors are far from new, this innovative concept design utilizes a different approach that promises to turn just about any solid surface into a touch-sensitive input device. Books, tables, walls, hands and other body parts, it's all fair game.

In its current proof-of-concept iteration, which was prototyped at idea-rich Microsoft Research in Redmond, Washington, by PhD student Chris Harrison and his team, the rough-hewn, shoulder-mounted device resembles a sci-fi prosthetic weapon, but looks can be deceiving.
"We explored and prototyped a powerful alternative approach to mobile interaction that uses a body-worn projection/sensing system to capitalize on the tremendous surface area the real world provides," explains Harrison.

Like the proverbial "better" mousetrap, the concept of mobile interaction seems prone to constant tinkering. The OmniTouch draws from a blend of disciplines to overcome numerous issues that beset similar devices. Some approaches require placing markers on the fingertips but still can't discern whether the fingers are "clicked" (touching the surface) or hovering. Others can't "read" surfaces beyond those of the user's own body or they lack the ability to respond to touch/drag motions.

To surmount these hurdles, Harrison and his colleagues combined a PrimeSense short-range depth camera with a Microvision ShowWX+ laser pico-projector. The camera generated a 320x240 depth map at a rate of 30FPS, even for objects as close as 8 inches (20cm). The projector delivered a sharp, focus-free, wide-angle image independent of the surface's distance - a useful property in such applications. Both devices were then linked to a desktop computer.
The OmniTouch gets its edge in finger position detection through a complex series of calculations that begins with the generation of the depth map. The second video below contains a detailed description of the process, which enables the device to determine whether one's fingers are floating above a surface or actually contacting it. The inputs yielded closely approximate those of touchscreens and mice, so the possibilities for the OmniTouch are seemingly endless. Let's hope the wait for a commercial version isn't.
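While the team's full algorithm is only described in the paper and video, the core click-versus-hover idea can be caricatured very simply: compare the fingertip's depth with that of the surface beneath it. The 10 mm threshold and the data layout below are assumptions, not values from the paper:

```python
# Caricature of click-vs-hover detection from a depth map, in mm.
# The real OmniTouch pipeline (finger segmentation, flood fill toward
# the surface) is more involved; the threshold here is an assumption.

CLICK_THRESHOLD_MM = 10

def finger_state(depth_map, finger_xy, surface_depth_mm):
    """depth_map[y][x] holds depth in mm; smaller = nearer the camera."""
    x, y = finger_xy
    gap = surface_depth_mm - depth_map[y][x]   # fingertip height above surface
    return "clicked" if gap <= CLICK_THRESHOLD_MM else "hovering"

# Toy 2x2 map: surface at 500 mm, one fingertip at 495 mm, one at 460 mm
depth = [[495, 460], [500, 500]]
print(finger_state(depth, (0, 0), 500))  # clicked  (5 mm gap)
print(finger_state(depth, (1, 0), 500))  # hovering (40 mm gap)
```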
The paper, OmniTouch: Wearable Multitouch Interaction Everywhere, by Chris Harrison, Hrvoje Benko and Andy Wilson, was presented in the Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (Santa Barbara, California, October 16 - 19, 2011).
All images courtesy Chris Harrison.

Wednesday, October 19, 2011

"Gloria" will allow internet astronomers to access worldwide robotic telescope network



Amateur astronomers wanting to observe celestial bodies soon won't be limited to just their own personal telescopes, or visits to the local public observatory. Starting next year, the first in a worldwide network of robotic telescopes will be going online, which users from any location on the planet will be able to operate for free via the internet. Known as Gloria (GLObal Robotic telescopes Intelligent Array for e-Science), the three-year European project will ultimately include 17 telescopes on four continents, run by 13 partner groups from Russia, Chile, Ireland, the United Kingdom, Italy, the Czech Republic, Poland and Spain. Not only will users be able to control the telescopes from their computers, but they will also have access to the astronomical databases of Gloria and other organizations.
The telescope at Spain's Montegancedo Observatory is serving as the model for Gloria. Located at the Universidad Politécnica de Madrid's Facultad de Informática, it can already be remotely operated over the internet, using the university's Ciclope Astro software. This same software will be used by all of the Gloria telescopes, to ensure uniformity across the system.

The amount of time that individual users get on the telescopes will be based on their "Karma," determined by how popular their work is with their fellow users. It will reportedly be somewhat like YouTube, where users vote on each other's video posts.
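Gloria hasn't published how Karma translates into observing time, but one plausible scheme - shown here purely as a sketch, with made-up users and numbers - is to divide each night's minutes in proportion to users' Karma scores:

```python
# Invented Karma-proportional scheduler - Gloria's actual allocation
# formula has not been published. Each user's share of a night's
# telescope time is proportional to their Karma (vote tally).

def allocate_time(karma, total_minutes):
    total_karma = sum(karma.values())
    if total_karma == 0:
        share = total_minutes / len(karma)   # no votes yet: split evenly
        return {user: share for user in karma}
    return {user: total_minutes * k / total_karma
            for user, k in karma.items()}

print(allocate_time({"ana": 30, "bo": 10}, 480))
# ana's 30 of 40 Karma points earn 360 of the 480 minutes; bo gets 120
```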
While the EUR2.5 million (US$3.4 million) project is intended to help armchair astronomers of all types explore the Universe for themselves, it will also be used for crowd-sourced research. The University of Oxford in particular will be using Gloria for its Galaxy Zoo project, in which users are recruited to help classify approximately a million galaxies. Astronomical events will also be broadcast on the system, to help promote Gloria and build its user community.

Initiative challenges young minds to design Space Station science experiment



YouTube and Lenovo have joined forces to launch a global initiative that challenges youngsters to design a science experiment which can be performed in space. Two winning entries chosen by a panel of scientists, astronauts and educators - including A Brief History of Time author professor Stephen Hawking - will have their experiments conducted by astronauts aboard the International Space Station and live streamed on YouTube for the world to see.

The YouTube Space Lab competition is now open to 14-to-18-year-old students around the globe, who are being asked to submit a two-minute YouTube video outlining their experimental proposal to the Space Lab video channel. Each video presentation must include the scientific question the entrant wants answered, an educated guess at what the answer might be, an outline of the method used to conduct the experiment, and the expected results. Entries can be individual or team efforts, but the latter are restricted to groups of three.

Three years of planning

The idea for the challenge came from a Google brainstorming session three years ago. To help make it a reality, YouTube and Lenovo have partnered with private space exploration company Space Adventures, and space agencies including NASA, the European Space Agency and the Japan Aerospace Exploration Agency.
Students have until December 7, 2011 to get their proposals uploaded, after which the judges will deliberate before announcing six regional finalists. These lucky youngsters will gather in Washington, D.C. next March, where they will receive a Lenovo IdeaPad laptop, experience a Zero-G flight and be awarded other prizes. One winner from the 14-16 age group and another from the 17-18 group will then get the chance to have their experiments undertaken 250 miles above the Earth.

Global winners will also get to choose a once-in-a-lifetime space experience as a prize. The lucky students can either take a tour of the Japan Aerospace Exploration Agency facilities and then watch their experiment blast off from Tanegashima Island, Japan, in a rocket bound for the International Space Station some time during the northern summer of 2012, or receive astronaut training (upon reaching the age of 18) at the training center for Russian cosmonauts in Star City, Russia.

Droid RAZR: “Impossibly Thin”



Motorola DROID RAZR
Motorola has just announced the Droid RAZR, a phone that the company calls "impossibly thin". At 7.1mm at its thinnest point, Motorola calls it the world's thinnest phone. It features a 1.2GHz dual-core processor and 1GB of RAM. Its display is a 4.3″ Super AMOLED with Gorilla Glass that can be used to watch HD movies from Netflix; Motorola says it is the first device that can download HD movies from Netflix.

Key features:
  • 1.2GHz dual-core processor
  • 1GB of RAM
  • 4.3” Super AMOLED display with Gorilla glass
  • Android Gingerbread 2.3.5
  • Measuring 7.1mm at its thinnest point
  • Stainless steel and Kevlar design
  • 4G LTE connectivity
  • Able to watch HD Netflix movies
  • Government-graded encryption of emails, contacts and calendars
  • Compatible with Motorola’s Lapdock 500 Pro/Lapdock 100

The internal design is made of stainless steel and Kevlar, which ensures rigidity, but at the same time the phone weighs only 127g - more than the Galaxy S2, but less than the iPhone 4.
But hardware is only one side of the story, says Motorola. On the software side, Motorola says that it has made optimizations that will improve battery life, and of course, Motorola's Webtop transforms this thin smartphone into a small Linux computer.
Motocast is Motorola's cloud service. With Motocast, users can stream music and download files and documents, which is something that business users will appreciate, according to Motorola.

Finally, this is the thinnest 4G LTE phone in the world right now. It is as simple as that. At the moment, Verizon's 4G LTE network is the fastest 4G network in America, but earlier devices suffered from being bulkier and having short battery life. The Droid RAZR has already solved the physical design problem; now, we're excited to see how good the battery life is going to be!

Tuesday, October 18, 2011

Throwable ball camera captures panoramic images

Taking pictures is about to get a lot more fun if computer engineer Jonas Pfeil and his colleagues have anything to say about it. A recent graduate of the Technical University of Berlin, Pfeil and his team designed and built a working prototype "ball" camera - a foam-studded sphere (about 8 inches in diameter) peppered with 36 tiny 2-megapixel cell phone cameras. Throw it in the air and it captures an image at the top of the ball's trajectory. Talk about redefining photography - one day, snapping pics may give way to "tossing" them.







Panoramic images, with their large width-to-height ratio, are appealing because they better approximate the way we humans view the world. But capturing them typically requires a tripod, several camera positions and lots of stitching together. Pfeil's invention eliminates all that, since the component images are captured simultaneously. That's especially handy because it also freezes moving objects that might otherwise blur or shift during the image-gathering process.
To view the roughly 72-megapixel images from the ball-cam, the data is downloaded via USB into a spherical panoramic viewer that will ship with the camera. The resulting images look similar to those in Google's Street View, and can be similarly panned and zoomed to examine all the captured details.
The multi-faceted housing that holds the ball-cam together was fabricated with a 3D printer. Aside from the 36 STMicroelectronics quarter-inch CMOS camera modules, the well-padded interior also houses an accelerometer (to gauge toss acceleration and maximum height) and two Atmel microcontrollers to sync up and control all the cameras. Most of the components are fairly inexpensive, so while there's no price point yet, it's likely to be competitive with mid-range digital point-and-shoots.
"We used the camera to capture full spherical panoramas at scenic spots, in a crowded city square and in the middle of a group of people taking turns in throwing the camera," says Pfeil.
The team plans to demo their patent-pending new camera this December at SIGGRAPH Asia 2011, and it's sure to cause a stir. Hopefully it'll generate a marketing deal, too, because, as Pfeil notes, "above all we found that it is a very enjoyable, playful way to take pictures."

All images courtesy Jonas Pfeil

Saturday, October 15, 2011

Without Dennis Ritchie, there would be no Jobs

In the last two weeks, we have lost two people who had an immense influence on the digital era.

It is undeniable that Steve Jobs brought us innovation and iconic products like the world had never seen, as well as a cult following of consumers and end users who mythicized him. The likes of him will probably never be seen again.

I too, like many in this industry, despite my documented differences with the man and his company, paid my respects, and have acknowledged his influence.

But the “magical” products that Apple and Steve Jobs, as well as many other companies, created owe just about everything we know and write about in modern computing as it exists today to Dennis Ritchie, who passed away this week at the age of 70.

Dennis Ritchie?

The younger generation that reads this column is probably scratching their heads. Who was Dennis Ritchie?

Dennis Ritchie wasn’t some meticulous billionaire wunderkind from Silicon Valley who mystified audiences at standing-room-only presentations in his minimalist black mock turtleneck, with shiny new products and wild rhetoric aimed at his competitors.

No, Dennis Ritchie was a bearded, somewhat disheveled computer scientist who wore cardigan sweaters and had a messy office.

Unlike Jobs, who was a college dropout, he held a Ph.D. and was a Harvard University grad with degrees in physics and applied mathematics.

And instead of the gleaming Silicon Valley, he worked at AT&T Bell Laboratories in New Jersey.

Yes, Jersey. As in “What exit.”

Steve Jobs has frequently been compared to Thomas Edison for the quirkiness of his personality and inventive nature.

I have my issues with that comparison, in that it gives Jobs credit for being an actual technologist, someone who actually invented something.

It is important to realize that while indeed the man was brilliant in his own way, Steve Jobs was not a technologist.

Indeed, he had a very strong sense of style and industrial design, understood what customers wanted, and was a master marketer and salesperson. All of these make him a giant in our industry. But inventor? No.

Dennis M. Ritchie, on the other hand, invented and co-invented two key software technologies which make up the DNA of effectively every single computer software product we use directly or even indirectly in the modern age.

It sounds like a wild claim, but it really is true.

First, let’s start with the C programming language.

Developed by Ritchie between 1969 and 1973, C is considered to be the first truly modern and portable programming language. In the 40 years or so since its introduction, it has been ported to practically every systems architecture and operating system in existence.

Because it is an imperative, compiled, procedural programming language that allows lexical variable scope and recursion, as well as low-level access to memory and rich functionality for I/O and string manipulation, C became quite versatile, and Ritchie and Brian Kernighan refined it into what was eventually standardized as “ANSI C”.

In 1978, Kernighan and Ritchie published the book “The C Programming Language”. Referred to by many simply as “K&R”, it is considered a computer science masterpiece and a critical reference for explaining the concepts of modern programming, and it is still used as a text when teaching programming to computer science students even today.

C as a programming language is still used heavily today, and it has since mutated into a number of sister languages.

The most popular, C++ (pronounced “C plus plus”), introduced by Bjarne Stroustrup in 1985, added support for object-oriented programming and classes. It is used on a variety of operating systems, including every major UNIX derivative, Linux and the Mac, and has been the primary programming language for Microsoft Windows software development for at least 20 years.

Objective-C, created by Brad Cox and Tom Love in the 1980s at a company called Stepstone, added Smalltalk-style messaging capabilities to the language.

It was largely considered an obscure derivative of C until it was popularized in the late 1980s and early 1990s by the NeXTStep and OpenStep operating systems running on computers from NeXT, the company Steve Jobs formed after he was ousted by Apple’s board in 1985.

What happened “next” of course is computing history. NeXT was purchased by Apple in 1996 and Jobs returned to become CEO of the company in 1997.

In 2001, Apple launched Mac OS X, which makes heavy use of Objective-C and object-oriented technologies introduced in NeXTStep/OpenStep.

While C++ is used heavily on the Mac, Objective-C is what is used to program against the native object-oriented “Cocoa” API in the Xcode IDE, and it is central to the gesture recognition and animation features of iOS, which powers the iPhone and the iPad.

Objective-C also provides frameworks for the Foundation Kit and Application Kit that are essential to building native OS X and iOS applications.

Microsoft has its own derivative of C in C# (pronounced “C Sharp”) that was introduced in 2001 and serves as the foundation for programming within the .NET framework.

C# is also the basis for programming the new Metro applications in the Windows Runtime (WinRT) for the upcoming Windows 8, as well as in Windows Phone 7.x. It is also used on Linux and other UNIX derivatives as the programming environment for Mono, a portable implementation of the .NET framework.

But C’s influence doesn’t end at its direct derivatives. Java, an important enterprise programming language (and the basis for Dalvik, the primary programming environment for Android), is heavily based on C syntax.

Other languages, such as Ruby, Perl and PHP, which form the basis of the modern dynamic Web, all use syntax introduced in C, the language created by Dennis Ritchie.

So it could be said that without the work of Dennis Ritchie, we would have no modern software… at all.

I could end this article simply with what Ritchie’s development of C means to modern computing and how it impacts everyone. But I would only really be describing half of a life’s work of this man.

Ritchie is also the co-creator of the UNIX operating system, which, after being prototyped in assembly language, was completely rewritten in C in the early 1970s.

Since the very first implementation of “Unics” booted on a DEC PDP-7 back in 1969, it has mutated into many other similar operating systems running on a huge variety of systems architectures.

Name a computer vendor, and every single one of them has at some time had an implementation of UNIX. Even Microsoft once owned a UNIX product, XENIX, which it has since sold to SCO.




Essentially, there are three main branches.

One branch is the “System V” UNIXes that we know today primarily as IBM AIX, Oracle Solaris, SCO UnixWare and Hewlett Packard’s HP-UX. All of these are considered to be “Big Iron” OSes that drive critical transactional business applications and databases in the largest enterprises in the world, the Fortune 1000.

Without the System V UNIXes, the Fortune 1000 probably wouldn’t get much of anything done. Business would essentially grind to a halt.

They may only represent about 10 to 20 percent of any particular enterprise’s computing population, but it’s a very important share.

The second branch, the BSDs (Berkeley Software Distribution), includes FreeBSD and NetBSD, which form the basis for both Mac OS X and iOS, which powers the iPhone. The BSDs also power much of the critical infrastructure that actually runs the Internet.

The third branch of UNIX is not even a branch at all — GNU/Linux. The Linux kernel (developed by Linus Torvalds) combined with the GNU user-space programs, tools and utilities provides for a complete re-implementation of a “UNIX-like” or “UNIX-compatible” operating system from the ground up.

Linux, of course, has become the most disruptive of all the UNIX operating systems. It scales from embedded microcontrollers to smartphones, tablets and desktops, and even the most powerful supercomputers.

One such Linux supercomputer, IBM’s Watson, even beat Ken Jennings on Jeopardy! while the world watched in awe.

Still, it is important to recognize that Linux and GNU contain no UNIX code at all: hence the Free Software movement's recursive acronym, “GNU’s Not Unix.”

But by design, GNU/Linux behaves much like UNIX, and it could be said that without UNIX being developed by Ritchie and his colleagues Brian Kernighan, Ken Thompson, Douglas McIlroy and Joe Ossanna at Bell Labs in the first place, there never would have been any Linux or an Open Source Software movement.

Or a Free Software Foundation or a Richard Stallman to be glad Steve Jobs is gone, for that matter.

But enough of religion and ideology. We owe much to Dennis Ritchie, more than we can ever possibly imagine. Without his contributions, it’s likely none of us would be using personal computers today, sophisticated software applications or even a modern Internet.

No Android smartphones, no fancy DVRs and streaming devices, and no Macs and iPads for Steve Jobs and Apple to make Amazingly Great.

No “Apps for That.”

To Dennis Ritchie, I thank you — for giving all of us the technology to be the technologists we are today.







Wednesday, October 12, 2011

PSLV-C18 Successfully Launched Megha-Tropiques


The Indo-French satellite Megha-Tropiques was today successfully placed in orbit by the PSLV-C18 rocket in a perfect launch from the Satish Dhawan Space Centre, as part of a key mission that will help scientists understand global tropical weather.

Along with Megha-Tropiques, the Indian Space Research Organisation's workhorse Polar Satellite Launch Vehicle (PSLV) also shot three nanosatellites into space: VesselSat-1 from Luxembourg, SRMSat from SRM University, Chennai, and Jugnu from IIT Kanpur.

The four satellites were injected into orbit one after another with clockwork precision about 26 minutes after the PSLV lifted off in a plume of smoke at 11 AM, in a mission described as a "grand success" by ISRO Chairman K. Radhakrishnan. "PSLV-C18 has been a grand success. Very precisely, four satellites were injected into orbit, and the difference between what we planned and what we achieved is just two km over an altitude of 867 km," he told scientists after the launch.

The rocket first injected the 1,000-kg Megha-Tropiques satellite into an orbit at an altitude of 867 km, at an inclination of 20 degrees with respect to the equator. Megha-Tropiques carries three payloads (two by the French space agency CNES, the Centre National d'Etudes Spatiales, and one jointly by ISRO and CNES) and a complementary scientific instrument.



ISRO built the satellite at a cost of Rs 80 crore, with an "equal contribution" from CNES. Megha-Tropiques (Megha meaning cloud in Sanskrit, Tropiques denoting the tropics in French) will investigate the contribution of the water cycle in the tropical atmosphere to climate dynamics.

Information beamed by Megha-Tropiques is expected to benefit not only India, but also all countries in the Indian Ocean region and other parts of the world.

Jugnu, a three-kg satellite, has a camera system to take pictures of the Earth to monitor vegetation, reservoirs, lakes and ponds. Data received from it will be studied with a tracking system installed at IIT-Kanpur, and pictures and information received will be used for research.

'Jugnu' will also help gather information on floods, drought and disaster management. SRMSat, developed by SRM University and ISRO, weighs 10.9 kg and aims to monitor carbon dioxide and water vapour using a grating spectrometer.

VesselSat-1, developed and built by Luxembourg's LuxSpace, weighs 28.7 kg and carries receivers to detect signals automatically transmitted by vessels at sea within the satellite's footprint. Today's mission is ISRO's third successful one this year from India, besides another from French Guiana. This is the Indian space agency's 20th successful venture using the PSLV.

The 50-hour countdown commenced on October 10, with the Launch Authorisation Board clearing the launch.

Student-made tablet app may make dedicated Braille writers obsolete



Undergraduate student Adam Duran made excellent use of his time at Stanford University, where he attended a two-month summer course organized by the Army High-Performance Computing Research Center (AHPCRC). Together with his mentors, Adrian Lew and Sohan Dharmaraja, he created a potentially game-changing application that should make the lives of visually impaired people both easier and less expensive. The application turns a tablet into a Braille writer, saving the blind from having to purchase a dedicated device that may cost up to ten times more than a tablet.
Some Braille notetakers are essentially specialized laptops with limited functionality that tend to come with a steep price tag, with some costing up to US$6,000. Although some attempts at writing a tablet application that would make dedicated devices obsolete had been made before, none produced a result that could match the user experience of dedicated hardware. Simply reproducing the standard eight-key Braille keyboard layout on screen doesn't do the trick. The biggest challenge is allowing a blind person to correctly position his or her fingers over the virtual keys on a completely smooth glass panel.
This problem could easily be overcome by an appropriate tactile feedback technology, or at least that is what Microsoft engineers seem to think. The Redmond-based company has already filed a patent application for its still-to-be-developed tactile feedback technology, which uses plastic cells sprayed with a flexible polymer. If Microsoft is to be believed, those pixel-sized bits of plastic would be manipulated with UV light and used to convey a whole array of textures.
It all sounds great, but it's a bit too early to celebrate. Even if we live to see Microsoft bring this technology to fruition, it's difficult to say whether it will ever make its way into a portable tablet. The patent application clearly states that the technology is being developed for use in Microsoft Surface, a table-sized behemoth used in corporate environments and public spaces.
Apps4Android Inc has a different, considerably less high-tech take on finger orientation on a flat touchscreen. The company's Braille Keyboard "Screen Protector" is basically a transparent plastic stencil with a finger-positioning design cut from it. This simple yet effective solution gets the job done, in that it allows a blind person to use a tablet as a Braille notetaker. However, it is still far from perfect. First, the user has to buy an additional piece of hardware (the stencil) and deal with the hassle of applying it, adjusting it, washing it and so on (let's remember we are talking about an accessibility accessory for the visually impaired). Second, it constrains the user to pre-defined finger positions, disregarding the user's typing preferences and habits.
Both these grievances are neatly addressed by Duran's ingenious app. Instead of asking the person to conform to the keyboard layout, the Stanford summer course team made a keyboard layout that conforms to the person. All the user has to do is touch eight fingertips to the screen, and the appropriate keys are automatically assigned. This means the application is fully customizable and accommodates varying finger shapes and sizes. The app menu can be accessed by shaking the device and navigated by dragging a finger across the screen.
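The core idea behind that automatic assignment can be sketched very simply: sort the eight touch points by horizontal position and hand out the keys from left to right, wherever the fingers happen to land. The little C sketch below is only an illustration of that idea; the type names, function names and left-to-right key order are my assumptions, not details from the Stanford team's app.

```c
#include <stdlib.h>

/* A touch point as reported by the screen (hypothetical type). */
typedef struct { double x, y; } Touch;

/* qsort comparator: order touches by horizontal position. */
static int by_x(const void *a, const void *b)
{
    double ax = ((const Touch *)a)->x, bx = ((const Touch *)b)->x;
    return (ax > bx) - (ax < bx);
}

/* Sort the eight touches left to right and assign key index i
 * to the i-th finger from the left, so the keyboard forms
 * under the user's hands rather than the other way around. */
void assign_keys(Touch touches[8], Touch keys[8])
{
    qsort(touches, 8, sizeof(Touch), by_x);
    for (int i = 0; i < 8; i++)
        keys[i] = touches[i];
}
```

A production app would also have to split the touches into left-hand and right-hand groups and track fingers as they drift, but the sketch shows why no stencil is needed: the layout is derived from the touches themselves.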