April 11, 2015
The Norwegian technology company Elliptic Labs develops systems for touchless, gesture-based control of mobile devices. Its technology allows users to interact with a device by making gestures in front of it, rather than physically touching the screen.
The technology needs no new hardware - Elliptic Labs' software uses hardware components already in the device to translate movements into commands - and it's coming to a phone near you soon.
According to the company, its touch-free functionality will be featured on new flagship mobiles to be launched later this year, but it won't yet disclose which handset maker will build the technology into its products, saying only that it's "a large vendor you've heard about".
Elliptic Labs' system is based on ultrasound detection. Using the loudspeakers and microphones built into handsets, it's possible to transmit a continuous ultrasound signal. That signal is reflected back to the device when objects - such as a user's hand - come into its field of view. By using the two or three microphones found on most modern mobiles to triangulate the object's location, gestures and hand movements can be relayed to the device.
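The core idea behind this kind of echo ranging can be sketched in a few lines. This is a hypothetical illustration, not Elliptic Labs' actual implementation: a speaker emits ultrasound, each microphone measures the echo's round-trip delay, and that delay is converted to a distance; with two or three microphones at known positions, the distances can then be triangulated into a position.

```python
# Hypothetical sketch of ultrasound echo ranging. The function name and
# numbers are invented for illustration only.

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def echo_delay_to_distance(round_trip_s: float) -> float:
    """Convert a round-trip echo delay (speaker -> object -> microphone)
    into the one-way distance to the reflecting object."""
    return round_trip_s * SPEED_OF_SOUND / 2.0

# A hand 0.25 m away returns an echo after about 1.46 ms:
delay = 2 * 0.25 / SPEED_OF_SOUND
print(round(echo_delay_to_distance(delay), 2))  # 0.25
```

With several microphones, each reporting its own delay to the same hand, the differences between those distances are what let the system pin down where the hand is, not just how far away it is.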
"The frequency we're operating on is about 40,000Hz. That's double what humans can hear, but bats or dolphins are able to hear it. We don't expect this will be a problem though," Haakon Bryhni, CTO of Elliptic Labs, told ZDNet.
Elliptic Labs' technology enables hand movements to be detected in an almost complete hemisphere around the front of the phone. In practice, it can detect objects in excess of half a metre away, but according to Bryhni, the maximum theoretical reach is far larger.
Positioning is not limited to just the x and y axes - mobile devices running Elliptic Labs' software can also detect how near or far the hand is from the screen, with centimetre precision.
That makes it possible - though not necessarily practical - for the system to distinguish up to 40 discrete proximity levels. However, the company believes that mobile vendors will only need a few of those 40 layers for what it calls "multi-layer interaction".
Elliptic Labs is currently demoing the functionality at work in a video player app. The app shows no controls when no hand is detected, shows how long the video has left to play when it detects a hand far from the screen, and shows player control buttons when the hand is close to the screen.
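The multi-layer interaction in that demo boils down to mapping the reported hand distance onto a small number of UI states. The sketch below is illustrative only - the function name, thresholds, and state names are invented, not taken from Elliptic Labs' software:

```python
# Illustrative only: map the hand distance reported by an ultrasound layer
# onto the three UI states the demo video player uses. Thresholds are
# invented for this sketch.
from typing import Optional

def ui_state(hand_distance_cm: Optional[float]) -> str:
    if hand_distance_cm is None:     # no hand detected at all
        return "no_controls"
    if hand_distance_cm > 20.0:      # far layer: show time remaining
        return "show_time_remaining"
    return "show_player_controls"    # near layer: full playback controls

print(ui_state(None))   # no_controls
print(ui_state(35.0))   # show_time_remaining
print(ui_state(8.0))    # show_player_controls
```

An app could use more of the 40 available levels simply by adding more thresholds, which is presumably why the company expects vendors to settle on just a few layers.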
While phones don't need any additional hardware to make use of the functionality, a device's hardware set-up can affect how it works: Elliptic Labs needs at least three sensors in order to perform 3D detection. While that many microphones isn't uncommon in modern devices - the Apple iPhone 5s has three built in, for example - older or lower-end handsets might not be compatible with the system.
DSP, not CPU
Another important element in Elliptic Labs' design is the sound processing software architecture.
"When we're talking about a low number of sensors [found in smartphones today]... then we can re-use existing processing hardware. Inside the newest sound chips in the latest mobile devices, there is also DSP [digital signal processor] functionality. In the same chips that are doing the analogue/digital conversion, there are also processors present that our solution can run on. That's why we can run with very low power consumption," Bryhni said. The software running on the DSP keeps running even when the CPU is in halt mode while the device is sleeping. This means that motion detection still works when the device isn't awake, and functionality like "wake-on-motion" can be implemented easily.
"You can even remove the on/off button from the device - a simple gesture called 'double tap in the air' can be detected, because our technology can be on all day without emptying the device's battery," Bryhni said.
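A "double tap in the air" amounts to spotting two quick approaches toward the screen in the stream of distance readings. The following is a toy sketch of that idea under invented assumptions - the threshold, function names, and sample data are all hypothetical, and a real detector would also enforce timing windows and filter noise:

```python
# Toy sketch: count transitions from "far" to "near" in a series of
# hand-distance readings, and call two such transitions a double tap.
# The 10 cm threshold and the sample series are invented for illustration.

def count_taps(distances_cm, near_cm=10.0):
    """Count far-to-near transitions in a distance series."""
    taps, was_near = 0, False
    for d in distances_cm:
        is_near = d < near_cm
        if is_near and not was_near:
            taps += 1
        was_near = is_near
    return taps

def is_double_tap(distances_cm):
    return count_taps(distances_cm) == 2

# The hand dips toward the screen twice:
series = [40, 30, 8, 25, 40, 35, 7, 30]
print(is_double_tap(series))  # True
```

Because this logic can run on the DSP rather than the CPU, as Bryhni describes, such a detector could stay active around the clock without a significant battery cost.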
"What we are delivering to the operating system on the mobile is high-level information like left and right gestures, distance to the hand, speed of the movement, et cetera. Our code is not a part of the operating system, but it'll load onto the DSP at boot time. So our code will be a part of the system software, and we're not an app - we cannot be an app. Our software is part of the system, but not delivered by Google," he added.
Elliptic Labs is a small company that grew out of research laboratories in Norway.
"We have been working on this for a long time. This has been a research project for almost eight years, mainly at the University of Oslo. We also have a long-running relationship with Sintef [the largest research institution in the Nordics]. There are a good number of researchers internally: even though we are a small company with just 30 employees, we have eight people with PhDs. There's no lack of competency in our company," Bryhni says. "We are very concerned about our intellectual property portfolio, and we've got almost 70 registered patents by now," he added.
Elliptic Labs is hoping app developers will pick up the technology and run with it; in future, its ultrasound-based detection system should be able to provide apps with even more information and functionality.
"In the future, you'll see sensor arrays with a very high resolution. The benchmark is medical ultrasound. If you've ever seen a 3D ultrasound of an unborn child, you can see the facial features of the baby, the shape of the nose and the ears. There is nothing that hinders us in seeing such details as well, but a 3D medical ultrasound device like that has about 200 sensors, while we're using four today," Bryhni says.
The next big step for Elliptic Labs is to get its technology into products on the market. Bryhni told ZDNet that will happen this year, but he couldn't disclose which brand will be the first.
During the Mobile World Congress trade show in Barcelona this year, other Elliptic Labs representatives hinted that the first mobile with Elliptic Labs technology built in will come from "a large vendor, and it will be in a flagship product". It was also stated that the gesture control will be the main feature on that device, and that the vendor will "really make noise about that".
It remains to be seen whether a large number of users will want to control their mobile devices using gestures as well as by touch, but in Oslo a small company with big self-confidence is eager to showcase what ultrasound detection can add to the user interface.
"Our mandate and mission is to get into mobiles, tablets, laptops and wearables - first and foremost into mobiles, because our unique advantage is that we use a hundredth of the power of our competitors' solutions, and we're invisible. You cannot see from the outside whether our technology is present or not," Elliptic Labs CTO Haakon Bryhni concluded.