
New 3D display combines visuals, haptic feedback, and sound

A small sphere levitated on sound can be moved around fast enough to be a display.       

Plenty of technology development comes in areas where we’ve already settled on an efficient design. Wind turbines are a great example. Several decades ago, some radical ideas were floating around, touted as providing heightened efficiencies. But wind turbines have since stabilized on a standard design, and most research now goes into figuring out how to get the most out of that design. In a lot of ways, it’s boring compared to the lingering potential for a complete reinvention.

Right now, 3D displays are back in the much more fun “radical ideas” phase. While various VR technologies are on the market, they’re unsatisfying in various ways. A handful of technologies have been demonstrated that provide 3D images without the need for goggles or glasses. But these ideas have their own problems, including slow refresh rates and complicated hardware, and they lack a standardized mode of user interaction. One company has developed a 3D display that can be manipulated by hand, but without any feedback, this can be tricky.

This week, researchers are describing a new take on a recent 3D display development that mixes in a key ingredient: sound. The use of ultrasound allows the researchers to both run the display and provide haptic feedback for interactions with it. As an added bonus, the new display can allow audible sound to originate from objects within the display itself.

Conceptually, the display borrows heavily from one we described early last year. The light emitted by the display comes from lasers reflected off the surface of a tiny, hovering sphere. As long as the sphere is reflective, it can display any of the colors you might need for an image. To display an object, the sphere has to be moved around rapidly enough that the human eye registers the line traced by its movement as a single object, rather than detecting that it’s the product of motion.

The earlier system controlled the movement of the display dot using lasers, which generate a small bit of force as they reflect off the dot. The low amount of force provided limited the movement of the dot and thus made it harder to trace out large objects quickly enough to trick the eye into not seeing the motion.

The new display swaps out the lasers for ultrasound generators, which produce significantly more force. While this has also been attempted previously, the team has achieved the fastest motion of a display dot yet: nearly nine meters per second vertically and another 3.75 meters per second horizontally. That makes for a far better display. In something Chris Lee would consider an obvious negative, the researchers forgo lasers entirely. Instead, they use colored LEDs to illuminate the dot and simply paint all of its surroundings black to avoid stray reflections.
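To put those speeds in perspective, here is a back-of-the-envelope sketch (not from the paper) of how much path the dot can trace within a single persistence-of-vision window. The roughly 0.1-second window, the constant-speed assumption, and the reading of “nearly nine” as 8.75 meters per second are all illustrative assumptions.

    # Rough estimate: path length the dot can trace per persistence-of-vision
    # window. The 0.1 s window and constant peak speed are assumptions;
    # "nearly nine meters a second" is read here as 8.75 m/s.
    V_VERTICAL = 8.75      # m/s, peak vertical speed
    V_HORIZONTAL = 3.75    # m/s, peak horizontal speed
    POV_WINDOW = 0.1       # s, rough persistence-of-vision window (assumed)

    def path_budget(speed_m_per_s, window_s=POV_WINDOW):
        """Longest path the dot could retrace in one window at constant speed."""
        return speed_m_per_s * window_s

    print(f"Vertical budget:   {path_budget(V_VERTICAL) * 100:.1f} cm per window")
    print(f"Horizontal budget: {path_budget(V_HORIZONTAL) * 100:.1f} cm per window")
    # Prints 87.5 cm vertically and 37.5 cm horizontally -- plenty of budget to
    # retrace a figure a few centimeters across before the eye notices the motion.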

In addition to powering the display dot to much higher speeds, the ultrasound generators also provide two other features that led the device’s developers to name it a multimodal acoustic trap display. One of these is audible sound, generated by controlling the interactions of multiple ultrasound generators. While adding speakers to a display isn’t a very big ask, this has the ability to allow sound to be generated at the location where an object is displayed, which could provide some interesting effects.

Image credit: Eimontas Jankauskis

More significantly, however, the use of ultrasound provides haptic feedback within the display itself. In other words, if you want to push a button displayed by the device, the ultrasound can be used to make it feel like you actually touched something physical.

This is a bigger advance than it seems. I’ve tried a 3D display technology that allowed interaction by registering the position of your finger within the display. This was challenging, because if your gestures produced the wrong response (or no response at all), it was difficult to tell whether you were using the interface wrong or had simply not correctly lined up your finger with the display. Haptic feedback should make these interactions far more certain and specific.
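In arrays like this, both the mid-air audio and the tactile sensations are typically produced by the same underlying trick: driving the transducers with relative phases chosen so their waves reinforce at a specific point in space, creating a localized pressure spot that can be modulated into something you can hear or feel. The sketch below is a generic illustration of that phased-array focusing idea, not the paper’s actual control scheme; the 40 kHz frequency and the 4x4 array geometry are assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
    FREQ = 40e3              # Hz; a common frequency for airborne ultrasound arrays (assumed)

    def focusing_phases(transducer_xyz, focal_point, freq=FREQ, c=SPEED_OF_SOUND):
        """Per-transducer phase offsets (radians) so all waves arrive in phase
        at the focal point, producing a localized pressure peak there."""
        positions = np.asarray(transducer_xyz, dtype=float)   # (N, 3) emitter coordinates
        target = np.asarray(focal_point, dtype=float)         # (3,) focus location
        distances = np.linalg.norm(positions - target, axis=1)
        wavelength = c / freq
        # Advance each emitter's phase by the phase it loses over its path length,
        # so every wavefront coincides at the target.
        return (2 * np.pi * distances / wavelength) % (2 * np.pi)

    # Example: a 4x4 grid of emitters spaced 1 cm apart, focused 10 cm above its center.
    xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
    array_xyz = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
    print(np.round(focusing_phases(array_xyz, [0.015, 0.015, 0.10]), 2))

Slowly varying the strength of that focal point is one way such arrays turn a silent, invisible pressure spot into audible sound or a sensation on the skin.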

The “multimodal” aspects do come at a cost. As more of the ultrasound generators are diverted to generating sound or feedback, the display dot slows down, with its speed cut nearly in half. Providing both sound and tactile feedback at the same time isn’t possible.

Some video of the display in action.

That’s not ideal, but the bigger problem is that interactions are going to be necessarily limited by the fact that the dot is levitating on a bed of sound. Any gestures that interfere with that — the placing of a finger anywhere underneath something visible — could cause the dot to fall out of the display. This can be avoided to an extent through software, but it will still prevent the display of anything above a location where people are poking at the display.

That said, there’s a lot of room for improvement. The researchers put this together from existing commercially available hardware. It’s cheap, and anyone can potentially build one, but specialized, dedicated hardware could provide better performance. Brighter light sources are another option the authors mention. But perhaps the biggest potential comes from the fact that they haven’t built a detailed model of how the ultrasound interacts with the display dot. More detailed modeling “would allow better exploitation of the observed maximum speeds and accelerations, enabling larger and more complex visual content.”

So, there’s a bunch of pros and cons to this design, which is about where you’d expect things to be in an era that’s focused on trying out different tech rather than revising a tech standard. But the fact that there’s still enough space to try out an entirely different type of hardware to drive the display shows why this is still an exciting field.

Nature, 2019. DOI: 10.1038/s41586-019-1739-5 (About DOIs).

Listing image by Eimontas Jankauskis

                  
