New spray-on smart skin uses AI to interpret commands through hand movements and gestures

Spray-on smart skin. Copyright Stanford University, KAIST, Seoul National University
By Roselyne Min

A newly developed electronic skin allows users to type without a keyboard and lets people use sign language in the metaverse.


Jeff Bezos, the founder of Amazon, once said of the transformative power of touch screen technology that "it's almost like the device disappears and it becomes just this natural extension of your hand and your finger".

But touch screens may soon be a thing of the past, as a newly developed bioprinted smart skin could one day allow people to communicate with just gestures, type on invisible keyboards, and identify objects simply by touching them.

The electronic skin - thought to be the first to use artificial intelligence (AI) - was invented by a group of researchers from Stanford University, KAIST, and Seoul National University.

Electronic skins are usually attached to joints to track a user's movements. Until now, scientists have struggled to make such devices flexible and stretchable enough to be worn comfortably.

But the American-Korean team behind the joint research project has devised a novel manufacturing method in which a mesh of electronic circuits is printed directly on the hand by spraying a conductive liquid onto the skin.

The nanomesh is made of threads with a thickness of a nanometre (nm).

Applications in telemedicine and gaming

When the conductive mesh on the hand stretches according to the user's movement, an electrical signal is generated and transmitted wirelessly via Bluetooth.

Using AI, the system learns the user's hand movements and, once the user repeats the same motion a few times, can perform various tasks in a virtual space.

The research team applied virtual reality (VR) platform technology to the skin and tested typing letters into a computer using hand movements alone, without a keyboard. The system could also draw the shape of an object on screen when the user touched it.

The technology could also let people with language disorders use sign language in virtual spaces, and researchers say it could eventually find wider use in fields such as telemedicine, gaming, and robotics.

The research was published in the Nature Electronics journal in December.

For more on this story, watch the video in the media player above.

Video editor • Roselyne Min
