

Scientists will help people with vocal cord disorders speak

Specialists from the Department of Bioengineering at the Samueli School of Engineering at the University of California, Los Angeles (UCLA) have developed a bioelectric system capable of detecting the movements of the muscles of the human larynx and translating those signals into audible speech, the science news website EurekAlert reported on March 15, citing a UCLA press release.

A soft, thin and elastic patch-shaped device, about 30 sq mm in area and 7.2 g in weight, which is placed on the skin of the neck just under the chin, will help people with vocal cord dysfunction speak.

In case of a pathological condition of the vocal cords or during the recovery period after laryngeal cancer surgery, people often find it difficult or even impossible to speak. A team of researchers in Jun Chen’s lab at UCLA may soon be able to solve this problem for patients. Using machine learning, they managed to get their device to reproduce speech based on the movement of the laryngeal muscles with an accuracy of almost 95%.

This is not the first advance Chen’s team has made in helping people with disabilities. Previously, they developed a glove-shaped device that translates American Sign Language into English in real time.

The new device, which looks like a small patch, consists of two components. The first is a sensitive, self-powered sensor that detects signals generated by muscle movements and converts them into highly accurate, analyzable electrical signals. The second is an actuation component that converts those speech signals into audible speech.

Each of these components contains two layers: a layer of polydimethylsiloxane (PDMS), an elastic and biocompatible silicone compound, and a layer of copper induction coils. The two components are separated by a fifth layer of PDMS mixed with micromagnets, which generates a magnetic field.

The new device’s magnetoelastic sensing mechanism detects changes in the magnetic field produced by mechanical forces, in this case the movement of the laryngeal muscles. The induction coils in the magnetoelastic layers generate high-precision electrical signals in response, which are then measured and analyzed by AI.
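The detection step rests on ordinary electromagnetic induction: as the micromagnet layer shifts with the muscle, the magnetic flux through the copper coils changes and induces a voltage. The toy calculation below only illustrates that relationship with made-up coil and flux values; it is not the actual signal chain of the UCLA device.

```python
# Toy illustration of the induction principle behind the sensor
# (Faraday's law): a changing magnetic flux through the coils induces
# a voltage. Coil count, flux profile, and sampling are assumed values,
# not parameters of the real device.
import numpy as np

N_TURNS = 100                              # assumed number of coil turns
t = np.linspace(0.0, 1.0, 1000)            # one second of samples
flux = 1e-6 * np.sin(2 * np.pi * 5 * t)    # assumed flux change as muscles move (Wb)

emf = -N_TURNS * np.gradient(flux, t)      # induced voltage: emf = -N * dPhi/dt
print(f"peak induced voltage: {np.abs(emf).max() * 1e3:.2f} mV")
```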

The developers presented a detailed description of the device in the article “Speaking without vocal folds using a machine-learning-assisted wearable sensing-actuation system”, published in the journal Nature Communications.

Research has shown that almost one in three people will lose their voice at least once during their lifetime. In severe cases, therapeutic methods such as surgery and voice therapy may require three months to a year to restore the voice. Additionally, after surgical or invasive interventions, a long period of mandatory vocal rest is often required.

“Current solutions, such as handheld electrolarynx devices and tracheoesophageal puncture procedures, can be inconvenient, invasive or uncomfortable,” Chen noted. “This new device provides a portable, non-invasive option that can help patients communicate before treatment and during recovery from treatment for voice disorders.”

To train the AI, the researchers tested their device on eight healthy adults, collecting data on the movement of the laryngeal muscles and using a machine learning algorithm to match the resulting signals to specific words.
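As a rough sketch of this kind of supervised pipeline, the snippet below maps windowed sensor signals to word labels with an off-the-shelf classifier; the features, windowing, classifier choice, and simulated data are assumptions for illustration, not the authors’ published method.

```python
# Minimal sketch of a signals-to-words classifier of the kind described
# above. Features, windowing, and the simulated data are illustrative
# assumptions, not the UCLA team's published pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one window of the sensor's voltage trace (hypothetical features)."""
    return np.array([
        window.mean(),                    # baseline offset
        window.std(),                     # overall signal energy
        np.abs(np.diff(window)).mean(),   # average slope, a rough proxy for muscle dynamics
    ])

# Stand-ins for real recordings: each window is one utterance attempt,
# labeled with the word (or sentence) the participant mouthed.
rng = np.random.default_rng(0)
windows = [rng.normal(size=500) for _ in range(200)]
labels = rng.integers(0, 5, size=200)     # e.g., five candidate sentences

X = np.stack([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```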

By selecting the appropriate speech output from the device’s actuation component, they demonstrated the accuracy of the system by asking participants to say five sentences, first out loud and then silently.

The model’s word-prediction accuracy was 94.68%. In addition, the device tracked the expression of silent speech and conveyed it by increasing or decreasing the volume of the voice signal. This made it possible to mark the end of a sentence, so the artificial speech did not sound monotonous.

The research team plans to expand the device’s vocabulary using machine learning and test it on people with speech problems.

Source: Rossa Primavera
