Neosensory Buzz: The Principles of Sensory Substitution Technology that Translates Sound into Vibration
The Neosensory Buzz wristband converts sound into vibration patterns, delivering rhythm, beat, and voice cues on the wrist.
The core consists of hardware (sensors + actuators), real-time signal processing (filters and mapping), and custom software (modes and learning).

1. Hardware Configuration — Sensors and Actuators
- Microphone (Input): A microphone designed to respond to a wide range of environmental sounds captures ambient audio, which an A/D converter digitizes. The digitized signal is then streamed in real time to a DSP/microcontroller.
- Vibration Actuator (Output): Four small, independently controllable vibration motors are typically placed at different points around the band. Patents, papers, and product descriptions state that Linear Resonant Actuators (LRAs) or similar high-performance haptic actuators are used. Each actuator's amplitude (intensity), timing, and pattern can be controlled independently.
2. Signal Processing — How Sound is Converted into Vibration
The overall flow is (microphone) → (preprocessing) → (feature extraction) → (mapping algorithm) → (actuator control).
1) Preprocessing
- After the ADC, the speech/audio signal is processed on a frame-by-frame basis (e.g., 10–50 ms).
- Basic noise suppression, automatic gain control (AGC), and simple voice activity detection (VAD) are first applied to highlight only the “relevant sounds.”
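As a rough illustration of this preprocessing stage, the sketch below frames an audio stream, applies a simple gain control, and keeps only frames that pass an energy-based activity check. The sample rate, frame length, thresholds, and the energy-based VAD are all assumptions made for illustration; Neosensory's actual firmware is not public.

```python
import numpy as np

FRAME_MS = 20          # frame length, chosen within the 10-50 ms range mentioned above
SAMPLE_RATE = 16000    # assumed sample rate; the device's real rate is not public
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000

def agc(frame, target_rms=0.1, eps=1e-8):
    """Scale the frame so its RMS approaches a fixed target (simple AGC)."""
    rms = np.sqrt(np.mean(frame ** 2)) + eps
    gain = min(target_rms / rms, 10.0)   # cap the gain so silence is not blown up
    return frame * gain

def is_active(frame, energy_threshold=1e-4):
    """Crude energy-based VAD: keep the frame only if it is loud enough to matter."""
    return np.mean(frame ** 2) > energy_threshold

def preprocess(samples):
    """Split the stream into frames, apply AGC, and yield only the 'relevant' frames."""
    n_frames = len(samples) // FRAME_LEN
    for i in range(n_frames):
        frame = samples[i * FRAME_LEN:(i + 1) * FRAME_LEN]
        frame = agc(frame)
        if is_active(frame):
            yield frame
```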
2) Frequency Decomposition & Feature Extraction
- Frequency components are extracted using methods such as the short-time Fourier transform (STFT) or Mel-filter bank.
- Pitch (frequency), loudness (sound pressure), timing (rhythm/beat), and timbre characteristics are calculated in real time. According to papers and reviews, Buzz uses frequency-to-space mapping, vibrating specific actuators in response to specific frequency bands (and their loudness).
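A minimal sketch of this feature-extraction step, reducing each frame to a handful of band energies via a windowed FFT. The band edges below are illustrative placeholders, not Neosensory's published values; a real implementation might use a mel filterbank with many more bands before downmixing.

```python
import numpy as np

BAND_EDGES_HZ = [0, 400, 1000, 2500, 8000]   # illustrative band edges, one band per actuator

def band_energies(frame, sample_rate=16000):
    """Return the energy in each frequency band of one windowed frame (STFT bin sums)."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    energies = []
    for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(spectrum[mask].sum())
    return np.array(energies)
```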
3) Mapping Algorithm (Frequency/Intensity → Location/Intensity/Pattern)
- Spatial Mapping: Frequency (pitch) is mapped to different actuator locations. For example, high-frequency components are assigned to one actuator on the wrist, while low-frequency components are assigned to another. This allows the user to infer the tonal character (frequency content) of a sound from where the vibration originates.
- Intensity/Envelope Mapping: The intensity (volume) of the sound is converted to vibration amplitude (strength) or pulse density, so that stronger sounds are represented by stronger (or denser) vibrations.
- Temporal/Rhythmic Mapping: In music mode, the original beat timing is preserved, and the actuators fire as pulses to create a sense of rhythm (e.g., regular pulses in time with the kick drum in a 4/4 track).
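The sketch below shows one plausible way to combine the spatial and intensity mappings above: one frequency band per actuator, with log-compressed band energy driving the vibration amplitude. The reference level, dynamic range, and 8-bit drive resolution are assumptions for illustration, not device specifications.

```python
import numpy as np

NUM_ACTUATORS = 4
MAX_AMPLITUDE = 255   # assumed 8-bit drive level per motor; actual resolution is not public

def map_to_actuators(energies, ref_energy=1.0, floor_db=-60.0):
    """Map one band energy per actuator to a drive amplitude using log compression."""
    assert len(energies) == NUM_ACTUATORS
    db = 10.0 * np.log10(np.maximum(energies / ref_energy, 1e-12))
    db = np.clip(db, floor_db, 0.0)             # clamp to a usable dynamic range
    level = (db - floor_db) / (-floor_db)       # 0.0 (quiet) .. 1.0 (at/above reference)
    return (level * MAX_AMPLITUDE).astype(int)  # one amplitude per wrist location
```

Chained with the earlier sketches, one frame flows as `map_to_actuators(band_energies(frame))`, yielding four drive amplitudes that are refreshed every frame.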
4) Pattern Diversity
- Product/promotional materials claim tens of thousands (e.g., 29,000+) of unique haptic patterns can be generated—meaning numerous discrete patterns can be created by combining frequency resolution, amplitude level, and timing variations.
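As a back-of-envelope check of that order of magnitude (the per-actuator level count here is an assumption, not a published figure): if each of the four actuators could be driven at about 13 distinguishable amplitude levels within a frame, the number of distinct single-frame patterns would be 13^4 = 28,561, roughly the 29,000+ quoted; varying timing across successive frames multiplies the count further.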
3. Software/Modes — User Experience and Adaptation
- Mode differentiation: Multiple modes, such as Everyday, Music, and Sleep (sleep/alarm filtering), allow for adaptive signal preprocessing and mapping rules (e.g., Music mode optimizes melody and beat expression).
- App Integration & User Tuning: The smartphone app allows for settings such as sensitivity, frequency response, and notification filters (e.g., only vibrating for fire alarms and doorbells).
- Machine Learning (ML) Use Cases: Later products (e.g., Clarify) use ML models to perform speech-to-haptic augmentation by detecting specific speech elements (e.g., high-frequency sounds like “s” and “z”) and applying vibrations tailored to those sounds. Through repeated wear, the user’s brain “learns” to interpret this new tactile input (principle of neuroplasticity).
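A compact sketch of how the app-level notification filter and a phoneme-sensitive boost might sit on top of the mapping stage. The event names, settings schema, and the 4 kHz energy-ratio heuristic standing in for Clarify's ML detector are all illustrative assumptions; the actual app schema and models are proprietary.

```python
import numpy as np

# Illustrative app-level notification filter; not the actual Neosensory app schema.
ALERT_FILTERS = {
    "fire_alarm": True,   # always vibrate for fire/smoke alarms
    "doorbell": True,     # vibrate for doorbells
    "traffic": False,     # ignore road noise in this profile
}

def should_vibrate(event):
    """Vibrate only for event types the user has enabled in the app."""
    return ALERT_FILTERS.get(event, False)

def fricative_boost(frame, amplitudes, sample_rate=16000, ratio=0.6, boost=1.5):
    """Stand-in for an ML phoneme detector: if most of the frame's energy sits
    above ~4 kHz (as in 's' or 'z'), boost the actuator assigned to the top band."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    high_energy = spectrum[freqs >= 4000].sum()
    if high_energy / (spectrum.sum() + 1e-12) > ratio:
        amplitudes = amplitudes.copy()
        amplitudes[-1] = min(int(amplitudes[-1] * boost), 255)
    return amplitudes
```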
4. Operating Characteristics — Response Time, Latency, and Constraints
- Real-time: To be cognitively useful, the overall latency, from A/D conversion through filtering and mapping to actuator activation, must be low (typically within tens of milliseconds). Product design targets this range; the exact figure depends on internal company data and firmware, and a rough illustrative budget follows this list.
- Resolution Constraints: The tactile resolution of the skin on the wrist is far too coarse to reproduce the fine frequency decomposition the ear performs in the cochlea. Therefore, in practice, the frequency range is compressed into a small number of channels (here, four), and the user "learns" to interpret the patterns through long-term wear.
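To make the latency point above concrete with purely illustrative numbers (none are published figures): a 20 ms analysis frame, plus a few milliseconds of DSP work, plus roughly 10–20 ms of LRA rise time adds up to roughly 35–45 ms end to end, consistent with the "tens of milliseconds" target.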
5. Principle Background — Sensory Substitution and Neuroplasticity
- Sensory Substitution Theory: Because the brain can learn the same information through different sensory inputs (e.g., sound information provided through touch), repeated experiences can internalize the mapping of “vibration pattern → sound meaning.” Neosensory’s research and clinical data also report cases where users were able to distinguish sound types or voice cues through vibration after a certain period of wear.
6. Practical Application Examples (Summary)
- Music Listening: Provides a “bodily listening” experience by directly feeling the beat and rhythm through wrist vibration. Music mode emphasizes beat synchronization.
- Hearing Assistance (Environmental Awareness/Hearing Aid Complement): Detects environmental sounds such as doorbells, sirens, and speech and provides vibration alerts to assist users with hearing loss in situational awareness.
- Tactile Hearing Aid: Detects specific phonemes and emphasizes them with vibrations to assist in communication situations (ML applications in products such as Clarify).
7. Limitations and Improvement Points
- Resolution Limitations: It’s difficult to convey all the details of voice with just four wrist channels, and interpretation relies heavily on user learning.
- Individual Differences: Effectiveness varies depending on skin sensitivity, wearing position and pressure, and subjective interpretation ability.
- Battery and Sustainability: Real-time audio processing and continuous vibration require design considerations in terms of power consumption (low-power DSP and efficient vibration patterns are crucial).
References
https://www.sciencedirect.com/science/article/abs/pii/S0306452221000129
https://sharphearingcenter.net/new-hearing-technology-converts-sound-into-haptic-feedback
https://eagleman.com/latest/neosensory-launches-clarify-a-wrist-worn-hearing-aid-alternative
This is really interesting. The Neosensory Buzz converts sound into vibration, but how is it different from a simple haptic notification band?
If it’s just “sound → vibration,” isn’t it the same as a vibration notification?
Good question. Typical haptic bands only vibrate to provide a “signal notification.”
But the Buzz is much more sophisticated—it analyzes the frequency, intensity, and rhythmic patterns of sound, then uses four vibration motors on the wrist to create different sensory patterns.
So, it’s not just a simple vibration, it’s more like creating a “sensory language.”
So, if you actually wear it, people will be able to “feel” sound? Like a hearing aid?
It’s more like “perceiving the presence of sound through a new sensory pathway.” It doesn’t restore hearing, but through repetitive pattern learning, the brain learns to recognize the vibration as something like, “This is the sound of a door closing.”
In other words, Buzz doesn’t replace hearing, but rather remaps it—think of it as a sensory translation.
So, does that require training? It’s not something you can feel immediately after putting it on, but rather, you have to learn the patterns, right?
Exactly. At first, you simply feel the vibrations.
But if you wear it consistently for a few days or weeks, your brain will gradually begin to associate specific vibration patterns with specific sounds.
This is the neuroplasticity training that Buzz is designed for.
Ultimately, you’re relearning how to “feel,” not how to “hear.”
So, could this technology eventually be extended beyond the hearing impaired to non-hearing users?
For example, for music appreciation or VR experiences?
It’s certainly possible. Some researchers are already experimenting with using Buzz as a supplementary feedback channel for music.
For example, you could sense the rhythm of low notes or the emphasis of a melody through vibration patterns on your wrist. In the future, it could serve as the foundation for creating multisensory experiences that integrate vision, hearing, and touch in VR/AR environments.
In other words, Buzz has the potential to evolve beyond a simple assistive device into a sensory augmentation technology.

