Wearable_Insight_Forum

 


What are the most commonly used models in the wearables industry?

6 Posts
2 Users
0 Reactions
17 Views
(@hannah)
Posts: 80
Trusted Member
Topic starter
 

I’d like to know what the most commonly used models are in the wearables industry.

– Classic ML (Random Forest, SVM)?

– DL (LSTM/GRU, 1D CNN)?

– Do they also use TinyML + Transformer these days?


 
Posted : 05/12/2025 3:14 pm
(@david-mun)
Posts: 30
Eminent Member
 

In the real world, RF/GBDT plus feature engineering is still the most common approach.
Deep learning is "good to have, but expensive," and Transformer/TinyML are still experimental.

Based on my experience, this is roughly what it looks like.

[Classic ML (Random Forest / XGBoost / SVM)]
Still the industry mainstream.
Most commonly seen in commercial wearables.
Reasons:
– Interpretable + Easy to debug
– Robust even on small datasets
– Easy to run on MCUs/low-power environments

Characteristics:
– 1–3 second sliding window
– Handcrafted features like mean, std, energy, and FFT peak
This approach is sufficient for most “walking/running/cycling” scenarios.
If you want “fast release + stability,” this is often the answer.
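The windowing-plus-handcrafted-features pipeline above can be sketched in a few lines of numpy. This is illustrative only: the 50 Hz sampling rate, 2-second window, and 50% overlap are assumptions for the example, not values from any specific product.

```python
import numpy as np

def extract_features(signal, fs=50, window_s=2.0, overlap=0.5):
    """Slide a window over a single-axis accelerometer stream and compute
    the classic handcrafted features: mean, std, energy, FFT peak frequency.
    fs / window_s / overlap are illustrative assumptions."""
    win = int(fs * window_s)
    step = int(win * (1 - overlap))
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        spectrum = np.abs(np.fft.rfft(w - w.mean()))   # drop DC before FFT
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        feats.append([
            w.mean(),                    # DC level
            w.std(),                     # variability
            np.sum(w ** 2) / win,        # mean signal energy
            freqs[np.argmax(spectrum)],  # dominant frequency (FFT peak)
        ])
    return np.array(feats)
```

Each row of the returned array is one window's feature vector, which would then go straight into an RF/GBDT classifier. Feeding in a 2 Hz sine sampled at 50 Hz, the dominant-frequency feature comes out at 2.0 Hz, which is the sanity check I'd run first.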

[Deep Learning (LSTM / GRU / 1D CNN)]
→ Mostly research; occasionally ships in flagship products.

Good when you want to feed raw sensor data directly.
1D CNN is used more often than you might think (lighter than LSTM).
Cons:
– Requires a lot of data.
– Battery/inference cost issues.
– Edge debugging hell.
Usually:
– Runs on the phone or
– Sends to a server for processing.
DL is clearly stronger in “micro-movement/complex gestures.”
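To make "feed raw sensor data directly" concrete, here is the core operation a 1D CNN layer performs on a raw window: slide a learned kernel along the time axis, then apply a nonlinearity. A minimal numpy sketch (single channel, "valid" padding, and, as in most DL frameworks, actually cross-correlation):

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Single-channel 'valid' 1D convolution (cross-correlation, as in
    most DL frameworks) followed by ReLU -- the core op of a 1D CNN."""
    k = len(kernel)
    out = np.array([np.dot(x[i:i + k], kernel)
                    for i in range(0, len(x) - k + 1, stride)])
    return np.maximum(out, 0.0)  # ReLU activation
```

With a difference kernel `[-1, 1]`, a step in the signal `[0, 0, 0, 1, 1, 1]` produces a single spike at the transition, which is exactly the kind of local pattern (a sharp wrist motion, say) that trained kernels end up detecting. This locality is also why 1D CNNs stay lighter than LSTMs: each output needs only a small dot product, not a recurrent state update.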

[TinyML + Transformer]
→ Honestly, it’s still in the early adopter realm.
There are quite a few papers, PoCs, and demos.
Rarely seen in actual products.
Reason:
– High memory and power overhead.
– The accuracy gains over RF are often not perceptible to end users.
Alternatives:
– Simplified attention.
– Hybrid architecture (CNN + attention): This is currently being tested.
It’s often used as a marketing point to claim “we’re leading the way in technology.”
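For reference, the "simplified attention" being bolted onto CNNs above is usually some cut-down version of scaled dot-product attention. A bare numpy sketch of the full single-head version (shapes `(seq, d)`; real TinyML variants prune this further, e.g. fewer heads or linearized softmax):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: each output row is a softmax-weighted
    mix of the value rows. Sketch only -- no projections, masking, or heads."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # rows sum to 1
    return w @ v
```

The memory/power complaint follows directly from the `q @ k.T` term: it is quadratic in sequence length, which is exactly what hurts on an MCU with long sensor windows.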

[Summary]
Released Products: RF/GBDT + Feature Engineering
Advanced Recognition: 1D CNN, Occasionally LSTM
Experimental: TinyML Transformer

Honestly,
sensor location + data quality + labeling are more important than the model itself.
The rule of thumb is to check these before changing the model.


 
Posted : 06/12/2025 1:50 am
(@hannah)
Posts: 80
Trusted Member
Topic starter
 

I enjoyed reading the post. I have a question.
You said RF and GBDT are still the mainstays, right?
So, are teams using deep learning primarily for specific use cases like “complex gestures”?
Or is there simply a significant difference between research and product teams?


 
Posted : 06/12/2025 1:54 am
(@david-mun)
Posts: 30
Eminent Member
 

Yes, it’s almost exactly both, lol.
Product teams often have to explain why the model decided something a certain way, so RF is much easier to defend.
On the other hand, for problems like wrist gestures, micro-motions, and those with significant user-to-user variability,
they often start tuning with classic ML and then eventually move to DL.
Research teams like DL, but as the release date approaches, they return to RF… I’ve seen this pattern quite often.


 
Posted : 06/12/2025 1:54 am
(@hannah)
Posts: 80
Trusted Member
Topic starter
 

Ah… now it makes sense, lol.
So, TinyML + Transformer are really rare in commercial applications?
It feels like there are a lot of papers, but they’re still a long way from real-world applications?


 
Posted : 06/12/2025 1:55 am
(@david-mun)
Posts: 30
Eminent Member
 

Exactly. There are plenty of proof-of-concepts (PoCs) that technically work, but they rarely make it into product lines due to battery life and stability issues.

Instead, we’re starting to see a mix of 1D CNNs and very thin attention.

Personally, I believe that creating data and labels properly is far more important than being ambitious about the model.


 
Posted : 06/12/2025 1:55 am