Why is position sensing way harder on the human body than on robots?
I keep seeing people talk about “just add a position sensor” in wearables,
and I feel like this massively underestimates the problem.
On robots or machines, position is clean:
- rigid links
- fixed reference frames
- predictable motion
On humans?
- soft tissue
- skin sliding
- sweat
- posture changes
- clothing shifting
Even a “simple” joint angle turns into a mess once it’s worn for a few hours.
So honest question:
-> Are we trying too hard to measure exact position instead of movement patterns?
Curious how people here think about this — especially anyone who’s tried to build this IRL.
Totally get where you’re coming from — and I think you’re putting your finger on the core mismatch between how sensors are designed and how bodies actually behave.
A lot of “just add a position sensor” thinking comes straight from robotics intuition. In that world, position means something stable. Links are rigid, joints are well-defined, reference frames don’t drift, and the system doesn’t decide to sweat, slouch, or gain 2 kg over lunch. Humans break every one of those assumptions. The moment a sensor sits on skin instead of a rigid frame, “position” stops being a clean variable and turns into a proxy for a bunch of messy biological processes layered on top of each other.
Skin slip alone kills most naïve position models. Add sweat, micro posture changes, muscle bulging, clothing drift, and time-on-body effects, and suddenly your “joint angle” is an inference problem, not a measurement. This is why lab demos look great for 10 minutes and then quietly fall apart after a few hours of real wear. The signal didn’t get worse — the mapping did.
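For anyone who hasn't run into this yet, here's roughly what "inference, not measurement" looks like in the simplest possible form: a toy complementary filter that blends short-term gyro integration with the accelerometer's gravity-based angle, because neither alone survives bias plus skin motion. The constants and variable names here are mine, not from any real product:

```python
import numpy as np

def complementary_angle(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Toy joint-angle estimator: blend integrated gyro rate (deg/s)
    with an accelerometer-derived angle (deg). Illustrative only --
    a real wearable still needs calibration, soft-tissue handling, etc."""
    angle = accel_angle[0]          # start from the accel estimate
    estimates = []
    for rate, acc in zip(gyro_rate, accel_angle):
        # gyro integration is smooth but drifts; accel is noisy but unbiased
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        estimates.append(angle)
    return np.array(estimates)

# synthetic 10 s of data: true angle oscillates, gyro has a small bias,
# accel angle is noisy -- the usual trade-off
t = np.arange(0, 10, 0.01)
true_angle = 30 * np.sin(2 * np.pi * 0.5 * t)
gyro = np.gradient(true_angle, 0.01) + 2.0            # +2 deg/s bias
accel = true_angle + np.random.normal(0, 5, t.size)   # noisy but centered

est = complementary_angle(gyro, accel)
print(f"mean abs error: {np.abs(est - true_angle).mean():.1f} deg")
```

Even this toy version makes the point: the number you report as "the angle" is already the output of a model with tuning knobs, not a reading.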
That’s why I agree with your question: in many cases, movement patterns are the more honest target. Humans are remarkably consistent in how they move, even when absolute positions are noisy. Gait cycles, repetition timing, coordination between segments, transitions between states — these survive sensor drift far better than absolute pose. A slightly wrong angle every time is often more useful than a “precise” angle that changes its meaning over time.
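A quick way to convince yourself of this (purely a toy sketch, assumes scipy, and all the numbers are made up): add a slip offset and slow drift to a periodic "angle" signal. The absolute values become meaningless, but the cycle timing you get from peak-to-peak intervals barely moves:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                                              # Hz
t = np.arange(0, 30, 1 / fs)
true_angle = 40 * np.sin(2 * np.pi * 0.8 * t)         # ~0.8 Hz "gait" cycle

# worn-for-hours version: offset from sensor slip + slow drift + noise
worn = true_angle + 12 + 0.3 * t + np.random.normal(0, 2, t.size)

def cycle_times(signal):
    peaks, _ = find_peaks(signal, distance=fs // 2)   # roughly one peak per cycle
    return np.diff(peaks) / fs                        # seconds between peaks

print("true cycle time: ", np.mean(cycle_times(true_angle)))
print("worn cycle time: ", np.mean(cycle_times(worn)))
print("absolute angle error:", np.mean(np.abs(worn - true_angle)), "deg")
```

The timing feature comes out nearly identical while the "angle" is off by tens of degrees, which is exactly the kind of invariance worth building on.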
You can see this shift already in products that actually ship. They care less about “your knee was at 37.2°” and more about “this movement deviates from your baseline” or “this pattern correlates with fatigue / instability / injury risk.” That’s not a failure of sensing — it’s a reframing of what the sensor is for. Position becomes an intermediate feature, not the output.
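In code, that reframing can be as simple as z-scoring today's movement features against the wearer's own baseline. Minimal sketch, and to be clear the features, numbers, and 2-sigma threshold here are placeholders I invented, not anyone's shipping logic:

```python
import numpy as np

def deviation_score(baseline_features, today_features):
    """Compare today's movement features (e.g. cycle time, range of motion,
    left/right asymmetry) against the wearer's own baseline distribution.
    Returns per-feature z-scores; large values mean 'something changed'."""
    mu = baseline_features.mean(axis=0)
    sigma = baseline_features.std(axis=0) + 1e-9      # avoid divide-by-zero
    return (today_features - mu) / sigma

# baseline: 14 days x 3 features (placeholder numbers)
rng = np.random.default_rng(0)
baseline = rng.normal([1.25, 58.0, 0.04], [0.05, 2.0, 0.01], size=(14, 3))
today = np.array([1.31, 52.0, 0.09])   # slower, stiffer, more asymmetric

z = deviation_score(baseline, today)
flags = np.abs(z) > 2.0                # crude "deviates from baseline" threshold
print("z-scores:", np.round(z, 2), "flagged:", flags)
```

Note that nothing in there cares whether the underlying angle estimates were absolutely correct, only that they're measured the same way every day.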
So yeah, I think we are trying too hard to extract exact position from systems that fundamentally don’t support it. The interesting work isn’t in squeezing another decimal place out of joint angles — it’s in deciding which aspects of motion are invariant enough to be worth modeling in the first place. Anyone who’s built this IRL learns that lesson pretty fast, usually the hard way.
This really clicked for me. Curious though — if exact position is kind of a dead end on the body, where do you personally draw the line? Like, do you still think position sensors are useful at all, or should we just stop pretending we can get “joint angles” from wearables and move on entirely?
I wouldn’t throw them out completely. I think the mistake is treating position as truth instead of context. Position signals are still useful as inputs — they just shouldn’t be the final thing we present or optimize for. The moment you accept that “this angle is an estimate that will drift,” you stop fighting physics and start designing around it. In practice, position works best when it’s anchored to patterns, baselines, or short time windows, not absolute values you expect to hold all day.
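Concretely, "anchored to short time windows" can mean re-zeroing the estimate whenever you detect a known reference event (standing still, the start of a rep), so drift never accumulates past a single window. Rough sketch under my own assumptions; the still-detection threshold is deliberately crude:

```python
import numpy as np

def windowed_angle(gyro_rate, dt=0.01, still_thresh=1.0):
    """Integrate gyro rate (deg/s) into an angle, but reset the integral
    whenever the limb looks still (|rate| below still_thresh).
    Within each window the angle is usable; across hours it never drifts.
    A real system would detect stillness over a short variance window."""
    angle = 0.0
    out = []
    for rate in gyro_rate:
        if abs(rate) < still_thresh:    # quasi-static: re-anchor to zero
            angle = 0.0
        else:
            angle += rate * dt
        out.append(angle)
    return np.array(out)

# example: bursts of movement separated by stillness, plus a gyro bias
t = np.arange(0, 20, 0.01)
rate = np.where((t % 5) < 2, 30 * np.sin(2 * np.pi * t), 0.0) + 0.5  # 0.5 deg/s bias
angles = windowed_angle(rate)
naive = np.cumsum(rate) * 0.01          # integrate-forever baseline

print("naive integration drift:", round(naive[-1], 1), "deg")
print("windowed estimate at end:", round(angles[-1], 1), "deg")
```

Same sensor, same bias; the only difference is giving up on the angle meaning anything across windows.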
That makes sense. Do you think this is more a hardware limitation or a mindset problem? Like, if sensors get better, does this go away — or is the human body just fundamentally the wrong interface for precise position?
Honestly, mostly a mindset problem. Better sensors help at the margins, but they don’t fix skin being skin. Even perfect sensors still sit on a deforming, moving surface. I think the real unlock is accepting that wearables are closer to behavioral sensing than mechanical measurement. Once teams design for “detect change, trend, deviation” instead of “reconstruct geometry,” things suddenly start working — and shipping.