This page was created by Manish Tyagi.
Augmented Reality in Movement Learning
Participant - Kevin Inouye
Mentions
- https://scalar.case.edu/freedman-fellows/inouye-2020-2021
- https://scalar.case.edu/freedman-fellows/2021-2022-faculty-fellows
The overall project goal is simply a more specific version of what I set out to create with this past year’s Freedman Fellowship: a true first-person-perspective model, viewable in augmented reality via something like the Microsoft HoloLens, that can facilitate learning of basic movement patterns.

As a proof of concept, what I’d like to create is a scalable, semitransparent humanoid avatar, or perhaps something like a humanoid aura, which a user can follow by placing their own body within the boundaries of the model. It’s a corporeal fill-in-the-blanks, where the user could look around and make any necessary adjustments to foot position, elbow, torso, or any other body part. While currently envisioned as a way of learning something like tai chi or yoga positions, it has obvious potential in all sorts of sports physiology, coaching, and other physical learning.

Augmented reality tools will become more accessible to consumers in the years to come, and one big area of potential there is in how we and our real environments can co-exist with and learn from these digital elements. I suspect this approach will have advantages over the VR apps that currently exist on platforms like the Oculus Quest, and over traditional third-person video demonstrations. It’s not professional feedback, but the ability to visually compare your own position to the model’s is as close as we’re likely to get without putting the learner in a full motion-capture suit or arranging personal one-on-one coaching.

This past year of pandemic isolation has reinforced my belief in the need for activity-based apps and solo learning options.
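The core comparison the project describes — checking how far each body part sits from the avatar’s corresponding position — could be sketched roughly as follows. This is only an illustrative Python sketch, not project code: the joint names, coordinates, and the 10 cm tolerance are my own assumptions, and a real HoloLens app would pull joint positions from the device’s tracking rather than from hand-written tuples.

```python
# Hypothetical sketch of the "corporeal fill-in-the-blanks" check:
# given the avatar's reference pose and the user's tracked joint
# positions (metres, x/y/z), report which joints fall outside a
# tolerance so the user knows what to adjust.
from math import dist

def pose_deviations(model_pose, user_pose, tolerance=0.10):
    """Return joints farther than `tolerance` metres from the model,
    mapped to their distance (rounded to millimetres)."""
    return {
        joint: round(dist(model_pose[joint], user_pose[joint]), 3)
        for joint in model_pose
        if dist(model_pose[joint], user_pose[joint]) > tolerance
    }

# Illustrative stance: the user's left elbow has drifted ~20 cm low,
# while the right foot is within tolerance.
model = {"left_elbow": (0.3, 1.2, 0.1), "right_foot": (0.2, 0.0, 0.4)}
user = {"left_elbow": (0.3, 1.0, 0.1), "right_foot": (0.21, 0.0, 0.41)}

print(pose_deviations(model, user))  # only the elbow is flagged
```

In practice the flagged joints could drive the avatar’s visual feedback, for example tinting the out-of-place limb so the learner sees where to adjust.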