The goal of this project is to build a character controller that doesn’t just replay pre-made animations, but actually learns how to move, the way a real person would. Most virtual characters rely on kinematic systems: animations are handcrafted and triggered through state machines. This works well in controlled environments, but falls short when characters need to adapt to the unexpected, such as uneven terrain or dynamic obstacles. That’s where physics-based controllers come in. Instead of playing back animations, they generate movement in real time through forces, torques, and joint constraints, allowing for more interactive and lifelike behavior.

Creating believable motion purely through physics is tough, though. You need a control policy that understands balance, momentum, recovery, and coordination, all without any of it being explicitly coded. My approach combines Nvidia IsaacLab’s reinforcement learning environment with Unreal Engine 5’s Motion Matching system. By using animation data as a guide, the neural network learns to mimic human movement while still operating entirely within a physics-driven simulation. This means bridging two very different worlds: UE5, where the motion data lives, and IsaacLab, where the learning happens. I am developing a system that extracts pose data from Unreal and injects it into the training loop in IsaacLab, effectively letting the neural network “watch” human motion and figure out how to reproduce it using only physics.

The end goal is a character controller that not only looks human but reacts like one: fluid, adaptive, and grounded in physical reality. Whether it’s a subtle shift in posture or an adjustment to a changing environment, the system learns to move naturally without being explicitly told how. It’s a project that touches on biomechanics, animation systems, machine learning, and simulation, and one that brings us closer to truly responsive, believable virtual humans.
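To make the imitation idea concrete, here is a minimal, framework-agnostic sketch of the kind of pose-tracking reward used in motion-imitation RL (in the spirit of DeepMimic). It is not the project’s actual reward: the inputs, weights, and kernel scales are placeholder assumptions, and in practice the values would come from IsaacLab’s simulation state and the pose data exported from UE5.

```python
# Minimal sketch of a DeepMimic-style pose-imitation reward (illustrative only).
# The arrays below are hypothetical inputs; a real pipeline would read them from
# the IsaacLab articulation state and the UE5 pose export.
import numpy as np

def imitation_reward(sim_joint_pos: np.ndarray,   # (J, 3) simulated joint positions
                     ref_joint_pos: np.ndarray,   # (J, 3) reference joint positions
                     sim_root_vel: np.ndarray,    # (3,)  simulated root velocity
                     ref_root_vel: np.ndarray,    # (3,)  reference root velocity
                     w_pose: float = 0.7,
                     w_vel: float = 0.3) -> float:
    """Reward is high when the physics character closely tracks the reference clip."""
    pose_err = np.sum(np.linalg.norm(sim_joint_pos - ref_joint_pos, axis=-1) ** 2)
    vel_err = np.sum((sim_root_vel - ref_root_vel) ** 2)
    # Exponential kernels keep each term bounded in (0, 1].
    r_pose = np.exp(-2.0 * pose_err)
    r_vel = np.exp(-0.1 * vel_err)
    return w_pose * r_pose + w_vel * r_vel
```

The exponential form rewards small tracking errors sharply while still giving a smooth gradient when the character drifts far from the reference, which is why it is a common choice in imitation-style rewards.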
This thesis was by far the most technically ambitious project I’ve worked on, and also one of the most rewarding. We set out to solve a classic challenge in virtual reality: giving users the freedom to walk naturally through a virtual world without walking into real-world walls. The problem is known as “Redirected Walking,” and while it has been tackled before, most existing approaches either fall apart in small spaces or don’t scale well once multiple users share the same tracking area.

To address this, we designed and implemented a new hybrid algorithm that combines Artificial Potential Fields with a Steer-to-Orbit mechanic. The result subtly nudges users onto circular paths, keeping them safely within a confined physical space even when they share it with someone else. To test it, we built a VR game in Unity where players explored a creepy, puzzle-filled maze, looking for a key while unknowingly being redirected in the real world. The experience was designed to be disorienting, in the best way: players took tight turns, encountered zombies, and often had to double back, all while physically walking in circles inside a 6 × 6 m room.

The results were genuinely exciting. Our algorithm helped reduce cybersickness and made users feel more comfortable moving around, especially in multi-user sessions. Players walked faster, completed tasks more confidently, and stayed more immersed in the virtual environment. Seeing people move naturally, and safely, through a shared VR space without ever realizing they were being redirected was incredibly satisfying. Personally, this project pushed me to grow in many areas: real-time sensor data processing, Unity development, multiplayer networking, and running proper user studies. Most of all, it deepened my passion for creating intuitive, human-centered VR experiences that feel seamless and immersive.
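To give a flavour of how such a hybrid can be wired together, here is an illustrative Python sketch, not the thesis implementation: the function names, gains, radii, and blend weight are invented placeholders. It combines a repulsive potential-field term (pushing the user away from walls and other users) with a steer-to-orbit term (pulling the user onto a circle), and the resulting direction would then drive the sign and strength of the redirection gains.

```python
# Illustrative hybrid of Artificial Potential Fields and Steer-to-Orbit
# for redirected walking. All parameters are made-up placeholders.
import numpy as np

def apf_repulsion(user_pos, obstacles, influence_radius=2.0):
    """Sum of repulsive vectors pushing the user away from nearby walls / other users."""
    force = np.zeros(2)
    for obs in obstacles:
        diff = user_pos - obs
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < influence_radius:
            # Classic 1/d falloff: the closer the obstacle, the stronger the push.
            force += (diff / dist) * (1.0 / dist - 1.0 / influence_radius)
    return force

def steer_to_orbit(user_pos, center, radius=2.0):
    """Direction that keeps the user circling `center` at roughly `radius`."""
    to_center = center - user_pos
    dist = max(np.linalg.norm(to_center), 1e-6)
    tangent = np.array([-to_center[1], to_center[0]]) / dist
    # Radial correction so the user converges onto the orbit instead of spiralling off.
    radial = (to_center / dist) * np.clip(dist - radius, -1.0, 1.0)
    return tangent + radial

def steering_direction(user_pos, center, obstacles, w_apf=0.6):
    """Hybrid target direction used to choose the sign/strength of rotation gains."""
    combined = w_apf * apf_repulsion(user_pos, obstacles) \
               + (1.0 - w_apf) * steer_to_orbit(user_pos, center)
    norm = np.linalg.norm(combined)
    return combined / norm if norm > 1e-6 else combined
```

In a real controller this direction would be recomputed every frame from tracked positions, and the applied rotation and curvature gains would be clamped to stay below perceptual detection thresholds.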
This project explored one of the most exciting frontiers in XR visualization: turning everyday video footage into immersive, interactive 3D scenes. At its heart was a technique called Gaussian Splatting, a neural rendering method that builds detailed 3D models by fitting Gaussian functions to the points of a structure-from-motion point cloud.

The process began with a simple video captured while walking around an object from different angles. That footage was broken into frames, which were used to reconstruct a 3D point cloud. From there, we trained a model that splatted Gaussians onto each point, turning raw pixels into a convincing volumetric scene. Even with just a subset of the frames, the results were surprisingly detailed. The catch? Training took hours and pushed our hardware to its limits, a clear reminder that high-fidelity rendering comes with serious computational demands.

Getting these models into Unity for real-time interaction introduced a new set of challenges. The splatting output wasn’t natively compatible with Unity’s XR pipeline, so we had to work around graphics API mismatches using community-built tools and plugins. Once in Unity, we integrated physics, adding colliders, rigidbodies, and material properties so the basketball hoop and ball behaved realistically in VR. The final result was a mini-game where users could physically interact with a Gaussian-splatted scene inside a virtual space: a rewarding blend of machine learning, graphics, and gameplay. The physics felt great, the visuals were sharp, and yet, somehow, I still couldn’t make a single shot. It turns out Gaussian Splatting can do a lot, but it can’t fix my jump shot.
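As an example of the first step in that pipeline, the sketch below samples frames from a walkaround video with OpenCV so they can be handed to a structure-from-motion tool (such as COLMAP) to build the point cloud the Gaussians are initialized from. The file paths, sampling stride, and function name are placeholders rather than the project’s actual tooling.

```python
# Sketch: sample every Nth frame of a walkaround video for SfM preprocessing.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_nth: int = 10) -> int:
    """Save every `every_nth` frame of the video as a JPEG and return the count."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % every_nth == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage: extract_frames("hoop_walkaround.mp4", "frames/", every_nth=10)
```

Sampling a subset of frames keeps the SfM reconstruction and the subsequent splat training tractable, which matters given how long the optimization already takes.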