The third round of my Building Virtual Worlds class is called the "lightning round" because we only have one week to build an experience. In this round, my team chose to experiment with an eye tracker, and we wanted to create a multiplayer experience. Since Tobii eye trackers emulate mouse input, a computer can only have one attached at a time. That meant that to make a multiplayer game, we would either have to add networking or make the game asymmetric, with an alternate control scheme for the other players. In the end, we chose the asymmetric route.
Using the eye tracker was an interesting experience. We had to ensure that the speed of the game was balanced for the twitchy movements of players' eyes. We found that players had an easier time looking at regions of the screen close to the center, and that it was easier to look left and right than up and down. We could have placed content in hard-to-look-at spots, but that would have caused too much eye fatigue. We prioritized player comfort and scaled the map to fit within a comfortable viewing angle. From that starting point, we balanced the hider movement speed.
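For anyone attempting something similar: since the tracker shows up as the mouse, one common way to tame that twitchiness is to low-pass filter the cursor position before gameplay code reads it. This is a minimal sketch of the idea, not our actual tuning:

```csharp
using UnityEngine;

// Minimal sketch: low-pass filter the emulated mouse position so twitchy
// saccades don't translate directly into gameplay. The smoothing constant
// here is illustrative, not a tuned value from our game.
public class GazeSmoother : MonoBehaviour
{
    public float smoothing = 10f;           // higher = snappier, lower = steadier
    public Vector2 SmoothedGaze { get; private set; }

    void Start()
    {
        SmoothedGaze = Input.mousePosition; // the eye tracker drives the cursor
    }

    void Update()
    {
        // Frame-rate-independent exponential ease toward the raw gaze point.
        Vector2 raw = Input.mousePosition;
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        SmoothedGaze = Vector2.Lerp(SmoothedGaze, raw, t);
    }
}
```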
One balancing difficulty we ran into was that some players were better with the eye tracker than others; it's a learned skill. Since I spent a lot of time using the eye tracker during testing, I could almost always stop all the hiders from getting out. New players, however, struggled. When we presented this at Festival, some players had already played other eye tracker games, which made it even harder to create an experience that felt fair for everyone. In my opinion, we never reached a point that was balanced for players of all skill levels. We worked around this a little by giving the "win" display more nuance than just "every hider escaped" or "every hider was captured."
One interesting technical challenge was that the hiders and the seeker needed to see different things. We wanted the hiders looking at the same screen so they could work together and communicate, but we needed the seeker looking at a different screen. Handling multiple displays in Unity wasn't too difficult: we rendered the same scene with a different camera for each view. However, we didn't just want different perspectives, we wanted different lighting. You can assign objects to layers that certain cameras cull, but lights don't work the same way: a light's culling mask controls which objects it illuminates, not which cameras see it, so it affects every camera rendering the scene. My solution was to hook into the pre- and post-render callbacks for each camera and selectively disable and re-enable lights as needed.
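In the built-in render pipeline, `Camera.onPreRender` and `Camera.onPostRender` are static callbacks fired before and after each camera renders, which is exactly the hook this needs. Here's a minimal sketch of the approach; the field names (`seekerCamera`, `hiderOnlyLights`) are illustrative, not our actual code:

```csharp
using UnityEngine;

// Minimal sketch of per-camera lighting in the built-in render pipeline.
public class PerCameraLights : MonoBehaviour
{
    public Camera seekerCamera;      // the camera that should NOT see these lights
    public Light[] hiderOnlyLights;  // lights meant only for the hider view

    void OnEnable()
    {
        // Static callbacks fired before/after *each* camera renders.
        Camera.onPreRender += HandlePreRender;
        Camera.onPostRender += HandlePostRender;
    }

    void OnDisable()
    {
        Camera.onPreRender -= HandlePreRender;
        Camera.onPostRender -= HandlePostRender;
    }

    void HandlePreRender(Camera cam)
    {
        if (cam == seekerCamera)
            SetLights(false);   // hide the hider-only lights from the seeker
    }

    void HandlePostRender(Camera cam)
    {
        if (cam == seekerCamera)
            SetLights(true);    // restore them for every other camera
    }

    void SetLights(bool on)
    {
        foreach (Light lightSource in hiderOnlyLights)
            lightSource.enabled = on;
    }
}
```

Routing each view to its own monitor is the easy part by comparison: activate the second monitor with `Display.displays[1].Activate()` and set each camera's `targetDisplay` accordingly.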
Another interesting challenge was making the input system work with multiple players and multiple characters. I made the whole game adapt to the number of players, so we could test with just one hider and run the game with one, two, or three. That meant spawning new characters at their correct spawn locations as more controllers were connected.
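Something like the following sketch captures the idea, using the legacy Input API to poll for connected controllers; `hiderPrefab` and `spawnPoints` are placeholder names rather than our actual code:

```csharp
using UnityEngine;

// Illustrative sketch of controller-count-driven spawning.
public class HiderSpawner : MonoBehaviour
{
    public GameObject hiderPrefab;   // the hider character
    public Transform[] spawnPoints;  // one spawn location per hider slot
    int spawned = 0;

    void Update()
    {
        // GetJoystickNames returns one entry per controller slot;
        // disconnected slots show up as empty strings.
        int connected = 0;
        foreach (string name in Input.GetJoystickNames())
            if (!string.IsNullOrEmpty(name))
                connected++;

        // Spawn a hider for each newly connected controller, capped
        // by the number of available spawn points.
        while (spawned < connected && spawned < spawnPoints.Length)
        {
            Transform point = spawnPoints[spawned];
            Instantiate(hiderPrefab, point.position, point.rotation);
            spawned++;
        }
    }
}
```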