Hybrid Hands – Conclusion

Well…my project is done.

As I write this, it is 4 am. I am finishing up some final tweaks to the project and cleaning it up to submit in a few hours.

This project was the first time I had free rein to design and create whatever I truly wanted in VR. The creative freedom was daunting at first, but over time I came to realize how exhilarating it was to create my own concept and watch it come to life over the months.

So, did I make hands into a game mechanic? Yes, I think I did.

Doing some research, I couldn’t find any other VR title with a feature similar to mine, which I am very happy about. I was able to create a way for players to take their hands – the core way they are represented in VR – and change them to suit their own needs.

I will admit the system ended up being a bit more limited than I had hoped. Initially, I wanted the player to be able to remove and reattach their fingers whenever they wanted, without needing to hit a reset button. That said, as I type this, I can already think of OnCollision functions that could possibly allow this effect, so for now I will chalk it up to a lack of experience and time.
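To illustrate what I mean, here is a minimal, untested sketch of the kind of OnCollision approach I have in mind – the FingerSocket tag, the socketPoint field, and the snap behaviour are all hypothetical, not code from the project:

```csharp
using UnityEngine;

// Hypothetical sketch: lets a detached finger snap back onto the hand
// when it touches its socket, instead of requiring a scene reset.
public class FingerReattach : MonoBehaviour
{
    [SerializeField] private Transform socketPoint; // where the finger should snap back to

    private void OnCollisionEnter(Collision collision)
    {
        // Only react when the loose finger touches the hand's socket area.
        if (!collision.gameObject.CompareTag("FingerSocket")) return;

        // Snap the finger back into place and re-parent it to the hand.
        transform.SetPositionAndRotation(socketPoint.position, socketPoint.rotation);
        transform.SetParent(socketPoint);

        // Freeze its physics so it behaves like part of the hand again.
        if (TryGetComponent(out Rigidbody body))
            body.isKinematic = true;
    }
}
```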

Coding was a big challenge for me in this project. In the first year of the course, I quickly figured out that I enjoyed 3D modeling more than other areas, and tried to focus my efforts primarily on that. However, if I ever want to get further into the industry, an understanding of other areas will always be beneficial. And so I decided to dedicate more of my time to coding for this project.

Now don’t get me wrong, there were plenty of times I regretted this decision. The errors that made me pull my hair out usually ended up being a missing capital letter or a stray !. However, at the end of the day, I find myself much more confident in C#, and I know I have the knowledge to build what I need with it.

And after all the hard work, the project works. The code is ugly, the physics questionable, and there are many more issues I could find if I looked hard enough…but it works, and that means everything to me as someone who can rarely say they have accomplished anything this big.

I want to actively develop this system further. I say that about a lot of my projects, but I think I honestly mean it this time. I want to make it so you can reattach fingers on the go, I want to add different combos for new tools, and I want to make a full-fledged VR experience out of this.

I’m not sure how busy my summer will be, but if it’s open, I might hop into Unity and keep tweaking the system towards perfection. Heck, maybe I can make the full game my project for my final year.

I loved making Hybrid Hands.

Hybrid Hands – Part 4: Claws

This post will cover how I developed the final mechanic: the climbing claws.

This video was my primary resource for making this system work, and it came out right at the time I was considering scrapping the feature altogether.

A lot of the videos I could find detailing how to build this kind of system struggled to apply to the latest versions of XR in Unity, but as luck would have it, this YouTuber – whom I follow – put out a video on how to get it working in the most recent versions.

Velocity-based movement functions are something I am still very new to, and I’ll be honest: a lot of the development of this system was simply following that video directly. As expected, though, a few elements required tweaks to work with the systems I had in place, and a few errors stemmed from the smallest things, like a ! in the incorrect place (that’s coding for ya!). Eventually, I got it all working.

This is the climbing provider script, which is attached to the locomotion system of the XR Rig. Its primary functions determine when the system should be registering and adding velocity values for climbing, and when it shouldn’t. The technique of using an if statement to check whether the correct hands are active returns here as well.

The script above, found in the climbing provider, is the one that alters the player’s position based on calculations of the player’s velocity. A VelocityContainer script is called here, which registers the velocity of the selected XR controller once the XRI settings are edited correctly.
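Since the screenshots don’t reproduce here, below is a rough sketch of the pattern the provider follows. The VelocityContainer name matches my script, but the rest is an approximation, assuming XRI’s LocomotionProvider base class (with the rig’s LocomotionSystem assigned) and a CharacterController on the rig:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Minimal companion script: exposes a controller's velocity so the
// provider can read it (my version feeds this from the XRI input actions).
public class VelocityContainer : MonoBehaviour
{
    public Vector3 Velocity { get; set; }
}

// Approximation of the climbing provider: while a hand is climbing,
// the rig is moved opposite to that controller's velocity each frame.
public class ClimbingProvider : LocomotionProvider
{
    [SerializeField] private CharacterController characterController;

    // Set by the climbing anchors when the wall is grabbed/released.
    private VelocityContainer activeHand;

    public void BeginClimb(VelocityContainer hand) => activeHand = hand;
    public void EndClimb() => activeHand = null;

    private void Update()
    {
        // Only move the player while a hand is actively gripping the wall.
        if (activeHand == null) return;

        if (BeginLocomotion())
        {
            // Pulling the hand down moves the body up, hence the negation.
            characterController.Move(-activeHand.Velocity * Time.deltaTime);
            EndLocomotion();
        }
    }
}
```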

The climbing anchor script, shown above, is applied to the climbing wall objects. It calls the functions in the provider and links them to the wall being grabbed by the player. With these two scripts running together, gripping the wall makes the climbing provider start registering the velocity of the controllers, which it then uses to elevate the player.
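Again as a sketch rather than the exact code, the anchor side looks something like this, assuming the wall uses an XRI interactable whose select events can be hooked (XRI 2.x-style event API):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Approximation of the climbing anchor: sits on the climbable wall and
// tells the provider when a hand grabs or lets go.
public class ClimbingAnchor : MonoBehaviour
{
    [SerializeField] private ClimbingProvider climbingProvider;

    private XRBaseInteractable interactable;

    private void Awake()
    {
        interactable = GetComponent<XRBaseInteractable>();
        interactable.selectEntered.AddListener(OnGrab);
        interactable.selectExited.AddListener(OnRelease);
    }

    private void OnGrab(SelectEnterEventArgs args)
    {
        // The grabbing controller carries the VelocityContainer script.
        var hand = args.interactorObject.transform.GetComponent<VelocityContainer>();
        if (hand != null)
            climbingProvider.BeginClimb(hand);
    }

    private void OnRelease(SelectExitEventArgs args) => climbingProvider.EndClimb();

    private void OnDestroy()
    {
        interactable.selectEntered.RemoveListener(OnGrab);
        interactable.selectExited.RemoveListener(OnRelease);
    }
}
```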

Besides these two scripts, there are a few miscellaneous ones that make sure the player’s gravity functions correctly.

The finished product

I definitely think I need to learn more about systems like these. Climbing in VR is an interesting mechanic in any experience, and the more I know about implementing it correctly, the more experiences I can apply it to in the future.

Hybrid Hands – Part 3: Scene and Swords

To begin this blog post, I will outline a few tweaks made to the sword system.

As stated in my previous post, I found a way to limit this feature to only when the sword hands are present.

The sword game object is identified and checked in an if statement for the function.

This if statement also shows my solution to a few other emerging issues with the system. The collisionDetect bool was created because the cutting was crashing my computer, as too many cuts would occur if the blade stayed in the middle of an object for too long.

The areas where collisionDetect is used

With this code, cutting relies on the bool being false. Whenever a cut is made, the bool is set to true, and it is only made false again once the blade is no longer colliding with an object. This means it can only make one cut at a time.

The gameObject tag check in the if statement sorts out one final issue, where the swords were cutting objects they shouldn’t, such as pieces of the hands or blocks for other mechanics. Making it so the sword can only cut tagged objects is a simple fix.
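Putting those pieces together, the gating logic looks roughly like the sketch below. It is a reconstruction of the pattern rather than the project’s exact code – the Cuttable tag and the trigger-based events are stand-ins for whatever the real blade uses:

```csharp
using UnityEngine;

// Sketch of the cut-gating logic: one cut per contact, tagged objects only.
public class SwordCutter : MonoBehaviour
{
    [SerializeField] private GameObject swordHand; // only cut when sword hands are active

    private bool collisionDetect; // true once a cut has happened during the current contact

    private void OnTriggerEnter(Collider other)
    {
        // Require the sword hands to be active, the target to be cuttable,
        // and no cut to have already happened during this contact.
        if (!swordHand.activeSelf) return;
        if (!other.CompareTag("Cuttable")) return;
        if (collisionDetect) return;

        collisionDetect = true;
        Cut(other.gameObject);
    }

    private void OnTriggerExit(Collider other)
    {
        // The blade has left the object, so allow the next cut.
        if (other.CompareTag("Cuttable"))
            collisionDetect = false;
    }

    private void Cut(GameObject target)
    {
        // The actual mesh-slicing call lives here in the real project.
    }
}
```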

One last change: I added a line to solve a problem I was having with cut pieces falling through the floor. It adds a box collider to each new piece so they no longer do this; however, it can cause some issues with collisions, as the newly generated box colliders can sometimes be too large. Perhaps I can look into a way to make the colliders more accurate to the cut pieces.
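For reference, the fix is essentially one line per generated piece, sketched here with a hypothetical helper (piece being whatever GameObject the cutter produces):

```csharp
using UnityEngine;

public static class SlicePhysics
{
    // Called on each piece produced by the cutter so it no longer falls
    // through the floor.
    public static void AddCollision(GameObject piece)
    {
        piece.AddComponent<BoxCollider>();
    }
}
```

If the oversized boxes become a problem, a convex mesh collider (piece.AddComponent<MeshCollider>().convex = true;) would presumably hug the cut geometry more closely, at some performance cost.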

Finished cutting system

So that’s the alterations to the sword out of the way; the next topic of discussion is the development of my scene.

Initially, my scene was very basic, with the UI hand selector, a box to cut, and a wall for eventual climbing. And while my project is primarily a system demo rather than a full game, Herman gave me some advice on developing my scene.

I started off by browsing some of the asset packs I had previously purchased here and there on the Unity Asset Store, eventually creating a simple scene out of voxel-based buildings and roads that matched the look of the hands.

Updated scene

I also added a skybox to give the scene a better overall look. It was definitely an improvement, and not so elaborate that it drew attention away from my systems.

Next came the organization of the mechanic areas. I wanted to separate out different areas where each hand could be tested, setting up small obstacles for the player to try to get past. Herman later suggested making these zones color-coordinated, to make it even easier for the player to know where to go.

Cutting zone. There is also a small button behind the boxes to act as a success indicator for the player. When pressed, it plays a fanfare
Hand zone
Climbing zone. Gap in the floor means players MUST climb to reach the objective on the other side

A small thing related to the scene development also ties into the function of the hands. Initially, I was having trouble finding a way to code it so that only the hands could push the green boxes, trying to use IgnoreCollision functions. However, I learnt that I could achieve the effect I wanted much more easily using Unity’s built-in layer collision matrix, which allows you to limit which layers interact with each other.

The collision matrix for my project. The hands are set as pushers, and interact only with pushable-layer objects.
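The matrix itself lives in the editor under Project Settings > Physics, but the same rules can also be toggled from code, which is handy to know. Here is a small sketch assuming layers named Pusher and Pushable exist in the project (the names are illustrative):

```csharp
using UnityEngine;

public static class PusherLayerSetup
{
    // Replicates the collision-matrix setup from code: Pusher objects
    // collide with Pushable objects and nothing else.
    public static void Apply()
    {
        int pusher = LayerMask.NameToLayer("Pusher");
        int pushable = LayerMask.NameToLayer("Pushable");

        for (int layer = 0; layer < 32; layer++)
        {
            // Ignore every layer except Pushable for the Pusher layer.
            Physics.IgnoreLayerCollision(pusher, layer, layer != pushable);
        }
    }
}
```

For a fixed setup like mine, the editor matrix is simpler; the code route would only matter if the rules needed to change at runtime.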

After all these changes, the scene looked and functioned a lot better.

Hybrid Hands – Part 2: Socket Development

This post will show the development of my hand selector system.

Initially, as shown in my previous post, I had ideas of making the selection of hands radial-based. However, this seemed a bit too generic and didn’t fit my design philosophy of augmenting hands into a gameplay feature. With that in mind, I gave it some thought and eventually came up with the idea of having the player build their hands using blocks, with different outcomes depending on the combinations.

I did some digging for features that could make this system work. I briefly played with the idea of using arrays and collision enter functions, but eventually I found some decent documentation on XR Socket Interactors and built my system using them. The hand would initially start off as a simple base, with sockets to place different building pieces, as shown below. I wanted to keep the sockets to a minimum to make the concept easier to follow, as well as to code, so I based it around four main sockets.

How the hand would look initially when booting up the experience

The player would then build up their hands into certain structures, and when the parts were in place, they would function as the built hand should.

Mockup design of the simple hand with slotted in building parts.

My first documented issue implementing this was that when pieces were slotted into the hand itself, the collisions would launch the player back at rapid speed, pushing them away from the designated area where they were building their hands. It took some time troubleshooting, but it eventually turned out to be an issue with the XR collision radius.

I had to lower the radius to 0 to stop the hands from constantly pushing against it.

With the system now working in testing without issue, I designed a simple sheet to show the player what they could build with the pieces, walking them through the process of making their hands, swords, or claws.

In-game diagram
How the full final build area looks in-game

The reset button below simply resets the scene in case the player incorrectly builds their hands at any point.

With the design concepts out of the way, it’s time to talk more about the code that makes this work. There are three main scripts in play here (two of them near-identical) that I designed to create the hands.

The first we can call the Hand Selector. This is the primary script, called by the other two, and is the one that swaps the hands out correctly once they are built.

To start, I had to use multiple bool values for this system to work correctly: there needed to be separate ones for the hands and for the tools. Above all the bools, I also made sure to reference a few game objects to swap between or toggle active where needed.

If statements

The screenshot above is the core code. It uses if statements in the Update function to determine which hand to swap to, when to hide the base hand, and when to hide the finger game objects, creating the illusion that the hand was built.
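Reconstructing the gist of it as a sketch (the field and bool names here are hypothetical placeholders, not my exact ones):

```csharp
using UnityEngine;

// Sketch of the Hand Selector pattern: bools are flipped by the socket
// scripts, and Update swaps hand models once a full combo is present.
public class HandSelector : MonoBehaviour
{
    [SerializeField] private GameObject baseHand;
    [SerializeField] private GameObject swordHand;
    [SerializeField] private GameObject clawHand;
    [SerializeField] private GameObject[] buildingPieces;

    // Set to true by the socket scripts as tagged pieces are slotted in.
    public bool fingersPlaced;
    public bool bladePlaced;
    public bool clawPlaced;

    private void Update()
    {
        if (fingersPlaced && bladePlaced)
            Activate(swordHand);
        else if (fingersPlaced && clawPlaced)
            Activate(clawHand);
    }

    private void Activate(GameObject hand)
    {
        // Hide the base and the loose pieces so the built hand appears whole.
        baseHand.SetActive(false);
        foreach (var piece in buildingPieces)
            piece.SetActive(false);
        hand.SetActive(true);
    }
}
```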

The next scripts are the Hand/Tool selectors. They share many similarities, but are used for different processes.

Matching Tags function

For my system to work, I needed to limit the places where certain pieces could go. To do this, I made a function so that a piece could only be slotted into a socket if it shared the tag chosen in the selector. This worked well to limit the options present while still allowing the player enough freedom to build.

A whole bunch of if statements

Calling the bools present in the Hand Selector script, these if statements set certain values to true depending on the tagged part that was socketed correctly. In tandem with the Hand Selector script, this means that once the correct combo of bools is true, the hand is built successfully.
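As a sketch of how those two responsibilities might sit together in one socket script – the subclassing approach and names are my reconstruction, assuming XRI 2.x, and the real code may have hooked things up differently:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch of a tag-filtered socket: only accepts pieces with the chosen
// tag, and reports successful placements back to the Hand Selector.
public class HandSocket : XRSocketInteractor
{
    [SerializeField] private string targetTag;          // e.g. "Finger" or "Blade"
    [SerializeField] private HandSelector handSelector; // the sketch from above

    public override bool CanSelect(IXRSelectInteractable interactable)
    {
        // Reject any piece that doesn't carry the tag chosen for this socket.
        return base.CanSelect(interactable) &&
               interactable.transform.CompareTag(targetTag);
    }

    protected override void OnSelectEntered(SelectEnterEventArgs args)
    {
        base.OnSelectEntered(args);

        // Flip the matching bool on the Hand Selector for this piece type.
        if (targetTag == "Finger") handSelector.fingersPlaced = true;
        else if (targetTag == "Blade") handSelector.bladePlaced = true;
        else if (targetTag == "Claw") handSelector.clawPlaced = true;
    }
}
```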

Tool Building

The tool building script is a carbon copy of the hand building one, with only a slight change to the specific bools it alters. This is because this script is present only on the child sockets attached to the pieces used to create the claw or sword, while the hand builder is used solely for building the hands. I set it up this way to get around an issue I was having where certain bool values were overlapping each other, causing problems when trying to build anything.

Finished building mechanic

Here’s how it looks with all the code in play. The sockets accept only pieces with the correct tags, and when they do, they set their assigned bools to true. With enough bools set to true, the pieces are hidden away, the base is deactivated, and the claws are put in its place, ready to use. From here, certain functions are gated on certain hands being active, reinforcing the idea that players need to create certain tools to solve certain problems.

VR Design Research Lab – Jamie Mossahebi Lecture

Jamie works for Epic Games and gave a lecture on the usage of Unreal Engine. This was an interesting lecture, as while we had been given some very basic knowledge of Unreal and other game engines, we were primarily taught to use Unity for all the projects in our modules.

The initial discussion was about the use of 360-degree video, where multiple camera rigs capture images that are then projected onto a sphere. Back when development of these videos was only just starting, the cameras had to be very close to one another to capture video for each individual eye, as these videos rely on the same stereoscopic rendering techniques that VR experiences commonly use.

The discussion was short, but we got to learn about a few of the 360 videos Jamie developed. The main one was a simulation of a taxi cab experience in VR using these techniques, but he also outlined some other attempts using different ideas, such as drones and boats.

After this short discussion, the topic switched to Unreal Engine and Epic Games. I have always been aware of Unreal, and I consider it a viable option for many developers in recent years, as Epic Games offers the software for free and favorable revenue rates for games released on its platform.

While Unity values function primarily, giving users access to many tools to create their ideas, Unreal seems to strive more heavily for graphical fidelity, with tools such as MetaHuman being common practice in Unreal projects. In my opinion, this is a difficult line to follow for VR development, as high-fidelity assets can slow VR down due to their high processing demands. Beyond this, I also believe high fidelity is not always the best goal, as plenty of games are popular because they follow their own distinct art styles.

Briefly, at the end of the lecture, Jamie talked about some of his experience working in the industry, and urged us to stop refusing to apply for jobs just because we are not complete experts in certain fields. The industry we wish to work in is always changing as more techniques and options come to the table, so no one will ever be an expert for long. I will do my best to take this advice to heart, in hopes it will give me the confidence to apply in areas I didn’t believe I could before.

VR Design Research Lab – Phoenix Perry Lecture

Our lectures with Phoenix Perry discussed the ideas of machine learning and a program called InteractML.

The discussion began with a talk about how search engines use machine learning to filter searches and, as such, may be vulnerable to certain biases and discrimination. The core example was that when ‘man’ or ‘woman’ was searched on Google Images, the primary results contained pictures of Caucasian individuals.

While I understand the idea and the viewpoint being made, I genuinely believe there is no reason to try to combat it. If we tried to cater to every single individual on the net and their particular searches, the net would be a much different place with no semblance of a baseline. Beyond that, as stated, these searches are the result of the most common concepts related to certain words, and trying to change them may just end up confusing anyone searching for those items.

The next part of the discussion was on the usage of InteractML, a visual scripting tool designed for Unity to aid in the development of video games without traditional coding techniques. The ML stands for the machine learning aspect of the program: it must be taught certain interactions by having them performed for it. After the program learns them, however, those interactions can be wired up simply using visual nodes.

The idea of specialized tools to aid the development process of projects has always been something I’ve been interested in, having looked into software such as Gravity Sketch VR to create 3D models INSIDE VR for use in projects. In the case of InteractML, I would definitely be interested in giving it a try, having already seen the frustration that common coding practices can cause for those inexperienced in their quirks.

Mapping Virtual Practices – Antoine Marc Lecture

Antoine Marc, a choreographer, producer, and director (https://antoinemarc.com/), was another of the guests we had during our Mapping Virtual Practices lectures. He talked about his work combining performing arts and technology to create experiences. Antoine specializes in media and new technologies and has worked as a consultant on multiple films, live shows, and award-winning concepts, applying his technological techniques to them.

I will be honest: I wasn’t quite sure how relevant this lecture was to our studies in virtual reality, beyond the conceptual ideas Antoine had that we could apply to our own work. His most recent pieces focus on combining technology and dance into one performance, which he stated began simply from his own curiosity about newly emerging tech.

The idea of developing something out of sheer curiosity is the one real thing I took from this lecture, having done the same myself with the development of my Hybrid Hands project. I will admit that perhaps I missed the key point of this specific guest lecture, but even after a rewatch of the discussion, I unfortunately could not take anything else of value from it.

Mapping Virtual Practices – Ed Tlegenov Lecture

For two sessions, we had guest lectures from Ed, who came from Autodesk.

The first consisted of some discussion about how applying for jobs in this industry works, covering how to build a strong résumé, as well as how to update our LinkedIn profiles.

During this discussion, I began the process of reformatting a LOT of my employment-related items to better suit applying for jobs in the industry. Before, my main CV consisted of just my work experience and grades, but I created a VR CV that contains only information relevant to these job applications, and reformatted my blog to organize it better, in hopes of using it as a portfolio of sorts.

A smaller, but just as important, element was creating a ‘tagline’ for our LinkedIn pages, used by any onlookers on the site to gauge who we are, what we are studying, and what we are working on. Mine was: Adaptive game developer | BA Virtual Reality Student | Creating hybrid hand system from the ground up for virtual reality

This lecture gave me a much stronger incentive to start looking for internship positions, with me going out of my way to message the HR team at CM Games (a developer whose VR work I follow) to see if they had any openings. I was fortunate enough to have a personal connection to the company, as my mother knows one of the lead members of the team. Using that opportunity, I found the contact details and sent over this cover letter:

Dear Mr Funtikov,

It was suggested by our mutual acquaintance Martin Villig that I contact you directly, so let me introduce myself.

My name is Jamie Flack, and I am a 21-year-old student currently enrolled in the second year of a BA (Hons) Virtual Reality course at the University of the Arts London. I am looking to do my internship this summer, which has brought me to CM Games. I am an avid fan of your immersive VR experience ‘Into the Radius’ and would be overjoyed to work alongside the team responsible for it.

If my application is successful, I would be interested in joining you for an internship in the coming summer, during the long break in my studies. I am currently based in London; however, I have dual citizenship in Estonia, so I could manage a remote internship or stay in Estonia if needed.

I bring two years of experience developing VR experiences at university, having gained skills in 3D modeling and animation in Autodesk Maya, as well as virtual reality development and C# coding skills in Unity. Besides this, I also have experience working on team-based projects on a global scale, and have successfully created multiple virtual reality experiences that earned me high grades during my course.

I consider my strengths to lie primarily in the area of 3D modeling; however, I have plenty of knowledge in other areas to draw on as needed. Regardless, I will be happy to learn anything and everything you have to offer.

I believe my knowledge and experience would be useful during my time at CM Games, and I know that the insight and knowledge your company’s team members could grant me would be immensely beneficial, giving me the best kickstart in the industry I could possibly ask for.

Please find my CV attached below. I hope you take my application into consideration, and I look forward to hearing from you soon.

Kind regards,
Jamie Flack

While I was unsuccessful with my application, I believe my cover letter was strong, and I still think the advice Ed gave definitely helped me present myself in a much stronger light.

In the other lecture, the following week, we focused on emulating the development pipeline discussions that companies have when brainstorming a game. To start, we were given a brief to create a multiplatform VR experience that could work within the confines of the Oculus Quest’s room-scale play area. After some discussion, we began brainstorming the elements that would need to be developed. While most of the items needed were quite obvious, the exercise showed that identifying exactly what you need to develop for your experience from the start is crucial to successfully delivering a project at any scale.

While both lectures touched on ideas I was already aware of, they did so in a much more illuminating light, giving me the knowledge, and the incentive, to understand the processes.

Wardsend Cemetery – Part 3: Video Presentation

Given that this project is for actual commercial use, we need to make sure to keep our clients in the loop on how the project is developing. One part of this was the creation of a small video to outline the experience, which could possibly act as an actual trailer for our clients to use at some point.

A few people, alongside the creative directors of the project, gave me an outline of what they wanted from the video, and from there it was up to me to bring it to reality.

Timeline for the video idea

I have some prior experience editing videos, so I got to work using the Unity Recorder package to capture some good clips for the video. The animations were all fairly simple, mainly camera zooms or panning lights for certain shots. The hardest part overall was probably finding background music to use, and then syncing up the clips to work with the audio. After a few hours, however, I had a decent trailer made, and let the others on the team peer-review it. We ironed out a few parts and finally had our trailer.

Wardsend Cemetery – Part 2: Gallery Room

My next task on the project was 3D modeling the small gallery room the player would be situated in for the experience. The room didn’t need to be complex, as the main attraction was the interactive painting, but I did do a few things to try to make the room look as good as possible.

Using some of the pictures taken during the trip down to Sheffield to see the gallery, I modeled a similar room with wooden flooring and other matching elements. I also added some simple overhead lights, and some extra paintings on the walls for decoration. Altogether, the room worked perfectly well for what was needed.

As for the actual paintings, I remembered one of our previous lectures that involved AI-generated art, and used a program with a few keywords, such as ‘Sheffield’ and ‘cemetery’, to generate some simple artwork. However, since we didn’t want to distract the user from the main art, I blurred the paintings a bit to keep the aesthetic.

Altogether, the room functions well, and with the player environment created in our project, we could turn more of our attention to the workings of the primary painting.