Virtual Networks Lecture Appraisal and Showcase Reflection

Lecture Review:

Much to my annoyance, I was unable to attend many of the guest lectures available during the unit. However, one of the few I did attend was during week 4, and it focused primarily on the values we hold most strongly within our creative industry.

Your core values are always apparent in your creativity: what you like, what you dislike, what drives you, what inspires you. This has been true for me across every unit in the course, with me taking inspiration from genres such as cyberpunk, and from shows and games I personally enjoy such as ‘RWBY’ and ‘Persona’. Your values shape who you are, where you’ll end up, and what you’ll be doing, even if you don’t realize it.

We went through a lot of different concepts during our lecture with Zoe. One of these was the Japanese concept of Ikigai, which refers to the intersection of four elements – what you love, what you are good at, what the world needs, and what you can be paid for. It represents a person’s reason for being and their purpose in life, and finding the spot with the most overlap of these core areas is the surest way to end up doing a job you enjoy, and doing it well enough to get paid.

This concept stuck with me a lot. I have always had trouble finding a path where these values feel balanced. True, I did start this course hoping to go into VR development, but I have occasionally questioned it, wondering whether I will truly end up using my degree, or go elsewhere to find the perfect job for me.

Zoe had two tasks for us during the lecture. This was already a welcome change, as sitting in a chair for hours simply listening and not engaging has never been much fun for me. The first activity involved a set of character cards, drawn freehand by an artist to represent concepts, people, and objects. After looking these over, we were tasked with creating our own, ignoring our possible lack of artistic skill. I drew a card I labeled ‘the storyteller’, as I have always loved telling stories of people and places, even doing so in my own free time online through books and fanfiction.

The second task involved a personality test created by Adobe Create. The questions were a bit weird, but in the end, I was matched with ‘The Visionary’, described as:

‘Full of big ideas, ability to see potential and possibility everywhere’ – a nice compliment to start. Very nice.

‘Using your visions to fuel consistent daily action’ – not sure how accurate this is. Procrastination IS practically my middle name.

‘You live in a world of infinite possibilities, preferring to see things not as they are but as they could be. You know that life is limited only by the boundaries of your own beliefs, and you’re driven to push the limits of, well, everything.’ – I did start studying VR for a reason. I’ve always hoped I could push the boundaries of the medium and begin creating storytelling experiences for VR that can evoke the same level of emotion as the TV, movies, and games I’ve experienced… if not even more.

‘Emotional, passion-driven, and full of ideas, the VISIONARY combines a vivid imagination with a desire for practical solutions. Your introspective and intuitive nature is balanced by a keen interest in the world around you and a desire to contribute to society.’ – Again, quite accurate. I enjoy seeing how people enjoy the things I’ve created, having built a following online from my art, fanfiction, and stories.

‘Charismatic and expressive, you love sharing your ideas and visions with others and creating community around shared values and ideals. Your greatest gift? The ability to see the spark of potential in everything and everyone, and to inspire others to see it, too. You’re able to guide people toward an invisible horizon with a rare generosity of spirit and strength of conviction.’ – I do have some experience in this area. I have friends I like to help with things in hopes I can set them on the right path. I’ve also worked on group projects before, and have run multiple tabletop games for many years, much to the enjoyment of others.

‘Don’t get stuck in the dreaming stage, VISIONARY. Your greatest challenge—and true power—lies in learning to take consistent daily action to create the future you envision.’ – I have always had trouble getting ideas out of the dreaming stage, keeping a massive list of them on my phone but never getting close to creating them. I hope to work on this, learning techniques to bring ideas to life and finding ways to incentivize myself to push them out of the dream stage and into reality.

‘Seek out the “voice of reason” of the THINKER type to help you take a grounded, rational approach to your creative work. The THINKER’s deep perception and probing intellect lend a powerful clarity that can bring your visions into sharper focus’ – I guess I now know who to look out for!

Picture of ‘The Visionary’

I was honestly unsettled by how accurately some of these points landed.

Zoe also talked about the importance of values in terms of their worth in money, expertise, voice, and more. The skills cycle involves answering questions about where we are now, where we want to be, and how we can get there. During the workshop, we reflected on past projects, our contributions to group work, and extra-curricular impacts on our studies. I realized that I struggle with communicating my work to others and tend to keep it hidden, and I have a hard time selling myself in any capacity, which might make it tough to find a place in this industry. I also – much like my classmates – can sometimes feel overwhelmed by the amount of work I have to do and the limited time I have to do it. The time goes by too fast.

We also briefly discussed the 9 attributes in the Creative Attributes Framework by UAL, which include proactivity, communication, curiosity, and resilience. The CAF is a reference point for developing enterprise and employability, and the MyCAF tool allows us to download a report and action plan. I’ve yet to do this since I’ve been quite busy working, but it might be worth looking into down the road.

Overall, the session with Zoe was informative and almost TOO thought-provoking. Her hands-on approach to getting us invested in the session worked like a charm and had me thinking about the lecture long after she left. I will take the advice she gave to heart.

Showcase Reflection:

I’m excited for the showcase.

The possibility of showing others a piece of work I’ve put so much time into is really exciting to me, and while I admit I didn’t have much input into setting things up outside of the project itself, I will share what I did do.

The logistics of our booth are pretty simple. Our game is set in an office, so we want to make an office booth. It works pretty well in theory: booth walls, a VR-ready PC running our title on a desk, and obviously enough central space to allow users to play without fear of injury.

I did, however, add that I was hoping for one… unorthodox item.

I wanted a giant llama statue.

It would sit near the booth and act as the main selling point for getting people to try the experience. It’s thematically fitting too, as the core idea of our title is the llama being somewhere it CLEARLY shouldn’t be. An office, or a uni event? Both are equally crazy. I’m not sure we will be able to add this item to our booth area in the end, but if I ever had the chance, I would certainly jump at it.

Outside of that though, most of my efforts towards getting us ready to showcase involved getting the game ready. I’m the narrative lead, so I got to write a pretty lengthy storyline, AND I’m also the environment designer, so I’m spending plenty of my free time modeling everything by hand.

Fingers crossed we can get it all done right on time.

Final Major Project – Part 3: Group Assembly & Initial Work

After we gave our proposals, it was clear everyone was very passionate about their own projects, with the assembly of groups slowing down as everyone waited to see if anyone would hop on their project before going for anyone else’s.

Thankfully, however, I was not the only one who wanted to make a game, and after some discussion, I formed a group with Will and Colin. We were all still passionate about our own ideas, so we worked on a new concept that brought them all together.

We each chose one major thing we wanted the game to express. For me, it was a solid narrative with possible links to escapism; for Colin, it was to include the concept of machine learning in some regard; and for Will, it was procedural animation using a llama character.

From there, we discussed other elements, such as the setting, core gameplay concept, genre, and VR locomotion system, and an idea formed pretty quickly:

• Narrative based puzzle/tower defence game

• Set in a futuristic office

• Player is a robot with teleport locomotion

• Gameplay consists of trying to do office jobs whilst a llama tries to stop you

• Has a day system with 3 days in total

• Day 1 – tutorial; Day 2 – llama shows up; Day 3 – narrative rebel-vs-obey system

• As the llama affects the game, the office could become overgrown

I was excited that we had an idea forming so quickly, and we divvied up jobs to begin work immediately. I was tasked with the narrative elements of the game. However, due to work obligations I was unable to discuss this with my team during class, so they had already created a rough outline of the story. Instead, I began focusing on areas to 3D model, as well as looking into how the llama might affect certain tasks we had listed, and how those tasks would play into our day 3 finale.

An example of this was the ‘sign in’ objective we would have for each day, which begins the work cycle. On day 1 it would be pretty simple, to teach the player the system. With the llama present, however, it might steal the pen needed to sign in, making you search for it or buy a new one. On the final day, the player would get the opportunity to rebel, being able to throw the sign-in sheet out the window or into the trash.

For the 3D modeling work, I had some mood board examples, as well as a floorplan Colin created in The Sims, to work from.

Mood board piece for office look

To begin, I focused on the office cubicles and their items. I took inspiration from the mood board pieces, as well as my own love for cyberpunk and futuristic design, and created a few items including the desk, chairs, monitors, keyboards, and some basic office clutter. All the while, I made sure to keep most items modeled separately, knowing we had plans for a physics system where the player could pick up most objects.

My initial office models

https://trello.com/b/HDvGRJQv/llama-drama – link to our trello where we document our ideas and notes

Final Major Project – Part 2: My Game Idea

‘How Does That Make You Feel’ was the final WIP title I settled on for my VR game idea for this year’s final project.

The Plot:

As I said, I wanted my piece to have a good storyline, and having spent time writing as a hobby, I got to work drafting one. The player would play as a medical AI tasked with restoring or removing suppressed memories inside people’s minds. The core premise was that the AI would live out and experience these memories, and would face a final choice at the end of each level: restore the memory or shred it – with different impacts from either decision.

Memories would be linked in unforeseen ways, and the initial impact of your choices would be left ambiguous until you had finished treating the patient. The story would find its roots heavily in the theme of escapism, something I hoped to capitalize on by linking it to my essay from my second module this year, which I themed around escapism.

The Gameplay:

In each level, certain ‘mood states’ would be accessible, such as sad or happy. The player’s core goal would be to combine these mood states in a certain way to recreate the specific mood the patient locked the memory behind.

Mood states would affect the environment, such as anger setting certain areas ablaze. They would also be used as a form of puzzle solving, with certain objects, places, and even other moods locked behind areas that require a particular mood to access. Once the player created the required mood, they would receive a memory ball which they could view, and then decide whether to restore or shred.

Initially, I thought it best to make fully 3D environments the player could explore using locomotion. However, later down the line my classmate Billy came up with the idea of the player being stationary and interacting with items from a distance. This struck me as a much more interesting premise for how the game would play, and it would also make presenting the piece easier, as it would limit possible motion sickness in newer VR players.

The Design:

In terms of my implementation of AI art, I wanted to try making the memories a collage of DALL-E generated scenes, slowly building together to form a memory. It would suit the themes well, given the player character is an AI.

Beyond DALL-E, the level design would be modeled on how people imagine the inside of a brain to look. I gave a few examples of this in my presentation, but the creative freedom in realizing that look would have been very high.

One core design element I wanted was for the player to FEEL the mood states. I needed to find a way to express them across all design elements at once, whether by using the emotional color wheel for the level palette, changing the sound design, or any number of other techniques.

With all this, I put together a presentation, alongside a few other elements I thought worth mentioning. Abel did point out that my presentation didn’t change much between the first time I showed it and the time I presented it, but I don’t think it needed to. I had a core idea in my head of what I wanted and saw no reason to alter it simply for the sake of changing the presentation. I just added bits here and there, whilst also making sure I wasn’t info-dumping on anyone I presented to.

This is what the finished presentation looked like.

Final Major Project – Part 1: Making a Game

So my final year as a uni student has begun.

For this module, we are tasked with creating a fully fledged VR experience as a group, hoping to eventually present the pieces to an audience. To start, we are pitching our own ideas for what we want to do, in hopes of convincing other people to hop on board our projects and work with us to make them happen.

Immediately I knew I wanted to make a game. Gaming in VR is what drove me to take the subject, and finally being able to make a full game of decent quality would be a great opportunity. As someone with a pretty creative and random mind, I have a big notes page on my phone of story/game ideas that I’ve always wanted to develop at some point. So I turned my attention to all of those and found the idea I wanted to push.

To begin, I narrowed down a few elements of what I wanted to come from the experience:

• I wanted the game to have a narrative. Many VR games fall into the trap of being glorified arcade games or tech demos without a proper story to make them truly great.

• I wanted to make a puzzle game. Since this was my first time working with a team, I wanted a genre that is somewhat easier to refine into a finished game. Not to mention a puzzle-based game is easy to present to audiences.

• I wanted to use AI art generation. I’ve watched a lot of videos on how AI art is changing the way many game dev elements function. As such, I wanted to embrace that, and find a way to use programs like DALL-E in the game development process.

I had the core ideas down and a premise roughly written out, so I spent a few weeks working on it, getting a proper presentation ready to show off.

Hybrid Hands – Conclusion

Well…my project is done.

As of writing this, it is currently 4 am. I am finishing up any simple tweaks to the project, and cleaning it up to submit in a few hours.

This project was the first time I had free rein to design and create whatever I truly wanted in VR. The creative freedom was initially quite daunting, but over time I came to understand how exhilarating it was to create my own concept and see it come to life over the months.

So, did I make hands into a game mechanic? Yes, I think I did.

Doing some research, I couldn’t find any other VR titles with a feature similar to mine, which I am very happy about. I was able to create a way players could take their hands – the core way they are represented in VR – and change it to suit their own needs.

I will admit the system ended up being a bit more limited than I had hoped. Initially, I wanted the player to be able to remove and reattach their fingers whenever they wanted, without needing to hit a reset button. Although, as of typing this, I can already think of OnCollision functions that could possibly allow this effect, so for now I will chalk it up to lack of experience and time.
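As a rough illustration of what I mean – every name here is hypothetical, not something from my project – a script like this could sit on each finger mount point of the hand:

```csharp
using UnityEngine;

// Hypothetical sketch: one of these per finger mount point on the hand.
// When a detached finger piece touches its mount, it snaps back on.
public class FingerSocket : MonoBehaviour
{
    public string fingerTag = "Finger"; // assumed tag on detached finger pieces
    private bool occupied = false;

    private void OnCollisionEnter(Collision collision)
    {
        if (!occupied && collision.gameObject.CompareTag(fingerTag))
        {
            // Parent the finger back to the hand and lock it in place.
            collision.transform.SetParent(transform);
            collision.transform.localPosition = Vector3.zero;
            collision.transform.localRotation = Quaternion.identity;

            // Disable physics so the reattached finger moves with the hand.
            var rb = collision.gameObject.GetComponent<Rigidbody>();
            if (rb != null) rb.isKinematic = true;

            occupied = true;
        }
    }
}
```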

Coding was a big challenge for me in this project. In the first year of the course, I quickly figured out that I enjoyed 3D modeling more than other areas, and tried to focus my efforts on that primarily. However, if I ever want to get further into the industry, an understanding of other areas will always be beneficial. And so I decided to dedicate more of my time to coding for this project.

Now don’t get me wrong, there were plenty of times I regretted doing this. A few errors that made me pull my hair out usually turned out to be a missing capital letter, or a stray !. However, at the end of the day, I find myself much more confident in C#, and I know I have the knowledge to work in it to make what I need.

And after all the hard work, the project works. The code is ugly, the physics questionable, and there are many more issues I could find if I looked hard enough… but it works, and that means everything to me as someone who can rarely say they have accomplished anything this big.

I want to actively develop this system more. I say that about a lot of my projects, but I think I honestly mean it this time. I want to make it so you can reattach fingers on the go, I want to add different combos for new tools, and I want to make a full-fledged VR experience out of this.

I’m not sure how busy my summer will be, but if it’s open, I might hop into Unity and keep tweaking the system to perfection. Heck, maybe I can make the full game my project for my final year.

I loved making Hybrid Hands.

Hybrid Hands – Part 4: Claws

This post will talk about how I developed the final function of my climbing claws.

This video was my primary resource for making this system work, and it came out right when I was considering scrapping the feature altogether.

A lot of the videos I could find detailing how to make this system work struggled to apply to the latest versions of XR in Unity, but as my luck would have it, this YouTuber – whom I follow – put out a video on how to get it working in the most recent versions.

Velocity-based movement functions are something I am still very new to, and I’ll be honest: a lot of the development of this system was simply following that video directly. As expected, though, a few elements required tweaks to work with the systems I had in place, and a few errors stemmed from the smallest things, like a ! in the incorrect place (that’s coding for ya!), but eventually I got it all working.

This is the climbing provider script, which is attached to the locomotion system of the XR Rig. Its primary functions determine when the system should be registering and adding velocity values for climbing, and when it shouldn’t. The technique of using an if statement to check whether the correct hands are active also returns here.

The script above, found in the climbing provider, is the one that alters the player’s position based on calculations of the player’s velocity. A VelocityContainer script is called here, which registers the velocity of the selected XR controller once the XRI settings are edited correctly.
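For anyone who wants the shape of this logic without the screenshots, here is a stripped-down sketch – the member names are stand-ins rather than my exact ones, and it assumes the XRI 2.x LocomotionProvider base class. The stand-in VelocityContainer here just derives velocity from the hand transform’s movement; the real one reads the controller velocity exposed through the XRI input settings:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch only: while climbing is active, move the rig opposite to the
// velocity of whichever hand is gripping the wall.
public class ClimbingProvider : LocomotionProvider
{
    public CharacterController characterController; // on the XR Rig
    private VelocityContainer activeHand;

    // Called by the climbing anchor when a wall is grabbed or released.
    public void BeginClimb(VelocityContainer hand) => activeHand = hand;
    public void EndClimb() => activeHand = null;

    private void Update()
    {
        if (activeHand == null) return;
        if (!BeginLocomotion()) return; // reserve the locomotion system for this move

        // Pulling a hand downwards pushes the player up, hence the negation.
        characterController.Move(-activeHand.Velocity * Time.deltaTime);
        EndLocomotion();
    }
}

// Minimal stand-in for the VelocityContainer on each controller.
public class VelocityContainer : MonoBehaviour
{
    public Vector3 Velocity { get; private set; }
    private Vector3 lastPosition;

    private void Update()
    {
        // Derive velocity from how far the hand moved this frame.
        Velocity = (transform.position - lastPosition) / Time.deltaTime;
        lastPosition = transform.position;
    }
}
```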

The climbing anchor script above is applied to the climbable wall objects. It calls the functions in the provider and links them to the wall being grabbed by the player. With these two scripts running together, when the wall is gripped, the climbing provider begins registering the velocity of the controllers, and then uses that to move the player.
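A matching sketch of the anchor side, again with stand-in names and assuming XRI 2.x event args:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch of the anchor on each climbable wall object.
public class ClimbingAnchor : XRBaseInteractable
{
    public ClimbingProvider climbingProvider;

    protected override void OnSelectEntered(SelectEnterEventArgs args)
    {
        base.OnSelectEntered(args);

        // Hand the gripping controller's velocity source to the provider.
        var hand = args.interactorObject.transform.GetComponent<VelocityContainer>();
        if (hand != null) climbingProvider.BeginClimb(hand);
    }

    protected override void OnSelectExited(SelectExitEventArgs args)
    {
        base.OnSelectExited(args);
        climbingProvider.EndClimb();
    }
}
```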

Besides these two scripts, there are a few miscellaneous ones used to make sure the gravity of the player functions correctly.

The finished product

I definitely think I need to learn more about systems like these. Climbing in VR is an interesting mechanic in any experience, and the more I know about implementing it correctly, the more experiences I can apply it to in the future.

Hybrid Hands – Part 3: Scene and Swords

To begin this blog post, I will outline a few tweaks made to the sword system.

As stated in my previous post, I found a way to limit this feature so it only works when the sword hands are present.

The sword game object is identified and checked in an if statement for the function.

This if statement also shows my solution to a few other emerging issues with the system. The collisionDetect bool was created because the cutting kept crashing my computer: too many cuts would occur if the blade stayed inside an object for too long.

Areas where collisionDetect is used

With this code, cutting is reliant on the bool being false. Whenever a cut is made, the bool is set to true, and it is only set back to false once the blade is no longer colliding with an object. This means only one cut can be made at a time.

The gameObject tag part of the if statement solves one final issue, where the swords were cutting objects they shouldn’t, such as pieces of the hands or blocks for other mechanics. Having it so the blade can only cut tagged objects is a simple fix.
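Put together, the cutting check looks something like this sketch – the ‘Cuttable’ tag, Cut method, and class name are stand-ins for my real ones:

```csharp
using UnityEngine;

// Sketch of the cutting check on the blade.
public class SwordCutter : MonoBehaviour
{
    public GameObject swordObject;        // the built sword hand
    private bool collisionDetect = false; // true while a cut is resolving

    private void OnTriggerEnter(Collider other)
    {
        // Cut only when the sword hand is active, the previous cut has
        // cleared, and the object is explicitly tagged as cuttable.
        if (swordObject.activeSelf && !collisionDetect && other.CompareTag("Cuttable"))
        {
            Cut(other.gameObject);
            collisionDetect = true;
        }
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Cuttable"))
            collisionDetect = false; // blade has left the object, allow a new cut
    }

    private void Cut(GameObject target)
    {
        // The mesh-slicing logic lives here in the real project.
    }
}
```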

One last change: I added a line to solve a problem I was having with cut pieces falling through the floor. It adds a box collider to each new piece so they no longer do this; however, it can cause some collision issues, as the newly generated box colliders can sometimes be too large. Perhaps I can look into a way to make the colliders fit the cut pieces more accurately.
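Roughly, the fix boils down to something like this (the helper name is hypothetical):

```csharp
using UnityEngine;

public static class CutPieceFixes
{
    // Hypothetical helper: called on each newly generated cut piece.
    public static void KeepAboveFloor(GameObject piece)
    {
        // A box collider auto-fitted to the new mesh keeps the piece from
        // falling through the floor, though its bounds can run oversized.
        piece.AddComponent<BoxCollider>();
    }
}
```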

Finished cutting system

So that’s the alterations to the sword out of the way; the next topic of discussion is the development of my scene.

Initially, my scene was very basic, with the UI hand selector, a box to cut, and a wall for eventual climbing. And while my project is primarily a systems demo rather than a full game, Herman gave me some advice on developing the scene.

I started off by browsing some of the asset packs I had purchased here and there on the Unity Asset Store, eventually creating a simple scene out of some voxel-based buildings and roads that matched the look of the hands.

Updated scene

I also added a skybox just to give the scene a better overall look. It was definitely an improvement, and it wasn’t so elaborate that it drew attention away from my systems.

Next came the organization of the mechanic areas. I wanted to separate out different areas where each hand could be tested, setting up small obstacles for the player to try to get past. A suggestion from Herman came later to make these zones color-coordinated, making it even easier for the player to know where to go.

Cutting zone. There is also a small button behind the boxes to act as a success indicator for the player. When pressed, it plays a fanfare.
Hand zone
Climbing zone. Gap in the floor means players MUST climb to reach the objective on the other side

A small thing related to the scene development also ties into the function of the hands. Initially, I was having trouble finding a way to code it so that only the hands could push the green boxes, trying to use IgnoreCollision functions. However, I learnt that I could achieve the effect I wanted far more easily using Unity’s built-in layer collision matrix, a system that lets you limit which layers interact with each other.

The collision matrix for my project. The hands are set as pushers, and interact only with pushable layer objects.
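The matrix itself lives in Project Settings > Physics, but for reference, the same rules can also be set from code – the layer names below just mirror my setup and are otherwise assumptions:

```csharp
using UnityEngine;

// Script equivalent of the editor's layer collision matrix settings.
public class PusherLayerSetup : MonoBehaviour
{
    private void Awake()
    {
        int pusher = LayerMask.NameToLayer("Pusher");
        int defaultLayer = LayerMask.NameToLayer("Default");

        // Pusher hands pass through ordinary geometry...
        Physics.IgnoreLayerCollision(pusher, defaultLayer, true);

        // ...but still collide with the pushable boxes.
        Physics.IgnoreLayerCollision(pusher, LayerMask.NameToLayer("Pushable"), false);
    }
}
```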

After all these changes, the scene looked and functioned a lot better.

Hybrid Hands – Part 2: Socket Development

This post will show the development of my hand selector system.

Initially, as shown in my previous post, I had ideas of making the hand selection radial-based. However, this seemed a bit too generic and didn’t fit my design philosophy of turning hands into a gameplay feature. With that in mind, I gave it some thought and eventually came up with the idea of the player building their hands out of blocks, with different outcomes depending on the combination.

I did some digging for features that could make this system work. I briefly played with the idea of using arrays and collision enter functions, but eventually I found some decent documentation on XR Socket Interactors, and built my system with them. The hand initially starts as a simple base, with sockets for placing the different building pieces, as shown below. I wanted to keep the sockets to a minimum to make the concept easier to follow, as well as to code, so I based it around 4 main sockets.

How the hand would look initially when booting up the experience

The player would then build up their hands into certain structures, and when the parts were in place, they would function as the built hand should.

Mockup design of the simple hand with slotted in building parts.

My first documented issue implementing this was that when pieces were slotted into the hand itself, the collisions would launch the player backwards at rapid speed, pushing them away from the designated build area. It took some time troubleshooting, but it eventually turned out to be an issue with the XR collision radius.

Had to lower the radius to 0 to stop the hands from pushing against it constantly.

With the system now working in testing without issue, I designed a simple sheet to show the player their options in what they could build with the pieces, showing them the process of making their hands, swords, or claws.

In-game diagram
How the full final build area looks in-game

The reset button below simply resets the scene in case the player incorrectly builds their hands at any point.

With the design concepts out of the way, it’s time to talk more about the code that makes this work. There are two – arguably three – main scripts I designed to create the hands here.

The first we can call the Hand Selector. This is the primary script, called by the other two, and it is the one that swaps the hands out correctly once built.

To start, I had to use multiple bool values for this system to work correctly: separate ones for the hands, as well as for the tools. Above all the bools, I also made sure to call a few game objects to swap between or toggle active where needed.

If statements

The screenshot above is the core code, using if statements in the Update function to determine which hand to swap to, when to hide the base hand, and when to hide the finger game objects to create the illusion that the hand was built.
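In sketch form, the logic looks something like this – the bool and object names are stand-ins for my real ones:

```csharp
using UnityEngine;

// Stripped-down sketch of the Hand Selector.
public class HandSelector : MonoBehaviour
{
    public GameObject baseHand;   // the buildable base with its sockets
    public GameObject normalHand; // finished hand models to swap in
    public GameObject swordHand;
    public GameObject clawHand;

    // Flipped to true by the builder scripts as pieces are socketed.
    public bool fingersPlaced, bladePlaced, clawPlaced;

    private void Update()
    {
        // Each combination of placed pieces resolves to one built hand.
        if (fingersPlaced) Swap(normalHand);
        else if (bladePlaced) Swap(swordHand);
        else if (clawPlaced) Swap(clawHand);
    }

    // Hide the base (and the socketed pieces parented under it) and show
    // the finished hand in its place.
    private void Swap(GameObject builtHand)
    {
        if (baseHand.activeSelf)
        {
            baseHand.SetActive(false);
            builtHand.SetActive(true);
        }
    }
}
```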

The next scripts are the Hand/Tool selectors. They share many similarities, but are used for different processes.

Matching Tags function

For my system to work, I needed to limit where certain pieces could go. To do this, I made a function so that a piece can only be slotted into a socket if it shares the tag chosen in the selector. This worked well to limit the options while still allowing the player enough freedom to build.
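A minimal sketch of the idea, assuming the XRI 2.x socket API – the tag value is just an example:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch: a socket that refuses any piece whose tag doesn't match the
// one picked in the Inspector for this socket.
public class TagMatchSocket : XRSocketInteractor
{
    public string requiredTag = "Finger"; // example tag, set per socket

    public override bool CanHover(IXRHoverInteractable interactable)
    {
        return base.CanHover(interactable) && interactable.transform.CompareTag(requiredTag);
    }

    public override bool CanSelect(IXRSelectInteractable interactable)
    {
        return base.CanSelect(interactable) && interactable.transform.CompareTag(requiredTag);
    }
}
```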

A whole bunch of if statements

Calling the bools present in the Hand Selector script, these if statements set certain values to true depending on the tagged part that was socketed correctly. In tandem with the Hand Selector script, this means that once the correct combo of bools is true, the hand is built successfully.
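In sketch form, with hypothetical tags matching the Hand Selector sketch above:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch of the builder logic: when a socket accepts a piece, flip the
// matching bool on the Hand Selector.
public class HandBuilder : MonoBehaviour
{
    public HandSelector handSelector;
    public XRSocketInteractor socket;

    private void OnEnable() => socket.selectEntered.AddListener(OnPieceSocketed);
    private void OnDisable() => socket.selectEntered.RemoveListener(OnPieceSocketed);

    private void OnPieceSocketed(SelectEnterEventArgs args)
    {
        var piece = args.interactableObject.transform;
        if (piece.CompareTag("Finger")) handSelector.fingersPlaced = true;
        else if (piece.CompareTag("Blade")) handSelector.bladePlaced = true;
        else if (piece.CompareTag("Claw")) handSelector.clawPlaced = true;
    }
}
```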

Tool Building

The tool building script is a near carbon copy of the hand building one, with only a slight change to the specific bools it alters. This is because this script lives only on the child sockets attached to the pieces used to create the claw or sword, while the hand builder is solely used for building the hands. I made it this way to get around an issue where certain bool values were overlapping each other, causing problems when trying to build anything.

Finished building mechanic

Here’s how it looks with all the code in play. The sockets accept only pieces with the correct tags, and when they do, they set their assigned bools to true. With enough bools set to true, the pieces are hidden away, the base is deactivated, and the claws are put in its place, ready to use. From here, various functions are gated on specific hands being active, giving the idea that players need to build the right tool to solve the right problem.

VR Design Research Lab – Jamie Mossahebi Lecture

Jamie works for Epic Games, and gave a lecture on the usage of Unreal Engine. This was interesting because, while we were given some very basic knowledge of Unreal as a different game engine, we were primarily taught to use Unity for all our module projects.

The initial discussion was about the use of 360-degree video, where multiple camera rigs capture images that are then projected onto a sphere. Back when development of these videos was only just starting, the cameras had to be very close to one another to capture video for each individual eye, as these videos rely on the same stereoscopic rendering techniques that VR experiences commonly use.

The discussion was short, but we got to learn about a few of the 360 videos Jamie developed. The main one was a simulation of a taxi cab experience in VR using these techniques, but he also outlined some other attempts using different ideas, such as drones and boats.

After this short discussion, the topic switched to Unreal Engine and Epic Games. I have always been aware of Unreal, and I consider it a viable option for many developers in recent years, as Epic Games offers the software for free and gives a generous revenue split to games released on their platform using it.

While Unity primarily values function, giving users access to many tools to create their ideas, Unreal seems to strive more heavily for graphical fidelity, with tools such as MetaHuman being common practice in Unreal projects. In my opinion, this is a difficult line to follow for VR development, as high-fidelity assets can slow VR down due to its high processing demands. Beyond this, I also believe high fidelity is not always the best goal, and plenty of games are popular because they follow their own distinct art styles.

Briefly at the end of the lecture, Jamie talked about some of his experience working in the industry, and urged us to stop refusing to apply for jobs just because we are not complete experts in certain fields. The specific industry we wish to work in is always changing as more techniques and options come to the table, so no one stays an expert for long. I will do my best to take this advice to heart, in hopes it will give me the confidence to apply in areas I didn’t believe I could before.

VR Design Research Lab – Phoenix Perry Lecture

Our lectures with Phoenix Perry discussed the ideas of machine learning, and a program called InteractML.

The discussion began with a talk about how search engines use machine learning to filter searches, and as such, may be vulnerable to certain biases and discrimination. The core example was that when ‘man’ or ‘woman’ was searched on Google Images, the primary results contained pictures of Caucasian individuals.

While I understand the idea and the viewpoint being made, I genuinely believe there is no reason to try and combat it. If we tried to cater to every single individual on the net and their particular searches, the net would be a much different place with no semblance of a baseline. Beyond that, as stated, these results reflect the most common associations with certain words, and trying to change them may just end up confusing anyone searching for those items.

The next part of the discussion was on the usage of InteractML, a visual scripting tool designed for Unity to aid in the development of videogames without traditional coding techniques. The ML stands for the machine learning aspect of the program: it must be taught certain interactions by having them performed for it. Once the program has learnt them, those interactions can be wired up simply using the visual nodes.

The idea of specialized tools to aid in the development process has always interested me, having looked into software such as Gravity Sketch VR to create 3D models INSIDE VR for use in projects. In the case of InteractML, I would definitely be interested in giving it a try, having already seen the frustration traditional coding practices can cause for those inexperienced in their quirks.