FMP week 15 | analyse and restructure

Since the pandemic, there have been lots of virtual concerts on the market. I chose the most successful examples: the Fortnite virtual concerts of Travis Scott and Ariana Grande.

Based on the research analysis, I clarified the design goal of this project: to maximise the strengths of the medium to better tell the music's story through live performance.

The player journey has been kept up to date. I wrote down the original idea of the project and divided it into the executed part and the unfinished part.

The whole experience is made up of four edited songs. Before entering the environment, the audience joins a preparatory scene to receive their bracelets, analogous to the time spent queuing to enter a concert. In this scene, the audience can see other participants and learn the mechanics.

The audience enters Scene 1 through a lift, where there is no intricate environment and the musician sings on stage. A flying mechanic was applied in this scene to better convey the song's concept of flight. During the transition, the audience learns how to operate the teleport, which is also used in Scenes 2, 3 and 4. In Scene 3, the stage appears and the performer acts around it, with the audience positioned in the centre of the stage. Scene 4 is the opposite: the performer is in the centre and the audience is on the sides.

The audience can remain in the programme after the performance, surrounded by a chat room where they can leave comments or converse with others. If the programme were fully implemented, the NFT bracelet could also be acquired as a collectible.

Each stage design was based on the concept of its song.

In Stage 1, for instance, the original song is based on the idea of a footless bird that keeps flying until it dies, never landing. In this song, the creator is in a state of wanting to make a fresh start and find a way out of an impasse. In the experience, the idea of "flying" was adapted into a road on which the bird keeps soaring without ever touching the ground, just as a person continues to walk without stopping. The crisscrossing routes also indicate that the protagonist is confronted with a multitude of options, unable to see the path ahead, but propelled forward by a sense of conviction. Thus, the performer stands on a path resembling a ribbon that overlaps and intertwines, with flashing lights along the route indicating which direction he will face.

The second track focuses on a nocturnal fantasy. In his imagination, the past no longer haunts him.  He succeeds at whatever he sets out to do. The artist’s unwavering faith in his own abilities and his ability to realise his goals shines through in this track.

The song is about the musician reinventing himself by letting go of previous regrets and establishing a better world for himself. In the second stage, the inner world of the musician is reconstructed. This desolate environment contains all objects and locations associated with the past. All the objects, including the microphones, clothing, and notebooks for writing lyrics, are disproportionately sized, resembling an alien ruin. Some objects from the past appear in the scene as point clouds, a blend of fact and fantasy, all floating in the air without gravity, indicating that this is an empty imaginary inner world.

The third song, unlike the previous two, was composed specifically for the concert. In a live performance or on a music album, the sequencing of the songs is vital, as it directly impacts the emotions of the audience. The musician wrote Thought with this in mind, employing a grand musical vision to bring the emotions to a peak. Thought's musical concept derives from the challenges he continually faces on his quest for self-discovery, as he seeks to prove that he is capable and deserving. It resembles the trek undertaken by the protagonist in every adventure film. In the virtual setting, I was reminded of a visual language widely employed in motion pictures, in which the characters remain in the middle of the frame while everything around them constantly transforms. This editing method is frequently used to depict characters travelling between numerous realities or experiencing multiple memories simultaneously, as in the film Everything Everywhere All at Once. This is also the main design approach of the third scene: its constant shift between numerous universes symbolises the performer's never-ending search for his own era.

In the last scene, I'm hoping to set a positive tone for the audience. Just as every film and every stage play has an ending, the song One Day is a wonderful finish: a depiction of a beautiful fantasy world, a utopia of a dream world. It is about one day becoming what you once desired. So in Scene 4 there is a "stage" with the performing platform in the centre and the audience surrounding it, accompanied by numerous dreamy abstract pieces, with all the materials tending towards transparent, crystal-like textures to create this precious sensation. The previous three scenes take place in darker locations; in the fourth scene I wanted to convey the visual feeling of dawn breaking. The whole experience progressively transitions from darkness to brightness: the audience accompanies the performer from late night till daybreak, thus beginning a new chapter.

To better communicate with the musician, a Timeline was used to describe which effects would appear at which stage. A virtual concert, unlike a film script, does not follow an exact order of events; rather, it follows a tight chronology of when effects will occur. To clarify the production process, the timeline is separated into three major sections: environment, characters, and music. The environment refers mostly to variation in the main stage and the special effects, denoted in orange and slightly lighter orange, respectively. The music is separated into distinct portions, like pre-chorus and rap, with the lyrics and their English translations placed in the corresponding timelines.
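In Unity, these three sections could be represented as authored cue tracks. The sketch below is a minimal, hypothetical data structure for that idea — the names and fields are illustrative, not the project's actual implementation:

```csharp
using System;
using UnityEngine;

// Hypothetical cue structure mirroring the three timeline sections:
// environment (stage changes + VFX), characters, and music.
public enum TrackType { Environment, Vfx, Character, Music }

[Serializable]
public class TimelineCue
{
    public TrackType track;   // which section the cue belongs to
    public float startTime;   // seconds into the song
    public string label;      // e.g. "pre-chorus", "rain VFX on"
}

public class ConcertTimeline : MonoBehaviour
{
    public TimelineCue[] cues; // authored in the Inspector, sorted by startTime
}
```

Keeping the cues as plain data like this makes it easy to hand the same chronology to both the musician and the developers.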

There are many types of virtual concert avatars: Travis Scott's Fortnite concert and Wave VR both construct a character avatar and attach motion capture to it for live streaming. There are also numerous other methods for visualising characters, such as capturing them on camera and visualising them in real time using particle systems, or using 3D-scanned models to generate characters. As the entire project was developed in Unity, it was decided to create a virtual character of the musician there.

The character's design was based on the image of the musician, then sculpted in Blender, bound to the completed outfit, and finally connected to the motion-capture file.

In the timeline and player journey, there is a section about the movement design and traffic flow of the character. Although the live performance includes improvisation, the musician's traffic flow can still be designed around the virtual environment and VFX; for example, the performer can interact with the environment and avoid colliding with the rocks.

The project was recorded in the digital learning lab at LCF, where Xsens equipment was used to record the performer's body movements. Facial expressions were recorded on a phone via Face Live. Throughout the process we didn't have gloves available to record data from the hands.

After completing all of the movement rigs, the avatar's performance animation was created by compositing it in iClone, attaching the garments, and importing it into Unity. As the model was relatively long and slender, the character appeared twisted in some postures, necessitating numerous adjustments to the animation to achieve the current outcome.

To emphasise the performer's non-traditional masculinity, each virtual costume was developed specifically for the performance. Considering the first song's major themes — a "bird without legs" and "flying" — feathers play a significant part in the ensemble's construction, and they were also the inspiration for the face accessory. The second scene takes place in zero gravity, so the garment has a silky, feathery feel with minimal design. The third scene's song has a tremendous amount of energy, so power is the operative word for the costume; a cloak and back ornaments further cement this impression. In the last scene, a "knight of thorns" was used as the costume concept to echo the theme of the dawn-breaking scene.

To demonstrate the texture of the garments, we created custom physics effects in Unity to imitate their movement. Despite some technical difficulties, the clothing physics were implemented successfully, with the clothing responding to gravity as the character moves.
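A minimal sketch of how this could be set up with Unity's built-in Cloth component — the specific values here are illustrative assumptions, not the ones used in the project:

```csharp
using UnityEngine;

// Rough sketch of runtime garment-physics configuration.
// Assumes the costume mesh has a Cloth component (Unity's built-in cloth solver).
public class GarmentSetup : MonoBehaviour
{
    void Start()
    {
        Cloth cloth = GetComponent<Cloth>();
        cloth.useGravity = true;          // fabric falls as the character moves
        cloth.damping = 0.2f;             // softens jitter from mocap motion
        cloth.worldVelocityScale = 0.5f;  // how much character motion drags the cloth
    }
}
```

Tuning `damping` and `worldVelocityScale` is usually what keeps mocap-driven clothing from jittering or lagging unnaturally.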

Based on earlier interviews with the performers, the existing design had deficiencies in both interaction design and sound design, so several modifications were made to address these issues.

Teleportation:

Since teleportation is the main way to move around in Scene 2, the audience can also use it to reach several viewing spots. But because the teleportation area wasn't clearly marked, the audience didn't know where they could teleport. So, in the transition scene, the performer now explains how to use the teleport in Scene 2, and signs for the teleportation area appear. This lets the audience know how to interact before the experience starts. The length of the raycast and the range of the teleport area were also tuned for the controller.
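Assuming the teleport is driven by the XR Interaction Toolkit's ray interactor (property names from that toolkit; the distance value is illustrative), the raycast tuning might look like this:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch of the teleport-ray tuning described above.
public class TeleportTuning : MonoBehaviour
{
    public XRRayInteractor rayInteractor;

    void Start()
    {
        // Longer ray so distant viewing spots are reachable.
        rayInteractor.maxRaycastDistance = 15f;
        // A projectile arc reads more naturally for aiming at floor areas.
        rayInteractor.lineType = XRRayInteractor.LineType.ProjectileCurve;
    }
}
```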

Sound design:

After receiving feedback from the performer about the sound design, the project was revised to include sound effects. A new sound-effects section has been added to the original Timeline, and every sound component that could be added was marked with an icon. In the pre-scene, for instance, the ambient sound is an undercurrent of noise and processed music, intended to create the effect of a concert heard from outside — audible but unclear. The bracelet emits a low-frequency sound to attract the audience, matched to the frequency of the bracelet's animation. The primary sound is an AI-generated voiceover that leads the audience through the event. When the bracelet is picked up, or when the controller is used to attempt to fly, accompanying sound cues help the user differentiate between game situations.
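A minimal sketch of those interaction cues: one AudioSource plays a short clip per event type. The method names are placeholders for whatever the project's pickup/fly events are actually called:

```csharp
using UnityEngine;

// Hypothetical sound-cue hookup for the pre-scene interactions.
public class InteractionAudio : MonoBehaviour
{
    public AudioSource cueSource;
    public AudioClip braceletPickup;
    public AudioClip flyAttempt;

    // Wire these to the corresponding interaction events.
    public void OnBraceletPicked() => cueSource.PlayOneShot(braceletPickup);
    public void OnFlyAttempt()     => cueSource.PlayOneShot(flyAttempt);
}
```

`PlayOneShot` lets several short cues overlap without cutting each other off, which suits rapid interactions like grabbing.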

More sound effects, such as rain, shock waves, and fire, were added to the main stage based on the VFX. This is more complex than the pre-scene, however: because the songs are fuller and contain multiple sound frequencies, some sound effects may be overpowered by the original music. To resolve this, the songs must be remixed around the points where the sound effects appear, in order to eliminate overlapping frequencies.

FMP week 1-5 | concept development

The concept actually came from quite a long time ago, when I first watched the Fortnite virtual concert. "It is so COOL," I thought, and then I started thinking about the possibility of developing a virtual concert in VR.

In my opinion, virtual concerts have some strengths that physical concerts can't compete with: the environment has no physical limitations yet still maintains the same acoustics. A virtual concert is like a combination of a concert and a music video. In this context, I think virtual concerts have more space to develop, even in the narrative of the music.

That said, the virtual concert won't replace the real concert.

Critical Practice | last week: Chapter 2 and final changes

After the second round of user research, I started on Chapter 2 and the changes to Chapter 1.

For instance, I found the voiceover in Chapter 1 a bit too long. Although the voice enhances the characterisation, it may reduce interest in experiencing the whole project.

Although the majority of people didn't think it was too long, we still decided it was better to keep it to around two minutes.

Some people also mentioned that some of the subtitles were outside the canvas, and that they would be quite interested in seeing some visual effects while listening to the backstory.

So I am trying to visualise the situations the voiceover describes.

Meanwhile, because some players might not know how to use the controller, I added a tutorial at the beginning to show them.

At the same time, we decided to change Chapter 2 to follow a camera path, because users already found Chapter 1 difficult. Our initial thought was to have them experience the same mechanics once again, but based on the feedback we decided to simply let the player watch it unfold.

So I animated all of the camera and object animations.

And also the credits scene.

Critical Practice | week 17 user research and iteration

After merging with Xuan's file, we planned to run this user research in the classroom.

The main aim of this research is to test how people react to the voiceover. Because this experience is deliberately vision-limited, we used a lot of voiceover and sound effects to guide the audience.

The first aspect investigated in the interviews was the backstory of the main character. We found that a detailed story background and voice acting are necessary to enhance the characterisation, but they might make the experience too long, especially when the character tells the story without any visuals. That said, 60% of interviewees did not think the backstory was too long.

Feedback on the clarity of the voice guide was given both generally and on specific details. The general feedback is quite encouraging, with most users finding the experience was what they expected: "very close" to the life of blind people.

The research took place in the classroom. We decided to invite five people to go through the experience and answer a few questions.

The interview questions focus on how clear the interactivity is.

https://docs.google.com/forms/d/1f9OBGgar0FdR6JIpJdnUDcuP5TPVoDsGSew3UJCDUhw/edit

And we got many results for iterating on this project.

The interesting thing is that everyone rated the difficulty of each task very differently. I think that's mainly because we have a really long voiceover in each scene, and if someone accidentally triggers a task, the next voiceover starts playing. The voices then overlap, so people miss something.

So some people completed the task successfully, but some didn't.

Based on the results, we assigned tasks to each other. I will keep working on adjusting the subtitles and the menu, and continue working on the last chapter.

Specific feedback on the clarity of individual interactions varies. For instance, the task of "finding clothes" is quite easy for some players because the hint is unmistakable. But there were also comments that missing the voiceover makes judgment even harder in a fully dark environment.

Another issue, mentioned by multiple interviewees, is that multiple audio clips were triggered at the same time, which may hinder the continuity of the experience.
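One possible fix for the overlapping audio would be to queue voiceover clips on a single AudioSource so a newly triggered line waits for the current one to finish. This is a sketch of the idea, not the project's actual implementation:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Queues voiceover clips so triggered lines never talk over each other.
public class VoiceoverQueue : MonoBehaviour
{
    public AudioSource source;
    private readonly Queue<AudioClip> pending = new Queue<AudioClip>();

    // Task triggers call this instead of playing their clip directly.
    public void Enqueue(AudioClip clip) => pending.Enqueue(clip);

    void Update()
    {
        // Start the next queued line only when the current one has finished.
        if (!source.isPlaying && pending.Count > 0)
        {
            source.clip = pending.Dequeue();
            source.Play();
        }
    }
}
```

The trade-off is that queued lines can arrive late relative to the action that triggered them, so very time-sensitive cues might still need their own channel.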

The unpredictability of the triggers, and the fact that each voiceover plays only once, led to divergent feedback on the individual interactions. This requires us to develop the tasks in a more structured and logical way, such as by cutting down the length of the narrative or adding more task-based navigation. Meanwhile, both users and developers agree that the difficulty of the tasks is necessary to the concept.

In addition to the clarity of the interaction difficulty, half the interviewees mentioned that the voiceover didn't make it clear when a task was completed. For instance, one participant felt there wasn't much acknowledgement when they completed a task; the experience just moved on. Some interviewees suggested adding visual or sound effects on completion, or making the spatial sounds clearer. This ties into the suggestions for enriching the sound design mentioned above.

In general, the interviewees felt the experience was clear throughout; however, the comments above provide important feedback for future revisions.

Critical Practice | Week 16 Merge the scenes and update the UI menu.

This week, Rita helped record the voiceover for the whole experience, so I am trying to add as much voice guidance as possible.

That means adding subtitles to each scene to make the voiceover clearer.

I connected the first two scenes; the backstory is also the first scene.

Meanwhile, I found this experience can be hard to understand. For example, when the voiceover first starts, nothing visual appears on the screen; things only emerge as time passes.

As for the wrist menu, I am trying to make the layout like the prescription. Users can restart the current scene they are in, or restart the whole chapter if they missed something, and there is also a button if they want to exit the game.

It was a bit tricky when I started, because to use the menu button to open the wrist menu I needed to make a new input system and write every function the menu needs. I spent a couple of days figuring it out.
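The core of the menu-button hookup, using Unity's new Input System, could look something like the sketch below. It assumes an InputActionReference bound to the controller's menu button (the exact binding path varies by headset), with the menu object simply shown or hidden:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Toggles the wrist menu when the controller's menu button is pressed.
public class WristMenuToggle : MonoBehaviour
{
    public InputActionReference menuButton; // bound to the menu button in the input asset
    public GameObject wristMenu;

    void OnEnable()
    {
        menuButton.action.performed += Toggle;
        menuButton.action.Enable();
    }

    void OnDisable() => menuButton.action.performed -= Toggle;

    private void Toggle(InputAction.CallbackContext ctx)
        => wristMenu.SetActive(!wristMenu.activeSelf);
}
```

Restart/exit buttons on the menu itself can then call scene-loading and quit functions directly.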

I wanted to add a hint for every scene at the beginning, but it turned out to be much more complicated than expected.

Originally I hoped it could offer some clues to help players understand, but in practice it proved difficult to realise, so I decided to just share the same menu across scenes.

I'm trying to do it with a game-manager approach.

Critical Practice | week 15 start scene and UI

Two weeks ago I made the window-opening effect, and I found an easy way to trigger the volume.

Basically, just expose a float for the volume and change it when the player triggers the window.
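A minimal sketch of that pattern, assuming the float drives an AudioSource's volume and the window has a trigger collider (tag and field names are placeholders):

```csharp
using UnityEngine;

// Fades an ambient AudioSource up when the player enters the window's trigger.
public class WindowVolumeTrigger : MonoBehaviour
{
    public AudioSource ambient;
    public float targetVolume = 1f;
    public float fadeSpeed = 0.5f; // volume units per second
    private bool opened;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) opened = true;
    }

    void Update()
    {
        if (opened)
            ambient.volume = Mathf.MoveTowards(
                ambient.volume, targetVolume, fadeSpeed * Time.deltaTime);
    }
}
```

The same expose-a-float idea works for a post-processing Volume's weight if that is what the trigger controls instead.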

Meanwhile, I am also making the start scene.

The plan was to place the user in infinite black, but when I tested it in Unity it gave the feeling that we hadn't finished making the experience. It may also confuse people who aren't aware of what this experience is for.

So we decided to add some visual effects and UI to this scene.

I found this curved UI and am trying to make it work, and I also gathered references for VR UI from different VR experiences.

book of distance

I think many VR films don't set up buttons to inform the player, but on the contrary there are lots of VR UI examples in games.

I think it depends on the complexity of the experience: if the controller has more functions and the story has lots of gameplay across different chapters, then a UI interface is necessary.

But in our experience there is a cognitive gap between sighted people and visually impaired people, and with the deliberate loss of vision it becomes harder to understand and explore. On top of that, I think it's necessary to have this UI system.

As in The Book of Distance, I think that's a really simple way to build this system. It only has three functions: exit, main menu and continue. It helps users understand where they are and how to quit.