Reflection:
During class this week, I found that exploring the circumplex model of emotions brought incredible insight into my work going forward. I would love to continue expanding my user experience design skills, and being able to understand the arousal states at which individuals experience specific emotions will prove instrumental in the development of my project this semester. In my exploration of surrealism through my Tumblr project, I reflected on the colour theory discussed in class and on my understanding of which emotions are associated with which colours. Something I also found fascinating was the idea of intentionally boring the audience in order to subvert expectations, which has given me spontaneous ideas for experimenting with the lights to evoke specific moods. I have also begun to think about how sonic gestures can be connected with physical gestures to create meaning for my users, and I want to develop my own understanding of how I can connect motifs throughout my project to specific feelings in the abstract, AI-generative approach I want to pursue.
Research:
Following my initial explorations into Pharos, I conducted some further exploration of MadMapper to see whether I could connect video to the lighting system in The Capitol. However, during class on Thursday we had a breakthrough: we were able to drop the video sequence directly onto the timeline, creating some incredibly fascinating lighting, which is exactly what I was hoping to do. Understanding that I can transfer video footage directly into Pharos without needing to manually map it has led me to explore AI generative tools more deeply, such as how I can create cinematic angles and build seamless transitions between a variety of prompts. I have found great success through Reddit and through firsthand experimentation with my AI tool of choice, Stable Diffusion. I am currently trying to decide what kind of narrative I would like to take, while I continue to experiment with prompts to figure out how I can emphasise the surreal through my project. I am also exploring options for audio generation, such as Rave DJ, which can allow me to combine YouTube tracks; however, there is some concern around copyright, so I am exploring a tool called 'Riffusion' which will allow me to generate my own personalised music using text prompts and images.
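To get a sense of how those seamless transitions between prompts might work under the hood, one idea I have been reading about is interpolating between prompt embeddings rather than swapping prompt text outright. Below is a minimal sketch of that approach using Hugging Face's diffusers library; the model checkpoint, prompts, seed, and frame count are placeholder assumptions rather than my actual settings, and it assumes a CUDA GPU is available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint -- swap in whichever Stable Diffusion model you use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt: str) -> torch.Tensor:
    """Encode a prompt into the text-embedding space the UNet conditions on."""
    input_ids = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(input_ids)[0]

# Hypothetical prompts for the two ends of a transition.
start = embed("surreal dreamscape, 8k, highly detailed, cinematic wide angle")
end = embed("melting architecture, 8k, highly detailed, cinematic wide angle")

num_frames = 30  # assumed length of the transition
for i in range(num_frames):
    t = i / (num_frames - 1)
    # Blend the two embeddings; reusing the same seed every frame keeps the
    # composition stable so only the semantic content drifts between prompts.
    blended = torch.lerp(start, end, t)
    frame = pipe(
        prompt_embeds=blended,
        num_inference_steps=30,
        generator=torch.Generator(pipe.device).manual_seed(42),
    ).images[0]
    frame.save(f"transition_{i:03d}.png")
```

Stitching the saved frames together (for example with ffmpeg) would then give a clip whose content drifts smoothly from the first prompt to the second, which is the kind of effect I am after for the Pharos timeline.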
Progress:
I have made some excellent progress over the past week, especially now that I know Pharos can take video input directly. I was able to successfully modify my AI generative tool to create more cinematic angles that were unobtainable with my previous models, producing a feeling of motion as the prompts change. Over the last week, I spent several hours iterating on my previous 'Eyes of the Beholder' demo for a new AI generative experiment, using an entirely new set of prompts that each cover a specific theme across a set range of frames: frames 0-150 contain the first prompt, 150-300 a different one, and 450-600 the final prompt. I kept the general thematic elements the same, such as the 8K quality and detailing keywords, while including different artists in each prompt to see how seamless I could make the transitions. I am incredibly pleased with the results so far, believe I may be able to expand the project beyond my original scope, and have begun experimenting with AI music tools that could accompany the video I aim to develop.
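For reference, the structure behind this kind of frame-based prompt scheduling is essentially a mapping from a start frame to a prompt, with the shared quality tags held constant and only the artist varying. Below is a minimal Python sketch; the prompt wording and artist names are placeholders rather than the ones from my actual experiment, and the frame boundaries mirror the ranges described above.

```python
# Shared thematic tags kept constant across every prompt in the schedule.
SHARED_TAGS = "8k, highly detailed"

# Hypothetical schedule: each key is the frame at which its prompt takes over.
PROMPT_SCHEDULE = {
    0: f"surreal dreamscape, {SHARED_TAGS}, in the style of artist one",
    150: f"shifting mirrored corridors, {SHARED_TAGS}, in the style of artist two",
    450: f"dissolving horizon, {SHARED_TAGS}, in the style of artist three",
}

def prompt_for_frame(frame: int, schedule: dict[int, str]) -> str:
    """Return the prompt whose start frame is the latest one at or before `frame`."""
    start = max(s for s in schedule if s <= frame)
    return schedule[start]

# Example: frame 200 falls after the 150 boundary, so the second prompt is active.
print(prompt_for_frame(200, PROMPT_SCHEDULE))
```

This start-frame-keyed format is, as far as I can tell, similar to how Deforum-style animation prompts are specified, which should make it straightforward to carry the same schedule into whichever tool I settle on.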
