Portfolio

Animation – Final

For my animation final, I chose to create an Oculus game built in Unity. To create it, I had to do some programming in C# to build colliders that generate new platforms when the player leaves the previous one, and to integrate Oculus support into my Unity game.

The game begins with the player standing on a small carpet in space, with clouds below them. They listen to a robot reciting Manifesto for a European research network into Problematic Usage of the Internet, set to William Basinski’s Mono no Aware. Once the player tires of this esoteric soundscape, they can jump off the carpet and fall onto a platform composed of the video and audio components of Rick Astley’s Never Gonna Give You Up music video. Based on feedback, I added ten more platforms composed of various internet video memes for the player to fall through before reaching the Never Gonna Give You Up platform.

This project was a metaphor for the addictive nature of the internet and for the interconnectedness of internet culture and mindless media consumption. We know that overuse of the internet is addictive and harmful to our mental and physical health. This is not to discount the internet’s positive content; rather, it is a mindful recognition that “mindful” and “mindless” content are two sides of the same coin where internet abuse is concerned.

Playtest of My First Game Version

Two Playtests of My Second Game Version

Animation – Understanding Comics Response

The value of Understanding Comics in the context of animation is multifaceted. On the one hand, animation surpasses the technical capabilities of comics, since movement is communicated more fluidly. On the other, the “six steps” McCloud outlines for creating any art form apply directly to animation, particularly because both are visual media. Iconography, distortion of reality, and closure (topics covered in chapters 2 and 3) are also directly applicable to animation, as they are narrative mechanisms unique to visual media, regardless of whether time is handled fluidly or in discrete moments. In my opinion, the beauty McCloud sees in comics, in their capacity to engage the five senses and our emotions, is just as pertinent to animation. His wariness of over-writing comics transposes neatly onto the sentiment that animation directors should not rely on exposition to tell their story (a brilliant example of this is Genndy Tartakovsky’s Primal); both media should rely on images to facilitate engagement, particularly through the use of color. Overall, I found Understanding Comics to be an ideological restatement of the tenets we learned in class, well packaged in an engaging comic style.

ICM Final – My Beautiful Dark Twisted Recursive Trees

For my final, I chose to create a recursive tree that responds to sentiment analysis of text typed by the user. For every “positive” word, I wanted the tree to gain a flower, and for every “negative” word, I wanted a flower to disappear.
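As a rough illustration of that logic (not my actual project code), here is a minimal p5 sketch that uses a tiny stand-in word list in place of a full sentiment lexicon; the words, scores, and flower placement are all placeholders:

```javascript
// Hypothetical mini-lexicon standing in for the real word-scoring schema.
const wordScores = { love: 1, bloom: 1, hate: -1, gloom: -1 };
let input; // DOM text field
let flowers = []; // {x, y} positions of flowers on the "tree"

function setup() {
  createCanvas(400, 400);
  input = createInput('');
  input.changed(scoreInput); // fires when the user commits text
}

function scoreInput() {
  for (const word of input.value().toLowerCase().split(/\s+/)) {
    const score = wordScores[word] || 0;
    if (score > 0) {
      // a "positive" word adds a flower at a random position
      flowers.push({ x: random(width), y: random(height) });
    } else if (score < 0 && flowers.length > 0) {
      // a "negative" word removes one
      flowers.pop();
    }
  }
}

function draw() {
  background(220);
  // (tree drawing omitted) one pink dot per flower
  noStroke();
  fill(255, 100, 150);
  for (const f of flowers) circle(f.x, f.y, 10);
}
```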

Before I discuss my process and its issues, I would like to state my position on sentiment analysis, which I included in a file with my cited sources in my final project:

We live in a time when innovation often outpaces critical thought vis-à-vis the social systems we change and, subsequently, the people we affect. Sentiment analysis is a perfect example of a technology that seems cool at first glance, but can be, and has been, used to perniciously exacerbate problematic social structures, particularly, but not limited to, racial discrimination.

I did not create the coding schema that values and devalues certain words based on their perceived positivity. In fact, I take great issue with the rudimentary nature of this schema and find it necessary to communicate that context is what truly determines how one may interpret a given word or phrase. Furthermore, attending to how we choose our words, the history of our phrases, and what they mean in relation to different lived experiences is imperative to communicating in a manner that diminishes the hostility, intentional or not, of our communication.

The greatest struggle of this project was figuring out how to map flowers onto the tree. I tried a handful of techniques before executing a true attempt at mapping. The following two attempts worked conceptually, though I brought the latter to “completion” because it was easier to finish in the time I had before the assignment was due.

Attempt 1

Using an approach conceptualized by Cassie Tarakajian, and with the assistance of Alden Jones, I attempted to map my recursive tree by “cheating” (in a super complicated way). Essentially, I would:

  1. In a first animation, generate a recursive tree.
  2. In a second animation, upload the image of the tree, map the pixel positions of the image using edge detection and black-pixel recognition arrays, load 50 random pixel positions from these arrays into a JSON file, and export that JSON file.
  3. In a third animation, upload the image of the tree and the JSON file, and use the JSON coordinates to place my flower constructor.

I stopped at step 3 due to deadline constraints; I now understand that I need to change step 2 so that edge detection does not produce an inverse-color image. A rough sketch of that step follows.
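This sketch substitutes a simple darkness threshold for full edge detection; the file names and the threshold value are assumptions, not my project’s actual values:

```javascript
// Find dark (branch) pixels in the tree image and export 50 random
// positions with p5's saveJSON(). 'tree.png' is a placeholder name.
let treeImg;

function preload() {
  treeImg = loadImage('tree.png');
}

function setup() {
  createCanvas(treeImg.width, treeImg.height);
  image(treeImg, 0, 0);

  treeImg.loadPixels();
  const darkPixels = [];
  for (let y = 0; y < treeImg.height; y++) {
    for (let x = 0; x < treeImg.width; x++) {
      const i = 4 * (y * treeImg.width + x);
      const r = treeImg.pixels[i];
      const g = treeImg.pixels[i + 1];
      const b = treeImg.pixels[i + 2];
      // "black pixel recognition": treat near-black pixels as branches
      if (r < 40 && g < 40 && b < 40) {
        darkPixels.push({ x, y });
      }
    }
  }

  // pick 50 random branch positions and export them as JSON
  const positions = [];
  for (let n = 0; n < 50 && darkPixels.length > 0; n++) {
    const pick = floor(random(darkPixels.length));
    positions.push(darkPixels.splice(pick, 1)[0]);
  }
  saveJSON(positions, 'flowerPositions.json');
}
```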

Attempt 2

With the guidance of Max Da Silva, I developed a recursive tree generated from more complex vector geometry. This made it considerably easier to map the positions of each branch’s nodes, which in turn facilitated an easier “random” generation of flowers. I played around with some prototypes before connecting the tree to sentiment analysis; these prototypes are linked below.
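Here is a compact sketch of the vector-based idea, though not Max’s or my exact code: each recursive call stores its branch endpoint in an array, so flowers can later be drawn on real nodes. The angles, shrink factor, and depth are arbitrary choices for illustration:

```javascript
let nodes = []; // endpoints of every branch

function setup() {
  createCanvas(400, 400);
  background(255);
  stroke(60, 40, 20);
  branch(createVector(width / 2, height), createVector(0, -100), 7);

  // place a flower on a few random branch nodes
  noStroke();
  fill(255, 100, 150);
  for (let i = 0; i < 15; i++) {
    const n = random(nodes);
    circle(n.x, n.y, 8);
  }
  noLoop();
}

function branch(base, dir, depth) {
  if (depth === 0) return;
  const tip = p5.Vector.add(base, dir);
  line(base.x, base.y, tip.x, tip.y);
  nodes.push(tip); // remember this node for flower placement

  // two children: rotate the direction vector and shrink it
  branch(tip, p5.Vector.mult(dir, 0.7).rotate(radians(25)), depth - 1);
  branch(tip, p5.Vector.mult(dir, 0.7).rotate(radians(-25)), depth - 1);
}
```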

Comm Lab Animation – Perfect Pair

The purpose of this assignment was to strengthen our skills in Adobe After Effects. My partner, Hanwen Zhang, and I created Perfect Pair as a commentary on the normalization of excess in consumption, and on the paradoxical judgment we impose on those who overindulge.

The most difficult element of this animation was the masking and rotoscoping involved in layering the characters and bodily fluids; most of it had to be done manually, frame by frame. Had there been more time, I would have appreciated greater insight into making more realistic fluid textures and smoothing the transition between the “flood” and “kaleidoscope” sequences. I thoroughly enjoyed working on the sound design for this piece. I also enjoyed adding the Kobayashi Television sequence based on feedback we received on a rough draft; its concept integrates seamlessly into our piece’s larger message.

ICM – Weeks 9 & 10 Assignment

For this week’s assignment, our class was tasked with integrating sound into a p5 animation. This was an interesting challenge, as I aimed to make a drum pad, usable on both desktop and touch-screen devices, that played different lyric sections of Daft Punk’s Harder, Better, Faster, Stronger.

I tried three different methods to reach my end result, accessible in the buttons below.

In six words, I would describe this project as:

Tactile, Colorful, Harder, Better, Faster, Stronger

Users may interact with my final sketch on touch screens using the preview link.
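For anyone curious how a pad like this can serve both input types, here is a stripped-down sketch of the core logic, assuming the p5.sound library; the grid size and the lyric-clip file names are placeholders, not my actual assets:

```javascript
let clips = [];
const cols = 4, rows = 4; // 16 pads

function preload() {
  for (let i = 0; i < cols * rows; i++) {
    clips.push(loadSound('lyric' + i + '.mp3')); // placeholder files
  }
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  // draw the grid of pads
  for (let i = 0; i < cols * rows; i++) {
    const x = (i % cols) * (width / cols);
    const y = floor(i / cols) * (height / rows);
    fill((i * 40) % 255, 100, 200);
    rect(x, y, width / cols, height / rows);
  }
}

function playPad(px, py) {
  const i = floor(px / (width / cols)) + cols * floor(py / (height / rows));
  if (clips[i]) clips[i].play();
}

function mousePressed() {
  playPad(mouseX, mouseY);
}

function touchStarted() {
  playPad(mouseX, mouseY); // p5 mirrors the first touch into mouseX/Y
  return false; // prevent default browser touch behavior
}
```

Returning false from touchStarted is what keeps mobile browsers from scrolling or zooming instead of triggering a pad.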

If I had more time, I would have added an introductory popup epilepsy warning with the song’s chorus quietly playing in the background. The drum pad would become accessible once the user clicked the warning, which would be superimposed over the helmet of one of Daft Punk’s members. I have stored all of these media files in my sketch and intend to add this feature in the near future.
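A sketch of how that gate might work (again, not yet built; the file name and the pad-grid helper it defers to are hypothetical):

```javascript
let warningShown = true;
let chorus;

function preload() {
  chorus = loadSound('chorus.mp3'); // placeholder file name
}

function setup() {
  createCanvas(400, 400);
  chorus.setVolume(0.2);
  chorus.loop(); // browsers may defer audio until the first user gesture
}

function draw() {
  if (warningShown) {
    background(0);
    fill(255);
    textAlign(CENTER, CENTER);
    text('EPILEPSY WARNING - click to continue', width / 2, height / 2);
    return; // nothing else renders until the warning is dismissed
  }
  background(220);
  // ...pad grid and playback, as in the sketch above...
}

function mousePressed() {
  if (warningShown) {
    warningShown = false; // unlock the pads
    chorus.stop();
  }
}
```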

Pcomp – Revised Prototype

This week our group re-evaluated our project concept based on user feedback from our previous class. Our users made two things clear:

1. The metaphorical connection between fireflies, fire, and personal/psychological connection was not entirely clear.

2. When handed a jar, it is not universally intuitive to try to catch fireflies with it.

Based on this feedback, we decided to refrain from integrating a Microsoft Kinect to track our users’ body motions. We also made progress on our animation, figured out how to integrate our Muse headset into it, and took steps toward building the physical jar trigger.

Pcomp – Paper Prototype Prep

This week, my group was tasked with developing a project description, a system diagram, a timeline, and a bill of materials in preparation for our playtesting.

As a group, we constructed our final narrative and drew out the necessary components, shown in the images below:

Essentially, we will have three components linked to a p5 sketch: a NeuroSky EEG reader, a jar that can light up and vibrate, and an Xbox Kinect camera. Our animation will begin as a flame against a dark background. When our user twists off the top of the jar, the fire will extinguish and turn into smoke. Depending on our user feedback, the user will be prompted to focus on the smoke, which will turn into an avatar for the user as fireflies begin to emerge from the darkness. Using the jar, with focus registered by the NeuroSky, our user may capture up to 10 fireflies. For every firefly caught, the jar will become brighter and vibrate. Once the user has caught 10 fireflies, the animation will revert to a fire, and the user will reset the piece by replacing the cap on the jar.
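To make the flow concrete, here is a rough p5 state machine for the sequence above; every hardware reading (jar lid, EEG focus, firefly catches) is stubbed out with keyboard and timer placeholders, since none of that code exists yet:

```javascript
let state = 'FIRE'; // FIRE -> SMOKE -> AVATAR -> back to FIRE
let firefliesCaught = 0;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  if (state === 'FIRE') {
    drawFlame();
    if (jarLidOpened()) state = 'SMOKE'; // lid off extinguishes the fire
  } else if (state === 'SMOKE') {
    drawSmoke();
    if (userIsFocused()) state = 'AVATAR'; // focus registered by the EEG
  } else if (state === 'AVATAR') {
    drawAvatarAndFireflies();
    if (caughtFireflyThisFrame() && firefliesCaught < 10) {
      firefliesCaught++;
      brightenJar(firefliesCaught); // jar glows brighter and vibrates
    }
    if (firefliesCaught >= 10 && jarLidClosed()) {
      firefliesCaught = 0;
      state = 'FIRE'; // replacing the cap resets the piece
    }
  }
}

// Keyboard and timer stand-ins for the real hardware inputs:
function jarLidOpened() { return keyIsPressed && key === 'o'; }
function jarLidClosed() { return keyIsPressed && key === 'c'; }
function userIsFocused() { return keyIsPressed && key === 'f'; }
function caughtFireflyThisFrame() { return frameCount % 60 === 0; }
function brightenJar(n) { console.log('jar brightness level:', n); }

// Placeholder visuals:
function drawFlame() { fill(255, 120, 0); circle(width / 2, height / 2, 60); }
function drawSmoke() { fill(150); circle(width / 2, height / 2, 60); }
function drawAvatarAndFireflies() { fill(255, 255, 150); circle(random(width), random(height), 6); }
```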

We decided that we would determine our floor plan based on user feedback.

Our group has been working on dynamic timeline and bill-of-materials documents, which we have made accessible on this page.

ICM – Week 8 Assignment

This week’s assignment was to create a sketch using pixelated images and describe it under specific parameters. I created a “Who’s That Pokemon” game with Neta Bomani; our description is as follows:

We based our animation on the “Who’s That Pokemon?” television bump.

We were asked to describe the experience in 5-10 words; we chose: A memory game to flex your nostalgic fandom for Pokemon.

The sketch is split into 8 sections based on each Pokemon: Bulbasaur, Mewtwo, Jigglypuff, Electabuzz, Infernape, Squirtle, and Vileplume.

We were also asked to describe each section in 2-3 words below:

Section 1: This is Bulbasaur.

Section 2: This is Mewtwo.

Section 3: This is Jigglypuff.

Section 4: This is Electabuzz.
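For readers curious about the core trick, here is a hedged sketch of one way to build a silhouette-and-reveal in p5; the sprite file name is a placeholder, and this is not our exact implementation:

```javascript
let img, silhouette;
let revealed = false;

function preload() {
  img = loadImage('bulbasaur.png'); // placeholder sprite with transparency
}

function setup() {
  createCanvas(400, 400);
  // build the silhouette once: copy the sprite, blacken every opaque pixel
  silhouette = img.get();
  silhouette.loadPixels();
  for (let i = 0; i < silhouette.pixels.length; i += 4) {
    if (silhouette.pixels[i + 3] > 0) { // keep alpha, zero the color
      silhouette.pixels[i] = 0;
      silhouette.pixels[i + 1] = 0;
      silhouette.pixels[i + 2] = 0;
    }
  }
  silhouette.updatePixels();
}

function draw() {
  background(255);
  image(revealed ? img : silhouette, 0, 0, width, height);
}

function mousePressed() {
  revealed = !revealed; // toggle between the question and the answer
}
```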

Stop Motion

The work documented in this post makes gratuitous use of the middle finger. Discretion advised, I guess.

With my partner, Sarah Liriano, I shot a stop-motion animation set to Oliver Nelson’s Reuben’s Rondo. In it, my character struggles with his anthropomorphized middle finger after being jostled by a colleague. Though the tone of the animation is light, I intended this piece to be a physical representation of the struggles individuals with obsessive-compulsive, mood, and pervasive-thought disorders may face in their day-to-day lives.

We ended up using Dragonframe software and a Canon EOS Mark III camera to capture and stitch our stop-motion images. I had anticipated how time-intensive shooting the animation would be: approximately five hours in total. We planned well ahead, writing our storyboard before the day we shot. However, because we neglected to mark up my finger during the shooting phase, editing in the middle-finger face (which Sarah created in Adobe Illustrator) required tedious, manual, frame-by-frame adjustment to align it with my finger.