Physical Computing Final – Fireflies

My project was built in collaboration with Abby Lee and Tianxu Zhao. Together we built “Fireflies”, a concentration game that uses a Muse EEG headset, a p5 animation shown through a projector, and a Bluetooth-enabled jar trigger. When a user opens the jar, our animation of fireflies appears. When the user is then prompted to focus, the Muse transmits that they are in a state of concentration, and the fireflies in the animation cluster together in response. We see this game as an interesting jumping-off point for many neuroscience-related conversations.

I have had laboratory experience working with electroencephalography (EEG), which reads the brain’s electrical activity as voltage fluctuations at the scalp. Different types of frequencies, also known as neural oscillations, occur simultaneously all over the brain, and their functions vary depending on the region in which they take place. We want to be clear with our audience: the Muse is not a scientific apparatus. In fact, the company overstates both the types of data the headset reads and the accuracy of that data. That said, the Muse is capable of reading very localized, very simple frequency and amplitude changes that are understood to be associated with the presence and absence of concentration. Concentration in and of itself is not an entirely well-understood phenomenon. However, at a reductive level that reflects the Muse’s limitations, our animation is contingent on a change from lower-frequency, higher-amplitude oscillatory behavior (usually referred to as alpha), commonly associated with a resting state, to higher-frequency, lower-amplitude oscillatory behavior (usually referred to as beta), commonly associated with concentration.
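
Our mapping from band activity to animation can be sketched as a simple ratio threshold. This is an illustrative reconstruction in JavaScript, not the Muse’s actual API; the function name, the threshold, and the scaling constants are all assumptions:

```javascript
// Illustrative sketch: turn smoothed alpha/beta band amplitudes
// (arbitrary units from the headset pipeline) into a 0..1
// "concentration" value the animation can consume directly.
// The 0.5 offset and 1.5 scale are made-up tuning constants.
function concentrationLevel(alpha, beta) {
  // A higher beta-to-alpha ratio is commonly read as a shift
  // away from resting state toward concentration.
  const ratio = beta / (alpha + 1e-6); // avoid divide-by-zero
  // Clamp into 0..1 so the sketch can use it as, e.g., a lerp amount.
  return Math.min(1, Math.max(0, (ratio - 0.5) / 1.5));
}
```

The p5 sketch can call this every frame with the latest smoothed band values and use the result to steer how tightly the fireflies cluster.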

We integrated our Muse with our p5 animation through a series of steps. First, we read the Muse EEG data with a third-party mobile application called “Muse Monitor” on one of our phones. The phone then streams the live EEG data over the Open Sound Control (OSC) protocol to a Processing sketch. We then forward the OSC data from the Processing sketch to our p5 animation locally.
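
On the receiving end, each forwarded message can be treated as an OSC address plus a list of per-channel values. The “/muse/elements/…” address pattern and the four-channel averaging below are assumptions about Muse Monitor’s output, not documented guarantees:

```javascript
// Illustrative parser for OSC-style messages forwarded from the bridge.
// Returns { band, value } for recognized messages, or null otherwise.
function parseBandMessage(address, args) {
  // e.g. "/muse/elements/alpha_absolute" -> ["muse", "elements", "alpha_absolute"]
  const parts = address.split('/').filter(Boolean);
  if (parts[0] !== 'muse') return null; // ignore unrelated traffic
  // "alpha_absolute" -> "alpha"
  const band = parts[parts.length - 1].split('_')[0];
  // Average across the electrode channels into one value per band.
  const value = args.reduce((a, b) => a + b, 0) / args.length;
  return { band, value };
}
```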

We connect our jar to our animation using an Arduino Nano with a built-in Bluetooth transmitter. We programmed the Nano with a Bluetooth transmission library that interfaces easily with our local p5 animation. The Nano is attached to a button that is triggered by the removal of the jar’s lid.

When the lid of the jar is removed, the p5 animation plays, allowing the Muse EEG feedback to be represented. Once the lid is replaced, the animation stops. To be clear, the Muse and the jar operate independently of one another.
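
The resulting control logic is small enough to sketch in a few lines. Here the Bluetooth read is abstracted into a boolean, and the state names are my own:

```javascript
// Sketch of how the jar lid and the Muse drive the animation.
// lidOff comes from the jar's Bluetooth button; museConcentrating
// comes from the (independent) EEG pipeline.
function animationState(lidOff, museConcentrating) {
  if (!lidOff) return 'stopped'; // lid on: no animation at all
  // Lid off: fireflies are visible; the Muse only shapes their behavior.
  return museConcentrating ? 'converging' : 'wandering';
}
```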

Ultimately, my group learned how to take Muse data and use it in a p5 sketch. We learned how to connect an Arduino to a computer via Bluetooth. We discovered that the delays introduced by NeoPixel code can interfere with Bluetooth signal transmission. We broke a few Arduino Nanos trying to figure out an optimal long-lasting power source for our jar.

Special thanks to:

Jeff Feddersen, our Physical Computing professor, without whom we could not have realized this project, let alone used a multimeter.

Louise Lessel for pointing us to Gene Kogan’s OSC processing library on GitHub, and being an approachable and accessible wealth of Muse knowledge.

Tom Igoe for teaching us how to connect devices with Bluetooth.

Residents Jesse Simpson, Aiden Nelson, Ellen Nickles, and Anthony Bui for their help in troubleshooting.

Playtesting a prototype animation
Final prototype demonstration

Animation – Final

For my animation final, I chose to create an Oculus game built in Unity. This required programming in C# both to create colliders that generate a new platform when the player leaves the previous one, and to integrate Oculus support into my Unity game.

The game begins with the player standing on a small carpet in space, with clouds below them. They listen to a robot reciting the Manifesto for a European Research Network into Problematic Usage of the Internet, set to William Basinski’s Mono no Aware. Once the player tires of this esoteric soundscape, they can jump off the carpet and fall onto a platform built from the video and audio of Rick Astley’s Never Gonna Give You Up music video. Based on feedback, I added 10 more platforms built from various internet video memes for the player to fall through before reaching the Never Gonna Give You Up platform.

This project was a metaphor for the addictive nature of the internet and the interconnectedness of internet culture and mindless media consumption. We know that overuse of the internet is addictive and harmful to our mental and physical health. This is not to discount the internet’s positive content; rather, it is a mindful recognition that consumption of “mindful” and “mindless” content are two sides of the same coin where internet abuse is concerned.

Playtest of My First Game Version

Two Playtests of My Second Game Version

Animation – Understanding Comics Response

The value of Understanding Comics in the context of animation is multifaceted. On the one hand, animation surpasses the technical capabilities of comics, as movement is communicated more fluidly. On the other, the process, or “six steps,” that McCloud outlines for creating any art form applies directly to animation, particularly since both are visual media. Iconography, distortion of reality, and closure (topics covered in chapters 2 and 3) are also directly applicable to animation, as they are narrative mechanisms unique to visual media, regardless of whether time is used fluidly or disjointedly. In my opinion, the beauty McCloud sees in comics, in their capacity to engage the five senses and our emotions, is directly pertinent to animation as well. His wariness of over-writing comics can be transposed to the sentiment that animation directors should not rely on exposition to tell their story (a brilliant example of this is Genndy Tartakovsky’s Primal); both should rely on images to facilitate engagement, particularly through the use of color. Overall, I found Understanding Comics to be an ideological restatement of the tenets we learned in class, well packaged in an engaging comic style.

ICM Final – My Beautiful Dark Twisted Recursive Trees

For my final, I chose to create a recursive tree that responds to sentiment analysis of text typed by the user. For every “positive” word, I wanted the tree to grow a flower, and for every “negative” word, I wanted a flower to disappear.
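
The word-by-word logic can be sketched as below. The tiny inline lexicon is purely illustrative; the actual project used an existing sentiment word list:

```javascript
// Illustrative word list: real sentiment lexicons score thousands of words.
const LEXICON = { love: 1, beautiful: 1, hate: -1, dark: -1 };

// Walk the typed text word by word, adding a flower for each
// positive word and removing one (never below zero) for each negative.
function flowerCount(text, start = 0) {
  let flowers = start;
  for (const word of text.toLowerCase().split(/\s+/)) {
    const score = LEXICON[word] || 0; // unknown words are neutral
    if (score > 0) flowers += 1;
    else if (score < 0) flowers = Math.max(0, flowers - 1);
  }
  return flowers;
}
```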

Before I discuss my process and issues, I would like to state my position on sentiment analysis, which I have included on a file with my cited sources in my final project:

We live in a time when innovation often outpaces critical thought vis-à-vis the social systems we change and, subsequently, the people we affect. Sentiment analysis is the perfect example of a technology that at first glance is cool, but that can be and has been used to perniciously exacerbate problematic social structures, particularly but not limited to racial discrimination.

I did not create the coding schema that values and devalues certain words based on their perceived positivity. In fact, I take great issue with the rudimentary nature of this schema and find it necessary to communicate that context is what truly determines how one may interpret a given word or phrase. Furthermore, the context of how we choose our words, the history of our phrases, and what they mean in relation to different lived experiences, is imperative to communicating in a manner that serves to diminish the hostility (intentional or not) of our communication.

The greatest struggle of this project was figuring out how to map flowers onto the tree. I tried a number of techniques before a true attempt at mapping was executed. The following two attempts both worked conceptually, though I brought the latter to “completion” because it was easier to finish in the time I had before the assignment was due.

Attempt 1

Following an approach conceptualized by Cassie Tarakajian, and with the assistance of Alden Jones, I attempted to map my recursive tree by “cheating” (in a super complicated way). Essentially, I would:

  1. In a first animation, generate a recursive tree
  2. In a second animation, upload the image of the tree, map the pixel positions of the image using edge detection and black-pixel recognition arrays, load 50 random pixel positions from these arrays into a JSON file, and export that JSON file.
  3. In a third animation, upload the image of the tree and the JSON file, and use the JSON coordinates to place my flower constructor

I stopped at step 3 due to deadline constraints. I now understand that I need to change step 2 so that edge detection does not produce an inverse-color image.
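
For reference, step 2’s dark-pixel sampling can be sketched like this, assuming the image has already been reduced to a flat array of grayscale values (in p5 these would come from pixels[]). The threshold and the deterministic, evenly spaced sampling are illustrative; my sketch picked positions at random:

```javascript
// Collect coordinates of dark pixels in a flat grayscale array,
// then sample up to `count` of them as flower anchor points.
function sampleDarkPixels(gray, width, count, threshold = 40) {
  const dark = [];
  for (let i = 0; i < gray.length; i++) {
    if (gray[i] < threshold) {
      dark.push({ x: i % width, y: Math.floor(i / width) });
    }
  }
  // Evenly spaced picks keep this sketch deterministic;
  // swap in random indices for the project's behavior.
  const step = Math.max(1, Math.floor(dark.length / count));
  return dark.filter((_, i) => i % step === 0).slice(0, count);
}
```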

Attempt 2

With the guidance of Max Da Silva, I developed a recursive tree generated from more complex vector geometry. This made it considerably easier to map the positions and nodes of each branch, which in turn facilitated an easier “random” generation of flowers. I played around with some prototypes before wiring the tree to the sentiment analysis. These prototypes are linked below.
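
A minimal sketch of the vector-geometry approach: each branch computes its endpoint from an angle and a length, and every endpoint is collected as a candidate flower position, so no pixel scanning is needed. The branching angle, shrink factor, and depth here are illustrative, not the values from my sketch:

```javascript
// Recursively grow a tree, recording every branch endpoint.
// Returned nodes double as flower anchor positions.
function growTree(x, y, angle, length, depth, nodes = []) {
  const x2 = x + length * Math.cos(angle);
  const y2 = y - length * Math.sin(angle); // screen y grows downward
  nodes.push({ x: x2, y: y2 });
  if (depth > 0) {
    // Two child branches, splayed apart and shortened.
    growTree(x2, y2, angle + 0.5, length * 0.7, depth - 1, nodes);
    growTree(x2, y2, angle - 0.5, length * 0.7, depth - 1, nodes);
  }
  return nodes;
}
```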

Comm Lab Animation – Perfect Pair

The purpose of this assignment was to strengthen our skills in Adobe After Effects. My partner, Hanwen Zhang, and I created Perfect Pair as a commentary on the normalization of excess in consumption, and the paradoxical judgment we impose on those who overindulge.

The most difficult element of this animation was the masking and rotoscoping involved in the character and bodily-fluid layering. Most of it had to be done manually, frame by frame. Had there been more time, I would have appreciated greater insight into how to make more realistic fluid textures, and how to make the transition between the “flood” and “kaleidoscope” sequences more fluid. I thoroughly enjoyed working on the sound design for this piece. I also enjoyed adding the Kobayashi Television sequence based on feedback we received on a rough draft; its concept integrates seamlessly into our piece’s larger message.

ICM – Weeks 9 & 10 Assignment

For this week’s assignment, our class was tasked with integrating sound into a p5 animation. This was an interesting challenge, as I aimed to make a drum pad, usable on both desktop and touch-screen devices, that played different lyric sections of Daft Punk’s Harder, Better, Faster, Stronger.
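
The core of the pad is just mapping a click or touch position to a grid cell, and each cell to a sound. A 4×4 grid is assumed here purely for illustration:

```javascript
// Map an (x, y) position on a w-by-h canvas to a pad cell index,
// one cell per lyric clip. Clamping keeps edge touches in range.
function padIndex(x, y, w, h, cols = 4, rows = 4) {
  const col = Math.min(cols - 1, Math.floor((x / w) * cols));
  const row = Math.min(rows - 1, Math.floor((y / h) * rows));
  return row * cols + col; // 0..(cols*rows - 1)
}
```

In p5, mousePressed and touchStarted can both feed their coordinates into this function and play the sample stored at the returned index.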

I tried three different methods to reach my end result, accessible in the buttons below.

In six words, I would describe this project as:

Tactile, Colorful, Harder, Better, Faster, Stronger

Users may interact with my final sketch on touch screens using the preview link.

If I had more time, I would have added a popup introductory epilepsy warning with the song’s chorus quietly playing in the background. The drum pad would become accessible once the user clicked the warning, which would be superimposed over the helmet of one of Daft Punk’s members. I have stored all of these media files in my sketch and intend to add this feature in the near future.

Pcomp – Revised Prototype

This week our group re-evaluated our project concept based on user feedback from our previous class. Our users made two points clear:

1. The metaphorical link between fireflies, fire, and personal/psychological connection was not entirely clear

2. Being handed a jar does not universally prompt people to try to catch fireflies

Based on this feedback, we decided to refrain from integrating a Microsoft Kinect to track our user’s body motions. We made further progress on our animation, figured out how to integrate the Muse into it, and took steps toward building our physical jar trigger.

Pcomp – Paper Prototype Prep

This week, my group was tasked with developing a project description, a system diagram, a timeline, and a bill of materials in preparation for our playtesting.

As a group, we constructed our final narrative and drew out the necessary components provided in the images below:

Essentially, we will have three components linked to a p5 sketch: a NeuroSky EEG reader, a jar that can light up and vibrate, and an Xbox Kinect camera. Our animation will begin as a flame against a dark background. When our user twists off the top of the jar, the fire will extinguish and turn into smoke. Depending on our user feedback, the user will be prompted to focus on the smoke, which will turn into an avatar for the user as fireflies begin to emerge from the darkness. Using the jar, with captures registered via the NeuroSky, our user may catch up to 10 fireflies. For every firefly caught, the jar will become brighter and vibrate. Once the user has caught 10 fireflies, the animation will revert to a fire, and the user will reset it by replacing the cap on the jar.
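
The catch-and-reset progression described above can be sketched as a small state update; the field and phase names are my own:

```javascript
// One step of the catch loop: each catch brightens the jar,
// and the tenth catch resets the whole cycle back to the fire.
function catchFirefly(state) {
  const caught = state.caught + 1;
  if (caught >= 10) {
    return { caught: 0, brightness: 0, phase: 'fire' }; // full reset
  }
  return { caught, brightness: caught / 10, phase: 'fireflies' };
}
```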

We decided that we would determine our floor plan based on user feedback.

Our group has been working on living timeline and bill-of-materials documents, which we have made accessible on this page.

ICM – Week 8 Assignment

This week’s assignment was to create a sketch using pixelated images and describe it under specific parameters. I created a “Who’s That Pokemon” game with Neta Bomani – our description is as follows:

We based our animation on the “Who’s that Pokemon” television bumper.

We were asked to describe the experience in 5-10 words; we chose: A memory game to flex your nostalgic fandom for Pokemon.

The sketch is split into 8 sections, one per Pokemon: Bulbasaur, Mewtwo, Jigglypuff, Electabuzz, Infernape, Squirtle, and Vileplume.

We were also asked to describe each section in 2-3 words below:

Section 1: This is Bulbasaur.

Section 2: This is Mewtwo.

Section 3: This is Jigglypuff.

Section 4: This is Electabuzz.