Building a Philips Hue Bulb Controller

For this assignment, we were tasked with building an Arduino-based device capable of connecting to and manipulating a Philips Hue bulb via its Zigbee radio control hub. This assignment was our class’s foray into HTTP protocols – the week prior, we had completed the same task from the command line of our computers.

I intended for my device to have four features – an on/off button, and three sensors that could control the hue, saturation, and brightness values of the bulb respectively. Initially, I intended to use three rotary encoders to manipulate these values, but I ended up using three potentiometers after discovering that my rotary encoders were broken. Unfortunately, I documented this assignment poorly, but I will do my best to communicate the two biggest problems I ran into while trying to complete it:

  1. Putting the Cart Before the Horse in Wiring

Once I had developed what I believed to be functional code, based on an amalgamation of Tom Igoe’s Hue Control examples, I soldered my rotary encoders and wired them to my breadboard alongside my prototype on/off button. This was a mistake: when testing the inevitable bugs that ensued from my untested code, I was unable to discern which issue to solve first – the noise from my rotary encoders that made their values useless, or my inability to send any HTTP PUT requests to the Hue hub.

I ended up deconstructing my circuit and adapting my code to a switch to test whether my PUT request code worked – first by turning the bulb on and off, then by having the bulb change between two colors in response to my switch being flipped. After confirming that it did, I scrapped the idea of working with my (poorly soldered) rotary encoders and switched to working with a potentiometer. Before continuing, I wrote code to test whether I could manipulate the Hue bulb’s brightness with one potentiometer, tested it, and made sure it worked.
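The requests I was assembling follow the Hue bridge’s REST pattern: a small JSON state object sent via PUT to `/lights/<id>/state`. As a rough sketch – in Python rather than the Arduino code I actually used, and with a placeholder bridge IP and username – the request looks like this:

```python
import json

def hue_state_request(bridge_ip, username, light_id,
                      on=None, hue=None, sat=None, bri=None):
    """Build the URL path and JSON body for a Hue bridge state PUT request.

    The Hue REST API expects hue in 0-65535, and sat/bri in 0-254,
    so out-of-range readings are clamped before sending.
    """
    path = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    state = {}
    if on is not None:
        state["on"] = on
    if hue is not None:
        state["hue"] = max(0, min(65535, hue))
    if sat is not None:
        state["sat"] = max(0, min(254, sat))
    if bri is not None:
        state["bri"] = max(0, min(254, bri))
    return path, json.dumps(state)

# Example: turn the bulb on at mid saturation (placeholder IP and username).
path, body = hue_state_request("192.168.0.2", "devuser", 1, on=True, sat=127)
```

On the Arduino side the same body is written into the request by hand; the clamping is the part that matters, since raw analog readings do not arrive in the hub’s expected ranges.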

  2. Integrating Multiple Sensors Poorly

I spent the brunt of my time trying to integrate my on/off switch and my potentiometers as separately read entities in the concatenated-string HTTP PUT request that determines the state of the Hue bulb.

Test of one potentiometer for brightness, one on/off button prototype

First, I realized that my switch code was providing too much feedback – “pinging” the Hue hub too frequently, particularly in conjunction with the strings coming from my one potentiometer. I ended up changing the code and hardware to an “on/off” push button.

Second, in adding two other variable-changing potentiometers I somehow disrupted my code in a way that displayed the potentiometers’ values in Arduino’s serial monitor but did not send them to the Hue hub in a PUT request. I have left the code that I used to confirm that my potentiometer values were being read in my final code file.

Had I pre-written conditions within each loop that debounced the PUT requests from the respective sensors, I would have saved myself a lot of time when integrating my sensor code into one file.
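In hindsight, that gate is a small piece of logic: only send when a reading has genuinely moved and a minimum interval has elapsed since the last request. A sketch of the idea in Python (the interval and deadband values here are guesses for illustration, not measured limits of the hub):

```python
import time

SEND_INTERVAL = 0.5  # assumed minimum seconds between PUT requests
DEADBAND = 4         # ignore analog jitter smaller than this many steps

last_sent = 0.0
last_value = -1

def should_send(new_value, now=None):
    """Gate a sensor reading: send only when it moved past the deadband
    AND enough time has passed since the last request."""
    global last_sent, last_value
    now = time.monotonic() if now is None else now
    if abs(new_value - last_value) < DEADBAND:
        return False          # just noise; don't ping the hub
    if now - last_sent < SEND_INTERVAL:
        return False          # too soon since the last request
    last_sent, last_value = now, new_value
    return True
```

On the Arduino the same check uses `millis()` instead of `time.monotonic()`, but the structure – a deadband plus a timestamp comparison, per sensor – is identical.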

I still do not understand which aspects of my code prevented me from cleanly and consistently manipulating different Hue variables with all of my potentiometers. I will meet with Tom Igoe and update my blog to reflect this understanding.

What I’d Want to Do Moving Forward With This Project:

I would like to implement the following:

  1. Make the LED in my on/off push button represent whether the bulb I’ve connected to is on or off. I ran out of time and could not figure out how to write the code that would accomplish this.
  2. Add a feedback screen – though I would not have had time to add it to this project while I was working on it, I had ordered one and it came too late for me to even attempt to integrate.
  3. Try to make my Hue bulb connection feedback faster and more consistent. Even though I included debounce code using millis() and removed most of the delays in my code, my device had a noticeably slow “influence” on the bulb, I believe in part because I was still overwhelming the Hue hub with my PUT requests.

Final Product:

Circuit Diagram

Deconstructing Death Grips

Our first Python programming assignment for Material of Language was to create a composition based on computational manipulation of data. I set out to change a text file of the lyrics of “Spread Eagle Cross the Block” by Death Grips into emojis pertinent to their corresponding words. From there, I planned on exporting the new “emojified” document and trying to turn it into an audio file, as I presumed the generated audio might resemble one of Death Grips’ instrumental samples. Instead, I used the same code that I used to “emojify” “Spread Eagle Cross the Block” to “emojify” a text file of the song “Beware”, also by Death Grips. I realized that my “emojify” code would be an opportunity to clearly identify words (and letters) that repeat in the two songs. I found this interesting, as the abrasive sound of Death Grips’ music has inspired a cult following and a mythology surrounding their provocative lyrics and the social commentaries the group tries to provide. I wonder, were I to apply this “emojifying” code to Death Grips’ other songs, whether there would be other words I would want to add to the “emojifier”, which could elucidate a hidden meaning that other fans have overlooked. After all, musical cryptography is not unprecedented.
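A minimal sketch of this kind of word-to-emoji substitution, plus the cross-song comparison it enables (the mapping below is a hypothetical stand-in for the hand-built dictionary in my notebook):

```python
# Hypothetical word-to-emoji mapping; the real dictionary was built by hand
# per song, one entry for each word with a plausible emoji analog.
EMOJI_MAP = {
    "block": "🧱",
    "eagle": "🦅",
    "fire": "🔥",
}

def emojify(text, mapping):
    """Replace every mapped word with its emoji, leaving other words intact."""
    return " ".join(
        mapping.get(word.lower().strip(",.!?"), word) for word in text.split()
    )

def shared_mapped_words(song_a, song_b, mapping):
    """Words that repeat across both songs AND have an emoji analog --
    exactly the overlaps the emojified texts make visible."""
    words_a = {w.lower().strip(",.!?") for w in song_a.split()}
    words_b = {w.lower().strip(",.!?") for w in song_b.split()}
    return words_a & words_b & set(mapping)
```

Running both lyric files through `emojify` makes the shared vocabulary jump out visually; `shared_mapped_words` is the same comparison done directly on sets.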

Should I try to observe recurring patterns in Death Grips’ music, there would undoubtedly be more efficient methods to pursue in a Jupyter notebook. For a start, I would look for a sentiment-analysis-style tool that assigns emojis to dictionary words and could be imported into Jupyter notebooks, to save me the time of figuring out which words may or may not have emoji analogs. I have provided further, less formal documentation in my Python file.

Ball Drop Game Client

For this assignment, our class was tasked with creating a device that could connect to a server via a TCP network socket in order to play a “ball drop game”. Our device needed to be able to indicate when it was connected to the server, to move a platform character within the game, and to connect to the server with a button press. After registering my Arduino Nano’s MAC address with NYU’s Wi-Fi network, I started building the hardware for my device with a simple four-button layout, to ensure that I could transmit the messages necessary to interact with the game platform/character.

Four-button controller

I struggled most with troubleshooting this step – I discovered that the breadboard I had been working with was partially broken, and that my Arduino was broken and could not use its built-in Wi-Fi capabilities. I ended up using a new, smaller breadboard and a new Arduino Nano, and connected my directional buttons via a PCB.

The separate soldered 4-button PCB board

From there, I adapted example code provided by Tom Igoe, using a push button with a built-in LED to fulfill the network connection button and network connection indicator requirements. This adaptation also required me to integrate my four-button controls (in place of Tom’s joystick code) and the WiFiNINA Arduino library for the Nano (the original code was intended for a different type of Arduino). Once my circuit and code were sufficiently debugged, I was able to play the ball drop game, and I moved on to fabrication. I tried two iterations of my controller enclosure with two kinds of cardboard.
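The client side of the game is small: open a TCP socket to the server and send single-character movement commands when buttons are pressed. A Python sketch of that shape (my actual controller did this from the Arduino via WiFiNINA; the port number and command characters below are placeholder assumptions, not the server’s documented protocol):

```python
import socket

# Assumed single-character command set and default port -- check the game
# server's documentation for the real values.
COMMANDS = {"left": b"l", "right": b"r", "up": b"u", "down": b"d"}

class BallDropClient:
    """Minimal TCP client: connect on button press, send a byte per move."""

    def __init__(self, host, port=8080):
        self.host, self.port = host, port
        self.sock = None

    def connect(self):
        self.sock = socket.create_connection((self.host, self.port), timeout=5)

    def move(self, direction):
        """Send one movement command; raises KeyError for unknown directions."""
        self.sock.sendall(COMMANDS[direction])

    def close(self):
        if self.sock:
            self.sock.close()
            self.sock = None
```

The connection-indicator LED in my build maps directly onto this: lit while `self.sock` is open, off otherwise.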

My final circuit diagram, system diagram, and final product images can be found below:

System Diagram
Circuit Diagram

Below is a video of my controller in action:

I have linked my code in a GitHub repository; note that for security purposes I have not included the file with my network name and password, and that the IP address written into the code will vary.

Physical Computing Final – Fireflies

My project was built in collaboration with Abby Lee and Tianxu Zhao. Together we built “Fireflies”, a concentration game utilizing a Muse EEG, a p5 animation visible through a projector, and a Bluetooth-enabled jar trigger. Once our user opens the jar, our animation of fireflies appears. When our user is prompted to focus, the Muse transmits that the user is in a state of concentration, prompting the flies in our animation to congregate in response. We see this game as an interesting jumping-off point for many neuroscience-related conversations.

I have had laboratory experience working with an electroencephalogram, which reads the electrical activity of populations of neurons as voltage fluctuations at the scalp. Different types of frequencies, also known as neural oscillations, occur simultaneously all over the brain, and their functions vary depending on which region of the brain they take place in. We want to be clear with our audience – the Muse is not a scientific apparatus. In fact, the company misrepresents many of the types of data the EEG reads and the inaccuracy of that data. That being said, the Muse is capable of reading very localized, very simple frequency and amplitude changes that are understood to be associated with the presence and absence of concentration. Concentration in and of itself is not an entirely well-understood phenomenon. However, at a reductive level that reflects the Muse’s limitations, our animation is contingent on a change from lower-frequency oscillatory behavior (usually referred to as alpha), which is commonly associated with a resting state, to higher-frequency oscillatory behavior (usually referred to as beta), which is commonly associated with concentration.

We integrated our Muse with our p5 animation through a series of steps. First, we read the Muse EEG data with a third-party mobile application called “Muse Monitor” on one of our phones. The phone then transmits the live EEG data over the Open Sound Control (OSC) protocol to a Processing sketch. We can then feed the OSC output from our Processing sketch to our p5 animation locally.
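Once the per-band power values arrive, the animation only needs a single number out of them. A hedged sketch of that reduction (the ratio and threshold below are illustrative choices of ours, not values Muse Monitor reports directly):

```python
def concentration_score(alpha_power, beta_power):
    """Beta share of (alpha + beta) power as a rough 0-1 concentration proxy.

    Illustrative simplification: a rising beta share is what our animation
    treated as 'focused'; real band powers arrive per OSC message.
    """
    total = alpha_power + beta_power
    if total == 0:
        return 0.0
    return beta_power / total

def is_focused(alpha_power, beta_power, threshold=0.6):
    """Threshold the score; the fireflies congregate while this is True."""
    return concentration_score(alpha_power, beta_power) >= threshold
```

In the p5 sketch this boolean (or the raw score, for smoother motion) is what drives the fireflies toward or away from their gathering point.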

We connect our jar to our animation using an Arduino Nano, which has a built-in Bluetooth transmitter. We programmed this Nano with a Bluetooth transmission library capable of easily interfacing with our local p5 animation. The Nano is attached to a button, which is triggered by the removal of the jar’s lid.

When the lid of our jar is removed, our p5 animation plays, allowing the Muse EEG feedback to be represented. Once the lid of the jar is replaced, the animation stops. To be clear – the Muse and jar operate independently of one another.

Ultimately my group learned how to take Muse data and use it in a p5 sketch. We learned how to connect an Arduino to a computer via Bluetooth. We discovered that the delays created by Neopixel code can interfere with Bluetooth signal transmission. We broke a few Arduino Nanos trying to figure out an optimal long-lasting power source for our jar.

Special thanks to:

Jeff Feddersen, our Physical Computing professor, without whom we could not have realized this project, let alone use a multimeter.

Louise Lessel for pointing us to Gene Kogan’s OSC processing library on GitHub, and being an approachable and accessible wealth of Muse knowledge.

Tom Igoe for teaching us how to connect devices with Bluetooth.

Residents Jesse Simpson, Aiden Nelson, Ellen Nickles, and Anthony Bui for their help in troubleshooting.

Playtesting a prototype animation
Final prototype demonstration

Animation – Final

For my animation final, I chose to create an Oculus game built in Unity. To create this, I had to do some programming in C# to create colliders that generated platforms when the player left another platform, and to integrate Oculus into my Unity game.

The game begins with the player standing on a small carpet in space, with clouds below them. They listen to a robot reciting the Manifesto for a European research network into Problematic Usage of the Internet, set to William Basinski’s Mono no Aware. Once the player tires of listening to this esoteric soundscape, they can jump off of the carpet and fall onto a platform composed of the video and audio of Rick Astley’s Never Gonna Give You Up music video. Based on feedback, I added 10 more platforms composed of various internet video memes for the player to fall through before reaching the Never Gonna Give You Up platform.

This project was a metaphor for the addictive nature of the internet and the interconnectedness of internet culture and mindless media consumption. We know that overuse of the internet is addictive and harmful to our mental and physical health. This is not to discount the positive content of the internet; rather, it is a mindful realization that consumption of “mindful” and “mindless” content are two sides of the same coin with regard to internet abuse.

Playtest of My First Game Version

Two Playtests of My Second Game Version

Animation – Understanding Comics Response

The value of Understanding Comics in the context of animation is multifaceted. On the one hand, animation surpasses the technical capabilities of comics, as movement is more fluidly communicated. However, the process, or “six steps”, involved in creating any art form that the author outlines is directly applicable to animation – particularly as both are visual media. Iconography, distortion of reality, and closure (topics covered in chapters 2 and 3) are also directly applicable to animation, as they are narrative mechanisms unique to visual media, regardless of their fluid or disjointed use of time. In my opinion, the beauty McCloud sees in comics, in their capacity to engage the five senses and our emotions, is directly pertinent to animation as well. McCloud’s trepidation about over-writing comics can be directly transposed to the sentiment that animation directors should not rely on exposition to tell their story (a brilliant example of this is Genndy Tartakovsky’s Primal) – both should rely on images to facilitate engagement, particularly through the use of color. Overall, I found Understanding Comics to be an ideological repetition of the tenets we learned in class – well packaged in an engaging comic style.

ICM Final – My Beautiful Dark Twisted Recursive Trees

For my final, I chose to create a recursive tree that responds to sentiment analysis of personally typed text. For every “positive” word, I wanted the tree to have a flower, and for every “negative” word, I wanted to have a flower disappear.

Before I discuss my process and issues, I would like to state my position on sentiment analysis, which I have included in a file with my cited sources in my final project:

We live in a time where innovation often outpaces critical thought vis-à-vis the social systems we change and, subsequently, the people we affect. Sentiment analysis is the perfect example of a technology that at first glance is cool, but that can be and has been utilized to perniciously exacerbate problematic social structures, particularly but not limited to racial discrimination.

I did not create the coding schema that values and devalues certain words based on their perceived positivity. In fact, I take great issue with the rudimentary nature of this schema and find it necessary to communicate that context is what truly determines how one may interpret a given word or phrase. Furthermore, the context of how we choose our words, the history of our phrases, and what they mean in relation to different lived experiences, is imperative to communicating in a manner that serves to diminish the hostility (intentional or not) of our communication.
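With that caveat stated, the mechanic itself is simple: each word nudges a flower count up or down. A sketch with a tiny hypothetical valence table standing in for the sentiment library my sketch actually used (the words and scores below are purely illustrative):

```python
# Hypothetical valence table -- a stand-in for the real sentiment schema,
# whose rudimentary word-level scoring is exactly what I critique above.
VALENCE = {"love": 1, "beautiful": 1, "hate": -1, "dark": -1}

def flower_count(text, start=0):
    """Add a flower per positive word, remove one per negative word,
    never dropping below zero flowers on the tree."""
    count = start
    for word in text.lower().split():
        count = max(0, count + VALENCE.get(word.strip(",.!?"), 0))
    return count
```

In the p5 sketch the same running count determines how many flower objects are drawn on the recursive tree as the user types.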

The greatest struggle of this project was figuring out how to map flowers onto the tree. I tried a bunch of techniques before executing a true attempt at mapping. The following two attempts worked conceptually, though I ended up bringing the latter to “completion” because it was easier to finish in the time I had before the assignment was due.

Attempt 1

Following an approach conceptualized by Cassie Tarakajian, and with the assistance of Alden Jones, I attempted to map my recursive tree by “cheating” (in a super complicated way). Essentially, I would:

  1. In a first animation, generate a recursive tree
  2. In a second animation, upload the image of the tree, map the pixel positions of the image using edge detection and black-pixel-recognition arrays, load 50 random pixel positions from these arrays into a JSON file, and export that JSON file.
  3. In a third animation, upload the image of the tree and the JSON file, and use the JSON coordinates to place my flower constructor
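Step 2’s sampling can be sketched in Python on a plain grayscale grid (my actual sketches were in p5; the 50-point sample matches my plan, while the darkness threshold here is a guess):

```python
import json
import random

def sample_dark_pixels(pixels, threshold=40, n=50, seed=0):
    """Collect (x, y) positions of dark pixels from a grayscale grid and
    sample up to n of them.  The JSON this returns is what the third
    sketch would read to place the flower constructor."""
    dark = [(x, y)
            for y, row in enumerate(pixels)
            for x, value in enumerate(row)
            if value < threshold]
    random.seed(seed)  # reproducible sample for illustration
    chosen = random.sample(dark, min(n, len(dark)))
    return json.dumps([{"x": x, "y": y} for x, y in chosen])
```

The inverse-color issue I mention below lives in this step: if edge detection flips the image, the `value < threshold` test selects the background instead of the tree.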

I stopped at step 3 due to deadline constraints – I now understand that I need to change step 2 so that it does not make an inverse-color image during edge detection.

Attempt 2

With the guidance of Max Da Silva, I developed a recursive tree generated from more complex vector geometry. This made it considerably easier to map the positions and nodes of each branch, facilitating an easier “random” generation of flowers. I played around with some prototypes before connecting the tree to sentiment analysis. These prototypes are linked below.
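The payoff of the vector approach can be shown in a minimal Python sketch (my actual tree was drawn in p5; the branch angle and shrink factor here are illustrative): every branch endpoint falls out of the recursion directly, so flowers can be anchored to known coordinates instead of being fished out of an image.

```python
import math

def branch_points(x, y, angle, length, depth,
                  spread=math.pi / 6, shrink=0.7):
    """Return the (x, y) endpoint of every branch in a recursive binary tree.

    Each endpoint is a natural anchor for a randomly chosen flower --
    no edge detection or pixel sampling required.
    """
    if depth == 0 or length < 1:
        return []
    end_x = x + length * math.cos(angle)
    end_y = y - length * math.sin(angle)  # screen y grows downward
    points = [(end_x, end_y)]
    points += branch_points(end_x, end_y, angle - spread,
                            length * shrink, depth - 1)
    points += branch_points(end_x, end_y, angle + spread,
                            length * shrink, depth - 1)
    return points

# A depth-3 tree growing straight up from the origin yields 7 endpoints.
pts = branch_points(0, 0, math.pi / 2, 100, 3)
```

Picking `random.sample(pts, k)` then places k flowers exactly on the tree, which is the mapping Attempt 1 tried to recover from pixels.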

Comm Lab Animation – Perfect Pair

The purpose of this assignment was to strengthen our skills in Adobe After Effects. My partner, Hanwen Zhang, and I created Perfect Pair as a commentary on the normalization of excess in consumption, and the paradoxical judgment we impose on those who overindulge.

The most difficult element of this animation was the masking and rotoscoping involved with the character and bodily fluid layering. Most had to be done manually, frame by frame. Were there more time, I would have appreciated greater insight into how to make more realistic fluid textures, and make the transition between the “flood” and “kaleidoscope” sequences more fluid. I thoroughly enjoyed working on the sound design for this piece. I also enjoyed adding in the Kobayashi Television sequence based on feedback we had received on a rough draft – its concept integrates seamlessly into our piece’s larger message.

ICM – Weeks 9 & 10 Assignment

For this week’s assignment, our class was tasked with integrating sound into a p5 animation. This assignment was an interesting challenge, as I aimed to make a drum pad, usable on both desktop and touch-screen devices, that played different lyric sections of Daft Punk’s Harder, Better, Faster, Stronger.

I tried three different methods to reach my end result, accessible in the buttons below.

In six words, I would describe this project as:

Tactile, Colorful, Harder, Better, Faster, Stronger

Users may interact with my final sketch on touch screens using the preview link.

If I had more time, I would have made a popup introductory epilepsy warning with the song’s chorus quietly playing in the background. The drum pad would become accessible once the user clicked the warning, which would be superimposed over the helmet of one of Daft Punk’s members. I have stored all of these media files in my sketch, and I intend to add this feature in the near future.