Portfolio

WiFi-Connected Temperature and Humidity Sensor

This project was our final assignment: an environmental sensor capable of sending data over HTTPS to Tom Igoe’s server. I attempted this project with two different kinds of sensors on a Raspberry Pi Zero W. My first attempt used the Pimoroni Enviro sensor HAT, which required programming in Python and communicated over the I2C protocol. My second, successful attempt used a DHT-11 temperature and humidity sensor with a built-in pull-up resistor, which required programming in Node.js and communicated over the sensor’s single-wire protocol.

I set up my Raspberry Pi using Tom’s guide.

Attempt 1 – Pimoroni Enviro HAT

The Pimoroni Enviro HAT came with its own GitHub repository, including scripts to run its various sensors. I tried cloning this repository twice, and both times my Pi crashed on reboot. Both times I believed my SD card was corrupted, and I completely reflashed it. Before the third try, I backed up a copy of a WiFi-accessible, fully set-up SD card image on my computer.

The third time I cloned the repository, my Pi seemed to crash on reboot again. However, when I tried to SSH into it, it turned out that the files, cloned repository and all, were not corrupted. I tried to run the various example scripts that were supposed to exercise the sensors. The only one that worked was the screen script.

After some unhelpful feedback from Pimoroni, I discovered that my Pi was not reading the HAT’s sensors over the I2C protocol.

The bottom portion of this output indicates that my Pi did not recognize any of the sensor’s pins.

After talking to Tito, we figured that the issue was a deeper computer-engineering one, related to the communication protocols the HAT and the Pi were each configured to use. Unsure of how to proceed from there, I decided to go the easier route and use the DHT-11 sensor, driven from Node.js, that my classmate Cy Kim was using.

Attempt 2 – DHT-11 Sensor

I. Hardware Fallout from the Enviro

I adapted code from Tom’s GitHub repository and from the repository for my sensor. However, I could not get a reading from my sensor.

I took the following steps to debug my circuit:

1. I unit tested my sensor, which returned values of 0 for both humidity and temperature
2. I checked whether I was receiving integer values, using a format similar to temp-humidity-client.js from Tom’s code
3. I tried the test examples from my sensor’s repo, which did not provide any insight or solutions
4. I used a multimeter (which indicated that my sensor was fine)
5. I swapped out my wires and breadboards

All to no avail. Frustrated and defeated, I took a break, and when I returned I:

6. Wiped my Pi clean of all code, Raspbian Lite and all
7. Rewrote and re-downloaded all of the code from scratch
8. In the process, fried my Zero W and switched to my non-WiFi Zero just to see if I could get a sensor value

I should note that at this step I believe I fried my sensor as well.

9. Swapped sensors with Jason Tse
10. Swapped code with Jason Tse

Again, to no avail.

The following day, I arrived with a new sensor and new Raspberry Pi and was almost immediately able to get sensor readings.
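For anyone retracing this, the read test itself is only a few lines. A minimal sketch, assuming the node-dht-sensor package and the data line on GPIO 4 (both assumptions – check your own wiring):

    // read-test.js: minimal DHT-11 sanity check (assumes the node-dht-sensor package)
    const sensor = require("node-dht-sensor");

    const SENSOR_TYPE = 11; // 11 = DHT-11, 22 = DHT-22
    const GPIO_PIN = 4;     // data pin – adjust to your wiring

    sensor.read(SENSOR_TYPE, GPIO_PIN, (err, temperature, humidity) => {
      if (err) {
        console.error("read failed:", err);
        return;
      }
      console.log(`temperature: ${temperature}°C  humidity: ${humidity}%`);
    });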

II. Functional, but Too “Chatty”

Once I got my sensor reading, I combined the sensor-reading file with a file set up to send HTTPS PUT requests to Tom’s server. After sending a GET request to confirm that my data had reached the server, I attached an SSD1306 display and adapted Tom’s code from his Pi recipes to get the whole circuit working.
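The sending half boils down to a single HTTPS PUT. A rough sketch of that request using Node’s built-in https module – the hostname and path here are placeholders, not the class server’s real address:

    // send-reading.js: the shape of the HTTPS PUT I send to the server
    // (hostname and path are placeholders, not the server's real address)
    const https = require("https");

    function sendReading(temperature, humidity) {
      const body = JSON.stringify({ temperature, humidity });
      const options = {
        hostname: "example-sensor-server.org", // placeholder
        port: 443,
        path: "/data",                         // placeholder
        method: "PUT",
        headers: {
          "Content-Type": "application/json",
          "Content-Length": Buffer.byteLength(body)
        }
      };

      const request = https.request(options, (response) => {
        response.on("data", (chunk) => process.stdout.write(chunk));
      });
      request.on("error", (err) => console.error("request failed:", err));
      request.write(body);
      request.end();
    }

    sendReading(22.5, 40);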

Then came cron. I spent about an hour trying to figure out why cron would not work – it turned out I had put the wrong path to the file I wanted it to run. I then realized that I didn’t want to run my HTTPS transmission function at the same frequency as my screen-display function, so I split the two into separate files. I set the screen file to start when the device powered on (@reboot) and the send function to run every five minutes (*/5 * * * *). I forgot to check whether the latter was actually running at that rate, and as a result I accidentally spammed Tom’s server with upwards of 30,000 sensor readings. At the time I was unsure why my five-minute cron schedule seemed to be overridden by the readInterval function I had written into the sending code. I was also unsure whether I should have put my entry in my user’s crontab or in root’s, so I wrote it in both to be safe.
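For reference, the intent of my crontab looked roughly like this (the file names and paths are placeholders – a wrong path here was exactly my original bug):

    # crontab sketch – paths and file names are placeholders
    # start the OLED screen script once, when the Pi powers on
    @reboot /usr/bin/node /home/pi/project/screen.js

    # send a reading to the server every five minutes
    # this script must do its work once and exit – a leftover setInterval keeps
    # every invocation alive, and the requests pile up
    */5 * * * * /usr/bin/node /home/pi/project/send.js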

After all of this, I had a device that would display humidity and temperature, and send readings every 15 minutes.

III. Self-Sabotaged in the Finishing Stretch?

I ended up trying to get my network SSID and IP address shown on my Pi’s screen. The screen displayed that information, but only by flashing on every update, which I had programmed to occur once a second.

In trying to debug this, I ended up only being able to get the screen script to run when I launched it over SSH. This led me to believe that I was running into transmission-speed issues more complicated than I had anticipated.

I will consult with Tom Igoe and update my blog to reflect future directions.

Edit:

  1. In my transmission code, I had accidentally changed my setInterval to once a second.
  2. I did not realize that cron jobs can “overlap” – because my script never exited (its setInterval kept it alive), every cron invocation started another copy of it, and each copy kept sending at its own interval.

Finished Product

Here are a final video and a photo of my sensor working, plus a link to my GitHub repository for this project.

Building a Philips Hue Bulb Controller

For this assignment, we were tasked with building an Arduino-based device capable of connecting to and manipulating a Philips Hue bulb via its Zigbee control hub. This assignment was our class’s foray into HTTP – the week prior, we had done the same exercise from the command line.

I intended for my device to have four features – an on/off button and three sensors that would control the bulb’s hue, saturation, and brightness values respectively. Initially I planned on using three rotary encoders to manipulate these values, but I ended up using three potentiometers after discovering that my rotary encoders were broken. Unfortunately, I documented this assignment poorly, but I will do my best to describe the two biggest problems I ran into:

  1. Putting the Cart Before the Horse in Wiring

Once I had developed what I believed to be functional code based on an amalgamation of Tom Igoe’s Hue control examples, I soldered my rotary encoders and wired them to my breadboard alongside my prototype on/off button. This was a mistake: when the inevitable bugs surfaced in my untested code, I could not tell which problem to solve first – the noise from my rotary encoders that made their values useless, or my inability to send any HTTP PUT requests to the Hue hub.

I ended up taking the circuit apart and adapting my code to a single switch to test whether my PUT request code worked – first by turning the bulb on and off, then by having the bulb toggle between two colors when the switch was flipped. Once I confirmed that it did, I scrapped the idea of working with my (poorly soldered) rotary encoders and switched to potentiometers. Before continuing, I wrote and tested code to confirm that I could manipulate the bulb’s brightness with a single potentiometer.
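For context, the request the controller ultimately has to send is a single PUT to the hub’s REST API. My device builds it in Arduino code, but the format is easiest to show as a short Node.js sketch (Node 18+ for fetch; the bridge IP, username, and light number are placeholders for your own hub):

    // hue-test.js: the PUT request format for the hub's REST API
    // (bridge IP, username, and light number are placeholders; Node 18+ for fetch)
    const BRIDGE_IP = "192.168.0.2";   // placeholder
    const USERNAME = "yourHueAppKey";  // placeholder authorized username
    const LIGHT = 1;                   // placeholder light number

    async function setLightState(state) {
      const url = `http://${BRIDGE_IP}/api/${USERNAME}/lights/${LIGHT}/state`;
      const response = await fetch(url, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(state)
      });
      console.log(await response.json());
    }

    // turn the bulb on and set hue (0–65535), saturation (0–254), and brightness (1–254)
    setLightState({ on: true, hue: 25500, sat: 200, bri: 144 });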

  2. Integrating Multiple Sensors Poorly

I spent the brunt of my time trying to integrate my on/off switch and my potentiometers as separately read inputs into the concatenated-string HTTP PUT request that sets the state of the Hue bulb.

Test of one potentiometer for brightness, one on/off button prototype

First, I realized that my switch code was providing too much feedback, “pinging” the Hue hub too frequently, particularly in conjunction with the strings coming from my single potentiometer. I ended up changing the code and hardware to an on/off push button.

Second, in adding the two other potentiometers I somehow broke my code in a way that showed the potentiometers’ values in the Arduino serial monitor but did not send them to the Hue hub in PUT requests – I have left the code I used to confirm my potentiometer values in my final code file.

Had I written conditions into each loop ahead of time that debounced the PUT requests from the respective sensors, I would have saved myself a lot of time when integrating my sensor code into one file.
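The gate I wish I had written is simple: only send a PUT when a reading has changed by more than the sensor noise, and never more often than some minimum interval. A generic JavaScript sketch of that idea (the threshold and interval values are arbitrary):

    // send-gate sketch: only PUT when a reading has really changed, and not too often
    // (threshold and interval values are arbitrary, for illustration only)
    const THRESHOLD = 3;      // ignore jitter smaller than this
    const MIN_INTERVAL = 500; // minimum milliseconds between requests to the hub

    let lastSent = { hue: -1, sat: -1, bri: -1 };
    let lastSentAt = 0;

    function maybeSend(reading, now, send) {
      const changed =
        Math.abs(reading.hue - lastSent.hue) > THRESHOLD ||
        Math.abs(reading.sat - lastSent.sat) > THRESHOLD ||
        Math.abs(reading.bri - lastSent.bri) > THRESHOLD;

      if (changed && now - lastSentAt >= MIN_INTERVAL) {
        send(reading); // e.g. the setLightState() sketch above
        lastSent = { ...reading };
        lastSentAt = now;
      }
    }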

I still do not understand what aspects of my code prevented me from cleanly and consistently manipulating the bulb’s different variables with all of my potentiometers. I will meet with Tom Igoe and update my blog once I do.

What I’d Want to Do Moving Forward With This Project:

I would like to implement the following:

  1. Make the LED in my on/off push button reflect whether the connected bulb is on or off – I ran out of time and could not figure out how to write the code for this.
  2. Add a feedback screen – I had ordered one, but it arrived too late for me to even attempt to integrate it.
  3. Make my Hue bulb connection faster and more consistent. Even though I wrote millis()-based debounce code and removed most of the delays in my code, my device had a noticeably slow “influence” on the bulb, I believe in part because I was still overwhelming the Hue hub with PUT requests.

Final Product:

Circuit Diagram

Deconstructing Death Grips

Our first Python programming assignment for Material of Language was to create a composition based on computational manipulation of data. I set out to change a text file of the lyrics of “Spread Eagle Cross the Block” by Death Grips into emojis pertinent to their corresponding words. From there, I planned on exporting the new “emojified” document and trying to turn it into an audio file, as I presumed the generated audio might resemble one of Death Grips’ instrumental samples. Instead, I used the same code that “emojified” “Spread Eagle Cross the Block” to “emojify” a text file of the song “Beware”, also by Death Grips. I realized that my “emojify” code was an opportunity to clearly identify words (and letters) that repeat across the two songs. I found this interesting, as the abrasive sound of Death Grips’ music has inspired a cult following and a mythology around their provocative lyrics and the social commentary the group tries to provide. I wonder, were I to apply this “emojifying” code to Death Grips’ other songs, whether there would be other words I would want to add to the “emojifier” that could elucidate a hidden meaning other fans have overlooked. After all, musical cryptography is not unprecedented.
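The core of the “emojifier” is just a word-to-emoji lookup over the lyric text. A minimal sketch of that idea (not the notebook code itself, which is in Python; the mapping here is a tiny illustrative subset):

    // emojify sketch: swap words for emojis where an analog exists
    // (the mapping below is a tiny illustrative subset, not my real table)
    const emojiMap = {
      eagle: "🦅",
      block: "🧱",
      fire: "🔥"
    };

    function emojify(text) {
      return text
        .split(/\s+/)
        .map((word) => emojiMap[word.toLowerCase()] || word) // words with no emoji analog pass through
        .join(" ");
    }

    console.log(emojify("Spread eagle cross the block"));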

Should I try to observe recurring patterns in Death Grips’ music, there are undoubtedly more efficient methods to pursue in a Jupyter notebook. For a start, I would look for some sort of sentiment-analysis-style tool that assigns emojis to dictionary words and can be imported into a notebook, to save me the time of figuring out which words do or do not have emoji analogs. I have provided further, less formal documentation in my Python file.

Ball Drop Game Client

For this assignment, our class was tasked with creating a device that could connect to a server via a TCP network socket in order to play a “ball drop game”. Our device needed to indicate when it was connected to the server, move a platform character within the game, and connect to the server with a button press. After registering my Arduino Nano’s MAC address with NYU’s WiFi network, I started building the hardware with a simple four-button layout, to make sure I could transmit the messages necessary to move the game’s platform character.

Four-button controller

I struggled most with troubleshooting this step – I discovered that the breadboard I had been working with was partially broken, and that my Arduino was broken as well and could not use its built-in WiFi. I ended up using a smaller breadboard and a new Arduino Nano, and connected my directional buttons via a PCB.

The separately soldered four-button PCB

From there, I adapted example code provided by Tom Igoe to use a push button with a built-in LED to satisfy the connect-button and connection-indicator requirements. I also had to integrate my four-button controls (in place of Tom’s joystick code) and the WiFiNINA Arduino library, since the original code was written for a different type of Arduino than my Nano. Once my circuit and code were sufficiently debugged, I was able to play the ball drop game, and I moved on to fabrication. I tried two iterations of my controller enclosure with two kinds of cardboard.
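Stripped of the Arduino specifics, the controller’s job is just a TCP socket plus short messages. A Node.js sketch of the same interaction, with the server address, port, and message characters as placeholders for whatever the game server actually expects:

    // balldrop-client sketch: the controller's job, shown as a Node.js TCP client
    // (host, port, and the single-character messages are placeholders)
    const net = require("net");

    const client = net.connect({ host: "192.168.1.50", port: 8080 }, () => {
      // on the real controller, this is the moment the connect button's LED turns on
      console.log("connected to the game server");
    });

    client.on("end", () => console.log("disconnected from the game server"));
    client.on("error", (err) => console.error("connection error:", err));

    // each direction button press sends one short message
    function move(direction) {
      client.write(direction); // "l", "r", "u", "d" here are stand-ins for the server's real commands
    }

    move("l");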

My final circuit diagram, system diagram, and final product images can be found below:

System Diagram
Circuit Diagram

Below is a video of my controller in action:

I have linked my code in a GitHub repository; note that for security purposes I have not included the file with my network name and password, and that the server IP address written into the code will vary.

Physical Computing Final – Fireflies

My project was built in collaboration with Abby Lee and Tianxu Zhao. Together we built “Fireflies”, a concentration game using a Muse EEG headband, a p5 animation shown through a projector, and a Bluetooth-enabled jar as a trigger. The interaction is that once the user opens the jar, our animation of fireflies appears. When the user is prompted to focus, the Muse transmits that they are in a state of concentration, prompting the fireflies in the animation to concentrate (cluster together) in response. We see this game as an interesting jumping-off point for many neuroscience-related conversations.

I have had laboratory experience working with electroencephalography, which reads the brain’s electrical activity at the scalp. Different types of frequencies, also known as neural oscillations, occur simultaneously all over the brain, and their functions vary depending on the region in which they occur. We want to be clear with our audience – the Muse is not a scientific apparatus. In fact, the company misrepresents both the kinds of data the EEG reads and how inaccurate that data can be. That said, the Muse is capable of reading very localized, very simple frequency and amplitude changes that are understood to be associated with the presence and absence of concentration. Concentration in and of itself is not an entirely well-understood phenomenon. However, at a reductive level that reflects the Muse’s limitations, our animation is contingent on a shift from lower-frequency, higher-amplitude oscillatory behavior (usually referred to as alpha), commonly associated with a relaxed resting state, to higher-frequency, lower-amplitude oscillatory behavior (usually referred to as beta), commonly associated with concentration.

We integrated the Muse with our p5 animation through a series of steps. First, we read the Muse EEG data with a third-party mobile application called Muse Monitor on one of our phones. The phone then transmits the live EEG data over the Open Sound Control (OSC) protocol to a Processing sketch. We then feed the OSC data from the Processing sketch into our p5 animation locally.
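We did the middle step in Processing with Gene Kogan’s OSC library, but the idea is easy to sketch in Node.js with the osc package – the port and the message addresses below are assumptions, so check what Muse Monitor is actually configured to send:

    // osc-listen sketch: receive the OSC stream Muse Monitor sends over the network
    // (we actually did this step in Processing; the port and addresses here are assumptions)
    const osc = require("osc"); // npm package "osc"

    const udpPort = new osc.UDPPort({
      localAddress: "0.0.0.0",
      localPort: 5000 // must match the target port configured in Muse Monitor
    });

    udpPort.on("message", (msg) => {
      // address strings like "/muse/elements/alpha_absolute" are examples, not verified
      if (msg.address.includes("alpha") || msg.address.includes("beta")) {
        console.log(msg.address, msg.args);
      }
    });

    udpPort.open();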

We connect our jar to our animation using an Arduino Nano, which has a built-in Bluetooth transmitter. We programmed the Nano with a Bluetooth transmission library that interfaces easily with our local p5 animation. The Nano is attached to a button that is triggered by the removal of the jar’s lid.

The game plan is that when the lid of our jar is removed, our p5 animation will play, allowing the Muse EEG feedback to be represented. Once the lid of the jar is replaced, the animation stops. To be clear – the Muse and jar operate independently of one another.

Ultimately my group learned how to take Muse data and use it in a p5 sketch. We learned how to connect an Arduino to a computer via Bluetooth. We discovered that the delays created by Neopixel code can interfere with Bluetooth signal transmission. We broke a few Arduino Nanos trying to figure out an optimal long-lasting power source for our jar.

Special thanks to:

Jeff Feddersen, our Physical Computing professor, without whom we could not have realized this project, let alone learned to use a multimeter.

Louise Lessel, for pointing us to Gene Kogan’s OSC Processing library on GitHub and for being an approachable and accessible wealth of Muse knowledge.

Tom Igoe for teaching us how to connect devices with Bluetooth.

Residents Jesse Simpson, Aiden Nelson, Ellen Nickles, and Anthony Bui for their help in troubleshooting.

Playtesting a prototype animation
Final prototype demonstration

Animation – Final

For my animation final, I chose to create an Oculus game built in Unity. To create this, I had to do some programming in C#, both to create colliders that generate new platforms when the player leaves another platform, and to integrate Oculus support into my Unity game.

The game begins with the player standing on a small carpet in space, with clouds below them. They listen to a robot reciting the Manifesto for a European Research Network into Problematic Usage of the Internet, set to William Basinski’s Mono no Aware. Once the player tires of this esoteric soundscape, they can jump off the carpet and fall onto a platform composed of the video and audio of Rick Astley’s “Never Gonna Give You Up” music video. Based on feedback, I added ten more platforms composed of various internet video memes for the player to fall through before reaching the “Never Gonna Give You Up” platform.

This project is a metaphor for the addictive nature of the internet and the interconnectedness of internet culture and mindless media consumption. We know that overuse of the internet can be addictive and harmful to our mental and physical health. This is not to discount the internet’s positive content; rather, it is a recognition that consuming “mindful” and “mindless” content are two sides of the same coin when it comes to internet abuse.

Playtest of My First Game Version

Two Playtests of My Second Game Version

Animation – Understanding Comics Response

The value of Understanding Comics in the context of animation is multifaceted. On the one hand, animation surpasses the technical capabilities of comics, since movement is communicated more fluidly. On the other, the process, or “six steps”, involved in creating any art form that McCloud outlines applies directly to animation, particularly as both are visual media. Iconography, distortion of reality, and closure (topics covered in chapters 2 and 3) are also directly applicable to animation, as they are narrative mechanisms unique to visual media, regardless of whether their use of time is fluid or disjointed. In my opinion, the beauty McCloud sees in comics, in their capacity to engage the five senses and our emotions, is directly pertinent to animation as well. McCloud’s wariness of over-writing comics can be transposed directly to the sentiment that animation directors should not rely on exposition to tell their story (a brilliant example of this is Genndy Tartakovsky’s Primal); both should rely on images to facilitate engagement, particularly through the use of color. Overall, I found Understanding Comics to be an ideological repetition of the tenets we learned in class, well packaged in an engaging comic style.

ICM Final – My Beautiful Dark Twisted Recursive Trees

For my final, I chose to create a recursive tree that responds to sentiment analysis of text typed by the user. For every “positive” word, I wanted the tree to gain a flower, and for every “negative” word, I wanted a flower to disappear.

Before I discuss my process and issues, I would like to state my position on sentiment analysis, which I have also included in a file with my cited sources in my final project:

We live in a time when innovation often outpaces critical thought vis-à-vis the social systems we change and, subsequently, the people we affect. Sentiment analysis is a perfect example of a technology that is cool at first glance, but can be and has been used to perniciously exacerbate problematic social structures, particularly, but not limited to, racial discrimination.

I did not create the coding schema that values and devalues certain words based on their perceived positivity. In fact, I take great issue with the rudimentary nature of this schema and find it necessary to communicate that context is what truly determines how one may interpret a given word or phrase. Furthermore, attention to how we choose our words, the history of our phrases, and what they mean in relation to different lived experiences is imperative to communicating in a manner that diminishes the hostility (intentional or not) of our communication.

The greatest struggle of this project was figuring out how to map flowers onto the tree. I tried a number of techniques before a true attempt at mapping was executed. The following two attempts worked conceptually, though I only brought the latter to “completion” because it was easier to finish in the time I had before the assignment was due.

Attempt 1

Following an approach conceptualized by Cassie Tarakajian, and with the assistance of Alden Jones, I attempted to map my recursive tree by “cheating” (in a super complicated way). Essentially, I would:

  1. In a first animation, generate a recursive tree.
  2. In a second animation, load the image of the tree, map its pixel positions using edge detection and black-pixel-recognition arrays, put 50 random pixel positions from these arrays into a JSON file, and export that JSON file.
  3. In a third animation, load the image of the tree and the JSON file, and use the JSON coordinates to place my flower constructor.

I stopped at step 3 due to deadline constraints – I now understand that I need to change step 2 so that it does not produce an inverse-color image during edge detection.
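For the curious, a minimal p5.js sketch of the pixel-sampling half of step 2 (skipping edge detection; the image filename, darkness threshold, and output filename are placeholders):

    // attempt-1, step 2 sketch: sample dark pixels from a tree image and export them as JSON
    // (image filename, darkness threshold, and output filename are placeholders)
    let treeImg;

    function preload() {
      treeImg = loadImage("tree.png");
    }

    function setup() {
      createCanvas(treeImg.width, treeImg.height);
      image(treeImg, 0, 0);
      treeImg.loadPixels();

      // collect the coordinates of every sufficiently dark ("black") pixel
      const darkPixels = [];
      for (let y = 0; y < treeImg.height; y++) {
        for (let x = 0; x < treeImg.width; x++) {
          const i = 4 * (y * treeImg.width + x); // RGBA: four values per pixel
          const rgbSum = treeImg.pixels[i] + treeImg.pixels[i + 1] + treeImg.pixels[i + 2];
          if (rgbSum < 60) darkPixels.push({ x, y });
        }
      }

      // choose 50 random positions for flowers and export them
      const chosen = [];
      for (let n = 0; n < 50 && darkPixels.length > 0; n++) {
        const idx = floor(random(darkPixels.length));
        chosen.push(darkPixels.splice(idx, 1)[0]);
      }
      saveJSON(chosen, "flowerPositions.json");
    }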

Attempt 2

With the guidance of Max Da Silva, I developed a recursive tree generated from more complex vector geometry. This made it considerably easier to map the position of each branch node, which in turn made “random” generation of flowers easier. I played around with some prototypes before hooking the tree up to sentiment analysis. These prototypes are linked below.
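A stripped-down sketch of the idea in p5.js – the branching angles, lengths, and depth are arbitrary, and the circles stand in for my flower constructor:

    // attempt-2 sketch: a vector-based recursive tree that records its branch tips
    // so flowers can be drawn at those nodes (angles, lengths, and depth are arbitrary)
    let branchTips = [];

    function setup() {
      createCanvas(600, 600);
      noLoop();
    }

    function draw() {
      background(255);
      branchTips = [];
      branch(createVector(width / 2, height), createVector(0, -120), 7);

      // placeholder "flowers" at each recorded branch tip
      noStroke();
      fill(255, 120, 180);
      for (const tip of branchTips) circle(tip.x, tip.y, 8);
    }

    function branch(base, dir, depth) {
      const tip = p5.Vector.add(base, dir);
      stroke(60);
      line(base.x, base.y, tip.x, tip.y);

      if (depth === 0) {
        branchTips.push(tip); // store the node position for flower mapping
        return;
      }

      // two shorter child branches, rotated away from the parent
      const left = p5.Vector.mult(dir, 0.7);
      left.rotate(-PI / 6);
      branch(tip, left, depth - 1);

      const right = p5.Vector.mult(dir, 0.7);
      right.rotate(PI / 6);
      branch(tip, right, depth - 1);
    }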

Comm Lab Animation – Perfect Pair

The purpose of this assignment was to strengthen our skills in Adobe After Effects. My partner, Hanwen Zhang, and I created Perfect Pair as a commentary on the normalization of excess in consumption, and on the paradoxical judgment we impose on those who overindulge.

The most difficult element of this animation was the masking and rotoscoping involved in layering the characters and the bodily fluids; most of it had to be done manually, frame by frame. Had there been more time, I would have appreciated greater insight into how to make more realistic fluid textures and how to make the transition between the “flood” and “kaleidoscope” sequences smoother. I thoroughly enjoyed working on the sound design for this piece. I also enjoyed adding the Kobayashi Television sequence based on feedback we received on a rough draft – its concept integrates seamlessly into our piece’s larger message.