Week 1 – Conjuring

My finished product

This week we were tasked with “building an interface for conjuring an environment rather than traveling through it,” and we were also introduced to the machine learning software RunwayML. I decided to train a model in RunwayML on images of my apartment and present the results as a fluid scene in a p5.js animation.

I trained my ML model on approximately 1,000 photos of my apartment. I got these photos by recording videos of my apartment at different times of day and breaking those videos into still images with the command-line tool FFmpeg.
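For anyone who wants to try the same workflow, the frame extraction can be done with FFmpeg's `fps` filter. This is a minimal sketch, not my exact command; the filenames, output directory, and frame rate here are placeholders you would adjust for your own footage:

```shell
# Hypothetical input/output names. Extract 2 frames per second
# from a source video into numbered JPEG stills.
mkdir -p frames
ffmpeg -i apartment_morning.mp4 -vf fps=2 frames/morning_%04d.jpg
```

Repeating this for each video (morning, afternoon, night, etc.) and pooling the stills is a quick way to build a training set of a thousand or so images.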

Because I used the free trial tier of RunwayML's model training, the ML-generated images came out incredibly distorted, as you can see in the video above. If I were to revisit this project, I would be inclined to spend some money on longer training time, to see whether the model could generate realistic apartment images.

Though I will not share my apartment images or my RunwayML model, I will share my p5.js code.