Assignment,Precedent Analysis — ygk @ 1:55 am

“OpenControllers” by ECAL/Marc Dubois (2014)

OpenControllers ECAL/Marc Dubois from ECAL on Vimeo.

Open Controllers lets you use your everyday device as a different kind of controller by placing it inside specially designed artifacts that embed the device and make use of its capabilities, such as the gyroscope and light sensor. This project takes your physical movements and interprets them. I think this could be done with a range of patterns, which gets into the next video…

“Tilt Brush” by Skillman & Hackett (2014)

This is a lot more similar to what I want to do, except that I am also interested in the painting becoming three-dimensional as you paint it. Not that I intend to do that exact thing… I just mean to draw a parallel. The gestures here look swift and intuitive if it’s real, and the interface is quite honestly very confusing, but maybe that’s because I’m not aware of the gestures it uses. Assuming it is real, I think this is the future of design through tangible interaction.

“3Doodler” by Pete and Max (2013)

This project does a lot of what I am hoping to accomplish, with its ability to prototype quickly and efficiently without having to plan in advance what will be made.

“3D Printed Blooming Flowers” by Mikaela Holmes (2014)

This kind of movement is exciting as a less mechanical way to actuate an artifact. I wonder how it could be applied at a more complex or responsive level.

I think these videos show the progression of my thoughts on the final project quite well. I want to be able to design something indirectly by using a tool, not unlike how surgeons now use robots to perform the actual surgery. I am struggling to gauge the scope I am capable of and to come up with a good project idea to make it happen. One thought is to use my Surface as a tool that communicates with a CNC robot, fabricating sketches as they are drawn in real time on the Surface.

Precedent Analysis 2

Precedent Analysis — pvirasat @ 11:21 pm

Art: “Quotidian Record” by Brian House (2012)

Quotidian Record is a custom-made vinyl record featuring a year of Brian’s location-tracking data. The data is mapped into a harmonic relationship, compressing 365 days down to approximately 11 minutes. This project connects digital information to the physical world through music. I find it a very unique and interesting way to take data that is easily forgotten and make it tangible and memorable.
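The stated figures are consistent with one revolution per day, though that mapping is my inference from the numbers rather than something the post states: at a standard 33⅓ rpm, 365 revolutions take roughly 11 minutes. A quick check:

```python
# Check: 365 revolutions at 33 1/3 rpm come out to about 11 minutes.
rpm = 100 / 3            # 33 1/3 revolutions per minute
revolutions = 365        # one per day, for a year of location data
minutes = revolutions / rpm
print(round(minutes, 2))  # about 10.95 minutes
```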



Industry: Fitbit

I am still deciding on the type of data I’d like to collect, and that will also determine the tool I use; Fitbit is definitely at the top of the list. One idea is to track the amount of sleep I get each night and how that affects my daily activities the following day.


Academia: “Understanding Quantified-Selfers’ Practices in Collecting and Exploring Personal Data” by Eun Kyoung Choe, Nicole B. Lee, Bongshin Lee, Wanda Pratt, and Julie A. Kientz (2014)

link to paper

Even though everyone tracks something about themselves, this paper looks at the “extreme users”: the quantified-selfers, who diligently collect many different kinds of data about themselves. Through Meetup groups, blogging, and other outlets, they have shared what they found useful and the mistakes they recommend avoiding. This paper will help me narrow down the data I should collect by considering how the data would be beneficial and appropriate for this project.



Since I am looking to work with personal data collected over time, the most appropriate outlet would be publication on a few notable design blogs. However, my ultimate goal for this project is to learn a little bit more about myself by making this data permanent and tangible.


Augmented Windows

Precedent Analysis — Tags: — John Mars @ 5:25 pm


In 2012, Samsung debuted a transparent LCD. Where typical LCDs use an artificial backlight to shine through their pixels, the transparent display uses the sun (and artificial edge-lighting at night). The display also includes a touchscreen component.

Samsung previewed their technology as a 1:1 desktop monitor replacement, but I see value in using the technology more inventively: as an augmented reality window, where the display builds upon what’s visible in the outside world.


Memo Akten (who I just realized authored the ofxARDrone library I used in the last project) created a series of videos for the launch of the Sony PlayStation Video Store. The videos use a live projection-mapping technique where the content is mapped according to the perspective created by the camera’s position and angle in space.

If you look through a window at a nearby object, draw a circle around it, and then move your head, the circle is no longer in place. With head- or eye-tracking, the perspective projection could change with your changing viewpoint, so that the circle will always be around the object.
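That correction is simple geometry: the annotation belongs wherever the eye-to-object sight line crosses the window plane. A minimal sketch (the coordinate frame and function name are mine, assuming the window is the plane z = 0 with the viewer at z > 0):

```python
def overlay_point(eye, obj):
    """Where the eye-to-object sight line crosses the window plane z = 0.

    eye: (x, y, z) viewer position, z > 0 (inside the room)
    obj: (x, y, z) object position, z < 0 (outside the window)
    Returns (x, y) on the window where the annotation should be drawn.
    """
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = ez / (ez - oz)          # parameter where the line reaches z = 0
    return (ex + t * (ox - ex), ey + t * (oy - ey))

# As the viewer's head moves sideways, the drawn circle must move too:
print(overlay_point((0.0, 0.0, 1.0), (0.0, 0.0, -3.0)))   # (0.0, 0.0)
print(overlay_point((0.5, 0.0, 1.0), (0.0, 0.0, -3.0)))   # (0.375, 0.0)
```

Re-running this every frame with the tracked head position keeps the circle locked onto the object.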


What happens if two (or more) people are looking out the window? A conventional display can’t show a different image to each person. Alternatives that could make that happen are active-shutter glasses (or maybe even passive, if you’re really good), as used with 3D TVs, or a lenticular display, i.e. one that uses lenticular film.

This 1999 paper1, by N. A. Dodgson et al., along with his 2011 workshop, discusses a way to create a glasses-free 3D display using lenticular arrays. In addition to its 3D uses, such a display could also be used for “two-view, head-tracked displays; and multi-view displays”. They’re not talking about displaying a unique perspective per viewer, but about delivering correct stereoscopic images to each eye of each viewer; my use should be much simpler.


I definitely want to try to get my project picked up by a media outlet — a technology/design blog, for example. A little boost of fame would be most appreciated. And, heck, if it turns out to be really cool, unique technology (which it might be — I’m not finding all that much in academia), maybe even a paper submission would be in order.

  1. N. A. Dodgson, J. R. Moore, S. R. Lang. 1999. “Multi-View Autostereoscopic 3D Display.” International Broadcasting Convention 99.

Project 2.01 – Oatcam

Uncategorized — rbrill @ 11:24 am

Making Things Interactive: Assignment 2 from Ryan B. on Vimeo.

For our second project, we were tasked with making a spy device using Raspberry Pi. This was my and Danita’s first time working with Raspberry Pi.

We created a motion-sensing spy device using the Raspberry Pi, an oatmeal container, a PS3 Eye camera, and a passive infrared (PIR) sensor. We used the WiringPi mapping for the GPIO pins to take in data from the PIR sensor and camera. It took us a while to get consistent results from these pins until we realized that one of the pins we were using had a built-in pull-up resistor. Changing pins did the trick.
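Independent of the GPIO details, the capture trigger reduces to rising-edge detection on the PIR readings; a hardware-free sketch (the function and sample values are illustrative, not our actual code):

```python
def rising_edges(samples):
    """Indices where a PIR reading goes LOW -> HIGH, i.e. motion just began.

    `samples` is a sequence of 0/1 pin reads. Each rising edge is one
    "take a photo" event, so someone lingering in front of the sensor
    doesn't trigger hundreds of captures while the pin stays HIGH.
    """
    events = []
    previous = 0
    for i, level in enumerate(samples):
        if level == 1 and previous == 0:
            events.append(i)
        previous = level
    return events

print(rising_edges([0, 0, 1, 1, 1, 0, 1]))  # [2, 6]
```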

We definitely pushed the Raspberry Pi’s processing power to its limits at times during development, but in the end we had a system that can consistently capture, store, and transfer photos of passers-by.

by Ryan and Danita

OpenFrameworks Awesomeness: TRON Legacy

Software,Technique — Tags: — John Mars @ 9:47 pm

I remembered this and am not using it for my precedents, but I thought I’d share anyway (read his post, don’t just watch the video):

Tron Legacy by Josh Nimoy

“Dizzy, The Deceitful” by Patt Vira & Epic Jefferson


Dizzy is a friendly-looking teddy who actually watches your every move and makes everything it sees public.

Making use of the Raspberry Pi’s GPIO, we hooked up a PIR sensor to trigger a webcam capture event and publish the image to Tumblr.



By far the most challenging part of this project was working with the Tumblr API. Since it’s a lot easier to find examples of Raspberry Pi projects written in Python than in C++, we decided to use Python: the Tumblr API page has example code for Python but not C++, and Adafruit has a great Python tutorial for hooking up a PIR sensor to a Pi’s GPIO.

pytumblr is Tumblr’s official Python API client, but its instructions are unclear.


Python-to-tumblr Tutorial


Source Code






Streaming live video with ffmpeg/ffplay

Software,Technique — Tags: — John Mars @ 9:33 am

FFmpeg is a multi-platform command-line application for streaming, recording, converting, and saving video and audio. It includes a variety of related programs, including FFplay, which displays video in a window (vanilla FFmpeg on its own does not).

First, install it on your device.

Either build from source, install via your favorite package manager (apt-get, aptitude, brew, etc.), or download a pre-compiled binary (Windows, Mac).

Second, find a stream.

For our project, the ARDrone was streaming video using a custom variant (PaVE) of a regular H.264 stream, over TCP port 5555 from the drone’s IP address.

Third, ffplay.

In our case, one easy little command did all the work (your case will undoubtedly be different):

ffplay tcp://<server-ip>:5555

It has four parts. On the left is ffplay, the application that is doing all of the work. Next, tcp:// tells ffplay that we will be using the TCP protocol (as opposed to http, ftp, udp, etc.). <server-ip> stands in for the IP address of the server, and :5555 is the port we will be connecting to.

Put it all together, and video should flow like a slightly-laggy waterfall (FFplay has some latency issues).
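Some of that latency comes from FFplay’s up-front stream probing and input buffering, which a few standard FFmpeg options usually trim. A sketch of such an invocation as an argv list (the <drone-ip> placeholder and the exact values are assumptions to tune for your own stream):

```python
# Hypothetical lower-latency ffplay invocation for a TCP H.264 stream.
# "<drone-ip>" is a placeholder for the actual server address.
argv = [
    "ffplay",
    "-fflags", "nobuffer",      # decode packets as they arrive, don't queue input
    "-flags", "low_delay",      # ask the decoder for low-delay operation
    "-probesize", "32",         # probe as little of the stream as possible
    "-analyzeduration", "0",    # skip the long up-front stream analysis
    "tcp://<drone-ip>:5555",
]
print(" ".join(argv))
```

The same list could be launched from a script with subprocess.run(argv), or typed directly into a terminal.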

As always, read through all of the documentation and fully understand what you’re trying to accomplish and how you’re going to accomplish it, instead of randomly replacing values.

How to Upload Image From RaspberryPi (RPi) to Dropbox using OpenFrameworks (OFx)

Reference,Software — alanhp @ 8:11 pm

Okay! Here is the explanation; hopefully it’s not super hard to get it working.

Link to explanation with cool colored text:

For the instructions by the creator of Dropbox-Uploader, go here. Those instructions are not specific to the Raspberry Pi and openFrameworks; they are meant for all environments, so they may be harder to apply to our specific case.


1) Get the Dropbox-Uploader software from GitHub. Full instructions are available from the project itself, but the simplest method is to connect to your RPi and, from the home directory (the one you should be in by default when you connect), type the following command in the terminal:
git clone

2) Move to the directory where Dropbox-Uploader lives (which should just mean typing cd Dropbox-Uploader) and give the script execute permission with the following command (you may have to play a little with the +x option; it gave me some trouble. I think chmod --help will give you a list of options):
chmod +x

3) Then run the script, which will guide you through some configuration options. To run it, the command should be:

4) One of the configuration options is whether you want your app to have access to your entire Dropbox or just to a specific folder. I chose a specific folder, which I think makes things easier to control; the rest of these instructions assume you choose that option too. You should then create a folder inside the Apps folder in your Dropbox (you should have one by default); you can create it from the Dropbox for Developers page.

5) Switch to your openFrameworks app directory, e.g. opt/openFrameworks/apps/myApps/myCoolApp. Now you can do the cool stuff. The thing I really liked about this integration is that the actual interaction with Dropbox takes only one line of code: I save the file I want to upload on the RPi, then grab it from the directory where it sits and save it to Dropbox.

6) Now, wherever in your app you want to upload your file to Dropbox, place the one line of code mentioned in the previous step. I put it right after saving the image file on my Raspberry Pi (for which I used the image-saving example in openFrameworks’ examples/graphics folder). The line has the form system(“command that we will specify”): whatever is between the quotes gets executed from the command line, so we momentarily leave openFrameworks and run something in the shell. The specific upload line is going to look something like this: system(“../../../../../home/pi/Dropbox-Uploader/ upload ../bin/data/uploadFile.png uploadFile.png”)

Once that is done, your file should get uploaded to Dropbox… but it usually doesn’t happen on the first try; I had to play a lot with the location paths for all the files. I am going to explain each part of the quoted command in a little more detail so that hopefully getting the command just right is easy. So…

../../../../../home/pi/Dropbox-Uploader/ is the path to the uploader script relative to the app’s home directory. The ../../../../../ part goes up five directories, basically to the filesystem root, right before we would enter the opt folder. From there we go into /home/pi/Dropbox-Uploader/, where the uploader file lives, and run the script.

upload says that we want to use the uploader’s upload functionality.

../bin/data/uploadFile.png is what we want to upload: the file called uploadFile.png, located in the bin/data folder, which we reach by going up one directory (../) from the src folder where our compiled code is (I think).

And since we configured the app to have access to one specific Dropbox folder, we don’t need to specify the destination directory in Dropbox; we can just type the name we want the file to have there: uploadFile.png.
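The long ../../../../../ prefix is just a relative walk from the app directory back up to the filesystem root, and that arithmetic can be sanity-checked. A quick sketch (paths taken from the example above; uploader.sh stands in for the uploader script’s actual filename, which isn’t shown here):

```python
import os.path

# Five ".." steps climb out of the five path components of the app directory,
# landing at "/" before descending into /home/pi/Dropbox-Uploader.
app_dir = "/opt/openFrameworks/apps/myApps/myCoolApp"
script = os.path.join(app_dir, "../../../../../home/pi/Dropbox-Uploader/uploader.sh")
print(os.path.normpath(script))  # /home/pi/Dropbox-Uploader/uploader.sh
```

If your app lives at a different depth, the number of ../ steps changes accordingly, which is probably why getting the command right takes a few tries.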

That’s it.

Project 2.01 — rpi’s GPIO Pins + oF WiringPi Library

Assignment — pvirasat @ 11:00 am

This is my first time working with a Raspberry Pi, and I am hooked! This project demonstrates the basic operation of the GPIO pins on the Raspberry Pi, using the ‘WiringPi’ library with openFrameworks. I used three buttons as inputs to control the direction, size, and color of a circle on the screen.
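Separate from the GPIO layer, the button-to-circle mapping can be sketched as one update step per frame; this is a hypothetical illustration, with all names and step sizes mine rather than the actual project code:

```python
def apply_buttons(circle, direction_btn, size_btn, color_btn):
    """One update step for the on-screen circle from three button reads.

    circle: dict with 'x', 'dx', 'radius', 'hue'. Each pressed button
    (read as 1) changes one property, mirroring the three-input demo.
    """
    if direction_btn:
        circle["dx"] = -circle["dx"]                # flip travel direction
    if size_btn:
        circle["radius"] += 5                       # grow the circle
    if color_btn:
        circle["hue"] = (circle["hue"] + 30) % 256  # step the color
    circle["x"] += circle["dx"]                     # move every frame
    return circle

c = {"x": 0, "dx": 2, "radius": 20, "hue": 0}
print(apply_buttons(c, 1, 0, 1))  # {'x': -2, 'dx': -2, 'radius': 20, 'hue': 30}
```

In the real app the three arguments would come from digitalRead-style pin reads each frame, and the dict’s values would drive the circle drawn to the screen.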

It took me a while to get into the flow of accessing the Raspberry Pi through SSH, but after I got that working, the rest wasn’t too bad. I worked on the hardware and software separately and made sure everything worked on its own before integrating them. It was definitely a good introduction to the Raspberry Pi and openFrameworks.





Project 2 – Raspberio

Assignment,Hardware — Tags: — tdoyle @ 11:26 pm

For my second project, I created a controller reminiscent of the classic NES controller, with three push buttons. This project aimed to expose us to aspects of hardware prototyping and the WiringPi library. In the end, I created a demo “game” in which the player could move Mario around the screen by moving left, moving right, and jumping.


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2020 Making Things Interactive | powered by WordPress with Barecity