Project 2, Assignment 3 – OSC – Lights

Assignment — tdoyle @ 8:33 am

For the third micro assignment, Priya and I worked together to create an OSC application that communicates between two computers: one computer takes input and the other drives a physical output. We decided to create a prototype of what could be turned into a controlled light array. One of our computers took in keyboard input and the other relayed the information to an Arduino, which turned on lights depending on the key pressed. We used three small light bulbs and mapped the keys ‘j’, ‘k’, and ‘l’ to them. For a nicer effect, we added some code to fade the light bulbs. Additionally, since the light bulbs required more than the 5V the Arduino could supply, we used a transistor and an external power supply to create the finished product. Below are links to the GitHub repos with the source code. Note that the Arduino code is in the LightReciever repo.
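
As a rough illustration of the receiving side (not the repo's exact code), here is a minimal Arduino sketch under a few assumptions the post doesn't confirm: the relaying computer forwards each pressed key as a single character over Serial, and the bulbs' transistors are driven from PWM pins 9, 10, and 11 (hypothetical pin choices).

    // Assumed conventions: one character per keypress arrives over Serial;
    // pins 9/10/11 are hypothetical PWM pins driving the transistors for
    // the 'j', 'k', and 'l' bulbs.
    const int bulbPins[3] = {9, 10, 11};
    int level[3]  = {0, 0, 0};   // current PWM brightness per bulb
    int target[3] = {0, 0, 0};   // fade-in target per bulb

    void setup() {
      Serial.begin(9600);
      for (int i = 0; i < 3; i++) pinMode(bulbPins[i], OUTPUT);
    }

    void loop() {
      if (Serial.available() > 0) {
        char key = Serial.read();
        if (key == 'j') target[0] = 255;
        if (key == 'k') target[1] = 255;
        if (key == 'l') target[2] = 255;
      }
      for (int i = 0; i < 3; i++) {
        if (level[i] < target[i]) {
          level[i] += 5;                         // quick fade in
        } else if (level[i] > 0) {
          level[i] -= 1; target[i] = 0;          // slow fade back to off
        }
        analogWrite(bulbPins[i], level[i]);      // PWM drives the transistor
      }
      delay(10);
    }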


Hand-gesture controlled sound synthesis

One of my main interests is working with interfaces for sound synthesis. Over the years I’ve been experimenting with a few different techniques to see how different interfaces inspire different styles of performance and how the interface affects the sound produced. Without a clear goal, I’ve delved into circuit bending/hardware hacking, computer vision, touch screens, and web/text-based systems. Here are some of my findings:

Circuit Bending

Elvis box – Circuit Bent DS-1 Distortion

Circuit bending provides a great introduction to using electronics incorrectly: the idea is that you use wires to randomly create connections within existing circuits (using only battery power, for safety) and explore the effect these connections have on the sound (or visuals). I think this wires the brain in a great way because you expect failure; instead of total control, you have only curiosity and luck. This got me thinking about how I was going to control these sounds. Why had I decided to use buttons, patch cables, and knobs?


Project 2.03 — OSC — Patt and Alan

Assignment,Technique — pvirasat @ 10:54 am

Here is our project, which shows some basic use of the OSC protocol!

Password: pattandalan

The images below show how things talk to each other, and the wiring diagram of the motor (using a 2N7000 transistor and an external power supply):

how things talk

motor circuit diagram with 2N7000

Also, here is a video of another test we did, where we hooked up a light sensor to a Raspberry Pi and used it as a digital input. We then sent the data (through OSC) to an openFrameworks application on another computer, which connected to an Arduino (through Serial) and controlled the motors.

Password: pattandalan

Here’s the code: Github
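
The actual code is in the GitHub repo above; as a rough sketch of the OSC hop in the pipeline just described, assuming the ofxOsc addon that ships with openFrameworks (the IP address, port, OSC address, and serial device below are all placeholders):

    #include "ofMain.h"
    #include "ofxOsc.h"

    // Pi side: send the light reading over OSC.
    ofxOscSender sender;

    void setupSender() {
        sender.setup("192.168.0.12", 9000);        // placeholder laptop IP/port
    }

    void sendReading(int lightLevel) {
        ofxOscMessage m;
        m.setAddress("/sensor/light");             // placeholder OSC address
        m.addIntArg(lightLevel);
        sender.sendMessage(m, false);
    }

    // Laptop side: receive the value and forward a speed byte to the Arduino.
    ofxOscReceiver receiver;
    ofSerial serial;

    void setupReceiver() {
        receiver.setup(9000);
        serial.setup("/dev/tty.usbmodem1421", 9600);   // placeholder serial port
    }

    void pollOsc() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/sensor/light") {
                int speed = (int) ofMap(m.getArgAsInt32(0), 0, 1023, 0, 255, true);
                serial.writeByte((unsigned char) speed);   // Arduino maps this to motor PWM
            }
        }
    }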

Project 02 – Spy Device – iSpyLenny

Assignment,Submission — alanhp @ 4:19 pm

The password is: ispylenny


Concept: iSpyLenny is a remote dog monitor that lets me spy on / see my dog, who is in South America while I am in the US. A pressure sensor sits below his bed and senses when he stands on it. When this happens, a picture of him is taken by the PS3 Eye webcam that sits next to the bed. This picture is then saved on the Raspberry Pi and uploaded to Dropbox from the device. Once the photo is uploaded, an IFTTT block is activated and a notification is sent to my phone with a link to the picture.

The process of getting this to work was a lot harder than I expected. The simplest part was getting video capture working using the built-in functionality in openFrameworks. The more complicated parts for me were wiringPi and uploading images to Dropbox. The wiringPi part is technically very simple, but I had a big misunderstanding of current flow and of the way resistance works; once I got help on that from Jake, it wasn’t hard. The other hard part was the Dropbox uploader, in particular understanding how to locate, from a terminal command, all of the files I needed: the file paths for where the script was, for the location of the image on the Raspberry Pi, and for the location of the Dropbox folder. One issue I ran into at the end, when combining the picture taking and the wiringPi code, was that the picture being taken was just grey, even though I was using the exact same code from the working file. I think it had something to do with the way the project was created and the add-ons and settings chosen when creating it. I tried a couple of different ways of solving this, but after two hours it seemed like the effort wasn’t justified and I decided to leave it.

Some lessons learned:

  • Terminal commands can be run from openFrameworks code, so you can essentially do anything outside of openFrameworks using openFrameworks (see the sketch after this list).
  • Current flows where it is easier for it to flow; a resistor makes it harder for current to flow in that direction.
  • File paths: ../ goes back one directory; folder1/folder2/folder3 goes into folder3; ../../../folder1 goes back three directories and then enters folder1.
  • Some problems are probably not worth solving, e.g. when the returns are tiny compared to the effort you’ll dedicate.
  • IFTTT blocks run every five minutes.
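
For the first lesson, here is a minimal sketch of the pattern, assuming the popular Dropbox-Uploader shell script; all paths below are placeholders, since the real ones depend on where the script and the image live relative to the app's working directory (exactly the file-path puzzle from the third lesson):

    #include "ofMain.h"

    // Shell out from openFrameworks: ofSystem() runs the command,
    // blocks until it finishes, and returns whatever it printed.
    // Relative paths resolve from the app's working directory,
    // hence the ../ hops.
    void uploadGrab() {
        std::string result = ofSystem("../../../Dropbox-Uploader/dropbox_uploader.sh "
                                      "upload data/grab.jpg /iSpyLenny/grab.jpg");
        ofLogNotice("dropbox") << result;
    }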


Precedent Analysis 2

Final Project,Precedent Analysis,Uncategorized — priyaganadas @ 6:17 am

Idea:
Create a data visualization, in physical reality, of an everyday but faraway space phenomenon.

Similar projects that I found:

LIGO
ART

Cosmic gravitational waves are detected (data is acquired) and showcased through LEDs. What I like about this project is that an unseen, invisible phenomenon is given a physical form.
Moon Creeper: Sublime Gadgets by Auger-Loizeau (2012)

The Apollo 11 mission left giant reflectors on the surface of the Moon. The reflectors can be illuminated by laser beams sent from telescopes on Earth, and the reflected beam is measured to calculate the speed at which the Moon is receding from Earth (3.8 cm per year). Moon Creeper is a visualization of the receding Moon. What I find interesting in this project is that a subtle motion is chosen and represented as-is, rather than accelerated, in the visualization.

The Asteroid Impact Probability Drive.

the asteroid impact probabilty drive from luckytwentythree on Vimeo.

The device shows potentially hazardous asteroids orbiting around Earth and displays warnings when their orbits intersect Earth’s. The device highlights that our lives are the result of a giant probabilistic equation: there are so many things that could go wrong, and yet we go on living our lives as if we were immortal.

Academia

This paper by Stéphanie Fleck and Gilles Simon describes an augmented-reality platform for young children to learn astronomy. Each marker corresponds to a planet in the solar system, and the augmented objects react based on their position and distance with respect to each other, in real time.

RPi Primer

Assignment,Submission — priyaganadas @ 7:58 am

Three switches were connected to the Raspberry Pi. Switch one activates the graphics and generates the first smiley, switch two changes the expression into a happy face, and switch three changes it into a worried face. The aim was to learn to use wiringPi with openFrameworks.
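
As a rough sketch of how wiringPi and openFrameworks fit together here (not the exact code), assuming wiringPi's BCM GPIO numbering; the switch pins below are hypothetical and, per the first lesson below, model-dependent:

    #include "ofMain.h"
    #include <wiringPi.h>

    class ofApp : public ofBaseApp {
        const int pins[3] = {17, 27, 22};  // hypothetical switch GPIOs
        int expression = 0;                // 0 none, 1 smiley, 2 happy, 3 worried

    public:
        void setup() {
            wiringPiSetupGpio();               // BCM numbering (needs root)
            for (int p : pins) {
                pinMode(p, INPUT);
                pullUpDnControl(p, PUD_UP);    // each switch pulls its pin low
            }
        }
        void update() {
            for (int i = 0; i < 3; i++)
                if (digitalRead(pins[i]) == LOW) expression = i + 1;
        }
        void draw() {
            if (expression == 0) return;
            ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, 100);  // the face
            // ...eyes and mouth drawn according to `expression`
        }
    };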


Video is here

Lessons learned:
1. The pinout diagram differs between models; the GPIO numbers do not match the pin breakout on the board.
2. openFrameworks is fairly easy to use for generating graphics.

Troubleshooting with Windows
1. Before starting, always check that the computer is on the ‘CMU’ network and not on ‘CMU Secure’. Even though I registered my computer on the CMU network, it did not register for a long time (more than 30 minutes). The problem resolved itself when I registered for the fourth time.
2. Also, go to ‘Network and Sharing Center’, then click on ‘Change adapter settings’. Here, find the CMU wireless connection and open its advanced properties. Under ‘Sharing’, make sure you have selected ‘Local Area Network’ and checked the box that allows sharing with devices on the network.
3. The above settings may change the next time you connect to the CMU network. Check them every time.

Color Detector

Submission — amyfried @ 7:47 am

Color Detector from Amy Friedman on Vimeo.


Concept: Our concept was to create an encoded-message device that outputs a secret message if the right colors are recorded by the camera in the correct sequence.

Process: First we needed to create a program that could use blob detection to identify one color. Our goal was to have three colors: red, black, and blue. Once we detected one color, we utilized Python to detect the two other colors. After this, we developed the program not only to recognize each color but to record the sequence. If the detected sequence matched the “secret code” of blue, black, red, the message was played. Finally, we created the mp3 file of what we wanted the message to say and added it to the Raspberry Pi.
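
Their detection code is in Python (links at the end of the post), but the sequence-matching step is simple enough to sketch. Here is one way to do it, with illustrative names, written in C++ like the rest of the course examples: keep the last three detected colors and compare them against the secret code.

    #include <algorithm>
    #include <deque>
    #include <iostream>
    #include <string>
    #include <vector>

    const std::vector<std::string> SECRET = {"blue", "black", "red"};
    std::deque<std::string> recent;   // sliding window of recent detections

    // Call once per color detection; returns true when the code is matched.
    bool onColorDetected(const std::string& color) {
        recent.push_back(color);
        if (recent.size() > SECRET.size()) recent.pop_front();
        return recent.size() == SECRET.size()
            && std::equal(SECRET.begin(), SECRET.end(), recent.begin());
    }

    int main() {
        for (const std::string& c : {"red", "blue", "black", "red"}) {
            if (onColorDetected(c))
                std::cout << "Matched: play the secret mp3\n";
        }
    }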

Lessons Learned: We learned how to use Python, how to import modules into Python, and the language constructs needed to use it successfully. We also learned how to capture timed images with a camera and output audio files on the Raspberry Pi, and how to detect color in captured images. Finally, we learned how to develop a form for preexisting objects so they can be used in a new capacity.

We utilized these two websites to help with part of the coding:

learn.adafruit.com/playing-sounds-and-using-buttons-with-raspberry-pi/install-audio

www.cl.cam.ac.uk/~db434/raspi/blob_detection/

“AR Drone Parro-Tweet” by John Mars, Yeliz Karadayi, and Dan Russo

Assignment,Submission — Dan Russo @ 7:39 am

Parro-Tweet from Dan Russo on Vimeo.

Parro-Tweet utilizes the AR Drone hardware platform coupled with openFrameworks and the Twitter API. openFrameworks allows the drone to be controlled from multiple sources, including a remote laptop with a gaming controller or a Raspberry Pi with a custom control panel. The drone can be used to seek out photos and instantly place them on a Twitter feed.

The biggest challenge with this hardware platform was the latency of the video feed outside of its designated mobile app, most likely due to the compression type the WiFi connection uses. The platform’s proprietary nature made it difficult to find any documentation on how to fully utilize the system, and the latency made it nearly impossible to run any computer vision / navigation through openFrameworks. However, all manual controls and the Twitter API are fully functional.


Live Time Lapse Prototype

Project01 — jk @ 5:49 am

I am prototyping a time lapse camera and real time video feed.

I am interested in the implications of showing these real time videos in proximity to the place photographed.

I am particularly interested in pursuing further prototypes with the phenomenon at CMU known as the Fence.

The Fence is already a stage on which CMU student organizations perform in front of the school. I am interested in capturing these performances over an entire year, and most interested in the possibility of cultural or social feedback between the Fence and the time-lapse video feed. How might these photographs and videos be controlled the way Jimi Hendrix controlled the feedback of his guitar?

To make this prototype I have made extensive use of the many Raspberry Pi time lapse builds out there.

These include:

www.fotosyn.com/simple-timelapse-camera-using-raspberry-pi-and-a-coffee-tin/

Precedents

Assignment,Precedent Analysis — ygk @ 1:55 am

“OpenControllers” by ECAL/Marc Dubois (2014)

OpenControllers ECAL/Marc Dubois from ECAL on Vimeo.

Open Controllers allows you to use your everyday device as a controller of a different kind by placing it within specifically designed artifacts that embed the device and utilize its capabilities, such as the gyroscope and light detection. This project takes your physical movements and interprets them. I think this could be done with a range of patterns, which gets into the next video…

“Tilt Brush” by Skillman & Hackett (2014)

This is a lot more similar to what I want to do, except that I am also interested in the painting becoming three-dimensionalized as you are painting. Not that I intend on doing that exact thing… I just mean to draw a parallel. The gestures here look swift and intuitive if it’s real, and the interface is quite honestly very confusing, but maybe that’s because I’m not aware of the gestures it uses. Assuming this is true, I think this is the future of design through tangible interaction.

“3Doodler” by Pete and Max (2013)

This project achieves a lot of what I am hoping to accomplish: the ability to prototype quickly and efficiently, without having to prepare ahead of time what will be made.

“3D Printed Blooming Flowers” by Mikaela Holmes (2014)

This kind of movement is exciting as a less mechanical way to actuate an artifact. I wonder how it could be applied at a more complex or responsive level.

I think these videos show the progression of my thoughts on the final project quite well. I want to be able to design something indirectly by using a tool, not unlike how surgeons now use robots to do the actual surgery. I am struggling to know what scope I am capable of and to come up with a good project idea to make it happen. Something I had thought about is using my Surface as a tool to communicate with a CNC robot, fabricating sketches drawn on the Surface in real time.
