Teletickle Concept Video

Assignment,Final Project — alanhp @ 1:59 pm

password: teletickle

RPi-Windows troubleshooting

I recently got tangled up in Raspberry Pi–Windows connectivity problems again, so I decided to solve them once and for all.
I am on Windows 7 and ran all of the following steps as administrator.

Here is the ultimate fix.

1. Download DHCP Server for Windows here: www.dhcpserver.de/cms/download/

2. Unzip and install dhcpserver.exe

3. Go to the properties of your Local Area Connection and assign a static IP, for example 192.168.2.1, with a subnet mask such as 255.255.255.0.

4. Run dhcpwiz.exe from the downloaded folder.

5. Select the Local Area Connection. It should say ‘Disabled’. Hit Next.

6. Do not change anything on this screen; hit Next.

7. Enter an IP pool range here (highlighted in the screenshot), for example 100–110.

8. Check the “Overwrite existing file” box and write the configuration file. You should see “INI file written” after this step. Hit Next/Finish.

9. You should now see the status as “Running” here. If not, hit Admin.

10. That’s it, you are done. On the next page of the interface, enable the option “Continue running as tray app”.

Now boot up your RPi; its inet address should be in the range you assigned in the DHCP server.
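To double-check from the Pi’s side, here is a small sketch of mine (not part of the DHCP Server package; it assumes the Windows host kept the static address 192.168.2.1 from step 3) that prints the address the Pi was leased:

```python
# Run on the Pi: print the address the DHCP server handed out.
# 192.168.2.1 is the static Windows-side address from step 3.
import socket

def local_ip(gateway="192.168.2.1"):
    """Return the Pi's own IP on the link toward the gateway."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((gateway, 80))  # UDP "connect" sends no packets
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == "__main__":
    try:
        print(local_ip())  # should land in the pool from step 7, e.g. .100-.110
    except OSError:
        print("no route yet - is the Ethernet cable plugged in?")
```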

Hope this helps future MTI folks with Windows machines.

Similar video here

507 Mechanical Movements

Hardware,Reference — epicjefferson @ 3:20 pm


I found out that the book is now in the public domain, so it’s free to download:

507 Mechanical Movements PDF

And here’s the site that has animated some of the movements.

507movements.com/

Live Time Lapse 2

Uncategorized — jk @ 4:41 am

For the second iteration of my Live Time Lapse prototype I have independently developed four aspects of this project:

1, Camera Operation

2, Uploading Photos

3, Creating Video

4, Mobile App

I have approached this project with the premise that other people know how to do all the above better than I could; I saw my job as researching what others had done. I identified requirements for each individual aspect of my project and identified programs and related hardware that would fulfill these requirements.

1, Camera Operations:

I wanted a camera that was cheap, easily configurable, and had good image quality. A Raspberry Pi coupled with a RaspiCam fulfilled these requirements: the 5-megapixel images are fine for time lapse, the camera settings are adjustable, and there is a lot of code already written for these cameras. The most interesting code I found was from James Welling of fotosyn, a photo developer who has also created a variety of interesting apps. There is a great blog post about the time-lapse camera he developed, and I used his code to initiate my time-lapse sequence.
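The embedded snippet didn’t survive the archive, but the core of a raspistill-driven loop looks roughly like this (my sketch of the idea, not Welling’s exact raspiLapseCam script; the output path and interval are assumptions):

```python
# Sketch of a raspistill time-lapse loop. Assumes raspistill is on the
# PATH and the output directory exists; this mirrors the idea of
# raspiLapseCam, not its exact code.
import subprocess
import time
from datetime import datetime

def frame_name(prefix="frame", when=None):
    """Timestamped filename so frames sort chronologically."""
    when = when or datetime.now()
    return "{}_{}.jpg".format(prefix, when.strftime("%Y%m%d_%H%M%S"))

def shoot(path):
    """Capture one frame; -t 500 gives the sensor time to adjust."""
    subprocess.call(["raspistill", "-w", "1296", "-h", "972",
                     "-t", "500", "-o", path])

def run(interval=60, outdir="/home/pi/timelapse"):
    """Call run() on the Pi to take one frame per minute, forever."""
    while True:
        shoot(outdir + "/" + frame_name())
        time.sleep(interval)
```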

This code can be downloaded here.

2, Uploading Photos

I used an amazing piece of code called Dropbox Uploader on the Raspberry Pi to upload images to Dropbox. It took a while to figure out how to use, given my limited knowledge of Python, but eventually I was able to get it to work. One Instructables post really helped with this, and another post pointed me in the right direction toward writing a Python script to add to the raspiLapseCam script mentioned above.

Here’s the code to make the Dropbox Uploader work.

This piece of the code indicates the directory whose images are uploaded.

This piece below indicates the directory which is created in the App portion of your Dropbox folder (which you create when installing Dropbox Uploader on your Raspberry Pi).

In addition, there is another option I am still trying to use. In the Dropbox Uploader GitHub instructions, under Optional Parameters, Andrea Fabrizi (the developer) indicates that you can use the “-s” flag to skip uploading images that already exist in Dropbox. I don’t know how to use that option yet; if anyone knows, that would be great!
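For reference, here is how the script could be called from Python via subprocess, as a sketch under my assumptions: the clone path is hypothetical, and if I read the README right, “-s” is simply an extra flag placed before the upload command.

```python
# Sketch: calling Andrea Fabrizi's dropbox_uploader.sh from Python.
# The UPLOADER path is an assumption; adjust to where you cloned it.
import subprocess

UPLOADER = "/home/pi/Dropbox-Uploader/dropbox_uploader.sh"

def upload_cmd(local_dir, remote_dir, skip_existing=True):
    """Build the command; -s (per the README) skips files that
    already exist in Dropbox instead of re-uploading them."""
    cmd = [UPLOADER]
    if skip_existing:
        cmd.append("-s")
    cmd += ["upload", local_dir, remote_dir]
    return cmd

def upload(local_dir="/home/pi/timelapse", remote_dir="timelapse"):
    """Run the uploader and return its exit code."""
    return subprocess.call(upload_cmd(local_dir, remote_dir))
```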

3, Creating Video

I identified Max/MSP/Jitter as a program I had some familiarity with, which also had a lot of documentation, and for which someone, Gian Pablo Villamil, had already created a patch I could use. The time-lapse looping is now working, but it is not pointed at the correct Dropbox folder yet. This should be a simple fix.

What won’t be as simple is perhaps translating this patch to Pure Data so it can work directly on a Raspberry Pi.

4, Mobile App

Raspberry Pis are endlessly configurable, but their user interface is terrible. It would be awesome to operate a camera and time lapse directly from a mobile app. This is something that fotosyn is working on. I was able to install their BerryCam Express and get it to work from my iPhone; you can use their BerryCam app to take pictures remotely from your phone. They are also promising time-lapse functionality, which would be nice. I found their app worked really well.

The Mobile App isn’t necessary for my final project, but it is exciting to experiment with.

Here’s a video:  vimeo.com/109430407

Sonification of Astronomical Data

Uncategorized — priyaganadas @ 9:29 pm

I have been going through different types of astronomical datasets lately. One of the interesting directions is to sonify astronomical data, and this is what I have decided to pursue. The following project takes in deep-field data captured by the Canada-France-Hawaii Telescope over four years. Based on the brightness, duration, and distance from Earth of the supernovae that were captured, piano notes are played to create a sonata.
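As a toy illustration of that mapping idea (my own simplification, not the CFHT project’s actual code), brightness can be scaled onto the piano’s range and distance onto note length:

```python
# Toy sonification mapping: brightness -> pitch, distance -> duration.
# The ranges and scales here are made up for illustration.
def to_midi_note(brightness, lo=21, hi=108):
    """Map a 0.0-1.0 brightness onto the 88-key piano (MIDI 21-108)."""
    b = min(max(brightness, 0.0), 1.0)
    return lo + round(b * (hi - lo))

def to_duration(distance_mpc, base=2.0):
    """Let more distant events ring longer, in seconds."""
    return base + distance_mpc / 1000.0
```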

The internet radio station CRaTER plays a live cosmic ray count as music.

read more about CRaTER

Resources
Datasets from NASA

Large Hadron Collider (LHC) sounds
datasonification.tumblr.com/

Tracking the ISS

Software,Technique — epicjefferson @ 1:32 pm

Searching for an astronomy-related API, I found this neat site called “Where is the ISS at?” They have an API:

wheretheiss.at/w/developer

Since there is no sample code, I modified the kimonolabs sample code for Python.

Then you get back something like this.
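My modified snippet isn’t embedded here, but a minimal stdlib-only version might look like this (a sketch; the endpoint path is taken from the API docs, 25544 being the ISS’s NORAD catalog number):

```python
# Minimal wheretheiss.at client (my reconstruction, not the original
# kimonolabs-derived script).
import json
from urllib.request import urlopen

API = "https://api.wheretheiss.at/v1/satellites/25544"  # 25544 = ISS

def fetch(url=API):
    """Fetch the ISS's current position as a dict."""
    with urlopen(url) as resp:
        return json.load(resp)

def describe(sat):
    """Summarize the interesting fields from the JSON response."""
    return "{name}: lat {latitude:.2f}, lon {longitude:.2f}, alt {altitude:.0f} km".format(**sat)
```

Calling `print(describe(fetch()))` gives a one-line position report.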

Sweet.

 

Piezo Sensor Prototype

Uncategorized — jk @ 12:17 am

Recently I completed a collaboration with Dakotah Konicek for the Tough Art artist residency at The Children’s Museum of Pittsburgh. In conjunction with this project, for the MTI class I attempted to make a piezo sensor that could be used both to sense sound and to activate a pump. This video documents that project in part, as well as the piezo microphones I built for it and my attempt at building an amplifier whose signal could be read by an Arduino and output as sound.


 

Above is the final piece at The Children’s Museum.

 

Here’s excellent advice on building Piezo Preamps on Alex Rice’s website.

Sound Studies (Precedent Analysis)

Uncategorized — jk @ 12:05 am

Over the past couple of years I’ve become more and more interested in sound and what sound can do. It took me a while to come around to sound, but the first time I really remember being struck by it was at a noise music concert a friend of mine put on, in which, with gleeful transgression, a group of punks blew away Burnside Avenue in Portland, Oregon on a Saturday afternoon. Incidentally, the orchestrator of that event has recently started a deconstructionist film blog, Talking at The Movies, with the tagline: “Spoiler Alert: Meaning is an Artifact of Creation.”

Three other experiences with sound stand out. The first was “The Forty Part Motet (A reworking of ‘Spem in Alium’ by Thomas Tallis, 1573)” by Janet Cardiff and George Bures Miller at The Cloisters in Manhattan. This piece was truly transformative. You could walk around an amazing chapel, transported from Spain, and hear each individual voice on its own speaker. It really got you to think about sound; I believe sound is something we have difficulty focusing on, and this piece allowed a modern audience to focus on Thomas Tallis’ truly amazing composition.

 

Recently I’ve become fascinated with John Luther Adams’ compositions, such as Inuksuit, which combine the avant-garde tradition of early-20th-century percussion-oriented composition with the radical site-specificity of Stockhausen’s “Helicopter String Quartet.” John Luther Adams, however, pairs the radicalness of these gestures with truly relatable sounds, in the case of Inuksuit those of the Arctic. He studies these sounds and reinterprets them for orchestra so we can hear them again.

 

Composer Nico Muhly has done the same thing with the traditional folk song “Oh, The Wind and Rain.” He has essentially deconstructed the song into separate parts, and over the course of a three-part composition (one part of which is in the video above) the song is rebuilt. More about the song and the composition can be found on Nico Muhly’s website.

The commonality I have found between these compositions is the way sound is used to call attention to something that already exists but that we pass over and don’t really hear.

 

Project01: iVolume — a volume-controlled RPi Radio

Assignment,Project01 — pvirasat @ 11:21 pm

iVolume is a device that allows you to listen to your favorite radio station on Pandora, where the volume of the music is controlled based on how loud or quiet the surrounding environment is. As someone who loves listening to music on Pandora, I thought this would be an interesting way to use the input from the microphone, and a productive way to get my hands dirty with the Raspberry Pi.


The video above illustrates the basic workings of the device. The electret microphone amplifier senses the crowd noise; the data is interpreted and sent to the Raspberry Pi over serial. Pianobar, an open-source, console-based client for Pandora, comes with many features, including playing, managing, and rating stations. For this project, I used the input data to manipulate the volume of the music. In the video I plugged a set of speakers into the Raspberry Pi to show how the device works; a more ideal setup would use headphones instead, because the microphone picks up the loud sound from the speakers, which eventually makes the volume go up and never come down.
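The gist of the control loop can be sketched like this (the serial port, FIFO path, and thresholds are my assumptions, and pyserial is required; pianobar’s default keybindings use “)” for volume up and “(” for volume down, written to its control FIFO):

```python
# Sketch: map mic readings from the Arduino to Pianobar volume nudges.
# Port, FIFO path, and thresholds are assumptions for illustration.
CTL = "/home/pi/.config/pianobar/ctl"  # pianobar's control FIFO

def volume_step(noise, quiet=200, loud=600):
    """Map a raw 10-bit mic reading to a pianobar volume command."""
    if noise > loud:
        return ")"   # crowd is loud -> turn the music up
    if noise < quiet:
        return "("   # room is quiet -> turn the music down
    return None      # dead band: leave the volume alone

def run(port="/dev/ttyACM0"):
    import serial  # pyserial; imported here so volume_step stays testable
    ard = serial.Serial(port, 9600)
    while True:
        step = volume_step(int(ard.readline().strip()))
        if step:
            with open(CTL, "w") as ctl:
                ctl.write(step)
```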

Even though there are quite a number of tutorials for Pianobar + Raspberry Pi, I learned a lot working on this project. This includes figuring out how to manually install the libraries onto the Raspberry Pi, working in terminal, sending data from an Arduino to a RPi, and coding in Python. Looking forward, I plan to create a more durable and portable prototype, and possibly mix in other types of inputs to manage the different stations.

 

Demo of Color Detector

Assignment,Submission,Technique — priyaganadas @ 10:37 am

Here is the video of how the Spy Object works.

The GitHub repository is here.

When the program runs, the camera takes three consecutive photographs. Each image is scanned to determine the dominant color channel of every pixel, and each pixel is reduced to that dominant color (red, green, or blue). The entire image is then scanned to determine the dominant color of the whole image, which is printed out. If all three images follow a predefined color sequence, an audio file is played; if the sequence does not match, the program returns nothing.
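The per-pixel and per-image steps can be sketched like this (my reconstruction; the actual code lives in the linked repository):

```python
# Reduce each pixel to its dominant channel, then vote across the image.
from collections import Counter

def dominant_channel(pixel):
    """Return 'red', 'green', or 'blue' for an (r, g, b) pixel."""
    r, g, b = pixel
    channels = {"red": r, "green": g, "blue": b}
    return max(channels, key=channels.get)

def dominant_color(pixels):
    """Dominant color of an image = most common per-pixel winner."""
    return Counter(dominant_channel(p) for p in pixels).most_common(1)[0][0]

def matches_sequence(images, sequence=("red", "green", "blue")):
    """True when the three captures follow the predefined sequence."""
    return tuple(dominant_color(img) for img in images) == tuple(sequence)
```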

Lessons Learned
1. The original idea was to do face detection on the RPi. We couldn’t find much precedent for that, and processing on the RPi makes it very slow. The only method we found is to create a database of a predetermined face (say, 9 different expressions and angles) and train the Pi to detect the face using this database. This method is not scalable: if more faces are to be detected, a larger database has to be built, which the Pi cannot handle.
2. We reduced the size of the image (160×120 pixels) to decrease processing time; processing time is very high for images larger than that.
3. Color detection is not very accurate; we don’t know whether it is the lighting, reflections, or the camera. The camera can detect the dominant color of a pixel (orange through pink are read as red, and similarly for blue and green), but differentiating between three closely related colors proved difficult. A possible solution would be to print the RGB values for a colored object and then manually determine a detection range.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2020 Making Things Interactive | powered by WordPress with Barecity