For my final project, I am interested in working with the idea of serendipity and the kinds of interactions that can be built around it. The sensors we have access to, both in our cell phones and in the prototyping tools we are using (Raspberry Pi and Arduino), open up opportunities for facilitating happy accidents. We can use sensors to determine where people are, what they are doing, and how active they are. We can connect our devices to each other, exchange these data, and take actions based on them. So I want to focus on these data, the interactions between devices, and the ways serendipity can be prompted by them. Some examples I am looking at from different fields:
From Industry – Wakie, a social alarm clock: You set a time you want to wake up, and when that time comes, a random person logged into the app who wants to wake someone up at that moment will call you through the app. It works the other way too: any time you sign in, the app tells you how many people around the world would like to be woken up, and you can simply tap “wake someone up.”
From Art – Serendipity is a project developed by Kyle McDonald during a residency at Spotify. The project detects when two strangers anywhere in the world press play on the same song at the exact same moment, then shows their locations on a map. About once a second, the display changes to a new song simultaneously started in two places around the world.
From Academia – 20 Day Stranger: A project by the MIT Media Lab’s Playful Systems group that, for a period of 20 days, connected pairs of strangers around the world and had them share their day-to-day experience with each other.
Potential tech to emulate or use:
Estimote Beacons estimote.com/
OSC for data
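To give a sense of what exchanging these data over OSC could look like: a library such as python-osc would normally build the packets, but the message layout is simple enough to sketch by hand with the standard library. This is a minimal sketch; the address /phone/activity and the value are made up for illustration, and a real receiver would need to be listening on the chosen port.

```python
import socket
import struct

def osc_message(address, value):
    """Build a minimal OSC message carrying a single float argument."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Fire-and-forget over UDP to a hypothetical receiver on the local network
msg = osc_message("/phone/activity", 0.72)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

Because OSC rides on plain UDP, two phones or Pis on the same network can swap sensor readings like this with almost no setup, which suits the quick device-to-device exchanges described above.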
For the direction I want to head in for the final project, I’ve chosen wearables. Specifically, I want to explore emotion and how we can communicate emotions, feelings, and sensations through devices that are meant to be worn. Below are several examples from academia, industry, and art that illustrate these concepts.
Apple Watch (Industry):
The biggest news out of the commercial wearables market this fall was the Apple Watch. There are many cool aspects to the device, but the ones I find most inspiring are the sharing features shown in the video. Via the watch you can send drawings or messages to another watch, as well as your heartbeat. I find the latter a really interesting concept, since there is such a personal and affective connection to one’s heartbeat. Additionally, the device uses haptic feedback to respond to the user, which I think is critical for a wearable.
InTouch (Academia):
InTouch is a system that uses haptic feedback to give users a physical interaction with digital content. The system lets users manipulate 3D surfaces and meshes with haptic feedback, improving the experience of the application. What I like most about this research is the interaction between the digital and the physical: the developers use haptics to bridge the gap, translating the digital interaction into a more relatable, manipulable physical sensation.
Synaptic Traces (Art):
Finally, from the realm of fashion and wearable technology comes a project called Synaptic Traces. The frock is made of a beautiful galaxy-like print that is curled into strips and pieced together. Inside the garment are many LEDs that make it glow. When someone touches the garment, lights activate to represent the touch and persist longer than the sensation itself would. In essence, it holds the fleeting feelings and sensations we get from another person or object and keeps them a bit longer in digital form. Beyond the beautiful physical construction, I really liked the concept of preserving the intimate touches and sensations we feel when interacting with another person.
I should receive a Myo developer kit from Thalmic Labs this week. I’m interested in exploring the capabilities of this input device. Gestural controls have interested me for some time. Beyond their ability to let more natural motion control a computer, I am interested in the social and psychological implications of the added physicality they enable in our interaction with computers. In particular, Keeping Together in Time: Dance and Drill in Human History by William McNeill and Interaction Ritual Chains by Randall Collins come to mind. Both works discuss the role of rhythmic movement in creating solidarity in small groups.
1) Keeping Together in Time: Dance and Drill in Human History by William H. McNeill
2) Interaction Ritual Chains by Randall Collins (2004)
Book Review: www.cjsonline.ca/reviews/interactionritual.html
3) WorldKit by Chris Harrison: www.chrisharrison.net/index.php/Research/WorldKit
I am really interested in wearables, and I am intrigued by the work of Anouk Wipprecht, specifically her Intimacy dress, which uses smart e-foils that become transparent when you run a current through them.
I am also interested in projected images but I haven’t thoroughly thought through how to combine the electric foil with the projected image. Here is one of the projected images I found interesting:
Color Detector from Amy Friedman on Vimeo.
Concept: Our concept was to create an encoded-message device that outputs a secret message when the camera records the right colors in the correct sequence.
Process: First we needed to create a program that could use blob detection to identify one color. Our goal was three colors: red, black, and blue. Once we could detect one color, we used Python to detect the other two. We then developed the program not only to recognize a color but to record the sequence of colors; if the detected sequence matched the “secret code” of blue, black, red, the message was played. Finally, we created an mp3 file of the message we wanted and added it to the Raspberry Pi.
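The sequence-matching part of the process above can be sketched roughly like this. The unlock sequence mirrors our blue-black-red code, but classify_color is a deliberately simplified stand-in for the blob detection we ran on captured frames, and the RGB thresholds are made up for illustration:

```python
def classify_color(r, g, b):
    """Simplified stand-in for the blob detection step: map the
    dominant RGB reading of a captured frame to one of our colors.
    The thresholds here are illustrative, not the ones we tuned."""
    if r < 60 and g < 60 and b < 60:
        return "black"
    return "red" if r >= b else "blue"

# The "secret code" the camera has to see, in order
SECRET_CODE = ["blue", "black", "red"]

def check_sequence(frames):
    """frames: one (r, g, b) reading per captured image. Returns True
    when the detected sequence matches, i.e. when the Pi should play
    the mp3 message (e.g. via a call out to an audio player)."""
    return [classify_color(*rgb) for rgb in frames] == SECRET_CODE

print(check_sequence([(20, 40, 200), (10, 10, 10), (220, 30, 30)]))  # True
```

Keeping the classification and the sequence check separate made it easy to test the code logic without the camera attached.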
Lessons Learned: We learned how to use Python, how to import libraries, and the syntax needed to use the language effectively. We also learned how to capture images on a timer and output audio files on the Raspberry Pi, and how to detect color in captured images. Finally, we learned how to design forms for preexisting objects so they can be used in a new capacity.
We utilized these two websites to help with part of the coding:
Parro-Tweet from Dan Russo on Vimeo.
Parro-Tweet uses the AR.Drone hardware platform coupled with openFrameworks and the Twitter API. openFrameworks allows the drone to be controlled from multiple sources, including a remote laptop with a gaming controller or a Raspberry Pi with a custom control panel. The drone can be used to seek out photos and instantly post them to a Twitter feed.
The biggest challenge with this hardware platform was the latency of the video feed outside of its designated mobile app, most likely due to the compression the wifi connection uses. The platform’s proprietary nature made it difficult to find any documentation on how to fully utilize the system. The latency made it nearly impossible to run any computer vision or navigation through openFrameworks; however, all manual controls and the Twitter API are fully functional.
Academia: “Designing Interfaces for Children with motor impairment” by Marcela Bonilla, Sebastián Marichal, Gustavo Armagno, Tomás Laurenz (2010)
I found this article helpful because I want computers to be part of the solution in my final project. I am unsure how this will occur, whether as part of the research into user interfaces or as part of the final product. The research team designed software specifically for people with motor impairments; in this study they worked with children in Uruguay who have cerebral palsy. The article states that “According to the teachers, the screen layout of this activity is too compact and the controls are not enough different one from the other” (p. 249). Feedback like this will be important to the device’s success as I refine the prototype.
Industry: “Magic Arms” by Tariq Rahman and Nemours/Alfred I. duPont Hospital for Children (2012)
Emma has arthrogryposis multiplex congenita, a disorder in which the joints develop abnormally. The technology used to create the customized device was a Stratasys 3D printer. The Wilmington Robotic Exoskeleton had already been created by Nemours Biomedical Research, but not at an adequate size for a child. The researchers made casts and molds of the child to understand her dimensions and how the device could be adapted for a small child. I would like to use customizable 3D printing to create a device that helps those with mobility issues. User-centered research is another important aspect of the video I would like to adopt: without patient feedback I won’t be able to create a product with a purpose.
Art: “Third Hand” by Stelarc (1980)
Ars Electronica 1992 – Stelarc “The Third Hand” from ars history on Vimeo.
Stelarc Third Hand
Stelarc created this performance piece as an extension of his body and performed with it from 1980 until 1998, traveling to Japan, the USA, Australia, and Europe. According to Stelarc’s website (found above), “The Third Hand has come to stand for a body of work that explored intimate interface of technology and prosthetic augmentation- not as a replacement but rather as an addition to the body.” I found this piece intriguing because it uses mechanics to create a third arm. For those who are vision- or motor-impaired, an extension of the body can let them manipulate objects in ways they previously could not. I think using the idea of addition, as stated on Stelarc’s website, allows for an integrative experience: you aren’t trying to fix what already exists, but rather adding to what you have.
I will be working towards developing a product that will aid users with motor control issues. The most important aspect of this project is finding a participant to help me throughout the process; I don’t want to create a project without user research. My goal is to have a working prototype to show in December that outlines the journey of adapting the device to the user’s needs. I have already received the Frank-Ratchye Fund for Art @ the Frontier grant from the Studio for Creative Inquiry to hack a 3D printer to understand the benefits of rapid prototyping and how we can create customizable parts.
I am prototyping a time lapse camera and real time video feed.
I am interested in the implications of showing these real time videos in proximity to the place photographed.
I am particularly interested in pursuing further prototypes around the phenomenon at CMU known as the fence.
The fence is already a stage on which CMU student organizations perform in front of the school. I am interested in capturing these performances over an entire year, and most interested in the possibility of cultural or social feedback between the fence and the time lapse video feed. How might these photographs and videos be controlled the way Jimi Hendrix controlled the feedback of his guitar?
To make this prototype I have made extensive use of the many Raspberry Pi time lapse builds out there.
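Most of those Raspberry Pi time lapse builds boil down to a capture loop like the following. This is a minimal sketch assuming the stock raspistill camera tool; the fence- filename scheme and the one-minute interval are my own placeholders:

```python
import subprocess
import time
from datetime import datetime

INTERVAL_S = 60  # one frame per minute; tune to the pace of fence activity

def frame_name(now):
    """Timestamped filename so frames sort chronologically on disk."""
    return now.strftime("fence-%Y%m%d-%H%M%S.jpg")

def capture_loop():
    """Capture one still per interval, forever."""
    while True:
        # raspistill is the stock Raspberry Pi camera CLI; -o sets the output file
        subprocess.run(["raspistill", "-o", frame_name(datetime.now())])
        time.sleep(INTERVAL_S)
```

Because each frame lands on disk as soon as it is taken, the same folder can feed both the compiled time lapse and the real time view, which is what distinguishes this setup from compile-at-end-of-day products.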
Industry: Time lapse photography is often used by high-end developers and contractors to document a project, which can then be shown to future or current clients.
Two interesting commercial venues for time lapse are construction time lapse cameras such as Brinno’s and time lapse services such as The Time-Lapse Company’s. The former caters to mid-range contractors while the latter targets high-end developers.
The scope and subject matter of the projects documented with construction cameras is fascinating; however, there is little or no interaction between the time lapse video and the construction project itself. Brinno also advertises “instant video,” but instant here only means that each day’s photographs are automatically compiled into a video. I am interested in real-time updates and compilation, which opens up the possibility of interacting with the time lapse video, perhaps on a time scale that people don’t usually associate with video cameras.
Art: The avant-garde theater troupe The Wooster Group plays with the kind of back and forth between recorded and live video that I am interested in. In this section from Hamlet, the actors recreate a film of Hamlet, every movement of which they have memorized, while a camera is trained on them. This makes the viewer think about memory, time, and their interaction through video. The sort of live video feed I am interested in creating would raise these same issues, but the viewer would get a chance not just to observe, but to interact with the recording.
Academia: Another aspect of a live time lapse feed would be the ability to interact and see the results of one’s actions over a long period of time. This is something The Long Now Foundation is trying to do: shift our temporal focus from the short to the long term. It’s an interesting problem when so much media today focuses on increasingly short time intervals and ever-faster cuts.