LAYERd

Assignment,Final Project,Hardware,Software — Tags: — John Mars @ 10:45 pm

LAYERd is a multi-layer display made from off-the-shelf computer monitors. It allows for glasses-free 3D as well as a novel way to envision user interfaces.

Every LCD screen on the planet is made of two main parts: a transparent assembly (made of laminated glass, liquid crystal, and polarizing filters) and a backlight. Without the backlight, an LCD monitor is actually completely transparent wherever a white pixel is drawn.

LAYERd uses three of these LCD assemblies illuminated by a single backlight to create a screen with real depth: objects drawn on the front display are physically in front of objects drawn on the rear ones.

My work is mainly focused on the potential UI uses for such a display: what can one do with discrete physical layers in space?

PROCESS

The process began with disassembling the three monitors. Having destroyed two cheaper ones with static electricity before the project began in earnest, I was very careful to keep the delicate electronics grounded at all times: I worked on top of an anti-static mat and used an anti-static wristband whenever possible.



With some careful prying, the whole thing came undone.



Here, you can see how the glass panel is transparent, and how the backlight illuminates it.

After disassembly came the design, laser cutting, and assembly of the frames and display.

Finally, the finished product.

It uses three networked Raspberry Pis to keep everything in sync, as well as the power supplies/drivers from the disassembled monitors.
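The post doesn't spell out how the sync actually works, so here is a minimal sketch of one way it could, assuming a plain UDP broadcast: one Pi acts as the clock and broadcasts a frame number, and the other two draw their layer's content when the tick arrives. The port number and the `run_master`/`run_layer` names are made up for illustration.

```python
import socket
import sys
import time

PORT = 9000  # assumed; not documented in the post

def run_master(fps=30):
    """Broadcast a frame number so every layer draws the same frame."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    frame = 0
    while True:
        sock.sendto(str(frame).encode(), ("255.255.255.255", PORT))
        frame += 1
        time.sleep(1 / fps)

def run_layer():
    """Wait for ticks and render this layer's content for that frame."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _ = sock.recvfrom(64)
        print(f"draw frame {int(data)}")  # stand-in for the real per-layer rendering

if __name__ == "__main__":
    run_master() if "master" in sys.argv else run_layer()
```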

LESSONS LEARNED

I learned a lot about polarization: mainly that about half of the polarizing films needed to be removed in order for light to pass through the full assembly. Plus, this cool little trick with polarized light.

I also learned about safety and about how monitors work. Alas! Disaster struck: I accidentally cut one of the ribbon cables while disassembling a monitor, which resulted in a single vertical stripe of dead pixels. On top of that, the front display got smashed a little on the way to the gallery show, leaving a black splotch.

RELATED WORK

One main body of research highly influenced my design and concept: the work being done at MIT Media Lab’s Camera Culture Group, notably their research in Compressive Light Field Photography, Polarization Fields, and Tensor Displays.

Their work uses a display assembly similar to mine, but focuses mainly on producing glasses-free 3D imagery using light fields and directional backlighting.

A few other groups have also done work in this field, notably Apple Inc., which holds a few related patents: one for a Multilayer Display Device and one for a Multi-Dimensional Desktop environment.

On the industry side of things is PureDepth®, a company that produces MLDs® (Multi Layer Displays™). There isn’t much information in the popular media about them or their products, but it seems like they have a large number of patents and trademarks in the realm (over 90), and mainly produce their two-panel displays for slot and pachinko machines.

Another project, from CMU, is the Multi-Layered Display with Water Drops, which uses precisely synced water droplets and a projector to illuminate the “screens”.

REFERENCES

Chaudhri, I A, J O Louch, C Hynes, T W Bumgarner, and E S Peyton. 2014. “Multi-Dimensional Desktop.” Google Patents. www.google.com/patents/US8745535.

Lanman, Douglas, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar. 2011. “Polarization Fields.” Proceedings of SIGGRAPH Asia 2011 (SA ’11) 30 (6). New York: ACM Press. doi:10.1145/2024156.2024220.

Prema, Vijay, Gary Roberts, and BC Wuensche. 2006. “3D Visualisation Techniques for Multi-Layer DisplayTM Technology.” IVCNZ, 1–6. www.cs.auckland.ac.nz/~burkhard/Publications/IVCNZ06_PremaRobertsWuensche.pdf.

Wetzstein, Gordon, Douglas Lanman, Wolfgang Heidrich, and Ramesh Raskar. 2011. “Layered 3D.” ACM SIGGRAPH 2011 Papers (SIGGRAPH ’11). New York: ACM Press. doi:10.1145/1964921.1964990.

Wetzstein, Gordon, Douglas Lanman, Matthew Hirsch, and Ramesh Raskar. 2012. “Tensor Displays: Compressive Light Field Synthesis Using Multilayer Displays with Directional Backlighting.” ACM Transactions on Graphics. alumni.media.mit.edu/~dlanman/research/compressivedisplays/papers/Tensor_Displays.pdf.

Barnum, Peter C, and Srinivasa G Narasimhan. 2007. “A Multi-Layered Display with Water Drops.” www.cs.cmu.edu/~ILIM/projects/IL/waterDisplay2/papers/barnum10multi.pdf.

Lanman, Douglas, Matthew Hirsch, Yunhee Kim, and Ramesh Raskar. 2010. “Content-Adaptive Parallax Barriers.” ACM SIGGRAPH Asia 2010 Papers (SIGGRAPH Asia ’10) 29 (6). New York: ACM Press. doi:10.1145/1866158.1866164.

Mahowald, P H. 2011. “Multilayer Display Device.” Google Patents. www.google.com/patents/US20110175902.

Marwah, Kshitij, Gordon Wetzstein, Yosuke Bando, and Ramesh Raskar. 2013. “Compressive Light Field Photography Using Overcomplete Dictionaries and Optimized Projections.” ACM Transactions on Graphics 32 (4): 1. doi:10.1145/2461912.2461914.

This project was supported in part by funding from the Carnegie Mellon University Frank-Ratchye Fund For Art @ the Frontier.

Augmented Windows

Precedent Analysis — Tags: — John Mars @ 5:25 pm

INDUSTRY: SAMSUNG TRANSPARENT LCD

In 2012, Samsung debuted a transparent LCD. Where typical LCDs use an artificial backlight to shine through their pixels, the transparent display uses the sun (and artificial edge-lighting at night). The display also includes a touchscreen component.

Samsung previewed their technology as a 1:1 desktop monitor replacement, but I see value in using the technology more inventively: as an augmented reality window, where the display builds upon what’s visible in the outside world.

ART: SONY PLAYSTATION VIDEO STORE

Memo Akten (who I just realized authored the ofxARDrone library I used in the last project) created a series of videos for the launch of the Sony PlayStation Video Store. The videos use a live projection-mapping technique where the content is mapped according to the perspective created by the camera’s position and angle in space.

If you look through a window at a nearby object, draw a circle on the glass around it, and then move your head, the circle no longer lines up with the object. With head- or eye-tracking, the perspective projection could change along with your viewpoint, so the circle would always stay around the object.
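As a rough sketch of that geometry (my own illustration, not taken from Akten's setup): treat the window as the plane z = 0, the tracked eye at z > 0 inside the room, and the object at z < 0 outside. The circle belongs wherever the eye-to-object ray crosses the glass.

```python
def annotation_on_window(eye, obj):
    """Return (x, y) on the glass where the eye->object line hits the plane z = 0."""
    ex, ey, ez = eye
    px, py, pz = obj
    t = ez / (ez - pz)            # fraction of the way from eye to object at z = 0
    return (ex + t * (px - ex),
            ey + t * (py - ey))

# Eye 0.5 m inside the window at standing height; object 3 m outside and 1 m to the right.
print(annotation_on_window((0.0, 1.6, 0.5), (1.0, 1.0, -3.0)))
```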

ACADEMIA: MULTI-VIEW AUTOSTEREOSCOPIC 3D DISPLAYS

What happens if two (or more) people are looking out the window? A conventional display can’t show different images for each person. Alternatives to make that happen would be active- (or maybe even passive-, if you’re really good) shuttered glasses, as used with 3D TVs, or a lenticular display. A lenticular display is one that uses lenticular film — you know, these things:

This 1999 paper¹ by N. A. Dodgson et al., along with his 2011 workshop, discusses a way to create a glasses-free 3D display using lenticular arrays. In addition to its 3D uses, such a display could also be used for “two-view, head-tracked displays; and multi-view displays”. They're not talking about displaying a unique perspective per viewer, but about delivering correct stereoscopic images to each eye of each viewer; my use should be much simpler.

OUTLET

I definitely want to try to get my project picked up by a media outlet — a technology/design blog, for example. A little boost of fame would be most appreciated. And, heck, if it turns out to be really cool, unique technology (which it might be — I’m not finding all that much in academia), maybe even a paper submission would be in order.


  1. N. A. Dodgson, J. R. Moore, S. R. Lang. 1999. “Multi-View Autostereoscopic 3D Display.” International Broadcasting Convention 99. www.loreti.it/Download/PDF/3D/IBC99-Dodgson.pdf.

OpenFrameworks Awesomeness: TRON Legacy

Software,Technique — Tags: — John Mars @ 9:47 pm

I remembered this but am not using it for my precedents; I thought I'd share it anyway (read his post, don't just watch the above video):

Tron Legacy by Josh Nimoy

Streaming live video with ffmpeg/ffplay

Software,Technique — Tags: — John Mars @ 9:33 am

FFmpeg is a multi-platform command-line application for streaming, recording, converting, and saving video and audio. It includes a variety of related programs, including FFplay, which displays video in a window (vanilla FFmpeg on its own does not).

First, install it on your device.

Either build from source, install via your favorite package manager (apt-get, aptitude, brew, etc.), or download a pre-compiled binary (Windows, Mac).

Second, find a stream.

For our project, the ARDrone was streaming video in a custom variant (PaVE) of regular H.264, over TCP port 5555 from 192.168.1.1.

Third, ffplay.

In our case, this easy little command did the work (your case will undoubtedly be different):

    ffplay tcp://192.168.1.1:5555

It has four parts. On the left is `ffplay`, the application that does all of the work. `tcp://` tells FFplay that we will be using the TCP protocol (as opposed to http, ftp, udp, etc.). `192.168.1.1` is the IP address of the server, and `:5555` is the port we will be looking at.

Put it all together, and video should flow like a slightly-laggy waterfall (FFplay has some latency issues).

As always, read through all of the documentation and fully understand what you’re trying to accomplish and how you’re going to accomplish it, instead of randomly replacing values.

The Floor has a Voice

Assignment,Hardware,Project01,Software — Tags: — John Mars @ 12:58 am

I often find myself humming some made-up tune to the gentle whir of a room’s machinery in the background of my consciousness. What would happen if that whir became more pronounced, and the room started singing its own tune?

To accomplish this, I must do a few things:

1. Pick up the noise in a room with a microphone (the kind of which is undetermined)
2. Analyze the sound to determine the room’s base frequency. Continue analyzing that sound to determine if/when that frequency changes.
3. Create a never-ending tune based upon the base frequency.
4. Send that tune into the room as unobtrusively as possible, to make it seem like the room itself is singing.

Mic

1. Pick up the noise in a room with a microphone (the kind of which is undetermined)

An [electret mic](https://en.wikipedia.org/wiki/Electret_microphone) is my microphone of choice in this case. The one I’m using from [Adafruit](https://www.adafruit.com/products/1063) is pretty good, and very easy to use. Sound has always been this mystical, mysterious thing, but over the past year or so, it’s all coming together – and it’s all a lot simpler than I was expecting.

2. Analyze the sound to determine the room’s base frequency. Continue analyzing that sound to determine if/when that frequency changes.

An FFT algorithm computes the amplitude of every frequency in the sound picked up by the microphone. The one I'm using splits the audible range into 64 bins of 75 Hz each.
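Here is a minimal sketch of that analysis step, assuming NumPy and a mono buffer of mic samples. The 9.6 kHz sample rate and 128-sample window are my reconstruction from the 64 × 75 Hz figure above, not values taken from the actual code.

```python
import numpy as np

SAMPLE_RATE = 9600   # Hz (assumed: 128 samples at 9.6 kHz -> 64 bins, 75 Hz wide)
WINDOW = 128

def base_frequency(samples):
    """Return the centre frequency of the loudest bin in one window of mic samples."""
    frame = samples[:WINDOW] * np.hanning(WINDOW)   # taper the frame to reduce leakage
    mags = np.abs(np.fft.rfft(frame))
    mags[0] = 0.0                                   # ignore the DC bin
    return int(np.argmax(mags)) * SAMPLE_RATE / WINDOW

if __name__ == "__main__":
    t = np.arange(WINDOW) / SAMPLE_RATE
    hum = np.sin(2 * np.pi * 300 * t)               # pretend the room hums at 300 Hz
    print(base_frequency(hum))                      # -> 300.0
```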

3. Create a never-ending tune based upon the base frequency.

Via [OSC](https://en.wikipedia.org/wiki/Open_Sound_Control), I can send the FFT-derived base frequency to a Raspberry Pi running [Pd-extended](http://puredata.info/downloads/pd-extended). With Pd, tone generation is as simple as connecting a few nodes, and song generation is just a little bit more complicated than that.
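For the hand-off itself, here is a hedged sketch using the python-osc package (an assumption; the post doesn't name an OSC library). The address `/room/freq`, the Pi's IP, and the port are placeholders; the Pd patch would listen on the matching port with its OSC objects.

```python
from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("192.168.1.50", 9001)   # hypothetical Pi address and port

def send_base_frequency(freq_hz):
    # Pd picks this up and retunes the running patch to the new base frequency
    pd.send_message("/room/freq", float(freq_hz))
```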

Multiplying a frequency by specific whole-number ratios produces natural harmonies: for example, the base frequency times five-fourths gives a Major Third above the base; times fifteen-eighths gives a Major Seventh.

Using this knowledge in combination with a basic chord progression and a little randomness, I can create a never-ending song that perpetually realigns itself to the incoming frequency.
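As a toy illustration of those ratios (standing in for the actual Pd patch, not a copy of it), picking random chord tones above whatever base frequency the FFT reports:

```python
import random

# Just-intonation ratios named in the text, plus the perfect fifth (my addition)
RATIOS = {
    "root": 1.0,
    "major third": 5 / 4,
    "perfect fifth": 3 / 2,
    "major seventh": 15 / 8,
}

def next_note(base_hz):
    """Pick a random chord tone above the detected base frequency."""
    name, ratio = random.choice(list(RATIOS.items()))
    return name, base_hz * ratio

# e.g. for a 220 Hz hum: the major third lands at 275 Hz, the major seventh at 412.5 Hz
print(next_note(220.0))
```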

4. Send that tune into the room as unobtrusively as possible, to make it seem like the room itself is singing.

There isn’t much to show here, and that’s kind of the whole point. I’ve embedded my system into the ventilation vents in the floor below. A [surface transducer](https://www.adafruit.com/products/1784) (speaker without a cone) transfers the amplified music to highly reverberant metal air ducts.

Boards

Transducer

“The Visual Microphone” by Abe Davis, et al. @ MIT (2014)

Assignment,Precedent Analysis,Project01 — Tags: — John Mars @ 4:21 am

Extracting sound information from minute vibrations of objects picked up by high-speed video.

More…

“RjDj” by Reality Jockey Ltd. (story told by Roman Mars @ 99% Invisible) (2010 – Present)

Assignment,Precedent Analysis,Project01 — Tags: — John Mars @ 4:08 am

Context-aware, reactive, augmented reality music platform being embedded into a series of apps.

More…

“Ishin-Den-Shin” by Olivier Bau, Ivan Poupyrev, and Yuri Suzuki @ Disney Research (2013)

Assignment,Precedent Analysis,Project01 — Tags: — John Mars @ 3:48 am

Using the human body as a speaker, Ishin-Den-Shin converts and amplifies sound spoken into a microphone and passes it to the listener through a touch of the finger.

More…

Project00 – Background – John Mars

Background — Tags: — John Mars @ 12:17 am

John Mars, at your service. I'm a first-year grad student here in the MTID program, having just graduated from the architecture program at RISD.

The future lies not in augmenting ourselves with technology, but in collaborating with its new intelligence. I want to give that intelligence to the things that pervade our lives.

In general, all of my works are available at john-mars.com. Of relevance to this class are Interface, Computing Drawing, WebSounds/piano, Block Island, Shells, and Jitterbug. Plus, here’s a fun little YouTube playlist of examples for your enjoyment:

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.