Overview: Interactive Cultural Heritage Projects

An overview of my interactive Cultural Heritage projects from the past few years.

Motion Bank Augmented Reality Postcard
An Augmented Reality visualization of Ros Warby dancing Deborah Hay’s “No Time To Fly”. She was captured by three cameras to extract her silhouette and compute her 3D position. The postcard and the brochure are tracked via natural feature tracking with Fraunhofer IGD’s Mobile AR framework.


DATEV Lange Nacht der Wissenschaften in Nürnberg
Posters come to life through Augmented Reality. The iPad app illustrates the history of the DATEV company, data processing, and taxation. It was presented in 2011 at Nürnberg’s Lange Nacht der Wissenschaften.


dARsein: Augmented Reality Tour through Architectural History at House of Olbrich
The iPhone app visualizes the compelling history of Darmstadt’s unique Jugendstil quarter with Augmented Reality. Jump back in time with photos you take on your iPhone: Augmented Reality superimposes information on each picture and visualizes the impressive historical Art Nouveau architecture in front of the real buildings.


MovableScreen at Allard Pierson Museum in Amsterdam
During the “A Future for the Past” exhibition of the Allard Pierson Museum (http://www.allardpiersonmuseum.nl) in Amsterdam, we are presenting two Augmented Reality applications on the MovableScreen: a virtual reconstruction of Satricum and an annotated landscape on an 1855 photograph of the Forum Romanum (Rome Reborn).


Augmented Reality Sightseeing
A table with a satellite image of Berlin shows a 3D model of the Berlin Wall and the city’s urban development from 1940 to 2008. Urban grain plans showing built-up areas are augmented onto the satellite image. The visualization was presented on UMPCs and the iPhone via video see-through. In addition, posters simulate the system working outdoors: historic photographs are seamlessly superimposed, showing the development of landmarks.


iTACITUS Reality Filtering
Reality Filtering enables context-sensitive overlays of original historic drawings of missing paintings or lost architecture. For a seamless integration, we render reality in the style of the original drawing (here: black and white). At Reggia Venaria Reale (http://www.lavenaria.it) we are visualizing missing paintings in the Diana Hall, different architectural styles of the Palazzo Diana, and the lost Temple of Diana in the gardens of the palace.
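A minimal sketch of the style-matching idea, assuming a per-pixel luminance conversion is enough to approximate a black-and-white drawing (the real iTACITUS pipeline is certainly more involved; the class and weights below are illustrative, not taken from the project):

```java
// Sketch: render "reality" in the style of a black-and-white
// drawing by converting each camera pixel to its luminance.
// The Rec. 601 weights are a standard grayscale formula, used
// here for illustration; they are not necessarily what
// iTACITUS used.
public class StyleFilter {
    static int toGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }
}
```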


Rome Reborn Augmented Reality at SIGGRAPH 2008
Augmented Reality overlays of 3D roman monuments via markerless tracking.


My Dog Light Writing “Makers”

Mounting five LEDs on a moving object creates one of the cheapest and largest displays: Persistence of Vision. It has been done on bicycle wheels, fans, and other rotating objects.

In this project I sewed a LilyPad wearable Arduino board and five LEDs onto my dog’s shirt with conductive thread. She (Ianto) is a Miniature Pinscher who runs very fast for fun; in curves, fast enough for Persistence of Vision. And she likes running in large circles in the park! Light writing.
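The encoding behind this can be sketched as a lookup from characters to 5-bit LED columns that are flashed in sequence as the dog runs. The glyphs below are a hypothetical minimal font for illustration, not the actual data in LilyPOVText.pde:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of POV text encoding: each character maps to columns
// of 5 bits, one bit per LED. Flashing the columns in sequence
// while the wearer moves paints the glyph in the air.
public class PovFont {
    // Hypothetical 3-column glyphs; bit 0 = bottom LED, bit 4 = top LED.
    static final Map<Character, int[]> FONT = new HashMap<>();
    static {
        FONT.put('I', new int[]{0b10001, 0b11111, 0b10001});
        FONT.put('O', new int[]{0b01110, 0b10001, 0b01110});
    }

    // Expand a word into a flat column sequence, with one blank
    // column as a gap between letters.
    static int[] columnsFor(String word) {
        int n = 0;
        for (char c : word.toCharArray()) n += FONT.get(c).length + 1;
        int[] out = new int[n];
        int i = 0;
        for (char c : word.toCharArray()) {
            for (int col : FONT.get(c)) out[i++] = col;
            out[i++] = 0; // gap between letters
        }
        return out;
    }
}
```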

I chose Cory Doctorow’s “Makers” for her light writing. It’s one of the most influential books I have read in recent years: a book about our generation of Makers, set some months or years in the future. And Cory released it under a Creative Commons license, so anyone can remix it.

This is a remix in light.

The hardware:
5 Sparkfun LilyPad LED Bright White
Sparkfun LilyPad Power Supply
Conductive Thread

Sewing conductive thread is kind of tricky. It took some time until I got used to it and the stitches came out straight. I even got some short circuits from crossing plus and minus threads too closely. But by now I prefer it over soldering.

Source code: LilyPOVText.pde

Thanks to Birgit for the shirt and Ianto for running around.

Hack a day

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

blnk: Solar Ping Pong Ball Dot Matrix Display

The right perspective is necessary to read the message. Blink, don’t blink.

My latest experiment with dot matrix light installations. It’s much simpler than Shadow Meadow and Fireflies: I am working with only two layers of dots instead of a random number of layers, so I don’t need to calculate a reverse 3D projection.

The pixels of the 5×3 dot matrix type are arranged in two layers on a grid, so they are only readable from a position directly in front of them. Viewed from any other position, they look like the randomly arranged light dots of an LED chain.
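The viewing constraint boils down to simple perspective projection: a back-layer dot only lines up with its intended front-layer grid cell when the eye sits on the line through both. A minimal sketch, with illustrative names and coordinates (not taken from the installation):

```java
// Sketch: where does a dot at depth z appear when projected onto
// the front layer (the plane z = 0) as seen from an eye at
// (ex, ey, ez)? The two-layer trick works because this projected
// position only matches the intended grid cell for one viewing
// direction; from elsewhere the pattern falls apart.
public class TwoLayer {
    // Intersect the ray from the eye through (x, y, z) with z = 0.
    static double[] projectToFront(double x, double y, double z,
                                   double ex, double ey, double ez) {
        double t = ez / (ez - z); // ray parameter where z reaches 0
        return new double[]{ex + t * (x - ex), ey + t * (y - ey)};
    }
}
```

For example, with the eye at (0, 0, 5), a back-layer dot at (1, 0, -1) projects to x = 5/6 on the front plane; move the eye sideways and the projection drifts off the grid.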

Thanks to the solar LED chain, blnk can be placed anywhere. It charges by day and glows by night.

Solar LED chain (20 €)
Metal grid (5 €)
Ping pong balls (12 €)


Thanks to:
Instructable Ping Pong Ball Lights
Wikipedia 3D Projection

Shadow Meadow Variations

These are 3D simulations and 3D prints of my “Shadow Meadow” concept: dot matrix type generated by apparently randomly arranged light dots and obstacles casting shadows.

Fireflies (IKEA Allsang solar lights)
WebGL / X3DOM simulation (requires latest Chrome or Firefox 4)


Shadow Meadow
WebGL / X3DOM simulation (requires latest Chrome or Firefox 4)


Shadow Meadow 3D printed by Shapeways (OK)

Kinect Experiment: Freezing Han Solo

Princess Leia’s hologram message has already been recreated with Kinect. So Jens and I chose Han Solo frozen in carbonite.

This is an experiment with the Kinect and Processing. People in front of it pose like Han Solo and get frozen in 3D. We are already exporting the 3D models for 3D printing, so stay tuned. The software will be released as open source soon if anyone is interested.
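The core of the freezing step is turning each Kinect depth pixel into a 3D point that can be written to a mesh. A minimal sketch using a pinhole camera model; the intrinsics below are approximate values commonly cited for the Kinect v1 and are not taken from this project’s code:

```java
// Sketch: convert a Kinect depth pixel (u, v, depth in mm) into
// a 3D point in camera space with a pinhole model. fx/fy/cx/cy
// are approximate Kinect v1 intrinsics, for illustration only.
public class DepthToPoint {
    static final double FX = 594.2, FY = 591.0; // focal lengths (px)
    static final double CX = 339.3, CY = 242.7; // principal point (px)

    static double[] toPoint(int u, int v, double depthMm) {
        double z = depthMm / 1000.0;   // millimeters -> meters
        double x = (u - CX) * z / FX;  // back-project through the pinhole
        double y = (v - CY) * z / FY;
        return new double[]{x, y, z};
    }
}
```

Collecting these points over the 640×480 depth image gives the point cloud that gets meshed and exported for printing.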

Thanks to Sabine, Patrick, Manuel, and Ianto for posing, to Daniel Shiffman for the openkinect library, and to NIN’s Creative Commons album “The Slip”.

Make Blog
The Creators Project

Download the application (OS X) and source code (Processing):
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.

Flickr Album of the 3D Prints:




Restart Capture (Kinect):

Restrict depth of Kinect:
q: minDepth += 10
a: minDepth -= 10
w: maxDepth += 10
s: maxDepth -= 10

Cut left and right:
f: cut_l -= 5
g: cut_l += 5
v: cut_r -= 5
b: cut_r += 5

Move camera in z:
+: move_z += 10
-: move_z -= 10

Move model in z:
e: model_z += 10
d: model_z -= 10

LilyPad POV: My dog has a display

These are the first experiments with the Arduino LilyPad, Persistence of Vision (POV), and my dog. When she’s running, the flickering LEDs write words in the dark.

I used my old POV example for Arduino.
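The timing idea behind such a POV sketch can be reduced to one function: given the elapsed time and a fixed per-column duration, pick which column of the pattern to light. A sketch in plain Java (the microsecond values are illustrative, not measured on the dog):

```java
// Sketch of POV timing: the Arduino loop repeatedly asks which
// column should be lit right now and sets the five LEDs to that
// column's bits. columnMicros would be tuned to the runner's speed.
public class PovTiming {
    static int columnIndex(long elapsedMicros, long columnMicros,
                           int numColumns) {
        return (int) ((elapsedMicros / columnMicros) % numColumns);
    }
}
```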

Next up is a Bluetooth connection to my Android Nexus S to change the words.