This semester's student projects

Studentische Projekte aus dem Fachbereich Mediendesign der
Hochschule Hof / Campus Münchberg.

Student projects from the Department of Media Design at
Hof University / Campus Münchberg.
Prof. Michael Zöllner – Interaction Design

IxD_Lab (by Sarah Thielsen)

Interaction Design
Gestische Interaktion im Operationssaal mit Microsoft Kinect in Kooperation mit Siemens Healthcare
Gestural interaction in the operating room with Microsoft Kinect, in cooperation with Siemens Healthcare

-> Projekte ansehen / See all projects

Information Architecture
Netzneutralität in Kooperation mit
Net neutrality in cooperation with

-> Projekt ansehen / See project

Experimentelle Leitsysteme für das Dokumentationszentrum des Reichsparteitagsgeländes in Nürnberg
Experimental signage systems for the Documentation Centre at the former Nazi Party Rally Grounds in Nuremberg
-> Projekte ansehen / See all projects

3D Printing
Makerbot Replicator Einführung
Makerbot Replicator introduction

Creative Coding Repository at GitHub
Processing Kursmaterial, Plugins und Example Code
Processing course material, plugins and example code

-> Sommersemester 2012 / Summer semester 2012
-> Media Design at Campus Hof
-> Arbeitsblog / Work blog
 (Registrierte Nutzer / registered users)

PointCloud: How does it work?

PointCloud is an iOS app that extends a WebKit browser with advanced Augmented Reality features. This lets you stick, for example, a DIV with video, images or even Processing.js sketches onto images or objects in the real world. I simply took a reference image of Wil Wheaton's t-shirt and stuck an SVG / CSS 3D animation onto it. The result is a perspective emitter of stencils.

Take a look at the HTML document to get an impression of how to write a simple app in PointCloud.
PointCloud's documentation isn't complete yet. They provide some examples, which are too complex for learning, and a PDF document with further descriptions. A complete API reference is still missing. But I know that documentation is hard work, and they have only just released their software.

I reduced their example to a minimum: it only initializes and tracks one image. For that, you have to link a reference image in the head of the document. The image has to be on a public server on the internet so it can be downloaded and processed by PointCloud's servers. A local web server doesn't work.
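For example (a sketch only: the rel value and attribute layout here are my assumption, not taken from PointCloud's documentation; only the id is the one the JavaScript below refers to):

<link id="p1_wil" rel="reference-image" href="http://example.com/p1_wil.jpg">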

The only relevant JavaScript function is onAppLoaded(). Here we activate the tracking target image:

function onAppLoaded() {
    viper.activateReferenceImage("p1_wil"); // id of the reference image link in the head
}

Your HTML content is placed within a hierarchy of DIVs controlled by PointCloud.

That's it. Load your file in the PointCloud browser and fine-tune the positioning.

Processing JS Plugin Test

Testing the Processing JS WordPress Plugin by Keyvan.

Works great as soon as I disable "WordPress should correct invalidly nested XHTML automatically" in Settings / Writing. Otherwise WordPress adds e.g. </width> when you write i<width in a for loop.
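For illustration, a typical Processing loop that trips the correction (my example, not the plugin's):

// WordPress reads the unescaped "i<width" as an opening <width> tag
// and appends a closing </width>, which breaks the sketch source
for (int i = 0; i<width; i += 10) {
  line(i, 0, i, height);
}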

python processing proof of concept

i am sure that if ben fry and casey reas started processing today, they would use python instead of java. learning the short and efficient python is much easier than learning the statically typed and lengthy java. no compilation would be necessary anymore, tons of native libraries would be available via c-bindings, etc. nodebox is a nice example of the use of python for computational design. unfortunately it is restricted to os x.

today we are using python for all sorts of automated code generation in wuerzburg. one example is the conversion of fluid simulation data into x3d files for our coperion project. jan and sebastian are using this approach for the simulated cities in their master projects.

i started a short proof of concept of a python based processing. jython is an implementation of python in java. with its help it's possible to write code against the processing api in python.

code example:

from processing.core import PApplet

class HelloProcessing(PApplet):

    def setup(self):
        self.size(350, 350)

    def draw(self):
        self.rect(150, 150, 50, 50)

if __name__ == '__main__':
    # pawt ships with jython; test() opens a component in an awt frame
    import pawt
    pawt.test(HelloProcessing(), size=(350, 350))

1. just download and install jython.

2. put your processing libs into the java classpath or modify the jython startup file to do so:


# This file was generated by the Jython installer
# Created on Mon May 19 20:25:40 CEST 2008 by me

# append processing's core.jar to the classpath (the path is an example)
CP="/Users/me/jython2.2.1/jython.jar:/Users/me/processing/lib/core.jar"
if [ ! -z "$CLASSPATH" ]
then
    CP="$CP:$CLASSPATH"
fi
"/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/bin/java" -Dpython.home="/Users/me/jython2.2.1" -classpath "$CP" org.python.util.jython "$@"

3. start the python file in your shell:
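assuming the sketch above was saved as hello_processing.py (the filename is just an example):

jython hello_processing.py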



last semester's topic of my course "experimental interfaces" at fh wuerzburg's design faculty was reconqr – reconquer urban networks. the results were amazing:

an augmented reality visualization of urban network traffic projected on architecture

urbanpulse (silke hilsing)
a minimal installation for feeling the city.

klangstufen (christiane keller)
stairs act as an interface for a spatial sound experience of stockholm's city noise.

denk.mal (claudia wieser)
a large interactive installation about privacy issues at the german facebook clone studivz.

urban aura (philipp hartung)
an experimental protocol for the communication channel between the underground metro and the surface: wind and light.

sound of surveillance (dominik hoegn)
creates sound from surveillance cameras.

parasite (katharina weier)
an urban parasite communication system

all progress and development was documented in the course's weblog.


in accelerando, charles stross writes about a lot of interesting concepts that we are just starting to work on and that i am covering in my dissertation. it was one of the most important books for me in 2007. it shows how close science fiction and science get these days.

my favorite feature is superplonk. it remixes the environment and filters out annoying persons, objects and sounds. that's an augmented reality version of what i practice today with special earplugs. but soon it should be possible with modified hearing devices and slim head mounted displays.

one experiment in my ongoing surveillance series simulates superplonk with images from network cameras. via motion detection i reconstruct a place's image without people and cars. all moving objects become ghosts; only people and cars standing still remain visible. movement makes you invisible. jan covers this topic in his master thesis, too.
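the real code is in the linked sketch below; as a minimal illustration of the idea, here is my own reconstruction (not the original cams_superplonk.pde), using a live camera instead of the earthcam images and an assumed brightness threshold:

// pixels that stay still between frames settle into a background buffer;
// moving people and cars never settle in, so they vanish
import processing.video.*;

Capture cam;
PImage bg, prev;
int threshold = 30; // brightness difference below this counts as "standing still"

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  bg = createImage(width, height, RGB);
  prev = createImage(width, height, RGB);
  bg.loadPixels();
  prev.loadPixels();
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    for (int i = 0; i < width * height; i++) {
      color c = cam.pixels[i];
      // only pixels that did not move since the last frame update the background
      if (abs(brightness(c) - brightness(prev.pixels[i])) < threshold) {
        bg.pixels[i] = c;
      }
      prev.pixels[i] = c;
    }
    bg.updatePixels();
  }
  image(bg, 0, 0);
}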

processing source code: cams_superplonk.pde

image source: earthcam

published in: BoingBoing