processing

Next power of two in Processing

Monday, April 20th, 2015

If you ever need the next power of two of a number in #Processing:

int next = (int)pow(2, ceil(log(n) / log(2)));

http://stackoverflow.com/a/17379122
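
Wrapped in a little helper it looks like this; a minimal sketch, where the function name and the guard for n <= 1 are my additions, not part of the StackOverflow answer. (For inputs that are already exact powers of two, the float math can occasionally round up one power too far.)

int nextPowerOfTwo(int n) {
  if (n <= 1) return 1;                        // guard for n <= 1 (my addition)
  return (int) pow(2, ceil(log(n) / log(2)));  // log base 2 via natural logs
}

void setup() {
  println(nextPowerOfTwo(3));    // 4
  println(nextPowerOfTwo(5));    // 8
  println(nextPowerOfTwo(1000)); // 1024
}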

¯\_(ツ)_/¯ running through all installed fonts via #Processing

Saturday, August 2nd, 2014

¯\_(ツ)_/¯

http://www.openprocessing.org/sketch/156356
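
The sketch itself lives on OpenProcessing; as a rough idea of how it works (this is a minimal sketch of my own, not the linked code): PFont.list() returns the names of all installed fonts, and draw() cycles through them while rendering the shrug.

String[] fonts;
int index = 0;

void setup() {
  size(600, 200);
  fonts = PFont.list(); // names of all fonts installed on the system
  frameRate(4);         // switch fonts a few times per second
}

void draw() {
  background(255);
  fill(0);
  textAlign(CENTER, CENTER);
  textFont(createFont(fonts[index], 48));
  text("¯\\_(ツ)_/¯", width / 2, height / 2 - 20); // fonts without the glyph show a fallback
  textFont(createFont(fonts[index], 14));
  text(fonts[index], width / 2, height - 30);      // show which font is on screen
  index = (index + 1) % fonts.length;
}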

bluecove fix for OS X 10.8

Tuesday, October 22nd, 2013

For everyone working with the bluecove Bluetooth library in Processing / Java: it stopped working on OS X 10.8 due to deprecated functions, and there is no maintainer anymore.

Here’s a fix and a Wiimote example:
https://github.com/ixd-hof/Processing/tree/master/Examples/Input/Wiimote_BalanceBoard

Thanks to the patch by Wulf:
http://www.lejos.org/forum/viewtopic.php?f=7&t=3833

This semester’s student projects

Saturday, March 2nd, 2013

Student projects from the Department of Media Design at
Hof University / Campus Münchberg.
Prof. Michael Zöllner – Interaction Design

IxD_Lab (by Sarah Thielsen)

Interaction Design
Gestural interaction in the operating room with Microsoft Kinect in cooperation with Siemens Healthcare

-> See all projects

Information Architecture
Net neutrality in cooperation with netzpolitik.org

-> See project

Signage Systems
Experimental signage systems for the documentation centre of the Nazi Party Rally Grounds in Nuremberg
-> See all projects

3D Printing
Makerbot Replicator introduction

Creative Coding Repository at GitHub
Processing course material, plugins and example code
https://github.com/ixd-hof

-> Summer semester 2012
-> Media Design at Campus Hof
-> Work blog (registered users)

$1 Unistroke Recognizer in Processing

Wednesday, October 24th, 2012

A simpler version of Norman Papernick’s Processing port of Wobbrock et al.’s $1 Unistroke Recognizer. I moved the gesture processing into a separate file and simplified the main file.

Thus it’s simpler to adapt for finger tracking with Kinect.

Dollar_Gestures.pde
Recognizer.pde 
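
A hypothetical usage sketch for orientation: the Recognizer class and its recognize() method below are assumptions about what Recognizer.pde exposes, not copied from the files. It records a mouse stroke and hands it to the recognizer on release; for Kinect, you would add the tracked fingertip position instead of the mouse.

// assumed API: a Recognizer class in Recognizer.pde with a
// recognize(ArrayList<PVector>) method returning the gesture name
Recognizer recognizer = new Recognizer();
ArrayList<PVector> points = new ArrayList<PVector>();

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  stroke(0);
  for (PVector p : points) {
    point(p.x, p.y); // draw the gesture path while it is being recorded
  }
}

void mouseDragged() {
  points.add(new PVector(mouseX, mouseY)); // with Kinect: add the fingertip position here
}

void mouseReleased() {
  if (points.size() > 10) {
    String name = recognizer.recognize(points); // assumed signature
    println("recognized gesture: " + name);
  }
  points.clear();
}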

PointCloud: How does it work?

Saturday, April 14th, 2012

PointCloud is an iOS app that extends a WebKit browser with advanced augmented reality features. You can, for example, stick a DIV with video, images or even Processing.js sketches onto images or objects in the real world. I simply took a reference image of Wil Wheaton’s t-shirt and stuck an SVG / CSS 3D animation to it. The result is a perspective emitter of stencils.

Take a look at the HTML document to get an impression how to write a simple app in PointCloud:
http://dev.m05.de/pointcloud/wil/
PointCloud’s documentation isn’t complete yet. They provide some examples, which are too complex for learning, and a PDF document with further descriptions, but a complete API reference is still missing. I know that documentation is hard work, though, and they have only just released their software.

I reduced their example to a minimum: it only initializes and tracks one image. To do that, you link a reference image in the head of the document. The image has to be on a public server on the internet so that it can be downloaded and processed by PointCloud’s server; a local web server doesn’t work.

The only relevant JavaScript function is onAppLoaded(). Here we activate the tracking target image:

function onAppLoaded() {
  viper.activateReferenceImage("p1_wil"); // id of image link
  viper.requireRealityMap();
  viper.log("requireRealityMap");
}

Your HTML content is placed within a hierarchy of DIVs controlled by PointCloud.

That’s it. Load your file in the PointCloud browser and fine-tune the positioning.

Processing JS Plugin Test

Saturday, February 25th, 2012

Testing Processing JS WordPress Plugin by Keyvan.

Works great as soon as I disable “WordPress should correct invalidly nested XHTML automatically” in Settings / Writing. Otherwise WordPress adds e.g. </width> when using i<width in a for loop.
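
For reference, a minimal sketch of my own with the kind of loop that triggers it; i < width is exactly the comparison the XHTML correction turns into a closing tag:

void setup() {
  size(200, 200);
}

void draw() {
  background(255);
  stroke(0);
  // "i < width" is the pattern that gets mangled into </width>
  for (int i = 0; i < width; i += 10) {
    line(i, 0, i, height);
  }
}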

python processing proof of concept

Wednesday, May 21st, 2008

i am sure that if ben fry and casey reas were starting processing today, they would use python instead of java. learning short and efficient python is much easier than learning statically typed and verbose java. no compilation would be necessary anymore, tons of native libraries would be available via c-bindings, etc. nodebox is a nice example of using python for computational design. unfortunately it is restricted to os x.

today we are using python for all sorts of automated code generation in wuerzburg. one example is the conversion of fluid simulation data into x3d files for our coperion project. jan and sebastian are using this approach for the simulated cities in their master projects.

i started a short proof of concept of a python-based processing. jython is an implementation of python in java. with its help it’s possible to write code against the processing api in python.

code example (helloProcessing.py):

from processing.core import PApplet

class HelloProcessing(PApplet):

    def setup(self):
        # keep a module-level reference to the sketch, like processing's implicit "this"
        global p
        p = self
        p.size(350, 350)

    def draw(self):
        p.fill(p.random(255))
        p.rect(150, 150, 50, 50)

if __name__ == '__main__':
    import pawt
    pawt.test(HelloProcessing())  # pawt.test puts the applet into an AWT test frame

1. just download and install jython:
http://www.jython.org/Project/download.html

2. put your processing libs into the java classpath, or modify the startup file (jython) to do so:

#!/bin/sh

# This file was generated by the Jython installer
# Created on Mon May 19 20:25:40 CEST 2008 by me

CP="/Users/me/jython2.2.1/jython.jar:/Applications/Processing/lib/core.jar"
if [ ! -z "$CLASSPATH" ]
then
  CP=$CP:$CLASSPATH
fi
"/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/bin/java" -Dpython.home="/Users/me/jython2.2.1" -classpath "$CP" org.python.util.jython "$@"

3. start the python file in your shell:

./jython helloProcessing.py

reconqr

Thursday, March 6th, 2008

last semester’s topic of my course “experimental interfaces” at fh wuerzburg’s design faculty was reconqr – reconquer urban networks. the results were amazing:

HERBARIUM DIGITAL (georg reil)
an augmented reality visualization of urban network traffic projected on architecture

urbanpulse (silke hilsing)
a minimal installation for feeling the city.

klangstufen (christiane keller)
stairs act as an interface for a spatial sound experience of stockholm’s city noise.

denk.mal (claudia wieser)
a large interactive installation about privacy issues on the german facebook clone studivz.
denk_mal.jpg

urban aura (philipp hartung)
experimental protocol for the communication channel between underground metro and surface: wind and light.

sound of surveillance (dominik hoegn)
creates sound from surveillance cameras

parasite (katharina weier)
an urban parasite communication system

all progress and development was documented in the course’s weblog:
http://gestaltung.fh-wuerzburg.de/blogs/reconqr

superplonk

Sunday, January 27th, 2008

in accelerando, charles stross writes about a lot of interesting concepts that we are just starting to work on and that i am covering in my dissertation. it was one of the most important books for me in 2007. it shows how close science fiction and science are getting these days.

my favorite feature is superplonk. it remixes the environment and filters out annoying people, objects and sounds. that’s an augmented reality version of what i practice today with special earplugs. but soon it should be possible with modified hearing devices and slim head-mounted displays.

one experiment in my ongoing surveillance series simulates superplonk with images from network cameras. via motion detection i reconstruct a place’s image without people and cars. all moving objects become ghosts; only people and cars that stand still remain visible. movement makes you invisible. jan covers this topic in his master thesis, too.

superplonk.gif
processing source code: cams_superplonk.pde
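
cams_superplonk.pde itself isn’t reproduced here, but the underlying idea fits in a few lines: a per-pixel running average in which only static pixels accumulate, so everything that moves fades into a ghost. a minimal sketch, assuming a local webcam via the processing video library (2.0+) instead of the network cameras used above:

import processing.video.*;

Capture cam;
float[] bg;        // slowly updated background estimate, one brightness value per pixel
float rate = 0.01; // low blend rate: moving objects never settle into the background

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  bg = new float[width * height];
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    loadPixels();
    for (int i = 0; i < bg.length; i++) {
      // blend each new frame in very slowly; pixels that stay constant
      // (buildings, parked cars, people standing still) accumulate,
      // anything moving is averaged away into a ghost
      bg[i] = lerp(bg[i], brightness(cam.pixels[i]), rate);
      pixels[i] = color(bg[i]);
    }
    updatePixels();
  }
}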

image source: earthcam

published in: BoingBoing