
Articles Tagged '0110.be'

~ Printing a part of the world - a 3D-printed cityscape


Fig: 3D printing your part of the world.

My ex-girlfriend and current wife likes maps. While looking for a New Year's gift I got the idea to give her a 3D map of the nearby historic city center of Ghent with its three iconic towers. I have a 3D printer at home but still needed to find a printable 3D model of Ghent.

Luckily, a couple of days ago a piece of software appeared that captures Google Earth tiles (cubes) into a single 3D file. With it you can select an area of interest via Google Maps and download a GLTF file which captures the landscape in 3D. The software needs an API key, which can be requested via the Google developer tools.

After downloading a GLTF file, the 3D model needs to be made 3D-printable. There are online GLTF to STL converters, but a bit of care needs to be taken to end up with an actually printable STL. My selected area of interest only has slight height differences in the landscape, which I handled by placing the STL on a base that compensates for these differences. Your 3D slicer can also generate support structures for inclines in the landscape.

The 3D model generated by Google Earth is quite noisy and can contain floating parts and holes, so you may need to edit the STL mesh directly. Selecting a slightly shifted area of interest may also solve problems at the edges of the print: take care to chop fewer buildings in two.

Have fun printing your own piece of the world!

Fig: a 3D model of the Ghent city center visualized with a Three.js STL viewer.

 


~ Olaf in print - Elektor magazine article on Acoustic Fingerprinting


Fig: Olaf on the front page of Elektor!

Elektor, a hobby electronics magazine, recently featured an article on acoustic fingerprinting using the ESP32. It is included in a special edition on Espressif products like the ESP32. The article includes content previously published on this blog and other writings about Olaf.

Since the article is based on my writings, there was an agreement to allow one of their writers to compose the magazine article under my name. This was my first experience with having a ghostwriter – quite convenient, I must say. Although it’s somewhat apparent that the article is compiled from various sources, I am overall pleased with the outcome. It even made the front page!

Elektor has a rich history, dating back to the early 1960s when it was first published in Dutch as ‘Elektuur’. I have fond memories of browsing Elektuur at my nerdy uncle’s place. If anything, this article has certainly earned me some nerd credibility points in my uncle’s eyes.

Please take a moment to read the Espressif Special Edition of Elektor Magazine.


~ Clap detection - Trigger your anything

Fig: Clap twice for light.

There is something about surprising interfaces. Having a switch to turn on a light gets quite boring after a while. Turning on a light by clapping twice, on the other hand, has some kind of magic feel to it. In a recent Mr Beast video he and his gang visit a number of expensive houses, and in one of those mansions there is a light operated by clapping twice. I am not sure about the blatant materialism, but it got me thinking about how to build a similar clap-operated light yourself.

So, which elements are needed? First, a microphone to pick up sound. Second, an algorithm that detects claps. And finally, something that reacts to the claps: a light or something else.

Many devices have microphones, so sound input is relatively easy, and with some creativity there are many things waiting to be ‘clap triggered’: vacuum robots, sunscreens, lights, in-house ventilation, … The main difficulty is implementing an efficient clap-detection algorithm. Luckily, a few are already described in the literature. I based my ANSI C implementation on ‘Duxbury, C., et al (2003). Complex domain onset detection for musical signals’.

My version of the clap-detection algorithm has two parameters which might need adapting to your environment. The silence threshold determines the minimum loudness for a clap to be registered at all. The onset threshold determines, more or less, how ‘percussive’ the sound needs to be: the idea is to only react to things that sound like a clap and not to, e.g., a loud whistle or other sounds. You can try it out below; a code sketch follows the demo.


Demo: click the ‘start audio’ to capture your microphone and try to clap clearly twice. Lower the parameters if nothing happens.
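For readers who prefer code over a demo, a minimal energy-based sketch of the two thresholds is shown below. This is not the complex-domain detector used in pector; the frame size and threshold values are illustrative assumptions.

# Minimal, energy-based sketch of the silence and onset thresholds.
# NOT the complex-domain detector used in pector; values are assumptions.
import numpy as np

SILENCE_THRESHOLD = 0.05   # minimum RMS for a frame to count at all
ONSET_THRESHOLD = 4.0      # how sharply the energy must rise ('percussiveness')

def detect_claps(samples, frame_size=512):
    """Yield frame indices with a clap-like onset; samples is a 1-D float array in [-1, 1]."""
    prev_rms = 1e-6
    for i in range(0, len(samples) - frame_size, frame_size):
        frame = samples[i:i + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        # loud enough AND a sudden jump compared to the previous frame
        if rms > SILENCE_THRESHOLD and rms / prev_rms > ONSET_THRESHOLD:
            yield i // frame_size
        prev_rms = max(rms, 1e-6)

A double clap can then be detected by checking whether two onsets fall within, say, half a second of each other.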

Clap detection on a micro-controller

With this working, we can now try to run the code on a microcontroller, which makes it more practical in daily use, e.g. to switch on lights. A low-cost ESP32 with a MEMS microphone is a good platform: these microcontrollers are easy to use and have WiFi connectivity, which opens the possibility of sending commands to smart sockets or other WiFi-enabled devices. The pector GitHub repository contains an Arduino project to run the clap-detection algorithm on an ESP32 or similar device (Teensy, RP2040, … ).

Clap detection in the command line

Next to the main clap-detection software, there is a small script that triggers commands when a clap is detected. In this case, the script waits for a double clap and then pushes updates to a git repository. There are two reasons for this: the first is that it is fun, the second is bragging rights. Not many people can say they once pushed source code simply by clapping twice. It is, however, a challenge to find people who have the patience to listen to me explaining what I have done and who are impressed by this feat, so maybe there is only one reason: it is fun. Below is a screen capture of code being pushed to the pector repository.

Vid: pushing code by clapping

Have a look at the pector GitHub repository for more info on how you can make your websites/apps/command line tools/devices clap controlled!


~ Doorbell triggered Halloween window projection

Fig: Door projection as imagined by DALL·E.

I did a thing and, similar to most stuff made here, it took quite a bit of effort and is rather pointless. In that sense, it is a bit like life itself. Anyhow, it seems that the Halloween tradition of trick-or-treating has found a strong foothold in mainland Europe. Being socially embedded, I prepared a Halloween-themed projection that responds to my doorbell. I have a glass door, which is ideal for scary projections. The idea is to have a continuous door projection with a twist: when kids press the doorbell, a projected ghost reacts and rushes towards them, along with a loud ghostly scream.

This blog post details the technical setup, with the intention to inspire similar projects and to serve as documentation for next year. First we need a way to react to the doorbell.

Doorbell trigger setup

I sourced a couple of FSRs (force sensitive resistors) from a “sound book” that I had taken apart. Most of these sound books with e.g. animal sounds are meant for toddlers and have some type of button and a small electronics circuit to make sound. Some of these books work with FSR ‘buttons’ which are similar in size to a doorbell. I took a single FSR from such a book.

I attached the FSR to a “Teensy LC” micro-controller with an additional resistor and put it in a small 3D-printed case. The Teensy was programmed to emit a MIDI Note On event when the FSR/doorbell is pressed. A Note Off follows when the button is released. Once it is connected via USB to a computer it is essentially regarded as a digital piano with only a single key. Making a micro-controller pretend to be a standard MIDI device is very practical since the message passing protocol is standardized and well supported by many types of systems. MIDI is also optimized for low-latency communication. Via the Web MIDI API there is even support for MIDI in web browsers.
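Because the doorbell shows up as a standard MIDI device, any MIDI-capable environment can react to it, not just the browser. A minimal Python sketch using the mido library (the port name and the reaction are assumptions):

# Hedged sketch: react to the doorbell's MIDI Note On from Python using mido.
# The port name "Teensy MIDI" is an assumption; list ports with mido.get_input_names().
import mido

with mido.open_input('Teensy MIDI') as port:
    for message in port:
        if message.type == 'note_on':
            print('Doorbell pressed: trigger the scare video and audio here')

In the actual setup the browser reacts to the same messages via the Web MIDI API.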

Video projection

While software like Resolume allows for complex interactive video projections, my requirements are more modest: I need a continuous background video and I want the ‘scare’ video and audio to appear when the doorbell is triggered. I opted for a browser-based solution: multi-media capabilities, scripting and MIDI support are all present in modern browsers. Running things in a browser has advantages: there is no need for specialized software, it is easy to program, easy to run, relatively stable and future-proof. The proof-of-concept can be seen below. For the actual projection on a window or door you need to first cover the glass with a thin layer of white paper which lets most light through. A white paper tablecloth works well.


Demo: click the ‘start video’ to start the background video and click doorbell if you dare…

The code is nothing special and a bit hacky, but it can be found attached. The download includes the HTML, JavaScript, CSS, video, audio and the microcontroller software for the doorbell-triggered projection.


~ Dragon! Sound effects for board games

Fig: Rock & Troll collaborative board game.

I often play board games with my kids. One of them is an absolute board game fan while the other is a sore loser and only wants to play collaborative games. These games are played ‘against the board’ and you win, or lose, together. I myself also still have problems losing games so I do understand this predicament. Genetics…

Rock & Troll is one of those games. It is a chance based game where you collaboratively try to build a path to a treasure before the dragon reaches it. Every player has to flip a tile which is either a part of the path (good) or a dragon (very bad). To increase engagement during play I often add sound effects. I was thinking: this can be improved and automated. For example, by doing this when a dragon tile is flipped:

The idea is to unobtrusively detect game state and add sound effects at critical moments. The sound effect should be playing without too much lag, ideally within about 200ms, so it feels immediate and connected to the game event. To implement this a camera based system with robust, fast object detection seemed like the way to go.

Dragon detection

To detect dragons in a video stream I wanted to retrain an existing object-detection system. So two things needed to happen: first, a realistic, labeled dataset had to be created; then a system had to be trained to detect the dragons. We do not want to label a massive dataset, so we use transfer learning to retrain an existing network. This existing network should already have learned basic features like edges, colors, geometries and other basic patterns, the hope being that this results in robust detection, even with a limited dataset.

To create the dataset I wrote a small script which took a webcam picture every few seconds while I was manipulating the board and tiles. This resulted in about 130 pictures, some with no dragons and some with six, 300 labels in total. For annotating the dataset I used the free roboflow web-app which also hosts the final dragon dataset. After augmentation, the size of the dataset can be tripled. The command to extract images from a webcam looks like this on my system:

ffmpeg -y -r 30 -f avfoundation -i '0' -frames:v 1 snapshot.jpg
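The capture script itself is not included here, but a rough OpenCV-based equivalent could look like this (camera index, interval and file names are assumptions):

# Sketch: grab a webcam frame every few seconds to build a small dataset.
# Camera index 0, the 5 second interval and the output folder are assumptions.
import os
import time
import cv2

os.makedirs('dataset', exist_ok=True)
capture = cv2.VideoCapture(0)
for shot in range(130):
    ok, frame = capture.read()
    if ok:
        cv2.imwrite(f'dataset/snapshot_{shot:03d}.jpg', frame)
    time.sleep(5)
capture.release()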

After considering some alternatives I landed on YOLOv8: a robust and fast object-detection system. Additionally, it is well-documented, PyTorch-based, easy to use and it has support for video streams. The annotated roboflow dataset can be downloaded in a YOLOv8-compatible format as well. Transfer learning was based on the yolov8s.pt weights, which are downloaded automatically. With the system installed correctly and the dataset downloaded, a local GPU-based training command might look like this:

yolo train data=RocknTroll.v3i.yolov8/data.yaml epochs=30 model=yolov8s.pt device=mps imgsz=640 batch=32

Once the system was trained – download the model weights here – a bit of glue code is needed. The Python script streams images from a camera, here via OpenCV, and detects dragons in each image. Every time a new dragon is found, the sound effect is played; a sketch is shown below. Note that the Roboflow website automatically trains a model as well, which can be tried out with a webcam.
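A simplified sketch of such glue code, assuming the ultralytics package, a hypothetical best.pt weights file and a placeholder for the actual sound playback:

# Sketch of the glue code: stream frames, count dragons, react when a new one appears.
# 'best.pt' and the sound playback are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO('best.pt')           # the retrained dragon detector
capture = cv2.VideoCapture(0)
seen = 0                          # highest dragon count so far

while True:
    ok, frame = capture.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    dragons = len(results[0].boxes)
    if dragons > seen:            # a new dragon tile appeared on the board
        seen = dragons
        print('Dragon!')          # play the sound effect here

The count-can-only-grow check doubles as the first robustness heuristic discussed below.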

There are a few ways to improve the robustness of the system. During a game the number of dragons only ever grows: if the script detects fewer dragons than before, it is probably a false negative or there is occlusion. Additionally, the dragon tiles remain in the same location once they are placed on the board, so new dragons are only expected in certain regions of the image. Both heuristics can be used together to improve robustness.

Notes

One of the reasons I bought an M1 Mac with unified memory is exactly these types of AI applications. After installing PyTorch 2.0, GPU acceleration resulted in a 10x training speed improvement. Training on a GeForce 1080 GTX from 2016 was still quite a bit faster, probably thanks to years of performance tuning targeting CUDA. It is clear that the Mac GPU-acceleration software ecosystem could use more effort; even system tools in macOS are limited: in the macOS Activity Monitor, for example, GPU activity is very much an afterthought.

I am hesitant to use cloud-based GPU computing due to the lack of control and privacy. I am not willing to send pictures of my kids to e.g. Google Cloud GPUs. The dependency on someone else's hardware might also limit the longevity of such systems.

The ease-of-use, performance and accessibility of these deep-learning systems is great. Only a couple of years ago it would take months of hard work to maybe only approach similar detection performance. Adapting this idea for other board games and more types of tiles or board game events should be very possible.


~ ☀️ Solar sockets - Delivers power only on solar energy surplus

Fig: Solar socket.

I recently installed a couple of smart electrical sockets. The sockets only switch on when there is a solar energy surplus: when my rooftop solar panels produce more than the current energy consumption. I use these ‘solar sockets’ to charge the battery of an electric bike, for air conditioning and for charging other smaller devices. This post describes the components needed for such a system, with the aim of inspiring similar builds:

  1. Solar panels and a solar inverter with some form of readout.
  2. A device to measure electrical energy use in a home.
  3. Smart sockets with an easy to use API.
  4. Some software to glue everything together.

1. ☀️ Solar panels and inverter

Most solar inverters have some form of API to read out the current solar panel output. In my case I use an SMA inverter, which has two ways to extract this data: via Bluetooth and via wired Ethernet. I found the wired Ethernet solution to be the most reliable. The SMA inverter does use a somewhat annoying data-formatting protocol, but luckily there is an open source solution to decode the data: SBFspot.

For SMA inverters, and possibly for others as well, there is another option: the data is also automatically uploaded to a cloud-based platform. This platform has an API which can be used to extract data on solar energy production. I do not like to be dependent on external cloud-based software platforms, which might change at any time. Additionally, cloud-based platforms can be slow for real-time data.

2. Measuring total electrical energy use

To measure total power use, I use an “Eastron SDM220M” measurement device which communicates with a server over a serial connection; there are adapters to translate serial Modbus to USB. The device was installed in my wiring closet by a professional: it is directly connected to the 60A mains and I would not advise DIYing that. A readout sketch is shown below.
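Reading such a meter from Python could look roughly like the sketch below, assuming the minimalmodbus library; the serial device, slave address and register address are assumptions that should be checked against the meter's datasheet:

# Sketch: read the current total active power from an SDM220-style meter over Modbus RTU.
# Serial port, slave id and register address are assumptions.
import minimalmodbus

meter = minimalmodbus.Instrument('/dev/ttyUSB0', 1)  # serial adapter, slave address
meter.serial.baudrate = 9600

# 0x000C is assumed to hold 'active power' as a 32-bit float in an input register
power_watt = meter.read_float(0x000C, functioncode=4)
print(f'Current power: {power_watt:.0f} W')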

Alternatively, some places are equipped with digital energy meters which might allow direct readout, or readout via a cloud-based API after a few minutes of delay. This might suffice for a solar socket install.

Energy use measurement might not be strictly needed for the ‘solar sockets’: if energy use is predictable, it might be fine to simply switch the sockets on during your average peak solar production, perhaps combined with a local weather API. Finally, we need to switch on some sockets.

3. Smart WiFi Socket

There are many smart WiFi sockets on the market. Most come with a smartphone app which allows you to control the socket from anywhere. Behind the scenes the sockets communicate with the vendor's cloud-based system over the internet. Additionally, there are integrations with systems like Apple Home, Amazon Alexa and Google Assistant. For fundamental infrastructure like sockets in my home, I want to avoid dependencies on external cloud-based systems. Next to the concerns about privacy and ownership there is a very practical concern: the cloud-based system might just stop working in a few years. Especially any dependency on a Google service is suspect, and I am not convinced of Amazon Alexa's future either.

Luckily, there is Tasmota, which provides open source firmware targeting many types of ‘smart’ devices, including smart sockets. The tagline for Tasmota is ‘Total local control with quick setup and updates’. I bought a couple of Nous A1T smart WiFi sockets which come with Tasmota firmware. Switching the socket on is done by sending an HTTP GET request to a URL, which can be scripted easily, as in the sketch below.
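A minimal sketch of switching a Tasmota socket from Python, assuming the requests library and a made-up IP address for the socket:

# Sketch: toggle a Tasmota socket via its local HTTP command API.
# Replace 192.168.1.50 with the address of your socket.
import requests

def set_socket(on: bool):
    command = 'Power On' if on else 'Power Off'
    requests.get('http://192.168.1.50/cm', params={'cmnd': command}, timeout=5)

set_socket(True)

The same cm endpoint accepts other Tasmota commands as well, for example to read the socket's state.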

4. Control software

A script glues everything together: it logs energy usage and solar output, switches the solar sockets on when a surplus is detected and switches them off again when the surplus is gone. There is some additional logic which ensures that a socket remains on for at least an hour, even if there is no solar surplus, so that batteries are charged to a minimally usable state. A stripped-down sketch of this loop follows.
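The sketch below outlines such a control loop; solar_watt(), consumption_watt() and set_socket() are placeholders for the inverter readout, the energy meter readout and the Tasmota helper from the sketches above:

# Sketch of the control loop; the three callables are placeholders for the
# readouts and the Tasmota helper described earlier in this post.
import time

MIN_ON_SECONDS = 3600  # keep the socket on for at least an hour

def control_loop(solar_watt, consumption_watt, set_socket, poll_seconds=60):
    on_since = None
    while True:
        surplus = solar_watt() - consumption_watt() > 0
        if surplus and on_since is None:
            set_socket(True)              # surplus appeared: switch on
            on_since = time.time()
        elif not surplus and on_since is not None and time.time() - on_since > MIN_ON_SECONDS:
            set_socket(False)             # surplus gone and minimum on-time reached
            on_since = None
        time.sleep(poll_seconds)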

In summary, we presented a couple of building blocks to build ‘solar sockets’ which are on only when there is an energy surplus. By using the simple APIs offered by Tasmota and locally running software, there is no dependency on (in the long term unreliable) cloud-based systems, which ensures the longevity of the build.

As an additional bonus, the solar sockets also serve as an indicator. A small LED shows when they are on, or, in other words, when there is solar energy surplus and when it is a good idea to switch on other electrical appliances.


~ The automatic HTTPS capabilities of Caddy

Caddy logo

This blog has been running on Caddy for the last couple of months. Caddy is an HTTP server with support for reverse proxies and automatic HTTPS. The automatic HTTPS feature takes care of requesting, installing and renewing SSL certificates, which means you need far fewer configuration settings and less maintenance compared with e.g. lighttpd or Nginx. The underlying certmagic ACME client is responsible for requesting these certificates.

Before, this blog ran on lighttpd, but during the last decade lighttpd development has stalled: lighttpd version 2 has been in development for 7 years and the bump from 1.4 to 1.5 has been taking even longer. lighttpd started showing its age with limited or no support for modern features like WebSockets and HTTP/3, and finicky configuration for e.g. HTTPS with virtual domains.

Caddy with Ruby on Rails

I really like Caddy’s sensible defaults and the few lines of configuration needed to get things working. Below is a reusable HTTPS-enabled configuration for a Ruby on Rails application. This configuration does file caching, compression, HTTP-to-HTTPS redirection and load balancing over two local application servers. It also serves static files directly and only passes non-file requests to the application servers.

(cachestaticfiles) {
    @staticFiles {
        file
    }
    header @staticFiles Cache-Control "public, max-age=604800, must-revalidate"
}

(railsdefaults) {
    #compress responses
    encode zstd gzip

    #redirect from http to https
    @http {
        protocol http
    }

    redir @http https://{host}{uri}

    @notStatic {
        not file
    }

    import cachestaticfiles
    reverse_proxy @notStatic localhost:{args.0} localhost:{args.1}
    file_server
}

example.com {
    root * /var/www/example.com/current/public
    import railsdefaults 10000 10001
}

If you are self-hosting, I think Caddy is a great match for all but the most exotic or demanding setups. I am definitely kicking myself for not checking out Caddy sooner: it could have saved me countless hours installing and maintaining HTTPS certificates and configuring lighttpd in general.


~ Sending audio over a network with ffmpeg

Fig: Stable Diffusion imagining a networked music performance.

This post describes how to send audio over a network using the ffmpeg suite. ffmpeg is the Swiss army knife for working with audio and video formats: a command line tool that supports almost all audio formats known to man and woman. It also supports streaming media over networks.

Here, we want to send audio recorded by a microphone, over a network to a single receiver on the other end. We are not aiming for low latency. Also the audio is going only in a single direction. This can be of interest for, for example, a networked music performance. Note that ffmpeg needs to be installed on your system.

The receiver – Alice

For the receiver we use ffplay, which is part of the ffmpeg tools. The command below instructs the receiver to listen for TCP connections on an arbitrarily chosen port, 12345. The \?listen part is important since it keeps the program waiting for new connections. For streaming media over a network the stateless UDP protocol is often used: when UDP packets go missing they are simply dropped, and if only a few packets are dropped this does not harm audio quality much. With TCP, missing packets are resent, which can cause delays and audio stutter. However, TCP is much easier to tunnel and the stuttering can be compensated with a buffer. With TCP it is also immediately clear whether a connection can be made; with UDP, packets are happily sent straight into the void and you need to resort to packet sniffing to know whether they actually arrive.

ffplay -nodisp -f mpegts tcp://0.0.0.0:12345\?listen

In this example we use MPEG-TS over a plain TCP socket connection. Alternatively, RTMP could be used (which also works over TCP). RTP, however, is usually delivered over UDP.

The shorthand address 0.0.0.0 is used to bind the port to all available interfaces. Make sure that you are listening to the correct interface if you change the IP address.

The sender – Björn

Björn, aka Bob, sends the audio. First we need to know which microphone to use. To that end, ffmpeg can list the available audio devices. In this example the macOS avfoundation system is used; for other operating systems there are similar provisions.

ffmpeg -f avfoundation -list_devices true -i ""

Once the index of the device is determined, the command below sends the incoming audio to the receiver (which should already be listening on the other end). The audio format used here is MP3, which can safely be encapsulated into MPEG-TS.

ffmpeg -f avfoundation -i ":1" -acodec libmp3lame -ab 196k -f mpegts tcp://192.168.x.x:12345

Note that the IP address 192.168.x.x needs to be changed to the address of the receiver. If both devices are on the same network, the incoming audio from Bob should now arrive at Alice's side.

The tunnel

If sender and receiver are not on the same network, Network Address Translation (NAT) and port forwarding might be needed. Alternatively, an SSH tunnel can be used to forward a local TCP connection to a remote location. In that case the sender sends the incoming audio to a local port, which the tunnel forwards to the receiver:

ffmpeg -f avfoundation -i ":1" -acodec libmp3lame -ab 196k -f mpegts tcp://127.0.0.1:12345

The connection to the receiver is then made with a local port forwarding tunnel. With ssh, TCP traffic on port 12345 is forwarded to the remote receiver via an intermediary (remote) host using the following command:

ssh -v -L 12345:192.168.x.x:12345 user@host -N 

~ Workshop on ESP32 microcontroller

On Saturday the eighth of April I gave a workshop on the ESP32 microcontroller at Newline, the yearly hackerspace conference of Hackerspace Ghent. The aim was to provide a hands-on introduction. The participants had to program the ESP32 to do the following:

At the start of the workshop I gave a presentation as an introduction.


~ TarsosLSH in a Photomosaic Web App

TarsosLSH is a Java library implementing Locality-sensitive Hashing (LSH), a practical nearest neighbor search algorithm for high dimensional vectors that operates in sublinear time. The open source software package is authored by me and is available on GitHub: TarsosLSH on GitHub.

With TarsosLSH, Joseph Hwang and Nicholas Kwon from Rice University created an Image Mosaic web application. The application chops an uploaded photo into small blocks. For each block, a color histogram is created and compared with an index of color histograms of reference images. Subsequently each block is replaced with one of the top three nearest neighbors, creating a mosaic. Since high dimensional nearest neighbor search is needed, this is an ideal application for TarsosLSH. The application somewhat proves that TarsosLSH can be used in practical applications, which is comforting.
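As a rough illustration of the idea, here is a brute-force Python sketch that matches a block to its closest reference histogram; TarsosLSH replaces the linear scan with locality-sensitive hashing to obtain a sublinear lookup:

# Sketch: match an image block to the reference image with the most similar
# colour histogram. Brute force for clarity; TarsosLSH does this with LSH.
import numpy as np

def colour_histogram(block, bins=4):
    """Histogram with 'bins' per RGB channel, flattened and normalised."""
    hist, _ = np.histogramdd(block.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.flatten() / hist.sum()

def nearest_reference(block, reference_histograms):
    """Index of the reference image whose histogram is closest (Euclidean distance)."""
    query = colour_histogram(block)
    distances = [np.linalg.norm(query - h) for h in reference_histograms]
    return int(np.argmin(distances))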


~ Spotify Music Quiz - With a Big Red Button

This post documents how to implement a simple music quiz with the metadata provided by Spotify and a big red button.

During my last birthday party, organized at Hackerspace Ghent, there was an ongoing Spotify music quiz. The concept was rather simple: if the music playing was created in the same year I was born, guests could press a big red button and win a prize! If they guessed incorrectly, they were publicly shamed with a sad trombone. Only one guess per song was allowed.

Below you can find a short video which shows the whole thing in action. The music quiz is simple: press the button if the song was created in 1984. In the video, at first a wrong answer is given, then a couple of invalid answers follow. Finally a good answer is given: the song “The Killing Moon” by Echo & the Bunnymen is from 1984! Woohoo!

Alright, this is what I used to implement the Spotify music quiz:

The Red Button

A nice big red button, whose main features are that it is red and big, serves as the main input device for the quiz. To connect the salvaged safety button to a computer via USB, an Arduino Nano is used. The Arduino is loaded with the code from the debounce tutorial, basically unchanged. Unfortunately, I could not fit the FTDI chip, which provides the USB connection, in the original enclosure. An additional enclosure, the white one seen in the pictures below, was added.

When pressed, the button sends a signal over the serial line. See below how the Ruby quiz script waits for, and handles, such an event.

Get Meta Data from the Currently Playing Song in Spotify

Another requirement for the quiz is the ability to get information about the song currently playing in Spotify. On Linux this can be done with the D-Bus interface for Spotify. The script included below returns information about the artist, album and title of the song. It also detects whether Spotify is running or not. Find it, fork it, use it, on GitHub.

On Mac OS X, Spotify can be controlled using AppleScript. A script on GitHub shows how to get metadata about the currently playing song in Spotify on Mac OS X.

#!/usr/bin/python
#
# now_playing.py
# 
# Python script to fetch the meta data of the currently playing
# track in Spotify. This is tested on Ubuntu.
 
import dbus
bus = dbus.SessionBus()
try:
  spotify = bus.get_object('com.spotify.qt', '/')
  iface = dbus.Interface(spotify, 'org.freedesktop.MediaPlayer2')
  meta_data = iface.GetMetadata()
  artistname = ",".join(meta_data['xesam:artist'])
  trackname = meta_data['xesam:title']
  albumname = meta_data['xesam:album']
  #Other fields are:
  # 'xesam:trackNumber', 'xesam:discNumber','mpris:trackid',
  # 'mpris:length','mpris:artUrl','xesam:autoRating','xesam:contentCreated','xesam:url'
  print str(trackname + " | " + artistname + " | " + albumname + " | Unknown")
except dbus.exceptions.DBusException:
  print "Spotify is not running."

The Quiz Ruby Script

With the individual parts in order, we now need some Ruby glue to paste it all together. The complete music quiz script can be found on GitHub. The main loop, below, waits for a button press. When the button is pressed, the sad trombone, ‘cheater’ or winner sound is played. The sounds are attached to this post.

while true do
  # Wait for a button press
  data = sp.readline
  # Fetch meta data about the currently
  # playing song
  result = `#{now_playing_command}`
  # Parse the meta data
  title,artist,album = parse_result(result)
  #Title is hash key, should be unique within playlist
  key = title
  if responded_to.has_key? key
    puts "Already answered: you cheater"
    play cheater
  elsif correct_answers.has_key? key
    puts "Correct answers: woohoo"
    responded_to[key]=true
    play winner
  else
    puts "Incorrect answer: sad trombone"
    responded_to[key]=true
    play sad_trombone
  end
end

~ Text to Speech to Speech Recognition - Am I Sitting in a Room?

This post is about a hack I did for the 2012 Amsterdam Music Hack Day. From the website:

The Amsterdam Music Hack Day is a full weekend of hacking in which participants will conceptualize, create and present their projects. Music + software + mobile + hardware + art + the web. Anything goes as long as it’s music related

The hackathon was organized at NiMK (Nederlands instituut voor Media Kunst) on the 24th and 25th of May. My hack tries to let a phone start a conversation on its own. It does this by speaking a text and listening to the spoken text with speech recognition. The speech recognition introduces all kinds of interesting permutations of the original text. The recognized text is spoken again, and so a dreamlike, unique, nonsensical discussion starts. It lets you hear what goes on in the mind of the phone.

The idea is based on Alvin Lucier's I Am Sitting in a Room from 1969, which is embedded below. He used analogue tape to generate a similar recursive loop. It is a better implementation of something I did a couple of years ago.

The implementation is done with Android and its APIs. Both speech recognition and text-to-speech are available on Android. Those APIs are used, and a user interface shows the recognized text. An example of a session can be found below:

To install the application you can download Tryalogue.apk or use the QR code below. You need Android 2.3 with voice recognition and TTS installed, as well as an internet connection. The source is also up for grabs.


~ Android Gingerbread 2.3.4 on LG GT540 Optimus

I have upgraded the operating system on my LG GT540 Optimus from the stock Android 1.6 to Android Gingerbread 2.3.4. I followed this upgrade procedure.

It is well worth it to spend some time upgrading the phone, especially from 1.6. Everything feels a lot faster and the upgraded applications, e.g. Gallery, are nicely improved.

The main reason I upgraded my phone is to get the open source accessory development kit (ADK) for Android working. I got the DemoKit application working after some time but need to do some more experiments to see if the hardware actually works: I am waiting for a USB Host Shield for Arduino. To be continued…



~ TwinSeats won Apps For Ghent!

Last Saturday Apps For Ghent was organized: an event to underline the importance of open data, following the example of, among others, Apps For Amsterdam and the New York City Big Apps. During the morning several organizations presented the data they had opened up; the afternoon was reserved for a contest. The goal of the contest was to work out a concept in a few hours and present it right away. The resulting prototype had to work at least partially and make use of (Ghent) open data.

Luk Verhelst and I presented TwinSeats there.

TwinSeats is a website / online initiative for getting to know new people. You share the same cultural interests with them and then attend a show together. By putting events at the centre, TwinSeats can look for exceptional 'culture buddies'. Members find those culture buddies thanks to a shared love for an artist, an attraction or any other leisure activity.

The prototype can meanwhile be found at TwinSeats.be. Mind you, it was thrown together in a few hours and is far from finished; the underlying concept is what matters.

Together with Wa Kank Doen by SumoCoders we were declared winners by the jury. On Monday an article about AppsForGhent appeared in De Standaard, with a mention of TwinSeats. The Apps For Ghent site of course also has something about TwinSeats, and the jury report can be found there as well. As befits this kind of event, there was also quite a bit of tweeting going on.

There is also an audience award attached to AppsForGhent; it will be handed out in a few weeks.


~ Find the MAC Address of your Android Device

Just a quick hint. If you ever need to find the MAC address of your Android device, and who doesn’t, check this file:

/sys/devices/virtual/net/wlan0

You can read the file with the Astro File Manager.


~ Remote Port Forwarding with Ubuntu 8.04 and OpenSSH 4.7

OpenSSH Logo

With this post I would like to draw attention to the fact that remote port forwarding with OpenSSH 4.7 on Ubuntu 8.04.1 does not work as expected.

If you follow the instructions of an SSH remote port forwarding tutorial, everything goes well until you want to allow everyone to access the forwarded port (not just localhost). The problem arises when binding the forwarded port to an interface. Even with GatewayPorts yes present in /etc/ssh/sshd_config, the following commands show that it goes wrong:

user@local$ssh -R 2222:localhost:22 user@remote
user@remote$sudo netstat -lntp #on the remote server
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp6       0      0 ::1:2222                :::*                    LISTEN

It listens only via IPv6 and only on localhost, not on every interface (as requested by defining GatewayPorts yes). The netstat command should instead yield this output:

user@local$ssh -R 2222:localhost:22 user@remote
user@remote$sudo netstat -lntp #on the remote server
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:2222            0.0.0.0:*               LISTEN

I do not really know where it goes wrong, but there is an easy workaround. By defining both

GatewayPorts yes
AddressFamily inet

in /etc/ssh/sshd_config, remote port forwarding works fine, but you lose IPv6 connectivity (due to the AddressFamily setting). Another solution is to use more up-to-date software: the bug is not present in Ubuntu 10.04 with OpenSSH 5.3 (I don't know whether it is an Ubuntu or OpenSSH bug, or even a configuration issue).

I have been struggling with this issue for a couple of hours and, with this blog post, I hope I can prevent someone else from doing the same.


~ Oneliner to Install ssh-copy-id on Mac OS X

ssh-copy-id is a practical bash script, installed by default on Ubuntu. The script is used to distribute public keys. The following oneliner makes it available on Mac OS X:

sudo bash < <( curl --silent http://0110.be/files/attachments/314/install-ssh-copy-id.bash )

This oneliner does three things:

  1. It copies ssh-copy-id from this website to /bin/ssh-copy-id.
  2. It makes sure that ssh-copy-id is executable, using chmod.
  3. There is no three

The install procedure needs superuser rights because it writes in the /bin folder. Executing scripts from untrusted sources with superuser rights is actually really, really, extremely dangerous. But in this case it is rather innocent.

The ssh-copy-id script is the one provided with Ubuntu and Debian, I assume it is GPL’ed. I have not modified it for Mac OS X but it seems to behave as expected. I have only tested the install script and behavior on 10.6.5, YMMV.


~ How to Develop for LG GT540 Optimus on Ubuntu

This post describes a crucial aspect of how to connect an Android phone, the LG GT540 Optimus, to an Ubuntu Linux computer. The method is probably similar on other UNIX-like platforms and with other phones.

To recognize the phone when it is connected via USB you need to create a udev rule. Create the file /etc/udev/rules.d/29.lg545.rules with the following contents:

SUBSYSTEM=="usb",ATTRS{idVendor}=="1004",ATTRS{idProduct}=="61b4",MODE="0666"

On the phone you need to enable debugging using the settings and (this is rather important) make sure that the “mass storage only” setting is disabled.

Rooting the device makes sure you have superuser rights. Installing the Android SDK is well documented.

Good luck!


~ OpenRD - A Low Power Server Running Debian on ARM

GuruPlug
This blog post comments on using the Marvell OpenRD SoC (System on a Chip) as a low power multipurpose home server.

The Hardware

The specifications of the OpenRD SoC are very similar to the better known SheevaPlug devices: it has 512MB DDR2 RAM, a 1.2GHz ARM processor and 512MB internal flash. To be more precise, the OpenRD is essentially a SheevaPlug in a different form factor. The main advantage of this form factor is the number of available connections: 7x USB, SATA, eSATA, 2x Gb Ethernet, VGA, audio, … which make the device a lot more extendable and practical as a multipurpose home server.

The Software

Thanks to the work of Dr. Martin Michlmayr there is a Debian port for the Kirkwood platform readily available. He even wrote a tutorial on how to install Debian on a SheevaPlug. Installing Debian on an OpenRD is exactly the same except for one important detail: the arcNumber variable.

Once Debian is installed you can apt-get or aptitude almost all the software you are used to: webserver, samba, ruby, …

The Alternatives


~ Doorhacking: Opening a Door With Your Cellphone

The problem: There is a group of people that want access to Hackerspace Ghent but there is only one remote to open the gate.

The solution: Build a system that reacts to a phone call by opening the gate if the number of the caller is whitelisted.

What you need:

The Hack: First of all, try to get caller ID working by following the Caller ID with Linux and Huawei e220 tutorial. If this works you can listen to the serial communication using pySerial and react to a call. The following Python code shows the wait_for_call method:

def wait_for_call(self):
  self.data_channel.open()
  # caller ID lines look like: +CLIP: "+324...",145,,,,0
  call_id_pattern = re.compile('.*CLIP.*"\+([0-9]+)",.*')
  while True:
    # read whatever is waiting on the serial port and check for a caller ID
    bytes = self.data_channel.inWaiting()
    buffer = self.data_channel.readline(bytes)
    call_id_match = call_id_pattern.match(buffer)
    if call_id_match:
      number = call_id_match.group(1)
      self.handle_call(number)

The handle_call method … handles the call.

The second thing needed is a way to send a signal from the BeagleBoard to the remote. Sending a signal from the BeagleBoard using Linux is really simple. The following bash commands initialize, activate and deactivate a GPIO pin.

echo 168 > /sys/class/gpio/export
echo "high" > /sys/class/gpio/gpio168/direction
echo "low" > /sys/class/gpio/gpio168/direction

~ Caller ID with Linux and Huawei e220

This is the scenario: you have a Huawei e220, a Linux computer and you want to react to a call from a set of predefined numbers, e.g. ordering a pizza when you receive a call from a certain number.

The Huawei e220 supports a subset of the AT commands; which subset is an enterprise secret of the Huawei company. So there is no documentation available for the device I bought, thanks Huawei. Anyhow, when you attach the e220 to a Linux machine you should get two serial ports:

/dev/ttyUSB0
/dev/ttyUSB1

To connect to the devices you can use a serial client. GNU Screen can be used as a serial client like this: screen /dev/ttyUSB0 115200. The first device, ttyUSB0, is used to control ttyUSB1, so to enable caller ID on the Huawei e220 you need to send this message to ttyUSB0:

AT+CLIP=1

To check for calls you should listen to ttyUSB1. A serial session for ttyUSB1 looks like:

^BOOT:44594282,0,0,0,6
^RSSI:18
RING
+CLIP: "+33499311152",145,,,,0
^BOOT:44594282,0,0,0,6

The RING and CLIP messages are the most interesting. The RING signifies an incoming call, the CLIP is the caller ID. The BOOT and RSSI are some kind of ping messages. The following Python script demonstrates a complete session that enables caller ID, waits for a phone call and prints the number of the caller.

#!/usr/bin/env python
import serial, re

command_channel = serial.Serial(
        port='/dev/ttyUSB0',
        baudrate=115200,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        bytesize=serial.EIGHTBITS
)
command_channel.open()
#enable caller id
command_channel.write("AT+CLIP=1" + "\r\n")
command_channel.close()

ser = serial.Serial(
        port='/dev/ttyUSB1',
        baudrate=9600,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        bytesize=serial.EIGHTBITS
)

ser.open()

pattern = re.compile('.*CLIP.*"\+([0-9]+)",.*')

while 1:
        buffer = ser.read(ser.inWaiting()).strip()
        buffer = buffer.replace("\n","")
        match = pattern.match(buffer)
        if match:
                number = match.group(1)
                print number

~ YIN Pitch Tracker in JAVA

To make Tarsos more portable I wrote a pitch tracker in pure Java using the YIN algorithm, based on the C implementation in aubio. The implementation also uses some code written by Karl Helgasson and Teun de Lange of the Jazzperiments project.
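The heart of YIN is small enough to sketch. Below is a rough Python illustration of the difference function, the cumulative mean normalised difference and the absolute threshold step, without the parabolic interpolation of the full algorithm; it is meant as an illustration, not as a replacement for the Java code.

# Sketch of the core YIN steps on a single analysis frame.
# No parabolic interpolation or best-local-estimate refinement.
import numpy as np

def yin_pitch(frame, sample_rate, threshold=0.15):
    n = len(frame) // 2
    # difference function d(tau) for lags 0..n-1
    diff = np.array([np.sum((frame[:n] - frame[tau:tau + n]) ** 2)
                     for tau in range(n)])
    # cumulative mean normalised difference, d'(0) = 1 by definition
    cmnd = np.ones(n)
    running = np.cumsum(diff[1:])
    cmnd[1:] = diff[1:] * np.arange(1, n) / np.maximum(running, 1e-12)
    # first lag below the threshold, otherwise the overall minimum
    below = np.where(cmnd[1:] < threshold)[0]
    tau = below[0] + 1 if len(below) else int(np.argmin(cmnd[1:])) + 1
    return sample_rate / tau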

The Java pitch tracker can be used to perform real-time pitch detection or to analyse files. To use it as a real-time pitch detector just start the JAR file by double clicking. To analyse a file execute one of the following: the first results in a list of annotations (text), the second shows the annotations graphically.

java -jar pitch_detector_yin.jar  flute.novib.mf.C5B5.wav
java -jar pitch_detector_yin.jar  --file flute.novib.mf.C5B5.wav

The provided flute sample is from The Musical Samples library of the University of Iowa and converted to mono wav. The source code of the pitch tracker can be found below.

Update: the Yin implementation in Java has been incorporated into the TarsosDSP project. An open source, Real-Time Audio Processing Framework in Java.


~ Boids 3D with Processing


~ Jobsopschool.be

Jobsopschool.be was developed by 0110.be for the Sperregem group of schools. The goal of the web application is to simplify the administrative hassle involved in finding and recruiting candidates for substitute teaching positions in education. Looking for vacancies in primary education? Then be sure to have a look at Jobsopschool.be.


~ Order Pizza with USB Pizza Button

Recently I bought a big shiny red USB-button. It is big, red and shiny. Initially I planned to use it to deploy new versions of websites to a server but I found a much better use: ordering pizza. Graphically the use case translates to something akin to:

If you would like to enhance your life quality leveraging the power of a USB pizza-button: you can! This is what you need:

  1. A PC running Linux. This tutorial is specifically geared towards Debian-based distros. YMMV.
  2. A big, shiny red USB button. Just google “USB panic button” if you want one.
  3. A location where you can order pizzas via a website. I live in Ghent, Belgium and use just-eat.be. Other websites can be supported by modifying a Ruby script.

Technically we need a driver to check when the button was pushed, a way to communicate the fact that the button was pushed and lastly we need to be able to react to the request.

The driver: on the internets I found a driver for the button. It was modified to make the driver process run as a daemon.

The communication: the original Python script executed another script on the local PC. A more flexible approach is possible using sockets: with sockets it is possible to notify any computer on a network.

import socket

# PanicButton comes from the USB button driver mentioned above;
# SERVER and SERVER_TCP_PORT are configuration constants.
if PanicButton().pressed():
  # create a TCP socket
  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  # connect to server on the port
  s.connect((SERVER, SERVER_TCP_PORT))
  # send the order (margherita at restaurant mario)
  s.send("mario:  [margherita_big]\n")

The reaction: a Ruby TCP server waits for a message from the driver. When a message arrives, it automates an HTTP session on a website, executing a series of HTTP GETs and POSTs using the mechanize library.

login_url = "http://www.just-eat.be/pages/member/login.aspx"
a = WWW::Mechanize.new
a.get(login_url) do |login_page|   
  #post login_form
  login_form = login_page.forms.first
  login_form.txtUser = "username"
  login_form.txtPass  = "password"
  a.submit(login_form, login_form.buttons[1])
end

Some libraries are needed. For Python you need the usb library; the Python daemons lib needs to be installed separately. Setuptools is needed to install the daemons package.

sudo apt-get install python-usb python-setuptools

Ruby needs RubyGems to install the mechanize and daemons libraries. Mechanize needs the libxslt-dev package. You also need the build-essential package to build mechanize.

sudo apt-get install rubygems libxslt-dev
sudo gem install mechanize daemons

To automatically start the daemons on boot you can use the crontab @reboot directive of the root user. E.g.:

@reboot /opt/pizza_service/pizza_daemon.rb
@reboot /opt/pizza_service/pizza_button_driver.py

~ Touchatag RFID reader and Ubuntu Linux

Touchatag Logo

This blog post is about how to use the Touchatag RFID reader hardware on Ubuntu Linux without using the Touchatag web service.

An RFID reader with tags can be used to fire events. With a bit of scripting the events can be handled to do practically any task.

Normally a Touchatag reader is used together with the Touchatag web service, but for some RFID applications the web service is just not practical, e.g. for embedded Linux devices without an internet connection. In this tutorial I will document how I got the Touchatag hardware working under Ubuntu Linux.

To follow this tutorial you will need:

The Touchatag USB reader works at 13.56MHz (high frequency RFID) and has a readout distance of about 4 cm (1.5 inch) when used with the Touchatag RFID tags. Internally it uses an ACS ACR122U reader with a SAM card. A Linux driver is readily available, so when you plug it in, lsusb should show something like this:

lsusb 

Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 004: ID 072e:90dd Advanced Card Systems, Ltd

lsusb recognizes the device incorrectly, but that is not a problem. To read RFID tags and respond to events, additional software is needed: tagEventor is a software library that does just that. It can be downloaded using an svn command:

svn export http://tageventor.googlecode.com/svn/trunk/ tageventor

To compile tagEventor, a couple of other software packages and header files should be available on your system. The tagEventor software dependencies are described on the tagEventor wiki. On Ubuntu (and possibly other Debian-based distros) the installation is simple:

sudo aptitude install build-essential libpcsclite-dev build-essential pcscd libccid
#if you need gnome support
#sudo aptitude install libgtk2.0-dev

Now the tricky part: two header files of the pcsclite package need to be modified (update: this bug is fixed, see here). Then tagEventor builds and can be installed:

cd tageventor
make
...
tagEventor BUILT (./bin/Release/tagEventor)

sudo ./install.sh
...

When tagEventor is correctly installed, the only thing left is … to build your application. When an event is fired, tagEventor executes the /etc/tageventor/generic script with three parameters (see below). Using some kind of IPC, an application can react to events. A simple and flexible way to propagate events (inter-process, over a network, platform and programming language independent) uses sockets. The code below is the /etc/tageventor/generic script (make sure it is executable); it communicates with the server, the second script. To run the server execute ruby /name/of/server.rb

#!/usr/bin/ruby

# $1 = SAM (unique ID of the SAM chip in the smart card reader if exists, "NoSAM" otherwise
# $2 = UID (unique ID of the tag, as later we may use wildcard naming)
# $3 = Event Type (IN for new tag placed on reader, OUT for tag removed from reader)

require 'socket'

data = ARGV.join('|')
puts data

streamSock = TCPSocket.new( "127.0.0.1", 20000 )
streamSock.send(data, 0)
streamSock.close
require "socket"  
dts = TCPServer.new('localhost', 20000) 
loop do  
   Thread.start(dts.accept) do |s|
     puts s.gets
     s.close  
   end  
end

The tagEventor software is made by the Autelic Association, a non-profit association dedicated to making technology easier to use for all. I would like to thank Andrew Mackenzie, the founder and president of the association, for creating the software and for the support.


~ Jobsopschool

Commissioned by the Sperregem group of schools, I built a website that makes finding candidates for short substitute teaching assignments run more smoothly. People interested in teaching vacancies in West Flanders can register on it.

The website has several advantages for the different schools in the group:

The website is also handy for aspiring teachers:

It is also a handy tool for the HR department: they can now keep a better overview of the vacancies and how they are filled across the different schools.

Below are some screenshots.


~ Vooruit.be relaunched

Vooruit Logo

Today the renewed Vooruit website was launched:

We offer you even more videos, photos, audio tracks and texts, and we have also extended your personal benefits. When you become a member of www.vooruit.be you can still fill in your calendar, make friends and post comments, but on top of that you get tailored recommendations, you can tip shows and you can send messages to friends *.

The personalized recommendation system was developed by Greet Dolvelde and myself as part of our thesis: Collaborative Filtering: Onderzoek & implementatie [pdf]. So what are you waiting for? Become a member, check the recommendations next to concerts and, above all, your personalized recommendations.

For the somewhat less enthusiastic clickers, below are some screenshots of the different kinds of recommendations on www.vooruit.be:


~ Moving to a VPS

VPS

You have probably noticed already: this site is now a lot faster. That is thanks to a move: 0110.be is now hosted on a VPS.

The virtual server runs Ubuntu 8.04 LTS Server as its operating system on a Xen hypervisor. The physical server itself contains eight Intel® Xeon® E5440 @ 2.83GHz CPUs.

The server is located in Amsterdam and is directly connected to the largest internet exchange in the world: AMS-IX.


~ SQL file with a list of all Belgian postal codes and cities

Logo de Post

From the list of postal codes of all Belgian cities I compiled an SQL file. The data contains the postal code itself, the name of the city, the name of the city in capital letters and a 'structure' field from which the municipality/sub-municipality relation can be derived when sorting on it. These, for example, are the sub-municipalities of Chimay.

6460   CHIMAY
6460        Bailièvre
6460        Robechies
6460        Saint-Remy (Ht.)
6460        Salles
6460        Villers-la-Tour
6461        Virelles
6462        Vaulx-lez-Chimay
6463        Lompret
6464        Baileux
6464        Bourlers
6464        Forges
6464        l'Escaillère
6464        Rièzes


Sorting can be done in PostgreSQL with this SQL instruction: order by translate(structure, ' ', 'z'). The SQL script itself is a list of INSERT INTO statements.

insert into cities(zipcode,name,up,structure)  VALUES ('1790','Affligem','AFFLIGEM','1790   AFFLIGEM');
insert into cities(zipcode,name,up,structure)  VALUES ('9051','Afsnee','AFSNEE','9051        Afsnee');
insert into cities(zipcode,name,up,structure)  VALUES ('5544','Agimont','AGIMONT','5544        Agimont');
...

This is the SQL file with a list of all Belgian postal codes and cities. Hopefully it is of use to someone someday.


~ Query Tool

Vooruit Logo

While working at the Vooruit Arts Centre I got the assignment to create a tool to query an Oracle database with ticketing data. There were a few requirements for the Query Tool; in the current version all of these are met:

By publishing the Query Tool on my website I hope that the fruits of my labour can be enjoyed by a wider audience. To see it in action you can give it a spin. A recent version, version 6, of the JRE is needed.

How Do I Use The Query Tool?

The program supports two ways to query a database:

The two buttons below are self-explanatory. When the “CSV Export” button is hit, a CSV file is created in a configured directory.

Depending on the complexity of a query it can take a long time before results are returned. Because the application is multithreaded the user interface remains responsive and the query can be stopped at any time.

The contents of the “Log” tab give you an idea of what the application does. When something goes awry while executing a query, a message appears in this tab.

The tab “Config” can be used to set configuration parameters. The tab “Help” contains… helpful information.

Screenshot

How Do I Add My Own Queries?

The list of predefined queries is constructed by iterating over SQL-files in a configured directory. Adding additional queries to the program is easy, just add an extra SQL-file to the directory. An SQL-file should have the following format, otherwise it is ignored:

TITLE
----
DESCRIPTION
----
SQL-INSTRUCTION with zero or more !{PARAMETERS}!

In the screen shot above this query is visible:

Select products in category
----
Select all the products in a category.
----
SELECT * FROM  
products WHERE categoryid = !{category}!  

To make the queries dynamic the Query Tool supports different kinds of parameters. A parameter has this form: !{type name}!, the name is optional. If there is a name specified it is used as a label in the interface, otherwise type is used. There are three types of parameters:

  1. Parameters that define a type. For each type a corresponding user interface is rendered. E.g. for the type string a text field is rendered. The supported types are:
    • !{string}!
    • !{boolean}!
    • !{double}!
    • !{date}!
    • !{integer}!
  2. Parameters for raw SQL. A textfield is rendered, the contents is directly injected in the SQL-query. It has this format: !{sql}!
  3. Parameters for lists. In the example above a list parameter is used. These lists are fetched from the database. E.g. a list of categories. The SQL-instruction and name of the list parameters can be configured.

If you want to use your own database you need to configure the database connection string. The program uses JDBC to connect to the database. It uses metadata provided by the JDBC layer. If your database has a JDBC driver with support for metadata the Query Tool will work correctly. The JDBC driver must be included in the classpath.

Credits

The Query Tool uses the famfamfam mini icons.

For demoing purposes the executable contains a lightweight hsql database. The data in the database is a modified version of the Microsoft Northwind database. The northwind hsql database is created with this SQL-script.

Downloads


~ Boids in Python

Python Logo

After watching the video below of a flock of starlings, I wondered whether those movements follow a certain algorithm and whether I could write a program that simulates this behaviour. After some research it turned out that nearly all animals that move in herds do so according to similar, relatively simple processes.



There are three basic rules that schools of fish, flocks of birds and herds of wildebeest, among others, follow:

  1. Avoid collisions with the nearest neighbours by steering away from them.
  2. Move in roughly the same direction, and at roughly the same speed, as the average of your neighbours.
  3. Move towards the centre of the group.

The paper Flocks, Herds, and Schools: A Distributed Behavioral Model (1987) by Craig W. Reynolds was the first to describe these rules formally. Using that documentation and a practical description I could start on an implementation. The boids implementation in Python uses pygame to represent a group of creatures as coloured squares. The creatures move according to the three rules above. In addition they try to stay within the visible frame and head towards its centre. To make things a bit more interactive, the mouse pointer is treated as a dangerous predator that likes nothing better than little squares, so the squares try to evade the predatory mouse. The sixth and final rule imposes a maximum speed, so the movements stay realistic.
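
Stripped of the pygame drawing code, the three basic rules boil down to something like the sketch below; the weights, distances and names are illustrative and not the exact values from my implementation.

import random

class Boid:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = random.uniform(-1, 1)
        self.vy = random.uniform(-1, 1)

def step(boids, separation=0.05, alignment=0.05, cohesion=0.005, max_speed=4.0):
    for b in boids:
        others = [o for o in boids if o is not b]
        if not others:
            continue
        n = len(others)
        # rule 1: steer away from neighbours that are too close
        for o in others:
            if abs(o.x - b.x) < 10 and abs(o.y - b.y) < 10:
                b.vx -= (o.x - b.x) * separation
                b.vy -= (o.y - b.y) * separation
        # rule 2: match the average velocity of the others
        b.vx += (sum(o.vx for o in others) / n - b.vx) * alignment
        b.vy += (sum(o.vy for o in others) / n - b.vy) * alignment
        # rule 3: move towards the centre of the group
        b.vx += (sum(o.x for o in others) / n - b.x) * cohesion
        b.vy += (sum(o.y for o in others) / n - b.y) * cohesion
        # final rule: cap the speed so the movement stays realistic
        speed = (b.vx ** 2 + b.vy ** 2) ** 0.5
        if speed > max_speed:
            b.vx = b.vx / speed * max_speed
            b.vy = b.vy / speed * max_speed
        b.x += b.vx
        b.y += b.vy

flock = [Boid(random.uniform(0, 640), random.uniform(0, 480)) for _ in range(30)]
for _ in range(100):
    step(flock)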

The current implementation is O(n²), while it should be O(nk), with k the size of the neighbour list. A smooth simulation of a swarm of thousands is therefore not possible at the moment. The calculations for an extra dimension are very easy to add; unfortunately, visualising the results is not. I tried the OpenGL bindings for Python, but that did not get me very far. This is the 3D version, but with a 2D visualisation.
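
One way to move towards O(nk) would be a simple spatial grid, so that a boid is only compared with the boids in its own and the neighbouring cells. A rough sketch, reusing the Boid class from above and with an arbitrary cell size:

from collections import defaultdict

def build_grid(boids, cell_size=25):
    # Bucket every boid by the grid cell its position falls in.
    grid = defaultdict(list)
    for b in boids:
        grid[(int(b.x // cell_size), int(b.y // cell_size))].append(b)
    return grid

def neighbours(boid, grid, cell_size=25):
    # Only the boids in the surrounding 3x3 block of cells.
    cx, cy = int(boid.x // cell_size), int(boid.y // cell_size)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            found.extend(grid.get((cx + dx, cy + dy), ()))
    return [b for b in found if b is not boid]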

For convenience I also made a Windows executable of it.


~ Comparing Ruby VMs

Ruby Logo

I wrote a B-Tree and a Red-Black tree in Ruby. To test those data structures I wrote a program that reads all the words from a large text into a b-tree, with the word as key and its frequency as value, and then uses a red-black tree as a priority queue with the frequency as key and the word as value. That way the most frequent words can be determined. The source code can be downloaded here.
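
The idea of the test, sketched in Python with a plain dict and heapq purely to illustrate what is being measured; the Ruby program of course uses the B-tree and red-black tree implementations under test, and the file name below is a placeholder.

import heapq
import re

def most_frequent(path, top=10):
    # word -> frequency (the Ruby version keeps this in the B-tree)
    counts = {}
    with open(path, encoding="utf-8") as text:
        for word in re.findall(r"[a-z']+", text.read().lower()):
            counts[word] = counts.get(word, 0) + 1
    # frequency -> word (the Ruby version uses the red-black tree
    # as a priority queue for this step)
    return heapq.nlargest(top, counts.items(), key=lambda item: item[1])

print(most_frequent("large_text.txt"))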

The program is an ideal test for Ruby VMs: it is fairly intensive and varied. IronRuby, JRuby, Ruby 1.8 and Ruby 1.9 were tested on an Intel Core 2 Duo E6660 and these are the results:

VM Duration Memory VM details
JRuby 28.79 sec 162MB jruby 1.1.3 (ruby 1.8.6 patchlevel 114) (2008-07-20 rev 7243) [x86-java]
IronRuby 88.15 sec 195MB IronRuby 1.0.0.1 on .NET 2.0.50727.1433
Ruby 1.8 104.1 sec 102MB ruby 1.8.6 (2007-09-24 patchlevel 111) [i386-mswin32]
Ruby 1.8 66.8 sec 96MB ruby 1.8.6 (2007-09-24 patchlevel 111) [universal-darwin9.0]
Ruby 1.9 33.42 sec 88MB ruby 1.9.1p0 (2009-01-30 revision 21907) [i386-darwin9.2.0]

The differences are thus very large, both in memory usage and in duration. Ruby 1.8 is apparently very slow but uses relatively little memory. JRuby is three times faster in this test but uses more memory. IronRuby is also faster than the standard Ruby VM but uses just under twice the memory. It should be said that IronRuby is an alpha build, so these results may still change considerably.

Ruby 1.9 was tested later on Mac OS X, on the same PC. The new Ruby does seem to live up to some of its promises. For comparison, the Ruby 1.8 VM optimised for Mac OS X that ships with the operating system was also tested.


~ Bash Script to Backup Remote Postgres Databases via Cron with Password Authentication

PostgreSQL Logo

I have modified a bash script to back up PostgreSQL databases; this is the original script. The modified version can be used to back up databases on a remote or local database server. This script also does not need a trust relationship but uses a login and password instead. To get started you need to:

  1. Modify the directory and database variables to suit your needs.
  2. Add an entry to crontab to perform the backups nightly or whenever you wish.
  3. Have fun.

The script empties ~/.pgpass and writes login info for the system databases. Then it logs in and fetches an up-to-date list of databases. For every database an entry is made in ~/.pgpass and every database is backed up. The results are logged to $logfile.
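
Roughly the same flow in Python, only to make the steps explicit; the actual script is plain bash, and the host, credentials and paths below are placeholders.

import subprocess
from pathlib import Path

HOST, PORT, USER, PASSWORD = "db.example.com", "5432", "postgres", "secret"
BACKUP_DIR = Path("/var/backups/postgres")
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

# 1. (re)write ~/.pgpass so psql and pg_dump can log in without a prompt
pgpass = Path.home() / ".pgpass"
pgpass.write_text(f"{HOST}:{PORT}:*:{USER}:{PASSWORD}\n")
pgpass.chmod(0o600)  # PostgreSQL refuses a world-readable .pgpass

# 2. fetch an up-to-date list of databases
databases = subprocess.run(
    ["psql", "-h", HOST, "-p", PORT, "-U", USER, "-At",
     "-c", "SELECT datname FROM pg_database WHERE NOT datistemplate;"],
    capture_output=True, text=True, check=True).stdout.split()

# 3. back up every database
for db in databases:
    with open(BACKUP_DIR / f"{db}.sql", "w") as out:
        subprocess.run(["pg_dump", "-h", HOST, "-p", PORT, "-U", USER, db],
                       stdout=out, check=True)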


~ Collaborative Filtering: Onderzoek & implementatie

Vooruit Logo

Yesterday the finishing touches were put on the thesis on collaborative filtering (CF) that Greet Dolvelde and I have been working on for a year. If you want to know more about it, you can download the work Collaborative Filtering: Onderzoek & implementatie [pdf]. The finest details of several CF approaches are laid out in it at length. The poster should make clear what the thesis is actually about:

Poster for Collaborative Filtering: Onderzoek & implementatie

The poster is also available in pdf format.


~ Genetic algorithm in Python

Python Logo

On Monday I have an exam on A.I., which covers, among other things, genetic algorithms. To see that principle in action I wrote a simple program in Python: a number of creatures (small squares) live in an environment. Creatures that stand out, white squares are easy to spot on a black background, get devoured. The creatures that stand out less survive, mutate or reproduce. Survivors live on for another generation. When mutating, the skin colour changes randomly. When reproducing, a child is created whose colour is the average of its parents' skin colours. Once most of the creatures have acquired good camouflage, the background colour can be changed and everything starts over.
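
The core of the program, without the pygame drawing code; the selection ratio and mutation chance below are illustrative, not the exact values from my program.

import random

BACKGROUND = 0   # black background; a creature is a greyscale value 0..255

def fitness(colour):
    # Camouflage: the closer to the background colour, the better.
    return 255 - abs(colour - BACKGROUND)

def next_generation(population, keep_ratio=0.5, mutation_chance=0.1):
    # the most conspicuous creatures get devoured
    survivors = sorted(population, key=fitness, reverse=True)
    survivors = survivors[: int(len(population) * keep_ratio)]
    children = []
    while len(survivors) + len(children) < len(population):
        if random.random() < mutation_chance:
            # mutation: a random new skin colour
            children.append(random.randint(0, 255))
        else:
            # reproduction: the child's colour is the average of its parents'
            a, b = random.sample(survivors, 2)
            children.append((a + b) // 2)
    return survivors + children

population = [random.randint(0, 255) for _ in range(50)]
for generation in range(20):
    population = next_generation(population)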

Screenshot of the genetic algorithm.

This is the source code of the program; it only works with greyscale values. There is also a Windows executable. The .exe was made with PyInstaller. The background colour can be changed by clicking on it. This is the source code of the version with colour.


~ Text To Speech Recognition

Python Logo

To get to know Python a bit I wrote a “Text To Speech Recognition” program. It calls SAPI 5.1 to have a text read aloud by Microsoft Sam. The spoken text is then immediately recorded through the microphone and Sam tries to understand it himself via Speech Recognition. The result of the speech recognition is then read aloud by Sam again, and so on… This is an example of Sam in dialogue with himself:

I am sitting in a room different from the one you are in now. I am recording the sound of my speaking voice and I am going to play it back into the room again.

I’m sitting in a room different from the one U.N. NA I’m recording the sound of my speak English and I’m going to play it back into the room against

I’m sitting in a room different from the one you could in a LAN recording the sound of my speak English and I’m going to clamp back into the room against

I’m sitting in a room different from the one you put in a LAN recording the sound and I speak English and I’m going to clamp back into the room against

I’m sitting in a room different from the one you put in a LAN recording the sound and I speak a Mac into ghent

I’m sitting in a room different from the one you put in a LAN recording the sound and I speak a match into ghent

The source code can be found here.
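
For the text-to-speech half, SAPI can be driven from Python through its COM interface with the pywin32 bindings; a minimal, Windows-only sketch (the recording and recognition half is left out here):

# Windows only; requires the pywin32 package for the win32com bindings.
import win32com.client

voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Speak("I am sitting in a room different from the one you are in now.")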


~ Studium Generale: Het Vergeten Van Het Geheugen

For the course Studium Generale I wrote a paper. The paper has to be a critical reflection on five attended lectures. This year the theme was:

This academic year the studium generale of Hogeschool Gent will reflect on the paradoxical workings of memory and forgetting in today's world. The massive storage of the collective memory, and the act of making it accessible, also appears to offer critical added value to only a minority; the growth of historical awareness and critical inventorying does not go hand in hand with a growth in insight.

In this paper the attention is focused on one aspect of the wide range of possible subjects: the influence of rhetorical techniques and the philosophy of dramatism on the formation of the collective memory. The various lectures are tied to that central theme. The paper can be downloaded as a pdf and the LaTeX source files are also available:


~ Internship at arts centre Vooruit

Vooruit Logo

Arts centre Vooruit recently launched a new site. Coupled to the site is a community section where users can create a profile and place events on a personal wishlist. They can also buy tickets for performances there, and users can establish relationships between themselves and their friends.

Based on that data and the data in the back-office system, it should be possible to build a cultural profile of the users and give them personalised, relevant tips. The advantages of such a Customer Intelligence system are numerous:

And that C.I. system is what we will be developing next year. There will be an extensive study of how to do it, after which an implementation will be coupled to the website, which is developed in Ruby on Rails.


~ Music taste in a graph

This is a graph on which you can see the evolution of my musical taste over the past two years. The thickness of a stream indicates the popularity during that period. At the bottom there is a small timeline. Click for the (very) large version.

Joren's personal music taste, in a single graph.


~ Sorting algorithms in C++

For the algorithms course we discussed a few sorting algorithms and implemented them in C++. This is my version of the algorithms; it uses a SortAlgorithm interface and the Strategy design pattern to do its work.

Sorting algorithms and the Strategy design pattern

In principle anything can be sorted, but some sorting algorithms (Counting Sort) only work with ints. To sort strings the Nstring class can be used.
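
Translated to a small Python sketch, purely to illustrate the Strategy pattern, it looks something like this; the real implementation is in C++ and the class and method names below are only illustrative.

from abc import ABC, abstractmethod

class SortAlgorithm(ABC):
    # The strategy interface: every algorithm sorts a list in place.
    @abstractmethod
    def sort(self, data): ...

class InsertionSort(SortAlgorithm):
    def sort(self, data):
        for i in range(1, len(data)):
            key, j = data[i], i - 1
            while j >= 0 and data[j] > key:
                data[j + 1] = data[j]
                j -= 1
            data[j + 1] = key

class Sorter:
    # The context: it only knows about the SortAlgorithm interface.
    def __init__(self, algorithm: SortAlgorithm):
        self.algorithm = algorithm
    def sort(self, data):
        self.algorithm.sort(data)
        return data

print(Sorter(InsertionSort()).sort([5, 1, 4, 2]))   # [1, 2, 4, 5]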

Every sorting algorithm can be tested and measured; this is the output for the shell sort algorithm with the Sedgewick increments, measured on random, sorted and reversed input of the given size:


Measuring sorting algorithm: Shell Sort: Sedgewick increments
                            Random           Sorted         Reversed
              128                0                0                0
              230                0                0                0
              414                0                0                0
              745                0                0                0
             1341                0                0                0
             2413                0                0                0
             4343                0                0                0
             7817                0                0                0
            14070             0.01                0                0
            25326             0.01                0             0.01
            45586             0.01             0.01             0.01
            82054             0.03             0.01             0.02
           147697             0.07             0.02             0.03
           265854             0.12             0.05             0.06
           478537             0.22             0.09             0.12
           861366             0.44             0.15             0.22

The code can be downloaded here: download. Not all algorithms work equally well; this is a list of the ones that do:


~ Summer job at Encima

I am currently working at Encima. Encima builds websites and other applications in Java. My first week is already over; I have been working on a module for www.weekendesk.com. On Monday the module goes live, together with the new version of the site. Weekendesk does the following:

Weekendesk.com is a B2C e-commerce site that sells weekend and day trips online, focusing primarily on the Belgian and Dutch markets. Weekendesk acts as an intermediary between the consumer and the organiser of the leisure activity. The website offers the customer all the necessary information about the activities in a fresh and well-organised way.
The activities are divided into two types: gift vouchers and weekend ideas. The price and availability of each activity are always up to date. Booking online is fast, simple and secure. Payment is possible by credit card or bank transfer.
Through an online content management module Weekendesk can manage all activities and the related information (description, photo album, prices, availability, promotions, ...). An order management module allows them to follow up on orders online.
The suppliers (organisers) of the activities can also enter availability, prices and promotions through a private online module.
And it is exactly that module for the suppliers, the organisers (mostly hotels), that I put together.

~ 0110 Logo set

Below is a list of 0110 logos in different formats that can be used for various purposes.

Web:

Below is a list of ready-made logos for the web. For other sizes, backgrounds (transparency), colours or formats you can help yourself with the 0110 logo in Photoshop format.

Print

The logos below can be used by people who print things: on paper, with ink.


~ Edutainment paper: Imperceptible edutainment & mathematics

For BIT4, a course I am taking here, I wrote a paper on imperceptible edutainment and mathematics. Its aim is to show that edutainment is more than educational television or something like a flash game about the African swallow. I try to prove that three different games teach mathematics through play and are therefore edutaining. Right...:


~ Studium Generale: Smaakmakers

For the course Studium Generale I wrote a paper. The paper has to be either a critical reflection on five attended lectures or an original take on the central theme. I chose the latter, and because the general theme is rather broad:

During the coming year we want to reflect on the question of how our much-praised individual freedom is faring in the light of the global networks that claim to deliver those freedoms.

I thought I would combine business with pleasure and write about something that interests me and that I vaguely know something about: music, the internet, and how taste is influenced by the internet. My paper is therefore titled Smaakmakers. It only has to be handed in on the 2nd of May, so I would appreciate any feedback. Below are links to the document in several formats:


~ What we are doing in Halmstad

Halmstad University

Mel and I are working on a system to support war games. War games are large-scale disaster exercises.

For example, a terrorist attack on a nuclear power plant. A scenario is drawn up: terrorists take employees hostage and threaten to blow the place up. Everyone who would be involved in real life responds to the situation: police, SWAT teams, the plant's own employees, and the media produce fake news reports.
During the simulation, questionnaires are used to gauge how well (or how badly) everything is going. Those questionnaires end up on a secure website that we are programming. That data then forms the basis for a report with the findings: what went well and what could be better.

We are using the ASP.NET 2.0 platform together with a SQL Server 2005 Express database and an object database: db4o. We are experimenting with the latter, including unit tests and load tests, which show that db4o lives up to its promises:

Embed db4o's native Java and .NET open source object database engine into your product and store even the most complex object structures with only one line of code. db4o slashes development cost and time, provides superior performance, and requires no DBA.

Apparently we are obliged to use an Access database. How 1994. Sigh.


~ Hello from Halmstad

We arrived here the day before yesterday (Monday). We took the plane from Brussels to Copenhagen and then the train to Halmstad. The trip went entirely according to plan; we arrived at 16.54 as scheduled. So there is not much exciting to report. Hereby: nothing exciting ;)

Since recently, three minutes ago to be precise, I have wireless internet in my room. That is practical, because a large part of our assignment is “build a website”. We can start on that assignment on Friday at the earliest: that is when we get a more extensive briefing on what we have to do. Yesterday we already got a rough outline during a lunch with our mentor.


~ Acceptance Letter received

Logo Halmstad University

This weekend I received a letter from the “School of Information Science, Computer and Electrical Engineering”. It went as follows:

LETTER OF ACCEPTANCE

With this letter I certify that the student Mr Joren Six is admitted as an exchange student to Halmstad University during the Spring semester 2006.



~ I am going to Halmstad

Last year I already applied to take part in an Erasmus/Socrates exchange programme, and I was immediately interested in going to Sweden. So from the beginning of March I will probably be found in Sweden, more specifically in Halmstad. At the “Högskolan Halmstad” I will be given an assignment comparable to an internship at a company.
Välkommen till Högskolan I Halmstad

~ Final project on display technologies: LCD, PDP & CRT

LCD screen

Together with Kris Welvaert and Dries Boone I made a final project on different display technologies. This is a pdf of the final project on LCD, PDP & CRT. The introduction makes clear what it is all about:

In the second half of the twentieth century, television and the computer completely conquered the world and changed it profoundly in many areas. Day after day, and in all parts of the world, people watch television; and day after day millions of computer monitors are in use across an immense range of very different applications.

Screens have become an unbeatable medium for displaying electronic information (text or graphics) and still or moving images. No other medium offers the versatility, speed and interactivity of a screen. The most important characteristics of a screen are its display quality and response time. Screens come in all kinds, sizes and weights. Depending on the target audience, certain display techniques dominate: for large groups, for example, projectors are used more than any other technique. The same goes for portable screens, where one mainly sees flat panel displays (FPDs); these can be subdivided further by technique, but they are always FPDs. For stationary screens for personal use there is a wide variety of display types, which are therefore subdivided into application areas such as head-mounted displays, wall-mounted screens, desktop monitors…

It would be impossible to cover all display principles adequately, so we limited ourselves to discussing three of them, namely CRT (Cathode Ray Tube), PDP (Plasma Display Panel) and LCD (Liquid Crystal Display), because these are currently the most common displays. Before those chapters we explain the properties of light and colour and the workings of the eye, in order to understand how the different techniques work. The final project also contains a chapter on modulation, to understand the transmitter modulation of television, and finally a short chapter on teletext.


~ first post :p

first post :d

~ db info

The site now runs on an Access 2002 database, but there is still a lot of work to do.