There is something about surprising interfaces. Having a switch to turn on a light gets quite boring after a while. Turning on a light by clapping twice, on the other hand, has some kind of magic feel to it. In a recent Mr Beast video he and his gang visit a number of expensive houses, and in one of those mansions there is a light operated by clapping twice. I am not sure about the blatant materialism, but it got me thinking about how to build a similar clap-operated light yourself.
So, what are the elements needed? First, a microphone to pick up sound. Second, an algorithm that detects claps. And finally, something that reacts to the claps: a light or another device.
Many devices have microphones, so sound input is relatively easy, and with some creativity there are many things waiting to be ‘clap triggered’: vacuum robots, sunscreens, lights, in-house ventilation, … The main difficulty is implementing an efficient clap-detection algorithm. Luckily a few are already described in the literature. I have based my ANSI C implementation on ‘Duxbury, C., et al (2003). Complex domain onset detection for musical signals’.
My version of the clap-detection algorithm has two parameters which might need adapting to fit your environment. The silence threshold determines the minimum loudness for a clap to be registered at all. The onset threshold determines, more or less, how ‘percussive’ the sound needs to be: the idea is to react only to things that sound like a clap and not to, e.g., a loud whistle or other sustained sounds. You can try it out below:
Demo: click ‘start audio’ to capture your microphone and try to clap clearly twice. Lower the parameters if nothing happens.
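To make the two parameters concrete, here is a rough NumPy sketch of a complex-domain onset detector in the spirit of Duxbury et al. It is not the pector C code, and both default threshold values are invented and will need tuning:

import numpy as np

def clap_onsets(frames, silence_threshold=-60.0, onset_threshold=5.0):
    # frames: iterable of equal-length mono NumPy buffers (e.g. 512 samples);
    # both threshold defaults are invented and need tuning
    onsets, mags, phases = [], [], []
    for i, frame in enumerate(frames):
        spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
        mags.append(np.abs(spectrum))
        phases.append(np.angle(spectrum))
        if i < 2:
            continue
        # predict this frame from the previous two (constant magnitude,
        # linearly evolving phase) and measure the deviation from it
        predicted = mags[i - 1] * np.exp(1j * (2 * phases[i - 1] - phases[i - 2]))
        deviation = np.sum(np.abs(spectrum - predicted)) / len(spectrum)
        loudness = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
        if loudness > silence_threshold and deviation > onset_threshold:
            onsets.append(i)
    return onsets

A percussive clap deviates strongly from the prediction within a single frame, while a whistle or other sustained tone evolves smoothly and stays below the onset threshold.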
Clap detection on a micro-controller
With this working, we can now try to run the code on a micro-controller, which makes it more practical in daily use, e.g. to switch on lights. A low-cost ESP32 with a MEMS microphone is a good platform: these microcontrollers are easy to use and have WiFi connectivity, which opens the possibility of sending commands to smart sockets or other WiFi-enabled devices. The pector GitHub repository contains an Arduino project to run the clap-detection algorithm on an ESP32 or similar device (Teensy, RP2040, …).
Clap detection in the command line
Next to the main clap-detection software, there is a small script that triggers commands when a clap is detected. In this case, the script waits for a double clap and then pushes updates to a git repository. There are two reasons for this: the first is that it is fun, the second is bragging rights. Not that many people can say they once pushed source code simply by clapping twice. It is, however, a challenge to find people who have the patience to listen to me explaining what I have done and who are impressed by this feat, so maybe there is only one reason: it is fun. Below is a screen capture of code being pushed to the pector repository.
Vid: pushing code by clapping
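The script lives in the repository; the gist of it, sketched below in Python with an invented double-clap window, is to treat two detections close together as a double clap and then shell out to git:

import subprocess
import sys
import time

DOUBLE_CLAP_WINDOW = 0.7  # max seconds between the two claps; invented value

last_clap = 0.0
# assume the detector prints one line per detected clap, e.g.:
#   ./clap_detector | python clap_to_git.py
for _ in sys.stdin:
    now = time.time()
    if now - last_clap < DOUBLE_CLAP_WINDOW:
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", "pushed by clapping twice"], check=True)
        subprocess.run(["git", "push"], check=True)
        last_clap = 0.0  # require a fresh double clap for the next push
    else:
        last_clap = now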
Have a look at the pector GitHub repository for more info on how you can make your websites/apps/command line tools/devices clap controlled!
This post contains some info on how to do some basic home automation: it shows how cheap remote-controlled power sockets can be managed using a computer. The aim is to power lights, a stereo or other devices on or off remotely from a command shell.
The solution here uses an Arduino connected to a 433MHz transmitter. Via a Ruby script installed on the computer, a command is sent over serial to the Arduino. Subsequently the Arduino sends the command over the air to the power socket(s). If all goes well the power socket reacts by switching the connected device on or off.
The video below shows the process: the command-line interface controls the light via the Arduino. It should give the general idea.
The following Ruby script simply sends the binary control codes to the Arduino. For this type of power socket the code consists of a five-bit group code and a five-bit device code. The Arduino is connected to /dev/tty.usbmodem411.
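The Ruby script itself is not included in this archive; an equivalent pyserial sketch, with the group and device codes as examples, would be:

import time
import serial  # pyserial

# one operation character followed by the five bit group code
# and the five bit device code, 11 characters in total
arduino = serial.Serial("/dev/tty.usbmodem411", 9600)
time.sleep(2)  # opening the port resets the Arduino, give it a moment
arduino.write(b"1" + b"11011" + b"10000")  # example: switch on device 10000 in group 11011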
The code below is the complete Arduino sketch. It uses the RCSwitch library, which makes the implementation very simple. Essentially it waits for a complete command and transmits it through the connected transmitter, a TX433N.
#include <RCSwitch.h>

RCSwitch mySwitch = RCSwitch();

char command[12]; //2x5 for device and group + command
int index = 0;
char currentChar = -1;
//the led pin in use
int ledPin = 12;

void setup() {
  //start the serial communication
  Serial.begin(9600);
  //433MHz transmitter is connected to Arduino pin #10
  mySwitch.enableTransmit(10);
  //Led connected to led pin
  pinMode(ledPin, OUTPUT);
  Serial.println("Started the power command center! Mwoehahaha!");
}

void readCommand(){
  //read a command
  while (Serial.available() > 0){
    if(index < 11){
      currentChar = Serial.read(); //Read a character
      command[index] = currentChar; //Store it
      index++; //Increment where to write next
      command[index] = '\0'; // append termination char
    }
  }
}

void loop() {
  //read a command
  readCommand();
  //if a command is complete
  if(index == 11){
    Serial.print("Received command: ");
    Serial.println(command);
    char operation = command[0];
    char* group = &command[1];
    //group is 5 bits, as is device
    char* device = &command[6];
    //execute the operation
    doSwitch(operation, group, device);
    //reset the index to read a new command
    index = 0;
  }
}

void doSwitch(char operation, char* group, char* device){
  digitalWrite(ledPin, HIGH);
  if(operation == '1'){
    mySwitch.switchOn(group, device);
    Serial.print("Switched on device ");
  } else {
    mySwitch.switchOff(group, device);
    Serial.print("Switched off device ");
  }
  Serial.println(device);
  digitalWrite(ledPin, LOW);
}
This post documents how to implement a simple music quiz with the meta data provided by Spotify and a big red button.
During my last birthday party, organized at Hackerspace Ghent, there was an ongoing Spotify music quiz. The concept was rather simple: if the music that was playing was created in the same year as I was born, the guests could press a big red button and win a prize! If they guessed incorrectly, they were publicly shamed with a sad trombone. Only one guess was allowed for each song.
Below you can find a small video which shows the whole thing in action. The music quiz is simple: press the button if the song was created in 1984. In the video, first a wrong answer is given, then a couple of invalid answers follow. Finally a good answer is given: “The Killing Moon” by Echo & the Bunnymen is from 1984! Woohoo!
Alright, what did I use to implement the Spotify music quiz:
A big red Arduino button, attached by USB to a laptop.
A system with Spotify.
A way to access the meta data of the currently playing song in Spotify.
A Ruby script to connect all the parts and check answers.
The Red Button
A nice big red button, whose main features are that it is red and big, serves as the main input device for the quiz. To be able to connect the salvaged safety button to a computer via USB, an Arduino Nano is used. The Arduino is loaded with the code from the debounce tutorial, basically unchanged. Unfortunately, I could not fit the FTDI chip, which provides the USB connection, in the original enclosure. An additional enclosure, the white one seen in the pictures below, was added.
When pressed, the button sends a signal over the serial line. See below how the Ruby quiz script waits for and handles such an event.
Get Meta Data from the Currently Playing Song in Spotify
Another requirement for the quiz to work is the ability to get information about the song currently playing in Spotify. On Linux this can be done with the D-Bus interface of Spotify. The script included below returns the artist, album and title of the currently playing song, and it also detects whether Spotify is running. Find it, fork it, use it, on GitHub.
#!/usr/bin/python
#
# now_playing.py
#
# Python script to fetch the meta data of the currently playing
# track in Spotify. This is tested on Ubuntu.
import dbus

bus = dbus.SessionBus()
try:
    spotify = bus.get_object('com.spotify.qt', '/')
    iface = dbus.Interface(spotify, 'org.freedesktop.MediaPlayer2')
    meta_data = iface.GetMetadata()
    artistname = ",".join(meta_data['xesam:artist'])
    trackname = meta_data['xesam:title']
    albumname = meta_data['xesam:album']
    # Other fields are:
    # 'xesam:trackNumber', 'xesam:discNumber', 'mpris:trackid',
    # 'mpris:length', 'mpris:artUrl', 'xesam:autoRating',
    # 'xesam:contentCreated', 'xesam:url'
    print str(trackname + " | " + artistname + " | " + albumname + " | Unknown")
except dbus.exceptions.DBusException:
    print "Spotify is not running."
The Quiz Ruby Script
With the individual parts in order, we now need some Ruby glue to paste it all together. The complete music quiz script can be found on GitHub. The main loop, below, waits for a button press. When the button is pressed, the cheater, winner or sad trombone sound is played. The sounds are attached to this post.
while true do
  # Wait for a button press
  data = sp.readline
  # Fetch meta data about the currently playing song
  result = `#{now_playing_command}`
  # Parse the meta data
  title, artist, album = parse_result(result)
  # Title is hash key, should be unique within playlist
  key = title
  if responded_to.has_key? key
    puts "Already answered: you cheater"
    play cheater
  elsif correct_answers.has_key? key
    puts "Correct answer: woohoo"
    responded_to[key] = true
    play winner
  else
    puts "Incorrect answer: sad trombone"
    responded_to[key] = true
    play sad_trombone
  end
end
The Amsterdam Music Hack Day is a full weekend of hacking in which participants will conceptualize, create and present their projects. Music + software + mobile + hardware + art + the web. Anything goes as long as it’s music related
The hackathon was organized at the NiMK (Nederlands instituut voor Media Kunst) on the 24th and 25th of May. My hack tries to let a phone start a conversation on its own. It does this by speaking a text and listening to the spoken text with speech recognition. The speech recognition introduces all kinds of interesting permutations of the original text. The recognized text is spoken again, and so a dreamlike, unique, nonsensical discussion starts. It lets you hear what goes on in the mind of the phone.
The idea is based on Alvin Lucier’s I Am Sitting in a Room from 1969, which is embedded below. He used analogue tapes to generate a similar recursive loop. It is a better implementation of something I did a couple of years ago.
The implementation is done with Android and its APIs: both speech recognition and text-to-speech are available on Android. Those APIs are used, and a user interface shows the recognized text. An example of a session can be found below:
To install the application you can download Tryalogue.apk or use the QR-code below. You need Android 2.3 with Voice Recognition and TTS installed, as well as an internet connection. The source is also up for grabs.
I have upgraded the operating system on my LG GT540 Optimus from the stock Android 1.6 to Android Gingerbread 2.3.4. I followed this upgrade procedure.
It is well worth it to spend some time upgrading the phone, especially from 1.6. Everything feels a lot faster and the upgraded applications, e.g. Gallery, are nicely improved.
The main reason I upgraded my phone is to get the open source accessory development kit (ADK) for Android working. I got the DemoKit application working after some time, but I need to do some more experiments to see if the hardware actually works: I am waiting for a USB Host Shield for Arduino. To be continued…
Last Saturday Apps For Ghent was organized: an event to underline the importance of open data, following the example of, among others, Apps For Amsterdam and the New York City Big App. During the morning several organizations presented the data they had opened up; the afternoon was reserved for a contest. The goal of the contest was to work out a concept in a few hours and present it right away. The resulting prototype had to work at least partially and had to use (Ghent) open data.
Luk Verhelst and I presented TwinSeats there.
TwinSeats is a website / online initiative for getting to know new people. You share the same cultural interests and then attend this or that performance together. By putting events at the centre, TwinSeats can look for exceptional ‘culture buddies’. Members find those culture buddies thanks to a shared love for an artist, an attraction or any other leisure activity.
The prototype can meanwhile be found at TwinSeats.be. Mind you, this was thrown together in a few hours and is far from finished; the underlying concept is what matters.
This blog post comments on using the Marvell OpenRD SoC (System on a Chip) as a low-power multipurpose home server.
The Hardware
The specifications of the OpenRD SoC are very similar to those of the better known SheevaPlug devices: it has 512MB DDR2 RAM, a 1.2GHz ARM processor and 512MB internal flash. To be more precise, the OpenRD SoC is essentially a SheevaPlug in a different form factor. The main advantage of this form factor is the number of available connections: 7x USB, SATA, eSATA, 2x Gb Ethernet, VGA, audio, … which makes the device a lot more extendable and practical as a multipurpose home server.
The Software
Thanks to the work of Dr. Martin Michlmayr there is a Debian port for the Kirkwood platform readily available. He even wrote a tutorial on how to install Debian on a SheevaPlug. Installing Debian on an OpenRD is exactly the same except for one important detail: the arcNumber variable.
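The arcNumber tells the kernel which machine it is booting on and is set from the U-Boot prompt. A sketch of the idea; the machine ID below is, if memory serves, the one for the OpenRD-Base variant, so verify it against the kernel mach-types list for your board before saving:

setenv arcNumber 2325
saveenv
reset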
Once Debian is installed you can apt-get or aptitude almost all the software you are used to: webserver, samba, ruby, …
The problem: There is a group of people that want access to Hackerspace Ghent but there is only one remote to open the gate.
The solution: Build a system that reacts to a phone call by opening the gate if the number of the caller is whitelisted.
What you need:
A BeagleBoard or a BeagleBoard alternative with a Linux distribution running on it. Any server running a Unix-like operating system should be usable.
A Huawei e220 or an alternative GSM modem that supports (a subset of) the AT commands and has a USB port.
A team of hackers that know how to solder something together, e.g. the hardware guys of Hackerspace Ghent.
The Hack: First of all, try to get caller ID working by following the Caller ID with Linux and Huawei e220 tutorial. If this works you can listen to the serial communication using pySerial and react to a call. The following Python code shows the wait-for-call method:
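(The original listing did not survive in this archive; below is a minimal pySerial sketch of the idea, with the device path and whitelist as placeholders.)

import serial  # pyserial

def wait_for_call(whitelist):
    # watch the modem's control port; caller ID (+CLIP messages) is
    # assumed to be enabled already, as in the tutorial above
    modem = serial.Serial("/dev/ttyUSB0", 115200, timeout=10)
    while True:
        line = modem.readline().decode(errors="ignore").strip()
        if line.startswith("+CLIP:"):
            number = line.split('"')[1]  # the quoted caller number
            if number in whitelist:
                return number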
The second thing that is needed is a way to send a signal from the BeagleBoard to the remote. Sending a signal from the BeagleBoard using Linux is really simple. The following bash commands initialize, activate and deactivate a pin.
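The exact bash commands are not preserved here; a Python equivalent writing to the sysfs GPIO interface, with the pin number as a placeholder, looks like this:

# initialize: export the pin and make it an output
pin = "168"  # placeholder: the GPIO pin wired to the gate remote
with open("/sys/class/gpio/export", "w") as f:
    f.write(pin)
with open("/sys/class/gpio/gpio" + pin + "/direction", "w") as f:
    f.write("out")

def set_pin(value):
    # activate ("1") or deactivate ("0") the pin
    with open("/sys/class/gpio/gpio" + pin + "/value", "w") as f:
        f.write(value)

set_pin("1")  # press the remote's button
set_pin("0")  # release it again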
This is the scenario: you have a Huawei e220, a Linux computer and you want to react to a call from a set of predefined numbers, e.g. ordering a pizza when you receive a call from a certain number.
The Huawei e220 supports a subset of the AT commands; which subset is an enterprise secret of the Huawei company. So there is no documentation available for the device I bought, thanks Huawei. Anyhow, when you attach the e220 to a Linux machine you should get two serial ports: /dev/ttyUSB0 and /dev/ttyUSB1.
To connect to the devices you can use a serial client. GNU Screen can be used as a serial client like this: screen /dev/ttyUSB0 115200. The first device, ttyUSB0, is used to control ttyUSB1, so to enable caller ID on the Huawei e220 you need to send this message to ttyUSB0:
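The original session log is gone; the standard command to enable caller ID and the kind of unsolicited messages the modem then prints look roughly like this (the caller number is an example):

AT+CLIP=1
OK
^BOOT:23840461,0,0,0,6
^RSSI:16
RING
+CLIP: "+32475123456",145,,,,0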
The RING and CLIP messages are the most interesting: RING signifies an incoming call, CLIP is the caller ID. The BOOT and RSSI messages are some kind of ping. The following Python script demonstrates a complete session that enables caller ID, waits for a phone call and prints the number of the caller.
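The script itself is not reproduced in this archive; a minimal pySerial sketch of such a session, with the device path as above, would be:

import serial  # pyserial

# open the control port and switch on caller ID notifications
modem = serial.Serial("/dev/ttyUSB0", 115200, timeout=10)
modem.write(b"AT+CLIP=1\r\n")

# wait for a phone call and print the number of the caller
while True:
    line = modem.readline().decode(errors="ignore").strip()
    if line.startswith("+CLIP:"):
        print("Incoming call from " + line.split('"')[1])
        break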
To make Tarsos more portable I wrote a pitch tracker in pure Java using the YIN algorithm, based on the C implementation in aubio. The implementation also uses some code written by Karl Helgasson and Teun de Lange of the Jazzperiments project.
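The heart of YIN is only a few steps; a minimal NumPy sketch (not the Tarsos code) of the difference function, the cumulative mean normalization and the absolute threshold:

import numpy as np

def yin_pitch(frame, sample_rate, threshold=0.15):
    # frame is a mono float array; returns a pitch estimate in Hz or None
    n = len(frame) // 2
    # step 1: difference function d(tau)
    diff = np.array([np.sum((frame[:n] - frame[tau:tau + n]) ** 2)
                     for tau in range(n)])
    if diff[1:].sum() == 0:
        return None  # silence
    # step 2: cumulative mean normalized difference, with d'(0) = 1
    cmnd = np.ones(n)
    cmnd[1:] = diff[1:] * np.arange(1, n) / np.cumsum(diff[1:])
    # step 3: first lag below the absolute threshold
    candidates = np.where(cmnd < threshold)[0]
    if len(candidates) == 0:
        return None
    return sample_rate / candidates[0]

A real implementation refines the chosen lag with parabolic interpolation, as described in the YIN paper.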
The pitch tracker can be used to perform real-time pitch detection or to analyse files. To use it as a real-time pitch detector, just start the JAR-file by double clicking. To analyse a file, execute one of the following commands: the first results in a list of annotations (text), the second shows the annotations graphically.
Recently I bought a big shiny red USB-button. It is big, red and shiny. Initially I planned to use it to deploy new versions of websites to a server, but I found a much better use: ordering pizza. Graphically the use case translates to something akin to:
If you would like to enhance your life quality leveraging the power of a USB pizza-button: you can! This is what you need:
A PC running Linux. This tutorial is specifically geared towards Debian-based distros. YMMV.
A big, shiny red USB button. Just google “USB panic button” if you want one.
A location where you can order pizzas via a website. I live in Ghent, Belgium and use just-eat.be. Other websites can be supported by modifying a Ruby script.
Technically we need a driver to detect when the button is pushed, a way to communicate the fact that the button was pushed, and a way to react to that request.
The driver: on the internets I found a driver for the button. It was modified to run the driver process as a daemon.
The communication: the original Python script executed another script on the local PC. A more flexible approach uses sockets: with sockets it is possible to notify any computer on the network.
import socket

from panic_button import PanicButton  # hypothetical module name for the button driver

SERVER = "192.168.1.10"   # example: IP of the machine running the Ruby server
SERVER_TCP_PORT = 20000   # example port, must match the server

if PanicButton().pressed():
    # create a TCP socket
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # connect to server on the port
    s.connect((SERVER, SERVER_TCP_PORT))
    # send the order (margherita at restaurant mario)
    s.send("mario: [margherita_big]\n")
    s.close()
The reaction: a Ruby TCP server waits for messages from the driver. When a message arrives it automates an HTTP session on a website, executing a series of HTTP GETs and POSTs using the mechanize library.
login_url = "http://www.just-eat.be/pages/member/login.aspx"
a = WWW::Mechanize.new
a.get(login_url) do |login_page|
  # post login_form
  login_form = login_page.forms.first
  login_form.txtUser = "username"
  login_form.txtPass = "password"
  a.submit(login_form, login_form.buttons[1])
end
Some libraries are needed. For Python you need the usb library, and the Python daemons lib needs to be installed separately. Setuptools are needed to install the daemons package.
Ruby needs rubygems to install the needed mechanize and daemons libraries. Mechanize needs the libxslt-dev package. You also need the build-essential package to build mechanize.
This blog post is about how to use the Touchatag RFID reader hardware on Ubuntu Linux without using the Touchatag web service.
An RFID reader with tags can be used to fire events. With a bit of scripting the events can be handled to do practically any task.
Normally a Touchatag reader is used together with the Touchatag web service, but for some RFID applications the web service is just not practical, e.g. for embedded Linux devices without an Internet connection. In this tutorial I will document how I got the Touchatag hardware working under Ubuntu Linux.
To follow this tutorial you will need:
Touchatag hardware: the USB reader and some tags
An Ubuntu Linux computer (I tested 9.10 Karmic Koala and 8.04)
SVN to download source code from a repository
The touchatag USB reader works at 13.56MHz (High Frequency RFID) and has a readout distance of about 4 cm (1.5 inch) when used with the touchatag RFID tags. Internally it uses an ACS ACR122U reader with a SAM card. A Linux driver is readily available, so when you plug it in, lsusb should show something like this:
lsusb recognizes the device incorrectly, but that’s not a problem. To read RFID tags and respond to events, additional software is needed: tagEventor is a software library that does just that. It can be downloaded using an svn command:
To compile tagEventor a couple of other software packages or header files should be available on your system. The tagEventor software dependencies are described on the tagEventor wiki. On Ubuntu (and possibly other Debian-based distros) the installation is simple:
sudo aptitude install build-essential libpcsclite-dev pcscd libccid
#if you need gnome support
#sudo aptitude install libgtk2.0-dev
Now the tricky part: two header files of the pcsclite package need to be modified (update: this bug has been fixed, see here). tagEventor then builds and can be installed:
cd tageventor
make
...
tagEventor BUILT (./bin/Release/tagEventor)
sudo ./install.sh
...
When tagEventor is correctly installed, the only thing left is … to build your application. When an event is fired, tagEventor executes the /etc/tageventor/generic script with three parameters (see below). Using some kind of IPC, an application can react to events. A simple and flexible way to propagate events (inter-process, over a network, platform and programming language independent) uses sockets. The code below is the /etc/tageventor/generic script (make sure it is executable); it communicates with the server, the second script. To run the server execute ruby /name/of/server.rb
#!/usr/bin/ruby
# $1 = SAM (unique ID of the SAM chip in the smart card reader if it exists, "NoSAM" otherwise)
# $2 = UID (unique ID of the tag, as later we may use wildcard naming)
# $3 = Event Type (IN for new tag placed on reader, OUT for tag removed from reader)
require 'socket'
data = ARGV.join('|')
puts data
streamSock = TCPSocket.new( "127.0.0.1", 20000 )
streamSock.send(data, 0)
streamSock.close
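The server script itself is not included above; a minimal sketch of such a server (in Python here rather than the Ruby of the original) that listens on port 20000 and reacts to tag events:

import socketserver

class TagEventHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # the generic script sends "SAM|UID|EventType" and closes
        sam, uid, event = self.request.recv(1024).decode().split("|")
        if event == "IN":
            print("tag " + uid + " placed on the reader")
            # react to the event here: fire a script, toggle a light, ...

socketserver.TCPServer(("127.0.0.1", 20000), TagEventHandler).serve_forever()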
The tagEventor software is made by the Autelic Association, a non-profit association dedicated to making technology easier to use for all. I would like to thank Andrew Mackenzie, the founder and president of the association, for creating the software and for the support.
Commissioned by scholengroep Sperregem, I built a website that makes finding candidates for short-term substitutions run more smoothly. People interested in teaching vacancies in West Flanders can register on it.
The website has several advantages for the schools in the group:
Finding candidates is very easy: after entering a vacancy, a list appears of candidates with a suitable profile who are available during the vacancy.
E-mail or SMS messages can be sent to notify candidates of a vacancy.
Candidate profiles are always up to date: the candidates themselves are responsible for them, and candidates who have not been heard from for a long time are automatically set to inactive.
The history of candidates is kept automatically and can be looked up.
The website is also handy for aspiring teachers:
The vacancies are publicly visible, so candidates can actively apply.
They can manage their own profile and, for example, upload an updated version of their CV.
For every candidate a personalized list of vacancies is available (also via RSS), tailored to their profile.
It is also a handy tool for the personnel department, which can now keep a better overview of the vacancies and how they are filled across the different schools.