HiFi

HiFi is an Audio/Visual radio powered by SoundCloud and Tumblr. You can visit HiFi online here.

You can check out the code for the project here.

It allows the user to play sounds from a series of popular genres, as well as from their own sound library, or their favorite sounds. Plug HiFi into some speakers and a projector at a venue or at your next party and let the tunes and imagery unfold.

The original idea was to create an image playlist relating to the genre of music selected from SoundCloud. My thought was that certain genres could relate to various tags, which I would correlate based on my own experience, and then these tags would be used to request images from the Tumblr API.

I quickly learned that the Tumblr API was not as versatile as I had hoped, and searching by tag was barely supported. Only 20 images would be returned from each request, and offset was not an available parameter. I was able to work around this by navigating back by timestamp to retrieve more images.
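A minimal sketch of that workaround, assuming the Tumblr v2 /tagged endpoint and a registered API key (the key, tag, and function names are placeholders rather than the actual HiFi code):

```python
import requests

API_KEY = "YOUR_TUMBLR_API_KEY"  # placeholder
TAGGED_URL = "https://api.tumblr.com/v2/tagged"

def fetch_gif_urls(tag, pages=5):
    """Walk backwards through tagged posts, 20 at a time, by timestamp."""
    urls, before = [], None
    for _ in range(pages):
        params = {"tag": tag, "api_key": API_KEY}
        if before:
            params["before"] = before  # resume from the oldest post seen so far
        posts = requests.get(TAGGED_URL, params=params).json().get("response", [])
        if not posts:
            break
        for post in posts:
            for photo in post.get("photos", []):
                url = photo["original_size"]["url"]
                if url.endswith(".gif"):
                    urls.append(url)
        before = posts[-1]["timestamp"]  # the hack: page back in time
    return urls
```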

As I experimented, I found that gifs produced the most interesting results. So rather than using a bunch of different tags, I set the program up to make a series of requests to gather and randomize a collection of gifs. This could be further developed to gather gifs that also have other distinguishing tags, but for this prototype I decided to keep it simple.

The design is minimal, with the main interface consisting of a series of buttons for sound selection, and a viewfinder that appears once the images and sound are loaded. Additionally, I have added keyboard shortcuts so that the app can be controlled while it is in full-screen mode. Those are as follows:

  • Shift + Right Arrow – Next Sound
  • Shift + Left Arrow – Previous Sound
  • Shift + Up Arrow – Volume Up
  • Shift + Down Arrow – Volume Down
  • F – Full Screen
  • Esc – Exit Full Screen
  • K – List Keyboard Shortcuts

To begin, the user can connect to SoundCloud so that sounds from their library can be used. It is also possible to continue without connecting to SoundCloud, but in that case only the genre selections will be available. As a prototype, I kept it to a basic one-page app and leveraged jQuery for hiding and showing elements during the connection flow.

Originally, I wrote much of the logic on the front end in JavaScript, querying the Tumblr API to retrieve images and embed them into the page, but this approach had its limits. The load time was significant, and only a small number of images could realistically be collected during each request. I decided to migrate the logic to the backend, which is built with Python and Django. I wrote a script that gathers the images and saves them to the application’s database, and set up a cron job to run it once a day so that the latest gifs from Tumblr are collected daily. The front end now randomly selects images from the database to embed in the page. This allows the app to become more interesting over time as the variety of images grows.
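In rough terms, the backend looks something like the sketch below. The model, script, and view names are illustrative placeholders rather than the actual HiFi schema, and it reuses the fetch_gif_urls helper sketched above:

```python
# models.py -- an illustrative schema, not the actual HiFi codebase
from django.db import models

class Gif(models.Model):
    url = models.URLField(unique=True)
    added = models.DateTimeField(auto_now_add=True)


# gather_gifs.py -- invoked once a day by cron (e.g. as a custom management command)
def gather():
    for url in fetch_gif_urls("gif"):        # the Tumblr helper sketched above
        Gif.objects.get_or_create(url=url)   # skip anything already collected


# views.py -- the front end requests a random batch of gifs to embed
import random
from django.http import JsonResponse

def random_gifs(request, count=30):
    urls = list(Gif.objects.values_list("url", flat=True))
    return JsonResponse({"gifs": random.sample(urls, min(count, len(urls)))})
```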

I have tested the app successfully on Chrome, Firefox, and Safari. There are a number of ways the project could be developed further, but I believe this prototype creates an interesting experience and demonstrates the potential for further combinations of online sound and visual media.

GIF Launcher

I’ve been getting back into Max/MSP/Jitter recently. I’ve specifically been exploring the potential for image processing using Jitter, and while working with importing and exporting images, I had the idea to make an image-launching patch that would function similarly to Modul8 or Ableton Live. The grid on the left of the patch contains 9 matrices that are populated with gifs from a directory of your choice on your hard drive. Moving the slider to the right flips through the various images within that folder. There are transparent buttons on the grid so that when you click one of its cells, that particular image is launched in the main window, which can be made full screen on a separate monitor or projector.

At the moment, this is mainly a proof of concept. I might expand on it by building out some image processing effects and allowing for layering of the images. After doing so much text-based programming, Max definitely feels a lot easier than it did when I first tried to pick it up. I’ve been having fun with it, and look forward to getting more familiar with the Jitter objects and building some new projects.

Cura

Cura is a content curation and discovery platform that I have been developing.

The current focus is visual media, but the underlying technology is general enough to support various other types of content aggregation. The application works by crawling other websites and gathering data according to rules defined within the codebase. There is some basic filtering to ensure that the images are of high quality before they are added to the database. In addition to this predefined filter, I am also working on a variety of sorting options that will be available to the user. Some ideas include content type (image, gif, sound, etc.), primary color (if visual), size, and tags.
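The exact filtering rules live in the codebase, but as a rough illustration, a minimum-resolution check before saving an image might look like this (the thresholds and function name are hypothetical):

```python
from io import BytesIO

import requests
from PIL import Image  # Pillow

MIN_WIDTH, MIN_HEIGHT = 500, 500  # hypothetical thresholds

def passes_quality_filter(url):
    """Reject images that are too small to be worth curating."""
    response = requests.get(url, timeout=10)
    image = Image.open(BytesIO(response.content))
    width, height = image.size
    return width >= MIN_WIDTH and height >= MIN_HEIGHT
```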

The application is nearing the initial launch phase, and I am excited to get it online so people can start exploring. I have already discovered so many amazing images.

While developing this, I have thought of some interesting ways the platform can continue to be expanded, and I am excited to keep building.

Update: I have since launched the site! You can view the current iteration at www.cura.io 

A Little Rest

The other day my friend Sam told me about a blog he’s been keeping for the past couple of years. Over time, he had posted thousands of images collected from all around the web to document things that he found inspiring in one way or another. He approached me with a question along the lines of: would it be possible to grab all the images off of this blog, so that he could have them all in one place on his computer? This sounded like a fun thing to work out, so I ended up writing a small Python script that hits his blog and programmatically traverses each page, downloading all the images found on each one. It took some time, but after a couple of hours the program had retrieved all of the images except those with broken links. That totalled ~20,000 images at about 3.4 GB! You can check out the code at my GitHub. Above is a collage I created from a few of the images at the top of the download stack!
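The full script is on my GitHub; a condensed sketch of the approach looks roughly like this (the blog URL is a placeholder, and the page structure is assumed to follow the usual /page/N pattern):

```python
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BLOG = "http://example-blog.tumblr.com"  # placeholder, not the real blog
OUT_DIR = "downloaded_images"
os.makedirs(OUT_DIR, exist_ok=True)

page = 1
while True:
    html = requests.get(f"{BLOG}/page/{page}").text
    images = BeautifulSoup(html, "html.parser").find_all("img")
    if not images:
        break  # no more pages
    for img in images:
        src = img.get("src")
        if not src:
            continue
        src = urljoin(BLOG, src)
        filename = os.path.join(OUT_DIR, os.path.basename(src))
        try:
            with open(filename, "wb") as f:
                f.write(requests.get(src, timeout=30).content)
        except requests.RequestException:
            pass  # skip broken links, as the original run did
    page += 1
```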

Étude 0 – Phosphor

I am beginning a series of musical studies designed to challenge the way I compose and produce. My goal is to focus on very specific aspects of the process so that I can home in on elements that are difficult and, hopefully, transcend my current abilities. This approach will follow the “Less, but better” philosophy, aiming to avoid excessive audio sources or instrumentation. With any craft, it is easy to side-step areas of weakness and play into our strengths. I have found this especially problematic in my experience composing computer music, as there is always another sound you can drag in, another virtual instrument to insert, or another preset you could load up. This is a pitfall that has previously led me to create compositions that I felt were lacking a certain depth.

These studies will be short (probably within a minute or two) and will contain few elements. Some areas of focus will probably be instrumentation, audio effects, audio synthesis, automation, dynamics, and so on. As I continue the process, I might allow the studies to become more complex. We will see. This first study features 4 voices of the same additive synthesizer. The primary focus was dynamic automation. Currently, I feel as though I failed to achieve what I was aiming for in the short time I had to work on this, but the point is to force myself to fail until I might hopefully succeed.

XLR8R Downloader

 

One of my favorite sources for discovering new sounds is XLR8R. XLR8R is particularly awesome because they are constantly putting up music for free download. This music is often from up-and-coming artists, but well-regarded artists are also frequently featured. Unfortunately, living in NYC, I do not always have access to WiFi during commutes, which is when I usually enjoy checking out new music. So today I built a minimal tool for quickly grabbing the latest music from XLR8R. It takes the form of a standalone Mac OS X app built on top of a Python script that I wrote. The script hits the downloads page at XLR8R, parses the HTML, and retrieves the files from the download links. It does this for the latest 10 pages of free downloads, which is about 70 tracks; a little browsing suggests this equates to roughly the past month’s worth of music they have put up for free download. The files are all saved to the OS X Downloads directory. This lets me quickly grab some new music and load it onto my phone to get acquainted with during my travels. The images above show what is going on under the hood in the Terminal while I was building this out.
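The gist of the script is below. The URL pattern and link-matching rule are assumptions for illustration; the full version is in the repo:

```python
import os

import requests
from bs4 import BeautifulSoup

DOWNLOADS_DIR = os.path.expanduser("~/Downloads")

def grab_page(page_number):
    """Download every MP3 linked from one page of XLR8R's free downloads."""
    url = f"https://www.xlr8r.com/downloads/page/{page_number}"  # assumed pattern
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for link in soup.find_all("a", href=True):
        href = link["href"]
        if href.lower().endswith(".mp3"):
            filename = os.path.join(DOWNLOADS_DIR, os.path.basename(href))
            with open(filename, "wb") as f:
                f.write(requests.get(href).content)

for page in range(1, 11):  # the latest 10 pages, roughly a month of tracks
    grab_page(page)
```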

Building this out has given me a great idea for a mobile XLR8R app. I think it would be awesome to also grab the images and text from the XLR8R site to go along with the music. It would be cool if this content were pulled down once a week or so and stored on your mobile device (sort of like what Spotify does with their mobile app) until the new content is downloaded at the start of the next period. Hell, if I had more time, I’d take a personal interest in building something like that, but for now I think this little project has been a cool proof of concept that contains the potential for greater things.

All the code can be found at my GitHub. The standalone application is in the dist folder and is called xlr8r_downloads.app. To use it, simply click the Download ZIP link on the right side of the page, unzip the project, navigate to the dist folder, and double-click on xlr8r_downloads. From there, check your Downloads folder, and you should start to see new music appear before your eyes. Note: the program can take a few minutes to download all of the recent files. Finally, if you are a code-savvy individual and want to run this from the command line or modify it, I believe the only requirement outside of the built-in Python libraries is Beautiful Soup. I’d love to hear from anyone who finds a use for this.

Generating Processing Visuals In Max For Live

After completing the hardware side of building an Arduinome, I have been able to shift my focus to the custom software that was the original inspiration for the project. The idea behind the Arduinome is to provide a physical interface for modular devices. Being interested in interactivity between sound and visuals got me thinking that I could design unique ways to control both simultaneously. The bridge between Ableton and Max/MSP/Jitter provided by Max For Live immediately struck me as the perfect combination of tools for the kind of software I was interested in designing. M4L provides the user-friendliness of Ableton with the flexibility and extensibility of Max/MSP, and Jitter adds the possibility of visual programming.

One complication I anticipated was the difficulty of visual programming in Jitter, and I spent a couple of days considering whether that was the best route to take, especially since I am not very experienced with that side of Max. Eventually, I began looking for alternatives after realizing that I could likely get a similar visual output with methods I was more familiar with. I ended up turning to Processing, since I had recently used it on a couple of projects. For some time I have also been interested in using the Open Sound Control protocol for creative purposes, and this seemed like a great opportunity to employ it. I realized that I could use M4L as a bridge from Ableton to Processing. I began designing a patch that takes audio input from Ableton, extracts properties of the sound, converts those properties into OSC messages, and then sends them out over the network to a Processing sketch. After some time I was able to get this working as I had hoped. The patch is still in a rudimentary state, as I only extract three properties to send to a Processing sketch that accepts three messages, but it was the proof of concept I needed to continue pursuing this method.
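The actual bridge is a Max For Live patch on the sending side and a Processing sketch on the receiving side; purely to illustrate the shape of the OSC traffic, here is a tiny Python sender using python-osc, with made-up addresses and port:

```python
# Not the M4L patch itself -- just an illustration of three OSC messages like
# the ones the Processing sketch listens for. Addresses and port are made up.
from pythonosc import udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 12000)  # host and port of the sketch

# one message per extracted audio property
client.send_message("/audio/amplitude", 0.42)
client.send_message("/audio/pitch", 220.0)
client.send_message("/audio/brightness", 0.7)
```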

The next step is building the interactivity between the Arduinome and the Processing sketches. This turns out to be relatively involved because it requires building parts in the M4L patch as well as the visual elements that will be controlled on the Processing side. At first, I began to jump in and patch away, but then I realized that if I didn’t build the program to be modular, it would defeat the purpose. Eventually, the goal is to have the Arduinome set up in a coherent manner that will allow for intuitive control over sound and visuals. I am still hashing out the design of the system, and this will remain a work in progress.

Arduinome – Hello World!

It works! Here is the finished product of my Arduinome project. After ensuring that the circuitry was working properly, I began to focus on creating an enclosure to house the components. My original idea was to find a middle ground between organic and mechanical; some thoughts involved combining wood with either a metal or acrylic faceplate. After a bit of research, I went with a fully acrylic case for the time being, since I am no carpenter. The design was made in Illustrator, and I was fortunate to be able to take advantage of the laser cutter at ITP. The Arduino board with the attached shield screws in securely to the bottom plate, and there is plenty of room for the wiring to fit within the box. The USB port also lines up nicely so that it can be plugged in directly from the side. I am happy with the look and feel of the finished product. Eventually, I might switch out the sides and base for a wooden case, since wood is inherently more robust than acrylic, but for now it does an appropriate job of protecting the components while looking sleek. In the images you can see the Arduinome running with serialosc at the command line and Max/MSP. In the next post I will detail the current state of the software side of the project.

Arduinome:64

The concept of this project is to create a clone of the Monome using the open source Arduino platform. For those unfamiliar with the Monome, it is an adaptable, minimalist, grid-based controller originally designed for creating and performing electronic music. The Monome is typically released in sizes varying from 64 to 256 buttons, plus a very small run of a 512 edition. More information on the original can be found at http://monome.org/devices. Because Monome has embraced the open source movement, they have made the firmware for the controller available to the public to use, adapt, and share. This has enabled the firmware to be ported to the Arduino platform for others to continue to produce, build upon, and contribute to the community.

As a musician and composer, I have been interested in new musical interfaces, especially those that allow for electronic music production and performance. As an avid user of the audio production software Ableton Live (http://www.ableton.com), I originally gravitated towards the grid-based interface because it corresponds perfectly with the session/performance mode in Ableton. A similar controller I have used in the past is the Akai APC40, which in many ways replicates the functionality of the Monome. As I got deeper into computer music, I began to gain a greater appreciation for modularity in interface arrangement and became interested in the ways that one can create their own musical environment using Max/MSP, another electronic music/interactive arts environment (http://cycling74.com/). I also became aware of the Open Sound Control (OSC) protocol and the benefits it has over the current but dated MIDI standard. Recently, I’ve been searching for ways to take advantage of OSC in a practical sense for audio and interactive arts, and when Ableton teamed up with Max/MSP by creating Max For Live, it presented the perfect combination. On the hardware side, the Monome platform utilizes the OSC protocol, so cloning the controller using an Arduino microcontroller seemed like a great way to marry my interests into a cohesive interactive interface for physical computing.

To extend the project, I plan to use the adaptability of the interface to program my own software for mapping audio/visual scenes onto the grid. I am still in the experimental stage of software development, but I am looking to employ Max For Live to let the user of the controller trigger events that launch both audio and the corresponding visuals. I am currently researching the best ways to then have the audio itself interact with the visuals. My Arduinome will be built with 64 LED-backlit buttons. Below you will find some images of the current work in progress:

Flashing the Arduino in the Terminal using the DFU programmer

After the successful flash, the Arduino is now ready for the ported Monome Firmware

Physical Computing:: Media Controller:: Video Processing Box

I worked on this media controller as a collaborative project with fellow classmates Lauren Fritz and Cheryl Wu for the Physical Computing course at ITP. The controller is essentially a video processing box that applies various effects to the images captured by a camera connected to a computer.

For prototyping, we used the program in conjunction with a laptop camera. The video processing is done in the Processing language, which takes analog and digital input serially from a USB-connected Arduino Uno. The hardware interface is a project enclosure with 4 analog potentiometers and 2 digital switches. The potentiometers and switches correspond to the GUI sliders, which were also used for prototyping during the early stages of the project and during my recent trip to California when I was without access to my Arduino. I created the GUI using the ControlP5 library, which is a great set of open source controller tools. The first 3 pots control RGB values for color effects, and the fourth controls the block size for pixelation and multiple image capture. Because of difficulties programming serial digital output to Processing, the switches are currently not connected, but the idea was that they would allow the user to toggle video effects on and off.
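The real sketch is written in Processing, but just to illustrate the serial-parsing idea, here is a rough Python equivalent, assuming the Arduino prints comma-separated readings (the port name and scaling are placeholders):

```python
import serial  # pyserial

port = serial.Serial("/dev/tty.usbmodem1411", 9600)  # port name will vary

while True:
    line = port.readline().decode(errors="ignore").strip()
    try:
        r, g, b, block = (int(v) for v in line.split(","))
    except ValueError:
        continue  # skip partial or malformed lines
    # map 0-1023 pot readings onto the ranges the effects expect
    tint = (r * 255 // 1023, g * 255 // 1023, b * 255 // 1023)
    block_size = max(1, block * 40 // 1023)
    print(tint, block_size)
```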

The Processing and Arduino code can be found at my GitHub.

 


Design Failure: Meditating on the MTA Card Turnstile

The MetroCard turnstile is a piece of interactive technology used by millions of people per day, according to the MTA website (http://mta.info/nyct/facts/ridership/index.htm). The device has one simple function: allow subway riders to pay a fare and access the train platform. Ironically, it seems the simplicity of the transaction is too often convoluted by the mechanisms currently provided.

I see plenty of people every day struggling with these machines, missing their trains, running into turnstiles that do not grant access when expected, and even yelling out in frustration as they repeatedly swipe their pass.

Why is this? The process should be as simple as any payment transaction, yet the technology obscures it by introducing extraneous features that complicate matters. Furthermore, the stress level of New York commuters is often high, and this only adds to the failure of the device. This is a point that Donald Norman brings to attention in The Design of Everyday Things, where he notes that people interact with devices under different psychological conditions.

According to Norman, there are four primary components a system must provide for interaction to be successful:

  • Conceptual Model
  • Visibility
  • Mapping
  • Feedback

The conceptual model of the turnstile seems simple enough: swipe the pass and the gate should unlock, yet frequently enough the sequence of events does not unfold that way. The mapping of card to swipe machine is basic and comes as second nature to experienced users of the MetroCard, but it is easy to understand why many first-time users struggle to figure out the correct way to insert the card. This should not be the case. Users should be able to tell immediately which direction to face the card without having to read the fine print on the card for an explanation. Feedback on the device is provided by a very small LCD screen and a dismissive beep, neither of which is obvious to many users, especially when they are in a hurry to catch a train that is arriving at the platform. Finally, there is a total lack of visibility. Only with a certain amount of practice can one gauge the correct speed at which to swipe a pass. If the pass does not work, the LCD simply tells the user to swipe again; it could at the very least state that the swipe was too fast or too slow. The average person has no clear understanding of how the swipe machine works on the inside, so when it fails to grant access, they can only guess what went wrong.

Clearly, the design of the current machine is a failure on a variety of levels, but it is likely that too much has been invested in the current technology. There is no competition for subway service in New York City, so from the MTA’s perspective: “If it’s not broken, why fix it?”