A Mobile Museum App
How to design an app for a large, heterogeneous audience?
After the initial kick-off meeting for the installation, we proposed the idea of a mobile app: an interactive strategy that would let visitors actually “play” the installation like a computer game and offer an immersive way to connect with the exhibition. The client was excited about developing an app as an interactive strategy and interface for the installation. We first conducted a feasibility study and chose to work with Node.js, Socket.IO and a simple web server for the client app. It was largely an efficient out-of-the-box solution and easy to implement, and Unity 5, the game engine we used for the installation, also provided a great deal of support. The app itself was to let users choose one of four possible actions, which a custom-made avatar of the artist would then perform on screen. We also added a competitive function for when multiple users interact with the app at the same time.
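To make the setup concrete, the flow described above can be sketched as a small piece of client-side logic: a button press is validated against the four available actions and packaged into the message the Node.js/Socket.IO relay would forward to the installation. The names (`makeActionMessage`, `ACTION_COUNT`, the `"action"` event) are illustrative assumptions, not the project's actual identifiers.

```javascript
// Assumed message-building step for the app-to-installation relay.
const ACTION_COUNT = 4; // the app offers one of four possible actions

function makeActionMessage(userId, actionId) {
  // Reject anything outside the four defined actions before sending.
  if (!Number.isInteger(actionId) || actionId < 1 || actionId > ACTION_COUNT) {
    throw new RangeError(`actionId must be between 1 and ${ACTION_COUNT}`);
  }
  return { user: userId, action: actionId, sentAt: Date.now() };
}

// With Socket.IO, the client might then emit such a message, e.g.:
//   socket.emit("action", makeActionMessage(myId, 2));
// and the server would forward it to the Unity installation.
```

Keeping validation in one place like this means the Unity side only ever receives well-formed action messages.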
Catching user input
After narrowing down the functionality to offer a clean and intuitive interface, one of the first questions was how to capture repeated user input. I remembered playing on a console as a kid and how I tormented the action buttons on the controller to make the game avatar jump faster or higher. I believed that the well-tested and thought-through XYAB action-button logic of Nintendo controllers would successfully inspire the future interface. Beyond that, I experimented with ways to differentiate the inputs, to add information about a user's location relative to the space, and to circumvent the inevitable language barrier.
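The "button mashing" behavior described above suggests one way the app might capture repeated input: count presses inside a sliding time window, so rapid tapping can translate into a more intense on-screen reaction. The following is a minimal sketch under that assumption; `MashCounter` and the one-second window are hypothetical, not taken from the project.

```javascript
// Hypothetical helper: count rapid, repeated presses of an action button.
class MashCounter {
  constructor(windowMs = 1000) {
    this.windowMs = windowMs; // sliding window, in milliseconds
    this.presses = [];
  }

  // Record a press at time `nowMs` and return how many presses
  // fall within the window, i.e. the current "mash intensity".
  press(nowMs) {
    this.presses.push(nowMs);
    this.presses = this.presses.filter(t => nowMs - t <= this.windowMs);
    return this.presses.length;
  }
}
```

The return value could then scale the avatar's reaction, much like holding or hammering a jump button on a console controller.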
One of the first insights from iterating on the user interface was that the XYAB system requires a certain familiarity with consoles, which cannot be assumed for every visitor, especially users aged 40 and above. Possible solutions would have been color-coding the action buttons, adding written commands, or using numbers and/or icons. In the end, the interface was informed by the exhibition itself. Another artist's project contained a great deal of echoing sound that was clearly audible throughout the space, so I decided to use onomatopoeic phrases as button descriptions, such as “Mmmmm, that's good”, “Ohhhh, I'm flattered” or “Ufff, ouch!”
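In code, this labeling decision might reduce to a simple mapping from action to onomatopoeic phrase. The three labels below are the ones quoted in the text; the identifiers and the lookup helper are illustrative assumptions, and the fourth action's label is not quoted in the original.

```javascript
// Assumed button configuration: onomatopoeic labels instead of XYAB
// letters, numbers, or icons (structure is hypothetical).
const BUTTONS = [
  { action: 1, label: "Mmmmm, that's good" },
  { action: 2, label: "Ohhhh, I'm flattered" },
  { action: 3, label: "Ufff, ouch!" },
  // a fourth action button existed as well; its label is not quoted here
];

function labelFor(actionId) {
  const button = BUTTONS.find(b => b.action === actionId);
  return button ? button.label : null;
}
```

Keeping the labels in data rather than hard-coding them into the UI would also make later wording changes, or a translated variant, trivial.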
The App in Action
How to enrich the user interface with multiple layers of information?
The four actions corresponding to the buttons were modeled after situations that would never be possible in reality: feeding burgers to overly slim statues, flirting with a person in a portrait, deconstructing geometric-abstract sculptures, or jumping on and stumbling over prized design objects. I added another layer of information to the action buttons by positioning them in the app according to the positions of the artworks they correspond to. I also wanted to let users know which button they themselves had pressed, and which actions and how many other users they were competing against.
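The competitive feedback described in the last sentence could be computed as a small summary over recent presses: how many other users are active, and which actions they chose. This is a sketch under assumed data shapes; `tallyCompetitors` and the `{ user, action }` record format are hypothetical.

```javascript
// Hypothetical helper: summarize what other users are doing, so each
// player sees which actions and how many competitors they are up against.
// `presses` is assumed to be a list of { user, action } records.
function tallyCompetitors(presses, myUserId) {
  const byAction = {};    // actionId -> number of competing users
  const seen = new Set(); // count each competing user only once
  for (const p of presses) {
    if (p.user === myUserId || seen.has(p.user)) continue;
    seen.add(p.user);
    byAction[p.action] = (byAction[p.action] || 0) + 1;
  }
  return { competitors: seen.size, byAction };
}
```

The server could broadcast such a summary to all connected apps whenever the set of recent presses changes.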