Monday 30 April 2012

Reducing abstraction hunger


It's a bit of a pilgrimage, out here into the forest, and we have few visitors other than family. So those who do make it over the mountains are guaranteed a demonstration of Palimpsest if they show any interest. Yesterday's guest, old friend Michelle Greenwood, was the first hands-on user apart from me and Elizabeth. I couldn't have asked for a more sympathetic third user - as an engineer and musician, Michelle's motivations are about as close to my own as could be managed. So whereas I advise students to find trial users as different from themselves as possible, for critical assessment of their interaction design, I have managed to avoid any critique beyond the blindingly obvious.

Nevertheless, there is clearly some work to be done, even in addressing the blindingly obvious. A first priority is to make a smoother transition between direct manipulation of shapes and geometric transformations of those shapes. At present, it is possible to rotate, translate or scale a shape either by dragging handles (as in a conventional direct manipulation drawing editor), or by specifying a geometric transform (which can then be adjusted by directly manipulating its parameters). However, the more powerful of the two - the transform layer - must be requested explicitly. Only after doing this can the user modify the transform under program control. Use of the direct manipulation handles is transient, changing the state of the object, but with no opportunity to reproduce or adjust that state change.

In CDs (Cognitive Dimensions) terms, this represents a combination of abstraction hunger and premature commitment. The user can either create abstractions routinely (thus allowing them to be adjusted at any time), or create them only when necessary - but the latter demands premature commitment, because the user must know in advance whether the abstract alternative will be necessary. Michelle was (unsurprisingly) unsure about the difference between the options, and confused by the implicit state changes between them. I can deal with this by saving the state changes of every direct manipulation as potential parameters for a transformation layer, and giving the user the option to create that layer.
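
Roughly the following sketch captures the fix I have in mind - the names are invented for illustration, not Palimpsest's actual classes: the drag handles accumulate their effect into a pending transform, which can then be promoted into the parameters of a transform layer on request.

    class PendingTransform {
        double translateX, translateY, rotation;
        double scale = 1.0;

        PendingTransform copy() {
            PendingTransform p = new PendingTransform();
            p.translateX = translateX;
            p.translateY = translateY;
            p.rotation = rotation;
            p.scale = scale;
            return p;
        }
    }

    class DirectManipulationRecorder {
        private final PendingTransform pending = new PendingTransform();

        // Called from the drag handles as the user manipulates the shape
        void recordDrag(double dx, double dy, double dRotation, double scaleFactor) {
            pending.translateX += dx;
            pending.translateY += dy;
            pending.rotation += dRotation;
            pending.scale *= scaleFactor;
        }

        // Offered after manipulation: the transient state change becomes the
        // parameters of an adjustable transform layer, rather than being lost
        PendingTransform promoteToLayerParameters() {
            return pending.copy();
        }
    }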

This actually returns to one of my first blog discussions - probably going back to last October. That was the point at which I created a "move" layer that was automatically generated in response to the user dragging any content. This turned out to be rather annoying, as move layers quickly accumulated, appearing rather "heavyweight" for minor adjustments (a form of abstraction hunger, even though the abstractions were created automatically, in programming-by-example style). As a result, I created the direct manipulation handles in December. Now I understand that the two should always have been combined, allowing transition from direct to abstract.

On the positive side, Michelle liked using Palimpsest, even in its prototype form. She said it was a lot of fun to play with, and immensely superior to her son's current experience as a student of introductory graphics programming, where it has taken him weeks of work to achieve the visual effects that she could explore within seconds. If I had any faith in Likert-scale evaluations of user experience, I could report an early confirmation sample!


Wednesday 25 April 2012

Manipulate, Automate, Compose


Margaret Burnett has been a great supporter of the Attention Investment model of abstraction use, in large part because it provided the motivation that led to her design strategy of Surprise, Explain, Reward, which has proven so valuable in the development of end-user debugging systems. After many weeks wrestling with the problem of where the "language" is in my layer language, I realised that I have unconsciously been relying on my own design strategy, similarly motivated by Attention Investment, but until now not articulated.

We can call this strategy "Manipulate, Automate, Compose", in homage to Margaret's own three-part strategy for user experience design. (If you want to cite this, contact me first - there's a chance I might eventually decide to publish in a slightly revised form).

My hitherto unnamed, but analogous, strategy dates back to the invention of the "Media Cubes" over a decade ago - one of the first applications of Attention Investment. My reasoning at that point was that users would become familiar and comfortable with the operation of the individual cubes, in the course of operating them as simple remote controls. Once those unit operations had become sufficiently familiar in this way (perhaps over a period of months or years), the physical cubes would naturally start to be treated as symbolic surrogates for those direct actions, and used as a reference when automating the action (for example, setting a timer to invoke the relevant action). Once the use of references had become equally familiar, the user might even choose to compose sequences of reference invocations, or other more sophisticated abstract combinations. All of this is consistent with Piagetian education principles, and indeed with Alan Kay's original motivations in applying those principles to the design of the first GUIs.

What we have lost sight of since then is the latter two steps in this process - most GUI users are stuck at the "Manipulate" phase, and are given little encouragement to move on to Automating and Composing - precisely the points at which real computational power becomes available. The various programming by demonstration systems (as in Allen Cypher's seminal collection) aim to move to the Automate step, while programming by example uses additional inference methods that Compose the demonstrated actions as a map over different invocation contexts.

Typical approaches to programming language design often proceed in the opposite order - the mathematical principles of language design are fundamentally concerned with composition (for example in functional languages). Once the denotational semantics of the language are established, an operational semantics can be defined, so that the language can be applied to things that the user wants to automate. Finally, a programming environment is provided, in which the user is able to manipulate the notation that represents these semantics. After a language has been in use for a while, live debugging environments might even provide the user with the ability to directly manipulate objects of concern to themselves (rather than the elements of the language / notation, which for the user are a means to an end).

Those viewing the Layer Language up until this point (Beryl Plimmer's workshop in February, and James Noble's observations last week) have commented that I've provided a number of interesting user capabilities, but that they don't see where the "language" is. To be honest, I've had the same concern myself - it looks so unlike a normal language that it has taken some determination to persist along this path (not for the first time - the Media Cubes suffered from the same problem, to the extent that Rob Hague felt obliged to create a textual equivalent in order to prove that it was a real language, even though operating the direct manipulation capabilities of the system by typing in a text window would have seemed slightly ridiculous).

So after James' departure a couple of days ago, I returned to thinking about execution models. Before he arrived, I had implemented a simple event architecture that allows events to be generated from visual contexts (and hence automated), and during his recording session last week, I took the chance to implement a persistence mechanism for collections of layers (hence making composition more convenient). It's pretty clear that once these are working smoothly, they will provide a reasonable execution model that is consistent with the visual appearance and metaphor of the multiple layers. Furthermore, users will be able to apply these in the same way as with Media Cubes and some of my other design exercises in the past - the system is quite usable for direct manipulation, with those experiences giving users the confidence for the attention investment decisions of automating their actions and composing abstract representations of them.
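
For the record, the event architecture is no more complicated in principle than this sketch - all names invented for illustration, not the actual code: visual contexts fire events, and any layer that subscribes to them has its behaviour automated.

    import java.util.ArrayList;
    import java.util.List;

    // A layer that wants its behaviour automated subscribes to events
    interface LayerEventListener {
        void onEvent(String eventType, Object value);
    }

    // A visual context generates events - say, when a sampled colour
    // changes, or an animated path reaches a waypoint
    class VisualContext {
        private final List<LayerEventListener> listeners =
                new ArrayList<LayerEventListener>();

        void subscribe(LayerEventListener listener) {
            listeners.add(listener);
        }

        void fireEvent(String eventType, Object value) {
            for (LayerEventListener listener : listeners) {
                listener.onEvent(eventType, value);
            }
        }
    }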

So this is the design strategy expressed in the Layer Language - the same one as in Media Cubes, and various other systems. The user can achieve useful results, and also become familiar with the operation of the system, through direct Manipulation that provides results of value. The notational devices by which the direct manipulation is expressed can then be used as a mechanism to Automate them, where the machine carries out actions on the user's behalf. Finally, all of the functions that the user interacts with in these ways can be Composed into more abstract combinations, potentially integrated with more powerful computational functions. The same Manipulate, Automate, Compose benefits can be seen in products such as Excel - hence the spreadsheet analogy that I have been making when explaining my intentions for the Layer Language.

Furthermore, I realised that the past six months' work represents a meta-strategy for applying attention investment to design. I have intentionally deferred the specification of the language features for Automation and Composition until I had gained extensive experience of the Manipulation features. In part this comes from the hours of "flight-time" in which I've been using those features. But even more, it comes from the fact that I've been implementing, debugging and refining the direct manipulation behaviours as I've gone along. This has meant that the abstract aspects of the language design have been formed from my own reflective experience of the use and implementation of the direct manipulation aspects. A name for this meta-design strategy might be "Build, Reflect, Notate".

I suspect that this may be the way that many language designers go about their work in practice. I had several illuminating discussions with James about the work he and his collaborators are currently doing on the design of their Grace language. James has a great deal of expertise in the architecture and usability of object-oriented languages (we had some enjoyable discussions on the beach, comparing my experiences of Java coding over the course of my project so far), so like most language designers, he is creating a language informed by his experiences as a Java "user". The difference between that kind of project and mine, however, is that the user domain in which his design work is grounded is the manipulation of OO programs by his students, in the context of teaching exercises to train them in OO programming. This is perfectly appropriate to his project, since Grace is intended as a teaching language. However, it means that the attention investment consequences arising from his use of this meta-design strategy are very different from mine. Rather than the end-user programming principles of Manipulate, Automate, Compose, his language will support some educational principles related to the acquisition of programming skill (maybe Challenge, Analyse, Formulate). Perhaps Margaret's Surprise, Explain, Reward arose in a similar way from the same meta-design strategy - I look forward to discussing it with her at some point.



References

Surprise, Explain, Reward: Wilson, A., Burnett, M., Beckwith, L., Granatir, O., Casburn, L., Cook, C., Durham, M. and Rothermel, G. (2003). Harnessing curiosity to increase correctness in end-user programming. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03).

Programming by Demonstration: Cypher, A., with Halbert, D.C., Kurlander, D., Lieberman, H., Maulsby, D., Myers, B.A. and Turransky, A. (1993). Watch What I Do: Programming by Demonstration. MIT Press.

Media Cubes: Blackwell, A.F. and Hague, R. (2001). AutoHAN: An Architecture for Programming the Home. In Proceedings of the IEEE Symposia on Human-Centric Computing Languages and Environments, pp. 150-157.
Grace: The Grace Programming Language
http://gracelang.org/

Saturday 21 April 2012

Some (routine) "tricky corners" in Java


When Sam and I were trying to develop analogies between jazz improvisation and live coding, Nick Cook told us that preparation for performance might involve rehearsing "tricky corners" that can arise in a particular improvisation genre. Although I'm not coding anything live, there are some kinds of software development task that always leave me feeling a little nervous, no matter what language I'm working in. One of these is management of realtime threads. Another is interacting with a new file system for the first time.

So in the past week, I've taken some deep breaths, and plunged into both of these. Both are routine enough, and every Java programmer (or teacher of Java programming) knows what to do - but as always, newcomers have to learn it themselves.

I'd already created a number of Java animation threads during the project, and had got accustomed to the use of SwingWorkers and timers to ensure that the system still responds to user interaction while doing memory-intensive image shuffling. I'd speeded up some of these animations with better double-buffering of the layer rendering, separating fast animated components from relatively static elements that can be updated less often.
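
The pattern, roughly, is the one below - a sketch with invented names rather than my actual rendering code: the slow-changing layers go into an offscreen buffer, and only the fast components are redrawn on each tick.

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import javax.swing.JPanel;
    import javax.swing.Timer;

    class LayerPanel extends JPanel {
        // Relatively static layers are pre-rendered into this buffer,
        // and redrawn only when they actually change
        private final BufferedImage staticBuffer;

        LayerPanel(int width, int height) {
            staticBuffer = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            // A Swing Timer fires on the event dispatch thread, so user
            // interaction stays responsive between frames
            new Timer(40, e -> repaint()).start();  // roughly 25 frames per second
        }

        void renderStaticLayers() {
            Graphics2D g = staticBuffer.createGraphics();
            // ... draw the slow-changing layers into the buffer ...
            g.dispose();
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(staticBuffer, 0, 0, null);  // cheap blit of the static content
            // ... draw only the fast animated components on top ...
        }
    }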

But lately there have been too many animations all running concurrently, now that some state variables are animated (the rate and event value types), and these control other animated layers (paths) that themselves interact with or control other layers. After some encouragement from James Noble (still visiting), I therefore aggregated all the animated updates into an update task list that is reviewed and updated in a single animation thread - James tells me this is how Smalltalk does it, and that's good enough for me! However, several days of thread debugging ensued. James was off at a recording session, and it turns out he could have told me immediately what my problem was - what I thought was a subtle deadlock problem turned out to be the simple fact that exceptions in background Java threads die silently if the thread's work isn't wrapped in a try-catch block. So my problem was simply an invisible null pointer exception. Once I'd learned to catch this and dump a stack trace, it became relatively easy to debug the threaded animation.
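
In sketch form (invented names again, not my actual classes), the aggregated scheme looks something like this, including the hard-won try-catch:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import javax.swing.Timer;

    class AnimationScheduler {
        // CopyOnWriteArrayList lets tasks be added or removed safely
        // while the animation tick is iterating over them
        private final List<Runnable> updateTasks = new CopyOnWriteArrayList<Runnable>();

        void add(Runnable task) {
            updateTasks.add(task);
        }

        void start() {
            new Timer(40, e -> {
                for (Runnable task : updateTasks) {
                    try {
                        task.run();
                    } catch (Exception ex) {
                        // Without this, a NullPointerException in an
                        // update would vanish without trace
                        ex.printStackTrace();
                    }
                }
            }).start();
        }
    }

Driving everything from a single Swing Timer has the incidental benefit that all the update tasks run on the event dispatch thread, which removes most of the synchronisation worries.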

The second plunge was into Java persistence, which I've never had any reason to use before. It seems about time that my "programs" (quote marks necessary until I persuade James that this really is a programming language) can be saved by the user - at least, it is now possible to create things sufficiently complex that I might want to recover them in future. The illustration on today's post is a "Mr Watson, come here" - the first layer successfully saved and dragged back into a new Palimpsest session. As with my complex animation thread, there had been good reason for nerves - the order in which persistent objects are written to a stream is not that easy to anticipate, meaning that getting a complex class hierarchy persisted involves several hours of trial and error, with many exceptions along the way.
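
The mechanism itself is just standard Java serialisation, roughly as below - assuming a hypothetical layer class whose entire reference graph implements Serializable, which is exactly where the trial and error went:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    class LayerStore {
        static void save(Serializable layer, String path) throws IOException {
            try (ObjectOutputStream out =
                    new ObjectOutputStream(new FileOutputStream(path))) {
                out.writeObject(layer);  // serialises the whole object graph
            }
        }

        static Object load(String path) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in =
                    new ObjectInputStream(new FileInputStream(path))) {
                return in.readObject();
            }
        }
    }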

Tuesday 17 April 2012

Visualising parameter bindings


James Noble has been to visit, and given lots of useful advice. He also chastised me for not keeping up a daily diary of my development work. This is almost solely due to the low bandwidth of my modem connection, and the fact that Blogger has to download a Javascript editor every time I make a new post (typically a 10-15 minute load time on the page, at speeds around 300 bps). Lots of things have been happening, but I haven't necessarily written about them.

Nevertheless, a brief update on a piece of recent work - I've changed the visualisation of parameter bindings, so that they look like little inserts within the layer. A rather simple metaphor, but at least visually distinctive. There's a sample in the image above - it is a snapshot of a filter layer that has two parameters, one referring to an image value, and the other to a mask value that has been applied to that image.
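
In rendering terms the idea amounts to little more than this sketch (invented names, not the actual code): each bound value is painted as a small framed thumbnail inside the layer's own bounds.

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.util.List;

    class ParameterBindingRenderer {
        static void paintBindings(Graphics2D g, List<Image> boundValues,
                                  int layerX, int layerY) {
            final int inset = 4, size = 32;
            for (int i = 0; i < boundValues.size(); i++) {
                int x = layerX + inset + i * (size + inset);
                int y = layerY + inset;
                g.drawImage(boundValues.get(i), x, y, size, size, null);
                g.setColor(Color.DARK_GRAY);
                g.drawRect(x, y, size, size);  // the frame marks it as an insert
            }
        }
    }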

Sunday 15 April 2012

Sufficiently complete to surprise myself

I've now got a reasonably complete set of functions for the pictorial spreadsheet behaviour. Sufficiently complete that it's fun to play with, and see what I can create - or what emerges. This isn't the sense of "emergent" that our arts collaborators typically employ, but is probably related - here we have something that "emerges" from the user's own activity, rather than from the behaviour of the system. Rather close, in fact, to what you might describe as creative experience (or at least playful).

One example of things that have emerged from my own play in the past few days is a photograph that appears from within its own colours, as a sampling window travels over the image, controlling a translucent overlay in the colour of the current sample contents. Hard to show this in a static image, but it's sufficiently pleasant to watch that I left it running for half an hour, and could imagine it hung on a wall as a dynamic picture.
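
For the curious, a single step of the effect amounts to roughly the following - a sketch with assumed names, not the layers I actually composed: sample the colour under the travelling window, then deposit a translucent patch of that colour on the output.

    import java.awt.AlphaComposite;
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    class EmergingPhotograph {
        static void step(BufferedImage source, BufferedImage output, int x, int y) {
            int argb = source.getRGB(x, y);  // colour at the window position
            Graphics2D g = output.createGraphics();
            g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.1f));
            g.setColor(new Color(argb));
            g.fillRect(x - 8, y - 8, 16, 16);  // translucent patch of the sampled colour
            g.dispose();
        }
    }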

A second (illustrated) example was a dynamic paintbrush that changes its shape and colour according to position on the screen. This one works under mouse control, so it is not so suitable as a displayed work (unless controlled by viewers using non-contact sensing), but the effect can be seen more easily in a screenshot.

Both of these are things that I hadn't expected to create; they demonstrate intentions that emerged while I was playing, and had results that were pleasingly surprising. The whole lot comes together in a kind of liveness/flow experience. It's possible that Chris already coined a word for this in his PhD thesis, as it's pretty much the same experience that musical composers are looking for. Will have to ask him.