Saturday 11 October 2014

Final Publication

To mark the end of the Palimpsest experiment, a paper has finally been published that describes the result.

This link provides free download until October 31, 2014:

http://authors.elsevier.com/a/1Ph9Y,No1TceVp



Blackwell, A.F. (2014). Palimpsest: A layered language for exploratory image processing. Journal of Visual Languages and Computing 25(5), pp. 545-571.
Abstract:
Palimpsest is a novel purely-visual language intended to support exploratory live programming. It demonstrates a new paradigm for the visual representation of constraint programming that may be appropriate to future generations of keyboardless and touchscreen devices. The current application domain is that of creative image manipulation, although the paradigm can support a wider range of computational expression. The combination of constraint semantics expressed via a novel image-layering metaphor provides a new approach to supporting a gradual slope of abstraction from direct manipulation to behaviour specification. Exploratory evaluations with a range of users give an indication of likely audiences, and opportunities for future development and application.

Thursday 9 August 2012

Representing time: big-endian vs little-endian?

Few people now remember the bitter debates over the storage order for multi-byte values in 8-bit memory architectures. There were advantages to putting the LSB first, and other advantages to the opposite. The gently mocking term "little-endian" compares the debate to a trivial political dispute in Gulliver's Travels over which end an egg should be eaten from. A Wikipedia author picks out the key point as follows:

"On Holy Wars and a Plea for Peace" by Danny Cohen ends with: "Swift's point is that the difference between breaking the egg at the little-end and breaking it at the big-end is trivial. Therefore, he suggests, that everyone does it in his own preferred way. We agree that the difference between sending eggs with the little- or the big-end first is trivial, but we insist that everyone must do it in the same way, to avoid anarchy. Since the difference is trivial we may choose either way, but a decision must be made."
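The substance of the dispute is easy to reproduce: the same multi-byte value serialises to two different byte sequences under the two conventions, and decoding one convention's bytes with the other's rules silently yields a different number. A minimal Python illustration (nothing to do with Palimpsest itself):

```python
import struct

value = 0x12345678

# Pack the same 32-bit integer under each convention.
big = struct.pack(">I", value)     # big-endian: MSB first -> 12 34 56 78
little = struct.pack("<I", value)  # little-endian: LSB first -> 78 56 34 12

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Decoding with the wrong convention gives a plausible-looking wrong answer,
# which is why agreeing on a single choice matters more than the choice itself.
print(hex(struct.unpack("<I", big)[0]))  # 0x78563412
```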

In user interface design, we regularly find ourselves in this kind of situation. In the early days of the scroll bar, it was far from clear whether the text should move up when the scroll bar moves up, or the other way round (i.e. the window moves up, so the text moves down). The best solution to these simple choices is sometimes so far from obvious that it can take years to get it right - people are still discovering (and disconcerted by) the decision to reverse the scrollbar drag direction that is used by default on Macintosh trackpads.

As Cohen notes in the case of standards wars, it's sometimes more important to agree on the choice than it is to make the right one. Sadly for the prototype developer, the only person you have to agree with is yourself. So this afternoon, I made the sudden decision to reverse the way in which the Palimpsest layer stack is rendered. I know I spent some time agonising over this about 9 months ago, but have stuck to my decision ever since then.

The problem is - should the stack be rendered in conventional narrative time order (oldest layers appear at the top of the screen, with newer ones appearing lower down), or in geological order (oldest layers at the bottom, with newer ones higher up)? I've just changed to the second of these options, in part because writing the tutorial made me increasingly uncomfortable that I had to refer to the layer "under the current one" when that layer was clearly above the current one on the screen.

It was easier to reverse this than I had feared, although an amusing discovery along the way was the realisation that the mapping of keyboard arrows to layer traversal had always been counter-intuitive: the down arrow moved up the stack, and the up arrow moved down it. Perhaps this should have been a sign that I made the wrong decision 9 months ago. (An interesting observation, going back to the days when I said I was combining the Photoshop History and Layer palettes: the History palette renders time going down the screen, while the Layer palette has time going up the screen - if you paste, a new layer is created above the previous one. I wonder whether Photoshop users are ever disconcerted by this?)
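The two conventions are simple to sketch. In this hypothetical Python fragment (not Palimpsest's actual code), layers are stored oldest-first, and the only question is which screen row each one occupies:

```python
layers = ["oldest", "middle", "newest"]  # storage order: oldest first

def narrative_rows(stack):
    # Narrative time order: oldest layer at the top of the screen.
    return list(stack)

def geological_rows(stack):
    # Geological order: oldest at the bottom, newest at the top.
    return list(reversed(stack))

print(narrative_rows(layers))   # ['oldest', 'middle', 'newest']
print(geological_rows(layers))  # ['newest', 'middle', 'oldest']
```

Under geological order, moving up one screen row means moving to a newer layer, so the up arrow can map directly to "newer" without the inversion described above.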


Cute is not (always) clever

Well, this is embarrassing ... several conclusions from the last blog post turn out to be completely wrong. But perhaps for interesting reasons.

After spending a couple of days preparing a brief introductory tutorial, I tried it out on my first independent user (the long-suffering Helen - thank you xx).

As you'd expect, there were a number of faults in both the tutorial and the default behaviour of the system. More on these later. But the most annoying was that the menu visualisation I created last week was really unhelpful.

In the last blog post I had been pleased with myself, because the tabbed menu had been implemented using pretty much the same elements I'd already created. In particular, the active areas that the user clicks to move between tabs were the same SubMenuCreator buttons that had previously been used to navigate between different menus. The appearance of a tabbed interface was created just by sticking a background rendering of tabs behind these buttons.

The result was both cute and elegant (in my own opinion), with the new tabbed interface immediately inheriting all the good things that came with the button regions.

Unfortunately, elegant uniformity is one of the last priorities for usability, as has been noted by countless people before me. (Remember the days when car dashboards had rows of identical switches? Cheaper to make and tidy to look at, but impossible to use without memorising their position or taking your eyes from the road to squint at the labels.)

So my elegant approach to controlling tabs was just really confusing - in fact, my trial user had not even noticed that they were tabs, but thought they were just more buttons. I should have seen the warning signs when writing that blog post last week: the real appeal of the "cute" and elegant solution was that it saved me coding effort. This ought to make us all hesitate when we use "elegance" as a criterion for a good software solution in a user-centred application.

The replacement, after a half day coding and redrawing, now looks subtly different - with tabs no longer looking like buttons. Let's hope this works!


Friday 3 August 2012

Pretty = what you expect


Spent a day making things look "pretty" (as I was thinking of it at the time - lots of pixel nudging and colour shading). This is really in response to Luke's comment that the next thing needed is some usability improvements. At first, prettiness was just a side effect of adding some more conventional visual effects - in particular, the tabbed menus in the illustration, which replace the previous minimalist (semi-transparent) menu layers. However, as I spent more time getting them right, I realised that "right" actually means that they look like they work.

Interestingly, all of this surface ordinariness was achieved without any compromise on the underlying behaviour - these tabbed menus are still live code, and any of the icons can be dragged elsewhere or incorporated into execution behaviour by the user. Making them look ordinary to start with is just a bit of reassurance for the new user, and perhaps even adds to the surprise and delight :-) when it turns out that you can do things with them beyond the ordinary.

One more picture, just to show that things made with Palimpsest don't often look ordinary. Here's some processing of the blog logo:

Wednesday 1 August 2012

Time to fake the rationale



Not really! (The title alludes to Parnas and Clements' famous paper on faking a rational design process.) It's actually time to do some rather boring tidying up, removing final bugs, and getting ready for public showing at VL/HCC. Along the way, this has involved returning to things that were already boring - Java persistence, for example, as changes since my last big persistence binge a few months ago have broken it in new ways.

But in presenting to an academic audience, some more explicit rationale will be required. Some of it has been published along the way in this blog, but there are lots of minor decisions, not interesting enough to be included here. A recent example is that the "secondary notation" device, despite being one of the earliest things implemented, had almost no usable function. A change this week has allowed secondary notations to pass on a value from whatever layer they are annotating. This became useful in the context of more complex combinations of functionality, such as the use of multiple event trigger layers at the same time. In classic visual language usability style, it quickly became impossible to tell which of the nearly identical visual objects was which.

Wednesday 18 July 2012

Getting the connotations right

Having returned to Cambridge this week, my 6 months as an isolated bush-coder are complete, and it's time to show Palimpsest to some real users. The first of these was Melissa Pierce Murray, a sculptor. Melissa originally trained as a physicist, so is potentially comfortable with the abstraction inherent in Palimpsest, but by her own claim "doesn't get on with computers". As I've seen in the past with artists considering what they might do with a computer, her first impressions were that she might use this to make a web page or a PowerPoint presentation - computers have never been relevant to her creative practice in the past, and this is answering a question she doesn't have.

Nevertheless, after an hour or so of discussion, some points of connection did emerge - she has been sketching grids of visual elements, which she describes in terms of "matrices" and "boundary conditions". The collection operations in Palimpsest (though sadly crashing when demonstrated, because a minor piece of debugging code introduced while passing through Singapore disabled them) do indeed provide exploratory potential that is relevant to these creative questions.

This discussion focused on the potential of software as an exploratory sketching tool, in much the same way as our Choreographic Language Agent has been used at Random Dance. Our most recent thinking on sketching is set out in work with Claudia Eckert and colleagues (see below). An interesting aspect of that collaboration was our investigation of the importance of connotation in sketching - the fact that a sketch looks like a sketch, and has social functions arising from its appearance.



In the case of Palimpsest, I had just made a change to place a mathematical-looking "graph-paper" grid under the currently selected layer, as part of a more general visual overhaul. This has no real function, other than looking generally technical, and providing a bit of colour/texture to help interpret the transparency in the foreground. But Melissa specifically commented on this as helping her to understand the intention of the system, and what it could do for her. In particular, it helps to distinguish the abstract / symbolic / notational / allographic elements of the Palimpsest display from the pictorial / interpretative / autographic elements. Despite the fact that Palimpsest deliberately plays with the allographic/autographic dividing line (what Luke previously called the "anti-semantic"), I think users need to know which is which.
References

Eckert, C., Blackwell, A.F., Stacey, M., Earl, C. and Church, L. (in press). Sketching across design domains: roles and formalities. To appear in Artificial Intelligence for Engineering Design, Analysis and Manufacturing 26(3), special issue on Sketching and Pen-based Design Interaction.

Wednesday 13 June 2012

The perennial problem of abstraction


In the past few weeks, I've been grappling with some hard questions that were always on their way, but that I've managed to defer until now. After my second attempt at creating collection mapping operations ground to a halt, a third approach is proving more fruitful.



The essence of the problem is that, once operations deal with sets of values rather than single values, it is necessary to describe the structure and properties of those sets - a fundamentally abstract activity. In my case, collections of parameterised layers share some of their bindings, but not all. One approach has been to treat them as curried functions, but the challenge in doing so is specifying the curry bindings, and distinguishing them from defaults that may provide desirable (user-perceived) layer behaviours without having explicitly arisen from user choices. Most of my attempts to provide a user interface to this specification have been less than successful - complex, unreliable, and hard to plan in advance.
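To make the currying framing concrete, here is a hypothetical Python sketch (the layer and parameter names are invented for illustration): a parameterised layer behaves like a function of named bindings, some of which are fixed across the whole collection while the rest are filled per member or left to defaults.

```python
from functools import partial

def blur_layer(image, radius, opacity):
    # Stand-in for a parameterised layer applied to one image.
    return f"blur({image}, r={radius}, o={opacity})"

# The "curry bindings" shared by every member of the collection...
shared = partial(blur_layer, radius=5)

# ...while the remaining bindings are supplied per member. The hard part,
# as described above, is letting the user say which is which - and telling
# explicit choices apart from defaults that merely happen to look right.
results = [shared(img, opacity=0.8) for img in ["a.png", "b.png"]]
print(results)  # ['blur(a.png, r=5, o=0.8)', 'blur(b.png, r=5, o=0.8)']
```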

The latest approach has been to build more explicitly on the fact that parameters appear as graphical elements within the layer. Given that the challenge of abstraction in this system is that the user must make a transition from interacting with images to interacting with bindings, I've allowed the bindings themselves to be manipulated as image elements, using the existing mask operations to select those parts of the binding set that should be preserved across a map. As I was building this, I was constantly reminded of the related challenges that Ken Kahn faced in ToonTalk, where Dusty the vacuum cleaner is used to specify value types by "sucking off" the value binding to leave the coloured pad. When Elizabeth used ToonTalk as a young child, this was one of the aspects that particularly upset her (along with everything that happened in the robot's thought bubble - the abstract world mode). A constant frustration was that a slip of the hand could easily delete the type, rather than the value. Easily undone, but an example of how error-proneness in the abstract notation carries significant weighted risk for attention investment. I hope that I've avoided this, by visualising the binding choices as a masked overlay that does not modify the original layer instance from which it is derived.
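A rough sketch of the binding-mask idea, with the bindings reduced to a plain dictionary and the mask to a set of names (both invented for illustration): the mask selects what survives a map, and it operates on a copy, so the original layer's bindings are never modified.

```python
def mask_bindings(bindings, keep):
    # Build a new binding set containing only the masked-in names;
    # the original dictionary is left untouched.
    return {name: value for name, value in bindings.items() if name in keep}

original = {"radius": 5, "opacity": 0.8, "source": "logo.png"}
preserved = mask_bindings(original, keep={"radius", "opacity"})

print(preserved)             # {'radius': 5, 'opacity': 0.8}
print("source" in original)  # True - nothing was deleted from the original
```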



The next stage is to apply the binding mask overlay approach to cases where, rather than mapping a single layer (function) over a value collection, two collections are joined - typically with curried (function) layers in one (these may have resulted from a previous join), and value layers in the other. This seems like the right point to start experimenting with inference at last. In a previous attempt at the bind-then-play execution paradigm, I had created typed collections that could then be applied in contexts similar to those of single layers of the same type. However, the resulting constraints on users, for example that the collection type had to be maintained by constraining future addition of members, made this pretty cumbersome.

The inference approach that I'm about to start on, in contrast, allows users to place anything they like in a collection, while continuing to support aggregate bindings and maps. My intention is that the type of the collection will be determined (and visualised) statistically, based on the type of the largest proportion of its members. Members that are incompatible with this type can simply be passed over - the user may have intended them as secondary notation, or they may be accidental, or simply experiments to see what would happen. A user who is setting out to create an array value in a more systematic fashion can also do so, and the inferred type will be precisely as intended.