Thursday 29 March 2012

The marvels of dependency graphs

Now that it's pretty clear I really am making a spreadsheet for images (which is what I was telling non-technical enquirers back in December, when they asked what this project is all about), I've been thinking about the marvels of the Excel dependency viewer. Spreadsheets are usable without a dependency view, but one can save a lot of time when debugging. That's the example of hidden dependencies that I often use when teaching Cognitive Dimensions, and the Layer Language (Palimpsest - it says so on my prototype's window headers now, even if not in the tagline of this blog :-) is just starting to get complex enough that those hidden dependencies are becoming annoying.

So after deferring it for a couple of weeks while doing other things, I sat down in earnest today to create the dependency graph viewer. Preparations included kicking Elizabeth and Helen out (Elizabeth driven to school instead of walking with me, Helen over the mountain to do a week's shopping), and cancelling my routine weekly visit to Beryl's group in Auckland.

After the first hour of work, it was clear that this was a more difficult problem than the typical day's coding - I had to draw a diagram! Perhaps this is a measure of how programmers really regard diagrams: a necessary evil, to be resorted to only when the problem is too tangled to be dealt with by hacking code directly. In the past couple of months, I think this has happened about three times (the first time, we had to go out and buy some paper, because there was none in the house - I hadn't needed paper until that point. It's notable that when it comes to thinking with diagrams, the last thing you want to do at this stage is fire up a proper drawing program).

It's just possible that these two issues are related - the problem I'm trying to solve, and the tool I need when trying to solve it. A typical "tree of trees" episode, that could easily distract an abstraction-lover into a meta-shift, with progress on the original problem greatly deferred. Hopefully, today, I can stick with my piece of paper and finish coding my dependency graph (cycles and all :-)

Friday 23 March 2012

Doing murky arithmetic

More features that resemble (somewhat) conventional programming. I gave a talk in the Auckland CS department on Wednesday, describing the extended audience for end-user programming, beyond those who "think like engineers" (the EUSES agenda) to sloppy thinkers, as I've described them in this blog. Sam Aaron has sent an encouraging email, saying he likes this novel emphasis, so I'd better credit it properly - Thomas and other PPIG folk used to distinguish "neat" and "scruffy" programming styles, but both of those described practices within the spectrum of professional programming. My recent emphasis comes more from a comment made at a EUSES AGM by Mary Shaw, when she told me that the Computational Thinking education campaign was intended to discourage "murky thinking". However, from my perspective (and as I said on Wednesday), many typical artistic practices are unavoidably "murky" - relying on creative ambiguity, social interpretation, emergent behaviour, and other things that you don't really want in standard programming.

So how do you do arithmetic within a framework of murky thinking? I haven't yet found any need for standard (integer/real) number representation, with count and proportion being better suited to the operation parameters I need. Counting is counting, with little need for arithmetic, but it's getting a bit boring having proportions that are either the same as each other or inverse. I therefore set out to make a four-function calculator for proportions. This isn't going to work the same way as a standard four-function calculator, because all inputs and outputs have to be in the range 0-1 for compatibility with the rest of the system. This means that "multiply" is actually scaling up, and "division" scaling down, relative to a log slider. Addition and subtraction combine and compare proportions. All results are clipped to the floor and ceiling values.
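
As a rough Python sketch of how such a calculator might behave - the exponential mapping for the log slider is my own assumption, not taken from the prototype:

```python
def clip(x, floor=0.0, ceiling=1.0):
    """All results are clipped back into the proportion range."""
    return max(floor, min(ceiling, x))

def combine(a, b):           # "addition": combine two proportions
    return clip(a + b)

def compare(a, b):           # "subtraction": compare two proportions
    return clip(a - b)

def scale_up(a, slider):     # "multiply": log slider in 0-1 gives a factor of 1 to 2
    return clip(a * 2 ** slider)

def scale_down(a, slider):   # "division": the inverse log scaling
    return clip(a / 2 ** slider)
```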

A solution to the problem of how to visualise this non-standard arithmetic model was inspired by a hint from Helen, who explains multiplication to children in terms of the area of a rectangle. The input value is therefore visualised as a precisely scaled version of the original proportion layer, with the output value visualised as a rectangle whose area can be larger or smaller than the original layer boundary. All four functions are controlled by sliders - vertical add and subtract change the relative height of the rectangle, while horizontal multiply and divide modify the width with log-scaling. Hopefully, direct manipulation of these sliders makes their behaviour sufficiently familiar that the effects of subsequent parameterisation (by dragging and dropping another value layer onto a slider) can easily be anticipated.
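
In sketch form, the slider-to-rectangle mapping might look something like this (the neutral point of 0.5 and the scaling constants are assumptions for illustration):

```python
def result_rectangle(base_w, base_h, add_sub, mul_div):
    """Map two slider positions (proportions, 0.5 = neutral) onto the
    output rectangle, whose area visualises the arithmetic result."""
    height = base_h * (1.0 + (add_sub - 0.5))       # vertical: add/subtract
    width = base_w * 2 ** (2.0 * (mul_div - 0.5))   # horizontal: log-scaled multiply/divide
    return width, height
```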

Sunday 18 March 2012

More programmable is less usable

With an execution model in place, it's clear that this is the least usable part of the system so far. It's a bad sign when even I can't construct a syntactically valid example! (Bad in the sense that any other user will certainly be unable to either.) There are now three levels of abstraction in the interaction - direct creation and manipulation of single layers, indirect manipulation of other layers via constraints (spreadsheet-style), and generation/execution of new layers. As they become more abstract, it's unsurprising that they are harder to use - but not ideal!

The final abstraction level, which might be compared to user-defined functions in spreadsheets (which almost no regular users ever touch), has an execution model that I've called "bind-then-play". It maps one or more operations over a set of arguments, where each operation can have a number of unbound parameters. As soon as an operation receives bindings for all its parameters, a new layer instance is created and executed. As I've already commented, implementing this seemed a lot more like regular computer science - type inference for the bindings and so on - but it's not yet clear whether it will turn into anything for end-users. I've also created a more macro-like record and playback facility which is much easier to understand, and at present more fun to use.
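
A minimal sketch of what "bind-then-play" might mean, with hypothetical names rather than the prototype's actual structures:

```python
class Operation:
    """An operation with unbound parameters; once every parameter has
    received a binding, a new layer instance is created and played."""
    def __init__(self, play, parameter_names):
        self.play = play                            # executed when fully bound
        self.bindings = dict.fromkeys(parameter_names)

    def bind(self, name, value):
        self.bindings[name] = value
        if all(v is not None for v in self.bindings.values()):
            return self.play(**self.bindings)       # bind-then-play
        return None                                 # still waiting for bindings

# e.g. a circle layer, played as soon as both of its values arrive
circle = Operation(lambda centre, radius: f"circle at {centre}, r={radius}",
                   ["centre", "radius"])
circle.bind("centre", (10, 20))      # returns None: not yet fully bound
print(circle.bind("radius", 5))      # creates and plays the new layer
```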

Sunday 4 March 2012

Some recognisably computational features

Blog entries are becoming sporadic, due to a slow dial-up connection (and now, intermittent mains power, after our first storm of the year).

Nevertheless, progress is steady (isolation has its benefits!), with a number of more recognisable programming language features now implemented. Many of the basic layers are now parameterised in simple ways, and this has required mechanics for the binding and substitution of parameter types. That in turn has required a type mechanism - the basic value types are point, vector, proportion, rate, colour and count. These can be derived from each other in various ways, as well as from properties of source images.
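
For illustration, the type set and a couple of plausible derivations might be sketched like this (the derivation rules are my guesses, not the prototype's actual ones):

```python
from enum import Enum

class ValueType(Enum):
    POINT = "point"
    VECTOR = "vector"
    PROPORTION = "proportion"
    RATE = "rate"
    COLOUR = "colour"
    COUNT = "count"

# two hypothetical derivations between types
def points_to_vector(p1, p2):              # the difference of two points
    return (p2[0] - p1[0], p2[1] - p1[1])

def count_to_proportion(count, maximum):   # a count scaled against a maximum
    return min(1.0, count / maximum)
```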

Bound parameters are propagated dynamically, and it is possible to explore some interesting interactive effects by binding multiple layer controls to the same values, as in the picture here. These are sufficiently interesting that I wanted to watch the results over more extended periods - so I've created a mouse tracker, and a record/replay layer that allows dynamic sequences to be used, with the replay rate itself under dynamic control (motivated by the importance of dynamics in the Random Dance project). It also turns out to be nice to provide a motion-persistence filter that can be superimposed on these dynamic images, providing more of a visual transition between the static and interactive displays (as used for this image).
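
A sketch of how such a record/replay layer could work - the rate is passed as a callable so that it can be re-sampled on every step, standing in for the dynamic rate binding described above:

```python
import time

class ReplayLayer:
    """Events are time-stamped as they are recorded, then played back
    with each inter-event delay divided by a dynamically sampled rate."""
    def __init__(self):
        self.events = []                          # (timestamp, value) pairs

    def record(self, value):
        self.events.append((time.time(), value))

    def replay(self, rate=lambda: 1.0):
        if not self.events:
            return
        yield self.events[0][1]
        for (t0, _), (t1, value) in zip(self.events, self.events[1:]):
            time.sleep((t1 - t0) / rate())        # rate > 1 plays back faster
            yield value
```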

I've also made an initial attempt at a layer supporting a second order function, mapping a layer with one or more unbound parameters over a collection of possible parameter values. It turned out to be quite a challenge to define an appropriately "sloppy" approach to this, since the kinds of functional language that usually provide features like this emphasise correct type matching and map cardinality as fundamental control mechanisms. Instead, I've created a parameter binding mechanism that searches a set of available values for one of compatible type, with a new binding instantiated as soon as sufficient values are found. If the set doesn't have enough members for even one binding, partial instantiations can be created. And if the number of resulting instances is insufficient, the user can define a minimum count value. I should probably stop coding to think more carefully about the cognitive dimensions implications of these decisions.
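
To make the binding search concrete, here is a minimal sketch in Python - LayerOp and its method names are hypothetical stand-ins, and the minimum-count control is omitted:

```python
from dataclasses import dataclass

@dataclass
class LayerOp:
    """A layer with unbound, typed parameters."""
    name: str
    unbound: dict                                 # parameter name -> required type

    def instantiate(self, **binding):             # full binding: new instance
        return (self.name, binding)

    def partial(self, **binding):                 # partial instantiation
        return (self.name + " (partial)", binding)

def sloppy_map(op, available_values):
    """Search the value set for compatible types, instantiating a new
    binding as soon as sufficient values are found; when the set runs
    short, a partial instantiation is created instead of failing."""
    pool, instances = list(available_values), []
    while pool:
        binding = {}
        for pname, ptype in op.unbound.items():
            match = next((v for v in pool if isinstance(v, ptype)), None)
            if match is not None:
                pool.remove(match)
                binding[pname] = match
        if not binding:
            break                                 # nothing compatible remains
        if len(binding) == len(op.unbound):
            instances.append(op.instantiate(**binding))
        else:
            instances.append(op.partial(**binding))
            break
    return instances

# e.g. two points and one radius give one full circle and one partial one
print(sloppy_map(LayerOp("circle", {"centre": tuple, "radius": float}),
                 [(0, 0), 2.0, (5, 5)]))
```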

Another slightly disconcerting result of all this computational work is that my own usage of the system has gravitated toward repeated definition of simple geometric figures - lines, points and circles. This is not at all the kind of aesthetic that I want to promote, but it is so much more natural and convenient when implementing and debugging rather mathematical relationships. In principle, it should be very easy to parameterise the image operations that I created before leaving Cambridge, and also to derive values from those images. But I shouldn't delay much longer before getting around to it - this is exactly the kind of thing that I complain to students about, when they get too absorbed by the mathematical requirements of their work, and neglect aspects of the user experience that are less precisely specifiable.