I've been thinking a lot about gestures lately. I submitted a kids' drawing app, Frog Draw, to the App Store this week, and while I'm proud of it, it certainly ended up different from the plan in my head when I first sat down in Xcode and started coding. Much of that difference comes from a gradual realization that gestures are better than buttons.
Here, for instance, was my first thought for an interface for modifying objects:
The idea behind this was that the user would tap an onscreen object, a square perhaps, and this edit ring would appear around it, presenting options in button form, including a rotation button that would follow the finger in a circle as the user twirled the whole ring and object around. I had it all worked out that the buttons would rotate along with the ring. I was quite proud of the geometric perfection of its layout: a perfect hexagon inside a perfect ring. And then I actually tried to use it.
The ring was in the way, preventing me from selecting other objects. And what if the object was up in a corner, pushing part of the ring offscreen and hiding a button? That's what forced redundant buttons, such as the doubled X delete buttons above. No, this would never do.
So I went with gestures. Translating by dragging a finger, and scaling and rotating by pinching and twisting, were easy enough to implement, but what was the gesture to delete something? At first, and for much of the development cycle, I used a gesture analogous to the Mail app's delete swipe: a quick leftward flick.
But that turned out to be too unreliable: it was hard to distinguish between a normal leftward translation and a delete, so half the time I just ended up moving the object a bit to the left, or worse, deleting it unintentionally. Yes, I have unlimited undo, but still. In the end, I settled on dragging the object, finger and all, into the toolbox area as the delete gesture. Easy to do and easy to understand.
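In UIKit terms, that delete comes down to nothing fancier than a hit test when the pan ends. Here's a minimal sketch of the idea; to be clear, this isn't Frog Draw's actual source, and names like `CanvasViewController`, `toolboxView`, and `selectedObject` are placeholders of my own:

```swift
import UIKit

class CanvasViewController: UIViewController {
    let toolboxView = UIView()   // the toolbox strip along a screen edge; laid out elsewhere
    var selectedObject: UIView?  // the currently selected shape, if any

    override func viewDidLoad() {
        super.viewDidLoad()
        // One finger drags; two-finger pans are reserved for z-ordering (below).
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        pan.maximumNumberOfTouches = 1
        view.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let object = selectedObject else { return }

        // Apply the translation as a delta each time the callback fires.
        let translation = gesture.translation(in: view)
        object.center.x += translation.x
        object.center.y += translation.y
        gesture.setTranslation(.zero, in: view)

        // A drag that ends inside the toolbox is a delete: no flick or
        // velocity detection, just a plain hit test on the end point.
        if gesture.state == .ended,
           toolboxView.frame.contains(gesture.location(in: view)) {
            object.removeFromSuperview()
            selectedObject = nil
        }
    }
}
```

Because the test happens only at `.ended`, a drag that merely passes over the toolbox and comes back out deletes nothing, which is exactly the forgiving behavior a flick can't give you.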
Which brings up another idea: the unimportance of direct manipulation. In the real world, if you wanted to drag a card around on a tabletop, you'd put your finger down on the card and move it around, the finger always obscuring some of the card. In the digital world of the touch screen, the programmer has a choice. If I have a selected object, why should my finger have to cover it, preventing me from seeing exactly where I want it to go? I could bring up a loupe view to show what's under my finger, perhaps magnified, but wouldn't that take up even more precious screen space? So I decided that I could pinch, rotate, and translate anywhere in my document view, and the gesture would change the selected object.
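Concretely, "anywhere" means the pinch and rotation recognizers live on the document view itself and never hit-test the touch location; they just act on whatever is selected. Continuing the hypothetical sketch from above:

```swift
import UIKit

extension CanvasViewController {
    // Call this from viewDidLoad. The recognizers attach to the whole
    // canvas, so the fingers never have to cover the selected object.
    func installIndirectGestures() {
        view.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
        view.addGestureRecognizer(
            UIRotationGestureRecognizer(target: self, action: #selector(handleRotation(_:))))
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let object = selectedObject else { return }
        object.transform = object.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1.0      // reset so each callback delivers a fresh delta
    }

    @objc func handleRotation(_ gesture: UIRotationGestureRecognizer) {
        guard let object = selectedObject else { return }
        object.transform = object.transform.rotated(by: gesture.rotation)
        gesture.rotation = 0.0   // same trick: consume the rotation as a delta
    }
}
```

In a real app you'd also want the recognizers' delegate to allow simultaneous recognition, so a pinch and a twist can happen in one motion.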
As I tested the app, it became more and more obvious that I needed an easy way to change the front-to-back order of objects. Many times I'd draw a mask (maybe a frog's face) and want to put a photographic face behind it, peering through a hole. In the old days of desktop apps this would be a menu item or a button or both, but not on mobile, where space is precious. For a long time I intended to have a table of all the objects in a document layer, with the user manipulating z-order by dragging one object's cell to a different position (and I still might do this), but it seemed like a good opportunity for another gesture, and the easiest that came to mind was a two-finger up-or-down drag.
This worked out quite well, as I could bring an object up or down several levels in a single swipe. I even used the same mechanism to reorder layers when no object is selected. What I'm giving up here is discoverability: few users will find this behavior on their own. I put a diagram in the initial onboarding dialog, but time will tell whether that's enough.
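If you're wondering how a continuous two-finger drag maps to discrete level changes, the trick is just to quantize the vertical travel. Another sketch with my placeholder names, and a step size I'd expect to tune by feel:

```swift
import UIKit

extension CanvasViewController {
    // Call this from viewDidLoad alongside the one-finger pan.
    func installZOrderGesture() {
        let twoFingerPan = UIPanGestureRecognizer(target: self,
                                                  action: #selector(handleTwoFingerPan(_:)))
        twoFingerPan.minimumNumberOfTouches = 2
        twoFingerPan.maximumNumberOfTouches = 2
        view.addGestureRecognizer(twoFingerPan)
    }

    @objc func handleTwoFingerPan(_ gesture: UIPanGestureRecognizer) {
        guard let object = selectedObject,
              let canvas = object.superview,
              let index = canvas.subviews.firstIndex(of: object) else { return }

        // Every 40 points of vertical travel moves the object one level,
        // so a long swipe hops several levels in one continuous motion.
        let step: CGFloat = 40
        let dy = gesture.translation(in: view).y
        guard abs(dy) >= step else { return }

        if dy < 0, index < canvas.subviews.count - 1 {
            canvas.insertSubview(object, at: index + 1)  // fingers up: toward the front
        } else if dy > 0, index > 0 {
            canvas.insertSubview(object, at: index - 1)  // fingers down: toward the back
        }
        gesture.setTranslation(.zero, in: view)
    }
}
```

The same handler can fall through to reordering whole layers when `selectedObject` is nil; I've left that out to keep the sketch short.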
Not that the button is dead. I had thoughts of dragging color swatches onto objects, but that was too much to ask the user to do consistently, so I have simple buttons that apply colors or other style settings to the selected object.
Look for Frog Draw in the App Store next week.