Beyond non-Latin writing systems, is an input method framework also the right architecture for handling input on touchscreen devices without a physical keyboard? I'm thinking of on-screen keyboards (which might also have dictionaries and be fuzzy/predictive), handwriting recognition, possibly gesture recognition, and so on.
From my experimentation with GTA02/GTA04 phones, I've always thought that this would be the right architecture (for example, because it gives on-screen keyboards and the like a way to pop up when an editable widget gains focus), but it has surprised me that - except for the use of matchbox-keyboard-im in Maemo - touchscreen input generally does _not_ appear to be architected in this way.
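To make the idea concrete, here is a toy sketch of what I mean. All names here are hypothetical (not any real toolkit's API): the point is just that an IM framework already delivers per-widget focus notifications, so an on-screen keyboard can be one more IM backend that shows itself on focus-in and hides on focus-out.

```python
class OnScreenKeyboard:
    """Hypothetical OSK backend; a real one would render a keyboard,
    possibly with a dictionary and fuzzy/predictive matching."""
    def __init__(self):
        self.visible = False

    def show(self):
        self.visible = True

    def hide(self):
        self.visible = False


class InputMethodContext:
    """Per-widget IM context, loosely modelled on the contexts that
    toolkit IM frameworks attach to editable widgets."""
    def __init__(self, backend):
        self.backend = backend

    def focus_in(self):
        # The toolkit calls this when the editable widget gains focus;
        # the backend decides how to present itself (here: pop up an OSK).
        self.backend.show()

    def focus_out(self):
        # Called when focus leaves the widget; the OSK goes away again.
        self.backend.hide()


osk = OnScreenKeyboard()
ctx = InputMethodContext(osk)
ctx.focus_in()
print(osk.visible)   # → True
ctx.focus_out()
print(osk.visible)   # → False
```

The same context object could just as well wrap a handwriting or gesture recogniser, which is why the IM framework seems like the natural pluggability point to me.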
Is that because there's some better or simpler way of handling touchscreen input?