Traditional methods for efficient text entry are based on prediction. Prediction requires a constant context shift between entering text and selecting or verifying the predictions. Previous research has shown that the advantages offered by prediction are usually eliminated by the cognitive load associated with such context switching. We present a novel approach that relies on compression instead. Users compress text using a very simple abbreviation technique that yields an average keystroke reduction of 26.4%. Input text is automatically decoded using weighted finite-state transducers, incorporating both word-based and letter-based n-gram language models. Decoding yields a residual error rate of 3.3%. User experiments show that this approach yields improved text input speeds.
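To make the compression idea concrete, the following is a minimal sketch of one plausible abbreviation scheme: keep a word's first letter and drop its remaining vowels. The abstract does not specify the actual technique, so the `abbreviate` function and the vowel-dropping rule are illustrative assumptions, not the authors' method; the decoding side (recovering full words with transducers and n-gram models) is omitted here.

```python
def abbreviate(word: str) -> str:
    # Hypothetical scheme (not the paper's): keep the first letter,
    # drop every subsequent vowel.
    if not word:
        return word
    return word[0] + "".join(c for c in word[1:] if c.lower() not in "aeiou")


def keystroke_reduction(text: str) -> float:
    # Fraction of keystrokes saved by abbreviating each word.
    words = text.split()
    original = sum(len(w) for w in words)
    compressed = sum(len(abbreviate(w)) for w in words)
    return (original - compressed) / original
```

For example, `abbreviate("prediction")` gives `"prdctn"`, saving four keystrokes; a decoder would then use language-model context to expand such abbreviations back to full words.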