This interview introduces you to the KDE Accessibility Project (KDEAP). Interviews with other members of the KDEAP will follow.
I am a student of computer science at the University of Paderborn (Germany). Prior to my studies I lived with my parents, my twin brother and our older sister in Herzebrock (also in Germany). Some of my interests are singing and working with computers. Currently I sing in a choir that meets once a month. In the winter of 2000/2001 I studied two terms abroad in Ottawa, Canada.
For some years my mother has had a disease which causes her nerves to degenerate. As a result she has been unable to speak for about a year now. So my brother looked for text-to-speech systems that work with Linux. Unfortunately, all text-to-speech systems for Linux are either libraries or applications that expect the text on standard input. So I began to write KMouth. After the application had made some progress I contacted the kde-accessibility mailing list to ensure proper interoperation between KMouth and the upcoming KDE text-to-speech service (developed by Pupeno).
As time went on, the KDE Accessibility Project became an active project and the kdeaccessibility CVS module was created.
The KDE text-to-speech service can be seen as an API for speech-related KDE applications. It consists of a control center module, a daemon and a number of plug-ins. The plug-ins serve as bridges to the actual text-to-speech systems, whereas the daemon provides the API for the applications and passes the text on to the plug-ins. The control center module is used for configuring the text-to-speech service.
KMouth can use the KDE text-to-speech service for speaking (although it can also drive a text-to-speech system directly, without the KDE text-to-speech service). As far as I know, KMouth is the first application to make use of the upcoming KDE text-to-speech service.
Currently there are three applications in the kdeaccessibility module. An infrastructure like ATK and AT-SPI from the GNOME Accessibility Project (GAP) is still missing, but we are currently discussing how to change that. One possibility (the one we are currently focusing on) would be to write a bridge between Qt and ATK. Qt already has a QAccessible interface, which would need to be extended for this.
During design and implementation of the infrastructure we will make sure that our solution interoperates with the solutions of the other Accessibility Projects.
The current plan consists of three parts: we want to improve the existing applications, finish the work on the text-to-speech service (so that it can be included in either kdelibs/kdebase or the kdeaccessibility package) and implement an infrastructure like ATK.
Improving the applications will certainly show results by the next major release of the kdeaccessibility CVS module (which will be either version 1.1 or part of KDE 3.2). The text-to-speech service might or might not be part of that release. Unless something very bad happens we will have it ready for kdeaccessibility 1.2 or KDE 3.3. If and how soon we will be able to implement a general accessibility infrastructure depends on how much help we get from both Trolltech and the GAP people.
Whether kdeaccessibility will become an official part of KDE or stay an independent project is not decided yet.
Well, I can think of three scenarios.
My dream is that we will have a fully accessible KDE version with a full accessibility infrastructure and lots of assistive technologies. Thanks to the interoperability of the infrastructures, accessibility will not stop at the borders of KDE, so that almost all applications (GNOME applications, Mozilla, OpenOffice etc.) can make use of KDE assistive technologies. This is the best we can hope for.
The worst possible scenario would be that all currently active people leave the project because they become too busy with their jobs. In that case we would have to drop both the accessibility infrastructure and the text-to-speech service, so that only the three stable applications remain in CVS. KDE would then remain the only Unix desktop system with very poor accessibility. This is the worst we have to fear, but luckily it is not very likely.
More realistic is the third scenario: both the text-to-speech service and the infrastructure are implemented, and a few actively maintained applications exist. However, we do not have assistive technologies for all possible handicaps, so we make use of the various assistive technologies that exist outside the KDE project. These are usable thanks to the interoperability of the accessibility infrastructures.
(March 23, 2003)