This page is for LUG members who suddenly welcome, e.g., a blind person into their community and don't know anything about accessibility.
To learn more about the disabilities themselves, [http://tldp.org/HOWTO/Accessibility-HOWTO/ the Accessibility HOWTO] has pretty good information.
Now, how is accessibility handled? The basic principle is that a daemon keeps reading the content displayed on the screen and sends it to a braille device, a speech synthesizer, a magnification engine, etc. That said, there are actually two or three main ways for disabled people to use a computer.
Good Old Text Mode
There are several screen readers (BRLTTY, Suseblinux, speakup, yasr, Brass, ...) that simply get the content of the text console via /dev/vcsa*. The most widely used are BRLTTY, which supports a very wide range of braille displays; Suseblinux, in the Suse distribution; and speakup, a kernel patch for speech. The good thing is that text applications usually don't need to be modified, but they sometimes present information in a very non-accessible way (alsamixer shows audio levels vertically, for instance). There is a [http://brl.thefreecat.org/text-apps-a11y-test.html guide] for testing the accessibility of text applications. Sometimes they just need some configuration, see CategoryText.
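As an illustration of how these readers work, here is a minimal sketch (a hypothetical helper, not code taken from any of the readers above) of parsing a /dev/vcsa* snapshot: it starts with a 4-byte header (rows, columns, cursor x, cursor y), followed by one (character, attribute) byte pair per screen cell.

```python
import struct

def parse_vcsa(data):
    """Parse a /dev/vcsa* snapshot into rows, cols, cursor and text lines."""
    rows, cols, cur_x, cur_y = struct.unpack("BBBB", data[:4])
    cells = data[4:]
    lines = []
    for r in range(rows):
        row = cells[r * cols * 2:(r + 1) * cols * 2]
        # Even bytes are characters, odd bytes are colour/attribute codes.
        # cp437 is an assumption here; the real encoding depends on the console font.
        lines.append(bytes(row[0::2]).decode("cp437", "replace"))
    return rows, cols, (cur_x, cur_y), lines

# A real screen reader would repeatedly open("/dev/vcsa1", "rb") (which needs
# the right permissions), parse the snapshot, diff it against the previous
# one, and send changed lines to a braille display or speech synthesizer.
```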
Graphical Mode
The AT-SPI facility recently made it possible to make graphical applications accessible. This requires modifying the toolkit used by the applications. For now, only GTK applications (hence the GNOME desktop) are accessible this way; Qt applications (hence the KDE desktop) should follow shortly in KDE4. More details about applications can be found in CategoryGraphical.
emacs has its own screen readers, emacspeak and speechd-el, and since "emacs can do anything you want", this can be an all-in-one solution.
What is best?
As usual, there is no simple answer; it depends a lot on the person. For a developer, text mode will probably be quite fine. Otherwise, graphical mode will probably be easiest, since that's what people are used to nowadays. The emacspeak solution is a bit peculiar in that it's an "all in one" solution, but it works quite neatly; the only problem is that "only" emacs becomes accessible.
Distributions have various levels of accessibility. In a few words, and in alphabetical order:
- debian: starting from Etch, Debian has integrated support for braille devices as early as installation. Starting from Lenny, it also has support for hardware speech synthesis.
- grml is an accessible live CD that can be used for cross-installing other distributions.
- knoppix has a speakup-enabled kernel.
- mandriva: I don't know.
- oralux is a text-mode, speech-dedicated, Knoppix-based live CD.
- suse has had integrated braille support for a long time through Suseblinux. (as early as installation?)
- ubuntu has integrated accessibility support on the live CD and braille support on the alternate CD.
On this wiki (see CategoryCategory), you can find a bunch of applications that we know to be accessible. But as said above, text applications are generally accessible, and GNOME applications are getting more and more accessible (see http://live.gnome.org/Orca/AccessibleApps for the current status).
The list of braille devices supported by BRLTTY (and hence by Orca and LSR, since these connect to BRLTTY via BrlAPI) can be found in its [http://www.mielke.cc/brltty/doc/README.txt README] file. I don't know about Suseblinux. USB devices are usually detected automatically (which generally triggers the text mode of installation CDs). Serial devices sometimes are too, but that is often considered unsafe, so they need to be configured by hand at boot by appending a parameter to the kernel command line, as explained at [http://mielke.cc/brltty/download.html#braillified].
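As a hedged example (the driver code and serial port below are placeholders; the exact syntax is documented at the link above and in the BRLTTY manual), the boot parameter typically looks like:

```
# Appended to the kernel command line at the boot prompt, e.g. for a
# display whose two-letter BRLTTY driver code is "ht", on the first
# serial port:
brltty=ht,ttyS0
```

The driver code for each display model is listed in the README mentioned above.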
Speakup supports the Accent SA, Accent PC, Apollo II, Audapter, Braille 'n' Speak, DECtalk Express, DECtalk External, DECtalk PC, DoubleTalk, Keynote Gold PC, LiteTalk/DoubleTalk LT, Speakout and Transport hardware speech synthesis boards, as well as software speech synthesis.
Software speech synthesizers are quite varied; the [http://larswiki.atrc.utoronto.ca/wiki/LinuxUnixAccessibilitySoftware#Speech LarsWiki] enumerates them. English speech is usually quite good, other languages less so, so your mileage may vary. Now, to make a screen reader X use a speech synthesizer Y, either X already knows how to drive Y, or you can use Speech Dispatcher, which X hopefully knows how to drive and for which Y hopefully has a backend.
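For instance, with Speech Dispatcher installed, its spd-say client lets you check the whole chain from a shell (output module names depend on what is installed on your system; espeak is a common one):

```
# Have the default synthesizer speak a sentence:
spd-say "hello world"
# Explicitly select an output module (synthesizer backend) and a language:
spd-say -o espeak -l en "hello world"
```

If this speaks, any screen reader that drives Speech Dispatcher should work with the same synthesizer.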
For users with low vision, magnification can be useful.
On the Linux console, this can be done with svgatextmode or with the stty tool (see the linux page).
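As a rough additional sketch (this assumes a console with the kbd tools installed, which the text above doesn't mention, so treat it as an assumption), the console font can also simply be doubled in size:

```
# Double the size of the current console font (kbd's setfont):
setfont -d
```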
The X server itself used to provide a way to achieve magnification: pressing ctrl-alt-plus / ctrl-alt-minus cycles through the configured video modes. This is however limited by the capabilities of the video board (and its driver). If you are lucky, 320x240 (~4x magnification) or even 160x120 (~8x magnification) is available.
Newer X servers no longer provide this shortcut out of the box; the video modes have to be configured by hand, see Xorg.
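A hedged sketch of what such a hand-made configuration might look like in xorg.conf (the identifier and mode names are placeholders; which modes are usable depends on your hardware):

```
Section "Screen"
    Identifier "Screen0"
    SubSection "Display"
        # Modes that ctrl-alt-plus / ctrl-alt-minus will cycle through,
        # from normal size down to the strongest magnification:
        Modes "1024x768" "640x480" "320x240"
    EndSubSection
EndSection
```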
The GNOME desktop provides gnome-mag, driven by Orca, which magnifies up to 32x (not a hard limit), either:
- by splitting the screen between a non-magnified and a magnified part,
- or in full screen, by using two screens and telling it which one should get the magnified view,
- or in full screen with only the magnified screen; this requires enabling the Composite extension.
How does it compare to Windows?
Braille and speech accessibility of the GNOME desktop is not yet as good as what JAWS and Window-Eyes achieve on Windows. But it gets better every day, and what we have now is already fairly good. Still, some points deserve attention:
- Even blind people can install Linux themselves.
- JAWS and Window-Eyes are very expensive (and this adds up to the already high price of the braille device, if any).
Other kinds of accessibility issues
Blind people are not the only ones who need software help. Quadriplegics may, for instance, use dasher to type by just pointing their eyes at letters; the GNOME On-screen Keyboard may be used to quickly select menu elements, etc.