>> Merrie Morris: It's my pleasure to introduce Richard Ladner. He's joining us this morning just from across the lake at UW where he's a professor in the computer science and engineering department. He also is an adjunct faculty member in the electrical engineering and the linguistics departments at UW.
And Richard lately has been focusing his research interests on accessible computing. So in particular, he studies technology for the deaf, the blind, and the deaf blind. Although before that, he was a theoretical computer scientist so, he knows quite a few tricks. And Richard is a member of the Board of Trustees of
Gallaudet University in Washington, D.C., which is the only liberal arts university in the world that serves deaf people. So I'm sure he has a lot of very interesting things to share with us today during his talk. And I'm going to let him tell us about new directions in accessible computing. So thank you.
>> Richard Ladner: Thank you for that nice introduction. So I think it was maybe my fourth talk at MSR over the years, and they've all been quite different from each other. So this is the first one I think I've done on accessibility.
So I always like to start these talks by just introducing you to some people who have disabilities, and some of these people I do know. The first person there on the left is TV Raman, who is an engineer, a scientist at Google, and quite an inventor in his own right.
On the right is Chieko Asakawa who is at IBM Japan in the research division.
That's IBM Tokyo Research Labs. And she runs an accessibility group there.
She's totally blind. So is TV Raman. And they both have PhDs.
A couple of other friends of mine. Raja Kushalnagar. In fact, he's in Seattle this week. He's normally in Houston. He's a PhD student at the University of
Houston and will be getting his PhD this year.
And the person on the right is Christian Vogler who received his PhD maybe five or six years ago at Penn. And he's now in Europe. He's originally from
Germany. And both these people are totally deaf. And on the right you can see
Christian Vogler is very interested in sign language recognition, and from those data gloves he's wearing he's getting some signals, which you can see in the different colors there. And somehow he's able to interpret those signals as particular signs. Now, I have to say sign language recognition is really in its infancy, and it's not working very well, so there's a lot of research to do to make it work. Sign language recognition -- from video, or even from data gloves, or from any kind of sensors at all -- is an open problem.
The next person is Bapin. I won't even try to pronounce the rest of his name.
He's -- he works for the Helen Keller National Center. I met him last summer.
He's deaf and blind. He's going to be visiting University of Washington next week, and he's going to be giving a talk in our deaf series on technology for deaf-blind people.
He's a PhD wannabe at the moment. He's not in grad school. And I'm wondering if he actually will be going to grad school, because his wife just had a baby, their first child, so I think he's going to be pretty busy for a while. So we'll see how that works out. But he is a technologist and he has a degree in computing.
This is an engineer and Iraq War veteran, John Kuniholm. I don't know him personally, but I thought it was kind of interesting that he was on the cover of Spectrum Magazine last year. And as you can see he has -- we'll call it a bionic arm, an artificial arm, that's very, very high tech. And so this is an example of somebody who was an engineer already and decided to turn his engineering toward, you know, solving a problem for himself. And we'll see that theme as we go along.
Another gentleman I happen to know is Geerat Vermeij, who is a distinguished professor at the University of California Davis in evolutionary biology, and his specialty is shells. And he's holding a shell there. He's totally blind. But somehow, I guess, his tactile sense is pretty good, and smell, for understanding shells. He's very famous. He's written hundreds of papers and is very accomplished.
And of course there's Stephen Hawking, who I don't really have to introduce. He's a very famous physicist, probably the most famous physicist living today, who is severely disabled and uses speech synthesis even to communicate. So he has something like ALS, some kind of degenerative disease.
I should introduce a couple of our students at University of Washington. On the bottom is Sangyun Hahn, who is Korean. He's totally blind. And the young man above him there is Zach Lattin, who is a math major. He graduated like three years ago, I think. This picture is a little old. And they're both totally blind. And they are both standing or sitting at a computer. Of course they're not reading the screen; they are listening to a screen reader so that they have access to the computer. And they both work with me on various projects. Some of them I'll mention too.
I'm hopeful that Sangyun will finish his PhD this year. I'm pretty sure he will. And he's doing a dissertation in natural language processing under Mari Ostendorf who is a colleague of mine.
So we've met a bunch of people here. And I just wanted to set your minds that we're not talking about people who are not competent. They may be disabled, but they are competent. In fact, most of the ones I showed you have PhDs or are getting PhDs. So I want to put this impression in your minds that having a disability may be a disadvantage, but you can still accomplish quite a bit.
I should say a little bit about myself. My parents were disabled. They were deaf.
So I kind of grew up with disability. So I, you know, have actually quite a positive feeling about disability, since it's sort of in my nature.
So people with disabilities can do almost anything in almost any scientific field.
So that's kind of what I'm trying to say here. And interestingly enough, and this has come up a lot, people with disabilities are often highly motivated to pursue careers in accessible computing research. Just this week I got a call from a prospective graduate student who is blind and is interested in doing graduate work in computer science.
I don't know, several months ago I got a call from somebody who is blind who wanted to work with me. And I get calls from people who are deaf who want to work with me. So there are people out there that really want to contribute because it affects them personally.
And I'm not saying a person with a disability has to work on projects that affect them directly -- you know, I was a mathematician, and of course my parents had nothing to do with mathematics in my early years, but I really enjoyed it and so on. Working on something that affects you is one option, and there does seem to be an affinity there. But they could just as well be mathematicians. And I know some blind people and deaf people who are mathematicians that don't work on any kind of disability related project. Yes, question in the back.
>>: Given that this is a little bit difficult endeavor and you have disabilities and given that we know that when you start the PhD program not everybody finishes and given your interest, have you done a comparative study to sort of understand how many people with disabilities finish vis-a-vis people without disabilities?
>> Richard Ladner: That's a good question. So the question is, have we seen differences between people with disabilities and people without disabilities in terms of completing their degrees. And I have some data I'm going to show you momentarily, so can you wait for the answer to that question?
So what we're going to do today is like five points. Models of disability, data, impact of access technology, accessibility research and empowerment. And so that's kind of a theme of this talk actually is empowerment. And so that will be coming up quite a bit.
So I'd like to mention these five models of disability that I've come across over the years. The one that most people subscribe to is the first one, the medical model: disabled people are patients who need treatment and/or cures. So if there were some cure for blindness, everybody would go for it. If there were a cure for deafness, everybody would go for it, or whatever it is. If you lost your arm and you could have it replaced with something -- I don't know, some kind of clone, or some kind of stem cell that would grow back an arm -- would you do it, right? So that's sort of the medical model. It's cure the problem. There is a problem, and you have to cure it.
The education model is a little different. Disabled youth are in need of some kind of special education. They need accommodations, they need all sorts of things to be successful in education. In fact, new classes of disability have arisen over the years, learning disability, dyslexia, things like that. I think 80 or 90 percent --
I think I have the numbers in another chart I'm not going to show you -- I think 60 percent, excuse me, 60 percent of all school-age children that have a disability have a learning disability. So it's not these other disabilities.
The rehabilitation model is: disabled people need assistive technology, some kind of assistive technology and training, for employment and everyday life. So, you know, somehow disabled people should have a job and have a sort of normal life. And we want to give them devices and assistive technologies to make that possible.
The legal model is: disabled people are citizens who have rights and responsibilities like other citizens. Accessibility to public buildings, spaces, voting, television, and the telephone are some of these rights. So for example in the United States, a deaf person can use a video relay service for free to call a hearing person. So the right to use a telephone is sort of in the law.
Voting systems that are being proposed nowadays have to be accessible. So a lot of money and energy is going into voting, voting systems.
Television is now accessible by using captions. And also audio descriptions for blind people. Not all shows have audio descriptions, but many do.
Finally there's a social model, which is kind of the model I grew up in, because I didn't know about these other things. Disabled people are just part of the diversity of life, not necessarily in need of treatment and cure. They do need access when possible.
So basically there's a variety of people: there are men, there are women, there's black, there's white, there's disabled and non-disabled. It's just the way it is.
And so you have to make do with your circumstances. And technology can be very helpful in any of those people's circumstances.
So I just want to distinguish some different kinds of technologies. We'll call a prosthesis some kind of augmentation to restore lost function. So for example, that arm that the Iraq War vet was wearing is kind of like a prosthesis. It looks like a hand, it behaves like a hand, although probably not very well, and the idea is to try and get the hand back the way it was.
Assistive technology is popular in the rehabilitation literature. And the emphasis is on need for assistance. So it's a -- the term itself is somewhat paternalistic you know, that somehow these poor disabled people need assistance so we're going to give them assistance. So it's a word I kind of avoid. Paternalism is a term that you'll hear which means from people who are disabled that somebody is acting paternalistically, they are treating me like a child, they're treating me like I can't control my own life and things like that.
So assistive technology, not that it's bad, I'm not saying it's bad, it's just the term assistive is maybe a little bit of a turnoff. Access technology has a much stronger empowerment tone to it. And that's the term I use most of the time.
So access technology allows an activity that would be difficult or impossible to achieve without it. The emphasis is not on restoring function but on achieving an end goal by whatever means possible. So for example, a screen reader. You know, you're not trying to cure the vision of somebody who is blind, but you're trying to give them access to the information that's on a computer.
Video phone. So we're not trying to make these deaf people hearing, we're trying to give them access to a phone that actually can -- the phone doesn't understand sign language but they can communicate in sign language over the phone, which is their native language.
Or a wheelchair. A wheelchair is something that allows you to get from point A to point B. But you don't have to have new legs or have a spinal cord fixed or anything like that. It does allow that. So that's access technology. It allows you to do things that you wouldn't be able to do otherwise. And so I don't, you know, think there's a whole lot of difference between assistive technology and access technology, it's just that the word access sounds a little better to me and to people.
So here's that data I promised you. And this is just one slide really. I have a whole bunch of other data, but I decided it's kind of boring. So there are about 650 million people worldwide who have some sort of disability. In the United States, 16 percent of the population age 15 to 64 is disabled. So that's a pretty big percentage of people. That's a higher percentage than the number of African-Americans, for example, in the United States.
>>: Does that refer to physical disability or --
>> Richard Ladner: All forms of disability. Including learning disabled, cognitive disability, intellectual disability. I didn't go through -- I should have had a slide on different categories of disability.
So this is getting to answer your question. 10 percent of the workforce is disabled. So that's lower than the 16 percent. So there are a bunch of disabled people that are not in the workforce, because either they can't hold jobs -- maybe they're schizophrenic, for example, and there's no way for them to hold a job -- or they can't get a job.
Only five percent of the STEM workforce, STEM stands for science technology engineering mathematics which is, you know, us, is disabled. And only one percent of PhDs in STEM are disabled.
So you can see that those persons with disabilities as they progress through the education system either they lose their disability, which is not too likely, or they don't make it all the way to the PhD. So it's a small number.
I do have more data for bachelors degrees and so on. So for bachelors degrees in computer science, about 11 percent of students entering bachelors programs have a disability, but only five percent graduate. So it's a smaller number. So there is a 50 percent falloff there. Yes, question over there.
>>: How do you know that these numbers are -- are these numbers all collected using the same methodology or is it possible they're being reported differently at different times.
>> Richard Ladner: The question is are these numbers collected by different methodologies? I didn't put all the references here but they're definitely different methodologies. So there are some inconsistencies that you'll see in the data.
For example, if you look around the world, the concept of disability might be different in one country versus another country. There might be a heavy stigma in one country about having a disability, and people don't report it if it's self-reported data.
If it's reported by other people, there are hidden disabilities that people wouldn't even know about like dyslexia or color blindness or whatever, something like that.
So the data is fraught with errors. Some of this data is very good, though. For the one percent figure, every PhD in the United States fills out a form when they graduate, and on that form you're asked if you have a disability. So this is all self-reported data. They also ask for your race, ethnicity and your gender. So we do know how many PhDs are women and how many are men, for example.
Other data like the workforce data is done by sampling, and if they do good random sampling, it will be good data. Also, I've done a lot of rounding here -- everything is a full percent, so I'm not worried about the decimals. Okay, in the back of the room there.
>>: The 10 percent of workforce is US only or --
>> Richard Ladner: Yes, that's US only, yes. The workforce data is -- in fact, the data -- only the top one is international data.
Did you have another question?
>>: I'm just a little skeptical, because if PhD students have a learning disability, they're going to likely go into an area where that disability hurts them the least. And so they may not think of themselves as disabled when they're PhD students, whereas they might have in high school or college.
>> Richard Ladner: That's a good point. I think that's, you know, perhaps -- all self reported data has this problem that you have to believe that you have a disability and that it affects you. So they might not think, well, I didn't have any accommodations when I was a PhD student, so I'm not disabled.
And there is even in the United States a stigma associated with having a disability.
So now we're going to move on to the impact of access technology. So this is usually kind of surprising to some people. Personal texting was first invented for deaf people.
So we all do personal texting now, but way back in the 1960s, the TTY was invented. Actually the TTY as shown here was actually a surplus Western Union
teletypewriter and that's why it's called a TTY. And all you needed to use the phone line with a TTY was a modem. In fact, the modem was invented by a deaf man to connect the TTY to the phone line, so the electronic signals from the TTY would be converted to sounds, and at the other end there would be another modem and it would translate the sounds back to the signals for the TTY.
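The modem's job described above can be sketched as frequency-shift keying: each bit coming off the TTY is sent down the phone line as a short burst of one of two audio tones. Here is a minimal sketch in Python; the 1400/1800 Hz tones and 45.45 baud rate are the commonly cited historical Weitbrecht-modem values, but treat this as an illustration of the idea, not a faithful implementation.

```python
import math

SAMPLE_RATE = 8000        # audio samples per second
BAUD = 45.45              # historical TTY signaling rate
MARK_HZ = 1400            # tone for a 1 bit (illustrative value)
SPACE_HZ = 1800           # tone for a 0 bit (illustrative value)

def fsk_modulate(bits):
    """Turn a bit sequence into audio samples: one tone burst per bit."""
    samples_per_bit = int(SAMPLE_RATE / BAUD)
    samples = []
    phase = 0.0
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        step = 2 * math.pi * freq / SAMPLE_RATE
        for _ in range(samples_per_bit):
            samples.append(math.sin(phase))
            # carrying the phase across bits keeps the audio click-free
            phase += step
    return samples

# A 5-bit Baudot-style character becomes five tone bursts of audio.
audio = fsk_modulate([1, 0, 0, 1, 1])
```

The receiving modem does the reverse: it measures which tone is present in each bit period and reconstructs the bits for the other TTY.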
And so my parents had a TTY oh, early '70s, 1971, '72, but they actually came into existence in the '60s. So deaf people were doing personal texting long before you were doing e-mail or maybe long before you were born, some of you.
So there's sort of a modern TTY with a built-in acoustic modem in the middle picture. And then on the right there's just a screen shot of Instant Messaging. Is that Microsoft? Yeah, wave. I picked the right one. Windows Live Messenger on the right. So personal texting now is ubiquitous and everybody does it, they have SMS on their phones and on and on and on. But the pioneers were the deaf people. In fact, the deaf people invented the whole concept.
So, optical character recognition for blind people. On the left there is a Kurzweil machine, circa 1976. And I remember seeing one when I visited NSF in the mid '80s, a Kurzweil machine. And they're about the size of this table here I'm sitting in front of, or a small refrigerator. And the idea was for a blind person to read books. So you would place a book page or a piece of paper on the device and it would scan it and read it out loud to you. And that was the first optical character recognition. And now optical character recognition is ubiquitous, and in a few years we'll have all books that have ever been printed optically scanned and put into digital form.
So on the right there is a handheld version of what's on the left. It's called a
K-NFB Reader Mobile. And maybe Loren has one. I don't know.
>>: [inaudible].
>> Richard Ladner: You want one. This K-NFB Reader Mobile also has GPS and other features that make it very handy. And so you could just take a picture of a piece of paper and, on the device itself, it will scan it and translate it to speech. And you have a way of navigating it as well. So what was this large is now handheld.
So, speech recognition for hands-free access. Speech recognition was partially pioneered so that people who couldn't use their hands could talk to a computer. And I can remember again back in the early '80s visiting Boeing, and someone said, come over and meet this guy. So I met this guy who was completely paralyzed and he was talking to his computer. And this was just a mom and pop system that somebody had given to this guy so he could actually talk to his computer. And he couldn't put in just general speech, he could just handle a few commands, maybe 50 or 60 commands, and it made mistakes all the time. But -- so hands-free talking to a computer, well, doesn't Windows 7 have that built in now?
>>: Vista has it.
>> Richard Ladner: Vista had it too? I never bought Vista. I have Windows 7.
So here's a picture on the left of Ray Kurzweil himself doing some hands-free access to a computer, and on the right is a UW student, you know, maybe 30 years later, with a commercial system doing speech input.
So, modern software that's coming out, like Windows 7, has accessibility features built in already. So features that used to be special and bought from a third party are now built into Windows 7. The picture on the left shows a screen shot of the magnifier for Windows 7. And then there's also the iPhone VoiceOver, so that you can do speech input and output for an iPhone now. And so they make these accessible. So the concept of having more accessibility in common products is becoming ubiquitous. And if you think -- you probably don't remember, but there was a time when curbs did not have curb cuts, right, and somebody in a wheelchair could not go over those curbs very easily. They were confined to their block. Maybe there were some driveways on their block; they could go on a driveway and then into the street and into another driveway and things like that. So those are now ubiquitous around the country.
And they're still putting them in. And nobody even thinks about them. And mothers with little children in prams really like them because they don't have to go over a curb. So the concept is that things that are for persons with disabilities become for everyone eventually. Not always, but there are very good examples. Yes, in the back there's a question.
>>: [inaudible] must have been renamed because I look at [inaudible] the iPhone, it's become a voice distorting app.
>> Richard Ladner: Voice, what, distorting?
>>: Voice distorting app.
>>: Distorting app.
>> Richard Ladner: Voice distorting [inaudible].
>>: It is called voiceover. I know that for a fact.
>> Richard Ladner: That's interesting. So the trend here is accessibility solutions become mainstream solutions, so it seems to be common trend. So for a company like Microsoft, they should be thinking about accessibility because downstream the stuff will probably be for everyone. And if you're a leader in accessibility, you're probably going to be a leader in technology.
So a potential trend which I'm very interested in is that we have more and more programmable platforms. And with programmable platforms we can do multi-function accessibility solutions on standard platforms. So you don't need a special purpose device.
For example, that Kurzweil reader is just on some off-the-shelf Nokia phone. So now we have more and more programmable devices like laptops, notebooks, phones. And so these devices can be made multi-purpose by just programming, basically. Maybe you'll have to have some added physical functionality, maybe a special keyboard for input or Braille for output, but most of it will be just a programmable platform.
I have a very good example of this in one of my projects at the University of Washington, the digital pen. I don't know if you guys have heard of a digital pen. But a digital pen like the picture shows on the left actually has a little camera on it. And the camera can take pictures of patterns like bar codes. So this figure over here is a tactile graphic, and so what's black is raised, you know, so you can feel it. So it's a graphic. And inside these little circles are some little symbols that are printed on the paper. They're not tactile.
But if I put the pen over a circle, the pen actually reads some text. For example, if you put it over this, it will say Q0. This is a three-state finite-state machine. This says something else. This says Q accept or something like that down there. And by the way, these circles here have a very long description -- I can't remember, some regular expression or something -- that's in these little circles. And it will just be read to you out loud.
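The behavior just described can be thought of as a lookup: the dot pattern under the pen's camera identifies a region of the page, and the region maps to text that gets spoken aloud. A toy sketch of that mapping follows; the region bounds and labels are made up for illustration (the real pen uses the vendor's patented dot patterns, not coordinates like these).

```python
# Map regions of the tactile graphic to text the pen speaks aloud.
# Region bounds and labels are hypothetical, for illustration only.
REGIONS = {
    "circle_q0":     {"bounds": (10, 10, 30, 30),  "text": "q0, start state"},
    "circle_q1":     {"bounds": (50, 10, 70, 30),  "text": "q1"},
    "circle_accept": {"bounds": (90, 10, 110, 30), "text": "q accept"},
}

def describe(x, y):
    """Return the description for the region containing the pen tip."""
    for region in REGIONS.values():
        x0, y0, x1, y1 = region["bounds"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region["text"]
    return None  # the pen is not over any annotated region

# Touching the first circle speaks its label.
spoken = describe(20, 20)
```

In the real system the `text` for a circle could be arbitrarily long, like the regular-expression descriptions mentioned in the talk, which is exactly what makes this richer than a printed label.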
So this digital pen is a programmable object. Its original purpose was for people to take notes on special paper. You take notes with the paper and the pen, and the pen is also listening to the lecture and recording it, so when you go back over your notes again you can hear the lecture at the same time. It was kind of a single purpose thing.
And I guess there was a lot of pressure. This was such a great idea, a lot of pressure on the company. And the name of the company I can't remember, digit pen I think or something like that, to open it up, and they did. So they made an
API for their digital pen. And now people can program third party applications and it's a little computer, if you like. And so one of my projects is to make tactile graphics accessible using a digital pen.
So don't just think of laptops and cell phones; there's other stuff coming out -- refrigerators, you know -- that's all going to be programmable.
Yeah?
>>: What's inside the circles, are they symbols or are they actual tabs?
>> Richard Ladner: They're symbols.
>>: Okay.
>> Richard Ladner: But the symbols are really tiny. They all look the same, in fact, to the human eye.
>>: [inaudible].
>> Richard Ladner: Like a bar code. But they're patented by this company, and you have to get a license to use them and stuff like that.
So what's the problem? It seems like -- you know, this seems like the future.
Well, hmm, maybe not. So this is an article from Tuesday, September 15th,
2009: Insurers Fight Speech-Impairment Remedy. This is from the New York Times. Insurers, including Medicaid, won't pay for a $300 speech solution on an iPhone but will pay $8,000 for a single-function, in quotes, medical device for text-to-speech generation. Was that a snicker, Loren?
So the problem is the iPhone is not considered a medical device, it's considered a personal device, and so medical insurance won't pay for this. So instead, somebody said, well, I'm going to take an iPhone and I'm going to make it do one thing and I'm going to make it -- well, it's not an iPhone, it's some other device.
I'm going to make it do one thing, and now I can call it a medical device and I can charge $8,000 for it.
So basically you turn off all the functions except for the medical function and -- is there something wrong with that picture? You know. That's our system today in the United States. It's crazy.
So disabled people are viewed from the medical model. This is the medical model problem, you know, that a person with a disability has to have a medical solution. And the medical system, insurance system and device systems and assistive technology, it's an industry. It's part of the medical industrial complex.
So here's the kind of future I'm envisioning. I finally got to this slide. A blind person buys a standard cell phone and a data service. So they're off and running. And they just download accessibility applications to suit their needs.
They might have to pay for some of them. They might need some help installing them and things like that, but basically they're on their own. You know, you could have a GPS application for location and directions. You could have a bar code reader. You could have an OCR. It's something like that K-NFB reader. And then more and more things come along that make your device give you more functions.
The iPhone now has what is it 100,000 applications for the iPhone already?
Some of those probably are good for people with disabilities. I'm sure quite a few of them are. So we're just basically moving from the social model -- excuse me, from the medical model to the social model. That is, devices that everybody has, or lots of people have, can become accessibility devices; all you need are programs. And of course that's what Microsoft does, right, they write programs here. And you guys should be a leader in this.
So talk a little bit about accessibility research. I did this little survey a while back.
So CHI is the computer-human interaction conference. I don't know if that's the official title, but it's sort of the highest ranked HCI conference. And so I looked back over the years in the ACM digital library and calculated the number of papers that had the word disability in them. And in the early days, between 1982 and '85, there were zero. And if you look through the years, the next five years, '86 to '90, there was two percent, five papers. Then five percent the next five years. Then six percent the next five years. Then suddenly 23 percent -- that was between 2000 and 2005. Somehow that community got turned on to the idea that there are really interesting problems to solve, hard research problems to solve, having to do with accessibility and persons with disabilities. In the last four years it's been about 20 percent. So it's holding up.
So I think what this says is -- not to the HCI community; the HCI community has already bought into the idea that there are lots of interesting hard problems to solve in accessibility. They've bought into it. But somehow it's growing. Or it has grown and is at least steady.
And there's a bunch of other conferences, some of them pretty new. ASSETS is an ACM conference for accessible computing. And it's been around -- I think this is its 11th or 12th year. And you know, it has an acceptance rate of about 30 percent. ICCHP is a European conference like ASSETS. There's a CSUN conference, which is a major conference for accessible technology. It's held every year at the California State University, Northridge, or at a hotel in that area.
And then there's one that's happening that's called ATIA -- the Assistive Technology Industry Association has a conference every year. And then finally there's W4A, which is an accessibility conference for the web, collocated with WWW, which is the main web conference.
So there's just a -- at least from my perspective a lot of activity going on in research and in development and innovation in this area.
So I want to talk to you about a few of the things that we've been doing at the University of Washington. The first one is WebAnywhere. And this was developed by -- I honestly didn't touch that mouse. Maybe I shook the table a little bit. It's a screen reader as a web application, developed by Jeffrey Bigham about two years ago. And it's won some significant awards, as you can see on the top. The most significant is the Andrew W. Mellon Foundation Award for Technology Collaboration. And that award is for open source projects, and this is an open source project. And I'll tell you a little more about it as we go on.
Maybe I should try to open up the web page here, see if it opens up.
I might turn up the sound a little. Can I do that? Okay. That's good.
>>: WebAnywhere has been initialized. It is now ready to use. Press the forward slash key at any time. You will hear a list of available keys. WebAnywhere --
>> Richard Ladner: Now, there's a keyboard for this thing somewhere?
>>: That requires no new software to be downloaded.
>> Richard Ladner: See, if I were a blind person, I wouldn't use the mouse, I would use a keyboard. [laughter].
>>: Which means you can access it --
>> Richard Ladner: Way down there. Okay.
>>: -- locked down public computer terminals. WebAnywhere enables you to interact with the web in a similar way to how you may have used other screen readers before.
>> Richard Ladner: Okay. Sounds like it stopped.
>>: With access to --
>> Richard Ladner: Oh, that stopped it. Usually if I hit the control key, it stops, and it didn't stop when I hit the control key. So anyway, just to give you an overview, this is basically a screen reader. Remember I talked about that earlier. So it's a way for blind people to access the computer. But this only allows you to access web pages. And it's a service -- an interactive web service.
So a blind person can go into this web page, WebAnywhere page, and go from there to any other page and have that page read to them and also be able to navigate.
So for example, if you want to go to a place to put input, you do control-I, and it will take you to that place. If you want to go where you put in a new URL, you do control-L. And the down key will take you to the next paragraph, and so on. So it's a way to navigate. And it's open source, it's out there, anybody can use it; we have about a thousand users a week. And it's been translated into, I think, 25 other languages besides English. And that was not done by Jeff Bigham, it was done by people out there in the open source community.
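The interaction model just described -- jump to the next input field with control-I, the address bar with control-L, the next paragraph with the down key -- can be sketched in a few lines. This is a hypothetical Python reconstruction of the idea; the real WebAnywhere is JavaScript running in the browser, and these class and method names are invented for illustration:

```python
# Hypothetical sketch of WebAnywhere-style keyboard navigation over a
# flattened list of page elements. The key bindings come from the talk;
# everything else (names, element representation) is an assumption.

class ScreenReaderCursor:
    def __init__(self, elements):
        # elements: list of (kind, text) pairs, e.g. ("input", "Search box")
        self.elements = elements
        self.pos = 0

    def next_of_kind(self, kind):
        """Advance to the next element of the given kind; return its text."""
        for i in range(self.pos + 1, len(self.elements)):
            if self.elements[i][0] == kind:
                self.pos = i
                return self.elements[i][1]
        return None  # nothing left of that kind; stay put

    def handle_key(self, key):
        # Ctrl+I -> next input field; Down -> next paragraph; Ctrl+L -> URL bar
        if key == "ctrl+i":
            return self.next_of_kind("input")
        if key == "down":
            return self.next_of_kind("p")
        if key == "ctrl+l":
            return "address bar: enter a new URL"
        return None
```

The returned text would then be sent to speech synthesis, which is how a keyboard-only user hears where the cursor landed.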
So there's one example. Another one is a newer project, a mobile accessibility project which I just started, and it's a bridge to the world for blind, low-vision, and deaf-blind people. So the idea is to take a phone like this one here and make it into an accessibility device by programming it. And I actually am teaching a course -- it doesn't want to start. Here it goes.
>>: Point the camera and tap the screen to identify the color. Teal or blue.
>> Richard Ladner: I could show different things.
>>: Orange.
>> Richard Ladner: It says this is orange. So you know, this was just an undergraduate programming this up. It actually has two programs in there, and both agreed this was orange. For the other object, it said teal or blue -- two different opinions from the two different programs.
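The color identifier presumably maps a camera sample to the nearest named color. Here is a minimal sketch of that idea; the five-entry palette is invented for illustration (a real app would use a much larger one), and the talk does not describe the students' actual algorithm:

```python
# Minimal sketch of color identification: map a sampled RGB triple to the
# nearest named color by squared Euclidean distance in RGB space.

PALETTE = {  # illustrative palette, not the demo app's actual table
    "orange": (255, 165, 0),
    "teal":   (0, 128, 128),
    "blue":   (0, 0, 255),
    "white":  (255, 255, 255),
    "black":  (0, 0, 0),
}

def name_color(rgb):
    """Return the palette name closest to the sampled RGB triple."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: dist2(rgb, PALETTE[name]))
```

Two programs using different palettes or distance metrics can easily disagree on borderline samples, which is one plausible reason the demo said "teal or blue."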
So you could imagine, you know, just adding more applications like that to this phone. It has all these sensors. It has GPS. It has a camera. It has a compass, so one of my students is building a little talking compass. And lots of other things. And also it's networked. So we have a bar code reader on here that goes out to the network and identifies a product, and things like that. So just by adding new applications, this phone becomes an accessibility device.
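The talking compass could work roughly like this: take the azimuth the phone's compass sensor reports and turn it into a spoken phrase. This is a sketch under stated assumptions -- the 8-way bucketing and the phrasing are invented, since the talk doesn't describe the student's actual app:

```python
# Sketch of a "talking compass": bucket an azimuth in degrees (0 = north,
# increasing clockwise, as a phone orientation sensor typically reports)
# into one of eight direction names for speech output.

DIRECTIONS = ["north", "northeast", "east", "southeast",
              "south", "southwest", "west", "northwest"]

def heading_to_speech(azimuth_degrees):
    """Map an azimuth (any real number of degrees) to a direction phrase."""
    # Shift by half a bucket (22.5 degrees) so each name is centered
    # on its compass point, then divide the circle into 45-degree slices.
    idx = int(((azimuth_degrees % 360) + 22.5) // 45) % 8
    return "facing " + DIRECTIONS[idx]
```

On the phone, the returned string would be handed to the speech synthesizer rather than printed.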
So in the picture here, on the right, we're using the Google phone and T-Mobile for this project. And it's with Jeff Bigham, who is now at the University of Rochester. This picture on the right just shows you a schematic: here's our user, who can do user input here and can get output from speech synthesis, or perhaps the vibrator on the phone could give output. And it has all the sensors that are going into the phone, and then somehow it has to be mediated over the Internet to go either to automatic services, like bar code reading, or to human services, like Mechanical Turk or maybe volunteers, to answer questions. If I was at a street corner and didn't know where I am -- maybe I don't have GPS in that particular location, or maybe the GPS isn't very accurate -- I could point this at a street sign and push a button, and that will go out and something will tell me what street corner I'm at.
So you have human and automatic services. So one example for output is V-Braille -- I demonstrated this to Rico earlier, and I can demonstrate it for anybody afterwards. With V-Braille you can get Braille output using the vibration and the screen. The screen is divided into six parts, like the six dots of a Braille cell, and if you touch a part and the phone vibrates, that tells you that dot is up, if you like, and if it doesn't vibrate, it's down. So here's a picture of a P: parts one, two, three, and four vibrate, but not five and six. I guess Loren knows Braille because he was doing it in his head.
And the vibrations on the transitions between parts are a little bit different -- a little stronger -- so it's a little easier to read. And I'll demonstrate that for anybody who wants to later.
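The six-part mapping just described can be sketched as follows. This is a hypothetical reconstruction: the screen splits into two columns of three cells, matching braille dots 1-3 on the left and 4-6 on the right (the standard braille numbering), and the phone vibrates only when the touched cell's dot is raised. The function names and screen layout are assumptions, not the actual V-Braille code:

```python
# Sketch of the V-Braille idea: divide the touch screen into six cells
# (2 columns x 3 rows) and vibrate when the touched cell's dot is raised.

BRAILLE = {"p": {1, 2, 3, 4}}  # letter P raises dots 1-4, as in the talk

def dot_at(x, y, width, height):
    """Which braille dot (1..6) is under a touch at pixel (x, y)?"""
    col = 0 if x < width / 2 else 1     # left column = dots 1-3
    row = min(2, int(3 * y / height))   # three rows, top to bottom
    return row + 1 + 3 * col

def should_vibrate(letter, x, y, width, height):
    """True if the phone should vibrate for this touch position."""
    return dot_at(x, y, width, height) in BRAILLE[letter]
```

A stronger vibration pulse when `dot_at` changes between consecutive touch events would give the transition cue mentioned above.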
And then Mobile ASL -- I don't have time to demonstrate that right now, but anybody who wants to, I can demonstrate Mobile ASL later. It's basically a video phone as a cell phone, and it uses H.264 video compression. And it really works. And there's really a lot of interest in this, so I can demonstrate that later. And this is a pretty big project. It involves my colleague Eve Riskin, Sheila Hemami from Cornell, and Jake Wobbrock at the University of Washington. So we have some NSF money. I think I'll just show you the video, though. It's on YouTube.
[Video played].
>>: Mobile ASL is a video compression project with the goal of enabling realtime sign language communication over the standard wireless cell phone network in the United States. Due to the low bandwidth of the standard GPRS cell phone network in the US and the limited processing power of today's cell phones, even today's best video encoders cannot produce the quality video needed for intelligible realtime sign language conversations.
The Mobile ASL project is developing a new realtime video compression scheme to transmit video within the existing wireless network in realtime while maintaining video quality that allows for intelligible sign language. Our video encoders are compatible with the H.264 compression standard, using x264, an open source codec. By taking into account empirically validated visual and perceptual processes that occur during conversations in sign language, we can optimize the realtime video encoding for this specific domain.
Mobile ASL is enabling members of the deaf community to more easily access the freedom, independence, and portable convenience of the wireless telephone network.
>> Richard Ladner: So that's sort of a -- you could see people actually using it in that video. And Anna Cavender, my student, who is in the picture here, did the narration and actually developed that video for YouTube. I don't know if you noticed how many hits were on the YouTube video?
>>: 41,000.
>> Richard Ladner: 41,000, right. 41,000 hits. And I get e-mails every week, and video phone calls or VRS calls every week, from people that want it.
And the original paper on Mobile ASL won best student paper at ASSETS in 2006. I know that Microsoft is interested in mobile now. There's a lot of energy going into mobile. And if the first Microsoft phone could have Mobile ASL on it, or realtime video conferencing on it, you would suddenly be noticed. So I kind of recommend that you do that if you have the wherewithal.
And we have the technology, and you can use us as experts if you like.
So another project is ClassInFocus, which is trying to make education more accessible to deaf students. And here's one of the problems with being a deaf student: all your information comes in visually. So you are constantly looking at the slide, the professor, the sign language interpreter, the captions -- there's lots of things to keep track of. Well, if you could bring all that information together on one screen, then the visual dispersion would be reduced. And you could also get notifications: for example, if the slide changed, that part of the screen could move around a little bit to notify you that it had changed, so you could turn your focus to that. Or if the interpreter started signing or something changed, then you'd get some notification that that happened.
And Anna Cavender did a study on that, and it's reported in this paper -- it doesn't say so here, but this was another best student paper by Anna Cavender, at the ASSETS conference this year, on this ClassInFocus project.
Now, one of the issues and why this is kind of a telepresence project is that if I'm taking a science class, I'm deaf, the best interpreter might be in another state, so
I'd like to bring that interpreter into that classroom by telepresence, by video conferencing and put them right on the screen. So this interpreter -- whoops, I touched the screen and it went bad. So maybe -- do I hit that? It's fine there, though. On my screen it's really tiny.
>>: Just touch where the screen is, where the [inaudible].
>> Richard Ladner: Oh, thank you. It's a toggle. And so, you know, bringing the best interpreter into the classroom -- and this is true for captioning as well: bringing in the best captioner, one that knows a little bit about the subject matter -- is going to make a big difference in the accessibility of that class. So that's the ClassInFocus project.
And then this is a social networking project that we developed, called the ASL-STEM Forum. ASL stands for American Sign Language. STEM, of course, as I mentioned before, is science, technology, engineering, and mathematics. So this is a web page for people to upload signs and definitions and just sort of share what they know about different signs. And maybe I will do a demo here. This is a little easier for me. And there's no audio on this. So here's an example of this.
So this was kind of the homepage. So actually this sign was just put up last year.
There's nothing recent here. But there's some activity. So I can do a search.
>>: [inaudible].
>> Richard Ladner: So I need to type in again. Yeah. So the word algorithm actually has a sign here, and you can click on that sign and see if that's one you like. So she's spelling out algorithm now. And then signing algorithm which is an
AM on the palm.
And then there's other ones that people have added. So we need to make that larger. So people can put up different alternatives. There's a little sound on this.
Whoops, what happened there? Let's do it again. It went so fast. She went like this, which is completely different than this. And they use them in different parts of the country. And algorithm is a very fundamental word, and there are many, many different signs for it. So that will give you some idea of how this works. Let's go back to algorithms.
So on this web page you can get the definition, and some examples, perhaps. There's an ontology over here so you can find different things. A lot of work is being done at Gallaudet in natural science. They're adding a lot of definitions, but they haven't had a lot of signs yet, like for genetics. I don't even know what some of these things mean. They've only gotten to E so far in there. And then you have signs.
And then at the bottom you have ways to discuss the sign. So if I went back to algorithms, you would see some discussion of the signs. So down here you can do some discussion. And here -- whoops, I didn't want to do that. Let's go there. Okay.
And this is done in conjunction with Gallaudet University, and actually RIT as well -- the Rochester Institute of Technology's National Technical Institute for the Deaf. Yes, question?
>>: [inaudible] sign languages besides American Sign Language, or is there any collaboration available, since this is basically a web project?
>> Richard Ladner: Not yet, no.
>>: Okay.
>> Richard Ladner: But there ought to be.
>>: [inaudible].
>> Richard Ladner: Yeah. So I would be happy to share everything we did with people doing BSL -- British Sign Language, by the way.
>>: [inaudible] of sharing, we actually attempted to create a Microsoft-ese, if you will, sign language, and you can check [inaudible] but it does need to be updated, but it is there so that our interpreters can --
>> Richard Ladner: That's good. Yeah. So that's internal within Microsoft? So maybe those people could add those signs to this web forum and Microsoft would have some influence.
I'm kind of running out of time. I just wanted to mention an open problem. It's the CAPTCHA problem. So there are visual CAPTCHAs, which are shown here, which blind people can't use. So those are for sighted people. There are audio CAPTCHAs, which are difficult to use, but are for blind people. But should there be a text CAPTCHA? And maybe there is something I don't know about. So that would be for deaf-blind people, or for anyone. And here's an example of one: you might ask, well, what is the sum of 2 and 4? And if you type 6, then you've answered the CAPTCHA. So it's something only a human could do.
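The arithmetic example above can be written down as a toy generator. As the talk itself notes, natural language processing would likely break questions like this, so the sketch illustrates the open problem rather than solving it; the function names are invented for illustration:

```python
# Toy arithmetic text CAPTCHA: generate a natural-language question and
# the expected answer, then check a user's response against it.

import random

def make_text_captcha(rng=random):
    """Return a (question, answer) pair; rng can be seeded for testing."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    question = "What is the sum of %d and %d?" % (a, b)
    return question, str(a + b)

def check_captcha(expected_answer, response):
    """True if the (whitespace-trimmed) response matches the answer."""
    return response.strip() == expected_answer
```

A real deaf-blind-accessible CAPTCHA would need questions whose answers resist both templated parsing and statistical language models, which is exactly the hard part.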
But -- people are saying well, that's not going to work because natural language processing has gotten pretty far along and they'll probably figure that one out.
Question in the back.
>>: Natural language systems a few years back did word problems on the SAT [inaudible] score of 600, which beats the [inaudible] population. [laughter]. [inaudible] computer.
>> Richard Ladner: That sounds like artificial intelligence. Anyway, so this is a great problem for Microsoft to work on if you could get a team together to figure out how to do text CAPTCHA. I mean, it's going to take some creative new thing.
I don't know what it's going to be. It's definitely something that needs to be done.
So a little bit about empowerment. There are HCI concepts like user-centered design: involve the user at every step. There's a little picture of that here. And universal design: design for all users, if possible. But the concept I like is design for user empowerment. And I think this is true not just for disabled people but for everyone: design to enable people to solve their own accessibility problems. Somehow that should be sort of the motto for the next generation of user interfaces.
So, examples. Cell phones become credible accessibility tools: users download accessibility applications. That's one example. Social accessibility: users interact to accomplish some accessibility goal, like the ASL-STEM Forum I mentioned.
So persons with disabilities, with a superior education, can solve their own accessibility problems. So at Microsoft there are quite a few disabled employees and I'm sure they contributed to Microsoft's success quite a bit, by having that different point of view, having, you know, skills that people without disabilities wouldn't even think about.
And I just mention Nicole Torcolini. She's very well known at Microsoft. She's been an intern here the last two years, summer intern. She's currently a sophomore at Stanford University. She's from Bremerton or someplace over there. And I've known her since she was maybe 15. And she came to the
University of Washington to a summer program that I had called vertical mentoring workshop for the blind, and there were about 10 high school students, maybe 15 college students, five or six graduate students all blind and maybe 10 professionals, some with PhDs, maybe five of them had PhDs. Some were professors. Jeovat Verami [phonetic] was there, for example.
And it was vertical mentoring because each group would mentor the lower group.
And so Nicole was in a little workshop called math accessibility. And she was interested in that. And she was doing a lot of math herself back in high school.
She was taking AP calculus and things like that. And she got some mentoring from Sangyun Hahn, who I mentioned in the very beginning of the talk, who was my graduate student, and he taught her what LaTeX was. She had never heard of it before. LaTeX is a markup language for math, basically. And it's very good, designed by Don Knuth. So it has to be very good.
And she went back, and she thought about it, and she realized she had a problem that she could solve because of the mentoring she got and -- or the education she received. Normally when she produced her math, she used the
Nemeth code, which is a math code in Braille, and then some Nemeth expert had to translate that, to write it out for the math teacher to actually grade the assignment.
So what she did, was she actually wrote a program which translated Nemeth code to LaTeX and then she ran it through a LaTeX compiler and printed out beautiful math. So when she turned her math in from then on, it was the best looking math in the class, because nobody could do what Don Knuth could do.
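The pipeline shape of a Nemeth-to-LaTeX translator can be suggested with a drastically simplified sketch. Real Nemeth braille is far richer than this, and the token table below is invented for illustration -- it bears no relation to the actual Nemeth code tables or to her program:

```python
# Drastically simplified sketch of Nemeth-to-LaTeX translation:
# tokenized Nemeth symbols in, LaTeX source out, which a LaTeX compiler
# then typesets. Only the pipeline shape is faithful to the story above.

NEMETH_TO_LATEX = {  # illustrative token mapping, NOT the real code tables
    "?": r"\frac{",  # hypothetical open-fraction token
    "/": "}{",       # hypothetical fraction-bar token
    "#": "}",        # hypothetical close-fraction token
    "+": "+",
    "=": "=",
}

def nemeth_tokens_to_latex(tokens):
    """Translate a token list like ['?', '1', '/', '2', '#'] to LaTeX."""
    # Unknown tokens (digits, letters) pass through unchanged.
    return "".join(NEMETH_TO_LATEX.get(t, t) for t in tokens)
```

Running the output through a LaTeX compiler is what produced the "best looking math in the class."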
So the power -- she was empowered to solve her own accessibility problem.
So I thought that's -- it's quite a story, and Microsoft was very fortunate to have her here the last two summers, and I've been encouraging Merrie Morris to bring her into Microsoft Research because I think she could contribute to Microsoft
Research as well.
By the way, with that program she started a company with her parents, and she sells it for like $50 or something like that. And I've asked her how many customers she has, but she's not telling me. It's a private company.
And finally, I just want to tell you a little bit about this Access Computing alliance.
And actually Microsoft is a partner in this alliance. I think Anushca [phonetic] knows quite a bit about it.
So the goal of this is to increase the participation and success of individuals with disabilities in computing careers. So it's an outreach-focused project. And I'm the PI, and Sheryl Burgstahler -- who is very famous; I'm very lucky to have her -- is my co-PI and director of this project. And so we have many different activities nationwide in the United States: different kinds of workshops, internships, and all sorts of things to try and do this. And one of our major goals is to increase the capacity of computing departments and organizations -- like Microsoft Research, and not just Microsoft, but any computing organization -- to improve their ability to include persons with disabilities on their various teams, research teams or whatever. And if anybody would like to talk about that, I can tell you more about that project. It's just finishing its fourth year now, and we're going to be writing a proposal to get five more years on this. And it's a pretty significant project. It's about $900,000 a year.
So with that -- I think I went over my time, but I'm okay. So I'll just take any questions or comments. There were quite a few questions during the talk.
Yes, Gary?
>>: I guess among the technologies at least partly invented for people with disabilities, I think you can include the telephone. Alexander Graham Bell, right?
>> Richard Ladner: I could, yes. Right. So the telephone -- you might not know this, but Alexander Graham Bell's wife and mother were both deaf. And so he was trying to build an assistive listening device and by chance built the telephone. So he was motivated by accessibility and invented the telephone, which changed the world. So that's a really --
>>: [inaudible] he was a professor at a school for the deaf or something.
>> Richard Ladner: Yes. So I could go on and on about Alexander Graham Bell.
It's a very interesting story about him. Yes? Loren?
>>: I kind of laugh about talking books, because now everybody has these long commutes and, oh, we have this cool thing called audible.com, and I'm like, people have had talking books for 75 years, you know, people that are wannabes and [inaudible].
>> Richard Ladner: Another example. Talking books. I'm sure I could add one more slide for each of those and be like 20 slides in the end and the whole talk would be just about, you know, accessibility products becoming ubiquitous and useful for everyone. I just picked a few. I thought personal texting was a good one, though, because people just don't know where that -- you know, it was going on long before SMS.
>>: It's like closed captions; people use those most often in sports bars and health clubs.
>> Richard Ladner: Yes. In the back.
>>: If I got it right, you could only get talking books if you are considered blind, even though the marginal [inaudible] was zero. Talking books from libraries are for the blind.
>>: That was true because there were some technological issues, like there was special -- even still there are special tape players and, you know, things like that.
So I guess technically that [inaudible].
>> Richard Ladner: Yes, Anushca?
>>: I'm curious what you think of machine learning and how that plays into accessible technology and how you're [inaudible].
>> Richard Ladner: Yes. The answer is machine learning is very important. In fact, there's a guy who was here last year in Microsoft Research, Krzysztof Gajos, who put together a system where the user interface could adapt to the person's disability, and it used machine learning as part of its engine. I showed you the Tactile Graphics project earlier; I used machine learning in there.
Machine learning is a basic tool that can be used when you have a bunch of examples and you want to generalize. Boom. Machine learning quite often will work for you. So I use it whenever I can, whenever it's appropriate. It's just like, you know, we have algorithms, we have data structures, we have machine learning. It's just another thing in your toolbox.
In the back.
>>: [inaudible] my hand down because [inaudible]. So there's a lot of upside to design and infrastructure and plumbing and using [inaudible] and then this idea of using learning, for instance, [inaudible] systems, the components that go from AI to [inaudible]. Probably to really sort of accelerate [inaudible] interactions.
People often used machine learning as [inaudible] these days they use AI more
[inaudible]. But I think that my sense is this big upside hasn't really been exploited deeply by the Assets community. Am I correct?
>> Richard Ladner: No it has been.
>>: In a deep way? Christophe's work is like an interesting [inaudible] but there's so much more we could be doing.
>> Richard Ladner: I think it has, but maybe not as much as it should have. I mean, I hear about projects that use machine learning or AI techniques, particularly at the University of Washington -- for people who have cognitive disabilities, who sometimes have a hard time finding their way around, people have used, you know --
>>: [inaudible].
>> Richard Ladner: Yes, like Henry Kautz's work or Dieter Fox's and so on. And I hear from the University of Colorado there's some work being done there as well. So I think it's being integrated in, just like, as I mentioned, algorithms and data structures and AI. These are all basic tools that people need to engineer software generally. So I know AI is also in some Microsoft products as well, you know, it's built in. Doesn't Microsoft Word infer things that I want to do and try to do them?
>>: Sometimes [inaudible] [laughter].
>> Richard Ladner: I did say it was artificial intelligence, not human intelligence.
>>: [inaudible] you know many of these techniques we're talking about right now
I think have a kind of a false positive, false negative rate. Each has its own kind of a curve of performance that -- but that's [inaudible] challenge I think for
[inaudible]. So you get more wind and cost [inaudible] cost of [inaudible].
>> Richard Ladner: So a really good example of that is the new captioning software that was announced by Google recently for captioning video. People who have used it -- I haven't tried it myself -- have told me that it's really inaccurate, and it sometimes prints exactly the wrong thing that the person said.
So there is a long way to go there. But I think that's a great start. And so I kind of like that approach. You don't come out with something perfect the first time.
You come out with something that's imperfect and that drives a challenge for more people to do better.
>>: [inaudible] not just coming out with imperfect, it's like you look at the
[inaudible] look at the imperfections as expected and design around that so that it can still -- it's not [inaudible] saying this is noisy versus saying this is exactly correct. Can you imagine a UI design that's actually much more robust
[inaudible] as the perfect [inaudible].
>> Richard Ladner: Just like human beings. Any other comments, questions?
Okay. Dwight. He hasn't asked yet.
>>: I was just curious where you see things moving? There's lots of technology, and in some cases things like touch screens are being seen as leaving people behind, and it seems like the latest new thing is always something that's inaccessible for at least some period of time.
>> Richard Ladner: Right.
>>: Do you see that accelerating or do you see it improving? And do you have some ideas about how we can have it improve more in sync with the innovations?
>> Richard Ladner: That's a really tough question. So the question is, you know, technology seems to change more rapidly than access to that technology -- the access sort of lags behind -- and I think that's always going to be the case to some degree. But you don't want to let that gap get too big. So touch screens seemed inaccessible initially, but, you know -- somehow I wasn't able to get this started.
But, you know, this V-Braille that we did here -- that's really strange. I think I have to turn the whole thing off and start over. But V-Braille allows you to read Braille on a touch screen. This is a standard phone. Because as I showed you in there -- so it seemed on the surface inaccessible, but then you can make it accessible. It's not perfectly accessible, but it has some accessibility.
Also, this thing has Bluetooth. So you could connect it to another Bluetooth device that has the accessibility stuff -- refreshable Braille, for example. So there's a product called the deaf-blind communicator that does exactly that. The cell phone is just programmed so it will talk with something that makes it more accessible through Bluetooth.
>>: I think that the V-Braille that you're talking about, that's one character at a time pretty low bandwidth, right?
>> Richard Ladner: Right.
>>: It's like displaying one character at a time and you have to press a button before you can see the next character and trying to understand the word from that.
>> Richard Ladner: Right.
>>: So really cumbersome.
>> Richard Ladner: It is. But we tried it out with some deaf-blind people, like nine deaf-blind people, and they were able to read it.
>>: Okay.
>> Richard Ladner: So it's better than nothing.
>>: Talented people.
>> Richard Ladner: Yeah.
>>: What about the Optacon --
>> Richard Ladner: The Optacon was very difficult.
>>: But amazing to watch someone good at it.
>>: The other interesting thing to think about is just hiring, like you were saying earlier, getting more people with disabilities into research, so into the earlier stages and sort of pie in the sky stuff and see what you can come up with.
>> Richard Ladner: I was encouraged that Merrie Morris is doing some projects for the blind, and she went around and brought together blind people here at Microsoft, and she went over and visited the Lighthouse for the Blind, so she's taking the right approach, including people at the very beginning. Of course she's a trained HCI person, so she'd better be doing that. Bring the users in at the beginning. Yeah?
>>: You mentioned earlier the New York Times article that highlighted the fact that Medicare and insurance [inaudible] iPhones. Have you actually heard of anything anywhere that's likely to change any time soon?
>> Richard Ladner: I haven't read the healthcare bills that are coming out, so I don't know if they addressed access technology in those bills. Does anybody know? There's like five different bills that have gone through. I think they should, and I think they should allow people to buy iPhones, or Google phones, and download accessible applications. I think that should be paid for. And maybe it's just the phone itself, and the person buys the service, you know. So it's just a matter of time. I mean, it's the right thing to do, so it should happen.
>>: Yeah, but [inaudible] for a while.
>> Richard Ladner: Yes. Well, one of the reasons it's not changing is because of the lobbies of the medical industrial complex, who want things to stay the same because they can charge these large amounts of money for very little. For a tongue depressor, you know, five dollars. Yes?
>>: I was just going to add that the ATIA, the AT Industry Association is working that issue. They -- because of course they are interested in getting all their AT prices to [inaudible] used in the right circumstance because it's a problem for people in rehab who are outside the school systems and problems for people in the school systems as well, but all the education [inaudible] and there is an education technology plan that's being created right now by the [inaudible] administration and they are asking for success stories and [inaudible] so. And I agree that it's -- there's a lot of pressures.
>> Richard Ladner: There's also, you know, the efforts by open source. So the Mozilla Foundation, you know, and WebAnywhere. People are very interested in putting stuff out there for free or for very low cost. Because something like WebAnywhere takes almost no maintenance, you know. There is development, and there is some training -- I guess phone calls -- but I can usually do the training in like five minutes, you know. It's not that difficult. But also I think there is this service side. So if you bring out something that's just downloaded on an iPhone -- an accessibility application -- maybe the person who needs it can't really do that themselves for some reason. So there is still a need for a person as a service provider to help that person get that application.
So the manufacturer maybe doesn't make much money, but there's still work to be done. So it's not really reducing the amount of work completely, but it's giving people power to solve their own problems. That's what I like. Okay.
>> Merrie Morris: Thank you very much.
>> Richard Ladner: You bet.
[applause]