An Image-To-Speech iPad App
Date
2012-07-26
Author
Zhu, Xiaojin
Rosin, Jake
Jun, Kwang-Sung
Dyer, Charles R.
Maynord, Michael
Tiachunpun, Jitrapon
Publisher
University of Wisconsin-Madison Department of Computer Sciences
Abstract
We describe an iPad app that assists in language acquisition and development. Such an application can be used by clinicians working with human developmental disabilities. A user drags images around on the screen, and the app generates and speaks random (but sensible) phrases that match the image interaction. For example, if a user drags an image of a squirrel onto an image of a tree, the app may say "the squirrel ran up the tree." A key challenge is the automated creation of "sensible" English phrases, which we solve using a large corpus and machine learning.
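One way the corpus-based phrase selection could work is to score candidate subject-verb-object phrases against n-gram counts gathered from a large text corpus and speak the highest-scoring one. The sketch below is an illustrative assumption, not the app's actual implementation: it uses a toy corpus, trigram counts, and hypothetical helper names (`trigram_counts`, `best_phrase`).

```python
from collections import Counter

# Toy stand-in for the large corpus the abstract mentions (assumption:
# the real system would use corpus statistics at much larger scale).
TOY_CORPUS = (
    "the squirrel ran up the tree . "
    "the squirrel ran up the tree . "
    "the squirrel ate the tree . "
    "the dog chased the squirrel . "
).split()

def trigram_counts(tokens):
    # Count every consecutive triple of tokens in the corpus.
    return Counter(zip(tokens, tokens[1:], tokens[2:]))

def score_phrase(phrase, counts):
    # Score a candidate phrase by summing the corpus counts of its trigrams.
    toks = phrase.split()
    return sum(counts[t] for t in zip(toks, toks[1:], toks[2:]))

def best_phrase(subject, obj, verbs, counts):
    # Fill a fixed "the <subject> <verb> the <object>" template with each
    # candidate verb and keep the phrase the corpus supports best.
    candidates = [f"the {subject} {v} the {obj}" for v in verbs]
    return max(candidates, key=lambda p: score_phrase(p, counts))

counts = trigram_counts(TOY_CORPUS)
print(best_phrase("squirrel", "tree", ["ran up", "ate", "chased"], counts))
# → the squirrel ran up the tree
```

With this toy corpus, "ran up" wins because all four of its trigrams appear twice, so dragging the squirrel image onto the tree image would yield the sensible phrase from the abstract rather than, say, "the squirrel chased the tree."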
Permanent Link
http://digital.library.wisc.edu/1793/61884
Citation
TR1774