Programming By Voice
Josh Pepperman, University of Alabama, Department of Computer Science
Advisor: Dr. Jeff Gray
Conclusions

Despite these issues with getting certain
functions of the program working, the rest of the
team is making good progress. Based on what we
have done so far, the project is certainly
achievable, and once it is complete it will be
very helpful for teaching those with physical
disabilities how to program. Some of these issues
can be avoided by forcing the program to run at a
fixed resolution: if we ask the user to set their
screen to a specific resolution before running
it, we can avoid certain problems related to
screen size and positioning. To make the program
adapt dynamically to its environment, positions
could instead be scaled based on the screen size.
Another possibility is to scroll the screen by a
set rate and add or subtract that amount from the
stored block positions as the screen moves,
keeping the internal representation in sync with
what the user sees. The ultimate goal of our
efforts to bring
the PBV concept to children is to support many
different visual programming languages so that
teachers are not restricted to just one when they
are teaching children to program. A future idea
for the project is to support adaptation to
different environments automatically, instead of
hard-coding the screen locations of each
environment element (as was done in this effort with Snap!).
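The scaling and scroll-compensation ideas above could be sketched as follows. This is a minimal illustration, not Myna's actual code; the function names, field names, and reference resolution are all hypothetical:

```python
# Sketch: keep stored block positions valid when the program runs
# at a different resolution or the view scrolls. Hypothetical names.

# Reference resolution the positions were originally recorded at (assumed).
BASE_WIDTH, BASE_HEIGHT = 1920, 1080

def scale_position(x, y, screen_width, screen_height):
    """Scale a recorded (x, y) position to the user's actual resolution."""
    return (round(x * screen_width / BASE_WIDTH),
            round(y * screen_height / BASE_HEIGHT))

def apply_scroll(positions, dx, dy):
    """Shift every stored block position when the view scrolls by (dx, dy)."""
    return [(x - dx, y - dy) for (x, y) in positions]

# A block recorded at the center of a 1920x1080 screen maps to the
# center of a 1280x720 screen.
print(scale_position(960, 540, 1280, 720))   # (640, 360)
# Scrolling down 50 pixels shifts a block's stored position up 50 pixels.
print(apply_scroll([(100, 200)], 0, 50))     # [(100, 150)]
```

A real implementation would also have to clamp positions that scroll off-screen, but the bookkeeping idea is the same.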
The motivation for this project is to provide a
capability for children with disabilities to
learn how to program a computer. I performed much
of the experimentation myself, but we plan to
perform user-based testing
with children identified by our collaborators at
United Cerebral Palsy of Birmingham. This will
let us see what works well with our project and
what does not, and give us a better understanding
of exactly how our Myna project will be used in a
teaching environment. Seeing how teachers
utilize it and how students use and react to it
will help us as we continue to improve this
project.
Introduction

Over the past decade, there has been increasing
interest in providing new environments for
teaching children about computer programming.
This has resulted in several environments and
languages that offer a visual language that is
much easier to learn than a traditional
programming language. However, the visual
interface of such environments requires a mouse
and keyboard (with drag-and-drop interaction),
which limits the potential for adoption by
children with a physical disability. The
purpose of this Emerging Scholars project is to
develop a voice interface to use along with
visual programming languages so that individuals
who cannot use a traditional keyboard and mouse
can still learn to program. The tool support for
this effort uses voice recognition software to
recognize which words are spoken, and then parses
those words to determine what actions to take,
mimicking the equivalent mouse and keyboard
events. The goal is to support the concept of
Programming by Voice (PBV). The Myna project
considers the topic of PBV for multiple
environments. For this project, I investigated
the application of PBV on a tool called Snap!,
which is a web-based programming environment for
children that was developed by the University of
California at Berkeley. The main challenge is
making the voice interface as versatile as the
graphical interface. Simple tasks like scrolling
and moving objects through voice commands are
challenging because the voice interface has to
know exactly where everything is within the
programming tool, even when objects are moved or
the screen changes. We are investigating ways
to implement solutions to these problems, and
discovering that a voice interface, while
presenting its own challenges, can provide the
assistive support needed to help children with
special needs learn to program.
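The word-to-action parsing described above can be pictured as a small dispatch table. This toy sketch is only an illustration of the idea; the command phrases and handler names are hypothetical, not Myna's actual vocabulary:

```python
# Sketch: map recognized speech to GUI actions via a dispatch table.
# Command phrases and handlers are hypothetical examples.

def scroll_down():
    return "scrolled down"

def delete_block():
    return "deleted block"

COMMANDS = {
    ("scroll", "down"): scroll_down,
    ("delete", "block"): delete_block,
}

def handle_utterance(text):
    """Normalize recognized speech and run the matching command, if any."""
    words = tuple(text.lower().split())
    action = COMMANDS.get(words)
    if action is None:
        return f"unrecognized: {text}"
    return action()

print(handle_utterance("Delete Block"))   # deleted block
```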
Methods

My primary goals for this project were to apply
the PBV concepts to the online Snap! interface,
and to implement the delete
function. In order to get the PBV concepts
function. In order to get the PBV concepts
working with Snap!, I had to take screenshots
(like the ones below) of each of the menus and
record their pixel positions on the screen. Then
I had to transfer all of that data into the
program's code and verify that it worked. As an
example, the delete function works as if the user
is using a mouse to drag the block to be deleted
off into the menu on the left. If another block
is attached below the to-be-deleted block, Myna
first moves that block down out of the way,
performs the deletion, and then moves the block
back up. Here is what that looks like in the
Snap! environment: first, three typical blocks;
next, the middle block is dragged off to the left
side and deleted; finally, the remaining block is
dragged back to reattach to the new sequence.
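The deletion sequence just described can be expressed as an ordered list of drag steps. The following is a minimal sketch under stated assumptions: the block names, coordinates, palette drop point, and the `plan_delete` helper are all hypothetical, not Myna's actual code:

```python
# Sketch: the ordered simulated-mouse drags a Myna-style deletion
# would need, expressed as data. All names here are hypothetical.

PALETTE = (50, 300)   # assumed drop point inside the left-hand menu

def plan_delete(blocks, target):
    """Return the drag steps needed to delete the block at index `target`.

    `blocks` maps block names to (x, y) positions, listed top to bottom.
    Deleting a middle block means detaching the block beneath it first,
    then reattaching that block where the deleted one used to be.
    """
    names = list(blocks)
    steps = []
    below = names[target + 1] if target + 1 < len(names) else None
    if below is not None:
        x, y = blocks[below]
        steps.append(("drag", below, (x, y + 100)))    # detach: move it down
    steps.append(("drag", names[target], PALETTE))     # drop target on palette
    if below is not None:
        steps.append(("drag", below, blocks[names[target]]))  # reattach
    return steps

# Deleting the middle of three stacked blocks takes three drags;
# deleting the bottom block takes only one.
blocks = {"move": (400, 100), "turn": (400, 140), "say": (400, 180)}
for step in plan_delete(blocks, 1):
    print(step)
```

Each planned step would then be replayed as real mouse events; the point of the sketch is that deletion is a sequence of position-dependent drags, which is why accurate internal positions matter so much.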
This process seems simple at first, but when
considering all the complications of not being
able to use a mouse, it becomes a challenging
problem. For example, the position of every block
is saved after it is moved; thus, if the screen
itself is scrolled, the internal positions
representing each block must also be updated.
Many complications
arose when dealing with screen changes and trying
to represent where everything is located in the
internal representation. Other issues also arose
in this investigation, such as what if screen
sizes are not consistent across different
computers? How can we get the program to undo and
redo numerous actions? How can we allow scrolling
without using the mouse? Some blocks (like the
ones pictured) have places to input text, so how
can the user add text verbally, and how should
the block's expansion in size be handled?
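For the undo/redo question above, one conventional approach (not necessarily the one Myna uses) is a pair of action stacks; performing a new action clears the redo stack:

```python
# Sketch: undo/redo over numerous actions via two stacks.
# A conventional pattern, shown only as an illustration.

class ActionHistory:
    def __init__(self):
        self._undo = []   # actions that have been performed
        self._redo = []   # actions that have been undone

    def record(self, action):
        """Record a newly performed action; new actions clear the redo stack."""
        self._undo.append(action)
        self._redo.clear()

    def undo(self):
        """Return the most recent action for the caller to reverse, or None."""
        if not self._undo:
            return None
        action = self._undo.pop()
        self._redo.append(action)
        return action

    def redo(self):
        """Return the most recently undone action for the caller to replay."""
        if not self._redo:
            return None
        action = self._redo.pop()
        self._undo.append(action)
        return action

history = ActionHistory()
history.record("drag block A")
history.record("delete block B")
print(history.undo())   # delete block B
print(history.redo())   # delete block B
```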
If we want to delete the block in the middle, the
block below it must first be moved out of the
way, programmatically within Myna.