Program Instruction Transcribing Through Voice by Speech Recognition Tools

Authors

  • Mr. Ashwin Kumar M, Mr. Dony Armstrong DSouza

Abstract

In today’s world, technology has advanced so rapidly that the time is not far when
programming will be a skill everyone must learn. Among trending technologies, speech
recognition is one of the fastest growing and is useful in many day-to-day applications and
environments. Speech recognition techniques, together with aspects of speech processing,
can be employed to assist people with functional disabilities or other impairments to
mobility. The proposed work is aimed primarily at people with physical disabilities,
particularly motor function disabilities. It targets converting the user’s voice commands
into the vocabulary and syntax of a programming language, which can then be executed to
produce the desired output. The proposed work is designed so that defined procedures are
transformed into control specifications during the programming phase, in which the program
specification is converted by the programmer into computer instructions. We achieve this
using Dragonfly, a Python framework for speech recognition that makes it convenient to
create custom commands. It aids the design of speech commands and grammar objects by
treating them as first-class Python objects. Dragonfly can be used for general programming
by voice, i.e., it is flexible enough to support programming in any language, not limited
to Python. Voice is given as input, recognized by the speech recognition engine, and
preprocessed. The voice commands are then converted into objects that are mapped to the
commands defined in the mapping module, following mapping rules specified by the grammar
module. After successful grammar mapping, the result is generated and displayed by the
screen action module.
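The phrase-to-syntax mapping step described above can be sketched in plain Python. This is a minimal illustrative sketch, not the authors' actual implementation: the phrases, code templates, and the `transcribe` helper below are assumptions for illustration, and a real system would express them as Dragonfly `MappingRule` and `Grammar` objects bound to a speech recognition engine.

```python
# Illustrative sketch of the mapping module: recognized voice phrases
# are looked up in a table and expanded into programming-language syntax.
# The phrases and templates here are hypothetical examples, not the
# authors' actual Dragonfly grammar.

VOICE_TO_CODE = {
    "define function": "def {name}():",
    "print statement": "print({arg})",
    "for loop": "for {var} in {iterable}:",
}

def transcribe(phrase: str, **slots: str) -> str:
    """Look up a recognized phrase and fill in dictated slot values."""
    template = VOICE_TO_CODE.get(phrase)
    if template is None:
        raise KeyError(f"No grammar rule for phrase: {phrase!r}")
    return template.format(**slots)

# Example: the user says "define function" and then dictates the name "main".
print(transcribe("define function", name="main"))    # def main():
print(transcribe("print statement", arg="'hello'"))  # print('hello')
```

In the full pipeline, the grammar module would constrain which phrases and dictated slots the engine may recognize, and the screen action module would type the expanded text into the editor instead of printing it.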

Published

2020-12-02

Issue

Section

Articles