You can create some simple wearable projects with just your Android phone, an arm band and a set of headphones with a microphone. A wearable solution like this could be used by anyone whose hands are busy, such as chefs following recipes, mechanics consulting instruction manuals, or cyclists getting directions.
In this article we will look at creating an Android application that does speech recognition of a keyword. The keyword will be searched in a simple CSV file, and text-to-speech will be used to announce the result. We will create the Android app using MIT's AppInventor, a free Web based Android app development tool that allows you to create applications in a graphical environment.
To get started with AppInventor (http://appinventor.mit.edu), you will need a Google user account and a PC, Mac or Linux computer.
AppInventor has two main screens. The Designer screen is used for the layout of the Android app and the Blocks screen is used to build the logic.
On the Designer screen, an app is created by dragging a component from the Palette window onto the Viewer window.
For the visuals on this application we will use a Button, a Label and a ListView from the User Interface Palette window. The button will be used to initiate the speech recognition, the label will show the result from the speech recognition, and the ListView component will show the CSV file data.
Some non-visual components will also be used. In the Media section of the Palette window, select the SpeechRecognizer and TextToSpeech components and drag them into the Viewer window. Also add the File component from the Storage heading.
The Components window is used to rename or delete components. When a component is selected, the Properties window is used to change its editable features. In this example we renamed the button to BT_Speak, and we changed its BackgroundColor, FontSize, Width and Text properties.
Once the layout design is complete, logic can be added by clicking on the Blocks button (on the top menu bar).
Logic is built by selecting an object in the Blocks window, and then clicking on the specific block that you would like to use.
The entire program requires only one variable and four when blocks.
The first step is to load the text file when the Screen1.Initialize block is called. The when File1.GotText block loads the text file data into the global variable (THELIST), and it populates the ListView component.
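The file-loading logic is built graphically in AppInventor, but its effect can be sketched in ordinary code. This Python sketch is only an equivalent of what the blocks do (the function name and sample text are illustrative, not part of the app):

```python
def load_places(text):
    """Split the raw file contents into one entry per line,
    mirroring what the when File1.GotText block stores in THELIST."""
    # Strip whitespace and drop blank lines so a trailing newline
    # does not create an empty ListView entry.
    return [line.strip() for line in text.splitlines() if line.strip()]

sample = "Hope Bay has a sandy beach...\nOliphant is great for kite surfing...\n"
the_list = load_places(sample)
print(the_list)  # each entry becomes one row in the ListView
```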
The when BT_Speak.Click block is activated on a button push and it starts the speech recognition block.
The final block, when SpeechRecognizer1.AfterGettingText, shows the speech result in a label and checks whether the result appears in the global variable. If the result is found, a text-to-speech message is generated with the full line of text.
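The matching step in that block can also be sketched in Python. This is just an equivalent of the block logic under the assumption that the match is a case-insensitive substring check (the function name and sample lines are illustrative):

```python
def find_entry(spoken, the_list):
    """Return the first line containing the spoken keyword,
    mirroring the check in the when SpeechRecognizer1.AfterGettingText block."""
    keyword = spoken.strip().lower()
    for line in the_list:
        if keyword in line.lower():
            return line  # the full line is what TextToSpeech1 would speak
    return None  # no match: nothing is spoken

places = ["Hope Bay has a sandy beach with good hiking",
          "Oliphant is great for kite surfing"]
print(find_entry("Oliphant", places))  # → Oliphant is great for kite surfing
```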
The Data File
For our test file we placed the key words at the start of each line.
"Hope Bay has a sandy beach with good hiking and..."
"The Glen is a horseshoe shaped valley with ..."
"Isaac Lake is a bird sanctuary with ..."
"Oliphant is a great for kite surfing..."
...
Our data file used some local landmarks, but there are lots of other choices, such as friends' addresses, recipe ingredients or favorite restaurants.
The file was saved as places.txt in the phone's Download directory; this path must match the File1.ReadFrom block definition (/Download/places.txt).
Compiling and Running the App
After the screen layout and logic are complete, the Build menu item will compile the app. The app can be made available as a downloadable APK file or as a QR code link.
Once the app is installed on the phone, pushing the "Talk" button will open the Google speech recognition dialog. If you speak a valid keyword, you should hear the matching line from the data file. The data file can be updated without any changes to the app.
This example used a simple text file, but it could be enhanced to support multi-field CSV files, cloud services, HTTP requests or Google Maps.
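As a hint of the multi-field enhancement, a CSV file could hold a separate keyword and description per line. This Python sketch assumes a hypothetical two-column layout (keyword, description) that is not part of the app as built:

```python
import csv
import io

# Hypothetical two-column layout: keyword,description
data = "Hope Bay,sandy beach with good hiking\nOliphant,great for kite surfing"

# Build a keyword-to-description lookup table from the CSV rows.
lookup = {row[0].strip().lower(): row[1]
          for row in csv.reader(io.StringIO(data))}
print(lookup["oliphant"])  # → great for kite surfing
```

With a structure like this, the spoken keyword selects the description field rather than the whole line.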