01
Part One
Train your audio model
teachablemachine.withgoogle.com
1. Go to teachablemachine.withgoogle.com and start a new Audio Project.
2. Create your sound classes — for example: Clap, Whistle, Background Noise.
3. Record 20–50 audio samples per class using your microphone.
4. Click Train Model and wait for training to finish.
5. Click Export Model → TensorFlow.js, then copy the hosted model URL.

Better accuracy tip: Vary your recording environment, mic distance, and volume intensity. Diverse samples train a more robust model.
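The hosted URL from step 5 points at a folder containing two files: model.json (the network weights) and metadata.json (your class labels). As a sketch of how the app will derive both paths from the copied link — `modelFileUrls` is an illustrative helper written for this guide, not part of any library:

```javascript
// Build the two file URLs the recognizer will fetch, given the base
// URL copied from Teachable Machine's export dialog.
// (modelFileUrls is an illustrative helper, not a library function.)
function modelFileUrls(baseUrl) {
  // Normalize: the base URL must end with "/" for the paths to resolve.
  const base = baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
  return {
    checkpointURL: base + "model.json",  // network weights
    metadataURL: base + "metadata.json"  // class labels and settings
  };
}
```

If the copied link is missing its trailing slash, normalizing it this way avoids a malformed request like `...abc123model.json`.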
02
Part Two
MIT App Inventor setup
ai2.appinventor.mit.edu
1. Go to ai2.appinventor.mit.edu and create a new project.
2. Switch to the Designer tab and add these components:
   - Button — label it "Start Listening"
   - Label — this will display the predicted sound class
   - WebViewer — bridges your JavaScript model and App Inventor
WebViewer Home URL
URL
file:///android_asset/index.html
03
Part Three
HTML integration
Upload your index.html to the project's Assets folder. This file loads your TensorFlow.js model inside the WebViewer and sends predictions back to App Inventor using AppInventor.setWebViewString().
index.html template
HTML
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/speech-commands@0.4.0/dist/speech-commands.min.js"></script>
</head>
<body>
  <button onclick="init()">Start Listening</button>
  <p id="result">Waiting...</p>
  <script>
    // Base URL of the model exported from Teachable Machine (must end with "/")
    const URL = "YOUR_MODEL_URL_HERE/";
    let recognizer;

    // Load the model weights and metadata from the hosted URL
    async function createModel() {
      const checkpointURL = URL + "model.json";
      const metadataURL = URL + "metadata.json";
      recognizer = speechCommands.create(
        "BROWSER_FFT",   // use the browser's native FFT for audio input
        undefined,
        checkpointURL,
        metadataURL
      );
      await recognizer.ensureModelLoaded();
    }

    async function init() {
      if (!recognizer) {
        await createModel();
      }
      const classLabels = recognizer.wordLabels();
      recognizer.listen(result => {
        // Find the class with the highest score (argmax)
        const scores = result.scores;
        let maxIndex = 0;
        for (let i = 1; i < scores.length; i++) {
          if (scores[i] > scores[maxIndex]) {
            maxIndex = i;
          }
        }
        const prediction = classLabels[maxIndex];
        const confidence = scores[maxIndex].toFixed(2);
        // Update the HTML
        document.getElementById("result").innerText =
          prediction + " (" + confidence + ")";
        // Send the prediction to MIT App Inventor
        if (window.AppInventor) {
          window.AppInventor.setWebViewString(prediction);
        }
      }, {
        probabilityThreshold: 0.75,
        invokeCallbackOnNoiseAndUnknown: true,
        overlapFactor: 0.5
      });
    }
  </script>
</body>
</html>
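The heart of the listen() callback above is a plain argmax over the score array. Pulled out as a standalone function — `argmaxLabel` is an illustrative name for this guide, not part of the speech-commands API — it is easy to sanity-check before wiring it into the page:

```javascript
// Return the label with the highest score, mirroring the loop
// inside the listen() callback. (Illustrative helper, not part
// of the speech-commands API.)
function argmaxLabel(labels, scores) {
  let maxIndex = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[maxIndex]) {
      maxIndex = i;
    }
  }
  return { label: labels[maxIndex], score: scores[maxIndex] };
}
```

For example, with labels ["Background Noise", "Clap", "Whistle"] and scores [0.05, 0.85, 0.10], the function returns "Clap" with score 0.85.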
Replace the URL: Swap YOUR_MODEL_URL_HERE with the link you copied from Teachable Machine's export step.
04
Part Four
Blocks logic
Switch to the Blocks editor and add these two event handlers.
Button click — load the HTML page
Blocks
when Button1.Click do call WebViewer1.GoHome
Receive prediction from JavaScript
Blocks
when WebViewer1.WebViewStringChange do set Label1.Text to WebViewer1.WebViewString
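The template sends only the class name through the WebViewString. If you also want the confidence score in App Inventor, one option (an extension of this tutorial, not part of its blocks) is to send a delimited payload such as "Clap|0.85" and recover the two parts with App Inventor's split-text block. The JavaScript side of that idea — `encodePrediction` is an illustrative helper name — would be:

```javascript
// Pack prediction and confidence into one string for setWebViewString.
// App Inventor can recover both parts with a "split text at |" block.
// (encodePrediction is an illustrative helper, not a library function.)
function encodePrediction(label, score) {
  return label + "|" + score.toFixed(2);
}
```

You would then call window.AppInventor.setWebViewString(encodePrediction(prediction, scores[maxIndex])) instead of passing the prediction alone.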
05
Part Five
Microphone permission
Request microphone access on startup so the audio model can listen via the device mic.
Blocks
when Screen1.Initialize do
call Screen1.AskForPermission
"android.permission.RECORD_AUDIO"
Handle denial gracefully: If the user denies the permission, your app should show a clear message explaining why the mic is needed, rather than silently failing.
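On the JavaScript side, a denied microphone surfaces as a getUserMedia rejection whose error names ("NotAllowedError", "NotFoundError") are standard DOM exceptions. A sketch of turning those names into user-facing messages — `micErrorMessage` is an illustrative helper, not part of any library:

```javascript
// Map getUserMedia rejection names to messages the page can show
// instead of failing silently. (Illustrative helper.)
function micErrorMessage(errorName) {
  switch (errorName) {
    case "NotAllowedError":
      return "Microphone access was denied. The app needs the mic to hear your sounds.";
    case "NotFoundError":
      return "No microphone was found on this device.";
    default:
      return "Could not start listening: " + errorName;
  }
}
```

In the page, you could catch the rejection around init() and write the resulting message into the result paragraph.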
06
Part Six
Build & install the APK
MIT App Inventor — Build menu
1. In App Inventor, click Build → App (provide QR code for .apk) — or download the APK file directly.
2. On your Android device, go to Settings → Security and enable Install from unknown sources.
3. Scan the QR code with your phone or transfer the APK file and tap to install.
4. Open the app, grant microphone permission, press Start Listening, and test your sound classes.