jjordanoc / azure_speech_recognition_null_safety
A Flutter plugin that enables interaction with the Azure Cognitive Services Speech-To-Text API
License: GNU General Public License v3.0
Hello there, I'm using this package, but certain words are being masked as profanity and displayed as asterisks (***).
I wanted to know whether this package has a feature to turn this off, as mentioned here.
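The masking comes from the Speech service's profanity filter, which defaults to masked output. As far as this thread indicates, the plugin does not expose a switch for it, but the underlying Azure Speech SDK does; on the Android side, the plugin would need to set it where its `SpeechConfig` is built. A hedged sketch of that native-side change:

```java
// Sketch: inside the plugin's native Android code, where the SpeechConfig
// for the recognizer is created. ProfanityOption.Raw disables masking;
// Masked (the default) is what produces "***".
import com.microsoft.cognitiveservices.speech.ProfanityOption;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

SpeechConfig config = SpeechConfig.fromSubscription(subscriptionKey, region);
config.setProfanity(ProfanityOption.Raw);
```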
Is there any development plan to support iOS?
I need an interface to recognize a WAV file. I have found that Azure's API supports this. Will it be added in the future?
Thanks
```java
AudioConfig audioConfig = AudioConfig.fromWavFileInput("YourAudioFile.wav");
```
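That line is the Azure Speech SDK's (Java) entry point for file input, which this plugin does not currently expose. A hedged sketch of what file-based recognition looks like when using the SDK directly (key and region are placeholders):

```java
// Sketch: one-shot recognition of a WAV file with the Azure Speech SDK for Java.
// Requires the com.microsoft.cognitiveservices.speech dependency and valid credentials.
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;

SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourRegion");
AudioConfig audioConfig = AudioConfig.fromWavFileInput("YourAudioFile.wav");
try (SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioConfig)) {
    // recognizeOnceAsync() returns a Future; get() blocks until the result arrives
    // (and can throw InterruptedException/ExecutionException).
    SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
    System.out.println(result.getText());
}
```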
When I ran the sample demo with the microphone open, I couldn't get any text from the Azure speech callback. Our company registered an Azure service for the web, and I don't currently have authority to create a new service for mobile, so I used the web key directly. Could that be related to the key? Looking forward to your reply, thank you. @jjordanoc
Here is flutter env:
```
[✓] Flutter (Channel stable, 3.16.0, on macOS 14.1 23B74 darwin-x64, locale zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 31.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.1)
[✓] VS Code (version 1.84.1)
[✓] Connected device (4 available)
```
Here is my code:
```dart
initAzure(String token) {
  _speechAzure = AzureSpeechRecognition();
  // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(token, "eastus", lang: "zh-Hans");
  _speechAzure?.setFinalTranscription((text) {
    // do what you want with your final transcription
    debugPrint("speech azure:$text");
  });
  _speechAzure?.setRecognitionStartedHandler(() {
    // called at the start of recognition (it could also not be used)
  });
  _speechAzure?.setRecognitionResultHandler((text) {
    debugPrint("speech recognition result: $text");
    // do what you want with your partial transcription
    // (this one is called every time a word is recognized);
    // if you display a string, you could call setState() here to update it with the partial result
  });
}

Future<void> _recognizeVoice() async {
  try {
    AzureSpeechRecognition.simpleVoiceRecognition(); // await platform.invokeMethod('azureVoice');
  } on Exception catch (e) {
    print("Failed to get text '$e'.");
  }
}
```
Hello there! I'm facing an issue on iOS: after using the continuousRecordingWithAssessment
function, I can't use the audio player again. It seems to be related to some kind of conflict between audio playback and recording. I'm using the stopContinuousRecognition
function to stop the recording, but I think I'm missing something.
Thanks!
PS: Android works fine.
[ERROR:flutter/shell/common/shell.cc(1038)] The 'azure_speech_recognition' channel sent a message from native to Flutter on a non-platform thread. Platform channel messages must be sent on the platform thread. Failure to do so may result in data loss or crashes, and must be fixed in the plugin or application code creating that channel.
See https://docs.flutter.dev/platform-integration/platform-channels#channels-and-platform-threading for more information.
Any idea how to fix this? Thank you so much.
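This error means the plugin's Android code is invoking the MethodChannel from the Speech SDK's callback thread instead of the platform (main) thread. The usual fix is to post the channel call to the main looper, e.g. `new Handler(Looper.getMainLooper()).post(...)` in the plugin. A minimal, runnable sketch of that marshaling pattern in plain Java, where a `BlockingQueue` stands in for Android's main-thread message queue (that stand-in is an assumption for illustration only):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the Android main looper's message queue. In the plugin's
        // real Android code the equivalent is:
        //   new Handler(Looper.getMainLooper()).post(() -> channel.invokeMethod(...));
        BlockingQueue<Runnable> mainThreadQueue = new LinkedBlockingQueue<>();

        // The Speech SDK fires its callbacks on a background thread. Instead of
        // calling the MethodChannel there, the callback posts the work.
        Thread sdkCallbackThread = new Thread(() -> {
            String recognized = "hello world";
            mainThreadQueue.add(() ->
                System.out.println("invokeMethod on platform thread: " + recognized));
        });
        sdkCallbackThread.start();
        sdkCallbackThread.join();

        // The platform thread drains its queue and performs the channel call safely.
        mainThreadQueue.take().run();
    }
}
```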
When tapping on the Continuous Recognition button...
════════════════════════════════════════════════════════════════════════════════
════════ Exception caught by gesture ═══════════════════════════════════════════
Could not find a generator for route RouteSettings("/continuous", null) in the _WidgetsAppState.
════════════════════════════════════════════════════════════════════════════════
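That exception means no route named "/continuous" is registered with the app's navigator. Assuming the example app uses named routes, a sketch of the fix (ContinuousPage is a hypothetical name for the continuous-recognition screen):

```dart
// Sketch: register the named route the button navigates to.
MaterialApp(
  routes: {
    '/continuous': (context) => const ContinuousPage(),
  },
  // ...
);
```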
I got errors on both iOS and Android:
```dart
late AzureSpeechRecognition _speechAzure;
String subKey = Env.azureTTSAPIKey;
String region = Env.azureTTSAPIRegion;
String lang = "en-US";

Future<void> azureStt() async {
  await activateSpeechRecognizer();
  await recognizeVoiceMicStreaming();
}

Future<void> activateSpeechRecognizer() async {
  await Permission.microphone.request();
  _speechAzure = AzureSpeechRecognition();
  // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(subKey, region, lang: lang);
  _speechAzure.setFinalTranscription((text) {
    print("1: $text");
    // do what you want with your final transcription
  });
  _speechAzure.setRecognitionResultHandler((text) {
    print("2: $text");
    // do what you want with your partial transcription
    // (this one is called every time a word is recognized)
  });
  _speechAzure.setRecognitionStartedHandler(() {
    print("3: Start");
    // called at the start of recognition (it could also not be used)
  });
}

Future<void> recognizeVoiceMicStreaming() async {
  try {
    await AzureSpeechRecognition.micStream();
  } on PlatformException catch (e) {
    print("Failed to start the recognition: '${e.message}'.");
  }
}
```
iOS error:
[VERBOSE-2:dart_vm_initializer.cc(41)] Unhandled Exception: MissingPluginException(No implementation found for method micStream on channel azure_speech_recognition)
#0 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:308:7)
<asynchronous suspension>
Android error:
Print results:
I/flutter (20409): 3: Start
I/flutter (20409): 1:
The two are printed immediately after each other.
Also, is there a way to stop recognizing manually?
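On stopping manually: a stopContinuousRecognition function is mentioned elsewhere in this thread; whether it also applies to the micStream() mode is an assumption. A hedged sketch:

```dart
// Sketch: stop an ongoing recognition session manually.
// stopContinuousRecognition is the function name mentioned earlier in this
// thread; that it works for micStream() sessions is an assumption.
Future<void> stopRecognition() async {
  try {
    await AzureSpeechRecognition.stopContinuousRecognition();
  } on PlatformException catch (e) {
    print("Failed to stop recognition: '${e.message}'.");
  }
}
```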
The readme says iOS and Android only, but the example folder has a web subfolder.
Is there any way to access the audio data, in any format usable with the Azure service? In my app I also need to store it as a file.
Thank you for providing this awesome plugin!
Tried to add permissions, etc.
```
E/AudioRecord(13643): createRecord_l(0): AudioFlinger could not create record track, status: -1
E/AudioRecord-JNI(13643): Error creating AudioRecord instance: initialization check failed with status -1.
E/android.media.AudioRecord(13643): Error code -20 when initializing native AudioRecord object.
W/ms.VoiceChatGPT(13643): Attempt to remove non-JNI local reference, dumping thread
E/AndroidRuntime(13643): FATAL EXCEPTION: main
```