
azure_speech_recognition_null_safety's People

Contributors

cristianbregant, jjordanoc, marifdev, matias222, qianyukun

azure_speech_recognition_null_safety's Issues

Some words are masked

Hello there, I am attempting to utilize this package; however, certain words appear to be masked due to profanity and are being displayed as asterisks (***).

I wanted to know if this package has a feature to turn this off, as mentioned here.

iOS support

Is there any development plan to support iOS?

Unable to recognize WAV file

I need an interface for recognizing a WAV file. I have found that Azure's API supports this. Will it be added in the future?
Thanks

AudioConfig audioConfig = AudioConfig.fromWavFileInput("YourAudioFile.wav");
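
As a stopgap, here is a minimal sketch of recognizing a local WAV file by calling Azure's Speech-to-Text REST API for short audio directly from Dart, since the plugin does not currently expose `AudioConfig.fromWavFileInput`. The endpoint, headers, query parameters, and response field follow my reading of the Azure documentation and should be verified; the key, region, and file path are placeholders, and the `http` package is an added dependency. The optional `profanity` parameter is also the service-level switch relevant to the masking question above.

```dart
import 'dart:convert';
import 'dart:io';

import 'package:http/http.dart' as http;

// Hedged sketch: send a short WAV file (16 kHz, 16-bit, mono PCM) to the
// Azure Speech-to-Text REST API for short audio. Endpoint and fields are
// based on my reading of the Azure docs; double-check before relying on it.
Future<String?> recognizeWavFile({
  required String subscriptionKey, // placeholder: your Speech resource key
  required String region,          // placeholder: e.g. "eastus"
  required String wavPath,         // placeholder: path to the WAV file
  String language = 'en-US',
  String profanity = 'masked',     // 'masked' | 'removed' | 'raw'
}) async {
  final bytes = await File(wavPath).readAsBytes();
  final uri = Uri.parse(
      'https://$region.stt.speech.microsoft.com/speech/recognition/'
      'conversation/cognitiveservices/v1'
      '?language=$language&profanity=$profanity');

  final response = await http.post(
    uri,
    headers: {
      'Ocp-Apim-Subscription-Key': subscriptionKey,
      'Content-Type': 'audio/wav; codecs=audio/pcm; samplerate=16000',
      'Accept': 'application/json',
    },
    body: bytes,
  );

  if (response.statusCode != 200) {
    print('Recognition failed: ${response.statusCode} ${response.body}');
    return null;
  }
  final result = jsonDecode(response.body) as Map<String, dynamic>;
  // "DisplayText" is populated when "RecognitionStatus" is "Success".
  return result['DisplayText'] as String?;
}
```

Note that the short-audio endpoint only accepts brief clips (roughly 60 seconds, if I recall correctly); longer files would need batch transcription or the SDK's continuous recognition.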

Can't get text directly when running the demo

When I ran the sample demo with the microphone open, I couldn't get any text from the Azure speech callback. Our company registered an Azure service for the web; I don't have the authority to create a new service for mobile right now, so I used the web key directly. Could that be related to the key? Looking forward to your reply, thank you. @jjordanoc

Here is my Flutter environment:

[✓] Flutter (Channel stable, 3.16.0, on macOS 14.1 23B74 darwin-x64, locale zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 31.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.1)
[✓] VS Code (version 1.84.1)
[✓] Connected device (4 available)

Here is my code:

```dart
void initAzure(String token) {
  _speechAzure = AzureSpeechRecognition();

  // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(token, "eastus", lang: "zh-Hans");

  _speechAzure?.setFinalTranscription((text) {
    // do what you want with your final transcription
    debugPrint("speech azure:$text");
  });

  _speechAzure?.setRecognitionStartedHandler(() {
    // called at the start of recognition (it could also not be used)
  });

  _speechAzure?.setRecognitionResultHandler((text) {
    debugPrint("speech recognition result: $text");
    // do what you want with your partial transcription (this one is called every time a word is recognized)
    // if you have a string that is displayed you could call setState() here to update it with the partial result
  });
}

Future _recognizeVoice() async {
  try {
    AzureSpeechRecognition.simpleVoiceRecognition(); // await platform.invokeMethod('azureVoice');
  } on Exception catch (e) {
    print("Failed to get text '$e'.");
  }
}
```

Continuous Assessment is causing conflicts with audio player on iOS

Hello there! I'm facing an issue on iOS: after using the continuousRecordingWithAssessment function, I can't use the audio player again. It seems to be some kind of conflict between audio playback and recording. I'm using the stopContinuousRecognition function to stop the recording, but I think I'm missing something.

Thanks!

PS: Android works fine

Speech Recognition Fails

[ERROR:flutter/shell/common/shell.cc(1038)] The 'azure_speech_recognition' channel sent a message from native to Flutter on a non-platform thread. Platform channel messages must be sent on the platform thread. Failure to do so may result in data loss or crashes, and must be fixed in the plugin or application code creating that channel.
See https://docs.flutter.dev/platform-integration/platform-channels#channels-and-platform-threading for more information.

Any idea how to fix this? Thank you so much.

Continuous Recognition not implemented in Example app

When tapping on the Continuous Recognition button...

════════════════════════════════════════════════════════════════════════════════

════════ Exception caught by gesture ═══════════════════════════════════════════
Could not find a generator for route RouteSettings("/continuous", null) in the _WidgetsAppState.
════════════════════════════════════════════════════════════════════════════════
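
The error simply means the example app pushes the named route "/continuous" without ever registering it. Below is a minimal sketch of the missing wiring, using hypothetical placeholder widgets (`HomePage`, `ContinuousRecognitionPage`) rather than the plugin's actual example code:

```dart
import 'package:flutter/material.dart';

// Placeholder screens: in the real example app these would be the existing
// home page and a page that drives continuous recognition.
class HomePage extends StatelessWidget {
  const HomePage({super.key});
  @override
  Widget build(BuildContext context) =>
      const Scaffold(body: Center(child: Text('Home')));
}

class ContinuousRecognitionPage extends StatelessWidget {
  const ContinuousRecognitionPage({super.key});
  @override
  Widget build(BuildContext context) =>
      const Scaffold(body: Center(child: Text('Continuous recognition')));
}

void main() {
  runApp(MaterialApp(
    initialRoute: '/',
    // Registering "/continuous" here is what the example app is missing.
    routes: {
      '/': (context) => const HomePage(),
      '/continuous': (context) => const ContinuousRecognitionPage(),
    },
  ));
}
```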

Error micStream

I got errors on both iOS and Android:

late AzureSpeechRecognition _speechAzure;
  String subKey = Env.azureTTSAPIKey;
  String region = Env.azureTTSAPIRegion;
  String lang = "en-US";

  Future<void> azureStt() async {
    await activateSpeechRecognizer();
    await recognizeVoiceMicStreaming();
  }

  Future<void> activateSpeechRecognizer() async {
    await Permission.microphone.request();
    _speechAzure = AzureSpeechRecognition();
    // MANDATORY INITIALIZATION
    AzureSpeechRecognition.initialize(subKey, region,
        lang: lang,);

    _speechAzure.setFinalTranscription((text) {
      print("1: $text");
      // do what you want with your final transcription
    });

    _speechAzure.setRecognitionResultHandler((text) {
      print("2: $text");
      // do what you want with your partial transcription (this one is called every time a word is recognized)
      // if you have a string that is displayed you could call setState() here to update it with the partial result
    });

    _speechAzure.setRecognitionStartedHandler(() {
      print("3: Start");
      // called at the start of recognition (it could also not be used)
    });
  }

  Future<void> recognizeVoiceMicStreaming() async {
    try {
      await AzureSpeechRecognition.micStream();
    } on PlatformException catch (e) {
      print("Failed start the recognition: '${e.message}'.");
    }
  }

iOS - Error

[VERBOSE-2:dart_vm_initializer.cc(41)] Unhandled Exception: MissingPluginException(No implementation found for method micStream on channel azure_speech_recognition)
#0      MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:308:7)
<asynchronous suspension>

Android - Error
Print results:

I/flutter (20409): 3: Start
I/flutter (20409): 1: 

They print immediately after each other.

Also, is there a possibility to stop recognition manually?
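
On stopping manually: the continuous-assessment issue above mentions a `stopContinuousRecognition` function in this plugin. A hedged sketch, assuming that method is static like the other calls shown in this thread and that it also applies to the mic-stream mode (both assumptions worth checking against the plugin source):

```dart
import 'package:flutter/services.dart';
// Assumed import path, following the usual pub convention for this package.
import 'package:azure_speech_recognition_null_safety/azure_speech_recognition_null_safety.dart';

// Hedged sketch: stop an ongoing recognition session manually via the
// stopContinuousRecognition call mentioned in the continuous-assessment
// issue above. Whether it covers the micStream mode, and whether it
// returns a Future worth awaiting, are assumptions to verify.
void stopRecognition() {
  try {
    AzureSpeechRecognition.stopContinuousRecognition();
  } on PlatformException catch (e) {
    print("Failed to stop recognition: '${e.message}'.");
  }
}
```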

Crashes: AudioFlinger could not create record track

Tried to add permissions, etc.

E/AudioRecord(13643): createRecord_l(0): AudioFlinger could not create record track, status: -1
E/AudioRecord-JNI(13643): Error creating AudioRecord instance: initialization check failed with status -1.
E/android.media.AudioRecord(13643): Error code -20 when initializing native AudioRecord object.
W/ms.VoiceChatGPT(13643): Attempt to remove non-JNI local reference, dumping thread
E/AndroidRuntime(13643): FATAL EXCEPTION: main
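
AudioFlinger status -1 usually means the record track could not be opened at all, which commonly happens when the RECORD_AUDIO permission is missing or another recorder is already holding the microphone. Here is a minimal defensive sketch, reusing the `permission_handler` call already shown in the mic-stream issue above; the plugin import path is an assumption:

```dart
import 'package:flutter/services.dart';
import 'package:permission_handler/permission_handler.dart';
// Assumed import path, following the usual pub convention for this package.
import 'package:azure_speech_recognition_null_safety/azure_speech_recognition_null_safety.dart';

// Hedged sketch: only start recognition once the microphone permission is
// actually granted. Also make sure no other recorder (your own or another
// app's) is holding the microphone before calling this.
Future<void> startRecognitionIfMicAvailable() async {
  final status = await Permission.microphone.request();
  if (!status.isGranted) {
    print('Microphone permission not granted; not starting recognition.');
    return;
  }
  try {
    // Or micStream(), depending on which mode you use.
    AzureSpeechRecognition.simpleVoiceRecognition();
  } on PlatformException catch (e) {
    print("Failed to start recognition: '${e.message}'.");
  }
}
```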
