
On-device speech-to-text engine powered by deep learning

Home Page: https://picovoice.ai/

License: Apache License 2.0

Python 18.72% C 3.66% Ruby 0.47% Swift 6.70% Java 10.81% JavaScript 9.14% Shell 0.54% Go 9.84% TypeScript 9.78% C# 13.48% Rust 12.02% Dart 4.85%
stt speech-to-text asr automatic-speech-recognition on-device speech-recognition transcription voice-recognition voice-to-text

leopard's Introduction

Leopard


Made in Vancouver, Canada by Picovoice


Leopard is an on-device speech-to-text engine. Leopard is:

  • Private; All voice processing runs locally.
  • Accurate
  • Compact and Computationally-Efficient
  • Cross-Platform:
    • Linux (x86_64), macOS (x86_64, arm64), Windows (x86_64)
    • Android and iOS
    • Chrome, Safari, Firefox, and Edge
    • Raspberry Pi (3, 4, 5) and NVIDIA Jetson Nano

AccessKey

AccessKey is your authentication and authorization token for deploying Picovoice SDKs, including Leopard. Anyone who is using Picovoice needs to have a valid AccessKey. You must keep your AccessKey secret. Internet connectivity is required to validate your AccessKey with Picovoice license servers, even though the voice recognition itself runs 100% offline.

AccessKey also verifies that your usage is within the limits of your account. Everyone who signs up for Picovoice Console receives Free Tier usage rights. If you wish to increase your limits, you can purchase a subscription plan.
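
For example, one common way to keep the AccessKey out of source code is to read it from an environment variable at runtime. Below is a minimal sketch using the Python SDK shown later in this README; the variable name PICOVOICE_ACCESS_KEY is an arbitrary choice for this example, not something the SDK requires.

import os

import pvleopard

# hypothetical variable name; export PICOVOICE_ACCESS_KEY in your shell first
access_key = os.environ["PICOVOICE_ACCESS_KEY"]
leopard = pvleopard.create(access_key=access_key)
# ... transcribe audio ...
leopard.delete()  # release native resources when done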

Language Support

Leopard supports English as well as French, German, Italian, Japanese, Korean, Portuguese, and Spanish (added in v1.2.0; see Releases below). Additional languages may have been added since; consult the Picovoice documentation for the current list.

Demos

Python Demos

Install the demo package:

pip3 install pvleoparddemo

Run the following in the terminal:

leopard_demo_file --access_key ${ACCESS_KEY} --audio_paths ${AUDIO_FILE_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

C Demo

Build the demo:

cmake -S demo/c/ -B demo/c/build && cmake --build demo/c/build

Run the demo:

./demo/c/build/leopard_demo -a ${ACCESS_KEY} -l ${LIBRARY_PATH} -m ${MODEL_FILE_PATH} ${AUDIO_FILE_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${LIBRARY_PATH} with the path to the appropriate library under lib, ${MODEL_FILE_PATH} with the path to the default model file (or your custom one), and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

iOS Demo

To run the demo, go to demo/ios/LeopardDemo and run:

pod install

Replace let accessKey = "${YOUR_ACCESS_KEY_HERE}" in the file ViewModel.swift with your AccessKey.

Then, using Xcode, open the generated LeopardDemo.xcworkspace and run the application.

Android Demo

Using Android Studio, open demo/android/LeopardDemo as an Android project and then run the application.

Replace "${YOUR_ACCESS_KEY_HERE}" in the file MainActivity.java with your AccessKey.

Node.js Demo

Install the demo package:

yarn global add @picovoice/leopard-node-demo

Run the following in the terminal:

leopard-file-demo --access_key ${ACCESS_KEY} --input_audio_file_path ${AUDIO_FILE_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

For more information about Node.js demos go to demo/nodejs.

Flutter Demo

To run the Leopard demo on Android or iOS with Flutter, you must have the Flutter SDK installed on your system. Once installed, you can run flutter doctor to determine any other missing requirements for your relevant platform. Once your environment has been set up, launch a simulator or connect an Android/iOS device.

Run the prepare_demo script from demo/flutter with a language code to set up the demo in the language of your choice (e.g. de -> German, ko -> Korean). To see a list of available languages, run prepare_demo without a language code.

dart scripts/prepare_demo.dart ${LANGUAGE}

Replace "${YOUR_ACCESS_KEY_HERE}" in the file main.dart with your AccessKey.

Run the following command from demo/flutter to build and deploy the demo to your device:

flutter run

Go Demo

The demo requires cgo, which on Windows may mean that you need to install a gcc compiler like MinGW to build it properly.

From demo/go run the following command from the terminal to build and run the file demo:

go run filedemo/leopard_file_demo.go -access_key "${ACCESS_KEY}" -input_audio_path "${AUDIO_FILE_PATH}"

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

For more information about Go demos go to demo/go.

React Native Demo

To run the React Native Leopard demo app you will first need to set up your React Native environment. For this, please refer to React Native's documentation. Once your environment has been set up, navigate to demo/react-native to run the following commands:

For Android:

yarn android-install    # sets up environment
yarn android-run        # builds and deploys to Android

For iOS:

yarn ios-install        # sets up environment
yarn ios-run

Java Demo

The Leopard Java demo is a command-line application that lets you choose between running Leopard on an audio file or on microphone input.

From demo/java run the following commands from the terminal to build and run the file demo:

cd demo/java
./gradlew build
cd build/libs
java -jar leopard-file-demo.jar -a ${ACCESS_KEY} -i ${AUDIO_FILE_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

For more information about Java demos go to demo/java.

.NET Demo

Leopard .NET demo is a command-line application that lets you choose between running Leopard on an audio file or on real-time microphone input.

From demo/dotnet/LeopardDemo run the following in the terminal:

dotnet run -c FileDemo.Release -- --access_key ${ACCESS_KEY} --input_audio_path ${AUDIO_FILE_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

For more information about .NET demos, go to demo/dotnet.

Rust Demo

Leopard Rust demo is a command-line application that lets you choose between running Leopard on an audio file or on real-time microphone input.

From demo/rust/filedemo run the following in the terminal:

cargo run --release -- --access_key ${ACCESS_KEY} --input_audio_path ${AUDIO_FILE_PATH}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with a path to an audio file you wish to transcribe.

For more information about Rust demos, go to demo/rust.

Web Demos

Vanilla JavaScript and HTML

From demo/web run the following in the terminal:

yarn
yarn start

(or)

npm install
npm run start

Open http://localhost:5000 in your browser to try the demo.

React Demo

From demo/react run the following in the terminal:

yarn
yarn start ${LANGUAGE}

(or)

npm install
npm run start ${LANGUAGE}

Open http://localhost:3000 in your browser to try the demo.

SDKs

Python

Install the Python SDK:

pip3 install pvleopard

Create an instance of the engine and transcribe an audio file:

import pvleopard

leopard = pvleopard.create(access_key='${ACCESS_KEY}')

print(leopard.process_file('${AUDIO_FILE_PATH}'))

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with the path to an audio file. Finally, when done, be sure to explicitly release the resources using leopard.delete().
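
In addition to the transcript, process_file also returns word-level metadata. The following is a minimal sketch of that usage, assuming process_file returns a (transcript, words) pair and that each word exposes word, start_sec, end_sec, and confidence fields mirroring the pv_word_t struct in the C example below; it also releases resources in a finally block:

import pvleopard

leopard = pvleopard.create(access_key='${ACCESS_KEY}')
try:
    transcript, words = leopard.process_file('${AUDIO_FILE_PATH}')
    print(transcript)
    for word in words:
        # word text, start/end offsets in seconds, and a confidence score
        print("%s [%.1f -> %.1f] (%.2f)" % (word.word, word.start_sec, word.end_sec, word.confidence))
finally:
    # release native resources
    leopard.delete()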

C

Create an instance of the engine and transcribe an audio file:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#include "pv_leopard.h"

pv_leopard_t *leopard = NULL;
bool enable_automatic_punctuation = false;
bool enable_speaker_diarization = false;

pv_status_t status = pv_leopard_init(
  "${ACCESS_KEY}",
  "${MODEL_FILE_PATH}",
  enable_automatic_punctuation,
  enable_speaker_diarization,
  &leopard);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

char *transcript = NULL;
int32_t num_words = 0;
pv_word_t *words = NULL;
status = pv_leopard_process_file(
    leopard,
    "${AUDIO_FILE_PATH}",
    &transcript,
    &num_words,
    &words);
if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

fprintf(stdout, "%s\n", transcript);
for (int32_t i = 0; i < num_words; i++) {
    fprintf(
            stdout,
            "[%s]\t.start_sec = %.1f .end_sec = %.1f .confidence = %.2f .speaker_tag = %d\n",
            words[i].word,
            words[i].start_sec,
            words[i].end_sec,
            words[i].confidence,
            words[i].speaker_tag);
}

pv_leopard_transcript_delete(transcript);
pv_leopard_words_delete(words);

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${MODEL_FILE_PATH} with the path to the default model file (or your custom one), and ${AUDIO_FILE_PATH} with the path to an audio file. Finally, when done, be sure to release the resources acquired using pv_leopard_delete(leopard).

iOS

The Leopard iOS binding is available via CocoaPods. To import it into your iOS project, add the following line to your Podfile and run pod install:

pod 'Leopard-iOS'

Create an instance of the engine and transcribe an audio file:

import Leopard

let modelPath = Bundle(for: type(of: self)).path(
        forResource: "${MODEL_FILE}", // Name of the model file for Leopard
        ofType: "pv")!

let leopard = Leopard(accessKey: "${ACCESS_KEY}", modelPath: modelPath)

do {
    let audioPath = Bundle(for: type(of: self)).path(forResource: "${AUDIO_FILE_NAME}", ofType: "${AUDIO_FILE_EXTENSION}")!
    let result = try leopard.processFile(audioPath)
    print(result.transcript)
} catch let error as LeopardError {
    // handle Leopard error
} catch {
    // handle other errors
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${MODEL_FILE} with a custom-trained model from Picovoice Console or the default model, ${AUDIO_FILE_NAME} with the name of the audio file, and ${AUDIO_FILE_EXTENSION} with the extension of the audio file.

Android

To include the package in your Android project, ensure you have included mavenCentral() in your top-level build.gradle file and then add the following to your app's build.gradle:

dependencies {
    implementation 'ai.picovoice:leopard-android:${LATEST_VERSION}'
}

Create an instance of the engine and transcribe an audio file:

import ai.picovoice.leopard.*;

final String accessKey = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
final String modelPath = "${MODEL_FILE_PATH}";
try {
    Leopard leopard = new Leopard.Builder()
        .setAccessKey(accessKey)
        .setModelPath(modelPath)
        .build(appContext);

    File audioFile = new File("${AUDIO_FILE_PATH}");
    LeopardTranscript transcript = leopard.processFile(audioFile.getAbsolutePath());

} catch (LeopardException ex) {
    // handle error
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console, ${MODEL_FILE_PATH} with a custom trained model from console or the default model, and ${AUDIO_FILE_PATH} with the path to the audio file.

Node.js

Install the Node.js SDK:

yarn add @picovoice/leopard-node

Create an instance of the Leopard class:

const Leopard = require("@picovoice/leopard-node");
const accessKey = "${ACCESS_KEY}" // Obtained from the Picovoice Console (https://console.picovoice.ai/)
let leopard = new Leopard(accessKey);

const result = leopard.processFile('${AUDIO_FILE_PATH}');
console.log(result.transcript);

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with the path to an audio file.

When done, be sure to release resources using release():

leopard.release();

Flutter

Add the Leopard Flutter plugin to your pubspec.yaml:

dependencies:
  leopard_flutter: ^<version>

Create an instance of the engine and transcribe an audio file:

import 'package:leopard_flutter/leopard.dart';

final String accessKey = '{ACCESS_KEY}'; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)

try {
    Leopard _leopard = await Leopard.create(accessKey, '{MODEL_FILE_PATH}');
    LeopardTranscript result = await _leopard.processFile("${AUDIO_FILE_PATH}");
    print(result.transcript);
} on LeopardException catch (err) {
    // handle error
}

Replace ${ACCESS_KEY} with your AccessKey obtained from Picovoice Console, ${MODEL_FILE_PATH} with a custom trained model from Picovoice Console or the default model, and ${AUDIO_FILE_PATH} with the path to the audio file.

Go

Install the Go binding:

go get github.com/Picovoice/leopard/binding/go/v2

Create an instance of the engine and transcribe an audio file:

import . "github.com/Picovoice/leopard/binding/go/v2"

leopard = Leopard{AccessKey: "${ACCESS_KEY}"}
err := leopard.Init()
if err != nil {
    // handle err init
}
defer leopard.Delete()

transcript, words, err := leopard.ProcessFile("${AUDIO_FILE_PATH}")
if err != nil {
    // handle process error
}

log.Println(transcript)
log.Println(words) // per-word metadata returned alongside the transcript

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with the path to an audio file. Finally, when done, be sure to explicitly release the resources using leopard.Delete().

React Native

The Leopard React Native binding is available via NPM. Add it via the following command:

yarn add @picovoice/leopard-react-native

Create an instance of the engine and transcribe an audio file:

import {Leopard, LeopardErrors} from '@picovoice/leopard-react-native';

const getAudioFrame = () => {
  // get audio frames
}

try {
  const leopard = await Leopard.create("${ACCESS_KEY}", "${MODEL_FILE_PATH}")
  const { transcript, words } = await leopard.processFile("${AUDIO_FILE_PATH}")
  console.log(transcript)
} catch (err: any) {
  if (err instanceof LeopardErrors) {
    // handle error
  }
}

Replace ${ACCESS_KEY} with your AccessKey obtained from Picovoice Console, ${MODEL_FILE_PATH} with a custom trained model from Picovoice Console or the default model and ${AUDIO_FILE_PATH} with the absolute path of the audio file. When done be sure to explicitly release the resources using leopard.delete().

Java

The latest Java bindings are available from the Maven Central Repository at:

ai.picovoice:leopard-java:${version}

Create an instance of the engine with the Leopard Builder class and transcribe an audio file:

import ai.picovoice.leopard.*;

final String accessKey = "${ACCESS_KEY}";

try {
    Leopard leopard = new Leopard.Builder().setAccessKey(accessKey).build();
    LeopardTranscript result = leopard.processFile("${AUDIO_FILE_PATH}");
    System.out.println(result.getTranscriptString());
    leopard.delete();
} catch (LeopardException ex) {
    // handle error
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console and ${AUDIO_FILE_PATH} with the path to an audio file. Finally, when done, be sure to explicitly release the resources using leopard.delete().

.NET

Install the .NET SDK using NuGet or the dotnet CLI:

dotnet add package Leopard

Create an instance of the engine and transcribe an audio file:

using Pv;

const string accessKey = "${ACCESS_KEY}";
const string audioPath = "${AUDIO_FILE_PATH}";

Leopard leopard = Leopard.Create(accessKey);

Console.Write(leopard.ProcessFile(audioPath));

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console. Finally, when done release the resources using leopard.Dispose().

Rust

First you will need Rust and Cargo installed on your system.

To add the Leopard library to your app, add pv_leopard to your app's Cargo.toml manifest:

[dependencies]
pv_leopard = "*"

Create an instance of the engine using a LeopardBuilder instance and transcribe an audio file:

use leopard::{Leopard, LeopardBuilder};

fn main() {
    let access_key = "${ACCESS_KEY}"; // AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
    let leopard: Leopard = LeopardBuilder::new().access_key(access_key).init().expect("Unable to create Leopard");

    if let Ok(leopard_transcript) = leopard.process_file("/absolute/path/to/audio_file") {
        println!("{}", leopard_transcript.transcript);
    }
}

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console.

Web

Vanilla JavaScript and HTML (ES Modules)

Install the web SDK using yarn:

yarn add @picovoice/leopard-web

or using npm:

npm install --save @picovoice/leopard-web

Create an instance of the engine using LeopardWorker and transcribe an audio file:

import { LeopardWorker } from "@picovoice/leopard-web";
import leopardParams from "${PATH_TO_BASE64_LEOPARD_PARAMS}";

function getAudioData(): Int16Array {
  // ... function to get audio data
  return new Int16Array();
}

const leopard = await LeopardWorker.create(
  "${ACCESS_KEY}",
  { base64: leopardParams },
);

const { transcript, words } = await leopard.process(getAudioData());
console.log(transcript);
console.log(words);

Replace ${ACCESS_KEY} with yours obtained from Picovoice Console. Finally, when done release the resources using leopard.release().

React

yarn add @picovoice/leopard-react @picovoice/web-voice-processor

(or)

npm install @picovoice/leopard-react @picovoice/web-voice-processor

Then use the useLeopard hook in your component:

import { useEffect } from "react";
import { useLeopard } from "@picovoice/leopard-react";

function App(props) {
  const {
    result,
    isLoaded,
    error,
    init,
    processFile,
    startRecording,
    stopRecording,
    isRecording,
    recordingElapsedSec,
    release,
  } = useLeopard();

  const initEngine = async () => {
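    // `leopardModel` is not defined in this snippet; it is assumed to be a Leopard
    // model definition such as { base64: leopardParams }, as in the vanilla
    // JavaScript example above.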
    await init(
      "${ACCESS_KEY}",
      leopardModel,
    );
  };

  const handleFileUpload = async (audioFile: File) => {
    await processFile(audioFile);
  }

  const toggleRecord = async () => {
    if (isRecording) {
      await stopRecording();
    } else {
      await startRecording();
    }
  };

  useEffect(() => {
    if (result !== null) {
      console.log(result.transcript);
      console.log(result.words);
    }
  }, [result])
}

Releases

v2.0.0 - November 30th, 2023

  • Added speaker diarization feature
  • Added React SDK
  • Improvements to error reporting
  • Upgrades to authorization and authentication system
  • Improved engine accuracy
  • Various bug fixes and improvements
  • Node min support bumped to Node 16
  • Bumped iOS support to iOS 13+
  • Patches to .NET support

v1.2.0 - March 27th, 2023

  • Added language support for French, German, Italian, Japanese, Korean, Portuguese and Spanish
  • Added support for .NET 7.0 and fixed support for .NET Standard 2.0
  • iOS minimum support moved to 11.0
  • Improved stability and performance

v1.1.0 - August 11th, 2022

  • Added true-casing by default for transcription results
  • Added option to enable automatic punctuation insertion
  • Word timestamps and confidence returned as part of transcription
  • Support for 3gp (AMR) and MP4/m4a (AAC) audio files
  • Leopard Web SDK release

v1.0.0 - January 10th, 2022

  • Initial release

leopard's People

Contributors

albho, dejaydev, dependabot[bot], erismik, kenarsa, ksyeo1010, laves, mrrostam

leopard's Issues

Leopard Issue:

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

Leopard should load the model from "assets/models/leopard_params_it.pv".

Actual behaviour

1. I put leopard_params_it.pv in assets/models
2. I added the asset to pubspec.yaml:

flutter:
  assets:
    - assets/models/leopard_params_it.pv

Steps to reproduce the behaviour

In my code I have:

final String modelPath = "assets/models/leopard_params_it.pv";
_leopard = await Leopard.create(accessKey, modelPath,
    enableAutomaticPunctuation: true);

The error: "failed to extract 'assets/models/leopard_params_it.pv'"

(Include enough details so that the issue can be reproduced independently.)

pvLeopard not exported by package leopard

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

I am using Golang 1.19 and am trying to define a Leopard object. In older versions of Leopard, I could define it like

var leopardInstance leopard.Leopard

But now, the name has changed to pvLeopard (starting with a lowercase character) rather than Leopard (starting with a capital character), so it gives the error shown in the title.

Describe the solution you'd like

I would like pvLeopard to start with a capital letter so it is exported, like leopard.PvLeopard or leopard.Leopard rather than leopard.pvLeopard.

Additional context
The intended use is like this:

leopardInstance := leopard.NewLeopard("key")
(do stuff with leopardInstance)

But my use-case involves having multiple Leopard processes ready to be used at any time, and there isn't enough time to initiate a process each time a request comes in. This is what my code was like before this change

var leopardSTTArray []leopard.Leopard
for i := 0; i < picovoiceInstances; i++ {
    fmt.Println("Initializing Picovoice Instance " + strconv.Itoa(i))
    leopardSTTArray = append(leopardSTTArray, leopard.NewLeopard(picovoiceKey))
    leopardSTTArray[i].Init()
}

LeopardIOError not otherwise specified

On Ubuntu 20.04, with version 1.1.2 installed via pip3, when calling the process_file method (whether passing absolute or relative paths) I get the following error:

"LeopardIOError:"

I cannot figure this out, because when this error is raised in the code it appears to always have additional verbiage ("cannot find the [audio/model/etc]"). My error message does not further specify what the IO error is. The sample rate of the file is 48000, which is greater than the sample rate param (16000), but this should be supported by the process_file method based on the comments.

Only supports English?

When I run this project, I find that only English is accepted. Does it not support multiple languages?

Leopard Issue: An error occurred while creating video: Unsupported CPU: `0x000`

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

(screenshot)

Actual behaviour

(screenshot)

Steps to reproduce the behaviour

pvleopard.create(access_key=access_key)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.11/site-packages/pvleopard/_util.py", line 42, in create
    library_path = default_library_path('')
                   ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pvleopard/_util.py", line 59, in default_library_path
    linux_machine = _linux_machine()
                    ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pvleopard/_util.py", line 45, in _linux_machine
    raise NotImplementedError("Unsupported CPU: %s." % cpu_part)
NotImplementedError: Unsupported CPU: 0x000.

(Include enough details so that the issue can be reproduced independently.)

This is running in a Docker container:

FROM python:3.11-slim
# FROM python:3.11-alpine3.18

and the system arch:

root@b6136564c5d7:/app# uname -a
Linux b6136564c5d7 5.15.49-linuxkit-pr #1 SMP PREEMPT Thu May 25 07:27:39 UTC 2023 aarch64 GNU/Linux
root@b6136564c5d7:/app# uname -am
Linux b6136564c5d7 5.15.49-linuxkit-pr #1 SMP PREEMPT Thu May 25 07:27:39 UTC 2023 aarch64 GNU/Linux
root@b6136564c5d7:/app# uname -m
aarch64
root@b6136564c5d7:/app#

Leopard Issue: Android Build Failed | JAVA Version Clash

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

My local system has Java 17 installed. There is a clash between the local Java version and the Java version used in the Android demo.

Actual behaviour

(screenshots)

Steps to reproduce the behaviour

Following the steps in the documentation, run npm run android-run en to see the issue. The build is expected to succeed without version clashes.

(Include enough details so that the issue can be reproduced independently.)

Automatic punctuation in "words" list

Is your feature request related to a problem? Please describe.
I am trying to create subtitles for videos with Leopard's leopard.process_file() function, and I want them to include punctuation. However, enable_automatic_punctuation=True only affects the "transcript" string, not the list of words, so there is no punctuation in the subtitles. I am using Python.

Describe the solution you'd like
If enable_automatic_punctuation is True, then punctuation should be added not only to the transcript, but to the words as well.

Describe alternatives you've considered
I would like to see one more flag, e.g. "enable_punctuation_in_words", that adds punctuation to the words the same way as in the transcript string.

Additional context
(screenshots)
As you can see in those images, the end of the sentence (and its period) is in the transcript, but there is no period in the words.
I also have a question: what punctuation can Leopard add to text? I didn't find this in the documentation, sorry.
Thank you very much!

Leopard Documentation Issue: Need for specific GlibC version on Raspberry

What is the URL of the doc?

https://picovoice.ai/docs/quick-start/leopard-python/

What's the nature of the issue? (e.g. steps do not work, typos/grammar/spelling, etc., out of date)

I tried to use Leopard on my Raspberry Pi 4 with Raspbian Buster. Unfortunately, I get the following error saying I don't have the correct glibc version: /lib/arm-linux-gnueabihf/libm.so.6: version 'GLIBC_2.29' not found. In the documentation, I can't find any information about the need for a specific version. From what I read, it's discouraged to upgrade only glibc. So does one need the latest Raspberry Pi OS for Leopard to work?

Cheetah and Porcupine work flawlessly. Really great software. Thanks so much for developing it and making it available for personal users :)

Leopard Issue: Python library not working - GLIBC_2.29 not found

Expected behaviour

pvleopard.create(...) should work similarly to the other constructors, e.g. pvrhino.create(...) and pvporcupine.create(...).

Actual behaviour

The constructors pvrhino.create(...) and pvporcupine.create(...), which I have been using for several months now, are working as expected when used with Python 3.7.3 on Raspbian (Buster).

The constructor pvleopard.create(...) fails:

self._leopard = pvleopard.create(access_key=ACCESS_KEY, model_path=LEOPARD_MODEL_PATH)

  File "/home/patrick/.local/lib/python3.7/site-packages/pvleopard/_factory.py", line 44, in create
    enable_automatic_punctuation=enable_automatic_punctuation)
  File "/home/patrick/.local/lib/python3.7/site-packages/pvleopard/_leopard.py", line 148, in __init__
    library = cdll.LoadLibrary(library_path)
  File "/usr/lib/python3.7/ctypes/__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.7/ctypes/__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.29' not found (required by /home/patrick/.local/lib/python3.7/site-packages/pvleopard/lib/raspberry-pi/cortex-a72/libpv_leopard.so)

When I try to manually upgrade GLIBC, I get:

sudo apt-get install libc6
Reading package lists... Done
Building dependency tree
Reading state information... Done
libc6 is already the newest version (2.28-10+rpt2+rpi1+deb10u2).
The following package was automatically installed and is no longer required:
libva-wayland2
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Is it really the case that Leopard - unlike the other libraries - needs GLIBC 2.29?

Are there any tools to recognize only digits in popular languages?

Hello. Are there any tools, maybe by Picovoice or others, that can recognize digits in many languages at once?
I want to be able to quickly transform all digits heard into, well, a string of numbers, in many popular languages like English, Italian, Spanish, Russian, German...
Thank you.

[Error] license file belongs to a different version of the library

Hi Sir,

I am using Ubuntu 14.04 LTS to try your demo. I had compiled it using gcc -I include/ -O3 demo/c/leopard_demo.c -ldl -o leopard_demo. However, I faced a problem when I tried to run the following:
./leopard_demo
./lib/linux/x86_64/libpv_leopard.so
./lib/common/acoustic_model.pv
./lib/common/language_model.pv
./resources/license/leopard_eval_linux.lic
./resources/audio_samples/test.wav

It gave me an error of the following:

[Error] license file belongs to a different version of the library

Is there anyway to resolve this?

Leopard Issue: [LeopardIOError: Process failed.]

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

I am uploading an MP3 or WAV audio file and passing its absolute path (obtained via expo-document-picker) to the processFile method of Leopard. I expect it to transcribe the audio file and return the transcript and words after processing. The same file works on the web, but it's not working in React Native.

Actual behaviour

It throws a [LeopardIOError: Process failed.] error, which is not clear enough to understand what I am doing wrong.

(screenshot)

Steps to reproduce the behaviour

(Include enough details so that the issue can be reproduced independently.)

import { StatusBar } from 'expo-status-bar';
import { StyleSheet,Text, View,Button } from 'react-native';
import { Leopard } from '@picovoice/leopard-react-native'
import { useEffect, useRef, useCallback, useState } from 'react';
import * as DocumentPicker from 'expo-document-picker';

const accessKey = "<accessKey>";

export default function App() {
  const [fileResponse, setFileResponse] = useState([]);
  const leopard = useRef();

  useEffect(() => {
    createLeopardInstance();

    return () => {
      deleteLeopardInstance();
    }
  },[]);

  const createLeopardInstance = async () => {
    try {
      leopard.current = await Leopard.create(accessKey, "models/leopard_params.pv", {enableAutomaticPunctuation: true});
    } catch (err) {
      if (err) {
        // handle error
        console.log("===",err);
      }
    }
  }

  const deleteLeopardInstance = async () => {
    if(!leopard.current) return;

    await leopard.current.delete();
  }

  const transcribeAudio = async (path) => {
    try {
      const { transcript, words } = await leopard.current.processFile(path);
      console.log(transcript,words)
    } catch (err) {
      if (err) {
        // handle error
        console.log("===",err);
      }
    }
  }

  const handleDocumentSelection = useCallback(async () => {
    try {
      const response = await DocumentPicker.getDocumentAsync({
        presentationStyle: 'fullScreen',
      });
      setFileResponse(response);
      transcribeAudio(response.assets[0].uri);
    } catch (err) {
      console.warn(err);
    }
  }, []);

  return (
    <View style={styles.container}>
      <Text>Hello</Text>
      <Button title="Select 📑" onPress={handleDocumentSelection} />
      <StatusBar style="auto" />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});

Leopard Issue: App crashing in Non-GMS device and in APK

When I try to run the application APK on any device, it crashes on tapping START. I have been creating APKs using "flutter build apk --split-per-abi". Also, I want to know whether we can run this application on a device without Google services, i.e. a non-Google Android device without GMS.

Expected behaviour

App should work fine.

Actual behaviour

App Crashing

Steps to reproduce the behaviour

(Include enough details so that the issue can be reproduced independently.)

Flutter plugin

Do you guys offer a Flutter plugin for this to work alongside the Picovoice manager?

Question

I don't understand the free version. Can I use it in a web app, regardless of how users will access the web app?

Error when running python demo

When running the Python demo, I get the error below.
I have made sure to use the whole folder as given on GitHub and to include my license file:

Traceback (most recent call last):
  File "demo/python/leopard_demo.py", line 61, in <module>
    license_path=args.license_path)
  File "demo/python../../binding/python\leopard.py", line 54, in __init__
    self.libc = CDLL(find_library('c'))
  File "C:\Users\parth\anaconda3\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
TypeError: LoadLibrary() argument 1 must be str, not None

Publish iOS React Native Demo for iPhone 14.

After running npm run ios-run en, the issue below occurs. On my updated operating system I don't have an iPhone 13. Please update this so that I can use the demo for the iOS app as well.

(screenshot)

libpv_leopard.so: wrong ELF class: ELFCLASS64 - Python

I've just reinstalled pvleopard for Python 3.9 on my Raspberry Pi 4. Upon attempting to create a leopard object, I receive an error. It worked fine prior to the reinstallation, but I'm now running v1.2. I tried downgrading to v1.1 and even v1.0, but I just get the same error.

import pvleopard

with open('access_key.txt', 'r') as file:
    key = file.read()

leopard = pvleopard.create(access_key=key)

Here's the error I receive.

Traceback (most recent call last):
  File "/home/pi/Desktop/Project/test.py", line 6, in <module>
    leopard = pvleopard.create(access_key=key)
  File "/home/pi/Desktop/Project/venv/lib/python3.9/site-packages/pvleopard/__init__.py", line 47, in create
    return Leopard(access_key=access_key, library_path=library_path, model_path=model_path)
  File "/home/pi/Desktop/Project/venv/lib/python3.9/site-packages/pvleopard/leopard.py", line 117, in __init__
    library = cdll.LoadLibrary(library_path)
  File "/usr/lib/python3.9/ctypes/__init__.py", line 452, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.9/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/pi/Desktop/Project/venv/lib/python3.9/site-packages/pvleopard/lib/raspberry-pi/cortex-a72-aarch64/libpv_leopard.so: wrong ELF class: ELFCLASS64

From the error message, I can tell the problem is with the library version, as my platform is 32-bit. Would you mind helping me figure out why pip is installing the wrong version and how I can specify the version that I want installed? Thanks very much.

help with using leopard with pyaudio

I want to use PyAudio to record from the mic, detect the sentence, and print the text. This happens in a loop, but I can't get it to work any way other than with the Leopard demo package. Please help me with this.

Can't install lib

I have some problems when installing the lib.

I used npm to install, and I received this error:

➜  TestPicoVoice npm install @picovoice/web-voice-processor @picovoice/cheetah-web
npm ERR! code 127
npm ERR! path /Directory/Workspace/TestPicoVoice/node_modules/@picovoice/cheetah-web
npm ERR! command failed
npm ERR! command sh -c yarn copywasm
npm ERR! sh: yarn: command not found

npm ERR! A complete log of this run can be found in:
npm ERR!    /Directory/.npm/_logs/2022-11-07T13_54_11_535Z-debug-0.log

How can I fix it? Thanks guys!

Defining new words

Can Leopard or Cheetah be enhanced with new words, e.g. by giving a mapping from phonemes to words?

Leopard Issue:

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

I've downloaded the default models from the folder https://github.com/Picovoice/leopard/tree/master/lib/common.
I put all of them into the config/language_models folder. Then I try to load them via the model path, but an error occurs:

pvleopard.create(access_key=Mykey,
model_path = 'config/language_models/leopard_params.pv')

Actual behaviour


LeopardInvalidArgumentError               Traceback (most recent call last)
/var/folders/n0/7rhb7mg55rdfrz13sxnn0dgc0000gn/T/ipykernel_14326/2304799399.py in <module>
----> 1 pvleopard.create(access_key=myKey,
      2                  model_path = 'config/language_models/leopard_params.pv')

~/opt/anaconda3/lib/python3.9/site-packages/pvleopard/_factory.py in create(access_key, model_path, library_path, enable_automatic_punctuation)
     38         library_path = default_library_path('')
     39
---> 40     return Leopard(
     41         access_key=access_key,
     42         model_path=model_path,

~/opt/anaconda3/lib/python3.9/site-packages/pvleopard/_leopard.py in __init__(self, access_key, model_path, library_path, enable_automatic_punctuation)
    156         status = init_func(access_key.encode(), model_path.encode(), enable_automatic_punctuation, byref(self._handle))
    157         if status is not self.PicovoiceStatuses.SUCCESS:
--> 158             raise self._PICOVOICE_STATUS_TO_EXCEPTION[status]()
    159
    160         self._delete_func = library.pv_leopard_delete

LeopardInvalidArgumentError:

Steps to reproduce the behaviour

(Include enough details so that the issue can be reproduced independently.)

Additional info

I've installed the package using pip; the current version is pvleopard==1.2.2.
Also, I tried with a model downloaded from the Picovoice Console, but the result is the same error.
The code is correct, because if I copy the default model bundled with the package into the same folder, it works.

iOS and android libraries

Hey guys, I have previously asked about a Flutter plug-in and I know it doesn't exist.

But what about the iOS and Android libraries? How can I access them?

Russian language support

Is your feature request related to a problem? Please describe.
Leopard doesn't support Russian language

Describe the solution you'd like
Leopard being able to recognize Russian

Additional context
I know additional language support is planned, but there is no clear way to be notified when a particular language becomes available. If that's OK, maybe this FR could be used for that: those interested could subscribe, and when support for the Russian language lands, this FR gets closed and all subscribers are automatically notified by GitHub : )

And thank you for your work! 🤘

requires SoundFile-0.10.*

Dear Picovoice,
please include in the documentation that the Python test requires installation of the SoundFile module:

python2 -m pip install SoundFile
python3 -m pip install SoundFile

Way to Track Progress of Processing?

Is there any way to track the progress of the process function of Leopard? A way to find the percentage complete would be amazing. Maybe with a callback to the Leopard object? I'm using Python.

Thanks very much.

Leopard Implementaiton Issue- React Native and Expo Go:

Have you checked the docs and existing issues?

  • I have read all of the relevant Picovoice Leopard docs
  • I have searched the existing issues for Leopard

SDK

React Native

Leopard package version

2.0.2

Framework version

React Native: 0.72.6 Expo:^49.0.13

Platform

Windows (x86_64)

OS/Browser version

Windows 11

Describe the bug

I am attempting to implement the Leopard STT API in my project, which is built using React Native and Expo Go. When calling const leopard = await Leopard.create(accessKey, modelPath);, I get an error stating [Error: unexpected code: undefined, message: Cannot read property 'create' of null], despite following all of the installation steps correctly, and I have verified that my Leopard import is functional (console.log(Leopard) prints [Function Leopard] prior to my create call). Are there compatibility issues with Leopard and Expo? I have also ensured that the accessKey and model path are being correctly passed.

Steps To Reproduce

  1. Import Leopard
  2. Use useState to set and use the Leopard instance
  3. Define an asynchronous function to get an instance of Leopard
  4. Use leopardInstance in stopRecording() function when recording is done and audio needs to be transcribed
    The attached txt file is a simple App.js file for a basic React Native & Expo setup that throws this error:
    leopard_error.txt

Expected Behavior

I would expect the audio file being passed (which in my case is just a test .wav file) to be transcribed, and I would expect the leopardInstance to be set correctly, as everything is being done as per the docs.

Leopard Issue: OSError: exception: stack overflow

Expected behaviour

Leopard outputs transcribed text from audio.

Actual behaviour

Leopard throws an OSError.

Steps to reproduce the behaviour

  1. pip3 install pvleoparddemo
  2. leopard_demo_file --access_key ${ACCESS_KEY} --audio_paths ${AUDIO_PATH}
  3. Wait

Other Information

I cannot provide the audio file I am using, sorry.
I have tried a different audio file and Leopard does work correctly for it.
The audio file I am using is not broken, and behaves normally in any other circumstance I've observed.

Full traceback:
Traceback (most recent call last):
  File "C:\Users\jgogo\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\jgogo\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\jgogo\AppData\Local\Programs\Python\Python310\Scripts\leopard_demo_file.exe\__main__.py", line 7, in <module>
  File "C:\Users\jgogo\AppData\Local\Programs\Python\Python310\lib\site-packages\pvleoparddemo\leopard_demo_file.py", line 36, in main
    transcript, words = o.process_file(audio_path)
  File "C:\Users\jgogo\AppData\Local\Programs\Python\Python310\lib\site-packages\pvleopard\leopard.py", line 253, in process_file
    status = self._process_file_func(
OSError: exception: stack overflow

Error in C# binding - Attempted to Read or Write Protected Memory

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

Processing an audio file should produce a transcript on each and every call.

var transcript = leopardTranscriber.ProcessFile(audioFilePath);

Actual behaviour

The transcription requests are being handled by a server and work very well when the requests come from a single client. However, when I send requests simultaneously from two clients:

  1. The requests get serviced properly for a while
  2. Then after some unpredictable time the server crashes with the error:
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
   at Pv.Leopard.pv_leopard_process_file(IntPtr, IntPtr, IntPtr ByRef, Int32 ByRef, IntPtr ByRef)
--------------------------------
   at Pv.Leopard.ProcessFile(System.String)
   at AudioTranscriber.Services.TranscriptionService.GetTranscript(AudioTranscriber.AudioRequest, Grpc.Core.ServerCallContext)

After some research, I came across this plausible explanation for the behavior: https://stackoverflow.com/a/42382470/212076

  • The problem may be due to mixed build platforms DLLs in the project. i.e You build your project to Any CPU but have some DLLs in the project already built for x86 platform. These will cause random crashes because of different memory mapping of 32bit and 64bit architecture. If all the DLLs are built for one platform the problem can be solved.

I examined the DLLs that ship with the Leopard Nuget package:

  • Leopard.dll: 32-bit
  • libpv_leopard.dll: 64-bit

Since my project targets a 64-bit architecture, the issue must be triggered by the involvement of the 32-bit Leopard.dll. Under normal circumstances, it plays nicely with its 64-bit counterparts. However, under conditions of increased load from multiple simultaneous requests, it causes memory access issues arising from the different memory mapping.

Is it possible to have a 64-bit version of Leopard.dll available for testing to verify this assertion?

Steps to reproduce the behaviour

  1. Create a C# gRPC Server that

    • Initializes the Leopard library
      • var leopardTranscriber = Leopard.Create(accessKey);
    • Waits for requests from clients
    • Handles each client request and returns the transcript
      • var transcript = leopardTranscriber.ProcessFile(audioFilePath);
      • return Task.FromResult(transcript);
  2. Create a C# gRPC Client that

    • Initializes the gRPC connection

      • var port = 3050;
      • var serverUrl = $"http://localhost:{port}";
      • var channel = GrpcChannel.ForAddress(serverUrl);
      • var client = new AudioTranscriber.AudioTranscriberClient(channel);
      • var audioRequest = new AudioRequest { AudioFilePath = audioFilePath };
    • Sends a request to the server for a transcript

      • var transcript = await client.GetTranscript(audioRequest);
  3. Start two instances of the Client and have them repeatedly send requests to the Server. Exactly as you would to stress test the Server.

After a while of getting proper transcripts you get the AccessViolationException and the server crashes.

Ability to control length of subtitles with Leopard

I'm using Leopard to try to create subtitles for a project. The problem is that each subtitle has way too many words and is way too long. I don't see any way to control this, and I'm getting sections of subtitles that are too wordy.

Leopard crashes when hearing the word "misses"

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

Leopard can understand the word "misses"

Actual behaviour

Crashes on the word "misses"

Steps to reproduce the behaviour

This is super weird. I don't know if it's specific to my voice or what, but it's crashed about 5 times whenever I say "misses." It'll be happily running along, and then I'll say "misses", which often picks the wrong word (usually "Mrs.", which would make sense), but if I repeat it, then it often appears to fault all the way out of the program.

I'm running a lightly modified version of the Leopard Mic demo for dotnet. I have a little bit of silence detection and then a little bit of logic to detect the end of a sentence (probably not the best thing to do here, but I was experimenting):

                Task recordingTask = Task.Run(() =>
                {
                    audioFrame.Clear();
                    recorder.Start();
                    var keepListening = 0;
                    while (!token.IsCancellationRequested)
                    {
                        short[] pcm = recorder.Read();
                        var loudest = pcm
                            .Max(p => Math.Abs(p));

                        if (loudest < 1000 || keepListening > 0)
                        {
                            if (keepListening > 0)
                            {
                                keepListening--;
                            }
                            else if (audioFrame.Count > 0)
                            {
                                var result = leopard.Process(audioFrame.ToArray());
                                audioFrame.Clear();
                                Console.WriteLine(result.TranscriptString);
                                continue;
                            }
                            else
                            {
                                continue;
                            }
                        }
                        else
                        {
                            keepListening = 15;
                        }
                        audioFrame.AddRange(pcm);
                    }
                    recorder.Stop();
                });

(screenshot)

I'm not particularly worried about it, but thought someone might want to have a second look at it.

(Include enough details so that the issue can be reproduced independently.)

Questions about custom vocabulary and keywords boosting features

Hello, I have questions about the custom vocabulary and keyword boosting features.
Are there limitations?

I have 2 use cases, one with 1K custom words and the other with 10K. Is this still a viable approach accuracy/performance-wise, or should I consider an alternative / train my own model?
The same question for Cheetah as well!

Thanks

Getting Timestamps in Output

Hello Picovoice team!

I was wondering if your tool provides a way to extract word-level timestamps in the transcript, or a way to output the occurrences of words in audio files (maybe returned as JSON)?

Thanks!

picovoice.h uses visibility attribute not available in MSVC

currently picovoice.h has

#define PV_API __attribute__((visibility("default")))

This does not work with MSVC. There should be an additional switch for MSVC to use either __declspec(dllimport) or __declspec(dllexport), depending on whether it is your internal build of the libraries or the consumable client build.

Leopard Issue: Demo doesn't run

Have you checked the docs and existing issues?

  • I have read all of the relevant Picovoice Leopard docs
  • I have searched the existing issues for Leopard

SDK

.NET

Leopard package version

2.0.1

Framework version

.NET 8.0

Platform

Windows (x86_64)

OS/Browser version

N/A

Describe the bug

Pv.LeopardActivationLimitException: Leopard init failed:
  [0] Picovoice Error (code `00000136`)
  [1] Picovoice Error (code `00000136`)
  [2] Picovoice Error (code `0000012C`)
   at Pv.Leopard..ctor(String accessKey, String modelPath, Boolean enableAutomaticPunctuation, Boolean enableDiarization)
   at Pv.Leopard.Create(String accessKey, String modelPath, Boolean enableAutomaticPunctuation, Boolean enableDiarization)
   at LeopardDemo.MicDemo.RunDemo(String accessKey, String modelPath, Boolean enableAutomaticPunctuation, Boolean enableDiarization, Boolean verbose, Int32 audioDeviceIndex) in C:\git\picovoice\leopard\demo\dotnet\LeopardDemo\MicDemo.cs:line 59
   at LeopardDemo.MicDemo.Main(String[] args) in C:\git\picovoice\leopard\demo\dotnet\LeopardDemo\MicDemo.cs:line 249

Steps To Reproduce

  1. Open project
  2. Switch "StartupObject" to "LeopardDemo.MicDemo"
  3. Switch Framework version to net8.0 (net6.0 was LTS, but the support has lapsed)
  4. Run the project

Expected Behavior

Demo runs. This previously worked before the 2.* upgrade.

Leopard Issue: Golang - Panic when no text was transcribed

Go 1.18.2, v1.1.1 leopard bindings

Expected behaviour

When the Process function gets called and it is given audio data with no transcribe-able voice data, it should return a blank string or something to indicate there was no text transcribed

Actual behaviour

A panic happens and the program crashes, here is the trace https://pastebin.com/raw/PKzqVmnj

A defer-recover function could be used to prevent a crash, but that is not ideal.

Steps to reproduce the behaviour

Call the Process function with audio data that does not have any transcribe-able voice in it.

Leopard Documentation Issue

Hi,

I have a question regarding the AccessKey. As I understand it from the docs, the key needs to be present in the client to use the API. But that makes it easily extractable, especially in web usage. So how can it be kept secret if it needs to be sent to every user in a production environment?

Thanks

Leopard Issue: Error trying to run demo on windows 10

windows 10
node v8.11.3
npm 5.6.0

0 info it worked if it ends with ok
1 verbose cli [ 'C:\Program Files\nodejs\node.exe',
1 verbose cli 'C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js',
1 verbose cli 'run',
1 verbose cli 'start' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prestart', 'start', 'poststart' ]
5 info lifecycle [email protected]prestart: [email protected]
6 info lifecycle [email protected]
start: [email protected]
7 verbose lifecycle [email protected]start: unsafe-perm in lifecycle true
8 verbose lifecycle [email protected]
start: PATH: C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\node-gyp-bin;C:\Ricardo\git\leopard\demo\web\node_modules.bin;C:\Program Files\Microsoft MPI\Bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\Program Files\Microsoft SQL Server\130\Tools\Binn;C:\Program Files\dotnet;C:\Program Files (x86)\GtkSharp\2.12\bin;C:\Tools\ffmpeg\bin;C:\Program Files (x86)\Calibre2;C:\WINDOWS\System32\OpenSSH;C:\Program Files\nodejs;C:\ProgramData\chocolatey\bin;C:\Program Files\Git\cmd;C:\Program Files (x86)\Microsoft VS Code\bin;C:\HaxeToolkit\haxe;C:\HaxeToolkit\neko;C:\Program Files (x86)\dotnet;C:\Program Files\PuTTY;C:\ProgramData\UNIVALI\Portugol Studio;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn;C:\Users\Ricardo\AppData\Local\Programs\Python\Python311\Scripts;C:\Users\Ricardo\AppData\Local\Programs\Python\Python311;C:\Users\Ricardo.cargo\bin;C:\Users\Ricardo\AppData\Local\Programs\Python\Python36-32\Scripts;C:\Users\Ricardo\AppData\Local\Programs\Python\Python36-32;C:\Users\Ricardo\AppData\Local\Microsoft\WindowsApps;C:\Program Files (x86)\Microsoft VS Code\bin;C:\Program Files\MongoDB\Server\3.4\bin;C:\Program Files\Heroku\bin;C:\Users\Ricardo\AppData\Roaming\npm;C:\Tools\C3PO;;C:\Users\Ricardo\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\Ricardo.dotnet\tools
9 verbose lifecycle [email protected]start: CWD: C:\Ricardo\git\leopard\demo\web
10 silly lifecycle [email protected]
start: Args: [ '/d /s /c', 'yarn run http-server -a localhost -p 5000' ]
11 silly lifecycle [email protected]start: Returned: code: 1 signal: null
12 info lifecycle [email protected]
start: Failed to exec start script
13 verbose stack Error: [email protected] start: yarn run http-server -a localhost -p 5000
13 verbose stack Exit status 1
13 verbose stack at EventEmitter. (C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\index.js:285:16)
13 verbose stack at emitTwo (events.js:126:13)
13 verbose stack at EventEmitter.emit (events.js:214:7)
13 verbose stack at ChildProcess. (C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\lib\spawn.js:55:14)
13 verbose stack at emitTwo (events.js:126:13)
13 verbose stack at ChildProcess.emit (events.js:214:7)
13 verbose stack at maybeClose (internal/child_process.js:925:16)
13 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
14 verbose pkgid [email protected]
15 verbose cwd C:\Ricardo\git\leopard\demo\web
16 verbose Windows_NT 10.0.19044
17 verbose argv "C:\Program Files\nodejs\node.exe" "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" "run" "start"
18 verbose node v8.11.3
19 verbose npm v5.6.0
20 error code ELIFECYCLE
21 error errno 1
22 error [email protected] start: yarn run http-server -a localhost -p 5000
22 error Exit status 1
23 error Failed at the [email protected] start script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]

Request: Cortex-A7 Linux lib

I would like to run a golang application on a Qualcomm APQ8009 armv7 CPU which utilizes Leopard. I tried modifying the code a bit to use the android/armeabi-v7a lib but the golang wrappers don't seem to be compatible with it. By default, on an APQ8009 running embedded Linux, I get an error saying "Unsupported CPU: 0xc07". I also have a Banana Pi with a Cortex-A7 processor which I would like to try running this on. Judging by how well Leopard works on old Core 2 Duo processors it seems like it would run well on this too.

Add flutter macos plugin

Hello,

I can see there is a Flutter Android/iOS compatible plugin. We are building a macOS commercial product in Flutter. Would it be feasible for you to add macOS support to your Flutter library, please?

Thank you,
Jakub

Turning words to numbers and avoid saving a file

For a project, I tried using pvleopard but had to ultimately decide against it. This was because of two issues:

  1. It would take the user saying 'forty four' and return the words 'forty four' instead of the number.
  2. I could only use it for a limited amount of time or I would have to use speech recognition to save the file and then process the wav file.

I was wondering if there were any ways around these issues. For the second one, I would like to avoid using speech recognition but would need the program to stop listening when the user has stopped talking, not after a set amount of time. Sorry if this isn't the right place for this.

Transcripts with Word Alignments

Hello,

Is it possible to output the word alignments, similar to Mozilla's DeepSpeech, when using Leopard?

I tried looking around the docs and can't seem to find it. If it is not possible, any suggestions on how I can achieve it using Picovoice.

Thank you!

Leopard Issue: Unable to use Android demo because of "initalization failed" error message

Make sure you have read the documentation, and have put forth a reasonable effort to find an existing answer.

Expected behaviour

To be able to use the Android demo with my AccessKey from the console.

Actual behaviour

I have used the same AccessKey in the python demo and it works, but when I use that AccessKey in the Android Demo I get:
Initialisation Failed.
Ensure your AccessKey 'removed for security reasons' is valid.

I have checked that the app has access to the internet, and it says it's connected, by adding the following code to your demo:
ConnectivityManager cm = (ConnectivityManager)getApplicationContext().getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo nInfo = cm.getActiveNetworkInfo();
boolean connected = nInfo != null && nInfo.isAvailable() && nInfo.isConnected();
if (connected) Log.d("ben", "connected = yes");
else Log.d("ben", "connected = no");

Steps to reproduce the behaviour

Android 11 and Android 13 phones
Use my access key with the Android demo app. I have submitted this error under a personal account, but if you search my name in your console you'll see I have an account with my work email that I didn't want to share online.

(Include enough details so that the issue can be reproduced independently.)
