whisper-auto-transcribe's Issues

Different results CUDA - CPU

Results are very different depending on whether I use CUDA or the CPU...

This is just an example:
tyod-216-2 Subtitles - CPU & CUDA.zip

Of course CUDA is 10x faster or more, but it sometimes just misses a lot: there are parts of the video where people are clearly talking and yet there isn't a single subtitle line for a long stretch... is this due to the duration of the files?
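One plausible (unconfirmed for this tool) source of the divergence: openai-whisper's transcribe() runs in fp16 on CUDA by default but falls back to fp32 on CPU, so the two devices never decode at the same precision. A minimal sketch that pins the GPU run to fp32 for a fair comparison (model size and file name are placeholders):

import whisper

# transcribe() defaults to fp16=True on CUDA; on CPU whisper silently
# falls back to fp32, so forcing fp16=False removes one CUDA/CPU difference.
model = whisper.load_model("medium", device="cuda")
result = model.transcribe("video.mp4", fp16=False)
print(result["text"])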

0.3.2b2 - Many GB temp files left undeleted

Still not cleaning up temp files after processing is done.

For a 6.8GB MKV I still get:

  • 1 remaining 6GB MKV file in Temp\tempfreesubtitle\
  • 2 remaining 2.25GB+2.25GB WAV files in Temp\htdemucs\

The program should do the clean-up after the task is completed. The 'Temp' folder is NOT a folder that Windows will delete or clean on its own.
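Until the tool cleans up after itself, here is a minimal workaround sketch, assuming the two leftover locations named above live under the user's temp directory:

import shutil
import tempfile
from pathlib import Path

# Delete the leftover folders reported above from %TEMP%.
temp_root = Path(tempfile.gettempdir())
for leftover in ("tempfreesubtitle", "htdemucs"):
    shutil.rmtree(temp_root / leftover, ignore_errors=True)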

What is it?

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 1303, in process_api
result = await self.call_function(
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 1026, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\src\transcribe_gui.py", line 327, in handle_form_submit
subtitle_file_path = task.transcribe(
File "D:\whisper-auto-transcribe-main\new\whisper-auto-transcribe\src\utils\task.py", line 110, in transcribe
raise Exception(
Exception: Error. Vocal extracter unavailable. Received:
C:\Users\73B51\AppData\Local\Temp\6a2c5585a48a66a68c6f1a9593cb8d763b9e0aad\Не хочу-0-100.mp3, C:\Users\73B51\AppData\Local\Temp\tempfreesubtitle\main.mp3, tmp/2023-05-19 23-39-52
demucs --two-stems=vocals "C:\Users\73B5~1\AppData\Local\Temp\tempfreesubtitle\main.mp3" -o "tmp/2023-05-19 23-39-52" --filename "{stem}.{ext}"

(the app crashes with this error immediately when you go to the result page)

webui doesn't work

When using transcribe or translate, the browser reports an error, and this is the output from the console:

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\src\utils\task.py", line 108, in transcribe
subprocess.run(cmd, check=True)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 951, in init
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 1420, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 1303, in process_api
result = await self.call_function(
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 1026, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\src\transcribe_gui.py", line 327, in handle_form_submit
subtitle_file_path = task.transcribe(
File "C:\Users\user\Downloads\autotranscribe\whisper-auto-transcribe\src\utils\task.py", line 110, in transcribe
raise Exception(
Exception: Error. Vocal extracter unavailable. Received:
C:\Users\user\AppData\Local\Temp\bebf2e05e25135efa50c8a746b06c2875e007655\IMG_5503.MP4, C:\Users\user\AppData\Local\Temp\tempfreesubtitle\main.MP4, C:\Users\user\AppData\Local\Temp
demucs --two-stems=vocals "C:\Users\user\AppData\Local\Temp\tempfreesubtitle\main.MP4" -o "C:\Users\user\AppData\Local\Temp" --filename "{stem}.{ext}"
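(For context, not a confirmed diagnosis: [WinError 2] from _winapi.CreateProcess means Windows could not find the executable itself, so the demucs command-line tool is likely not installed, or not on PATH, inside the venv the web UI runs from. Checking that demucs --help works in that venv would confirm it.)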

unexpected keyword argument 'caption'

Getting error:
cuda:0
Detected language: English
100%|███████████████████████████████████████████████████████████████████| 69968/69968 [02:45<00:00, 422.93frames/s]
tmp/Updated | Near-Automated Voice Cloning | Whisper STT + Coqui TTS | Fine Tune a VITS Model on Colab.srt tmp/Updated | Near-Automated Voice Cloning | Whisper STT + Coqui TTS | Fine Tune a VITS Model on Colab.vtt tmp/Updated | Near-Automated Voice Cloning | Whisper STT + Coqui TTS | Fine Tune a VITS Model on Colab.ass
Traceback (most recent call last):
File "C:\Users\CHP_7575\Documents\whisper-auto-transcribe\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\CHP_7575\Documents\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 1018, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "C:\Users\CHP_7575\Documents\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 947, in postprocess_data
prediction_value = postprocess_update_dict(
File "C:\Users\CHP_7575\Documents\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 371, in postprocess_update_dict
update_dict = block.get_specific_update(update_dict)
File "C:\Users\CHP_7575\Documents\whisper-auto-transcribe\venv\lib\site-packages\gradio\blocks.py", line 257, in get_specific_update
specific_update = cls.update(**generic_update)
TypeError: Video.update() got an unexpected keyword argument 'caption'

I tried running inside the venv and without it, and tried different gradio versions as well. It does not work for me on .mp3 files or when using a YouTube video.
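(Video.update() rejecting a caption keyword usually points to a gradio version mismatch: the code passes an argument the installed gradio release doesn't accept. Since webui.bat pins a custom Gradio commit, as seen in the "Can't get webui.bat running successfully" issue below, a stock gradio in the venv is a plausible, though unconfirmed, cause.)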

Error during processing (ValueError: Expected parameter logits)

Hi, I was running a translation and it crashed partway through.
The command I ran was
python C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\cli.py "D:\j\X.mp4" --output "D:\j\X.srt" -lang ja --task translate --model-size large --device cuda
The only change I've made to the repo is setting vocal_extracter=False in task.py, because it didn't start otherwise.
Stacktrace:
43%|██████████████████████████████▋ | 2698.92/6231.83 [04:35<06:00, 9.79sec/s]
Traceback (most recent call last):
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\cli.py", line 139, in <module>
cli()
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\cli.py", line 121, in cli
subtitle_path = transcribe(
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\src\utils\task.py", line 156, in transcribe
result = used_model.transcribe(
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\stable_whisper\whisper_word_level.py", line 453, in transcribe_stable
result: DecodingResult = decode_with_fallback(mel_segment, ts_token_mask=ts_token_mask)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\stable_whisper\whisper_word_level.py", line 337, in decode_with_fallback
decode_result, audio_features = model.decode(seg,
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\stable_whisper\decode.py", line 112, in decode_stable
result = task.run(mel)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\whisper\decoding.py", line 729, in run
tokens, sum_logprobs, no_speech_probs = self._main_loop(audio_features, tokens)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\stable_whisper\decode.py", line 61, in _main_loop
tokens, completed = self.decoder.update(tokens, logits, sum_logprobs)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\whisper\decoding.py", line 276, in update
next_tokens = Categorical(logits=logits / self.temperature).sample()
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\torch\distributions\categorical.py", line 64, in __init__
super(Categorical, self).__init__(batch_shape, validate_args=validate_args)
File "C:\Users\san-a\Downloads\tools\whisper-auto-transcribe-0.3.2b2\venv\lib\site-packages\torch\distributions\distribution.py", line 55, in __init__
raise ValueError(
ValueError: Expected parameter logits (Tensor of shape (1, 51865)) of distribution Categorical(logits: torch.Size([1, 51865])) to satisfy the constraint IndependentConstraint(Real(), 1), but found invalid values: tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')

GPU not available

I have an RTX 3070 and followed all the installation steps for GPU support, but the GUI still doesn't allow me to select "GPU" as the Device:

[screenshot: Device selector without a GPU option]
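Not a fix, but a quick check of whether the venv's torch build can see the GPU at all; run it with the project's venv Python (a CPU-only torch wheel is the usual culprit):

import torch

print(torch.__version__)           # a CUDA build carries a +cuXXX suffix
print(torch.cuda.is_available())   # False means the GUI cannot offer "GPU"
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))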

What are the differences between the transcribe models?

In cli.py, three model types are listed: whisper, whisper_timestamped, and stable_whisper. Could you elaborate on the differences between them? @tomchang25

Thanks for making this tool available. I used it to transcribe my entire audiobook library and it worked great!

0.3.1 changing " " spaces to "-" dashes in names

Why does version 0.3.1 now change " " spaces in names to "-" dashes?
That didn't happen before... it worked perfectly well with "C:\My Movies\Whatever with spaces\A Title With Many Spaces In The Name.mp4" without automatically converting it to "C:\My-Movies\Whatever-with-spaces\A-Title-With-Many-Spaces-In-The-Name.mp4"...

webui.bat/install fails with Python 3.11

Problem:

There are no .whl packages of torch 1.12.1+cu113 available for Python 3.11; the latest ones target Python 3.10.
This causes errors during install if Python 3.11 is the default interpreter when webui.bat creates the venv for this project.

Fix:

Make sure your Python version is (as of writing) 3.7, 3.8, 3.9, or 3.10, and that this is the interpreter the venv refers to.

How I fixed my install:

In the whisper-auto-transcribe folder:
I deleted the venv created by webui.bat to remove the reference to the Python 3.11 install.
(At this point you would find and install Python 3.10 and use its install location; I already had my old install in Program Files.)
Recreate the venv:
PS F:\whisper-auto-transcribe> & "C:\Program Files\Python310\python.exe" -m venv venv
Now the venv uses Python 3.10, and webui.bat runs as intended.

This is especially tricky for new users, as no official installers/binaries exist for older Python versions (including 3.10), which is where the current README points users.
There are some good samaritans building them (Google can help you find them), but I'm not sure it would be good to just link to one here, so no PR from me.
Then there is the build-your-own-Python way, but if you can manage that, the missing .whls for torch wouldn't really be a problem for you.

Can't get webui.bat running successfully

venv "D:\whisper-auto-transcribe\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 06bd1ea
Check torch and torchvision
Fetching updates for Custom Gradio...
Checking out commit for Custom Gradio with hash: 8ac54ca7902a04ede2cb93a11ff42c6cb011b296...
Traceback (most recent call last):
File "D:\whisper-auto-transcribe\launch.py", line 198, in
git_clone(
File "D:\whisper-auto-transcribe\launch.py", line 169, in git_clone
run(
File "D:\whisper-auto-transcribe\launch.py", line 63, in run
raise RuntimeError(message)
RuntimeError: Couldn't checkout commit 8ac54ca7902a04ede2cb93a11ff42c6cb011b296 for Custom Gradio.
Command: git -C repositories\gradio checkout 8ac54ca7902a04ede2cb93a11ff42c6cb011b296
Error code: 128
stdout:
stderr: fatal: reference is not a tree: 8ac54ca7902a04ede2cb93a11ff42c6cb011b296
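("fatal: reference is not a tree" means the clone under repositories\gradio does not contain the pinned commit, typically because the fetched history has changed. Deleting the repositories\gradio folder and re-running webui.bat so launch.py re-clones it is a common remedy, though unverified for this repo.)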

Subtitle Naming Scheme

Would it be possible to have an option to produce a subtitle file with the same name as the input file (in my case, a video)? A minimal sketch of the requested behaviour follows below.
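(Hypothetical helper, not the tool's current code:)

from pathlib import Path

def subtitle_path_for(video: Path) -> Path:
    # Same folder and stem as the input video, with an .srt extension:
    # C:\Movies\Episode 01.mkv -> C:\Movies\Episode 01.srt
    return video.with_suffix(".srt")

print(subtitle_path_for(Path(r"C:\Movies\Episode 01.mkv")))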

Thank you for the guide.

Error when running

Hello. Getting this error after installing.

Log:

venv "F:\whisper-auto-transcribe\venv\Scripts\Python.exe"
Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
Commit hash: 0f023b4
Check torch and torchvision
Check gradio
Installing requirements for Web UI
Launching Web UI with arguments:
Traceback (most recent call last):
File "F:\whisper-auto-transcribe\launch.py", line 244, in <module>
start_webui()
File "F:\whisper-auto-transcribe\launch.py", line 238, in start_webui
from gui import gui
File "F:\whisper-auto-transcribe\gui.py", line 1, in <module>
from src import transcribe_gui
File "F:\whisper-auto-transcribe\src\transcribe_gui.py", line 7, in <module>
from src.utils import task
File "F:\whisper-auto-transcribe\src\utils\task.py", line 8, in <module>
import stable_whisper
File "F:\whisper-auto-transcribe\venv\lib\site-packages\stable_whisper\__init__.py", line 1, in <module>
from .whisper_word_level import *
File "F:\whisper-auto-transcribe\venv\lib\site-packages\stable_whisper\whisper_word_level.py", line 8, in <module>
import whisper
File "F:\whisper-auto-transcribe\venv\lib\site-packages\whisper\__init__.py", line 13, in <module>
from .model import ModelDimensions, Whisper
File "F:\whisper-auto-transcribe\venv\lib\site-packages\whisper\model.py", line 13, in <module>
from .transcribe import transcribe as transcribe_function
File "F:\whisper-auto-transcribe\venv\lib\site-packages\whisper\transcribe.py", line 20, in <module>
from .timing import add_word_timestamps
File "F:\whisper-auto-transcribe\venv\lib\site-packages\whisper\timing.py", line 7, in <module>
import numba
File "F:\whisper-auto-transcribe\venv\lib\site-packages\numba\__init__.py", line 42, in <module>
from numba.np.ufunc import (vectorize, guvectorize, threading_layer,
File "F:\whisper-auto-transcribe\venv\lib\site-packages\numba\np\ufunc\__init__.py", line 3, in <module>
from numba.np.ufunc.decorators import Vectorize, GUVectorize, vectorize, guvectorize
File "F:\whisper-auto-transcribe\venv\lib\site-packages\numba\np\ufunc\decorators.py", line 3, in <module>
from numba.np.ufunc import _internal
SystemError: initialization of _internal failed without raising an exception
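(This SystemError from numba.np.ufunc._internal is the classic symptom of a NumPy release newer than the installed numba supports. Pinning NumPy below numba's supported bound, e.g. pip install "numpy<1.24", or upgrading numba, is the commonly reported fix; not verified against this repo's requirements.)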

Stable whisper throws an error on extremely long audio

Tracking this issue now, but I don't think it will be resolved in the near future.
As an alternative solution, you can either change the model or slice your file.

It appears that one hour is a dividing line.

Edit:
I have found where the problem is.
Setting the language to 'auto' may cause the model to misidentify the language, resulting in errors.
In my case, two audio files in different languages were both incorrectly identified as Welsh:

English -> Welsh
Japanese -> Welsh

Setting the language explicitly should solve this problem.
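A minimal sketch of that fix with the openai-whisper API (the tool's own CLI exposes the same thing through its -lang option, as seen in the commands elsewhere on this page):

import whisper

model = whisper.load_model("large")
# Passing the language explicitly skips auto-detection, which the note
# above reports misidentifying both English and Japanese audio as Welsh.
result = model.transcribe("audio.mp4", language="ja", task="transcribe")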

NaN errors on a lot of Japanese videos

Hi, first of all, nice project! I can transcribe quite a few videos, but more often than not they fail with the following exception (I'm using the CLI, so the input is consistent). I couldn't figure out why, though. Any suggestions?

An error occurred during transcription: Expected parameter logits (Tensor of shape (1, 51865)) of distribution Categorical(logits: torch.Size([1, 51865])) to satisfy the constraint IndependentConstraint(Real(), 1), but found invalid values:
tensor([[nan, nan, nan, ..., nan, nan, nan]], device='cuda:0')
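(This is the same ValueError as in the "Error during processing" issue above: NaNs reach the sampling step at logits / self.temperature during a temperature-fallback retry. A commonly reported, though here unverified, mitigation for NaN logits on CUDA is decoding in fp32, i.e. fp16=False as sketched under the CUDA/CPU issue at the top of this page, or slicing the file, as suggested in the long-audio issue.)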

[FEATURE] Support CLI mode

Hi,

Thank you for this nice project; I was thinking of doing the same. My use case is rather different, though: I have many video files that lack subtitles, so to speed things up I thought I'd automate generating the subs with this tool. However, it seems to be GUI-focused. If possible, could you add a CLI mode, so the tool can be containerized and run in the cloud if needed?

For example

$ ./python wat.py --input file.mkv --lang jpn --audio-track a:1 --output sub.srt

It would be extremely helpful to have a CLI mode.

Thank you.

EDIT: I managed to create something that works, but keep in mind I am no Python dev at all.

#!/usr/bin/env python3
import argparse
import datetime

import torch
import whisper

parser = argparse.ArgumentParser(description='Whisper Auto Transcribe')

parser.add_argument('input', metavar='input', type=str,
                    help='Input video file')

parser.add_argument('--output', metavar='output', type=str,
                    help='SRT file output.',
                    required=True)

parser.add_argument('--language', metavar='language', type=str,
                    help='Input language code [ISO 639-1]. Default [auto].',
                    required=False, default='auto')

parser.add_argument('--task', metavar='task', type=str,
                    help='Task mode [translate, transcribe]. Default [translate].',
                    required=False, default='translate')

parser.add_argument('--device', metavar='device', type=str,
                    help='Use device. [cpu, cuda] Default [cpu].',
                    required=False, default='cpu')

parser.add_argument('--model', metavar='model', type=str,
                    help='Use model. [tiny, base, small, medium, large] Default [base].',
                    required=False, default='base')


def srt_timestamp(seconds):
    # SRT wants H:MM:SS,mmm. Truncate to whole seconds for the clock part and
    # format the remainder as zero-padded milliseconds, so e.g. 2.7 becomes
    # 0:00:02,700 rather than the mismatched 0:00:03,7.
    whole = int(seconds)
    millis = int((seconds - whole) * 1000)
    return f"{datetime.timedelta(seconds=whole)},{millis:03d}"


def transcribe_start(model_type, file_path, language, output_path,
                     task="transcribe", device=None):
    model = whisper.load_model(model_type, device=device)

    print("Task: {task}\nModel: {model_type}\nInput: {file_path}\n"
          "Device: {device}\nLanguage: {language}\nOutput: {output_path}"
          .format(model_type=model_type, file_path=file_path,
                  output_path=output_path, language=language,
                  task=task, device=model.device))

    if language == 'auto':
        language = None  # whisper auto-detects when language is None

    result = model.transcribe(
        file_path, language=language, task=task, verbose=False)

    with open(output_path, "w", encoding="UTF-8") as f:
        for seg in result["segments"]:
            start = srt_timestamp(seg["start"])
            end = srt_timestamp(seg["end"])
            f.write(f'{seg["id"]}\n{start} --> {end}\n{seg["text"]}\n\n')

    # Drop the model and release the cached GPU memory.
    del model
    torch.cuda.empty_cache()

    return output_path, result


if __name__ == "__main__":
    args = parser.parse_args()
    res = transcribe_start(model_type=args.model, file_path=args.input,
                           language=args.language, task=args.task,
                           output_path=args.output, device=args.device)
    print("{task} file is found at [{file}].\n".format(
        file=res[0], task=args.task))

Error when running on Google Colab

When running on Google Colab (T4 GPU) through the command-line interface, I get an error related to the vocal extracter. Specifically:

Running Command:
!python /content/whisper-auto-transcribe/cli.py '/content/Files/Youtube.mp4' --output '/content/tmp/Youtube.srt' -lang ja --model large

Error Message:

FileNotFoundError: [Errno 2] No such file or directory: 'demucs --two-stems=vocals "/tmp/tempfreesubtitle/main.mp4" -o "/tmp" --filename "{stem}.{ext}"'

Exception: Error. Vocal extracter unavailable. Received: 
/content/Files/Youtube.mp4, /tmp/tempfreesubtitle/main.mp4, /tmp
demucs --two-stems=vocals "/tmp/tempfreesubtitle/main.mp4" -o "/tmp" --filename "{stem}.{ext}"

Output:

Traceback (most recent call last):
  File "/content/whisper-auto-transcribe/src/utils/task.py", line 108, in transcribe
    subprocess.run(cmd, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 503, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.10/subprocess.py", line 1863, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'demucs --two-stems=vocals "/tmp/tempfreesubtitle/main.mp4" -o "/tmp" --filename "{stem}.{ext}"'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/content/whisper-auto-transcribe/cli.py", line 139, in <module>
    cli()
  File "/content/whisper-auto-transcribe/cli.py", line 121, in cli
    subtitle_path = transcribe(
  File "/content/whisper-auto-transcribe/src/utils/task.py", line 110, in transcribe
    raise Exception(
Exception: Error. Vocal extracter unavailable. Received: 
/content/Files/Youtube.mp4, /tmp/tempfreesubtitle/main.mp4, /tmp
demucs --two-stems=vocals "/tmp/tempfreesubtitle/main.mp4" -o "/tmp" --filename "{stem}.{ext}"

Solution that I have tried:
I set vocal_extracter to False on line 27 in src/utils/task.py, and it then works properly with no issues. Source: (#47 (comment)).
But I would like a better solution that runs without disabling the vocal extracter.
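The FileNotFoundError quotes the entire command line as the missing file, which is exactly what happens on Linux when subprocess.run() receives a single string without shell=True: the whole string is treated as one executable name. A hedged sketch of a fix, assuming task.py builds cmd as a string:

import shlex
import subprocess

cmd = ('demucs --two-stems=vocals "/tmp/tempfreesubtitle/main.mp4" '
       '-o "/tmp" --filename "{stem}.{ext}"')

# Split the command into argv tokens instead of passing one string;
# this works on POSIX and Windows alike and avoids shell=True.
subprocess.run(shlex.split(cmd), check=True)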

Subtitles repeating

A lot of the SRT files I created from Japanese have many repeating lines, with both transcribe and translate.

4
0:02:10.580 --> 0:02:13.580
and very easy

5
0:02:13.580 --> 0:02:16.580
and very easy

6
0:02:16.580 --> 0:02:19.580
and very easy

7
0:02:19.580 --> 0:02:22.580
and very easy

8
0:02:22.580 --> 0:02:25.580
and very easy

9
0:02:25.580 --> 0:02:28.580
and very easy
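A hedged sketch of a commonly reported mitigation (openai-whisper API; not confirmed as the fix for this tool): disable conditioning on previous text, which is the usual cause of whisper looping on one line.

import whisper

model = whisper.load_model("medium")
# With condition_on_previous_text=False the decoder no longer feeds its
# earlier output back in as context, which often stops repeated segments.
result = model.transcribe(
    "input.mp4",
    language="ja",
    task="translate",
    condition_on_previous_text=False,
)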
