
The LipSync-Wav2Lip-Project repository is a comprehensive solution for achieving lip synchronization in videos using the Wav2Lip deep learning model. This open-source project includes code that enables users to seamlessly synchronize lip movements with audio tracks.


LipSync-Wav2Lip-Project

This repository contains code for lip synchronization using Wav2Lip, a deep learning-based model.

How to Use this Code for Lip Synchronization

Step 1: Clone the Repository

git clone https://github.com/Dishantkharkar/LipSync-Wav2Lip-Project.git
cd LipSync-Wav2Lip-Project

Step 2: Install Requirements

pip install -r requirements.txt
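Optionally, the requirements can be installed inside a virtual environment to keep them isolated; a minimal sketch (the environment name .venv is an arbitrary choice, not part of the original instructions):

python -m venv .venv                 # create an isolated environment
source .venv/bin/activate            # on Windows: .venv\Scripts\activate
pip install -r requirements.txt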

Step 3: Download Pretrained Model

Download the pretrained face-detection model (s3fd.pth, linked in the original Wav2Lip README) and save it in the face_detection/detection/sfd/ folder.
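A minimal sketch of this step, assuming a command-line download; <s3fd_download_url> is a placeholder for the link given in the Wav2Lip README, not a real URL:

mkdir -p face_detection/detection/sfd
# replace <s3fd_download_url> with the s3fd.pth link from the Wav2Lip README
curl -L -o face_detection/detection/sfd/s3fd.pth "<s3fd_download_url>"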

Step 4: Obtain Additional Weights

Navigate to the official Wav2Lip repository and follow the instructions in the README to obtain additional weights.
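For illustration only: wav2lip_gan.pth is one of the checkpoints distributed by the upstream Wav2Lip project, and checkpoints/ is an assumed destination folder, not one mandated by this repository:

mkdir -p checkpoints
# move the downloaded Wav2Lip checkpoint into an assumed checkpoints/ folder;
# the file name and download location here are illustrative
mv ~/Downloads/wav2lip_gan.pth checkpoints/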

Step 5: Add Video and Audio

Place your video and audio files in the folder shown in the folder-structure screenshot of the original README (see the sketch after this step).
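A minimal sketch of adding the inputs, assuming the files simply sit in the repository root; the file names are illustrative, and the screenshot in the original README is authoritative for the actual layout:

# copy your inputs into the project (names and destination are assumptions)
cp /path/to/my_face_video.mp4 ./input_video.mp4
cp /path/to/my_audio.wav ./input_audio.wav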

Step 6: Lip Synchronization

Run the following command to perform lip synchronization:

python inference.py --checkpoint_path <path_to_pretrained_model> --face <path_to_face_video> --audio <path_to_audio_file>

Replace <path_to_pretrained_model>, <path_to_face_video>, and <path_to_audio_file> with the appropriate paths.
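For instance, using the illustrative checkpoint and input names from the previous steps (these are assumptions, not files shipped with this repository):

python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face input_video.mp4 --audio input_audio.wav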

An example run using the newscript.txt file is shown in the screenshot in the original README.

The result will be stored in the Result folder under the name result_audio; the original README includes a screenshot of the resulting output.

Evaluation

To evaluate the model, use the provided evaluation script:

python evaluation/evaluate.py --model_path <path_to_model> --data_path <path_to_evaluation_data>

Replace <path_to_model> and <path_to_evaluation_data> with the paths to your trained model and evaluation dataset, respectively.
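A hypothetical invocation, reusing the illustrative checkpoint path from earlier and a placeholder data directory (neither is provided by this repository):

python evaluation/evaluate.py --model_path checkpoints/wav2lip_gan.pth --data_path ./evaluation_data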

Additional Information

For more details and updates, refer to the original Wav2Lip README.

Contributors
