Gesture2Text is a machine learning model integrated with Google Meet to enable real-time gesture detection for inclusive communication during meetings.
- Detects gestures in real time.
- Facilitates communication for differently-abled individuals.
- Integrates seamlessly with Google Meet.
- Front-end built with React; back-end powered by Flask.
Before you begin, ensure you have met the following requirements:
- Node.js and npm: Install Node.js and npm to run the React front-end.
- Python and Flask: Install Python and Flask for the back-end.
- Webcam Access: Ensure your system allows access to the webcam for gesture detection.
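The setup steps below assume `node`, `npm`, `python`, and `pip` are all available on your `PATH`. As an illustrative helper (not part of the repository), a short Python check can confirm this before you begin:

```python
# Illustrative prerequisite check (not part of the Gesture2Text repo):
# verifies that the commands the setup steps rely on are found on PATH.
import shutil


def check_prerequisites(commands=("node", "npm", "python", "pip")):
    """Return the subset of `commands` that cannot be found on PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]


if __name__ == "__main__":
    missing = check_prerequisites()
    if missing:
        print("Missing prerequisites:", ", ".join(missing))
    else:
        print("All prerequisites found.")
```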
- Clone the repository:

  ```shell
  git clone https://github.com/yourusername/Gesture2Text.git
  ```
- Front-end (React) setup:

  ```shell
  cd Gesture2Text/frontend
  npm install
  npm start
  ```
- Back-end (Flask) setup:

  ```shell
  cd Gesture2Text/backend
  pip install -r requirements.txt
  python app.py
  ```
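For orientation, a minimal sketch of what `backend/app.py` could look like is shown below. The `/predict` route, its JSON payload, and the placeholder labels are assumptions for illustration only; the actual model and API live in the repository.

```python
# Hypothetical minimal sketch of backend/app.py. The /predict endpoint,
# payload shape, and labels are illustrative assumptions, not the real app.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder labels; a real deployment would load the trained gesture model.
GESTURES = ["thumbs_up", "wave", "point"]


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body with a base64-encoded webcam frame.
    data = request.get_json(silent=True) or {}
    if "frame" not in data:
        return jsonify({"error": "missing 'frame' field"}), 400
    # A real implementation would decode the frame and run the model;
    # a fixed label is returned here just to show the response shape.
    return jsonify({"gesture": GESTURES[0], "confidence": 0.0})


if __name__ == "__main__":
    app.run(port=5000)
```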
- Start the React development server:

  ```shell
  cd Gesture2Text/frontend
  npm start
  ```
- Run the Flask back end:

  ```shell
  cd Gesture2Text/backend
  python app.py
  ```
- Access Gesture2Text:

  Open your web browser and visit `http://localhost:3000`.
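To confirm the front and back ends can talk to each other, you can also post a frame to the Flask server directly. The `/predict` URL, port, and JSON payload in this sketch are assumptions for illustration, not a documented API:

```python
# Illustrative smoke test for a hypothetical /predict endpoint on the
# Flask back end; the URL and payload shape are assumptions, not the
# repository's documented API.
import base64
import json
import urllib.request


def build_payload(frame_bytes):
    """Base64-encode a raw webcam frame so it survives JSON transport."""
    return json.dumps({"frame": base64.b64encode(frame_bytes).decode("ascii")})


def predict_gesture(frame_bytes, url="http://localhost:5000/predict"):
    """POST one frame to the back end and return the decoded JSON reply."""
    req = urllib.request.Request(
        url,
        data=build_payload(frame_bytes).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```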
Watch a demo of Gesture2Text
This project is licensed under the MIT License.