
Comments (35)

github-actions commented on July 1, 2024

👋 Hello @PrakharJoshi54321, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a ๐Ÿ› Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a โ“ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

sergiuwaxmann commented on July 1, 2024

@PrakharJoshi54321 Hello!

The "Optimizing weights" process can take a while. Let's wait for a bit to see if the process finishes successfully.

If the process fails, could you share your model ID (URL) so I can investigate?

PrakharJoshi54321 commented on July 1, 2024

I am using my local machine and all 100 epochs have completed. It was showing me "Optimizing weights" and now it is showing me this. Please guide me through the next steps.

PrakharJoshi54321 commented on July 1, 2024

https://hub.ultralytics.com/models/pXL2wTJQSWfImPyV3QhO

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for providing the details and the screenshot. It looks like your model has completed the training process but encountered an issue during the weight optimization phase. Let's address this step-by-step:

  1. Verify Package Versions: Ensure you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

    pip install --upgrade torch ultralytics hub-sdk
  2. Check Logs: Please check the logs for any errors or warnings that might have occurred during the optimization phase. This can provide more insight into what went wrong.

  3. Resume Training: If the training process was interrupted, you can resume training from the last checkpoint. Navigate to the Model page on Ultralytics HUB and look for the option to resume training.

  4. Preview and Deployment: Since you mentioned that the model is giving you the option to preview and deploy, you can proceed with these steps:

    • Preview Model: Navigate to the Preview tab on the Model page. You can select a preview image from your dataset or upload a new image to see how your model performs.
    • Deploy Model: Navigate to the Deploy tab. You can export your model to various formats such as ONNX, TensorFlow, etc., or use the Ultralytics Inference API for deployment.

For more detailed guidance, you can refer to the Ultralytics HUB Models Documentation.

If the issue persists, please provide any error messages or logs you encounter, and we can further investigate the problem.

Thank you for your patience and cooperation. The YOLO community and the Ultralytics team are here to help you!

sergiuwaxmann commented on July 1, 2024

@PrakharJoshi54321 It looks like your model didn't successfully upload the weights, which is why Ultralytics HUB is asking you to resume training from the last checkpoint (62). I suggest resuming training as recommended in the UI.
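
If you are training locally with the ultralytics package rather than through the HUB UI, resuming from the last saved checkpoint looks roughly like this (a minimal sketch; the checkpoint path is an assumption to adjust to your own runs directory):

from ultralytics import YOLO

# Load the last checkpoint written during the interrupted run
# (path is an assumption -- adjust it to your runs directory)
model = YOLO("runs/detect/train/weights/last.pt")

# resume=True continues training from the epoch stored in the checkpoint
model.train(resume=True)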

PrakharJoshi54321 commented on July 1, 2024

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    im0 = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
I have made another model with Ultralytics for number plate detection and I am trying to integrate it. Please help me integrate it.

Comment: Ultralytics is just amazing.

Any help will be appreciated.

PrakharJoshi54321 commented on July 1, 2024

If the speed is greater than 50 km/hr, store the vehicle number, speed, and track ID in the Excel sheet.

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for your kind words about Ultralytics! We're thrilled to hear that you're enjoying using our tools. Let's enhance your script to store vehicle information in an Excel sheet when the speed exceeds 50 km/hr.

Here's an updated version of your script that includes this functionality:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    im0, speeds = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

    # Store vehicle information if speed exceeds 50 km/hr
    for track_id, speed in speeds.items():
        if speed > 50:
            new_row = pd.DataFrame([{
                "Track ID": track_id,
                "Vehicle No": plate_text,
                "Speed (km/hr)": speed
            }])
            # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
            vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

This script will now store the vehicle number, speed, and track ID in an Excel sheet if the speed exceeds 50 km/hr. The pandas library is used to handle the Excel file operations.
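
Note that DataFrame.append was removed in pandas 2.0 (the environment shared later in this thread pins pandas 2.2.2), which is why the rows above are accumulated with pd.concat. A minimal sketch of the pattern, with placeholder values:

import pandas as pd

vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

# Build a one-row DataFrame and concatenate it; DataFrame.append no longer exists in pandas 2.x
new_row = pd.DataFrame([{"Track ID": 1, "Vehicle No": "ABC123", "Speed (km/hr)": 62.5}])
vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)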

If you encounter any issues or have further questions, please let us know. The YOLO community and the Ultralytics team are always here to help!

PrakharJoshi54321 commented on July 1, 2024

This code is throwing an error: the function here does not return two values, yet you are telling me to store the result in two variables. How is that possible? "im0, speeds = speed_obj.estimate_speed(im0, results)"
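
A quick way to check what the function actually returns before unpacking it (a minimal sketch using the objects from the script above):

# Inspect the return value of estimate_speed before unpacking it
ret = speed_obj.estimate_speed(im0, results)
print(type(ret))  # a single numpy.ndarray would mean only the annotated frame is returned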

PrakharJoshi54321 commented on July 1, 2024

pro.zip
I have made another model with Ultralytics for number plate detection and I am trying to integrate it. Please help me integrate it. I have uploaded my project. Also, if the speed is greater than 50 km/hr, store the vehicle number, speed, and track ID in the Excel sheet.

Please do this for me; all efforts will be appreciated.

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for sharing your project files and providing details about your requirements. Let's address the integration of your number plate detection model and the speed tracking functionality, ensuring that vehicle information is stored in an Excel sheet when the speed exceeds 50 km/hr.

First, let's correct the issue with the estimate_speed function. The estimate_speed function should return the modified frame and a dictionary of speeds. Here's the updated version of your script:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    im0, speeds = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

    # Store vehicle information if speed exceeds 50 km/hr
    for track_id, speed in speeds.items():
        if speed > 50:
            new_row = pd.DataFrame([{
                "Track ID": track_id,
                "Vehicle No": plate_text,
                "Speed (km/hr)": speed
            }])
            # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
            vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

This script now correctly handles the return values from the estimate_speed function and stores the vehicle information in an Excel sheet if the speed exceeds 50 km/hr.

If you encounter any further issues or have additional questions, please let us know. The YOLO community and the Ultralytics team are here to support you!

PrakharJoshi54321 commented on July 1, 2024

Is it working on your system? Please share a screenshot and the detailed process; this is my college project.

PrakharJoshi54321 commented on July 1, 2024

Please get the OCR working correctly.

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for reaching out! To assist you effectively, we need to ensure a few things:

  1. Minimum Reproducible Example: Could you please provide a minimal code snippet that reproduces the issue you're facing with OCR? This will help us understand the problem better and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details on how to create one.

  2. Package Versions: Ensure you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

    pip install --upgrade torch ultralytics hub-sdk

Regarding your OCR integration, here's a refined approach to ensure accurate OCR detection:

  1. Preprocessing the Image: Sometimes, preprocessing the image can significantly improve OCR accuracy. This can include converting the image to grayscale, applying thresholding, or resizing the image.

  2. Tesseract Configuration: Tesseract OCR has various configuration options that can be fine-tuned for better results. For instance, using different Page Segmentation Modes (PSM) can yield better results depending on the structure of the text.

Here's an example of how you can preprocess the image and configure Tesseract:

import cv2
import pytesseract
from PIL import Image

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

def preprocess_image(image):
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Apply thresholding
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return thresh

def extract_text_from_image(image):
    # Preprocess the image
    preprocessed_image = preprocess_image(image)
    # Convert to PIL Image
    pil_image = Image.fromarray(preprocessed_image)
    # Use Tesseract to extract text
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

# Example usage
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

This example demonstrates how to preprocess the image before passing it to Tesseract for OCR. You can adjust the preprocessing steps based on your specific requirements.
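
If --psm 8 still misreads plates, it may also help to upscale the crop and restrict Tesseract to the characters that can appear on a plate. A sketch of both ideas (the scale factor and character whitelist are assumptions to tune):

import cv2
import pytesseract
from PIL import Image

def ocr_plate(image):
    # Upscale small crops; Tesseract generally reads larger text more reliably
    image = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding picks the threshold automatically
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 7 treats the crop as a single text line; the whitelist limits the output alphabet
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(Image.fromarray(thresh), config=config).strip()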

If you continue to face issues, please share the minimal reproducible example, and we'll be happy to assist you further. The YOLO community and the Ultralytics team are here to help!

PrakharJoshi54321 commented on July 1, 2024

I am using 5 km/hr for testing and it is showing me this:

Vehicle detected at: (815, 196, 871, 255)

0: 640x608 1 0, 116.3ms
Speed: 0.8ms preprocess, 116.3ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 608)
Detected Number Plate: eT
Traceback (most recent call last):
File "C:\Users\cairuser1\Desktop\project\intigrate.py", line 85, in
im0, speeds = speed_obj.estimate_speed(im0, results)
ValueError: too many values to unpack (expected 2)

PrakharJoshi54321 commented on July 1, 2024

# Write the frame with detections and speed estimation
im0, speeds = speed_obj.estimate_speed(im0, results)
video_writer.write(im0)

PrakharJoshi54321 commented on July 1, 2024

Packages in environment at C:\Users\cairuser1\miniconda3\envs\speedss:

Name Version Build Channel

asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
beautifulsoup4 4.12.3 pypi_0 pypi
bzip2 1.0.8 h2bbff1b_6
ca-certificates 2024.6.2 h56e8100_0 conda-forge
cachetools 5.3.3 pypi_0 pypi
certifi 2024.6.2 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
comm 0.2.2 pyhd8ed1ab_0 conda-forge
contourpy 1.2.1 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
debugpy 1.6.7 py310hd77b12b_0
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
dill 0.3.8 pypi_0 pypi
easyocr 1.7.1 pypi_0 pypi
et-xmlfile 1.1.0 pypi_0 pypi
exceptiongroup 1.2.0 pyhd8ed1ab_2 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
filelock 3.15.1 pypi_0 pypi
fonttools 4.53.0 pypi_0 pypi
fsspec 2024.6.0 pypi_0 pypi
google 3.0.0 pypi_0 pypi
google-api-core 2.19.0 pypi_0 pypi
google-auth 2.30.0 pypi_0 pypi
google-cloud-vision 3.7.2 pypi_0 pypi
googleapis-common-protos 1.63.1 pypi_0 pypi
grpcio 1.64.1 pypi_0 pypi
grpcio-status 1.62.2 pypi_0 pypi
hub-sdk 0.0.8 pypi_0 pypi
idna 3.7 pypi_0 pypi
imageio 2.34.1 pypi_0 pypi
importlib-metadata 7.1.0 pyha770c72_0 conda-forge
importlib_metadata 7.1.0 hd8ed1ab_0 conda-forge
imutils 0.5.4 pypi_0 pypi
intel-openmp 2021.4.0 pypi_0 pypi
ipykernel 6.29.4 pyh4bbf305_0 conda-forge
ipython 8.25.0 pyh7428d3b_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.4 pypi_0 pypi
jupyter_client 8.6.2 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.2 py310h5588dad_0 conda-forge
kiwisolver 1.4.5 pypi_0 pypi
lap 0.4.0 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
libffi 3.4.4 hd77b12b_1
libsodium 1.0.18 h8d14728_1 conda-forge
markupsafe 2.1.5 pypi_0 pypi
matplotlib 3.9.0 pypi_0 pypi
matplotlib-inline 0.1.7 pyhd8ed1ab_0 conda-forge
mkl 2021.4.0 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
networkx 3.3 pypi_0 pypi
ninja 1.11.1.1 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
opencv-python 4.10.0.82 pypi_0 pypi
opencv-python-headless 4.10.0.82 pypi_0 pypi
openpyxl 3.1.4 pypi_0 pypi
openssl 1.1.1l h8ffe710_0 conda-forge
packaging 24.1 pyhd8ed1ab_0 conda-forge
pandas 2.2.2 pypi_0 pypi
parso 0.8.4 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.3.0 pypi_0 pypi
pip 24.0 py310haa95532_0
platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.47 pyha770c72_0 conda-forge
proto-plus 1.24.0 pypi_0 pypi
protobuf 4.25.3 pypi_0 pypi
psutil 5.9.8 pypi_0 pypi
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
py-cpuinfo 9.0.0 pypi_0 pypi
pyasn1 0.6.0 pypi_0 pypi
pyasn1-modules 0.4.0 pypi_0 pypi
pyclipper 1.3.0.post5 pypi_0 pypi
pygments 2.18.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.1.2 pypi_0 pypi
pytesseract 0.3.10 pypi_0 pypi
python 3.10.0 h96c0403_3
python-bidi 0.4.2 pypi_0 pypi
python-dateutil 2.9.0.post0 pypi_0 pypi
python_abi 3.10 2_cp310 conda-forge
pytz 2024.1 pypi_0 pypi
pywin32 305 py310h2bbff1b_0
pyyaml 6.0.1 pypi_0 pypi
pyzmq 25.1.2 py310hd77b12b_0
requests 2.32.3 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-image 0.23.2 pypi_0 pypi
scipy 1.13.1 pypi_0 pypi
seaborn 0.13.2 pypi_0 pypi
setuptools 69.5.1 py310haa95532_0
shapely 2.0.4 pypi_0 pypi
six 1.16.0 pyh6c4a22f_0 conda-forge
soupsieve 2.5 pypi_0 pypi
sqlite 3.45.3 h2bbff1b_0
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
sympy 1.12.1 pypi_0 pypi
tbb 2021.12.0 pypi_0 pypi
tifffile 2024.5.22 pypi_0 pypi
tk 8.6.14 h0416ee5_0
torch 2.3.1 pypi_0 pypi
torchvision 0.18.1 pypi_0 pypi
tornado 6.2 py310he2412df_0 conda-forge
tqdm 4.66.4 pypi_0 pypi
traitlets 5.14.3 pyhd8ed1ab_0 conda-forge
typing_extensions 4.12.2 pyha770c72_0 conda-forge
tzdata 2024.1 pypi_0 pypi
ultralytics 8.2.38 pypi_0 pypi
ultralytics-thop 2.0.0 pypi_0 pypi
urllib3 2.2.1 pypi_0 pypi
vc 14.2 h2eaa2aa_1
vs2015_runtime 14.29.30133 h43f2093_3
wcwidth 0.2.13 pyhd8ed1ab_0 conda-forge
wheel 0.43.0 py310haa95532_0
xz 5.4.6 h8cc25b3_1
zeromq 4.3.5 hd77b12b_0
zipp 3.19.2 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 h8cc25b3_1

list of packages

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for providing the detailed list of packages in your environment. It looks like you're encountering an issue with the estimate_speed function returning more values than expected. Let's address this step-by-step.

Step 1: Verify Package Versions

First, ensure that you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

Step 2: Minimum Reproducible Example

To help us diagnose the issue more effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

Step 3: Correcting the estimate_speed Function

It seems like the estimate_speed function is not returning the expected values. Let's correct this by ensuring the function returns the frame and the speeds dictionary correctly. Here's an updated version of your script:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    speeds = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

    # Store vehicle information if speed exceeds 50 km/hr
    for track_id, speed in speeds.items():
        if speed > 50:
            new_row = pd.DataFrame([{
                "Track ID": track_id,
                "Vehicle No": plate_text,
                "Speed (km/hr)": speed
            }])
            # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
            vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

Step 4: Improving OCR Accuracy

To improve OCR accuracy, consider preprocessing the image before passing it to Tesseract. Here's an example:

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

# Example usage
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

Conclusion

Please try the updated script and let us know if it resolves the issue. If the problem persists, providing a minimum reproducible example will help us assist you better. The YOLO community and the Ultralytics team are here to support you!

PrakharJoshi54321 commented on July 1, 2024

Traceback (most recent call last):
File "C:\Users\cairuser1\Desktop\project\intigrate.py", line 89, in
for track_id, speed in speeds.items():
AttributeError: 'numpy.ndarray' object has no attribute 'items'. Did you mean: 'item'?

Please provide a fix quickly.

PrakharJoshi54321 commented on July 1, 2024

Please resolve this quickly.

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for your patience. Let's address the issue you're facing with the estimate_speed function returning a numpy.ndarray instead of a dictionary.

Step 1: Verify Package Versions

First, ensure you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

Step 2: Correcting the estimate_speed Function

It seems like the estimate_speed function might be returning a different structure than expected. Let's adjust the code to handle this correctly. Here's an updated version of your script:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    im0, speeds = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):
        # Store vehicle information if speed exceeds 50 km/hr
        for track_id, speed in speeds.items():
            if speed > 50:
                new_row = pd.DataFrame([{
                    "Track ID": track_id,
                    "Vehicle No": plate_text,
                    "Speed (km/hr)": speed
                }])
                # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
                vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)
    else:
        print("Speeds is not a dictionary. Please check the output of estimate_speed function.")

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

Step 3: Minimum Reproducible Example

If the issue persists, please provide a minimum reproducible code example. This will help us understand the problem better and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

We appreciate your patience and understanding. The YOLO community and the Ultralytics team are here to support you! If you have any further questions or need additional assistance, please let us know.

PrakharJoshi54321 commented on July 1, 2024

Is this correct?

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for reaching out! Let's address your issue step-by-step to ensure we provide the best possible support.

Step 1: Minimum Reproducible Example

To help us diagnose the issue effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details. Having a reproducible example is crucial for us to investigate and resolve the issue efficiently.

Step 2: Verify Package Versions

Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

Using the most recent versions helps ensure that any known bugs are fixed and you have access to the latest features and improvements.
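
To confirm which versions the active environment actually resolves, a quick check (a minimal sketch):

import torch
import ultralytics

# Print the versions the running interpreter imports
print("torch:", torch.__version__)
print("ultralytics:", ultralytics.__version__)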

Step 3: Correcting the estimate_speed Function

It seems like there might be an issue with the estimate_speed function returning a numpy.ndarray instead of a dictionary. Here's an updated version of your script to handle this correctly:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Convert the cropped image to a format suitable for OCR
                    plate_cropped_image_rgb = cv2.cvtColor(plate_cropped_image, cv2.COLOR_BGR2RGB)
                    pil_image = Image.fromarray(plate_cropped_image_rgb)

                    # Use Tesseract to extract text
                    plate_text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    im0, speeds = speed_obj.estimate_speed(im0, results)
    video_writer.write(im0)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):
        # Store vehicle information if speed exceeds 50 km/hr
        for track_id, speed in speeds.items():
            if speed > 50:
                new_row = pd.DataFrame([{
                    "Track ID": track_id,
                    "Vehicle No": plate_text,
                    "Speed (km/hr)": speed
                }])
                # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
                vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)
    else:
        print("Speeds is not a dictionary. Please check the output of estimate_speed function.")

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

Step 4: Improving OCR Accuracy

To improve OCR accuracy, consider preprocessing the image before passing it to Tesseract. Here's an example:

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

# Example usage
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

We hope this helps resolve the issue. If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊

PrakharJoshi54321 commented on July 1, 2024

No number plate detected in this vehicle bounding box.
Vehicle detected at: (815, 196, 871, 255)

0: 640x608 1 0, 122.0ms
Speed: 0.0ms preprocess, 122.0ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 608)
Detected Number Plate: eT
Traceback (most recent call last):
File "C:\Users\cairuser1\Desktop\project\intigrate.py", line 85, in
im0, speeds = speed_obj.estimate_speed(im0, results)
ValueError: too many values to unpack (expected 2)

PrakharJoshi54321 commented on July 1, 2024

0: 640x608 1 0, 133.6ms
Speed: 0.0ms preprocess, 133.6ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 608)
Detected Number Plate: OO
Traceback (most recent call last):
File "C:\Users\cairuser1\Desktop\project\intigrate.py", line 92, in
im0, speeds = speed_obj.estimate_speed(im0, results)
ValueError: too many values to unpack (expected 2)

Both versions of the code give the same result.

PrakharJoshi54321 commented on July 1, 2024

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Initialize plate_text to an empty string for each frame
    plate_text = ""

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Extract text using OCR
                    plate_text = extract_text_from_image(plate_cropped_image)
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    result = speed_obj.estimate_speed(im0, results)
    # NOTE: if estimate_speed returns only the annotated frame (a numpy array),
    # indexing it like this yields image rows, not (frame, speeds)
    im0 = result[0]  # Frame with detections
    speeds = result[1]  # Speeds dictionary
    video_writer.write(im0)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):
        # Store vehicle information if the speed exceeds the threshold (5 km/hr here for testing)
        for track_id, speed in speeds.items():
            if speed > 5:
                new_row = pd.DataFrame([{
                    "Track ID": track_id,
                    "Vehicle No": plate_text,
                    "Speed (km/hr)": speed
                }])
                # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
                vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)
    else:
        print("Speeds is not a dictionary. Please check the output of estimate_speed function.")

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

# Example usage of preprocess_image and extract_text_from_image functions
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')

This one runs, but the OCR is not correct and it is not showing the speed or the bounding boxes.

PrakharJoshi54321 commented on July 1, 2024

Is it necessary to wait for the whole video to complete?

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for reaching out! To effectively address your question, it would be helpful to understand the specific context of your use case. However, I can provide some general guidance on handling video processing with YOLO models.

Real-Time Processing

If your goal is to process video frames in real-time, you do not need to wait for the entire video to complete. You can process each frame as it is read from the video stream. Here's a basic example of how you can achieve this:

import cv2
from ultralytics import YOLO

# Load the model
model = YOLO("yolov8n.pt")

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    # Perform detection on the current frame
    results = model(frame)

    # Process results (e.g., draw bounding boxes)
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    # Display the frame with detections
    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Batch Processing

If you prefer to process the entire video at once, you can read all frames into memory, process them, and then save the results. This approach might be useful for post-processing tasks where real-time performance is not critical.
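
As a sketch of that batch approach (assuming the whole video fits in memory; Ultralytics models accept a list of images as the source):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("video.mp4")

# Read every frame into memory first (only sensible for short videos)
frames = []
while True:
    success, frame = cap.read()
    if not success:
        break
    frames.append(frame)
cap.release()

# Run inference over the whole list, then post-process each result
results = model(frames)
for frame, result in zip(frames, results):
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)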

Importance of Reproducible Example

To provide more specific assistance, it would be helpful if you could share a minimum reproducible example of your code. This will allow us to better understand your setup and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

Verify Package Versions

Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

We hope this helps! If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊

PrakharJoshi54321 commented on July 1, 2024

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model(im0)

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Initialize plate_text to an empty string for each frame
    plate_text = ""

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.numpy()
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Extract text using OCR
                    plate_text = extract_text_from_image(plate_cropped_image)
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    result = speed_obj.estimate_speed(im0, results)
    # NOTE: if estimate_speed returns only the annotated frame (a numpy array),
    # indexing it like this yields image rows, not (frame, speeds)
    im0 = result[0]  # Frame with detections
    speeds = result[1]  # Speeds dictionary
    video_writer.write(im0)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):
        # Store vehicle information if the speed exceeds the threshold (5 km/hr here for testing)
        for track_id, speed in speeds.items():
            if speed > 5:
                new_row = pd.DataFrame([{
                    "Track ID": track_id,
                    "Vehicle No": plate_text,
                    "Speed (km/hr)": speed
                }])
                # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
                vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)
    else:
        print("Speeds is not a dictionary. Please check the output of estimate_speed function.")

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)

# Example usage of preprocess_image and extract_text_from_image functions
image = cv2.imread('path_to_image.jpg')
text = extract_text_from_image(image)
print(f'Detected Text: {text}')
This one runs, but the OCR is not correct and it is not showing the speed or the bounding boxes.

pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for sharing your code and detailed explanation. Let's address the issues you're facing with OCR accuracy and speed estimation.

1. Importance of a Reproducible Example

To better assist you, it would be helpful to have a minimum reproducible example. This allows us to replicate the issue on our end and provide a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details.

2. Verify Package Versions

Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

3. Improving OCR Accuracy

To improve OCR accuracy, consider additional preprocessing steps. Here's an enhanced version of your preprocess_image and extract_text_from_image functions:

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

4. Handling Speed Estimation

The estimate_speed function should return a tuple with the frame and a dictionary of speeds. Ensure you are correctly unpacking the result:

# Write the frame with detections and speed estimation
result = speed_obj.estimate_speed(im0, results)
im0, speeds = result  # Unpack the result
video_writer.write(im0)

# Ensure speeds is a dictionary
if isinstance(speeds, dict):
    # Store vehicle information if speed exceeds 50 km/hr
    for track_id, speed in speeds.items():
        if speed > 5:
            new_row = pd.DataFrame([{
                "Track ID": track_id,
                "Vehicle No": plate_text,
                "Speed (km/hr)": speed
            }])
            # DataFrame.append was removed in pandas 2.0; concatenate a one-row frame instead
            vehicle_data = pd.concat([vehicle_data, new_row], ignore_index=True)
else:
    print("Speeds is not a dictionary. Please check the output of estimate_speed function.")

5. Real-Time Processing

If you want to process video frames in real-time, you do not need to wait for the entire video to complete. You can process each frame as it is read from the video stream.

Example Code

Here's a refined version of your script incorporating the above suggestions:

import cv2
from ultralytics import YOLO, solutions
import pytesseract
from PIL import Image
import numpy as np
import pandas as pd

# Path to Tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'  # raw string, single backslashes

# Load the models
speed_model = YOLO("yolov8n.pt")  # Model for speed detection and tracking
plate_model = YOLO('epoch-68.pt')  # Model for number plate detection

# Path to the video file
video_path = 'video.mp4'  # Replace with your video file path

# Initialize video capture
cap = cv2.VideoCapture(video_path)
assert cap.isOpened(), "Error opening video file"

w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Video writer
video_writer = cv2.VideoWriter("output_video.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

line_pts = [(0, h // 2), (w, h // 2)]  # Update line points based on video resolution

# Init speed-estimation object
speed_obj = solutions.SpeedEstimator(
    reg_pts=line_pts,
    names=speed_model.model.names,
    view_img=True,
)

# DataFrame to store vehicle information
vehicle_data = pd.DataFrame(columns=["Track ID", "Vehicle No", "Speed (km/hr)"])

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Error reading frame from video.")
        break

    # Speed detection and tracking
    results = speed_model.track(im0, persist=True)  # track (not plain predict) so SpeedEstimator gets track IDs

    if results:
        print(f"Tracks detected: {len(results)}")
    else:
        print("No tracks detected in this frame.")

    # Initialize plate_text to an empty string for each frame
    plate_text = ""

    # Ensure tracks have valid data
    for result in results:
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"Vehicle detected at: {x1, y1, x2, y2}")
            cropped_image = im0[y1:y2, x1:x2]

            # Perform number plate detection
            plate_results = plate_model(cropped_image)

            for plate_result in plate_results:
                plate_boxes = plate_result.boxes.xyxy.cpu().numpy()  # move to CPU before converting to NumPy
                if len(plate_boxes) == 0:
                    print("No number plate detected in this vehicle bounding box.")
                for plate_box in plate_boxes:
                    px1, py1, px2, py2 = map(int, plate_box)
                    plate_cropped_image = cropped_image[py1:py2, px1:px2]

                    # Extract text using OCR
                    plate_text = extract_text_from_image(plate_cropped_image)
                    print(f'Detected Number Plate: {plate_text}')

                    # Draw the bounding box for the plate and add the text
                    cv2.rectangle(im0, (x1 + px1, y1 + py1), (x1 + px2, y1 + py2), (0, 255, 0), 2)
                    cv2.putText(im0, plate_text, (x1 + px1, y1 + py1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Write the frame with detections and speed estimation
    result = speed_obj.estimate_speed(im0, results)
    im0, speeds = result  # Unpack the result
    video_writer.write(im0)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):
        # Store vehicle information if the speed exceeds the threshold (5 km/hr in this check)
        for track_id, speed in speeds.items():
            if speed > 5:
                # DataFrame.append was removed in pandas 2.0; use pd.concat instead
                vehicle_data = pd.concat(
                    [vehicle_data, pd.DataFrame([{
                        "Track ID": track_id,
                        "Vehicle No": plate_text,
                        "Speed (km/hr)": speed,
                    }])],
                    ignore_index=True,
                )
    else:
        print("Speeds is not a dictionary. Please check the output of estimate_speed function.")

cap.release()
video_writer.release()
cv2.destroyAllWindows()

# Save the vehicle data to an Excel file
vehicle_data.to_excel("vehicle_data.xlsx", index=False)
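
One small note: vehicle_data.to_excel writes .xlsx files through an Excel engine such as openpyxl, so if you hit a missing-module error, install it first:

pip install openpyxl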

We hope this helps! If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊


PrakharJoshi54321 avatar PrakharJoshi54321 commented on July 1, 2024

0: 640x608 1 0, 124.8ms
Speed: 0.0ms preprocess, 124.8ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 608)
Detected Number Plate: 5
Traceback (most recent call last):
  File "C:\Users\cairuser1\Desktop\project\intigrate.py", line 93, in <module>
    im0, speeds = result  # Unpack the result
ValueError: too many values to unpack (expected 2)


pderrenger avatar pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for reaching out and providing details about the issue you're encountering. Let's work together to resolve this!

Importance of a Reproducible Example

To better understand and diagnose the problem, it would be extremely helpful if you could provide a minimum reproducible example of your code. This allows us to replicate the issue on our end and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details on how to create one.

Verify Package Versions

Please ensure you are using the latest versions of torch, ultralytics, and hub-sdk. Sometimes, issues are resolved in newer releases, so updating your packages might help. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

Handling the ValueError

The error message ValueError: too many values to unpack (expected 2) suggests that estimate_speed is not returning the (frame, speeds) pair the code expects. Let's guard the unpacking so the script fails gracefully. Here's a snippet to handle this:

# Write the frame with detections and speed estimation
result = speed_obj.estimate_speed(im0, results)
if len(result) == 2:
    im0, speeds = result  # Unpack the result
    video_writer.write(im0)

    # Ensure speeds is a dictionary
    if isinstance(speeds, dict):
        # Store vehicle information if the speed exceeds the threshold (5 km/hr in this check)
        for track_id, speed in speeds.items():
            if speed > 5:
                # DataFrame.append was removed in pandas 2.0; use pd.concat instead
                vehicle_data = pd.concat(
                    [vehicle_data, pd.DataFrame([{
                        "Track ID": track_id,
                        "Vehicle No": plate_text,
                        "Speed (km/hr)": speed,
                    }])],
                    ignore_index=True,
                )
    else:
        print("Speeds is not a dictionary. Please check the output of estimate_speed function.")
else:
    print("Unexpected number of values returned by estimate_speed function.")

Improving OCR Accuracy

To improve OCR accuracy, consider additional preprocessing steps. Here's an enhanced version of your preprocess_image and extract_text_from_image functions:

def preprocess_image(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return thresh

def extract_text_from_image(image):
    preprocessed_image = preprocess_image(image)
    pil_image = Image.fromarray(preprocessed_image)
    text = pytesseract.image_to_string(pil_image, config='--psm 8').strip()
    return text

We hope this helps resolve the issue. If you have any further questions or need additional assistance, please let us know. The YOLO community and the Ultralytics team are here to support you! 😊


PrakharJoshi54321 avatar PrakharJoshi54321 commented on July 1, 2024

I have already provided the .pt file; please provide me with the full project folder.


pderrenger avatar pderrenger commented on July 1, 2024

Hello @PrakharJoshi54321,

Thank you for reaching out! We appreciate your interest in our project. To provide you with the best possible assistance, it would be extremely helpful if you could share a minimum reproducible example of your code. This will allow us to better understand the issue and offer a more accurate solution. You can refer to our Minimum Reproducible Example Guide for more details on how to create one.

Additionally, please ensure you are using the latest versions of torch, ultralytics, and hub-sdk. Sometimes, issues are resolved in newer releases, so updating your packages might help. You can update them using the following commands:

pip install --upgrade torch ultralytics hub-sdk

Regarding your request for the full folder of the project, we encourage users to build and customize their own projects based on the provided models and documentation. This approach allows for greater flexibility and understanding of the underlying processes.

If you have any specific questions or need further assistance with your code, feel free to share more details here. The YOLO community and the Ultralytics team are here to support you! 😊

