
motpy's People

Contributors

kha-white, wmuron


motpy's Issues

Flying bboxes from lost objects

Bounding boxes keep flying across the last few frames after an object disappears. This isn't an issue with the detector (I use a pre-trained Faster R-CNN); it seems like the tracker doesn't clean up its tracks properly, which is why the boxes of lost objects keep drifting across the frame. Does anyone know how to fix it?
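
A minimal sketch of the kind of tuning that might help, based on the parameter names used in motpy's README examples (the exact values are guesses and would need tuning):

    from motpy import MultiObjectTracker

    # Lowering max_staleness should make lost tracks expire after fewer missed
    # frames, so their predicted boxes stop drifting across the image.
    tracker = MultiObjectTracker(
        dt=1 / 30,                                    # assumed camera frame rate
        tracker_kwargs={'max_staleness': 3},          # drop a track after ~3 missed frames
        active_tracks_kwargs={'min_steps_alive': 3,   # report only confirmed tracks
                              'max_staleness': 2})    # hide tracks that just lost their detection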

Skip object detection frames.

Thanks for developing motpy - I have found it is working great so far and is simple to use.

I am trying to use the tool to speed up my object detection pipeline, which uses a deep learning model. I am hoping to only have to run inference with the deep learning model every few frames and rely on object tracking for the frames in between.

Is there a way to update the tracker in motpy without having to provide a new set of detections?

Thank you!
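
If I read tracker.step() correctly, calling it with an empty detection list should simply run the Kalman prediction for all existing tracks, which would allow detection to be skipped on some frames. A rough sketch (run_detector and frames are placeholders for your own pipeline, and I have not verified this behaviour):

    from motpy import Detection, MultiObjectTracker

    tracker = MultiObjectTracker(dt=1 / 30)   # assumed frame rate
    DETECT_EVERY_N = 3                        # hypothetical: run the detector on every 3rd frame

    for i, frame in enumerate(frames):
        if i % DETECT_EVERY_N == 0:
            detections = [Detection(box=b, score=s) for b, s in run_detector(frame)]
        else:
            detections = []                   # no new detections on skipped frames
        tracks = tracker.step(detections=detections)  # tracker still predicts forward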

MOT tracks non-existent objects

Hi wmuron,
I really appreciate your effort, keep up the great work!
Sometimes the MOT tracks objects that aren't present. I wonder which parameters I can change to improve this.
Thanks in advance.
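
A sketch of the knobs that seem most relevant, using the parameter names from the README examples (values are illustrative): require more consecutive detections before a track is reported, and make the IoU matching stricter.

    from motpy import MultiObjectTracker

    tracker = MultiObjectTracker(
        dt=1 / 30,                                     # assumed frame rate
        matching_fn_kwargs={'min_iou': 0.3},           # stricter association with detections
        active_tracks_kwargs={'min_steps_alive': 4})   # require several hits before reporting a track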

Error while processing a video

Hi,

I am using a combination of YOLOv4-tiny and motpy to track people in a video. However, after processing successfully for a few seconds, the code fails and gives the following error:

    /home/user1/anaconda3/envs/people_counting_windows/lib/python3.8/site-packages/motpy/metrics.py:23: RuntimeWarning: invalid value encountered in true_divide
      iou = val_inter / (val_b1 + np.transpose(val_b2) - val_inter)
    Traceback (most recent call last):
      File "motpy_t.py.py", line 101, in <module>
        tracker.step(detections)
      File "/home/user1/anaconda3/envs/people_counting_windows/lib/python3.8/site-packages/motpy/tracker.py", line 279, in step
        matches = self.matching_fn(self.trackers, detections)
      File "/home/user1/anaconda3/envs/people_counting_windows/lib/python3.8/site-packages/motpy/tracker.py", line 201, in __call__
        return match_by_cost_matrix(
      File "/home/user1/anaconda3/envs/people_counting_windows/lib/python3.8/site-packages/motpy/tracker.py", line 133, in match_by_cost_matrix
        row_ind, col_ind = scipy.optimize.linear_sum_assignment(cost_mat)
      File "/home/user1/anaconda3/envs/people_counting_windows/lib/python3.8/site-packages/scipy/optimize/_lsap.py", line 93, in linear_sum_assignment
        raise ValueError("matrix contains invalid numeric entries")
    ValueError: matrix contains invalid numeric entries
Kindly help me with this.
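
The RuntimeWarning above suggests a NaN or infinity ended up in the IoU cost matrix, which then breaks linear_sum_assignment. A hedged workaround is to filter out degenerate detections before calling tracker.step (this assumes boxes in [x1, y1, x2, y2] format; it treats the symptom rather than the root cause):

    import numpy as np

    def sanitize(detections):
        """Drop detections whose boxes contain NaN/inf values or have zero area."""
        clean = []
        for det in detections:
            box = np.asarray(det.box, dtype=float)
            if not np.all(np.isfinite(box)):
                continue
            x1, y1, x2, y2 = box
            if x2 <= x1 or y2 <= y1:
                continue
            clean.append(det)
        return clean

    # tracker.step(detections=sanitize(detections))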

Parameters for random accelerating objects

Dear Sir,
What might be the best parameters, or acceleration model, for a setting like this:

  • Cars accelerating and slowing down at random
  • My detection bounding boxes are floats


  • Video is captured at fixed FPS = 5
  • Tracks are only produced if min_steps_alive is 1 or 2; with values greater than 2, nothing is detected

I have made several attempts, but I always get track-id switching, with new tracks being generated constantly.


Yours Sincerely.

code.TXT
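
A rough sketch of a configuration worth trying for dt = 1/5 s and strongly accelerating cars, assuming the model_spec dictionary accepts a second-order position model (order_pos=2) in the same style as the README's first-order examples; the noise values are guesses that would need tuning on the actual footage:

    from motpy import MultiObjectTracker

    model_spec = {'order_pos': 2, 'dim_pos': 2,     # position + velocity + acceleration in 2D
                  'order_size': 0, 'dim_size': 2,   # static box size
                  'q_var_pos': 1000., 'r_var_pos': 0.1}

    tracker = MultiObjectTracker(dt=1 / 5, model_spec=model_spec,
                                 active_tracks_kwargs={'min_steps_alive': 2})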

Enhancement of Object Tracking Accuracy in Low-Contrast Scenes

Dear motpy Maintainers,

I hope this message finds you well. I am reaching out to discuss a potential enhancement to the motpy library that could significantly improve tracking accuracy in low-contrast environments.

Context:
Upon utilising motpy for a project involving surveillance footage, I observed that the tracking accuracy diminishes notably in scenes where the contrast between the objects and the background is minimal. This is particularly evident during dusk and dawn sequences, where the lack of sufficient lighting conditions leads to poor object detection and subsequent tracking failures.

Suggestion:
I propose the introduction of a contrast enhancement pre-processing step before the detection phase. This could involve dynamic histogram equalisation or adaptive histogram equalisation (CLAHE) to improve the visibility of objects. An additional configuration parameter could allow users to enable or disable this feature based on their specific use case.

Potential Benefits:

  • Improved detection and tracking in challenging lighting conditions.
  • Enhanced robustness of the motpy library across a wider range of scenarios.
  • Greater utility for users dealing with consistently low-contrast footage.

Preliminary Results:
I have conducted preliminary experiments by manually applying CLAHE to the input frames before feeding them into the motpy tracker. The initial results are promising, showing a marked improvement in tracking consistency.
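
A minimal sketch of this kind of pre-processing with OpenCV, applied to the luminance channel of each frame before detection (BGR input frames are assumed, and the clip limit and tile size are illustrative):

    import cv2

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

    def enhance_contrast(frame_bgr):
        """Apply CLAHE to the L channel of a BGR frame and convert back."""
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # frame = enhance_contrast(frame)  # before running the detector and tracker.step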

Conclusion:
I believe this enhancement could be a valuable addition to the motpy library. I am more than willing to contribute to the development of this feature and provide further details on my findings. Your thoughts on this suggestion would be greatly appreciated.

Thank you for your time and consideration.

Best regards,
yihong1120

Feature Request: Include ego motion of camera

Hello, thanks for this awesome library!
Is there any way to include the ego motion of the camera in motpy to make it more stable? We have quite an accurate camera pose estimation and are in a highly dynamic environment; we believe that compensating at the acceleration/jerk level could give a large performance boost.
Thanks

Velocity tracking

Hello, I was wondering if there's a way of extending the existing functionality by adding a velocity parameter to the track outputs. If it is possible, could you guide me on how I should change motpy/motpy/model.Model, or maybe add a new preset to motpy/motpy/model.ModelPreset? I assume the speed of the objects I track is constant.

Thank you in advance!
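
As a workaround that does not touch motpy's model classes at all, velocity can also be estimated outside the tracker by differencing box centres of the same track id across frames. A sketch, assuming only that the returned tracks expose .id and .box:

    import numpy as np

    prev_centers = {}  # track.id -> (frame_index, center)

    def track_velocities(tracks, frame_idx, fps):
        """Estimate per-track velocity in pixels/second from consecutive box centers."""
        velocities = {}
        for track in tracks:  # e.g. tracks = tracker.step(detections=...)
            x1, y1, x2, y2 = track.box
            center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
            if track.id in prev_centers:
                prev_idx, prev_center = prev_centers[track.id]
                dt = (frame_idx - prev_idx) / fps
                velocities[track.id] = (center - prev_center) / dt
            prev_centers[track.id] = (frame_idx, center)
        return velocities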

Code not working on Windows

I downloaded the folder as-is and installed all the requirements given in requirements.txt and requirements-dev.txt.
Still, while running the code it shows an error.

Custom Object detector for another domain

Hello Author,

Could you please advise whether this Bayesian tracker can be used in another domain together with any deep-learning-based object detector?

If yes, could you please let me know what changes would be needed to the tracking algorithm?

Thank You !
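
As far as I can tell, the tracker is detector-agnostic: it only consumes Detection objects, so a detector from any domain can be plugged in by converting its output boxes. A sketch, assuming boxes in [x1, y1, x2, y2] format and a hypothetical my_detector:

    from motpy import Detection, MultiObjectTracker

    tracker = MultiObjectTracker(dt=1 / 30)   # assumed frame rate

    def to_detections(boxes, scores):
        """Wrap arbitrary detector output as motpy Detections."""
        return [Detection(box=box, score=score) for box, score in zip(boxes, scores)]

    # boxes, scores = my_detector(frame)                       # your domain-specific detector
    # tracks = tracker.step(detections=to_detections(boxes, scores))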

Tracker always creates new tracks from detections, never tracks them

Hi,

I have some code that generates blobs from a background subtraction method. Each frame, I have the boxes of the blobs, and in each frame a blob may appear or disappear. The blobs are always white against a black background.

I've been trying to feed the blob rectangles to the tracker each frame; however, it constantly creates new trackers and never actually tracks them. What is the best way to get around this? I can't withhold the detections from the tracker, as there may be a new detection every frame.
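
One thing worth double-checking (a guess, since the box format isn't shown above): motpy expects boxes as [x1, y1, x2, y2], while blob rectangles from e.g. cv2.boundingRect come as (x, y, w, h). Passing the latter unconverted makes the IoU matching fail, so every detection spawns a new track. A sketch of the conversion:

    from motpy import Detection

    def blobs_to_detections(rects):
        """Convert (x, y, w, h) blob rectangles to motpy's [x1, y1, x2, y2] boxes."""
        return [Detection(box=[x, y, x + w, y + h], score=1.0) for (x, y, w, h) in rects]

    # tracker.step(detections=blobs_to_detections(blob_rects))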

Should matching follow tracker.predict()?

        # filter out empty detections
        detections = [det for det in detections if det.box is not None]

        logger.debug('step with %d detections' % len(detections))
        matches = self.matching_fn(self.trackers, detections)
        logger.debug('matched %d pairs' % len(matches))

        # all trackers: predict
        for t in self.trackers:
            t.predict()

Currently matching precedes prediction, which seems to conflict with the usual MOT pipeline (predict first, then associate detections with the predicted tracks).

Retrieve detection score (and metadata) from Track

Hi,

The score of a Detection can be provided as input. How can it be retrieved from the Track objects (i.e. the return value of active_tracks)?

For context, since detected boxes usually have a score/confidence value, it needs to remain associated with the Detection/box when it takes the shape of a Track (i.e. when a uuid gets added).
This associated score has to pass through the tracker, otherwise the information is lost and is not available for any downstream processing.
The intent is to be able to keep the score (or any metadata) associated with the Detection and retrieve it from the output Track.

Please let me know if there is currently a way to do this, and whether it could be considered for implementation.

Example command not working

When the following command is executed

python examples/detect_and_track_in_video.py \
            --video_path=./assets/video.mp4 \
            --detect_labels=['car','truck'] \
            --tracker_min_iou=0.15 \
            --device=cuda

The following error occurs

no matches found: --detect_labels=[car,truck]
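
The "no matches found" message looks like the shell (zsh) expanding the square brackets as a glob before the script ever sees the argument. Quoting the value may be enough; this is a guess, and I haven't checked what format the example script actually expects:

    python examples/detect_and_track_in_video.py \
                --video_path=./assets/video.mp4 \
                --detect_labels="['car','truck']" \
                --tracker_min_iou=0.15 \
                --device=cuda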

How to pass detection meta-information through the tracker?

As already slightly discussed in #9, it can be necessary to pass information from a detection through the tracker so it can be used in a later step. For example, I have additional information per detection that is not relevant for the tracker, but is needed by my later pipeline.

In most of the examples, the detections and active tracks are not "related" to each other and are just drawn onto the image. But I have a case where I would like to know which detection object has become which tracked object. Is there a way to determine this?
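
One workaround I can think of, which is plain post-processing rather than a motpy feature: after each tracker.step, match the returned tracks back to the input detections by IoU and carry the metadata over. A sketch (metadata is assumed to be a list parallel to the detections):

    import numpy as np

    def box_iou(a, b):
        """IoU of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def attach_metadata(tracks, detections, metadata, min_iou=0.5):
        """Map each output track to the metadata of the input detection it overlaps most."""
        out = {}
        for track in tracks:
            ious = [box_iou(track.box, det.box) for det in detections]
            if ious and max(ious) >= min_iou:
                out[track.id] = metadata[int(np.argmax(ious))]
        return out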

Very good library! But how to merge the tracking result into the source list? Always getting `IndexError: list index out of range`

  1. list_detect2 contains detection bboxes, not rotated bboxes
  2. list_detect3 is the detections list; I want to merge track.id into list_detect3
  3. idxs is a dict whose keys are track.id values (uuids) and whose values are ints; it maps each uuid to an integer id

My questions are:

  1. How can I merge the tracking result into list_detect3? My code often crashes.
  2. Sometimes there are more tracking results than entries in the source list. What can I do?
  3. The tracking result is not stable. What can I do?
list_detect2 = [Detection(box=bbox, score=1) for bbox in list_detect2]
active_tracks = tracker.step(detections=list_detect2)

for index, track in enumerate(active_tracks):
    if track.id in idxs:
        if index < lenght_list3:
            list_detect3[index].append(idxs.get(track.id))
        # else:
        #     print("===============================")
        #     print("track [%s] not in list_detect3[%s]" %
        #           (index, lenght_list3))
        #     print("===============================")
    else:
        if index >= lenght_list3:
            counter += 1
            continue
        idxs[track.id] = counter
        list_detect3[index].append(counter)
        counter += 1
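
A sketch of how the uuid-to-integer mapping could be made independent of positions: active_tracks can be longer or shorter than list_detect2/list_detect3 (the tracker keeps predicted tracks and drops unmatched ones), so indexing list_detect3 by a track's position is what triggers the IndexError. Keying everything on track.id avoids that; an explicit track-to-detection association (e.g. by IoU between track.box and the detection boxes) is still needed before merging into list_detect3. Untested on this exact pipeline:

    from itertools import count

    int_ids = {}          # track.id (uuid string) -> stable small integer id
    _counter = count()

    active_tracks = tracker.step(detections=list_detect2)
    for track in active_tracks:
        if track.id not in int_ids:
            int_ids[track.id] = next(_counter)   # assign integers in order of first appearance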


Question: any suggestions on how to speed up?

Very clearly written package and everything works well with little effort, so thanks for that. I have it working on a Raspberry Pi with a Coral TPU for the detection.

With 100 items tracked on a static camera, tracking takes about 80 ms per frame and I can get 5 fps end to end in real time. However, with 200 items being tracked and a moving camera, tracking takes 300 ms per frame and sometimes over 1 second per frame. Do you have any suggestions as to how to speed this up? I wonder if it could be done on the TPU, but I guess that would mean rewriting it in TensorFlow.

Installation fail on raspberry pi 4

Hello,

I'm having trouble installing and testing motpy on my RPi 4B.
It works well in Ubuntu on a PC.

After git clone, I changed 'python' to 'python3' in the Makefile, then ran
$ sudo make install-develop

The error messages are as follows:

--- start of message ---
python3 setup.py develop
running develop
running egg_info
writing motpy.egg-info/PKG-INFO
writing dependency_links to motpy.egg-info/dependency_links.txt
writing requirements to motpy.egg-info/requires.txt
writing top-level names to motpy.egg-info/top_level.txt
reading manifest file 'motpy.egg-info/SOURCES.txt'
writing manifest file 'motpy.egg-info/SOURCES.txt'
running build_ext
Creating /usr/local/lib/python3.8/dist-packages/motpy.egg-link (link to .)
motpy 0.0.8 is already the active version in easy-install.pth

Installed /home/sol/proj/Tracking/motpy
Processing dependencies for motpy==0.0.8
Searching for matplotlib
Reading https://pypi.org/simple/matplotlib/
Downloading https://files.pythonhosted.org/packages/7b/b3/7c48f648bf83f39d4385e0169d1b68218b838e185047f7f613b1cfc57947/matplotlib-3.3.3.tar.gz#sha256=b1b60c6476c4cfe9e5cf8ab0d3127476fd3d5f05de0f343a452badaad0e4bdec
Best match: matplotlib 3.3.3
Processing matplotlib-3.3.3.tar.gz
Writing /tmp/easy_install-1lt1s5ie/matplotlib-3.3.3/setup.cfg
Running matplotlib-3.3.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-1lt1s5ie/matplotlib-3.3.3/egg-dist-tmp-y7m_sz5d
UPDATING build/lib.linux-aarch64-3.8/matplotlib/_version.py
set build/lib.linux-aarch64-3.8/matplotlib/_version.py to '3.3.3'
error: Setup script exited with error: Failed to download FreeType. Please download one of ['https://downloads.sourceforge.net/project/freetype/freetype2/2.6.1/freetype-2.6.1.tar.gz', 'https://download.savannah.gnu.org/releases/freetype/freetype-2.6.1.tar.gz'] and extract it into build/freetype-2.6.1 at the top-level of the source repository.
make: *** [Makefile.3:5: install-develop] Error 1
sol@sol-rpi:~/proj/Tracking/motpy$
--- end of message ---

I downloaded freetype2 and extracted it into motpy/build/, but it is still failing as before.
