FMOX

Benchmarking EfficientTAM on FMO datasets

[Paper] [Code]

In this repo, we extend the Fast Moving Object (FMO) datasets (FMOv2, TbD-3D, TbD and Falling Objects, all available at https://cmp.felk.cvut.cz/fmo/) with additional ground-truth information in JSON format; we call this new metadata FMOX. The FMOX JSON format is designed for seamless compatibility with common machine learning frameworks, making it easier for developers and researchers to use these datasets in their applications. With FMOX, we test a recently proposed foundation model for tracking (EfficientTAM) and show that its performance compares well with the pipelines originally developed for these FMO datasets.

Scripts provided in this repo allow you to download all FMO datasets, create the JSON metadata, and assess object tracking with EfficientTAM using the TIoU metric.

If you are using this repo in your research or applications, please cite our paper related to this work:

@inproceedings{FMOX_AKTAS2025,
  title={Benchmarking EfficientTAM on FMO datasets},
  author={Senem Aktas and Charles Markham and John McDonald and Rozenn Dahyot},
  booktitle={Irish Machine Vision and Image Processing},
  doi={upcoming},
  url={upcoming},
  month={September},
  address={Londonderry, UK},
  year={2025},
}

Installation

Getting started

git clone --branch main https://github.com/CVMLmu/FMOX.git
cd FMOX
# for conda, create the environment using:
conda env create -n fmo_data_env -f environment.yml
conda activate fmo_data_env

Notebooks

The following notebooks can be run in that environment:

- FMOX-code/create-FMOX/create_jsons_main.ipynb (creates the FMOX JSON metadata)
- FMOX-code/use-FMOX/fmox_main.ipynb (evaluation and result figures)

Repo tree structure

│   environment.yml
│   LICENSE
│   README.md
│
└───FMOX-code
    │   download_datasets.py
    │   __init__.py
    │
    ├───create-FMOX
    │   │   combine_all_mask_to_single_img.py
    │   │   create_fmov2_json.py
    │   │   create_jsons_main.ipynb
    │   │   create_tbd_json.py
    │   │   main.py
    │   │   rle_to_seg_mask_img.py
    │   │   tbd_visualize_bboxes.py
    │   │
    │   └───dataset_loader
    │           create_json_via_benchmark_loader.py
    │           loaders_helpers.py
    │           reporters.py
    │
    ├───EfficientTAM-Jsons
    │       efficienttam_All4.json
    │       efficienttam_falling.json
    │       efficientTam_fmov2.json
    │       efficienttam_tbd3d.json
    │       efficienttam_tdb.json
    │
    ├───FMOX-Jsons
    │       FMOX_All4.json
    │       FMOX_fall_and_tbd3d.json
    │       FMOX_fmov2.json
    │       FMOX_tbd.json
    │       FMOX_tbd_whole_sequence.json
    │
    └───use-FMOX
        │   access_json_bboxes.py
        │   calciou.py
        │   csv_to_graphics.py
        │   efficientam_evaluation.py
        │   EfficientTAM_averageTIoU.csv
        │   FMOX_all4_json_to_CSV.py
        │   FMOX_All4_statistics.csv
        │   fmox_main.ipynb
        │   fmox_main.py
        │   size_label_bar.png
        │   size_label_count.py
        │   vis_trajectory.py
        │   __init__.py
        │
        └───efficientTAM_traj_vis
                efficientTAM_traj_Falling_Object_v_box_GTgamma.jpg
                (...)

Additional Information

The following results are shared in this repo (created with fmox_main.ipynb):

FMOX Object Size Categories

The sizes of the objects in the public FMO datasets were calculated and "object size levels" were assigned. A total of five distinct levels are defined as below:

| Extremely Tiny | Tiny | Small | Medium | Large |
| --- | --- | --- | --- | --- |
| [1 × 1, 8 × 8) | [8 × 8, 16 × 16) | [16 × 16, 32 × 32) | [32 × 32, 96 × 96) | [96 × 96, ∞) |

Table: FMOX object size categories.
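For reference, a minimal Python sketch of how such a level could be assigned from an annotation's object_wh is given below. It assumes the thresholds are applied to the bounding-box area w × h (COCO-style buckets); the exact rule used to generate FMOX may differ.

```python
# Hypothetical helper: maps an object's (width, height) to one of the five
# FMOX size levels, assuming the thresholds compare the area w * h against
# the corners listed in the table above.
def size_category(width: int, height: int) -> str:
    area = width * height
    if area < 8 * 8:
        return "extremely_tiny"
    elif area < 16 * 16:
        return "tiny"
    elif area < 32 * 32:
        return "small"
    elif area < 96 * 96:
        return "medium"
    else:
        return "large"

# Example: the annotation shown below has object_wh = [84, 74] -> "medium".
print(size_category(84, 74))
```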

Structure of FMOX

{
  "databases": [
    {
      "dataset_name": "Falling_Object",
      "version": "1.0",
      "description": "Falling_Object annotations.",
      "sub_datasets": [
        {
          "subdb_name": "v_box_GTgamma",
          "images": [
            {
              "img_index": 1,
              "image_file_name": "00000027.png",
              "annotations": [
                {
                  "bbox_xyxy": [161, 259, 245, 333],
                  "object_wh": [84, 74],
                  "size_category": "medium"
                }
              ]
            },
            {
              "img_index": 2,
              "image_file_name": "00000028.png",
              "annotations": ["bbox_xyxy": [.....], "object_wh": [.....], "size_category": "...." ]
            }
          ]
        }
      ]
    }
  ]
}
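The FMOX JSON files can be read with the standard library alone. A minimal sketch (assuming the repo layout shown in the tree above, with FMOX_All4.json under FMOX-code/FMOX-Jsons/) that walks the hierarchy and prints every bounding box:

```python
import json

# Walk the FMOX hierarchy (databases -> sub_datasets -> images -> annotations)
# and print one line per annotation. The path assumes the repo tree shown above.
with open("FMOX-code/FMOX-Jsons/FMOX_All4.json") as f:
    fmox = json.load(f)

for db in fmox["databases"]:
    for sub in db["sub_datasets"]:
        for image in sub["images"]:
            for ann in image["annotations"]:
                print(db["dataset_name"], sub["subdb_name"],
                      image["image_file_name"],
                      ann["bbox_xyxy"], ann["size_category"])
```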

Average TIoU Performance Comparison

This table compares the average TIoU $(\uparrow)$ reported by various studies on the FMO datasets. EfficientTAM was evaluated using the FMOX JSON annotations. The best result per dataset is marked with $^*$ and the second-best with $^{**}$.

| Datasets | DeFMO [Rozumnyi et al., 2021] | FMODetect [Rozumnyi et al., 2021] | TbD [Kotera et al., 2019] | TbD-3D [Rozumnyi et al., 2020] | EfficientTAM [Xiong et al., 2024] |
| --- | --- | --- | --- | --- | --- |
| Falling Object | 0.684$^{**}$ | N/A | 0.539 | 0.539 | 0.7093$^*$ |
| TbD | 0.550$^{**}$ | (a) 0.519, (b) 0.715$^*$ | 0.542 | 0.542 | 0.4546 |
| TbD-3D | 0.879$^*$ | N/A | 0.598 | 0.598 | 0.8604$^{**}$ |

(a) Real-time, with trajectories estimated by the network. (b) With the proposed deblurring. N/A indicates that no result is available.
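For intuition, the sketch below computes a per-frame bounding-box IoU averaged over a sequence, which is the flavour of score reported above. It is an illustration only and is not necessarily identical to the repo's calciou.py / efficientam_evaluation.py implementation.

```python
# Illustrative sketch (not the repo's calciou.py): IoU of two axis-aligned
# boxes in [x1, y1, x2, y2] format, averaged over a sequence of frames.
def bbox_iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def average_iou(pred_boxes, gt_boxes):
    """Mean per-frame IoU between predicted and ground-truth boxes."""
    ious = [bbox_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(ious) / len(ious) if ious else 0.0
```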

Acknowledgments

This GitHub repo was created and developed by Senem Aktas, and tested by Rozenn Dahyot.
This research was supported by funding through the Maynooth University Hume Doctoral Awards.
We would like to thank the authors of the FMO datasets for making their datasets available.

License

This code is available under the MIT License.