# MUSES

Code and models for MUSES: https://github.com/xlliu7/MUSES

This repo holds the code and the models for MUSES, introduced in the paper *Multi-shot Temporal Event Localization: a Benchmark* (CVPR 2021).

MUSES is a large-scale video dataset designed to spur research on a new task called multi-shot temporal event localization. We present a baseline approach (denoted MUSES-Net) that achieves state-of-the-art performance on MUSES. It also reports an mAP of 56.9% on THUMOS14 at IoU=0.5.

The code largely borrows from SSN and P-GCN. Thanks for their great work! Find more resources (e.g. the annotation file and source videos) on our project page.

## Updates

[2022.3.19] Added support for the MUSES dataset. The proposals, models, and source videos of the MUSES dataset are released. Stay tuned for MUSES v2, which includes videos from more countries.

## Usage Guide

### Prerequisites

The code is based on PyTorch. The following environment is required.
Other minor Python modules can be installed by running
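For example, assuming the repo ships a `requirements.txt` (the file name is an assumption; check the repo root for the actual dependency list):

```shell
# Install the remaining Python dependencies (file name assumed)
pip install -r requirements.txt
```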
The code relies on CUDA extensions. Build them with the following command:
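A common pattern for building such extensions looks like the following; the directory and script names here are assumptions for illustration, not this repo's verified layout:

```shell
# Build the CUDA ops in-place; adjust the path to the repo's actual extension directory
cd ops && python setup.py build_ext --inplace
```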
After installing all dependencies, you are ready to prepare the data.

### Data Preparation

We support experimenting with THUMOS14 and MUSES. The video features, the proposals, and the reference models are provided on OneDrive.

#### Features and Proposals
You can also specify the paths to the features/proposals in the config files.

#### Reference Models

Put the reference models in the location expected by the testing scripts.
### Testing Trained Models

You can test the reference models by running a single script.
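A sketch of the invocation, with an assumed script name and a placeholder `DATASET` argument (`muses` or `thumos14`):

```shell
# Hypothetical script name; see the repo's scripts/ directory for the real one
bash scripts/test_reference_models.sh DATASET
```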
Using these models, you should get the following performance.

#### MUSES
Note: We re-trained the network on MUSES, and the performance is higher than that reported in the paper.

#### THUMOS14
The testing process consists of two steps, detailed below.
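The two steps can be sketched as follows; the script names and argument order are assumptions for illustration:

```shell
# Step 1: run inference and dump raw detection scores to RESULT_PICKLE (names assumed)
python test_net.py CFG_PATH MODEL_PATH RESULT_PICKLE

# Step 2: evaluate the saved scores against the ground truth (names assumed)
python eval_detection_results.py CFG_PATH RESULT_PICKLE
```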
Here, `RESULT_PICKLE` is the path where the detection scores are saved, and `CFG_PATH` is the path to the config file.
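As a rough illustration of working with such a pickle, the snippet below writes and reads back a per-video score file; the schema (keys, array shapes) is an assumption, not the repo's actual format:

```python
import pickle

import numpy as np

# Hypothetical schema: video id -> proposal boundaries and per-class scores
results = {
    "video_test_0000131": {
        "proposals": np.array([[12.5, 30.1], [45.0, 60.2]]),  # start/end in seconds
        "scores": np.full((2, 25), 0.5),                      # 2 proposals x 25 classes
    }
}

with open("result.pkl", "wb") as f:
    pickle.dump(results, f)

with open("result.pkl", "rb") as f:
    loaded = pickle.load(f)

print(sorted(loaded))  # the video ids stored in the file
```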
On THUMOS14, we need to fuse the detection scores from the RGB and Flow modalities. This is done with a separate score-fusion step.
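Two-stream fusion is, at its core, a weighted combination of the per-proposal class scores from the two modalities. A minimal sketch with NumPy (the function and the equal 50/50 weighting are illustrative assumptions, not this repo's exact setting):

```python
import numpy as np

def fuse_scores(rgb_scores, flow_scores, flow_weight=0.5):
    """Weighted average of per-proposal class scores from two modalities."""
    return (1.0 - flow_weight) * rgb_scores + flow_weight * flow_scores

# Toy example: 3 proposals, 4 classes
rgb = np.array([[0.9, 0.1, 0.0, 0.0],
                [0.2, 0.7, 0.1, 0.0],
                [0.1, 0.1, 0.4, 0.4]])
flow = np.array([[0.7, 0.2, 0.1, 0.0],
                 [0.1, 0.8, 0.1, 0.0],
                 [0.0, 0.2, 0.5, 0.3]])

fused = fuse_scores(rgb, flow, flow_weight=0.5)
print(fused[0])  # element-wise average of the first proposal's scores
```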
### Training

Train your own models with the training script.
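A hypothetical invocation (the script and flag names are assumptions; consult the repo for the actual entry point):

```shell
# Train a model from a config file, saving checkpoints under SNAPSHOT_PREF (names assumed)
python train_net.py CFG_PATH --snapshot_pref SNAPSHOT_PREF
```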
`SNAPSHOT_PREF`: the path where trained models and logs are saved.

We provide a script that runs all steps, including training, testing, and two-stream fusion.
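A sketch of running the all-in-one script (the script name is an assumption; check the repo's scripts/ directory):

```shell
# Run training, testing, and two-stream fusion end-to-end (name assumed)
bash scripts/do_all.sh DATASET
```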
Note: The results may vary across runs and may differ from those of the reference models. We encourage using the average mAP as the primary metric; it is more stable than mAP@0.5.

## Citation

Please cite the following paper if you find MUSES useful in your research.
## Related Projects
## Contact

For questions and suggestions, file an issue or contact Xiaolong Liu at `liuxl at hust dot edu dot cn`.