Open-source project name:

bellos1203/STPN

Open-source project URL:

https://github.com/bellos1203/STPN

Open-source language:

Python 98.4%

Open-source project introduction:

STPN - Weakly Supervised Action Localization by Sparse Temporal Pooling Network (reproduced)

Overview

This repository contains reproduced code for the paper "Weakly Supervised Action Localization by Sparse Temporal Pooling Network" by Phuc Nguyen, Ting Liu, Gautam Prasad, and Bohyung Han, CVPR 2018.
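As background, the core idea of the paper is to pool per-segment features with learned class-agnostic attention weights and to encourage those weights to be sparse via an L1 term. The NumPy mock-up below only illustrates that pooling step; the array shapes, weight values, and variable names are assumptions for the example, not the TensorFlow implementation in this repository.

```python
import numpy as np

# Illustrative mock-up of attention-weighted temporal pooling with a sparsity
# term (assumed shapes and names; not this repository's TensorFlow code).
T, D, num_classes = 400, 1024, 20          # segments, feature dim, THUMOS14 action classes
rng = np.random.RandomState(0)

features = rng.randn(T, D).astype(np.float32)                 # one feature vector per segment
w_att = (0.01 * rng.randn(D, 1)).astype(np.float32)           # attention projection (assumed)
w_cls = (0.01 * rng.randn(D, num_classes)).astype(np.float32) # classifier weights (assumed)

attention = 1.0 / (1.0 + np.exp(-(features @ w_att)))         # (T, 1) sigmoid attention weights
pooled = (attention * features).sum(axis=0) / (attention.sum() + 1e-8)  # weighted average over time
class_scores = pooled @ w_cls                                  # video-level class scores
sparsity_loss = np.abs(attention).mean()                       # L1 term encouraging sparse attention
```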

Usage Guide

  • Hardware: TITAN X GPU

0. Requirements

  • Python3
  • Tensorflow 1.6.0
  • numpy 1.15.0
  • OpenCV 3.4.2
  • Sonnet (to extract features from the I3D model)
  • Pandas (for evaluation)
  • SciPy 1.1.0

1. Preprocessing

  1. Subsample each video at a rate of 10 frames per second.
  2. After sampling the video frames, rescale them so that the smallest dimension of each frame equals 256 while preserving the aspect ratio.
  3. Compute the TV-L1 optical flow.
  4. Save the RGB frames to train_data/rgb and the flow frames to train_data/flows, named vid_num/{:06d}.png (test_data/rgb and test_data/flows for the test data). I simply number the videos 1, 2, 3, ..., 200 for convenience. A minimal sketch covering steps 1 to 4 is shown after this list.
  5. Extract the feature vector of each video using the code in the "feature_extraction" folder. The extracted features are saved in [train/test]_data/[rgb/flow]_features. Since I use a TITAN X GPU with 12 GB of memory, I extract features from 16*100 frames at a time, i.e. 100 segments per pass. If your GPU has less memory, extract the features with a reduced number of segments. Please refer to extract_feature.sh in the folder.
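The sketch below covers steps 1 to 4 (subsampling, rescaling, TV-L1 flow, saving frames). It assumes OpenCV 3.4.2, where TV-L1 is exposed as cv2.DualTVL1OpticalFlow_create (the name differs across OpenCV builds); the preprocess_video helper, the flow-to-image mapping, and the exact directory handling are illustrative assumptions, not the repository's exact scheme.

```python
import os
import cv2

def preprocess_video(video_path, vid_num, out_root="train_data", target_fps=10, short_side=256):
    """Illustrative sketch: subsample, rescale, compute TV-L1 flow, and save frames."""
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(src_fps / target_fps)), 1)      # subsample to roughly 10 fps

    rgb_dir = os.path.join(out_root, "rgb", str(vid_num))
    flow_dir = os.path.join(out_root, "flows", str(vid_num))
    os.makedirs(rgb_dir, exist_ok=True)
    os.makedirs(flow_dir, exist_ok=True)

    tvl1 = cv2.DualTVL1OpticalFlow_create()              # TV-L1 optical flow (OpenCV 3.4.x name)
    prev_gray, idx, saved = None, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            h, w = frame.shape[:2]                        # rescale: shortest side -> 256
            scale = short_side / min(h, w)
            frame = cv2.resize(frame, (int(round(w * scale)), int(round(h * scale))))
            cv2.imwrite(os.path.join(rgb_dir, "{:06d}.png".format(saved)), frame)

            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                flow = tvl1.calc(prev_gray, gray, None)
                # Clip flow to [-20, 20] and map to [0, 255] so it can be stored as an image
                # (an assumed packing; the y channel is duplicated to fill three channels).
                flow = ((flow.clip(-20, 20) + 20) * (255.0 / 40.0)).astype("uint8")
                cv2.imwrite(os.path.join(flow_dir, "{:06d}.png".format(saved)),
                            cv2.merge([flow[..., 0], flow[..., 1], flow[..., 1]]))
            prev_gray = gray
            saved += 1
        idx += 1
    cap.release()
```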

2. Train the Model

  • Run the train.sh script.
  • Please refer to train.sh for more details.

3. Test and Extract the Result

  • Run the test.sh script.
  • Please refer to test.sh for more details.
  • Note that I excluded two falsely annotated videos, 270 and 1496, following the SSN paper.

4. Evaluate

  • Run the eval.sh script.
  • Please refer to eval.sh for more details.
  • I used the evaluation code from the official ActivityNet repository, as the authors did (see the tIoU illustration after this list).
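For reference, the numbers in the table below are mAP values at several temporal IoU (tIoU) thresholds. The snippet only illustrates how the tIoU between a predicted segment and a ground-truth segment is defined; the function name is made up for the example, and the reported results come from the official ActivityNet evaluation code invoked by eval.sh.

```python
def temporal_iou(pred, gt):
    # pred and gt are (start, end) times in seconds; returns intersection over union.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Example: a prediction overlapping half of a ground-truth action instance.
print(temporal_iou((10.0, 20.0), (15.0, 25.0)))  # 5 / 15 = 0.333...
```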

Reproduced Result

With the provided sample checkpoint (files in code/ckpt/ckpt001), I obtained the following results on the THUMOS14 test set, which are similar to those reported in the paper.

| tIoU         | 0.1  | 0.2  | 0.3  | 0.4  | 0.5  | 0.6  | 0.7 | 0.8 | 0.9 | mAP  |
|--------------|------|------|------|------|------|------|-----|-----|-----|------|
| STPN (paper) | 52.0 | 44.7 | 35.5 | 25.8 | 16.9 | 9.9  | 4.3 | 1.2 | 0.1 | 21.2 |
| Reproduced   | 52.1 | 44.2 | 34.7 | 26.1 | 17.7 | 10.1 | 4.9 | 1.3 | 0.1 | 21.3 |

Please note that the best result appears around 22k ~ 25k, and that the performance can sometimes differ slightly from the numbers above.

Comments

If you have any questions or comments, please contact me at bellos1203@snu.ac.kr.

Acknowledgements

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2017-0-01780, The technology development for event recognition/relational reasoning and learning knowledge based system for video understanding).

License

Apache-2.0



