Rank | Team Name | Team Members | Classification Accuracy (%) | Affiliation
1 | SEARCH | Jiaying Liu, Yueyu Hu, Yanghao Li, Chunhui Liu | 82.3650 | Peking University
2 | YANQIAN | Hongsong Wang, Liang Wang | 80.5656 | Institute of Automation, CAS
3 | DeepAttention | Cuiling Lan, Sijie Song, Pengfei Zhang, Wenjun Zeng | 79.8458 | Microsoft Research Asia
4 | TJU | Chuankun Li, Pichao Wang, Yonghong Hou, Wanqing Li | 78.1491 | Tianjin University and University of Wollongong
5 | Knights | Chen Chen, Mengyuan Liu, Hong Liu | 75.9383 | University of Central Florida and Peking University
6 | ZJU-DCD | Songyang Zhang, Jun Xiao | 72.2365 | Zhejiang University
7 | RZ | Rui Zhao | 72.1851 | German Aerospace Center and Technical University of Munich
8 | LmbFreiburg | Mohammadreza Zolfaghari | 67.1465 | University of Freiburg, Computer Vision Group
9 | DISA | Jan Sedmidubsky, Pavel Zezula, Petr Elias | 63.4961 | Masaryk University


Action Recognition Challenge

3D action recognition – recognizing human actions from 3D skeleton data – has recently become popular thanks to the succinctness, robustness, and view-invariance of the skeleton representation.

Most of the published approaches in this domain have been evaluated on small datasets, owing to the lack of a large-scale one.

NTU RGB+D is a recently released large-scale dataset for 3D human action recognition, which enables the development of more data-hungry algorithms for this problem. For more details about the dataset, please refer to the links below:

In this challenge, we invite researchers in this domain to evaluate their 3D action recognition algorithms and systems on the NTU RGB+D dataset, together with some newly collected data prepared for this challenge.

The released dataset contains training and test sets for both the cross-subject and cross-view evaluation criteria. The existing NTU RGB+D dataset, with its labels, is provided as the training set; a new set of skeleton data will be released without labels before the start of the submission period, to be used as the test set.

Top-performing teams will be invited to the ACCV 2016 Workshop “Large Scale 3D Human Activity Analysis Challenge in Depth Videos” to present their work.

Please note that the evaluation data will include only skeleton files; this challenge is therefore not suitable for depth-based or RGB+D-based methods.



Each team is asked to register prior to the submission period. Registration is now open; to register, please submit the Registration Form via email to

Accessing the Data

You can download the currently available dataset here. More information about the dataset can be found in the paper and on the GitHub page of the dataset.

For this challenge, you only need the 3D skeleton data (body joint information, 5.8 GB).
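The .skeleton files are plain text. As an illustration, a minimal parser might look like the following sketch; the token layout (a body-info line of 10 fields, then 25 joint lines of 12 values each) is assumed from the dataset's documentation, so please verify it against the official GitHub page before relying on it.

```python
def parse_skeleton_file(path):
    """Parse an NTU RGB+D .skeleton file into a list of frames,
    each frame a list of bodies, each body a list of (x, y, z) joints.
    Field layout assumed from the dataset's documentation."""
    with open(path) as f:
        tokens = iter(f.read().split())
    frames = []
    num_frames = int(next(tokens))
    for _ in range(num_frames):
        bodies = []
        num_bodies = int(next(tokens))
        for _ in range(num_bodies):
            # body-info line: body ID plus 9 tracking/lean fields (skipped here)
            for _ in range(10):
                next(tokens)
            num_joints = int(next(tokens))  # 25 for Kinect v2 skeletons
            joints = []
            for _ in range(num_joints):
                # 12 values per joint: 3D position, depth/color-map
                # coordinates, orientation quaternion, tracking state
                vals = [float(next(tokens)) for _ in range(12)]
                joints.append(tuple(vals[:3]))  # keep only the 3D position
            bodies.append(joints)
        frames.append(bodies)
    return frames
```

The parser tokenizes the whole file at once, which keeps the frame/body/joint nesting simple to express.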

For experimental evaluations, participants are advised to use the cross-view and cross-subject splits defined in the paper.
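Since sample filenames encode the setup, camera, performer, replication, and action IDs (e.g. S001C002P003R002A013), the splits can be derived directly from the filename. The sketch below uses the training subject and camera IDs as listed in the NTU RGB+D paper; please double-check them against the paper before use.

```python
import re

# Training subject IDs (cross-subject split) and training camera IDs
# (cross-view split), as defined in the NTU RGB+D paper.
XSUB_TRAIN_SUBJECTS = {1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19,
                       25, 27, 28, 31, 34, 35, 38}
XVIEW_TRAIN_CAMERAS = {2, 3}

def split_of(filename):
    """Return (cross_subject_split, cross_view_split) for a sample
    named like 'S001C002P003R002A013.skeleton'."""
    m = re.match(r"S(\d{3})C(\d{3})P(\d{3})R(\d{3})A(\d{3})", filename)
    setup, camera, performer, replication, action = map(int, m.groups())
    xsub = "train" if performer in XSUB_TRAIN_SUBJECTS else "test"
    xview = "train" if camera in XVIEW_TRAIN_CAMERAS else "test"
    return xsub, xview
```

Note that a sample can belong to the training set under one criterion and the test set under the other, since the two splits are independent.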

The actual evaluation of the challenge is performed on the new unlabelled test data, which is now available here.



Each team will be allowed to submit up to three times.

At submission time, teams should provide a name for their algorithm or, preferably, the title of the submitted workshop paper describing its details.

Each submission should be a .txt file containing the predicted labels (1 to 60) of the test samples, separated by newline characters. Please send the .txt file to during the evaluation period.
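A submission file in this format can be produced with a few lines of code; this is only a sketch of the one-label-per-line layout described above, and the function name is our own.

```python
def write_submission(predictions, path="submission.txt"):
    """Write one predicted action label (1 to 60) per line, in the same
    order as the released test samples."""
    assert all(1 <= p <= 60 for p in predictions), "labels must be in 1..60"
    with open(path, "w") as f:
        f.write("\n".join(str(p) for p in predictions))
```

Keep the predictions in the same order as the test samples, since the file carries no sample identifiers.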

For this challenge, we will release the skeleton files of the new test set before the evaluation period. The evaluation period will last two weeks (see schedule).

The recognition performance of the submitted results will be communicated to the team members via email.



The evaluation is both cross-subject and cross-view; in other words, the new test data is collected from new subjects and from unconstrained viewpoints.

During the evaluation phase, we will compare the submitted labels with the ground-truth labels and report the classification accuracy as the percentage of samples that are correctly classified.
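The metric described above amounts to a simple element-wise comparison; the following sketch shows the computation (the function name is ours):

```python
def classification_accuracy(predicted, ground_truth):
    """Percentage of samples whose predicted label matches the ground truth."""
    assert len(predicted) == len(ground_truth), "label lists must align"
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return 100.0 * correct / len(ground_truth)
```

For example, three correct labels out of four yields an accuracy of 75.0%.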