Rank | Team Name     | Team Members                                          | Classification Accuracy (%) | Affiliation
-----|---------------|-------------------------------------------------------|-----------------------------|------------
1    | SEARCH        | Jiaying Liu, Yueyu Hu, Yanghao Li, Chunhui Liu        | 82.3650                     | Peking University
2    | YANQIAN       | Hongsong Wang, Liang Wang                             | 80.5656                     | Institute of Automation, CAS
3    | DeepAttention | Cuiling Lan, Sijie Song, Pengfei Zhang, Wenjun Zeng   | 79.8458                     | Microsoft Research Asia
4    | TJU           | Chuankun Li, Pichao Wang, Yonghong Hou, Wanqing Li    | 78.1491                     | Tianjin University and University of Wollongong
5    | Knights       | Chen Chen, Mengyuan Liu, Hong Liu                     | 75.9383                     | University of Central Florida and Peking University
6    | ZJU-DCD       | Songyang Zhang, Jun Xiao                              | 72.2365                     | Zhejiang University
7    | RZ            | Rui Zhao                                              | 72.1851                     | German Aerospace Center and Technical University of Munich
8    | LmbFreiburg   | Mohammadreza Zolfaghari                               | 67.1465                     | University of Freiburg, Computer Vision Group
9    | DISA          | Jan Sedmidubsky, Pavel Zezula, Petr Elias             | 63.4961                     | Masaryk University

Action Recognition Challenge

3D action recognition – recognizing human actions based on 3D skeleton data – has recently become popular due to the succinctness, robustness, and view-invariance of the skeletal representation.

Most of the published approaches in this domain have been evaluated on small datasets due to the lack of a large-scale benchmark.

NTU RGB+D is a recently released large-scale dataset for 3D human action recognition, which enables the development of more data-hungry algorithms for this problem. For more details about this dataset, please refer to the links below:

In this challenge, we invite researchers in this domain to evaluate their 3D action recognition algorithms and systems on the NTU RGB+D dataset and on some newly collected data prepared for this challenge.

The released dataset contains training and testing sets for the cross-subject and cross-view evaluation criteria. The existing NTU RGB+D dataset, with labels, is provided as the training set, and a new set of skeletal data will be released without labels before the start of the submission period to serve as the test set.

Top-performing teams will be invited to the ACCV 2016 Workshop "Large Scale 3D Human Activity Analysis Challenge in Depth Videos" to present their work.

Please note that the evaluation data will include only skeletal files; this challenge is not suitable for depth-based or RGB+D-based methods.


Each team will be asked to register prior to the submission period. Registration is now open; to register, please submit the Registration Form via email to

Accessing the Data

You can download the currently available dataset here. More information about the dataset can be found in the paper and on the GitHub page of the dataset.

For this challenge, you only need the 3D skeleton data (body joint information, 5.8 GB).

For experimental evaluation, participants are advised to use the cross-view and cross-subject splits defined in the paper.
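As an illustration, the splits can be derived from the sample file names alone. This is a minimal sketch assuming the standard NTU RGB+D naming scheme SsssCcccPpppRrrrAaaa (setup, camera, performer, replication, action; e.g. "S001C002P003R002A013"); the training subject and camera IDs below follow the splits defined in the NTU RGB+D paper.

```python
# Subjects whose samples form the training set in the cross-subject protocol
# (IDs as defined in the NTU RGB+D paper).
CS_TRAIN_SUBJECTS = {1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17,
                     18, 19, 25, 27, 28, 31, 34, 35, 38}
# Cameras whose samples form the training set in the cross-view protocol.
CV_TRAIN_CAMERAS = {2, 3}

def parse_sample_name(name):
    """Extract the S, C, P, R, A numeric IDs from a sample name
    such as 'S001C002P003R002A013'."""
    return {field: int(name[i + 1:i + 4])
            for field, i in zip("SCPRA", range(0, 20, 4))}

def split_of(name, protocol="cross-subject"):
    """Return 'train' or 'test' for a sample under the given protocol."""
    ids = parse_sample_name(name)
    if protocol == "cross-subject":
        return "train" if ids["P"] in CS_TRAIN_SUBJECTS else "test"
    return "train" if ids["C"] in CV_TRAIN_CAMERAS else "test"
```

For example, sample S001C002P003R002A013 falls in the cross-view training set (camera 2) but in the cross-subject test set (performer 3 is not among the training subjects).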

The actual evaluation of the challenge is performed on the new unlabelled test data, which is now available here.


Each team will be allowed to submit three times.

At submission time, each team should provide a name for the algorithm or, preferably, the title of the submitted workshop paper describing its details.

Each submission should be a .txt file containing the labels (1 to 60) of the samples, separated by newline characters. Please send the .txt file to during the evaluation period.
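A minimal sketch of producing such a submission file: one predicted label (1 to 60) per line, in the same order as the test samples. The `predictions` list here is a hypothetical stand-in for your classifier's output.

```python
# Hypothetical predicted labels for four test samples, in test-set order.
predictions = [23, 7, 60, 1]

# Write one label per line, separated by newline characters.
with open("submission.txt", "w") as f:
    f.write("\n".join(str(label) for label in predictions))
```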

For this challenge, we will release the skeleton files of the new test set before the evaluation period. The evaluation period will last two weeks (see schedule).

The recognition performance of the submitted results will be reported to the team members via email.


The evaluation is both cross-subject and cross-view; in other words, the new testing data are collected from new subjects and from free viewpoints.

In the evaluation phase, we will compare the submitted labels with the ground-truth labels and report the classification accuracy as the percentage of samples that are correctly classified.
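The scoring described above amounts to a simple per-sample comparison; a sketch, assuming predicted and ground-truth labels are given in the same sample order:

```python
def classification_accuracy(predicted, ground_truth):
    """Percentage of samples whose predicted label matches the ground truth."""
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return 100.0 * correct / len(ground_truth)
```

For instance, three correct predictions out of four samples yields an accuracy of 75.0%.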