This archive contains the RWTH-PHOENIX-Weather 2014 multisigner continuous sign language recognition corpus. It is released under a Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license (see attachment).

If you use this data in your research, please cite:

O. Koller, J. Forster, and H. Ney. Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers. Computer Vision and Image Understanding, volume 141, pages 108-125, December 2015.

If you use the automatic annotations, please additionally cite:

O. Koller, S. Zargaran, and H. Ney. Re-Sign: Re-Aligned End-to-End Sequence Modeling with Deep Recurrent CNN-HMMs. In CVPR 2017, Honolulu, Hawaii, USA.

The multisigner corpus contains 9 signers and was recorded from the broadcast news channel.

phoenix-2014-multisigner
├── annotations
│   ├── automatic -> contains the CNN-LSTM-HMM hybrid alignments to train a new system using frame labels
│   └── manual -> contains the corpus annotation files
├── evaluation -> contains an evaluation script; make sure to have a compiled version of the NIST sclite tools in your PATH. Call: ./evaluatePhoenix2014.sh example-hypothesis-dev.ctm dev (see the CTM sketch below)
├── features
│   ├── fullFrame-210x260px -> full frames at a resolution of 210x260 pixels; they are distorted due to transmission channel particularities. To undistort, stretch the images to 210x300 pixels (see the undistortion sketch below).
│   │   ├── dev
│   │   ├── test
│   │   └── train
│   └── trackedRightHand-92x132px -> cropped rectangles containing the right (dominant) hand of the signers. To undistort, scale them to a square size (see the undistortion sketch below).
│       ├── dev
│       ├── test
│       └── train
└── models -> Caffe models achieving 27.1% / 26.8% WER on the dev/test partitions of this corpus; we also provide our language model.
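
For reference, the evaluation script consumes hypotheses in the NIST sclite CTM format (one recognized gloss per line: segment name, channel, start time, duration, gloss). Below is a minimal Python sketch for writing such a file; the segment name, timings, and the helper name write_ctm are placeholders and not part of the corpus or its tools.

# Minimal sketch: write recognition hypotheses to a CTM file for evaluatePhoenix2014.sh.
# Segment names must match those used in the corpus annotation files;
# the start times and durations below are placeholders.
def write_ctm(hypotheses, ctm_path):
    """hypotheses: dict mapping segment name -> list of recognized glosses (in order)."""
    with open(ctm_path, "w") as f:
        for segment, glosses in sorted(hypotheses.items()):
            for i, gloss in enumerate(glosses):
                # CTM fields: <segment> <channel> <start> <duration> <word>
                f.write("%s 1 %.2f %.2f %s\n" % (segment, i * 0.01, 0.01, gloss))

# Usage (placeholder segment name and glosses):
# write_ctm({"dev-segment-001": ["REGEN", "MORGEN"]}, "my-hypothesis-dev.ctm")
# ./evaluatePhoenix2014.sh my-hypothesis-dev.ctm dev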
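
A minimal sketch of the undistortion described above, assuming the Pillow library; the file paths are placeholders, and the 132x132 target for the hand crops is one possible square size, not prescribed by the corpus.

# Sketch: undistort frames as described above (requires Pillow).
from PIL import Image

def undistort_full_frame(path_in, path_out):
    # Full frames are stored at 210x260 pixels; stretch them to 210x300 to undistort.
    Image.open(path_in).resize((210, 300), Image.BILINEAR).save(path_out)

def undistort_hand_crop(path_in, path_out, size=132):
    # Right-hand crops are stored at 92x132 pixels; scale them to a square size.
    # size=132 is an assumed choice.
    Image.open(path_in).resize((size, size), Image.BILINEAR).save(path_out)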