Chinese Dialogue-Level Dependency Parsing


Leaderboard

Coming soon!

Introduction

We propose the Dialogue-Level Dependency Parsing (DiaDP) task for Chinese to promote research in dialogue understanding. This task focuses on fine-grained semantic structure analysis in multi-turn dialogues, challenging models to generate accurate dependency structures that capture semantic relationships between utterances and their components across dialogue turns. To comprehensively evaluate model performance, the task incorporates both inner parsing, which identifies dependencies within individual Elementary Discourse Units (EDUs), and inter parsing, which captures dependencies spanning multiple EDUs. To support this task, we present a high-quality Chinese Dialogue Dependency Parsing dataset, featuring manually annotated dialogues. This dataset includes a test set and a small training set containing 50 dialogues to help participants understand the data structure and format.

Dataset

The dataset is built specifically for the Dialogue-Level Dependency Parsing (DiaDP) task. It consists of high-quality, manually annotated Chinese dialogues designed to capture both inner and inter dependency relationships. Each dialogue is represented in JSON format as a set of turns, utterances, and their dependency relations, with rich annotations covering a wide variety of syntactic and semantic relations.
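The released files define the exact schema; purely as an illustration of the kind of structure involved, a dialogue record might look like the sketch below. Every field name and relation label here is hypothetical, not the official format.

```python
import json

# Hypothetical illustration of a dialogue-level dependency record.
# All field and relation names below are invented for illustration only;
# consult the released training set for the actual schema.
example_dialogue = {
    "dialog_id": "demo-001",
    "turns": [
        {"speaker": "A", "utterance": "你 今天 去 学校 吗 ？"},
        {"speaker": "B", "utterance": "去 。"},
    ],
    # Inner arcs stay within a single Elementary Discourse Unit (EDU);
    # inter arcs cross EDU (and here, turn) boundaries.
    "inner_arcs": [
        {"head": "去", "dep": "你", "rel": "nsubj"},
        {"head": "去", "dep": "学校", "rel": "obj"},
    ],
    "inter_arcs": [
        # Direction and label are purely illustrative: the answer's root
        # attaching to the question's predicate.
        {"head": "去 (turn A)", "dep": "去 (turn B)", "rel": "reply"},
    ],
}

print(json.dumps(example_dialogue, ensure_ascii=False, indent=2))
```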

The dataset is available for download from the GitHub Repository.

Metrics

Participants are required to construct models that can accurately parse dialogue-level dependency structures. The evaluation of the Dialogue-Level Dependency Parsing task is based on two key metrics:
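For reference, dependency parsing is conventionally evaluated with the unlabeled attachment score (UAS) and the labeled attachment score (LAS). The sketch below shows a minimal scorer in that style, assuming these conventional definitions apply over both inner- and inter-EDU arcs; the function name and input format are illustrative, not the official evaluation script.

```python
# Minimal UAS/LAS scorer sketch (hypothetical; not the official scorer).
# Each token is assumed to carry a gold and a predicted (head, label) pair.

def attachment_scores(gold, pred):
    """Compute unlabeled / labeled attachment scores over aligned token lists.

    gold, pred: lists of (head_index, relation_label) tuples, one per token,
    covering every token in the dialogue (both inner- and inter-EDU arcs).
    """
    assert len(gold) == len(pred), "gold and predicted token counts must match"
    total = len(gold)
    uas_correct = sum(gh == ph for (gh, _), (ph, _) in zip(gold, pred))
    las_correct = sum(g == p for g, p in zip(gold, pred))
    return uas_correct / total, las_correct / total


if __name__ == "__main__":
    # Toy example: 4 tokens; the prediction gets 3 heads right and
    # 2 (head, label) pairs right.
    gold = [(2, "nsubj"), (0, "root"), (2, "obj"), (2, "punct")]
    pred = [(2, "nsubj"), (0, "root"), (2, "dobj"), (3, "punct")]
    uas, las = attachment_scores(gold, pred)
    print(f"UAS = {uas:.2f}, LAS = {las:.2f}")  # UAS = 0.75, LAS = 0.50
```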

Timeline

March 18, 2025: Direct workshop paper submission deadline

March 25, 2025: ARR pre-reviewed workshop paper commitment deadline

April 5, 2025: Notification of all shared tasks

April 30, 2025: Acceptance notification of all papers

May 16, 2025: Camera-ready paper deadline

July 7, 2025: Pre-recorded video due (hard deadline)

July 31 - August 1, 2025: Workshop dates (TBD)

Rewards

To incentivize participation and encourage innovation, we will award cash prizes to the top three teams:

1st Place: $3,000 USD

2nd Place: $1,500 USD

3rd Place: $1,000 USD

Top-ranked participants will also receive a certificate of achievement and will be recommended to write a technical paper for submission to ACL 2025.

Organizers

Jianling Li (Tianjin University, China)

Hao Fei (National University of Singapore, Singapore)

Meishan Zhang (Harbin Institute of Technology, Shenzhen, China)

Min Zhang (Harbin Institute of Technology, Shenzhen, China)

References

  1. Zhang, M., Jiang, G., Liu, S., Chen, J., & Zhang, M. (2024). LLM-assisted data augmentation for Chinese dialogue-level dependency parsing. Computational Linguistics.
  2. Jiang, G., Liu, S., Zhang, M., & Zhang, M. (2023). A Pilot Study on Dialogue-Level Dependency Parsing for Chinese. Findings of ACL 2023.
  3. Dozat, T., & Manning, C. D. (2017). Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR.
  4. Guo, P., Huang, S., Jiang, P., Sun, Y., Zhang, M., & Zhang, M. (2022). Curriculum-Style Fine-Grained Adaptation for Unsupervised Cross-Lingual Dependency Transfer. IEEE/ACM Transactions.