The ShARC Leaderboard

We are currently running a competition on the end-to-end task of conversational question answering based on ShARC. Planning to submit a model for the end-to-end task? Submit your code on CodaLab.

In the future, we will also run competitions for the subtasks (answer classification, follow-up question generation, and scenario resolution). Stay tuned!

ShARC: End-to-end Task

| # | Model / Reference | Affiliation   | Date     | Micro Accuracy [%] | Macro Accuracy [%] | BLEU-1 | BLEU-4 |
|---|-------------------|---------------|----------|--------------------|--------------------|--------|--------|
| 1 | E3                | [anonymized]  | Feb 2019 | 67.6               | 73.3               | 54.1   | 38.7   |
| 2 | BERT-QA           | [anonymized]  | Feb 2019 | 63.6               | 70.8               | 46.2   | 36.3   |
| 3 | Baseline-CM       | Bloomsbury AI | May 2018 | 61.9               | 68.9               | 54.4   | 34.4   |
| 4 | Baseline-NMT      | Bloomsbury AI | May 2018 | 44.8               | 42.8               | 34.0   | 7.8    |
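For reference, the two accuracy columns follow the usual micro/macro convention for classification: micro accuracy is the fraction of all examples answered correctly, while macro accuracy averages the per-class accuracies so that infrequent answer classes weigh as much as frequent ones. A minimal sketch in plain Python (the function name and the example labels are illustrative, not part of the official evaluation script):

```python
from collections import defaultdict

def micro_macro_accuracy(gold, pred):
    """Compute micro accuracy (overall fraction correct) and
    macro accuracy (unweighted mean of per-class accuracies)."""
    assert len(gold) == len(pred) and gold
    per_class = defaultdict(lambda: [0, 0])  # class -> [correct, total]
    for g, p in zip(gold, pred):
        per_class[g][1] += 1
        if g == p:
            per_class[g][0] += 1
    micro = sum(c for c, _ in per_class.values()) / len(gold)
    macro = sum(c / t for c, t in per_class.values()) / len(per_class)
    return micro, macro
```

Because macro accuracy re-weights rare classes, a model that mostly predicts the majority answer can score noticeably lower on the macro column than on the micro column, which is why both are reported above.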