Competitions

August 20, 2018

There are currently few datasets appropriate for training and evaluating non-goal-oriented dialogue systems (chatbots). Equally problematic, there is no standard procedure for evaluating such models beyond the classic Turing test.

The aim of this competition is therefore to establish a concrete scenario for testing chatbots that aim to engage humans, and to provide a standard evaluation tool that makes such systems directly comparable.

This is the second Conversational Intelligence (ConvAI) Challenge. The first was held as part of the NIPS 2017 Competitions track. This year we aim to improve on last year by:

  • providing a dataset, Persona-Chat, from the start of the competition
  • making the conversations more engaging for humans
  • simplifying the evaluation process (automatic evaluation, followed by human evaluation)
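The automatic stage of an evaluation like this typically scores a model's replies against reference replies from the dataset. One common automatic metric for dialogue is token-level F1 overlap; the sketch below is illustrative only, since the announcement does not specify which metrics the challenge uses:

```python
# Illustrative sketch of a token-level F1 metric for dialogue replies.
# (Assumption: the challenge's actual automatic metrics are not named above.)
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall between two replies."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Count tokens shared between prediction and reference (with multiplicity).
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("i like dogs", "i like cats")` shares two of three tokens with the reference, giving precision and recall of 2/3 each.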

Prize

The winning entry will receive $20,000 in Mechanical Turk funding, to encourage further data collection for dialogue research.

Official site: http://convai.io/