You will find below the results obtained by Añotador on different corpora and languages, calculated with the GATE software. The features considered are the extent, the type and the value of each tag, and we provide strict, lenient and average measures.
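To make the three measures concrete, here is a minimal sketch of how strict and lenient span matching can be scored, in the spirit of GATE's AnnotationDiff tool. The function name and the convention of taking the average as the mean of the strict and lenient F1 scores are illustrative assumptions, not the exact implementation used by GATE.

```python
# Hedged sketch: strict / lenient / average scoring of tag spans.
# "score" and its conventions are illustrative, not GATE's exact code.

def score(key, response):
    """key, response: lists of (start, end) character spans.

    Strict counts only exact span matches; lenient also counts any
    overlapping span; average is taken here as the mean of the two F1s.
    """
    # Strict: the response span must match a key span exactly.
    exact = sum(1 for span in response if span in key)
    # Lenient: any character overlap with a key span counts.
    overlap = sum(
        1 for (s, e) in response
        if any(s < ke and ks < e for (ks, ke) in key)
    )

    def f1(tp):
        p = tp / len(response) if response else 0.0
        r = tp / len(key) if key else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    strict, lenient = f1(exact), f1(overlap)
    return strict, lenient, (strict + lenient) / 2
```

For example, if the key contains spans (0, 5) and (10, 15) and the tagger returns (0, 5) and (11, 14), the strict F1 is 0.5 (one exact match out of two) while the lenient F1 is 1.0, giving an average of 0.75.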


TempCourt corpus

Añotador was tested on the TempCourt corpus of legal documents. The results of Añotador are below, while the results of the other temporal taggers can be found on the website of the corpus. For more information about the sets of annotations (Legal and Standard) and the different parts of the corpus (comprising documents from different courts, namely the European Court of Human Rights [ECHR], the European Court of Justice [ECJ] and the United States Supreme Court [USC]), please also refer to that webpage.

A summary of the results of all taggers (as reported on the webpage) in the different parts of the corpus can be found below. The rows in white correspond to the Standard annotation set, while the rows in gray show the results on the Legal annotation set.









Hourglass corpus

Añotador was tested against HeidelTime and SUTime on the Hourglass corpus. The metrics obtained by each tagger against the key set of annotations, per file and feature, can be found next to the tagger's name below. The .zip contains one folder per tagger with the annotated .tml files, as well as a folder containing the GATE XML file of each document in the corpus with the following sets of annotations:
  • Result: the key annotations.
  • Annotador: the annotations by Annotador.
  • HeidelTime: the annotations by HeidelTime.
  • SUTime: the annotations by SUTime.
  • Original markups: the original information of the document.

These files can be loaded into GATE as a corpus to facilitate visualization and comparison; GATE is also the software that generated the statistics above.
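If you prefer to inspect the annotation sets programmatically rather than through the GATE GUI, a small sketch with the Python standard library is below. It assumes the usual GATE stand-off XML layout (AnnotationSet, Annotation and Feature elements with StartNode/EndNode offsets); the sample document and the helper name are illustrative, so adjust them to match your actual files.

```python
# Hedged sketch: reading annotations from a GATE XML document with only
# the standard library. The element and attribute names follow the
# common GATE stand-off XML format; verify against your own export.
import xml.etree.ElementTree as ET

# A tiny illustrative GATE XML document (not taken from the corpus).
GATE_XML = """<GateDocument>
<TextWithNodes>Judgment of <Node id="12"/>3 May 2005<Node id="22"/>.</TextWithNodes>
<AnnotationSet Name="Result">
  <Annotation Type="TIMEX3" StartNode="12" EndNode="22">
    <Feature><Name>type</Name><Value>DATE</Value></Feature>
    <Feature><Name>value</Name><Value>2005-05-03</Value></Feature>
  </Annotation>
</AnnotationSet>
</GateDocument>"""

def annotations(xml_text, set_name):
    """Return (start, end, type, features) tuples for one annotation set."""
    root = ET.fromstring(xml_text)
    out = []
    for aset in root.iter("AnnotationSet"):
        if aset.get("Name") != set_name:
            continue
        for ann in aset.iter("Annotation"):
            feats = {f.findtext("Name"): f.findtext("Value")
                     for f in ann.iter("Feature")}
            out.append((int(ann.get("StartNode")),
                        int(ann.get("EndNode")),
                        ann.get("Type"),
                        feats))
    return out
```

With the key annotations ("Result") and each tagger's set extracted this way, the spans can be compared directly, for example with the strict and lenient measures described above.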


The comparison can be found in the following table:


TempEval 2 test corpus

Añotador was tested against HeidelTime and SUTime on the TempEval 2 test corpus. The results of the three taggers can be found in the table below, where the best performance is highlighted in bold: