Responsible NLP Research Checklist

Members of the ACL are responsible for adhering to the ACL code of ethics. The ARR Responsible NLP Research checklist is designed to encourage best practices for responsible research, addressing issues of research ethics, societal impact and reproducibility.

Please read the Responsible NLP Research checklist guidelines for information on how to answer these questions. Note that answering "No" to a question is not grounds for rejection.

All supporting evidence can appear either in the main paper or the supplemental material. For each question, if you answer Yes, provide the section number; if you answer No, provide a justification.

You may complete the checklist either as a fillable PDF or via the LaTeX source.

  • If you are providing very brief justifications (fewer than 3 lines), using the fillable PDF will probably be easier.

  • If you use the LaTeX source, please do not modify, reorder, delete or add questions, question options or other wording of this document.

A   For every submission

A1   Did you discuss the limitations of your work?

Yes, we discuss the limitations in Appendix F.

A2   Did you discuss any potential risks of your work?

Yes, we discuss potential risks in Appendix F.

A3   Do the abstract and introduction summarize the paper’s main claims?

Yes, we summarize the main claims in both the abstract and the introduction.

B   Did you use or create scientific artifacts?

Yes, we use scientific artifacts in Sections 3 and 4.

B1   Did you cite the creators of artifacts you used?

Yes, we cite the creators in Section 3.

B2   Did you discuss the license or terms for use and/or distribution of any artifacts?

Yes, we discuss the license in Appendix C.

B3   Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?

Yes, we discuss this in Appendix C.

B4   Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?

Yes, we discuss these steps in Appendix C.

B5   Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?

Yes, we provide this documentation in Appendix C.

B6   Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?

Yes, we report these statistics in Appendix C.

C   Did you run computational experiments?

Yes, we run computational experiments in Sections 3 and 4.

C1   Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?

Yes, we report these details in Appendix B.

C2   Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?

Yes, we discuss the experimental setup in Appendix B.

C3   Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?

Yes, we report results in Section 3 and discuss the details in Appendix D.

C4   If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?

Yes, we mention them in Appendix B.

D   Did you use human annotators (e.g., crowdworkers) or research with human subjects?

No.