Documentation for the automl Package
Abstract
Create with \begin{abstract} ... \end{abstract}.
The automl package provides a LaTeX style for the AutoML Conference. This document provides some notes regarding the package and tips for typesetting manuscripts. The package and this document are maintained at the following GitHub repository:
https://github.com/automl-conf/LatexTemplate
Users are encouraged to submit issues, bug reports, etc. to:
https://github.com/automl-conf/LatexTemplate/issues
A barebones submission template is also available as barebones_submission_template.tex in the same repository.
1 Package options
With no options, the automl package prepares an anonymized manuscript with hidden supplemental material. Two options are supported for changing this behavior:
• final – produces a non-anonymized camera-ready version for distribution and/or publication
• hidesupplement – hides supplementary material (following \appendix); for example, for submitting or distributing the main paper without the supplement
Note that final may be used in combination with hidesupplement to prepare a non-anonymized version of the main paper with hidden supplement.
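For example, a non-anonymized version of the main paper with a hidden supplement could be prepared with a preamble along the following lines (a minimal sketch, assuming the package is loaded with \usepackage as in the provided templates):

  \documentclass{article}
  \usepackage[final,hidesupplement]{automl}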
2 Supplemental material
Please provide supplemental material in the main document. You may begin the supplemental material using \appendix. Any content following this command will be suppressed in the final output if the hidesupplement option is given.
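A minimal sketch of the intended document structure (section titles and content are placeholders):

  \begin{document}
  \maketitle
  \section{Introduction}
  % ... main paper ...
  \appendix
  % everything from here on is supplemental material and is
  % suppressed when the hidesupplement option is given
  \section{Additional Results}
  % ... supplemental material ...
  \end{document}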
3 Note regarding line numbering at submission time
To ensure that line numbering works correctly with display math mode, please do not use TeX primitives such as $$ and eqnarray. (Using these is not good practice anyway; see https://tex.stackexchange.com/questions/196/eqnarray-vs-align and https://tex.stackexchange.com/questions/503/why-is-preferable-to.) Please use LaTeX equivalents such as \[ ... \] (or \begin{equation} ... \end{equation}) and the align environment from the amsmath package; see http://tug.ctan.org/info/short-math-guide/short-math-guide.pdf.
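For example, display math could be written along these lines (a minimal sketch; the align environment requires amsmath):

  \[
    f(x) = x^2
  \]
  \begin{align}
    g(x) &= x + 1, \\
    h(x) &= 2x.
  \end{align}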
4 References
Authors may use any citation style as long as it is consistent throughout the document. By default we propose author–year citations. Code is provided in the preamble to achieve such citations using either natbib/bibtex or the more modern biblatex/biber.
You may create a parenthetical reference with \citep, such as appears at the end of this sentence (example_book). You may create a textual reference using \citet, as example_book also demonstrated.
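In source form, these citations look along the following lines (example_book is the bibliography key used for the example reference in this document):

  \citep{example_book}  % parenthetical reference, e.g., (Author, Year)
  \citet{example_book}  % textual reference, e.g., Author (Year)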
5 Tables
We recommend the booktabs package for creating tables, as demonstrated in Table 1. Note that table captions appear above tables.
Table 1:
           |       metric
method     | accuracy | time
baseline   |       10 |  100
our method |      100 |   10
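Table 1 could be typeset with booktabs along the following lines (a sketch; the caption text is a placeholder):

  \begin{table}
    \caption{Example results.}  % placeholder caption; table captions appear above tables
    \centering
    \begin{tabular}{lrr}
      \toprule
      & \multicolumn{2}{c}{metric} \\
      \cmidrule(lr){2-3}
      method & accuracy & time \\
      \midrule
      baseline & 10 & 100 \\
      our method & 100 & 10 \\
      \bottomrule
    \end{tabular}
  \end{table}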
6 Figures and subfigures
The automl style loads the subcaption package, which may be used to create and caption subfigures. Please note that this is incompatible with the (obsolete and deprecated) subfigure package. A figure with subfigures is demonstrated in Figure 1. Note that figure captions appear below figures.
Please ensure that all text appearing in figures (axis labels, legends, etc.) is legible.
[Figure 1: two subfigures with the captions "Amazing figure!" and "Another amazing figure!"]
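In source form, such a figure could be created along the following lines (a sketch; the image file names and the overall caption are placeholders, and \includegraphics requires graphicx):

  \begin{figure}
    \centering
    \begin{subfigure}{0.45\linewidth}
      \includegraphics[width=\linewidth]{subfigure-a}  % placeholder image
      \caption{Amazing figure!}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.45\linewidth}
      \includegraphics[width=\linewidth]{subfigure-b}  % placeholder image
      \caption{Another amazing figure!}
    \end{subfigure}
    \caption{Overall figure caption.}  % placeholder
  \end{figure}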
7 Pseudocode
To add pseudocode, you may make use of any package you see fit – the automl package should be compatible with any of them. In particular, you may want to check out the algorithm2e (https://ctan.org/pkg/algorithm2e) and/or the algorithmicx (https://ctan.org/pkg/algorithmicx) packages, both of which can produce nicely typeset pseudocode. You may also wish to load the algorithm package (https://ctan.org/pkg/algorithms), which creates an algorithm floating environment you can access with \begin{algorithm} ... \end{algorithm}. This environment supports \caption{}, \label{}, \ref{}, etc.
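For example, using the algorithm float together with algpseudocode (part of algorithmicx), pseudocode could be typeset along these lines (a sketch; the algorithm shown, plain random search, is only an illustration):

  \usepackage{algorithm}
  \usepackage{algpseudocode}

  \begin{algorithm}
    \caption{Random search}
    \label{alg:random-search}
    \begin{algorithmic}[1]
      \For{$t = 1, \dots, T$}
        \State sample a configuration $\lambda_t$ uniformly at random
        \State evaluate $f(\lambda_t)$
      \EndFor
      \State \Return the best configuration observed
    \end{algorithmic}
  \end{algorithm}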
8 Adding acknowledgments
You may add acknowledgments of funding, etc. using the acknowledgments environment. Acknowledgments will be automatically commented out at submission time. An example is given below in the source code for this document; it will be hidden in the pdf unless the final option is given.
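In source form (the funding text is a placeholder):

  \begin{acknowledgments}
    This work was supported by ...  % placeholder acknowledgment text
  \end{acknowledgments}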
9 Required Material
All submissions must include a discussion of limitations and a broader impact statement, as well as a Reproducibility Checklist, both at submission and camera-ready time. The discussion of limitations and broader impact is part of the 9 pages allocated to the main paper (there is no page limit for references and appendices), while the reproducibility checklist is not.
10 Limitations and Broader Impact Statement
The 9 pages allocated for the main paper must include a discussion of limitations and a broader impact statement regarding the approach, datasets and applications proposed/used in your paper. It should reflect on the environmental, ethical and societal implications of your work. The statement should require at most one page and must be included both at submission and camera-ready time.
This section is included in the template as a default, but you can also place these discussions anywhere else in the main paper, e.g., in the introduction/future work.
The Centre for the Governance of AI has written an excellent guide for writing good broader impact statements (for the NeurIPS conference) that may be a useful resource for AutoML-Conf authors: https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832
11 Reproducibility Checklist
All authors must include a section with the AutoML-Conf Reproducibility Checklist in their manuscripts, both at submission and camera-ready time. The reproducibility checklist is a combination of the NeurIPS '21 checklist and the NAS checklist.
For each question, change the default \answerTODO{} (typeset [TODO]) to \answerYes{[justification]} (typeset [Yes]), \answerNo{[justification]} (typeset [No]), or \answerNA{[justification]} (typeset [N/A]). You must include a brief justification for your answer, either by referencing the appropriate section of your paper or providing a brief inline description. For example:
• Did you include the license of the code and datasets? [Yes] See Section 7.
• Did you include all the code for running experiments? [No] We include the code we wrote, but it depends on proprietary libraries for executing on a compute cluster and as such will not be runnable without modifications. We also include a runnable sequential version of the code, with which we also report experiments in the paper.
• Did you include the license of the datasets? [N/A] Our experiments were conducted on publicly available datasets and we have not introduced new datasets.
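In source form, an answer is given by editing the macro that follows the question, for example (the section reference is a placeholder):

  % before:
  Did you include the license of the code and datasets? \answerTODO{}
  % after:
  Did you include the license of the code and datasets? \answerYes{See Section~7.}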
Please note that if you answer a question with \answerNo{}, we expect that you compensate for it (e.g., if you cannot provide the full evaluation code, you should at least provide code for a minimal reproduction of the main insights of your paper).
Please do not modify the questions and only use the provided macros for your answers. Note that this section does not count towards the page limit. In your paper, please delete this instructions block and only keep the Checklist section heading above along with the questions/answers below.
1. For all authors…
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [TODO]
   (b) Did you describe the limitations of your work? [TODO]
   (c) Did you discuss any potential negative societal impacts of your work? [TODO]
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [TODO]

2. If you are including theoretical results…
   (a) Did you state the full set of assumptions of all theoretical results? [TODO]
   (b) Did you include complete proofs of all theoretical results? [TODO]

3. If you ran experiments…
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)? [TODO]
   (b) Did you include the raw results of running the given instructions on the given code and data? [TODO]
   (c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [TODO]
   (d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [TODO]
   (e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [TODO]
   (f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [TODO]
   (g) Did you run ablation studies to assess the impact of different components of your approach? [TODO]
   (h) Did you use the same evaluation protocol for the methods being compared? [TODO]
   (i) Did you compare performance over time? [TODO]
   (j) Did you perform multiple runs of your experiments and report random seeds? [TODO]
   (k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [TODO]
   (l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [TODO]
   (m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [TODO]
   (n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [TODO]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets…
   (a) If your work uses existing assets, did you cite the creators? [TODO]
   (b) Did you mention the license of the assets? [TODO]
   (c) Did you include any new assets either in the supplemental material or as a URL? [TODO]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [TODO]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [TODO]

5. If you used crowdsourcing or conducted research with human subjects…
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [TODO]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [TODO]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [TODO]