LLM Usage Policy

Lex Humana will not credit Large Language Models (LLMs) as authors of research. AI tools cannot take responsibility for the work, and any attribution of authorship carries with it responsibility for the content.

Authors who use LLMs in their research must disclose this explicitly in the manuscript. The disclosure should name the model and its version number and appear in the methods or acknowledgments section.

The methods or acknowledgments section should describe in detail how the text was generated, including the parameters and settings used, and should report any pre- or post-processing steps applied to the generated text.
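For authors who generate text programmatically, a structured record of the settings can make this reporting straightforward. The following is a minimal, hypothetical Python sketch of such a record; the field names, model name, and values are illustrative assumptions, not a required format or a real API.

```python
# Hypothetical sketch: recording LLM generation settings for a methods section.
# All names and values below are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class GenerationRecord:
    model_name: str                 # name of the LLM used (illustrative)
    model_version: str              # exact version or release date
    temperature: float              # sampling temperature used
    top_p: float                    # nucleus-sampling cutoff, if applicable
    max_tokens: int                 # generation length limit
    postprocessing: list[str] = field(default_factory=list)  # manual edits applied afterwards

record = GenerationRecord(
    model_name="ExampleLLM",        # hypothetical model
    model_version="2024-01",
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
    postprocessing=["citations checked manually", "one factual error corrected"],
)

# Write the record to a file that can accompany the manuscript or its supplement.
with open("llm_generation_record.json", "w", encoding="utf-8") as f:
    json.dump(asdict(record), f, indent=2)
```

A record of this kind can also be supplied if the editorial team requests the underlying data and code, as described below.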

Authors should ensure that the generated text is factually accurate, relevant to the research question, and consistent with existing knowledge in the field. LLM-generated text must be clearly distinguished from the authors' original writing and should be set in italics and enclosed in quotation marks.

Authors should acknowledge the limitations of LLMs in their manuscript, including the potential for bias and errors, and explain how they addressed these issues to the best of their ability.

The use of LLMs in research should be justified by the specific research question and the benefits the tool offers. The manuscript should explain why LLMs were used and how they contributed to addressing the research question.

The Lex Humana editorial team reserves the right to request access to the raw data and code used in LLM-assisted research to verify the authenticity of the results.