Generative AI Policy

Our journal supports the integration of cutting-edge technologies, including Artificial Intelligence (AI), to enhance research quality and streamline the stages of scientific communication. At the same time, we strongly emphasise the need to adhere to the highest ethical standards when using these technologies. The primary aim of this policy is to define clear boundaries for AI use, particularly with regard to academic integrity and the responsibilities of authors, reviewers, and the editorial team.

Transparency in the Use of AI

To ensure transparency, we require authors, reviewers, and the editorial team to openly disclose any use of AI in the manuscript preparation or review process. This helps prevent data manipulation and clearly distinguishes human contributions from supporting technologies.

Requirements for Authors

Authors may use AI tools at various stages of manuscript preparation. Permitted uses include:

- text editing (grammar, spelling, style);

- data analysis and the automated generation of graphs, tables, or diagrams;

- improving readability or translating text;

- fact-checking and scientific referencing.

However, authors must explicitly state in the manuscript whether AI was used at any of these stages. AI technologies must not replace the scientific work of the authors, and they must not be used to produce research outcomes that have not been validated by experimental data or other scientific methods.

AI must not be used to create fabricated data, falsify results, or generate scientific claims without appropriate verification. Authors remain accountable for the accuracy of any AI-assisted results and for ensuring compliance with ethical standards.

Requirements for Reviewers

Reviewers are prohibited from submitting manuscript content to external AI systems, as doing so violates confidentiality principles. Reviewers may, however, use AI to improve the structure and language of their reviews, provided that no manuscript content is shared with the AI system. Any comments or recommendations made by reviewers must be based on their own analysis of the material.

Restrictions on the Use of AI

Despite the benefits AI offers, we impose certain restrictions on its use:

- AI cannot be used to create research outcomes or falsify data;

- AI cannot be credited as a co-author; all scientific conclusions must be based on real research results, not on automatically generated content;

- AI cannot be used for plagiarism, even if the technology assists in searching or providing similar studies.

Role of the Editorial Team

The editorial team actively monitors the use of AI. All submitted materials are checked not only for plagiarism but also for appropriate AI usage. To this end, the editorial team has the right to request clarifications or additional explanations from authors regarding the technologies used.

The editorial team also reserves the right to conduct further checks to identify unethical AI usage, including verifying the presence of fabricated data or manipulation.

Editorial Responsibility

While AI can be a valuable tool in the scientific process, full responsibility for the accuracy and ethics of the results rests with the authors and reviewers. The use of AI does not exempt authors from academic responsibility for their research conclusions. If unethical AI use or errors are detected, the editorial team has the right to:

- request corrections or additions;

- reject the manuscript or review;

- retract the article after publication.

Policy Updates and Adaptation

We acknowledge that these technologies are constantly evolving. This policy will therefore be reviewed and updated regularly in line with advancements in AI and changes in international editorial standards, such as those published by Elsevier. Updates will take into account emerging practices in the use of AI in the scientific process as the technology progresses.