Purpose and our values
Our core values are to connect, share and inspire, and we are proud that many educational psychologists, teachers, researchers and allied professionals see us as an inclusive, informed, psychological space. Readers of the articles we publish rely on us for material that is informed, insightful, ethical and reflective.
Human authorship of our articles is central to maintaining the trust of our community and to ensuring that the quality of professional discourse remains high.
This policy sets out our position on the use of generative AI in articles submitted to us.
Our position
Any contributions to edpsy must be human-authored.
We do not accept blogs and other contributions that have been generated or shaped by generative AI tools. This includes using Gen-AI tools to produce drafts, rewrite sections, generate arguments, interpretations, or conclusions, offer reflections or analysis, or speak in place of professional or lived experience.
This position reflects our commitment to authenticity, integrity, high quality professional discourse, the value of your voice as authors, and our planetary and environmental responsibilities.
Our rationale
The application of ethical principles drives this policy. With specific consideration of generative AI we recognise that:
- The use of AI tools accelerates climate breakdown and runs counter to our responsibility to stop making things worse.
- AI tools cannot be critical, ethical, reflective, reflexive or make moral and professional judgements.
- AI tools make things up and are trained to create fabrications rather than ‘admit’ a lack of information.
- AI tools are inherently biased and reflect the prejudicial attitudes contained within the materials they were trained on.
- AI tools have been trained on information harvested from across the internet, often without the consent or knowledge of authors, thinkers, artists and creators.
- AI content often feels flat, homogeneous and lacking in creative spark. When people read content that they think has been AI generated, reading time drops dramatically, indicating reduced engagement, commitment and care.
Defining Generative AI
When we say ‘Generative AI’ we mean tools that produce content, ideas, or analysis in response to human prompts e.g. ChatGPT, Claude, Copilot. This includes drafting, paraphrasing, summarising, or otherwise generating text that can resemble human work.
What’s ok and what isn’t
When receiving content for the site, we will not accept:
- Submissions that are partially or fully AI generated
- Text that has been rewritten or paraphrased using generative AI
- Arguments, reflections, or analysis produced using generative AI tools
- AI used in place of professional or lived experience
The following is generally acceptable:
- Collaborative editing with other humans (encouraged in fact!)
- Conventional proofreading software and tools, so long as no content is generated
- Formatting references
- Accessibility tools – which may be particularly relevant for authors whose first language is not English
A specific note about AI generated images
We place the same value on human generated visual content as we do on written work.
As far as possible we encourage our contributors to avoid using AI-generated images. There are many fantastic photographers worldwide who provide royalty-free images; Unsplash is a good example of where to find these.
We won’t, as a general rule, accept AI-generated images within submitted content.
This position reflects several concerns:
- Authenticity and trust: AI-generated images may present scenes that are designed to appear authentic but are entirely made up, misleading readers or creating a false impression of evidence or realism.
- Theft of artwork: Many generative image systems are ‘trained’ on large datasets of artwork without consent or attribution.
- Integrity: Visual content in educational psychology should support accurate communication, not simulations.
Charts, graphics, and infographics
Charts, diagrams and visual representations can aid understanding of complex ideas.
Our strong preference is that any visuals used in blogs are created by authors using non-generative tools. However, we recognise that some contributors may use AI tools to support them in creating visual representations.
Where these tools are used, authors must take full responsibility for the accuracy of the output. Contributors must ensure that:
- all representations of data are accurate, appropriately sourced, and not altered or added to by the tool
- the visualisation reflects the data and does not introduce misleading elements
- they are able to clearly explain what the diagram is showing
Authors should not rely on AI systems to generate or interpret data on their behalf.
We do not generally accept infographics as a primary means of communicating content. While they can be visually engaging, they are often not accessible – for example, to members of our community who use screen readers. AI-generated infographics are also, in our experience, littered with inaccuracies, spelling mistakes and gobbledegook words.
Where visuals are included, they should support and not replace clear, well-structured written explanations.
Author declarations
As standard, we will now ask all our authors to confirm that their submission:
- Is their own original work
- Has not been generated or substantially shaped by generative AI
- Complies with this policy
Inclusion and accessibility
We know that our contributors bring different experiences, confidence levels, and additional support needs to their writing – in fact this is a recognised strength of our platform.
We have always taken a supportive and developmental approach to authorship. As editors, we work collaboratively with contributors to strengthen their writing: offering coaching and feedback, helping with structure, and sharpening ideas.
This human-centred support from the team remains central to our approach, and we encourage prospective authors to engage with us at any stage of the writing process for support and guidance.
Editorial decisions
If we think that generative AI tools have been used in a submission, we may request additional information or draft materials to ensure we are upholding our standards and making the right decisions.
We may also choose to decline or withdraw submissions.
Our commitment to review this policy
As technology and tools continue to develop, we will keep this policy under review to ensure it remains relevant and aligned with the needs of our contributors, readers and the planet.
Last updated: April 2026
Next review: Autumn 2026
Our thanks
Many writers have contributed to our thinking in this area. In no particular order, we’d like to draw attention to:
Tanisha Jowsety, Ginny Braun, Victoria Clarke, Deborah Lupton and Michelle Fine’s piece where they explain their rejection of the use of Gen AI for reflexive qualitative research.
Naomi Klein’s work is insightful and in-depth. You can read her 2023 Guardian article on ‘Hallucinations’ and watch her recent conversation with Karen Hao. Hao authored the best-seller ‘Empire of AI: inside the reckless race for total domination’.
Patrick Galey, Head of Investigations at Global Witness, who writes at length about the threats of AI tools and the companies behind them. You can browse Patrick’s Medium blogs.
Research by Williams-Ceci and colleagues demonstrating that ‘Biased AI writing assistants shift users’ attitudes on societal issues’.
Guidance on the use of artificial intelligence in submissions to the ‘Qualitative Psychology’ journal.