Generative artificial intelligence (AI)
Also known as:
- artificial intelligence
- chatbot
- deepfakes
What is the risk?
Digital risk factors associated with children and young people's interests and activities
Generative AI tools are trained on data. They take enormous amounts of information from the internet and use this data to respond to a user's request.
Requests are made through a chat prompt or written request. For example, 'write me an essay on the French Revolution'. The generative AI tool draws on patterns in its training data and uses these to create something new.
Generative AI chatbots get their name from having text ‘conversations’ with a user. For example, a user will request an image or information and the chatbot will reply as though it is a real person.
Examples of generative AI tools include:
- ChatGPT
- Midjourney
- Bard
- DALL-E
- Jukebox
Popular generative AI platforms have restrictions and filters in place to prevent inappropriate and harmful content. However, there are platforms where no filters or restrictions exist.
Risks and motivations
Risks
Cyberbullying and harassment
A child or young person could use generative AI to cyberbully or harass others. For example, someone could prompt generative AI to create an unflattering picture of someone else.
Generative AI can also process large amounts of someone's personal data, for example, their publicly accessible social media posts.
This data could then be used by AI to create an attack that references specific events, times, and locations. This form of harassment could be very personal and intimidating.
Deepfakes
A deepfake is a piece of false visual or audio content which has been generated by AI software. For example, a fake video of a celebrity doing or saying something surprising or controversial.
Deepfakes can be used to spread disinformation or create custom-made pornography. For example, adding someone's face onto a pornographic actor.
Having your image used as part of a pornographic deepfake can also be distressing. Sharing deepfake pornography to cause someone distress or for sexual gratification is illegal.
Wrong information and bias
Generative AI tools do not always understand a user’s prompt or have access to recent, accurate data. This can mean that they give out incorrect information.
They can also provide biased or inappropriate responses. Generative AI has been reported to create text or imagery that is culturally insensitive, racist or sexist.
Child sexual abuse material
Generative AI can be used to create child sexual abuse material. This is any image, video, or representation of a child engaged in actual or pretend sexual activities.
There are cases where a child has used generative AI to create indecent images of other children. Once created, indecent images can be widely circulated. AI-generated child sexual abuse material could be used to blackmail or groom someone for abuse.
It is illegal for anyone to create, share, or possess child sexual abuse material. This includes images created with generative AI.
Use at school and misconduct
Generative AI tools can be used by students to assist with studying. However, some students might ask tools like ChatGPT to do their homework for them. For example, writing an essay on a novel or an art movement.
Depending on its policy, a school might consider this use of generative AI a form of misconduct.
Motivations
Reasons a child or young person might use generative AI include to:
- have fun
- learn
- create
- get advice
- make money
- explore their interest in emerging technology
- complete school work
- bully someone
What you can do
You might be working with a young person who is interested in generative AI or already using these tools. Any response will depend on how the child or young person is using or wants to use generative AI.
You could decide to talk to them about:
- using these tools responsibly
- how using AI to hurt or upset someone is wrong, and potentially illegal
- the limitations of these tools, like bias or incorrect information
- some of the potential risks
It is also helpful to remind children and young people that they should speak to a trusted adult if they are ever unsure or worried about anything.
If you think that a young person is at risk, follow your safeguarding procedure and read our safeguarding guidance.
Support
You may be working with a child or young person who has had a negative or harmful experience with generative AI.
Every child or young person’s recovery process will be different.
All schools must have an anti-bullying or behaviour policy. If the child or young person attends school and has been harassed or cyberbullied, you may wish to contact the school directly.
In instances involving child sexual abuse material or pornographic deepfakes, reports should be made to specialist organisations, for example, Report Remove, or to the police.
GOV.UK offers guidance on what to do if a child or young person has shared an explicit or nude image of anyone under the age of 18.
Read more about generative artificial intelligence (AI)
Share your experience of generative artificial intelligence (AI)
You can tell us about:
- other terms you might have heard
- conversations you’ve had with young people
- a related platform or app
- another related risk or harm