
SAFE Newsfeed

Essex school reprimanded after using facial recognition technology for canteen payments

Source: Information Commissioner’s Office (ICO) published on this website Wednesday 24 July 2024 by Jill Powell

The ICO has issued a reprimand to a school that broke the law when it introduced facial recognition technology (FRT).

Chelmer Valley High School, in Chelmsford, Essex, first started using the technology in March 2023 to take cashless canteen payments from students.

FRT processes biometric data to uniquely identify people and is likely to result in high data protection risks. To use it legally and responsibly, organisations must have a data protection impact assessment (DPIA) in place. This is to identify and manage the higher risks that may arise from processing sensitive data.

Chelmer Valley High School, which has around 1,200 pupils aged 11-18, failed to carry out a DPIA before starting to use the FRT. This meant no prior assessment was made of the risks to the children's information. The school had not properly obtained clear permission to process the students’ biometric information and the students were not given the opportunity to decide whether they did or didn’t want it used in this way.

Lynne Currie, ICO Head of Privacy Innovation, said:

“Handling people’s information correctly in a school canteen environment is as important as the handling of the food itself. We expect all organisations to carry out the necessary assessments when deploying a new technology to mitigate any data protection risks and ensure their compliance with data protection laws.

“We’ve taken action against this school to show introducing measures such as FRT should not be taken lightly, particularly when it involves children.

“We don’t want this to deter other schools from embracing new technologies. But this must be done correctly with data protection at the forefront, championing trust, protecting children’s privacy and safeguarding their rights.”

Chelmer Valley High School also failed to seek opinions from its data protection officer or consult with parents and students before implementing the technology.

In March 2023, a letter was sent to parents with a slip for them to return if they did not want their child to participate in the FRT. Affirmative ‘opt-in’ consent was not sought at this time, meaning that until November 2023 the school was wrongly relying on assumed consent. The law does not deem ‘opt out’ a valid form of consent and requires explicit permission. The ICO’s reprimand also notes that most students were old enough to provide their own consent, so the parental opt-out deprived them of the ability to exercise their rights and freedoms.

Ms Currie added:

“A DPIA is required by law – it's not a tick-box exercise. It’s a vital tool that protects the rights of users, provides accountability and encourages organisations to think about data protection at the start of a project.”

The ICO has provided Chelmer Valley High School with recommendations for the future.

Ofcom research: A deep dive into deepfakes that demean, defraud and disinform

Source: Ofcom published on this website Tuesday 23 July 2024 by Jill Powell

  • Two in five people say they have seen at least one deepfake in the last six months – including depictions of sexual content, politicians, and scam adverts
  • Only one in ten are confident in their ability to spot them
  • Ofcom sets out what tech firms can do to tackle harmful deepfakes

As new Ofcom research reveals the prevalence of online deepfakes, we look at what can be done to tackle those that cause harm. 

Deepfakes are videos, pictures or audio clips made with artificial intelligence to look real. New Ofcom research, published today, has found that 43% of people aged 16+ say they have seen at least one deepfake online in the last six months – rising to 50% among children aged 8-15.

Among adults who say they have seen deepfake content, one in seven (14%) say they have seen a sexual deepfake. Most of the available evidence indicates that the overwhelming majority of this content features women, many of whom suffer from anxiety, PTSD and suicidal ideation because of their experiences.

Of those who say they have seen a sexual deepfake, two thirds (64%) say it was of a celebrity or public figure, 15% say it was of someone they know, while 6% say it depicted themselves. Worryingly, 17% thought it depicted someone under the age of 18. 

The most common type of deepfake 8-15-year-olds say they have encountered was a ‘funny or satirical deepfake’ (58%), followed by a deepfake scam advert (32%). 

Fewer than one in ten people aged 16+ (9%) say they are confident in their ability to identify a deepfake – although older children aged 8-15 are more likely to say so (20%).

Different deepfakes

Recent technological advances in Generative AI (GenAI) have transformed the landscape of deepfake production in the last two years. In a discussion paper, published today, we look at different types of deepfakes and what can be done to reduce the risk of people encountering harmful ones – without undermining the creation of legitimate and innocuous content.[3]

GenAI and synthetic content can augment TV and film; enhance photos and videos; create entertaining or satirical material; and aid the development of online safety technologies. They can also be used to facilitate industry training, medical treatments and criminal investigations.

Some deepfakes, however, can cause significant harm, particularly in the following ways:

Deepfakes that demean – by falsely depicting someone in a particular scenario, for example sexual activity. They can be used to extort money from victims or to coerce them into sharing further sexual content.

Deepfakes that defraud – by misrepresenting someone else’s identity. They can be used in fake adverts and romance scams. 

Deepfakes that disinform – by spreading falsehoods widely across the internet, to influence opinion on key political or societal issues, such as elections, war, religion or health. 

In reality, there will be cases where a deepfake cuts across multiple categories. Women journalists, for example, are often the victims of sexualised deepfakes, which not only demean those featured but may contribute towards a chilling effect on critical journalism.

What tech firms could do

Addressing harmful deepfakes is likely to require action from all parts of the technology supply chain – from the developers that create GenAI models through to the user-facing platforms that act as spaces for deepfake content to be shared and amplified. 

We have looked at four routes tech firms could take to mitigate the risks of deepfakes:

  • Prevention: AI model developers can use prompt filters to prevent certain types of content from being created; remove harmful content from model training datasets; and use output filters that automatically block harmful content from being generated. They can also conduct ‘red teaming’ exercises – a type of AI model evaluation used to identify vulnerabilities.[4]
  • Embedding: AI model developers and online platforms can embed imperceptible watermarks on content, to make it detectable using a deep learning algorithm; attach metadata to content when it is created; and automatically add visible labels to AI-generated content when it is uploaded. 
  • Detection: Online platforms can use automated and human-led content reviews to help distinguish real from fake content, even where no contextual data has been attached to it – for example, machine learning classifiers trained on known deepfake content (a minimal sketch of this approach follows after this list).
  • Enforcement: Online services can set clear rules within their terms of service and community guidelines about the types of synthetic content that can be created and shared on their platform, and act against users that breach those rules, for example by taking down content and suspending or removing user accounts. 

These are not requirements, but all the above interventions could help mitigate the creation and spread of harmful deepfakes. However, there is no silver bullet solution, and tackling them requires a multi-pronged approach. 
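
To make the ‘Detection’ route above concrete, the sketch below shows how a platform might use a machine-learning classifier trained on known deepfake content to triage uploads. It is a minimal illustration, not Ofcom’s recommendation or any platform’s real pipeline: the tiny model, the 0.8 threshold and the routing labels are all hypothetical, and a production classifier would be trained on large labelled datasets of real and synthetic media.

```python
# Illustrative only: a toy binary classifier for the "detection" route.
# The architecture, threshold and labels below are hypothetical, not any
# platform's real system or an Ofcom-endorsed design.

import torch
import torch.nn as nn


class DeepfakeClassifier(nn.Module):
    """Tiny CNN that scores an RGB image: 0 ~ likely real, 1 ~ likely deepfake."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # collapse to one value per channel
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # probability the input is a deepfake


def triage(image: torch.Tensor, model: nn.Module, threshold: float = 0.8) -> str:
    """Route one upload: confident detections are flagged; borderline scores go
    to the human-led review described in the bullet above."""
    with torch.no_grad():
        score = model(image.unsqueeze(0)).item()
    if score >= threshold:
        return "flag-for-removal"
    if score >= 0.5:
        return "human-review"
    return "allow"


if __name__ == "__main__":
    model = DeepfakeClassifier().eval()   # in practice: trained on labelled
                                          # real and known-deepfake media
    upload = torch.rand(3, 224, 224)      # stand-in for a decoded upload
    print(triage(upload, model))
```

Even a well-trained classifier will misfire, which is why the bullet above pairs automated checks with human-led review rather than relying on the model alone.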

What Ofcom is doing

Illegal deepfakes can have devastating consequences, and are often targeted at women. We’re working at pace to consult on how platforms should comply with their new duties under the Online Safety Act. That’ll include guidance on protecting women and girls.

If regulated platforms fail to meet their duties when the time comes, we will have a broad range of enforcement powers at our disposal to ensure they are held fully accountable for the safety of their users.

Gill Whitehead, Ofcom's Online Safety Group Director, said:

“When the new duties under the Online Safety Act come into force next year, regulated services like social media firms and search engines will have to assess the risk of illegal content or activity on their platforms – including many types of deepfake content (though not all types are captured by the online safety regime) – take steps to stop it appearing, and act quickly to remove it when they become aware of it.

“In our draft illegal harms and children’s safety codes, we have recommended robust measures that services can take to tackle illegal and harmful deepfakes. These include measures relating to user verification and labelling schemes, recommender algorithm design, content moderation, and user reporting and complaints. These represent our ‘first-edition’ codes and we are already looking at how we can strengthen them in the future as our evidence grows. 

“We are also encouraging tech firms that are not regulated under the Online Safety Act, such as AI model developers and hosts, to make their technology safer by design using measures we have set out today.”

Headteacher guilty of harassment

Source: Norfolk Police published on this website Friday 19 July 2024 by Jill Powell

A Norfolk headteacher has been found guilty of harassing a colleague who said she lived in constant fear of his reprimands and hounding. Gregory Hill, aged 48, of Valley Way, Fakenham, appeared at Great Yarmouth Magistrates’ Court on Thursday 18 July 2024 and was found guilty of one count of harassment between 12 March 2022 and 22 February 2023, and another count of resisting arrest on 6 March 2023.

Hill had pleaded not guilty to both counts. During the trial, the court heard the victim joined the school where Hill was headteacher in September 2021 as a newly qualified teacher. It was her first teaching job. At first, Hill’s messages to the victim, which he sent from the school’s social media accounts, were work related but became personal from February 2022 onwards.

On one occasion, Hill messaged her: "Can’t wait to see this smile, this beautiful face and wonderful person tomorrow." Another time, he messaged: “I’d love to build a future for us in school and outside of school xx.”

The victim told Hill she did not want a personal relationship with him, and he continued to harass her up until 19 February 2023 when she reported his behaviour to police.

Over the course of 12 months Hill claimed he had “fallen in love” with the victim, continued messaging her on social media, insisted she join him on a school trip instead of colleagues, and blamed a “slip of the finger” for a WhatsApp telephone call he made to her just before midnight one evening.

A witness explained to police how she had seen Hill photographing the victim’s car while she was visiting a family member. The victim was also told that Hill had been using Facebook to try to find information about her.

On one occasion, when the victim requested a meeting with school officials to discuss his “continued unwanted behaviour”, she was confronted by Hill who said she was responsible for “putting his job at risk” and making him and his elderly mother “homeless.”

In a Victim Personal Statement she described Hill’s messages as a “constant drain on me both mentally and physically, he was always hounding me with messages and emails late at night.

“I was never able to relax or have any personal time as there was a clear expectation from him that I replied to his every communication. When I failed to do this, he took this personally and I then received a barrage of negativity from him.

“As time progressed at school my behaviours changed. I found that I did not want to be alone in case this led to Mr Hill taking the opportunity to come into class to discuss things with me.

“I have never suffered with anxiety and have been able to handle any challenges that have been thrown at me…I started to struggle with sleeping and would have nightmares…I had concerns that Mr Hill was aware of my movements and this led to a fear of seeing him when I was out and about.

“These last few years have completely changed me as a person and I do not know if I will ever return to the person I once was.”

Plain clothes officers arrived at Hill’s school to arrest him shortly before 9am on 6 March 2023.

The court was told how Hill resisted arrest for 33 minutes: he prevented officers from putting him in handcuffs, grabbed hold of a hedgerow and refused to let go, claimed officers were trying to break his wrist and his arms, bit his own lip and pretended to pass out.

Detective Constable Claire Lordan, who led the investigation, said: “Hill exploited and abused his position; he was someone who was trusted by parents and the wider community. His behaviour and treatment of a younger colleague, who was just starting out in her career, shows he thought he could behave exactly how he wanted, and get away with it time and time again.

"When he should have been supporting a young woman in her first teaching position, he was self-serving, constantly seeking out opportunities for contact and attention from her, affecting her work and personal life, making her afraid and afraid to be alone.

“I know it wasn’t easy for her to come forward and continue with this investigation and court case, and she deserves all our thanks for having the courage to tell us what was happening because his behaviour needed to stop.”

The hearing was adjourned until 2 September 2024 for sentencing.

The Charity Commission is calling on charities that hold online meetings to review their governing document to ensure it is up to date

Source: The Charity Commission published on this website Monday 22 July 2024 by Jill Powell

The call comes in the regulator’s redesigned guidance on charities and meetings (known as CC48), published Friday 19 July 2024.

The refreshed guidance emphasises the importance of complying with a charity’s governing document when holding meetings.

The guidance stresses that trustees should make sure any rules around holding meetings are up to date and practical. This has become particularly important now that many charity meetings are held online.

This includes updating a charity’s governing document to set out details such as how votes will be held at virtual meetings and whether all meetings will be virtual or hybrid.

It also recommends that charities that wish to hold virtual meetings have a policy setting out how people can ask questions and join in the debate, and what happens if there are technical problems during a meeting.

The guidance covers all types of charities including membership charities, as well as different types of meetings, such as trustee meetings and Annual General Meetings.

Sam Jackson, Assistant Director, Policy and Strategy at the Charity Commission said:

“The ways in which people communicate have rapidly evolved since the pandemic, and it is now very common for charities to conduct their meetings online or in a hybrid form. Our revised guidance reflects this development, and emphasises the importance of following a charity’s governing document and keeping it up to date to ensure good governance.

“After seeking feedback from trustees, we’ve also made the guidance shorter and easier to understand. Through these improvements, we hope to make it easier for trustees to know what is expected of them, and how they can act in the best interests of their charities.”

The guidance is available on the Charity Commission’s gov.uk page.

Professional Standards Authority (PSA) responds to publication of Independent Culture Review of the NMC

Source: Professional Standards Authority (PSA) published on this website Thursday 18 July 2024 by Jill Powell

The independent culture review of the Nursing and Midwifery Council (NMC), conducted by Nazir Afzal OBE and Rise Associates, has found that people working in the organisation have experienced racism, discrimination and bullying, and has found evidence of safeguarding failures. This is concerning and these are matters we take very seriously. We are carefully considering the report and its recommendations. We are grateful for the courage of the whistleblower in raising the concerns which led to this review.

The report highlighted that since April 2023, six people have died by suicide or suspected suicide while under, or having concluded, fitness to practise investigations by the NMC. We want to extend our sympathies to the families of those six people.

We appreciate that those affected, the NMC’s registrants and the wider public will be asking questions about how this happened, and what action is now going to be taken. We are pleased to see that the NMC has accepted all of the review’s recommendations, and we will be closely monitoring the NMC’s actions to address these.

In our role overseeing how the statutory regulators deliver their regulatory objectives, we evaluate whether the NMC and the other regulators are meeting our Standards of Good Regulation through our Performance Review assessments. Our Standards currently focus on the key regulatory functions of fitness to practise, education and training, registration, and guidance and standards. We also have general standards that cover a range of areas including equality, diversity and inclusion; reporting on performance and addressing organisational concerns; and consultation and engagement with stakeholders to manage risk to the public.

We published our last report on the NMC in September 2023. We found that the NMC had a number of areas where improvements were required and that it did not meet Standard 15. This Standard states: ‘The regulator’s process for examining and investigating cases is fair, proportionate, deals with cases as quickly as is consistent with a fair resolution of the case and ensures that appropriate evidence is available to support decision-makers to reach a fair decision that protects the public at each stage of the process.’ The NMC has not met this Standard since its 2018/19 performance review. In September 2022, we escalated our concerns about the NMC’s performance in this area to the Secretary of State for Health and Social Care and the Health and Social Care Committee, and we have continued to update them following each performance review since.

We are currently assessing the NMC’s performance for 2023/24. As part of this, we will consider the findings of the Rise review. We will also consider the findings of Ijeoma Omambala KC’s reviews of fitness to practise cases and the NMC’s handling of whistleblowing concerns, which we expect to be published later this year. We will publish the NMC’s 2023/24 Performance Review report as soon as possible after this time. In the meantime, we will consider the immediate implications of the Rise review with the four health departments of the UK, and closely monitor the NMC’s actions to address the report recommendations. 

We note the Rise review included a recommendation for us to undertake more detailed annual reviews of the NMC’s performance against our Standards, conducting a more in-depth review of randomly selected cases at each stage of the NMC’s processes. We agree that enhanced monitoring will be required of the NMC in the coming months and years to ensure improvements are implemented and sustained. We will provide an update on how we will achieve this shortly, once we have considered the findings of the review in full. We will also be considering the evidence we look at as part of our performance reviews, and our process overall, to see if these can be further improved to help us identify the sorts of issues raised in the Rise review at an earlier stage. 

As part of our regular practice, we are currently reviewing our Standards. This provides an opportunity to also look at whether we should consider internal culture, leadership and governance as part of how we assess how well a regulator is delivering on its statutory responsibilities.