
SAFE Newsfeed

Government acts after report highlights failings at regulator

Source: Department of Health and Social Care published on this website Friday 26 July 2024 by Jill Powell

An independent review of the Care Quality Commission (CQC) has identified significant internal failings that are hampering its ability to identify poor performance at hospitals, care homes and GP practices.

The interim report, led by Dr Penny Dash, chair of the North West London Integrated Care Board, found inspection levels still well below those seen pre-Covid, a lack of clinical expertise among inspectors, a lack of consistency in assessments, and problems with the CQC’s IT system.

These failings mean the regulator is currently unable to consistently and effectively judge the quality of health and care services, including those in need of urgent improvement. The report also found that social care providers are waiting too long for their registration and rating to be updated, with implications for local capacity.    

The government will now take immediate steps to restore public confidence in the effectiveness of health and social care regulation, including by increasing the level of oversight of the CQC, ahead of a full report by Dr Dash which will be published in the autumn.  

This work forms part of the government’s wider efforts to identify the challenges facing the NHS and take action to address them head on as part of its mission to build a health service fit for the future.

Health and Social Care Secretary, Wes Streeting, said:  

“When I joined the department, it was already clear that the NHS was broken and the social care system in crisis.  But I have been stunned by the extent of the failings of the institution that is supposed to identify and act on failings. It’s clear to me the CQC is not fit for purpose.

“We cannot wait to act on these findings, so I have ordered the publication of this interim report so action can begin immediately to improve regulation and ensure transparency for patients.

“I know this will be a worrying development for patients and families who rely on CQC assessments when making choices about their care. I want to reassure them that I am determined to grip this crisis and give people the confidence that the care they’re receiving has been assessed. This government will never turn a blind eye to failure.”

The Health and Social Care Secretary has announced four immediate steps the government and CQC will take to restore public confidence in the regulator and ensure patients can get an accurate picture of the quality of care available.

These include:  

  • The appointment by the CQC of Professor Sir Mike Richards to review CQC assessment frameworks. Sir Mike was a hospital physician for more than 20 years and became the CQC’s first Chief Inspector of Hospitals in 2013, retiring from this role in 2017.
  • Improving transparency about how the CQC determines its ratings for health and social care providers.
  • Increased government oversight of the CQC, with the CQC regularly updating the department on progress, to ensure that the recommendations in Dr Dash’s final review are implemented.
  • Asking Dr Dash to review the effectiveness of all patient safety organisations.

Dr Dash was asked to carry out a review of the CQC in May 2024. Over the last two months she has spoken to around 200 senior managers, caregivers, and clinicians working across the health and care sector, along with over 50 senior managers and national professional advisors at the CQC.   

Some of Dr Dash’s emerging findings include:  

  • Of the locations the CQC has the power to inspect, it is estimated that around 1 in 5 have never received a rating
  • Some organisations not being reinspected for several years – with the oldest rating for an NHS hospital dating from over 10 years ago and the oldest rating for a social care provider dating from 2015
  • A lack of experience among some inspectors – with the review hearing of hospital inspectors who said they had never been in a hospital before, and of a care home inspector who had never met a person with dementia.

Commenting on her findings so far, Dr Dash said:  

“The contents of my interim report underscore the urgent need for comprehensive reform within the CQC.  By addressing these failings together, we can enhance the regulator’s ability to inspect and rate the safety and quality of health and social care services across England. 

“Our ultimate goal is to build a robust, effective regulator that can support a sustainable and high-performing NHS and social care system which the general public deserves.”

To start to rebuild its credibility, Dr Dash has highlighted urgent actions the CQC can take, including overhauling the inspection and assessment system, rapidly improving operational performance and fixing faltering IT systems.

Professor Martin Green OBE, Chief Executive of Care England, said:

“It is clear that Dr Dash has listened to the voices of care providers, resulting in a clear set of recommendations. This report acknowledges the severe and systemic problems that sit at the very heart of the CQC and gives a specific set of steps that the regulator must take to improve performance and re-establish the sector’s long-eroded trust.

“This is going to be a long and difficult journey for the CQC, but one that is entirely necessary. Care England stands ready to work with the regulator to help them deliver an effective and supportive regulatory system that will be the cornerstone of public protection and delivers fair judgements across health and social care. 

“The CQC must embark on a radical improvement program that should not only include some tangible improvements in their performance, but also needs to move away from a culture of blame. We all want proportionate and effective regulation, and the challenge now is for CQC to take action and work with organisations across the sector to deliver it.”

This government has already recognised the need for change within the health and care sector. Dr Dash’s interim report comes just days after the announcement of Lord Darzi’s independent investigation into the NHS, the findings of which will inform the government’s 10-year plan to radically reform the NHS and build a health service that is fit for the future.  

Regulator investigates aid charity over concerns about links to a terrorism-promoting news outlet

Source: Charity Commission published on this website Thursday 25 July 2024 by Jill Powell

The regulator is investigating aid charity World Aid Convoy over concerns that the charity has links to a Hamas-supporting news agency.

World Aid Convoy’s stated purposes are to provide relief and assistance to victims of war and natural disaster. It was registered with the Charity Commission in 2014.

The Commission began engaging with the charity in March 2024, after concerns were raised that funds were being solicited for the charity through Gaza Now, a news agency that promotes Hamas and Palestinian Islamic Jihad. These concerns related to posts shared on social media and on the private messaging service Telegram. The charity was referred to in posts shared on Telegram’s ‘Gaza Now’ and ‘Gaza Now in English’ channels.

During the regulator’s engagement with the charity, Gaza Now and its founder were made subject to a full asset freeze by the UK Government, which has grounds to suspect that they are involved in terrorism.

Any association between charities and terrorism or extremism is a very serious regulatory concern. Such a link cannot be in the best interests of any charity and is damaging to public trust and confidence in the sector. 

Initial engagement with the trustees gave rise to further concerns about the charity’s management and governance, and the Commission escalated matters to a statutory inquiry in May 2024.  

The inquiry is seeking to determine what, if any, links there are between the charity and Gaza Now, if any funds have been provided to Gaza Now either directly or via partners, and if these funds can be fully accounted for. 

The Commission may extend the scope of the inquiry if additional regulatory issues emerge.  

It is the Commission’s policy, after it has concluded an inquiry, to publish a report detailing the issues examined, any action taken, and the inquiry’s outcomes. 

Ofcom research: A deep dive into deepfakes that demean, defraud and disinform

Source: Ofcom published on this website Tuesday 23 July 2024 by Jill Powell

  • Two in five people say they have seen at least one deepfake in the last six months – including depictions of sexual content, politicians, and scam adverts
  • Only one in ten are confident in their ability to spot them
  • Ofcom sets out what tech firms can do to tackle harmful deepfakes

As new Ofcom research reveals the prevalence of online deepfakes, we look at what can be done to tackle those that cause harm. 

Deepfakes are videos, pictures or audio clips made with artificial intelligence to look real. New Ofcom research, published today, has found that 43% of people aged 16+ say they have seen at least one deepfake online in the last six months – rising to 50% among children aged 8-15.

Among adults who say they have seen deepfake content, one in seven (14%) say they have seen a sexual deepfake. Most of the available evidence indicates that the overwhelming majority of this content features women, many of whom suffer from anxiety, PTSD and suicidal ideation because of their experiences.

Of those who say they have seen a sexual deepfake, almost two thirds (64%) say it was of a celebrity or public figure, 15% say it was of someone they know, while 6% say it depicted themselves. Worryingly, 17% thought it depicted someone under the age of 18.

The most common type of deepfake 8-15-year-olds say they have encountered was a ‘funny or satirical deepfake’ (58%), followed by a deepfake scam advert (32%). 

Fewer than one in ten people aged 16+ (9%) say they are confident in their ability to identify a deepfake – although older children aged 8-15 are more likely to say so (20%).

Different deepfakes

Recent technological advances in Generative AI (GenAI) have transformed the landscape of deepfake production in the last two years. In a discussion paper, published today, we look at different types of deepfakes and what can be done to reduce the risk of people encountering harmful ones – without undermining the creation of legitimate and innocuous content.[3]

GenAI and synthetic content can augment TV and film; enhance photos and videos; create entertaining or satirical material; and aid the development of online safety technologies. It can also be used to facilitate industry training, medical treatments and criminal investigations. 

Some deepfakes, however, can cause significant harm, particularly in the following ways:

Deepfakes that demean – by falsely depicting someone in a particular scenario, for example sexual activity. They can be used to extort money from victims or to force them to share further sexual content.

Deepfakes that defraud – by misrepresenting someone else’s identity. They can be used in fake adverts and romance scams. 

Deepfakes that disinform – by spreading falsehoods widely across the internet, to influence opinion on key political or societal issues, such as elections, war, religion or health. 

In reality, there will be cases where a deepfake cuts across multiple categories. Women journalists, for example, are often the victims of sexualised deepfakes, which not only demean those featured but may contribute towards a chilling effect on critical journalism.

What tech firms could do

Addressing harmful deepfakes is likely to require action from all parts of the technology supply chain – from the developers that create GenAI models through to the user-facing platforms that act as spaces for deepfake content to be shared and amplified. 

We have looked at four routes tech firms could take to mitigate the risks of deepfakes:

  • Prevention: AI model developers can use prompt filters to prevent certain types of content from being created (a minimal sketch follows this list); remove harmful content from model training datasets; and use output filters that automatically block harmful content from being generated. They can also conduct ‘red teaming’ exercises – a type of AI model evaluation used to identify vulnerabilities.[4]
  • Embedding: AI model developers and online platforms can embed imperceptible watermarks on content, to make it detectable using a deep learning algorithm; attach metadata to content when it is created; and automatically add visible labels to AI-generated content when it is uploaded. 
  • Detection: Online platforms can use automated and human-led content reviews to help distinguish real from fake content, even where no contextual data has been attached to it – for example, machine learning classifiers trained on known deepfake content (see the classifier sketch below).
  • Enforcement: Online services can set clear rules within their terms of service and community guidelines about the types of synthetic content that can be created and shared on their platform, and act against users that breach those rules, for example by taking down content and suspending or removing user accounts. 
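To make the ‘prevention’ route more concrete, the short Python sketch below shows the shape of a prompt filter. It is illustrative only and not drawn from any particular developer’s system: the blocked patterns, the function names and the stand-in generate_image call are all hypothetical, and production systems typically rely on trained safety classifiers rather than simple keyword lists.

    # Minimal, illustrative prompt-filter sketch. All names are hypothetical;
    # real deployments use trained safety classifiers, not keyword lists.
    import re

    # Hypothetical patterns a developer might block before generation.
    BLOCKED_PATTERNS = [
        r"\bdeepfake\b.*\b(nude|sexual)\b",
        r"\bimpersonat\w*\b.*\b(politician|celebrity)\b",
    ]

    def passes_prompt_filter(prompt: str) -> bool:
        """Return False if the prompt matches any blocked pattern."""
        lowered = prompt.lower()
        return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

    def generate_image(prompt: str) -> str:
        """Stand-in for a real generation call; refuses filtered prompts."""
        if not passes_prompt_filter(prompt):
            return "REFUSED: prompt blocked by safety filter"
        # An output filter would additionally scan the generated content
        # here, before it is returned to the user.
        return f"IMAGE[{prompt}]"

    print(generate_image("a landscape at sunset"))         # allowed
    print(generate_image("deepfake nude of a celebrity"))  # refused

Even a toy filter like this illustrates the trade-off the discussion paper points to: patterns that are too broad risk blocking innocuous or satirical content, while patterns that are too narrow are easily evaded by rephrasing.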

These are not requirements, but all the above interventions could help mitigate the creation and spread of harmful deepfakes. However, there is no silver bullet solution, and tackling them requires a multi-pronged approach. 
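On the detection route, the sketch below shows the kind of supervised machine learning classifier mentioned in the list above. It is a minimal illustration using scikit-learn and randomly generated stand-in features: a real detector would be trained on features extracted from large labelled corpora of genuine and deepfake media, not on synthetic numbers.

    # Minimal sketch of a supervised deepfake-detection classifier.
    # The features are synthetic stand-ins; a real system would extract
    # forensic features (blending artefacts, frequency traces) from media.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # 500 'real' and 500 'fake' examples with a small distributional shift.
    X_real = rng.normal(loc=0.0, scale=1.0, size=(500, 32))
    X_fake = rng.normal(loc=0.5, scale=1.0, size=(500, 32))
    X = np.vstack([X_real, X_fake])
    y = np.array([0] * 500 + [1] * 500)  # 0 = real, 1 = deepfake

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

    # Score a new item: the probability that it is a deepfake.
    print(f"P(deepfake) = {clf.predict_proba(X_test[:1])[0, 1]:.2f}")

The accuracy printed here is meaningless beyond the toy data; the point is the shape of the pipeline: labelled examples in, a probability that a new item is a deepfake out.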

What Ofcom is doing

Illegal deepfakes can have devastating consequences, and are often targeted at women. We’re working at pace to consult on how platforms should comply with their new duties under the Online Safety Act. That’ll include guidance on protecting women and girls.

If regulated platforms fail to meet their duties when the time comes, we will have a broad range of enforcement powers at our disposal to ensure they are held fully accountable for the safety of their users.

Gill Whitehead, Ofcom's Online Safety Group Director, said:

“When the new duties under the Online Safety Act come into force next year, regulated services like social media firms and search engines will have to assess the risk of illegal content or activity on their platforms – including many types of deepfake content (though not all types are captured by the online safety regime) – take steps to stop it appearing, and act quickly to remove it when they become aware of it.

“In our draft illegal harms and children’s safety codes, we have recommended robust measures that services can take to tackle illegal and harmful deepfakes. These include measures relating to user verification and labelling schemes, recommender algorithm design, content moderation, and user reporting and complaints. These represent our ‘first-edition’ codes and we are already looking at how we can strengthen them in the future as our evidence grows. 

“We are also encouraging tech firms that are not regulated under the Online Safety Act, such as AI model developers and hosts, to make their technology safer by design using measures we have set out today.”

Essex school reprimanded after using facial recognition technology for canteen payments

Source: Information Commissioner’s Office (ICO) published on this website Wednesday 24 July 2024 by Jill Powell

The ICO has issued a reprimand to a school that broke the law when it introduced facial recognition technology (FRT).

Chelmer Valley High School, in Chelmsford, Essex, first started using the technology in March 2023 to take cashless canteen payments from students.

FRT processes biometric data to uniquely identify people and is likely to result in high data protection risks. To use it legally and responsibly, organisations must have a data protection impact assessment (DPIA) in place. This is to identify and manage the higher risks that may arise from processing sensitive data.

Chelmer Valley High School, which has around 1,200 pupils aged 11-18, failed to carry out a DPIA before starting to use the FRT. This meant no prior assessment was made of the risks to the children's information. The school had not properly obtained clear permission to process the students’ biometric information and the students were not given the opportunity to decide whether they did or didn’t want it used in this way.

Lynne Currie, ICO Head of Privacy Innovation, said:

“Handling people’s information correctly in a school canteen environment is as important as the handling of the food itself. We expect all organisations to carry out the necessary assessments when deploying a new technology to mitigate any data protection risks and ensure their compliance with data protection laws.

“We’ve taken action against this school to show introducing measures such as FRT should not be taken lightly, particularly when it involves children.

“We don’t want this to deter other schools from embracing new technologies. But this must be done correctly with data protection at the forefront, championing trust, protecting children’s privacy and safeguarding their rights.”

Chelmer Valley High School also failed to seek opinions from its data protection officer or consult with parents and students before implementing the technology.

In March 2023, a letter was sent to parents with a slip for them to return if they did not want their child to participate in the FRT. Affirmative 'opt-in' consent wasn't sought at this time, meaning that until November 2023 the school was wrongly relying on assumed consent. The law does not deem ‘opt out’ a valid form of consent and requires explicit permission. The ICO’s reprimand also notes that most students were old enough to provide their own consent. Therefore, parental opt-out deprived students of the ability to exercise their rights and freedoms.

Ms Currie added:

“A DPIA is required by law – it's not a tick-box exercise. It’s a vital tool that protects the rights of users, provides accountability and encourages organisations to think about data protection at the start of a project.”

The ICO has provided Chelmer Valley High School with recommendations for the future.

The Charity Commission is calling on charities that hold online meetings to review their governing document to ensure it is up to date.

Source: The Charity Commission published on this website Monday 22 July 2024 by Jill Powell

The call comes in the regulator’s redesigned guidance on charities and meetings (known as CC48), published Friday 19 July 2024.

The refreshed guidance emphasises the importance of complying with a charity’s governing document when holding meetings.

The guidance stresses that trustees should make sure any rules around holding meetings are up to date and practical. This has become particularly important now that many charity meetings are held online.

This includes updating a charity’s governing document to set out details such as how votes will be held at virtual meetings and whether all meetings will be virtual or hybrid.

It also recommends that charities that wish to hold virtual meetings have a policy that says how people can ask questions, join in the debate, and what would happen to the meeting if there were technical problems.

The guidance covers all types of charities including membership charities, as well as different types of meetings, such as trustee meetings and Annual General Meetings.

Sam Jackson, Assistant Director, Policy and Strategy at the Charity Commission said:

“The ways in which people communicate have rapidly evolved since the pandemic, and it is now very common for charities to conduct their meetings online or in a hybrid form. Our revised guidance reflects this development, and emphasises the importance of following a charity’s governing document and keeping it up to date to ensure good governance.

“After seeking feedback from trustees, we’ve also made the guidance shorter and easier to understand. Through these improvements, we hope to make it easier for trustees to know what is expected of them, and how they can act in the best interests of their charities.”

The guidance is available on the Charity Commission’s gov.uk page.