Insights from the Generative AI Academic Advisory Group: September 2024

The Generative AI (GAI) Academic Advisory Group (AAG) is an informal, voluntary advisory group whose members provide their expertise on deepfake technology and the threat it poses to UK law enforcement. The AAG was established to support a wider cross-law enforcement response to deepfakes. Members of the AAG are academics with a range of research interests related to GAI, alongside policy leads for a number of GAI threat areas. Over 12 months, the AAG will meet online each quarter, and an insight into the discussions from each meeting will be provided.
This was the second AAG meeting. The group heard from colleagues in the Department for Science, Innovation and Technology (DSIT), the Forensic Capability Network (FCN), the Office of the Police Chief Scientific Adviser (OPCSA) and the Accelerated Capability Environment (ACE).
Members of DSIT presented their work on the link between GAI and trust. Given that the remit of this AAG includes the social and behavioural implications of deepfakes, this was an opportunity for DSIT to establish whether there was expertise in the group they could draw on, or relevant colleagues they could be connected to.
The FCN presented the findings of their assessment of the evidential risk that deepfakes pose to forensics. The assessment was based on current policing data and reports, so it was important to note that it is unlikely to portray the full picture and may therefore underplay the risk, given gaps in both the timeliness and completeness of data on prevalence and impact. The assessment captured specific areas where GAI poses risks to forensic investigations, as well as the risk associated with each deepfake form (audio, image, video). It also looked at current forensic demand and the impact on these services. A key takeaway was that no piece of evidence is viewed in isolation. The assessment will be repeated to monitor how this risk develops as GAI capabilities, and their ease of use by individuals, continue to evolve at pace.
OPCSA led a discussion on a priority set of questions for deepfake research. Activity to date has placed greater emphasis on technical solutions for detection and mitigation. To address this imbalance, the question set was separated into three themes focused on deepfake research activity outside of technical solutions.
These themes covered:
- Understanding the threat landscape of deepfakes for policing and how this might change over time.
- Behavioural research on public and wider stakeholder awareness, safety and reporting, as well as victim impacts and trust.
- Understanding the validity of detection approaches to support their use by the police and in courts, and public confidence in them.
There was a general view that research across these areas would be broadly welcomed, and OPCSA agreed to update the question set with the feedback from the group.
Finally, ACE presented insights into their organisation and the support they offer. They spoke about how ACE was created to solve the public safety and security challenges that arise from digital and data technologies. They focus on collaboration, bringing together expertise from industry and academia to deliver front-line mission impact at pace. ACE have supported some of the deepfake commissions to date and will provide a fuller update at the next AAG meeting, with a specific focus on the findings from the Deepfake Detection Challenge.