Google DeepMind strengthens cooperation with the UK AI Security Institute, jointly promoting AI safety and security research
The two organizations announce a deepening of their core research partnership in AI safety and security

- Google DeepMind and the UK AI Security Institute (AISI) are significantly strengthening their core research cooperation in AI safety and security.
- The heart of the partnership is a shift from post-hoc evaluation to designing safety in from the early stages of development.
- As a model of cooperation between a private AI company and a government safety agency, it is expected to influence the formation of global AI governance standards.
Key announcement: DeepMind-UK AISI cooperation expands
Google DeepMind and the UK AI Security Institute (AISI) have announced that they will significantly strengthen their cooperation on artificial intelligence (AI) safety and security. The two organizations plan to begin joint research on verifying the safety of AI systems and responding to security threats.
UK AISI is a government-affiliated body established in 2023 that is responsible for assessing the risks of AI systems and setting safety standards. The deepened partnership is attracting international attention as a model of cooperation between a leading private AI company and a government safety agency.
Why is this cooperation important?
This announcement carries significance beyond the signing of a simple memorandum of understanding (MOU). As debates on AI regulation accelerate worldwide, cases in which technology companies proactively cooperate with government safety agencies can point to the direction of future global AI governance.
In particular, the UK has emerged as a hub of international AI safety discussions through the AI Safety Summit held at Bletchley Park in November 2023. AISI was established as a follow-up to that summit and is building a network with AI safety institutes around the world.
Google DeepMind is a world-leading AI research lab that has developed groundbreaking AI systems such as AlphaGo and AlphaFold. With its Gemini series it is also among the leaders in the large language model (LLM) market, and it has recently been actively pursuing research and development in AI agents and multimodal AI.
What will be different: Comparison with existing collaborations
| Aspect | Previous cooperation | This partnership |
|---|---|---|
| Scope of cooperation | Focused on evaluating individual models | Integrated across all areas of safety and security |
| Research sharing | Limited information exchange | Joint use of key research results |
| Depth of participation | Advisory/evaluation level | Joint research projects |
| Target stage | Model verification before release | Safety design from the development stage |
The key to the deepened cooperation is the shift from 'post-hoc evaluation' to 'upfront collaboration'. Previously, government agencies assessed risks after an AI model had been developed; going forward, the partners are expected to adopt a 'safety by design' approach that builds safety standards in from the early stages of development.
Flow of global AI safety discussions
This partnership is an extension of the AI safety discussions that have accelerated since the emergence of ChatGPT in 2022.
2023 was the year AI safety rose onto the international agenda. Major jurisdictions such as the United States, the European Union (EU), the United Kingdom, and Japan began preparing AI regulatory frameworks, and at the Bletchley Summit in the UK, 28 countries adopted a joint declaration on AI risks.
The EU AI Act entered into force in 2024, and the United States strengthened AI safety standards through executive orders. At the same time, countries began establishing their own AI safety institutes: the UK AISI, the US AI Safety Institute (US AISI), and the Japan AI Safety Institute (Japan AISI) were all launched during this period.
As of 2025, AI safety discussions are moving from 'principles' to 'implementation'. Rather than abstract risk warnings, concrete evaluation methodologies, industry-government cooperation models, and international standard-setting have become the key agenda items. The partnership between Google DeepMind and UK AISI reflects this trend.
[AI Analysis] Future prospects and implications
The changes this strengthened cooperation may bring can be analyzed from three angles.
First, it is likely to influence the formation of industry standards. The safety evaluation framework that DeepMind is jointly developing with AISI could become a de facto industry standard, and competitors such as OpenAI, Anthropic, and Meta will have incentives to pursue similar collaboration models.
Second, the UK's AI governance leadership is likely to be strengthened. The partnership should contribute to the UK's strategy of positioning itself as a 'third axis' of AI between the US and China, and the international network centered on AISI may expand more quickly.
Third, it demonstrates the potential of a co-regulation model. Instead of the government unilaterally imposing rules, a 'co-regulation' model in which companies and government jointly design safety standards could spread. For this model to be effective, however, it must be backed by transparent information disclosure and an independent verification system.
At the same time, the potential side effects of such cooperation deserve attention. If a close relationship forms between a large corporation and a government agency, startups and open-source communities could be placed at a relative disadvantage. It is important to ensure the participation of diverse stakeholders so that AI safety discussions do not become a 'safety cartel' led by a few players.