
Breaking into AI policy: Insights from five industry experts

By Kristina Fort, Summer 2024 Fellow




As a new and exciting field, AI policy attracts many people who would like to join it. Navigating such a rapidly developing area, however, can feel challenging and intimidating. I therefore posed the following five questions to five professionals in AI policy, asking them to share their wisdom and inspire others trying to find their way into AI policy:


  1. What is the most important skill for a young professional in AI policy?

  2. What should people entering the AI policy field highlight in their applications to your employer?

  3. What AI policy (research) topic seems the most impactful yet neglected to you?

  4. What is one book/newsletter/article that people interested in AI policy should read?

  5. What is the most useful advice you received when you were starting in AI policy?


My respondents were: 

  1. Oliver Guest from the Institute for AI Policy and Strategy (IAPS)

  2. Sebastien Krier from DeepMind

  3. Orsolya Dobe from Pivotal Research

  4. Saad Siddiqui from Safe AI Forum

  5. Chiara Gerosa from Talos Network


Here is what they shared with me:


What is the most important skill for a young professional in AI policy?


Oliver Guest: Quickly getting to “bottom lines” and communicating them clearly – even if they’re imperfect.


AI policy faces major uncertainties, such as what AI progress will look like or how different actors will react. As a result, it is often tempting to avoid answering key policy questions, or to conclude that more research is needed. But AI policy is fast-moving, and society already has important decisions to make about AI. If you clearly share your answer to key policy questions, you can help improve these decisions, even if your work is not perfect.


The best professionals will also make it clear how confident they are in their answer, and what evidence would change their mind. This reduces the risk of policymakers acting in a way that no longer makes sense as new evidence becomes available.


Sebastien Krier: The most important thing is having a diversity of skills. But the ‘skill’ I value the most is good judgment. It is essential, but it is something you need to build over time. At a more basic level, it is very important to understand who the different actors are, what their incentives and constraints are, what they are trying to do, and what success looks like for them. So a good knowledge of the actors in the space, a good knowledge of technical developments, and good political and commercial acumen seem to be key. You can do better work if you have a more accurate and nuanced model of the world.


Orsolya Dobe: Being able to translate technical concepts to non-technical audiences (e.g. policymakers) and explain how policy processes work in a clear way to people working in more technical roles.


Saad Siddiqui: Making clear recommendations while dealing with uncertainty is definitely valuable. Most organizations appreciate the ability to cope with fast-changing circumstances and the uncertainty around many claims about AI development and policy, while still making recommendations that speak to decision-makers.


Chiara Gerosa: The best advice I’ve ever received is to “be someone who follows through with what they commit to”. This applies more broadly than policy and is relevant to young professionals starting out in any field or organisation. If you can prove yourself to be reliable, people will keep looping you into projects, until eventually you’ll be in a position to more proactively choose the projects you want to be involved in.


Clear communication is a more policy-specific skill: being able to get your thoughts and ideas across succinctly, adapted to your audience, both verbally and in writing. This skill is key in an environment where you’ll constantly need to translate complex ideas and build shared models of the world with a variety of very different people.


What should people entering the AI policy field highlight in their applications to your employer?


Oliver Guest: Think tanks often like to see that you have already published policy writing, even if it is just on your blog. 


In the AI policy field in particular, there’s also a lot of demand for people who have not just policy but also relevant technical understanding. For example, if you have more than an amateur understanding of machine learning techniques, the technical details of model safety evaluations, or semiconductor manufacturing, highlighting this could be a big advantage.


Sebastien Krier: That of course depends on the person. First of all, you need to understand what the team you are applying to is doing. Then you have to demonstrate that the skills and knowledge you have developed are directly applicable, and that your networks and experiences really relate to that. You should avoid generalizations, generic answers, and truisms. Detail and some originality in the application are useful, showing that you have actually thought about the questions and know the main cruxes and key things the team is working on.


Orsolya Dobe: Thinking back to my time at the OECD’s AI Unit, policy analyst roles at international organisations usually require liaising with a wide range of stakeholders and coordinating meetings, talks, and other events. So beyond policy research experience, highlighting strong communication and organisational skills is usually helpful.


Saad Siddiqui: It's mostly the same things you'd highlight if applying for non-AI roles – knowledge of key terms and debates in the field, network, and interest in the AI policy direction of the target think-tank.


Chiara Gerosa: My answer is very similar to Seb’s! I think it depends on the organisation. CVs should always be tailored to the role you’re applying for, highlighting relevant experience with quantifiable achievements (e.g. “fundraised X amount over Y amount of time for Z project, resulting in ABC”).


What AI policy (research) topic seems the most impactful yet neglected to you?


Oliver Guest: What technical R&D should governments and philanthropic foundations be funding to reduce AI risks?


I expect these groups to become more and more interested in funding this kind of work as they become more concerned about AI risks. But some R&D topics might be much more helpful than others. Additionally, some R&D to reduce AI risks might be done by other actors in any case. For example, further work on RLHF might make chatbots less likely to help malicious users, but AI companies are already strongly incentivized to do this kind of work because it makes their products better. With good advice for potential funders, we might be able to significantly improve how newly available resources are used.


Sebastien Krier: Neglected is a hard one, because some topics, such as evaluations, are not neglected but are still very important. I believe that good regulatory design is fairly neglected, especially since it is the most important part of any legislation. We would probably benefit from something like ‘policy/legal red-teaming’. Relatedly, I think there is also a lack of people able to translate our technical reality into the demands of our regulatory system.


Orsolya Dobe: AI incident reporting stands out as a critical yet underexplored area in AI policy research. While many current and proposed AI regulations include incident reporting requirements, the specific details and implementation frameworks are still largely underdeveloped.


Saad Siddiqui: I think international AI governance-related topics are pretty neglected and deserve more research. Some time and energy has been spent thinking about models of international governance that are Western-led, but significantly less time has been spent thinking about worlds where you need more actors on board in an international order, and about what we realistically need to get to that stage (e.g., confidence-building measures). There is also a wide range of interesting standards-setting organizations and international bodies that are understudied (e.g., SPEC).


Chiara Gerosa: There are many issues that are key to progress in regulating AI but are somewhat neglected because they’re not “traditionally” AI governance questions in the way that, say, evals are. The increasing concentration of market power in the hands of frontier tech companies is one example: the fact that these companies are building their own energy plants might seem like an economic competition question on the surface. But understanding that a key motivation for building these plants is probably to power data centres to train ever-bigger models suddenly makes it an AI governance issue.


What is one book/newsletter/article that people interested in AI policy should read?


Oliver Guest: The Simon Institute newsletter is excellent for staying up to date with what’s going on in AI governance at the international level.


Sebastien Krier: I think Lennart Heim has written some interesting pieces on the need for technical policy people and why that matters. Also, GovAI publishes an annual research report; by reading the research highlights, you’ll get a pretty good sense of what is going on, so that would be a good starting point. When it comes to newsletters, I would recommend the usual suspects like Import AI, Zvi’s and Dean’s blogs, and of course our own AI Policy Perspectives. AI News and Interconnects are also very good for recent technical developments.


Orsolya Dobe: I would recommend the book Chip War by Chris Miller about the history and present implications of the semiconductor industry. It’s an engaging read and could be fascinating to anyone working on AI governance, especially compute. Also, I agree that Jack Clark’s Import AI newsletter is great.


Saad Siddiqui: Import AI is really good, though I'd also highly recommend the Cognitive Revolution to policy folks; it's much more product/business focused but it gives you a really clear taste for how people are actually trying to use AI today.


Chiara Gerosa: I like newsletters. Geopolitechs and Transformer Weekly are two good examples.


What is the most useful advice you received when you were starting in AI policy?


Oliver Guest: Who do you want to act differently, and why will your work make it more likely that they act differently? If you don’t have concrete answers to these questions, it’s often less likely that your work will have an impact, even if it is otherwise of very high quality.


Sebastien Krier: One useful piece of advice I got was to get to know as many people as possible. A very large network can be very useful and can help you go further and much faster in your career, giving you a good idea of what’s going on without wasting your time. Expert curation is actually very underrated.


Orsolya Dobe: That AI governance is still a very new and rapidly changing policy area, so it’s hard to tell what will be most important to work on in, say, two years. I think staying curious and being ready to pivot one’s focus area is a must.


Saad Siddiqui: Talk to lots of people – a lot of the most important knowledge is not written down anywhere.


Chiara Gerosa: As others have already mentioned: talk to lots of people. This is also a great way to start building your network, learn about opportunities, better understand the field, and potentially even find a mentor to learn from! Conferences are great for this, but you’d also be surprised how many people are open to a short call if you cold-contact them on LinkedIn – people love talking about themselves and sharing their experiences!




About the author


Kristina was a Summer 2024 Talos Fellow. She has done research on AI Safety Institutes as an AI Governance Fellow at Pivotal Research in London and has previously worked in policy for the Czech government and EU institutions. Her academic background lies in social sciences and international affairs, having studied in France, the US, and the UK.


Blog image by Susan Q Yin (@syinq), from Unsplash
