I recently had the chance to attend the RISK AI conference at the Pullman Hotel, right by King's Cross station in central London. Unlike many of the larger conferences this year, this one felt more like a presentation or panel discussion, thanks to the use of the venue's theater stage. The organizers did a great job of bringing together top experts from a wide range of roles to discuss the risks emerging as AI becomes more embedded in our businesses and daily lives. The day was divided into two main themes: the morning focused on the growing regulatory landscape around AI, while the afternoon conversation shifted to the broader risks of AI beyond purely legal and regulatory concerns.
The first session I attended featured a panel of experts from legal, consulting, and tech backgrounds who dove into the evolving regulatory frameworks for AI. One of the key topics was the EU AI Act, which recently passed and classifies AI systems by risk level, imposing stricter rules on high-risk areas such as healthcare, transportation, and law enforcement. Although the UK is no longer part of the EU, it is working on its own approach to AI regulation, which appears to be more flexible and risk-based, emphasizing innovation while still addressing potential harms. The UK's 2023 AI White Paper advocates a sector-specific approach, giving industries more freedom to self-regulate within broad ethical and legal guidelines. As AI adoption grows, the UK's regulatory framework will need to keep pace with both local needs and global standards to remain competitive.
The next session, “Navigating the Minefield of the AI Regulatory Landscape,” turned out to be a bit of a surprise. While the title suggested a legal focus, the panel actually included AI ethicists and strategists, who offered some interesting thoughts on how AI regulation could develop. One panelist argued that the perceived "remoteness" of an AI system, for instance when it is bought in and deployed by an organization rather than built in-house, may not be enough to shield companies from legal responsibility. James Wilson, AI ethicist at Capgemini, talked about the overlap between AI ethics and legislation, noting that as laws evolve quickly, a strong ethical foundation can help companies stay ahead of the game. He also touched on the risk of regulatory capture, where larger companies exert outsized influence over the rules meant to govern them, potentially weakening ethical and legal safeguards in the rush to bring AI products to market.
The final panel I attended was on diversity and inclusion in AI, and it raised some thought-provoking points about how societal biases shape AI development. Panelists shared personal experiences and examples of how AI tools can unintentionally reinforce biases, especially when training data reflects societal inequalities. The discussion wrapped up with a reminder that the computing power behind AI systems has largely been concentrated in affluent Western countries. As AI continues to evolve, this may change, and we should strive for a more inclusive, globally aware approach. Towards the end, the panel briefly mentioned the potential of smaller, specialized data sets, which can improve model accuracy while reducing the computational resources required, though this point deserved more attention than it received.