Balancing Innovation and Regulation in AI: A Thought-Provoking Discussion

Watch the full London Tech Week 2024 panel debate on innovation vs regulation in AI. This session is chaired by Simon Edward, Non-Executive Director & Advisor at SB BID. Simon is joined by four technology leaders who discuss the importance and practice of finding a balance between innovation and regulation in the booming AI market.

Speakers

  • Simon Edward, Non-Executive Director & Advisor, Former VP & CMO IBM
  • Yatin Mahandru, VP Head of Public Sector and NHS, UK & Ireland - Cognizant
  • Emma Wright, Director - Interparliamentary Forum on Emerging Technologies
  • Kai Zenner, Head of Office & Digital Policy Adviser - European Parliament
  • Zinnya del Villar, Data and Technology Director - Data-Pop Alliance

Want to experience talks like this one in June 2025? Secure your spot at London Tech Week 2025!

Please note: the article below has been created from a transcription of the video.

Balancing Innovation and Regulation: Deciding the Future of AI

Weighing up Pro-Innovation and Pro-Regulation Arguments in Artificial Intelligence

Simon

There's been loads and loads of debate around AI, and really about the balance between innovation and regulation. Many people are pro-innovation and driving forward, and many people are pro-regulation and making sure that we've got all those things in place around security and around safety, to make sure we can control it. So we have a fantastic panel here today to open up a discussion about that: pro-innovation, pro-regulation, and how do we balance it.

And I hope they're going to argue with each other. We certainly don't want to sit here and all agree. We'll let everybody introduce themselves. As I said, I'm Simon Edward. I was a Vice President and CMO at IBM. Now I do a lot of work on the advisory side of things, and I'm an ambassador with London Tech Week, hence I'm on stage here. But let's start by giving everybody a chance to introduce themselves.

Zinnya

Thank you. I am Zinnya del Villar. I am the Data and Technology Director of Data-Pop Alliance, a nonprofit organisation founded in 2012 in the USA by the Harvard Humanitarian Initiative, ODI, and MIT Connection Science. It's a group of researchers and activists who want to change the world with data. Of course, at the beginning of the organisation the focus was on big data; now we process data with AI systems to help advance the SDGs. So, thank you.

Simon

Thank you, Zinnya.

Emma

Hi everyone, my name is Emma Wright. I'm a tech lawyer, so I help startups, investors, and big corporates commercialise technology, or procure it, and look at the regulatory frameworks that apply. About five years ago I founded a not-for-profit called the Inter-Parliamentary Forum on Emerging Technologies with a junior MP at the time called Darren Jones. We've built a network of legislators from 30 countries, and we're partnering with UNESCO on the implementation of the Recommendation on the Ethics of AI globally.

Kai

Yes, I'm Kai Zenner, head of office for MEP Axel Voss in the European Parliament. Over the last seven years I was heavily engaged on the European AI Act, trying to find the right balance between innovation and protection. In the end it kind of worked, and now it's all about enforcing and implementing it, and there too we will be heavily involved.

Yatin

Thank you, Kai. Good morning, everyone. I'm Yatin Mahandru. I head up public sector and health for Cognizant. We're a 350,000-person organisation worldwide, and we have a stand just down there. My interest is in AI in public services, and the balance between regulation and innovation.


The Importance of Innovation

Simon

So look, absolutely fantastic set of people on the panel here and I'm delighted to have the chance to have the discussion. So let's dive in and start with pro-innovation. London Tech Week's all about getting the best out of technology and seeing what's going on. So if we start with the innovation side of things, what are the key arguments for prioritising innovation over regulation as we think about the development of AI technologies? And I'm going to let you choose who wants to start on that one.

Yatin

I'm happy to go first. I thought you might. I think there is a genuine drive for AI. I've been in the industry a very long time, as you can probably tell, and I think this is genuinely going to change the way we deliver services. There are some consequences and challenges, which we will talk about under regulation. But if you look at what's happening in pharma, in precision medicine, in time to market for drugs, and what is happening in all the other areas of AI as it picks up speed, I think it's a big, big opportunity.

Kai

I can maybe just follow up a little bit on that, because, as I said, we had huge debates in the European Parliament on exactly that point. On the pro-innovation side, we, and I'm working for the conservative group in the Parliament, were always underlining that without technology we will not have the chance, for example, to reach our climate goals in Europe, the Green Deal. It would allow us to save a lot of energy, but also to fix other things like supply chains, dependencies, and so on. So if you look at the opportunities of AI, there are a lot. The second thing that was very strong in the European Parliament, and actually shared by all political groups, was this idea that our companies are lagging behind, especially in technological terms, and AI, and especially gen AI, as a new general-purpose technology gives us a chance to catch up, to build up something that can maybe compete again, Europe-wide and worldwide. So those were the main positive points. Emma?


Emma

So I just think if you look at the interface of physical infrastructure and digital infrastructure, and the amount of data we've produced through our wonderful devices, we've reached a tipping point where we've got information overload, but also applying more people to a physical infrastructure problem doesn't solve it. We have to find a way of doing more without adding more people to it. So I think with AI there is a huge opportunity. We can talk about whether it's AI or whether it's just automation, and I think that's an issue around the language we use, but I do think there has to be a drive to apply the technology to some of our very complex problems.


Navigating Ethical Considerations in AI

Simon

So, Zinnya, thinking about this discussion then: it sounds like innovation is really important and we need to do it. How do we do all of that and make sure we don't compromise some of those other standards out there, from an ethical point of view, from a security point of view? What do we need to start focusing on?

Zinnya

I think that we need to take a step back. First of all, as a country or a region, we need to have a clear AI strategy or digital strategy, and use AI according to that. With that strategy we can filter the use cases where we need to push for innovation: for example, crises. Take the climate crisis; we have a lot of climate crises right now. In some cases we can't access very rich data, for example CDRs, the call detail records of telecom companies, because of regulation, and we can't innovate because of that. So in cases where you have a flood, for example, you can't use this type of data, even though you could improve the response with all the technology that you have. So I think an AI or digital strategy is needed for each country or region. We can't talk about AI in a general way across all the use cases; we need to filter them and create a strategy for each country and each region.

 

The Role of Governance in AI

Simon

So, building on that, Kai, you spoke about regulation and making sure that we've got the right governance in place. How do we make sure it's deployed in a way that people get value from it, but at the same time has the safety, the security, and the knowledge in that governance? Because there's a balance there.

Kai

So, it's a really good question and also a tricky one. While a lot of things in the world are not working so well, here is one thing that is actually working, because the international community of states was rather successful in agreeing on a common set of principles. The OECD adopted a list of ethical principles on AI in 2019. This was then adopted by the G20, so also by countries like India and China. What we are seeing now across the world is basically a lot of different AI policies and AI regulations, but at least they are all based on the same set of core principles. In the European Union, as I said, I see some points in the AI Act as rather negative, other points as very positive. It is positive that we tried, again, to include those internationally accepted principles, but then have them specified by industry players and by standardisation bodies via harmonised technical standards. So basically you have the AI Act as a kind of general direction, and then you have concrete design choices via technical standards. The big risk right now, and we were actually talking a little bit about it in our prep talk, is that standardisation bodies are of course often dominated by a few players. Your former company, for example, is also very strong there. How can you say that? So it's now basically all about including smaller market players, civil society, academics, and so on, so that this in-general good system that I was talking about really works.

 

The Intersection of Regulation and Innovation

Simon

Emma, you're looking at a lot of this, talking to and advising many companies from the legal point of view. How do you balance keeping the focus on innovation whilst keeping the organisation safe? What's your advice in that area?

Emma

So I think regulation and innovation are two sides of the same coin, in the sense that, actually, for investment, people need to know what the regulatory framework might be. Five years ago the forum was called the Institute of AI, and we were told it was a bit niche, that AI was a long way away, and obviously things have changed. But, you know, five years ago it was a bit of a Wild West, and obviously countries are looking to catch up; that's an age-old problem, that governments can't respond as quickly to these issues. So I think what the EU has done, a bit like it did with GDPR, is commendable. I fear that very overburdensome regulation can sometimes allow those that are already big market players (I'm not saying significant market players, because that has a legal meaning) to entrench their position and maybe stifle innovation. But it depends, obviously, on the companies that we're advising: if we're negotiating against Uber on an algorithm, that's a very different discussion than advising a startup on how it commercialises its product and stays compliant.


Simon

And it's always difficult. At times you're sort of building the plane while you're flying, trying to work out the regulation that's going to come and you're already in the air. So it is really difficult to advise people while you're doing that.

Emma

Yeah, and so that's why countries differ: Japan, you know, has very clearly indicated that it's not looking to over-regulate in the AI space; the EU has taken a different approach; and we've even got the States. We'll see what the UK brings, but at the moment, although I commend the regulators being given the ability to regulate in their own industry sectors, we probably need a bit more certainty.

 

Real-World Applications of AI

Simon

So Yatin, while we're talking about this, this section is pro-innovation: how are people using AI to transform? I mean, what are the real-life cases that you're seeing from a Cognizant point of view?

Yatin

So we're seeing quite a few in public services. People are looking at casework as an ideal opportunity; it alleviates the burden and frees people up to do other things. In health, we've created health avatars in the Middle East where people are undergoing their diagnosis and treatment via AI, context-driven AI, gen AI. So there are a lot of opportunities like that. I did want to pick up on the regulation point, if I may, from a UK perspective. I think, as Emma said, you can regulate too much. But for us in the UK, we have DSIT, the Department for Science, Innovation and Technology, driving the regulators in a certain way, coordinating, looking to make things transparent in terms of algorithms and so on. So if you're really bored, read the pro-innovation regulation paper that they published. I have a view that maybe we need an AI regulator, in the sense of an agile one, whether it's the Financial Conduct Authority or a CMA lookalike, because our market is going to be different from the European Union now. And that is a key driver going forward. So it'll be interesting to hear what Peter Kyle says when he speaks tomorrow.


The Challenges of Regulation

Simon

It's a good segue to the other half of this argument. We've been talking about pro-innovation and all the great stuff that AI can deliver, and making sure we unlock it. But let's take the other side of the coin for a second, and then we'll bring it back together. Leading on from what you said, and you've all worked on the regulation side of things, there is a need for regulation, sensible regulation. Where do we go with that? Where does regulation fit, so that we make sure we can still unlock that innovation but at the same time have the right regulation in place? Yeah, go on, Emma.

Emma

Okay, so we have equalities legislation. We've had it for quite a long time. That doesn't mean we have no gender pay gap. So I'm not convinced that we need a great deal more regulation. What we do need is more certainty around where we're heading on the regulatory front, and capability and capacity within our current regulators, this is from a UK perspective, in order to enforce and to have those conversations. Because I do believe that AI is a digital infrastructure: it will be all-pervasive, and we should actually recognise it as such. I'm going to date myself now, but I worked on the telecoms regulatory framework in the late 90s. It was seen as an infrastructure, and I believe a lot of the lessons we learned around telco regulation could be applied in this situation, rather than just letting it rip. But I think a lot of the legislation is there; we just need to know how to apply it and enforce it. We talk about AI as this amorphous magic that happens somewhere. It's maths. It has consequences. We need to apply our current rules to ensure those outcomes and consequences are what we expect to happen. So, we haven't been controversial.

 

Thoughts on Regulation

Simon

Enough regulation, no more regulation, but deploy it, enforce it. Do others agree with that, or are there certain areas where we need more? Disagree by all means, go for it.


Kai

On a personal level, I'm very close to Emma's view, because I do agree that AI is new in many use cases or sectors, and yet with the existing legislation, at least in the European Union, we could already cover most of it. Now, with my other hat on, from the European Parliament, and I talked about those two positive points that we needed to underline, what we always discussed were three reasons why we need to regulate artificial intelligence on top of everything we already have. The first goes back again to market concentration: we saw from the very beginning a situation where big players are also very dominant in the new AI market. Secondly, there was this fear that AI is called a black box for a reason, that discrimination, for example, would in the future be even harder to detect, and therefore we wanted, with the European AI Act, to make it traceable, to bring more information and technical documentation, in order to make it possible to detect those hidden biases in AI systems. And a third point, and this was a main point especially for the older generation in the European Parliament, was this idea that AI is replacing human decisions, replacing the human individual per se. Therefore we now have, for example, going back to the OECD principles, an article on human oversight in risky situations. Think about a surgery in a hospital: the AI system cannot make all the decisions and, for example, decide in the end that this person isn't worth it anymore because the chances of surviving are too small. In those situations there needs to be a human doctor who is basically making the final call.

 

Ensuring Transparency in AI

Simon

So transparency becomes a critical item in the discussion there. Zinnya, as we think about this, and we think about unlocking innovation, how do we put this regulation in without stifling human innovation? How do we access the data? How do we access what's going on? Because we've got to balance the two things.

Zinnya

Yes, and I think that is related to this point about transparency, and right now we can't do it with the large language models we are using. When someone like the CEO of OpenAI, the leader in this sector of generative AI, is asked directly, he has never given information about what type of data they are using. And we can't make them do it, because we can't implement regulation there: we can't see what is happening inside, we can't see the data they are using, we can't see the model itself or how the algorithm is working. So how can you regulate that? I think that is what the European Parliament is talking about: okay, we have this regulation, now how do we implement and enforce it? That's the big challenge, and it's a technical challenge that until now we haven't seen solved.


Addressing the Human Element

Simon

So it's not down to AI or no AI, regulation or no regulation; it's down to how we implement it in our organisations, with transparency, accessing the data. Kai, you mentioned the human side of things, you mentioned people, and we can't move forward without talking about people. So is AI going to take everybody's jobs? Is there regulation to stop that? Is it going to help humans be better humans? Is AI actually brilliant at experiences and cultural nuances? Where does it all fit?

Kai

Well, in Brussels, as you know, we have now also had elections, and soon there will be a new Commission and so on, and every new policymaker or political player wants to do his or her own thing, so there will very likely be new initiatives. Employment could be one of those areas where we maybe get a lex specialis AI act covering certain problems with AI deployment or the replacement of workers. Personally, again, we have had general-purpose technologies before, and I believe, yes, there is always a transition period, which is hard, and governments, and I say this as a more liberal-conservative person, definitely need to jump in and make the transition period easier to handle. But I believe that humans will find new areas where they are necessary. For example, in coding or in graphic design, maybe the drafts will in the future all come from ChatGPT and others, but it will still be the human designer that is making the final touch, let's call it like this.

 

Concerns for Future Generations

Simon

That's good to hear. There's still a role, because there are a lot of humans here. It's nice to see so many humans; it would be really boring if it was just some sort of AI sitting out there, although it would be a lot quieter. But anyway, we'll see where we go. Any other views on that, on the human side of it and people? Does anyone want to chip in with their views on how we make sure regulation supports the human side of things, so that people can excel and benefit from AI, as opposed to AI eating their lunch, eating their jobs?

Zinnya

I am very worried about children, for example, and I think that we need to focus on children right now. Right now, this generation is using a lot of these new technologies, and they are losing their creativity. Most of us were born without AI, right? But our children are growing up with AI from the very beginning of their lives. So what happens to their creativity, what happens to their own thoughts, what happens with this big problem of what in French we call pensée unique, a single way of thinking, where all of us have the same source of information? Is that right for our children?


The Role of Skills in AI Adoption

Simon

Emma, looks like you want to chip in.

Emma

Well, my experience of using machine learning in a legal capacity was that it saved about 20% of the time, and Klarna reported yesterday that that's been their experience across 20% of the workforce. I think there has to be an honest conversation, and I think there has to be a skills connection back to it, because we are becoming more efficient if we use AI. We need to have a conversation around AI being a tool, not the thing; it's a tool for us in the same way that a power tool is. But then the focus comes back to skills, and making sure that we're actually equipping people to build the trust and the adoption, because we won't build the business cases otherwise.


Simon

The world is changing, and we either change with it or we get left behind. Yatin, it looks like you want to make a comment here. Anything on the skills side of things, maybe from Cognizant?

Yatin

So I think what we see happening is a democratisation of technology. We see a new category of skills emerging; we call them prompt engineers, which means you don't always have to be the coder in Python, you have to know how to interact with a large language model to get the optimum result you need for the business. So there is a new class of skill emerging that we're training for now, and we see a lot of potential there. But, to Emma's point, I think there will be some displacement. And that's why, when you come back to regulation, for example, it's about trust. If citizens don't trust how you're going to use their data, and the algorithm you use to make a benefit decision or an immigration decision or a prime decision, they're not going to trust AI.

 

Evaluating Global Regulation Approaches

Simon

Everyone trusts it when it's in their favour, but as soon as it's not in their favour, it's a real black box: what's going on? So look, this part is about pro-regulation. Before we close on this piece of it, what are the examples of great regulation? Are there countries that are really getting it? You mentioned Japan, you mentioned Europe, you've mentioned the UK; who's really getting it right? And what are the examples of great regulation that unlock innovation, that enable technology, that enable people to be their best?

Zinnya

Yes, I think that one of the countries we need to learn from is China. They have a really clear strategy for what they want to do with digital technologies and with AI, and they know exactly how they regulate for it. Another one is Brazil. They have copied more or less the same roadmap as China: first have a clear strategy for what the country wants AI for, and then regulate according to that. So yes, those are two countries that I think are on a good path with regulation.

Simon

Yeah, Kai, looks like you want to jump in there. Go on, go for it.

Kai

No, just to complement here: the European Union is always good at making plans or agendas and so on, but we are terrible at implementing them. We also discussed that already, so there I definitely agree. It would be nice if, for example, the AI coordinated plan or the AI white paper, where those general ideas on how we bring the continent forward are set out, were really implemented. Maybe more to your question, I think there is actually regulatory competition out there, because the idea was for the EU to be the first with our shiny European AI Act, but it took us so long that others caught up.

There's the Canadian AIDA law, another horizontal law. There are indeed automated decision-making laws in China and elsewhere. There is of course the UK approach and the US approach, and if you compare them, I would say there are three clusters. First of all, horizontal AI legislation: I mentioned Canada, the European Union, Brazil, maybe China. Then there is a sectoral approach, where you have maybe ethical guidelines but then sectoral adjustments, like in the UK, or like with the AI Bill of Rights in the United States. And then you have, let's call it, a soft-law approach, like in Singapore, where they have non-binding codes of conduct.

Japan is also going in a similar direction, and so on. Right now the jury is still out, so it is way too early to say which one is best. Based on my experience, there will also not be one best system; probably there will be a mix, a bag of best practices, and in the future, especially those countries that are not at the forefront right now will cherry-pick a little bit: take something from China that is working, something from the EU that is working, and so on.

 

Looking Forward: Advice for Founders

Simon

So this brings us to the point that innovation and regulation need to sit together. So, Emma, as you think about these two areas here, and everybody's sitting in here going, God, what do we do with this? There's so much going on. Where do we go? What advice would you give to maybe start with some of the founders that are sitting here? And they're really looking to embrace AI, but they also need to navigate the complexities of innovation and regulation and how they can move forward. Is there advice that you turn around and say, just think about this or focus on that or get started here?

Emma

I'll just pick up on this other point and make the distinction: it's very different trying to impose regulation if you're an authoritarian regime compared to some of the Western countries, and I think there's a geopolitical aspect that will play into this, just to flag it. And what I'd say to a founder is that this actually affects the investment piece, because what you see is greater concern around foreign direct investment. Where it's AI or similar technologies, they're considered sensitive, and more and more legislation is coming in where countries are looking at where that investment is coming from. And obviously we've got, I think I was quoted as saying, the splinternet, where investment from the more authoritarian, less popular regimes is considered more questionable in certain sectors and certain countries. So I think you have to look at your target market and where you're also going to raise money, because if you're looking to target the EU you're going to have to comply. I was at an event in February in Slovenia where we had the Chinese government, the US Special Envoy for AI, and someone from the EU Commission all speaking, and they all said, we're exporting our AI governance to the rest of the world. One of them will win out, and I suspect it will be either the US or China. But that's my advice to founders: it's where you're getting your investment from, but also the market that you're targeting.

 

Business Strategies for AI Implementation

Simon

Yeah, so both sides of it, and I guess data comes into it as well because of the regulations around the data side of things, really important in terms of where that's residing.

Emma

And to be fair, the GDPR kind of set the gold standard for everyone, but that was partly because the US didn't have anything. As Kai said, the AI Act has taken slightly longer than anticipated, so it remains to be seen whether it sets the pace in the same way.

Simon

So if we look at the business side of things, that was the founders, and it sounds like good advice: watch where your capital comes from, because where the investment comes from is going to make a massive difference. But if we think about the business side, and Yatin, maybe this is one for you to kick off on, how do businesses stay agile? You've led large teams in multinational, global organisations. How do they manage it? How do they make sure they can get the most out of it, stay agile, and at the same time harness the regulation, harness the innovation, and help their clients move forward?

Yatin

Well, I think that's a challenge for all large organisations. In our case, what we have done is launch a framework, a specific framework sponsored by the CEO, and it's generated around 80,000 to 100,000 ideas in AI. In some organisations it is easier to do that. So we've ended up with AI for our payroll and for our recruitment in various geographies, and I can see government organisations and other private sector organisations looking to do the same thing. So actually it's a framework for innovation that is driving the AI part of it. For those who are building businesses, because we're all startups in AI, including Cognizant, the key thing we've found is you start with a proof of concept and you look to make sure it's going to be scalable. I think what I would call narrow-band AI is better than saying, I'm going to build an LLM to solve world hunger; that's very hard to do. What we find works is proof of concept, scalability, talk to the CISO, talk to the data protection people. Most of those basics are actually in place. And sometimes, as Emma said, it's machine learning, but we're actually driving a lot of context-driven AI now as well. We already have around 500 people working on that in the UK, which is not bad out of a workforce of around 7,000, and it's rapidly growing. I think that's very key to it, Simon: the framework to do that.


Strategies for Navigating Regulation

Simon

So, frameworks linking to governance. Are there any other comments on it? How do big businesses walk this tightrope of driving innovation, embedding and using AI, but at the same time maintaining regulatory compliance, often in an international marketplace where they're working across the different approaches to it?

Kai

Yeah, maybe also to complement a little bit. I think what I'm about to say is especially true for the European Union, but it also applies to most other regions in the world, and it has basically both been said already. Those companies that are early movers or early adopters, that are already engaging with the regulators and the enforcers, will really benefit a lot. In the European Union, the Commission is completely understaffed and underfunded, and has a huge list of tasks that it now needs to do: guidelines, delegated acts, and so on. So very likely, if a company comes and talks with them, showcasing certain things that are working, that maybe come from the medical area, medical devices, but can then be scaled up for other sectors, regulators will be extremely happy and maybe even grateful. And as a company, I think you will benefit a lot from such a, let's call it, VIP relationship with the regulator. The second thing is that everyone struggles right now, especially, going back again to the European AI Act, because it's rather vague and rather unclear. If you are an international company that now needs to fulfil what is coming soon in Canada, in the United States, and in the EU, the only way I think even large companies, as you mentioned, will manage to do it is if they use a network effect.


So, coming together: in Germany we have an initiative called Applied AI, but you can also use trade associations and figure out together how to do, for example, a proper risk assessment that works in the United States but also in the European Union. If you do both, I think you are very well prepared, and you have a huge competitive edge, because most companies are not that active right now.

 

Audience Questions and Closing Remarks

Simon

Now, we probably have time for one question, if there's one in the audience. Who's brave enough in this small audience? Anyone have a question that they want to put on the table? I can't see any hands. Nobody out there? There's a question over there. You know what, I have no idea if somebody's going to come to you, so I'm going to come down here and I'm going to give you the mic. So what's your question?

Audience Member 1

Hi, I'm Frank Hesse, a reporter at MLex. I'm just interested in what Mr Zenner has been saying about implementation of the AI Act. What do you think the big problems are for... can you hear me?

Yatin

We can't hear you at all over here, I'm afraid.

Simon

Question about the implementation of the AI Act and we'll get the other half of the question now.

Audience Member 1

Is the AI office up to scratch? Is it ready? What are we going to see?

Simon

Is the AI office up to scratch? What are we going to see? Somebody's phone is on the floor, but it's not mine. Anyone lost a phone? Anyone want a phone? Whose is it? It's over here, come and have it. I'm not doing your email, I've got enough of my own. Did you get the question up there? Yeah, go for it.

Kai

I think I will reply. So, right now, at least on my reading, they are not ready. The Commission is always good at presenting things as if everything is working and so on, but at least from our perspective in the Parliament, there are a lot of things that need to be done right now. There are around eight people, only eight people, working on content, and they have a lot of deliverables. For example, the prohibitions in the field of AI are kicking in already at the end of the year, and there need to be guidelines out there, because otherwise how should companies understand whether they fall under social scoring or not? The big question of what an AI system is remains unsolved; we use the AI system definition from the OECD, which is great, but legally there are still a lot of open questions. The good thing is that the Commission is getting a lot of pressure from us, but also from stakeholders, so they are looking into hiring a lot of people, 140. The first round is already over, the second round is starting now. The only big question mark is, will they be ready in time? As I said, the first deliverables are due at the end of the year, and then the next big one is the code of practice for general-purpose AI models, foundation models, which needs to be ready next year in April or June. If I had to bet, I would say they will not be ready by then, which would mean huge legal uncertainty for companies.

Audience Member 2

My name is Devika and I work at Avelaire Health, a consulting firm, but I'm also the founder of a company on the side called IMLA, which is all about women's health. My question was around the fact that the DHSC has just slashed £111 million from the NHS AI Lab. What are your thoughts on it? Why has it happened? I know there was loads of stuff that was apparently going to help reduce waiting lists and some amazing stuff coming out of it, so it's such a shame and I don't really know why it happened. If you have any inside intel, it'd be great to know.

Emma

So I think the overall message is there's not much money floating around at the moment, quite frankly, and I think there was also a sense that maybe some commitments had been made that might not have been delivered. What I do think is that, no matter whether it's red or blue come July the 5th, everyone recognises that we're only going to get incremental improvements through the adoption of more tech, whether that's AI or not, I don't know. I think the interesting bit around female health is that no one seems to be focused on the fact that we're actually different, and where that then goes; lots of us are still trying to raise that. But I think the money is just going to have to get spread, and that's my sense around it.


Simon

Okay, so look, I think we're coming towards the end of this session, so my summary of it is the following. And by the way, if you were asleep during it and you need to go back to the office to demonstrate the value of being here, take notes now. So: opportunity and debate; physical and digital infrastructure matter; data is critical; we've got to unlock new ways and target new uses. We want some certainty, but at the same time we want to enforce what we've got rather than generate more. Market concentration is a risk. The AI black box is not acceptable, so we want transparency. The human impact needs to be considered; children and creativity are certainly critical; democratisation and new skills are absolutely essential. Regulatory competition and the splinternet came up, as did the framework for innovation, narrow-band AI, and where relationships and ecosystems fit in. If you didn't catch all of that, my summary is new skills, transparency, implementation, and staying agile, because things are going to continue to change. So I'd like to say thank you very much to Zinnya, to Emma, to Kai and to Yatin for joining me here, and thank you to all of you for putting up with us in the noisy environment we're in; I hope you could hear most of it. Enjoy the rest of London Tech Week and do go out and talk to other people. Fantastic opportunities. Thank you, everybody. Thank you, panel.