In this episode I will be welcoming back another previous guest of Narratives of Purpose, Bart De Witte, to hear more about his journey of making medical AI accessible.
Bart is the founder of the Hippo AI Foundation, which aims to develop medical AI for the common good by liberating future medical knowledge and making it open to everyone.
We spoke in November 2022, and before you tune into today's episode, I highly encourage you to listen back to our conversation in Episode 41 to be reminded of Hippo AI's mission, then come back to today's episode to hear how Bart's work has evolved since.
Following the recording of this episode, you will find information from Bart’s keynote speech at the WHO Europe Conference here.
If you enjoyed this episode, please leave us a review or connect with us via email: email@example.com.
Unlock exclusive content and access to our podcast while supporting our show. How is that possible? Become a Narratives of Purpose patron: patreon.com/noppodcast
Hello and welcome to a new episode of Narratives of Purpose, a place for conversations with inspiring leaders that is all about amplifying social impact. I bring you unique stories of changemakers, of people who are contributing to make a difference in society, and by showcasing these individual journeys, I would like to inspire you to take action. If you are tuning in for the first time, my name is Claire Murigande and I am your host on this podcast. In this new season, I am welcoming back previous guests to find out how their companies or their organisations have grown since they were first featured on Narratives of Purpose. In today's episode, I am catching up with Bart De Witte, one of the most influential thought leaders of the digital health era. Bart is based in Germany. He is the founder of the Hippo AI Foundation, a nonprofit organisation that focuses on creating data and artificial intelligence commons for the digital ecosystem to fight health disparities and ensure health equity. With this foundation, Bart aims to be instrumental in driving the democratisation of medical artificial intelligence. Please take a moment to rate and review our show wherever you listen to your podcasts. This will help other listeners find Narratives of Purpose and further amplify the stories of change we bring on our show. Alright, now let's jump right into the discussion with Bart.
It's quite an exciting technology - a bunch of technologies. And there is no single definition of AI in that sense. So at this stage, AI is more about learning from the historical data of decisions we have made, to replicate that sort of 'decision making' in healthcare: this is about recognising patterns in histopathologic images, or in radiology images or CT scans. But it's also about trying to understand perhaps the language of life, which is encoded in four different letters, ACTG. It is still a mysterious language, but I think machine learning will allow us to decrypt that language and accelerate discoveries in that sense. And I started to ask myself the question, "What is the purpose of the technology?" And I saw the power, because I think it was the missing element: you have the internet and computer technologies, but then you need AI, and if you have these three elements, you can truly democratise knowledge. And if you do it in the right way, we could make the world so much better. So I started thinking how I could do this. And that's why I decided to leave my career and create a nonprofit. And I had this very naive thought that this nonprofit should be something like a mixture of the Linux Foundation, Doctors Without Borders and the Ocean Cleanup Project, because I think these purpose-driven organisations are coming more and more into fashion. And I also understood that if Doctors Without Borders is able to collect 1.6 billion in donations every single year... if somebody can manage to do that at scale, and you can collect that same amount of money to create data and license it as openly accessible data, you could probably change so much more than I ever would be able to in any job.
That was a short clip from my first interview with Bart, which was released exactly one year ago. He was featured in Episode 41, published in November 2022. I encourage you to listen again to that conversation to hear about Bart's absolutely fascinating background and what drove him to create the Hippo AI Foundation. Like every guest I talk to on the podcast, I have been following his work since we first spoke. I caught up with him a few weeks ago to learn more about how the Hippo AI Foundation has evolved in the past 12 months, but also to find out about new developments and Bart's personal journey as a founder. Take a listen.
Thank you so much Claire for having me again and allowing me to update you on my journey.
It's a pleasure to have you, because I have to say, I've been fascinated by your journey and I've been following a lot of what Hippo AI is doing. And I know that you've launched something called Ask Paper, and there's also some quite exciting news: you will be speaking at the World Health Organisation Conference very soon, and you will also be at the UN General Assembly. But before we jump into it, could you briefly remind our listeners, in a couple of sentences: what is Hippo AI, and what is your overall mission with this foundation?
Yeah, so my name is Bart De Witte, for those who don't know me. I worked for 20 years in the healthcare IT industry. And after 20 years, I asked myself the question: why don't we see in healthcare the effects of democratisation that technology normally has? So I left my corporate life and started a mission to democratise the most important technology that is connected to healthcare, which is AI. I believe that as humanity we would make a big mistake if we started to close down all the knowledge and make it exclusively available to only a few, protected by intellectual property rights. I believe that if AI is trained on our patient data, then the outcomes and the model should be open and accessible to all, and I believe you can truly make a game changer out of that to establish health equity. It's been, I think, the most difficult journey of my life, but the most rewarding, because I'm fighting a lot of powers that want to capture a lot of value, and capture that value before it's actually deployed on the market. But I'm advancing, so I'm quite satisfied with where I am.
Well, that's good news. And it's good to know that you're advancing even though it's challenging. I want to speak about the changes since we spoke last time. So we spoke exactly one year ago, and I have seen in your newsletter that you've launched something called Ask Paper. Tell me about that, and how is it serving the healthcare community?
Last year, I decided to change the strategy slightly. So there are two directions in that strategy. One is directed to building a platform that focuses on hosting a community and enabling them to collaborate faster, work together faster, and giving them access to resources and tools so they can accelerate the development of open source medical AI models. And for that, we are building a for-profit stewardship model that is connected to the nonprofit, and we are hopefully getting a positive answer very soon on a larger funding pool, so we can go from Ask Paper to the fully deployed platform.
So what is Ask Paper? As you know, when you start with a big platform vision, you need to start with a small MVP. And the smallest thing we could build solves a problem that a lot of AI researchers have today: there are so many models and so many datasets out there being published, but they are not findable. And the reason they're not findable is that they are published within the body of a paper that is behind a paywall. It is as simple as that. Nature, or an Elsevier or Springer outlet, puts these papers behind a paywall, and the abstract does not say that a model or a dataset has been published. You need to go into the paper and start reading it, and somewhere in the paper there is a link. And that link then leads to GitHub or to some sort of repository where you can download the model. If you search Google, you can't find it either. If you go to GitHub, you won't find it either, because you cannot search with clinical terms. So what we did is look at the workflow of these researchers today and give them a tool so they can do this faster. We use a large language model, and we now also have a browser extension, so when you are on the Nature website reading a paper, you can use the browser extension to, for example, extract the datasets mentioned in the paper into your personal account profile. By doing so, these researchers gain two to three hours of work a day, and they can start building a knowledge base of all the datasets and models extracted.
The reason we did this MVP is of course threefold. First of all, to deliver value to these users and see if we can manage to deliver value, and the usage data and the NPS score, which measures satisfaction, are really good. The second thing is that we decided not to create our own user profile database, but to connect this to Discord. That means we wanted to capture the community, so we can start working with the community to build a community platform, because we want to do this in a very user-centric way. And now on Discord we have hundreds of AI researchers coming from all continents, from Kenya to Korea to South America, North America... And we can now tap into these communities and start working together with them. So our Discord server is where people can introduce themselves. There is a manifesto, and it starts building that community platform that we want to build ourselves. Discord is like an intermediate place where we give our community a home. And the third reason was that we want to build a proof of concept of a technology that allows us to build the next phase.
So what we're gonna do after Ask Paper is use the technology, and we have contact with organisations that allow us to scan 14 million papers at once. So we extract all these datasets at once and classify them using clinical terms. You can then use MeSH terms and say, "Okay, I want to look in dermatology for this sort of model", and then we will make them available without anybody having to read those papers.
The reason Elsevier and Springer are not offering this is that, if you look at copyright laws, they don't own any copyright on the link where that data or that model is stored. What I also learned is that a lot of our supporters are actually physicians or younger biotech engineers and bioinformaticians, and they are all looking for resources. So the first thing we are going to do on the platform development is work on project-based learning. We want to give students access to tools, educational tools, so they can start working on building and training their own AI models. And together with the community that we're building, we're gonna go and build that whole platform that allows them to go from an idea to a clinical-grade AI model, developed in a co-creation mode, globally, on open source principles. So the main goal is to build a platform that hosts the biggest community of open source medical AI, and we will make tools on the platform so the research that has been done and the models that have been trained can be used by the industry, in commercial use, but cannot be privatised; it stays open source. That's the first part of the strategy, building the platform.
The second thing that I changed from last year is... having a nonprofit is really, sorry for my words now, a pain in the ass. Living from charitable money doesn't really work in Europe. So I needed to find a new finance model, and that's a good thing: the more you are challenged, the more creative you become. Out of all these activities, there was one thing that I learned. There was one event, at the AI For Good conference, where a company from Mountain View, California, a big internet search company, was presenting their work in India, where they used eye data, like iris scan data, to detect blindness. You need ophthalmological skills for doing that, but if you don't have an ophthalmologist, you could do it with AI, so the company was presenting how good they were at doing that. And I asked them: "A) Is the data that you extract being published as open data so that these people in India have access? B) Is the model that you trained on that data open? And C) Did you ask for consent?" And to all three questions I didn't really get an answer; it was a corporate "blah, blah" and nobody was giving a real answer. So the answer was three times no. No consent, the data was not open, the model was privatised. And then I said, "Well, I come from Belgium, and this reminds me of my great-great-grandparents. They went to Congo and said, 'We're giving you health care in Congo, and we're taking all the natural resources out of your country.' I call this data colonialism." I got quite a few evil looks, saying "How dare you question this?" And I'm like, "Well, this is not good. There is no good in this. This is, for me, asocial behaviour: the common goods, the resources, you grab them and you privatise them, and then you create dependencies and you say, 'Well, we are doing good, but you are now dependent on us.' It has nothing to do with sustainability, with increasing resilience and building local communities, and all these things." So that was one event that triggered me.
The second one was the work that I did on breast cancer. We got support from a big pharma company from the UK and Sweden, and they said, "We want to support open source more. But if we do that, then the competitor in Switzerland, somewhere in Basel, also profits from it without having to invest." That's the prisoner's dilemma from game theory: if one shares and the other one doesn't, the one who shares 'loses', so there's a lack of incentive for openness. If you want these companies to invest in openness, you need to incentivize them with something positive. With all these things together, I started to look into the ESG framework, because the Environmental, Social and Governance model should actually support the sustainable development goals. And I said, "From 2025, the European Union is going to mandate these companies to publish ESG reports. On the Environmental side, everybody understands the carbon footprint of a company. But what is the Social footprint?" So I started to think further on that, and now we have created a new framework. It's called Regenerative AI, and it comes from the regenerative economy. We put Regenerative AI in opposition to Extractive AI.
So what is extractive AI? If you look at OpenAI, which published GPT-4: it took all the commons data, from Wikipedia, everything that the scientific community contributed to openness, took everything, and then didn't publish anything anymore. When you have a transaction with GPT-4, every transaction learns from your data. So they take, take, take, but they don't give back. That is asocial behaviour for us in that sense. You create asymmetries, which lead to power asymmetries or information asymmetries, and in healthcare this leads to inequalities. So we say, "This is extractive AI". On the regenerative AI side, there are principles that are connected to open source licensing, and other principles that we are now defining so that we can measure them. The idea is that we create what we call an index of Regenerative AI, where we measure all the AI systems in healthcare and we score them. If we score them, Moody's and other rating agencies can use these ratings to increase or decrease ESG scores based on the social or asocial behaviour of these companies. And we do this only for healthcare.
So with this, I wrote down a concept and then I contacted 70 people, among them two Nobel Prize laureates and many, many different authors from everywhere that I have connected with in the last few years, and we invited them to become co-authors. So I'm creating an open source project out of this, but I want this framework to be established, and in order to get acceptance, it's good to have quite a lot of authors there that support it. What we are doing now is fine-tuning the concept with four core authors. We pre-submitted it to a very renowned journal, and we're going to publish it this year. And then by the end of this year, we're going to do a kickoff event in Geneva, where we will present it, and we are hoping and planning to re-establish the Hippo AI Foundation as a Swiss foundation in Geneva, where the focus will be on publishing the index, and then introducing a concept which we call Health Data Offsetting. Some people criticise that approach, because companies can buy themselves free by spending money, but well, somebody needs to pay for the creation of the commons. So if we can sell these certificates for health data offsetting, and companies say, "I want to increase my ESG score on the social level, and I give 100 million for open source development", then we can grant these funds for the development of life-saving innovations in the AI space.
So that is the whole concept. And everybody I talked to was really fascinated, and that's the reason why next week I'm doing the first keynote at the European World Health Organisation Conference, where quite a lot of ministers are gonna listen to this story. And in two weeks, I'm going to be in New York at a side event at the General Assembly, where I was invited to the Advisory Board meeting of the council for AI governance. So I think I'm onto something with this. Suddenly you see traction, and then we can finally point fingers at that Mountain View, California company, where the European Union in Brussels, with all its regulations, didn't manage to do anything to lower the power asymmetries that these companies have. With this, we can. We can score and benchmark asocial behaviour, but we do it only in healthcare.
I'm quite surprised that up until now, even given the example you just shared about this company with the iris screening for blindness, nothing has happened. How is it that these established institutions, or even the European Union or governments, are not able to challenge these companies and what they're doing?
Yeah, it's called diplomacy. If you're a diplomat, you always avoid taking any strong standpoint. And if you look at the United Nations or the World Health Organisation, they have to be diplomatic because of their funders: the Bill and Melinda Gates Foundation, for example, is a large funder of the World Health Organisation, and they don't like open source too much; Bill Gates was calling open source 'communism' 20 years ago. And I started to realise that the influence of capital and industrial interests on research is quite large. Even in Germany, third-party funding for research comes from industry, and if you want to become a professor and get a chair, you need to come with external funding. That external funding is already influencing decision making on what we need. I had quite a lot of discussions here in Germany where they said, "Well, we need big tech, we need these third-party contracts for funding our research infrastructure." And I say, "What?! First we missed the opportunity to build our own cloud infrastructure, and now we are going to take that money and become even more dependent?" It doesn't make any sense. At some point we need to cut ties here and take a very strong standpoint. And if the politicians and the parties don't do it, and if the academic infrastructures cannot do it anymore because of the dependencies, then mostly these things are driven by civil society movements, or NGOs, or nonprofits. So I think I'm in the right spot to do this, because I can be non-diplomatic and strong in defending the norms and values and principles that we stand for.
So to this point, what are you expecting or envisioning as outcomes from your talk at this World Health Organisation conference? Because, as you say, you're very independent, and still you have a platform there and you have a voice, and you will be addressing these people. So what are you hoping or expecting in terms of outcomes from what you're going to deliver there?
Okay, I have 20 minutes, and in those 20 minutes I need to nail it. And nailing it means, A) breaking all the dogmas around open source. There are way too many people who still think that open source means giving everybody a free beer. That's not what we do. What we do is allow everybody to brew their own beer, which is something very different. We give everybody the possibility to be independent, sovereign and resilient in their digital healthcare needs. If I can get that across, that is already a huge win: this is not about communism, this is not anti-capitalistic at all, this is about finding a common way of collaboration between industry partners. The second thing I want them to understand is that there is no argument anymore against open source AI. Since we talked, the world of large language models has completely changed. It started with Bloom last year; I think I mentioned in our call that Bloom had just been published. Since then we have seen a tsunami of collaborations. To give you one example, Stable Diffusion, a diffusion model for text-to-image generation, was published as an open source model one and a half months ago, and it has been downloaded 2.7 million times. The people who download this are coders and hackers - that's 2.7 million people, and there is no corporation in the world that employs 2.7 million people. If you have so many people working on a model, they're going to cross-pollinate innovation, they're going to make it more efficient. And that's what we have seen happening over the whole last 12 months. I made jokes that some of the models will soon run on my bread toaster, because you need hardly any compute power anymore; they became so efficient that even Meta, Facebook, started to publish open source models. Now, we have to be careful with calling that open source, because they don't publish the source code, they don't publish the data, they only publish the weights file.
But at least with a weights file, there is a lot of value that they created. We need to watch out that we don't over-hype it, because it's still not open source and they don't share the data. But Llama, their model version two, the last one that came out, they released on a commercial open source licence, which completely changed the game. One week later, Microsoft said, "We're not going to only do OpenAI, we're going to work with Llama, open source." So they have a model on the Microsoft cloud that competes with OpenAI, which is a closed model. I don't see a future for OpenAI anymore. I think the open source world is eating the closed world very rapidly.
So what I want to explain next week in Porto is that there is no reason in healthcare for us to drive the closed world; we need, in Europe, regulations that drive openness. There is no reason, because the whole ecosystem has proven over the last 12 months that openness is faster. What do we need to save lives? Faster innovation. It is also cheaper. What do we need in health care? More access. So all these arguments that I'm going to bring will hopefully silence everybody who still says, "You need patent protection in order to get investments". That is not true. If 2.3 million people download a model in one and a half weeks' time, there is no investment behind all of these people; people have other drivers to work on this. So that's going to be my other point. And the third one is, of course, that I'm going to present the Regenerative AI model and try to find partners, because we need strong partnerships for that.
It's amazing what happened in just 12 months. And as you say, I mean, when everybody has the same access, then obviously, that's where the magic happens, right? Because you don't have restrictions. And people are really creative enough to build the solutions that are needed.
When we talked last time, we were still close to the post-COVID period, but there is something called 'the rubber band effect'. During COVID, everybody saw what openness meant and how fast we were; we did not have any barriers. The first mRNA vaccine candidates went into phase one clinical trials two weeks after the Chinese published the sequence data on virology.org. Two weeks, because there was no barrier. Everybody learned how to collaborate, and the reason is that we had a shared enemy, and a shared enemy leads to collaboration. The rubber band effect is that we stretched forward, but then suddenly the pandemic was let go, the rubber band snapped back, and we are back where we were, like nothing changed. Now Moderna is pushing up the prices of their drugs; we are back in the very old world where everything is siloed. And it doesn't make any sense. We live in the year 2023, we are talking on a computer connected to the internet, which is based on open source. I can talk to any person living in Africa over the internet, but when it comes to healthcare, that person would give his data to get a digital diagnosis based on some sequence data, and then, when there is a therapy, not be given access anymore. We need different models; we need to completely reinvent how we create value in healthcare. And I'm not against value in terms of earning money and being profitable, not at all. But I think we need to stop competing on owning life-saving goods and only giving 40% of the world access to them. We need to start competing on experiences: who creates the best experience. And then it can well be that some pharma company will go with Gucci and create the 500 million drug therapy or cancer therapy for some Saudis who want to afford it. That's fine, as long as the life-saving substance of that 500 million therapy is also available in a Ryanair concept.
If we can move to that, we will still see growth, we will still see investments, the industry will collaborate, and we will compete on experiences. And everybody who has been a patient knows that the experience really sucks, sorry for my words. I think my daughter was already born last time we spoke, but when we went to the university hospital here in Berlin, it was horrible, even when bringing your child into the world. There was no effort on experience. If you look at taking drugs, you still get a box with a paper you cannot read, with small print. With cars, when a model has to be recalled from the market, everybody who owns that car is informed. With drugs, nobody is informed; they don't tell you, "Oh, we had a bad batch, now we need to inform 100,000 patients." Nobody cares about the experience in that sense. We are stuck in Game of Thrones power plays, and we need to democratise this in order to create value on what I call the experience economy level.
I think you're onto something, as you were saying, and I am looking forward to seeing the outcome and the feedback from your talk next week in Porto, how this awareness grows, and what follows after this discussion.
I also have some traction. There is a documentary crew from Swiss television that is following me there, and things are happening; people are starting to realise. "That nutcase from Berlin, with his open, romantic, idealistic Don Quixote dreams" - those are all names I've been called in the last few years, by the way: a Don Quixote, a romanticist, a dreamer, an idealist, a communist. All these things - no, I'm none of them. I'm a realist. And I'm more of a realist than all those who close their eyes to the fact that we are creating a health system in the digital space that is going to lead to even more inequality if we allow data to be hoarded by a few single platform companies so that it doesn't flow anymore. All the others are romanticising, because they believe that everything is going to be as cheap as Uber. But they forget that the prices of Uber in New York increased by a factor of five or so in the last 10 years since they started, and now they have a monopoly and they have shareholders who need to see growth; of course they're going to increase prices. It's not because something is digital that it's gonna get cheaper.
And that actually comes back to saying that the models we've built over the past decades need to change. Like you were saying, Uber started at a low price, but now that they have shareholders, they basically slot into the system that was already built, and once you're in that system, you can charge for what you had disrupted before. So it's the same thing for healthcare: you need to disrupt the way it is now, because as digitization comes along, if you haven't disrupted the foundation of it, you're basically just carrying the same principles into a new environment. Right?
Absolutely. And I think what Uber has is that they own the supply and demand data of some cities, and that's why they can do this. What is disruption anyway? Disruption is mostly new technology that leads to cheaper prices and to more empowerment of everyone, for me at least. And healthcare hasn't been disrupted. I've been watching startup pitches since 2004, the first time I was in Silicon Valley. That's over 100 billion of venture capital investment in startups ago. In 2004, the total cost of healthcare in the US was 14% of GDP; now we are at 20%. I ask myself, "Okay, we invested 100 billion in startups, where is the disruption?" Everybody was promising disruption, and now US citizens are paying 6% of GDP more, while life expectancy is going down and the price of insulin went up 2,200%. Nothing really democratised. No, it's extraction by the financial markets; we're being sucked out... life is being capitalised, in that sense. And I think that should not be the model of healthcare, because it turns us into some sort of slavery where we are dependent on these systems in order to live. That is the absolute opposite of freedom and everything that we connect with democracy. I don't understand why people in the US aren't going out on the streets and screaming out very loud. It cannot be that in one single country the top 1% lives 15 years longer than the rest. And it was funny, because I was at the Lindau Nobel Laureate Meetings and I had to moderate a panel with three Nobel laureates in medicine and two young researchers. There was one young researcher from Harvard, and she was not very pleased, because the topic of my panel was AI and health equity, and she said, "There is no problem with health equity in that sense." And then, at the end, she said something like, "Well, I was once in India, and I saw people living without shelter. So we need to look at the sources of why health inequity exists."
And I said to her, "You don't have to travel to India to see health inequalities. You are living in the US and you're talking about India." But she's completely blind to the fact that in her country 70% of personal bankruptcies are because of healthcare costs, people selling their houses because of cancer treatments, all these things. For me, this is absolute dystopia. And I think in Europe we need to not repeat that, and if the US can learn from us, then that's fine.
So my question to you now, before I let you go: you mentioned previously that people call you a dreamer, a romanticist, whatever. How would you encourage, or what would you advise, people who also see this potential, on how they can contribute to make things different and beneficial for everyone in the end?
That question I couldn't answer at the beginning, but now I've started to find an answer to it, because I think my main driver is focusing on the problem. And it's a very complex problem. Einstein supposedly said that if you had an hour to solve a problem, you should spend 55 minutes on the problem and 5 minutes on the solution. So start by really understanding your problem. I see way too many people repeating what other people said: "I'm going to solve it," without understanding systems theory and all the dynamics around it. So focus on the problem. Then, once you understand the problem, be very focused on your solution, and if nobody says you're crazy, it's a cliche, but probably your solution is not disruptive enough. And believe in yourself. The most challenging thing for me is really the moments of doubt. And the reason why I kept on going is because I have a good marriage and a wife who supports me. It's really true. She says, "Believe in yourself." And sometimes you need that from an external view, in moments when you're thinking, "Hey, I don't see any progress at all. I don't even know how to finance things and everything else." But then it's really about believing in what you want to achieve. And if your mission really touches people, a lot of people will agree with you, but they will not speak out for you, because they are captured in the system. I have many people working in pharma who say, "I love what you do, but I cannot support you publicly, because we are captured in our system." So I think you need to stick to your vision, believe in yourself, and have patience. Don't go for short-term exits, and don't believe that you can change complex problems within one to three years. That doesn't work. Everybody who claims they did something in two years, very quickly, never actually did that; you don't change complex issues that fast.
If you claim to do that, you just jumped on somebody else's work, and were probably surfing a wave that had already been created by others, in that sense.
I like that: believe in yourself and be patient. I have to say, it's consistent advice, or at least an experience I keep hearing from entrepreneurs: patience and persistence, and really sticking to your vision, whatever comes along. And something else you mentioned is having the right support system, because there will be more moments of doubt, and more challenges, than you can imagine, so having that support system is also helpful. Thank you very much for sharing that.
Thank you. Also, people are provoked by my hat, because it is, of course, a provocation.
"Make AI open again." So you have a red baseball cap, and it's written in white to "Make AI open again!"
People, of course, directly associate it with a presidential candidate, but these are also the colours of Switzerland. And of course "Make AI open again" is a play on "Make America great again." But it's provocative, and what I learned as well is that we need these provocations to shake people up. I had a journalist who was really upset about my cap. And I said, "Are we really going to discuss my cap? Or are you upset that our data is being appropriated? Do you get more upset by this cap, or by the problem we're discussing?" Because if it's the cap, everything is wrong in our world. If people living in Europe, who are not affected by anything the President does (he's not our president), get more upset about somebody wearing a cap...
They missed the point, basically.
But that's how people work. They get triggered, because they have been triggered the whole time on social media, and then they react. And that's why I provoke like this, and it works really well. Even Anne Lévy, the Director of the Federal Office of Public Health in Switzerland, I have a picture with her wearing it, and quite a lot of Nobel laureates have worn the cap. But it definitely starts the conversation. And that's why I'm so happy that you have me here. We need to think much more about the future of our healthcare systems, because it's going to affect all of us.
Well, thank you. Thank you as well for coming back again; it's been a real pleasure for me. And I'm happy that the platform I've built over the past few years is one where I can host someone like you, who opens up conversations and topics people don't usually think about but that are actually relevant to everyday life. So thank you so much.
Excellent. Thank you so much Claire.
The mission of Bart's organisation, the Hippo AI Foundation, is to develop medical AI for the common good by liberating all future medical knowledge, essentially by making it open to everyone. If you wish to support the foundation, or even join the Hippo AI community, then check out their website at hippoai.org. As always, you will find the link in the show notes. I would also like to share a couple of additional resources you might want to look into that were published after we recorded this conversation. The first one is a blog post titled "The Elevator, Trust and the Data Commons: Bart De Witte Makes the Case for Open AI for Health at WHO Europe." It was written by Jane Sarasohn-Kahn on her platform Health Populi. And the second one is the 20th edition of the Hippogramme, the Hippo AI newsletter, titled "Regeneration, sustainability." All the links to these resources are of course available in the show notes. Thank you so much for tuning in today. I appreciate you taking the time. That was episode 61, "A New Conversation with Bart De Witte on Making Medical AI a Common Good." Make sure you leave us a review wherever you listen to podcasts. And if you like what you're hearing, remember to share this episode with a friend, a colleague, or even a family member.
Until the next episode, take care of yourselves, stay well and stay inspired.