Join me for today’s conversation where I will be speaking with Bart De Witte, a leading expert in digital transformation for healthcare in Europe and founder and CEO of the Hippo AI Foundation. As our world moves into the digital age, it’s the perfect time for this episode to provide you with insight, understanding and knowledge of how medical artificial intelligence can impact our ever-changing world for the better.
At a young age, Bart was inspired and empowered by technology; at 18 he wrote a school paper predicting the impact of technology on the African continent and beyond. This conversation will give you powerful insight into how working together and utilising artificial intelligence through open technologies in the healthcare sector can shape our world into a more equal place for our future and for many generations to come.
Bart shared that some of his favourite books are The Age of Surveillance Capitalism by Shoshana Zuboff and The Undoing Project by Michael Lewis, about the work of Daniel Kahneman. And somebody he would love to hear on this show is Nick Couldry from the London School of Economics.
This year, Hippo AI will be hosting the first open health data and AI summit on 1-2 December. You can register your spot here.
Hello dear listeners and welcome to Narratives of Purpose. You are now tuned into a new episode showcasing unique stories of changemakers: people who are contributing to making a difference in society. This show was created to amplify social impact by sharing the individual journeys of ordinary people who I believe are making extraordinary impact within their communities and around the world. My name is Claire Murigande. I am your host on this podcast. And if you want to be inspired to take action, then look no further: you are in the right place. Get comfortable, and listen in to my conversations. My guest on the show today is Bart De Witte. Bart is an award-winning social entrepreneur and keynote speaker; he is also an artificial intelligence and sustainability strategist. In fact, you can narrow it down to say that he is a leading expert in digital transformation for healthcare in Europe. Bart is the founder and CEO of the Hippo AI Foundation, an organisation that focuses on creating data and AI commons for the digital ecosystem. The foundation facilitates and supports communities to accelerate the open source development of medical artificial intelligence. Now, this might sound somewhat complicated, but trust me, this is a fascinating, timely and quite important topic to address, and Bart is the right person to speak to about it. So as you can tell, we will have a lot to unpack, and I'm really excited to share this conversation with you. And for this episode to reach even more people, I invite you to take a moment and share your feedback by giving us a review on Apple Podcasts or Goodpods. This will help other listeners find our show and further amplify the stories we bring on Narratives of Purpose. Alright, now let's dive into the discussion with Bart.
So hello, Bart, welcome to Narratives of Purpose. How are you doing today?
I’m quite well. Thank you, Claire. Thank you for having me.
Before we start going into introductions, which is what I usually do, I want to pick your brain immediately and kick off with one burning question, because I want to make sure that all our listeners have the same understanding of what we're going to be speaking about, and that is artificial intelligence. You are the founder of a foundation called Hippo AI, and if I were to sum it up in just a few short words, your goal is basically to make medical artificial intelligence a common good. So Bart, give us a comprehensive definition of artificial intelligence. What is AI?
That's the question everybody gets asked, and everybody gives a different answer. For me, AI is a toolbox of technologies that enables people to (a) discover insights embedded within large datasets, and (b) scale those insights to every corner of this planet, because they become replicable. To make it more concrete in healthcare: it will allow us to accelerate medical discoveries, and it will allow us to scale the knowledge that is created to each corner of this planet, if we do it right. So it's quite an exciting technology, or bunch of technologies; there is no single definition of AI in that sense. It has definitely nothing to do with mimicking the brain or creating a brain-like computer; it is more about mimicking intelligence, or human decision making. We humans are also pattern recognition machines. A lot of our behaviour and decision making is based on intuition, and intuition is also a form of pattern recognition. So at this stage, AI is mostly delving into the historical data of decisions we have made, in order to replicate that sort of decision making. In healthcare, this is about recognising patterns in histopathologic images, radiology images or CT scans. But it's also about trying to understand the language of life, which is encoded in the four letters A, C, T and G, and which is still a mysterious language; I think machine learning will allow us to decrypt that language and accelerate discoveries. And that's what we see happening in the field of research as well. So it can be used in very different ways and forms, and that's what people underestimate: it's one of the fastest-accelerating technologies I've seen in capacity and performance.
Let's rewind a bit. Tell me more about yourself: who is Bart? And why did you create this foundation? What is the origin story behind it?
I grew up in a quite isolated region close to Antwerp in Belgium; my parents didn't want to live in the city. So I was quite isolated in my childhood, because we didn't have many neighbours to play with. But my father was an engineer, and he gave me access to computers much sooner than any of my friends. As a child I was programming and gaming; games were some sort of intelligence, a virtual friend to play with, and I was quite fascinated by the ability to create things. I also saw the progress at that time: suddenly I had access to technologies that you normally only found in banks. As technology was democratising, I suddenly had access. And that was the first thing I wrote about. I wrote a paper in 1989 at high school about artificial intelligence, because my uncle was building expert systems in banks. I got a really bad grade on it, because my teacher didn't understand anything I had written, but it contained one sentence I'm really proud of. I wrote at the time that "AI will give African doctors the ability to get the same expert-level diagnostics as Europeans," and that was in 1989, when I was 18 years old. That was kind of the beginning, but then I didn't study computer science, because they told me at school I was too creative and had to do something with my hands. So I studied dental medicine, the most boring thing I could ever imagine doing in my life: sitting in a room with patients you can't even talk to. I finished those studies and moved on to sport science, because I was quite active in that sense as well. And that's how I ended up, by a weird trajectory, in Switzerland, working for the Swiss Olympic Institute in Magglingen. That was in 1996, and I was again isolated there. And then there was the internet, and I started to discover the internet as one of the early adopters.
And I had access to MEDLINE, which at the time was the online database for accessing papers, and I suddenly felt empowered: in discussions with orthopaedic surgeons and others, I was much more empowered because I had access to information they didn't have. Then I understood the power of computer technologies combined with the internet, and I was hoping this would lead to a real democratisation of knowledge. That's when I decided to leave everything I had studied for. I moved to SAP, which is the weirdest thing you could probably imagine doing, working for an accounting software company. Then I went through this whole career of being a consultant implementing these systems in large hospitals, understanding the logistics, supply chains, billing, clinical information systems; I know a hospital inside out through that. Then I did project management, became a product manager, and went on in my career, becoming a regional leader working for IBM, later working on the strategic side, heading Central and Eastern Europe across 26 countries. I had this immense career in big tech. But then I started asking myself questions, because from 2010 machine learning and AI came into fashion again; I was at IBM when that whole hype was generated through Watson. And I started to ask myself what the purpose of the technology is. I saw the power, because I think it was the missing element: you have the internet and computer technologies, but then you need AI, and if you have these three elements, you can truly democratise knowledge. And if you do it in the right way, we could make the world so much better. So I started thinking about how I could do this, and that's why I decided to leave my career and create a nonprofit. I had this very naive thought that this nonprofit should be something like a mixture of the Linux Foundation, Doctors Without Borders and the Ocean Cleanup Project.
Because I think these purpose-driven organisations are coming more and more into fashion. I also understood that Doctors Without Borders is able to collect 1.6 billion in donations every single year. I thought: if somebody can manage to do that at scale, and you could collect the same amount of money to create data and license it as open, accessible, available data, you could probably change more than I would ever be able to in any job. I started that mission with a lot of naivety, as I said, but you always have to. You cannot think too much upfront about everything that is going to happen; you just have to start doing it. And that's what I started doing. As for the name Hippo itself: I did a lot of research and started reading about the history of open knowledge, and the idea of open access came from Hippocrates in ancient Greece, the godfather of the ethical foundation of medicine. In his original oath, Hippocrates required that physicians always share their knowledge for free, without economic interest, with their peers. It was not open knowledge, it was only accessible to their peers, who were physicians, but it was de-economised. And I wanted to apply that same principle to data and AI, to say, "Let's make it free. Let's make it available, so everybody can still build products for free without being dependent on any big tech platform."
And coming back to machine learning, my understanding is that we are dealing with huge amounts of data, right? And whatever tool we develop needs to be trained, and that's where machine learning comes in. Is that correct?
Yes and no; it can also train itself. There are different ways of learning. There is what we call supervised learning: it's as if you had a kid next to you reading a book who starts looking at elephants, and you, as a parent, say, "that's an elephant." You repeat it quite a lot of times, and over time the kid knows it's an elephant. The unsupervised way would be that the kid reads that book the whole time and starts recognising that there are different elephants in there and starts clustering them into a group. It doesn't know what they are called, but it knows they have similarities. And if you then give the kid feedback as you start "training" it, so that it learns it is wrong when it says "these are tigers," over time the kid will learn by itself that these are elephants, without supervision. That's just one element of machine learning. There are also ways of learning that are very similar to human behaviour, where you get rewards if you make the right decision. That's what we normally do when we create habits: there is a trigger, something that creates an action, and then you get a reward. This is a bit like how some machine learning models work: they try many combinations, and when they are correct they are rewarded and learn from that reward. The trend is really towards less supervised and more unsupervised learning as the technology becomes more performant. And now there is something called zero-shot learning, where you train your model once and it understands, for example, human language in a way where you didn't need to teach it every concept; you didn't have to teach it what "a tiger" is. It starts to understand the concepts of language and can do all sorts of things. And that's human language.
But you can do the same thing with biological language, where you see the same technologies being applied to our genetic code, the four letters I mentioned before.
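For readers who want to see the distinction Bart describes in concrete terms, here is a minimal sketch in plain Python. All of the data is invented purely for illustration: the animal "sizes" stand in for whatever features a real model would use.

```python
# Toy illustration of the two learning styles described above.
# Supervised: labels are given (the parent saying "that's an elephant").
# Unsupervised: groups are discovered without any names attached.

def nearest_centroid_train(points, labels):
    """Supervised: average the points of each labelled class into a centroid."""
    groups = {}
    for p, lab in zip(points, labels):
        groups.setdefault(lab, []).append(p)
    return {lab: sum(ps) / len(ps) for lab, ps in groups.items()}

def nearest_centroid_predict(centroids, p):
    """Classify a new point by the closest class centroid."""
    return min(centroids, key=lambda lab: abs(centroids[lab] - p))

def two_means(points, iters=20):
    """Unsupervised: discover two clusters with no labels (1-D k-means, k=2)."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        buckets = {c: [] for c in centers}
        for p in points:
            buckets[min(centers, key=lambda c: abs(c - p))].append(p)
        centers = [sum(b) / len(b) for b in buckets.values() if b]
    return sorted(centers)

# Invented "shoulder heights" of animals in the kid's picture book.
sizes  = [3.0, 3.2, 2.8, 0.5, 0.6, 0.4]
labels = ["elephant"] * 3 + ["tiger"] * 3

model = nearest_centroid_train(sizes, labels)   # learning WITH a parent
print(nearest_centroid_predict(model, 2.9))     # → elephant

print(two_means(sizes))                         # two clusters emerge unaided
```

The supervised learner needs the labels up front; the unsupervised one recovers the same two groups from the raw numbers alone, which is exactly the kid noticing "these look alike" before knowing the word "elephant".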
Can you give us some examples, whether from projects you have been actively involved in or ones you know of, that you think are worth mentioning here for our audience, to understand how this is evolving and how it is influencing healthcare at this point?
Yes, and I want to take a step back to let people understand why it's more powerful than human decision making. In my talks I used to give the example of a research experiment where they trained pigeons to become histopathologists. They gave these pigeons histopathologic images of breast cancer, and when the pigeons selected the right images, the cancerous samples, they got rewarded with food. Pigeons see a much wider spectrum of light than humans; the human eye is limited in the number of colours it can see. So the pigeons can receive and detect more patterns, to the point where they became better at recognising these patterns than humans. It just shows that these pattern recognition abilities are not unique to machines or to humans; they are everywhere in nature. When you have sensors and the ability to see beyond human ability, you will be able to detect these patterns much better. And if you look at how physicians make decisions: we think every physician makes the same decision, which is not the case. I used to make the joke that "half of the physicians are worse than the average," which is just a normal Gaussian distribution. To put it into practice: somebody in my family was diagnosed with leukaemia, and I said, "Please go to that friend of mine, who is the leader in leukaemia diagnostics in Germany. He does 60% of all diagnoses in Germany, he has a reference lab where everything is standardised, and he does 80,000 leukaemia diagnoses a year," which is a lot. My family member, an extended family member, was at a hospital that only saw 20 leukaemia cases a year. What probably happened, we don't know, is that he was wrongly diagnosed, and he died just a few months later. In some areas you need a really precise diagnosis, because the therapies depend on it.
And if your physician doesn't recognise these patterns, or doesn't have that experience, he will make bad decisions. So the question here is: how do we take the data of the best experts and the best decisions, use that data as a training layer for machine learning, and give every single doctor, including those with less experience, that same ability? If you do that, you flatten that Gaussian curve, and all physicians will be "average" in the sense that the average becomes the best. And I think it is very promising that you can start augmenting intelligence with the best knowledge that is out there. Now, to give more practical examples: in imaging this is quite easy. The reason we start with imaging is that you can anonymise the data quite well, and thereby access the data quite well. But it can be used in every domain of healthcare. It is even used now for finding so-called biomarkers, which are predictors, data points that help us predict something. And now they are using this with, for example, voice. People have probably heard that there were algorithms out there that could recognise whether you had COVID or not based on your voice and its sound. There are also voice biomarkers that detect the onset of Parkinson's, or of specific neurodegenerative diseases that influence the motor control of your voice. So there are definitely patterns that we humans cannot recognise but that these algorithms are starting to recognise. It becomes a tool that allows us to combine phenotype data, the physical characteristics we can observe in a patient, with genotype data, and to make much more accurate diagnoses; and, by knowing this subclassification, to find better therapies that fit that much more precise diagnosis.
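The "flatten the Gaussian" argument can be made concrete with a tiny simulation. Every number below is invented purely for illustration; the point is only the shape of the reasoning, not any real accuracy figure.

```python
# Simulate (with made-up numbers) diagnostic accuracy across 1,000 physicians,
# normally distributed, then "lift" everyone to the best performer's level,
# as an AI trained on the best experts' decisions would aim to do.
import random
import statistics

random.seed(42)
accuracies = [min(0.99, max(0.50, random.gauss(0.80, 0.05))) for _ in range(1000)]

mean = statistics.mean(accuracies)
below = sum(a < mean for a in accuracies)
print(f"mean accuracy {mean:.3f}; {below}/1000 physicians below the mean")

# With AI assistance everyone reaches the former best: the curve "flattens"
# and the new average equals the old maximum.
best = max(accuracies)
assisted = [best] * len(accuracies)
print(f"assisted mean accuracy {statistics.mean(assisted):.3f}")
```

Roughly half the simulated physicians sit below the mean, which is Bart's joke about the Gaussian; after the hypothetical "lift", the distribution collapses onto the former best performer.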
So basically, it helps us connect what we see and what we cannot see, to make sure that we can make the right, informed decisions, right?
Here is an example from when I was working in Africa. We were building cancer registries in Sub-Saharan Africa; at least that was the idea. I was sent there by IBM to build one, and I started to discover that when you build a cancer registry, you first need a pathologist, but there were no pathologists in the country. I flew to India, because there was a physician who had built a cancer registry at very low cost, also without access to pathologists. He used a very old, very different way of detecting HPV infections: applying diluted acetic acid, essentially kitchen vinegar, to the cervix and then using a colposcope to look at the colourisation. He trained nurses to do this, and he showed me a book of 1,000 images that he used to train his nurses. I asked, "These images, are they free?" And he said yes. So, together with a startup in Israel (they did the work, but I initiated part of it), we used these images to create an AI that could run on a smartphone. We then 3D-printed a colposcope that attached to the smartphone with a tenfold visual magnification, and we started taking pictures of the cervix and matching them, of course, with Pap smears, because you always need a correlation with ground truth. Five years later, the startup published a paper confirming that this visual way of detecting HPV had become more accurate than the Pap smear. That was an eye-opener for me. I said, "Wow, this is so powerful. I came to Africa, there was no pathologist, but we don't need the pathologist, because we suddenly have access to this tool, which is a phone that automatically generates the registry data, and everything is digitised."
The only bad part of that story is that the startup then got so many investors on board who, after the paper was published, said, "Well, it's not interesting doing this in Africa; if you have such high accuracy, we're going to focus on the US, because that's where we're going to generate the most profit." So they defocused, and the investors pushed them towards high-income countries. And that's where I started to learn that if you do this, you need to liberate the AI from economics.
This is a fascinating example, where you connected a country in Africa with India and with Israel and came up with a solution. It shows the collaboration and, again, as you were mentioning, the open access. So what exactly is your foundation, your platform, creating? Are you basing your work on this kind of collaboration? Can you expand a bit more?
In the first years, as I said, there was a lot of naivety, so there was also a lot of learning involved. I analysed and wrote down 75 hypotheses and said, "I'm going to test them out in the next few years." I gave myself time; I had the luxury that, as I followed this path, I was more and more being invited to speak about it, and I was able to finance my life just by speaking about it. That gives you a lot of freedom to focus on your actual work. One of the things I learned came when COVID arrived, just as I was starting. I shifted from AI and open source to applying those open source principles to 3D printing, because during COVID we all needed medical supplies. With a friend of mine in Berkeley, I started a Facebook group focused on bringing people together to create an open source ventilator. Everybody said, "Okay, so you want to build an open source ventilator?" Well, the power of open collaboration is unimaginable, so we decided to try. Eight weeks later we had a group of 80,000 people globally doing all sorts of things. There are research papers showing that, entirely altruistically and with no economic incentives involved, the group was creating 1 million 3D-printed products a week eight weeks after we launched. This was not a ventilator; these were face shields, tubes, things that people needed. But it showed the power of massive open collaboration. And the reason it worked is that we had a joint enemy, and having a joint enemy accelerates everything. So my learning was: how do I replicate this with Hippo AI? What is the enemy? The enemy I chose is inequality, because it can affect everyone. Do people in Western Europe think they are free of health inequalities?
Well, if everything goes wrong, wait just a few years, or ten. It's already happening now, with drugs that cost $2.1 million for a gene therapy for children, where mothers in Belgium, where I come from, start crowdfunding campaigns without knowing if their child will survive, although the drug is available. That's inequality, and that's the kind of inequality I think we can fight together. I believe we need to shift from a model of scarcity to a model of abundance, and then shift the economic value to experience. That means those who create a better brand or a better experience are allowed to charge more and become more profitable, but the substance that saves your life is still available to everyone.
My next question would be: how far are you in that endeavour? How do you find partners, and how do you bring your story to a larger audience, and especially to decision makers? Tell me more about that.
It’s a good question, because it's difficult. I thought it was going to be easier, but people don't understand AI; they think it's the Terminator. There is a lot of abstraction in there. My mother even had breast cancer during that period, and she did not understand my first breast cancer project; as someone coming out of that experience, she didn't support it. So I was not even able to convey its importance to her, apart from the fact that we need open knowledge. Spreading the message is difficult. That's why I started writing the newsletter, which has gone really well: I started at the beginning of this year and have 15,000 subscribers after not even a year. The most difficult part of setting up a nonprofit is funding. I learned that most NGOs have their own agenda. I thought this would be something for the Bill and Melinda Gates Foundation, but then I learned that they work very closely with corporates and closed knowledge. You just feel that you are doing something very new, and if you're doing something very new that nobody knows will work, then it's difficult. That's pioneer work. But I managed, for example, to win over AstraZeneca for the first project, which was one of my targets: can I convince a big pharma company to invest in open source and to give data for a project? Everybody who knows Big Pharma said: you are never, ever going to get that! But they made a global announcement that they were partnering with us on breast cancer. They stood up for open source; if you look at their website, they are pushing a lot for open data and open source. So I think you can change a lot if you change the mindset of a few people, and then make those few people excited to change others, but it's a long, long grassroots movement. As for politicians and getting their support: I tried a lot, I lost so much time, and I learned that open source is not popular because it doesn't bring any tax revenue. And if you don't bring tax revenue, you're not interesting.
I started to learn that a lot of the politicians I talked to don't understand the concept. If you open certain things, like open-sourcing AI, I believe we would even have a bigger economy. Make an analogy to Gutenberg: imagine Gutenberg had licensed every single printed letter, so that every time you wrote a book you had to pay for 2,000 uses of the letter A or 3,000 uses of the letter B, because somebody thought licensing was a good business model. Everybody knows that such a model would never have created all the publishers or the Enlightenment, and would never have progressed society. Sometimes you need to give access to tools, like the public library that gave us access to books so people could educate themselves for free. If AI is the tool that creates and serves knowledge, then these tools need to be accessible to everyone. So from a political perspective it's hard. I worked with the European Commission; I tried to push for Europe to say, "Let's make open source AI a standard in our European healthcare systems. If we do so, we get a principle of solidarity into our digital systems, so we can reinvent our analogue solidarity systems in a digital way and keep doing what Europe has been doing really well: serving everyone at quite low cost, compared to the US, for example." As a last part, we are now starting to build a for-profit platform that will give tools to all those who want to build in the open. We want to understand what they are looking for, what their needs are. I don't know what the business model will look like, but that's not important. I think if you are able to get a community of, say, 100,000 people together on a platform, you will find ways to create new services that they find valuable and will pay for, but I want to make it as free as possible.
So the principle is that the AI will always be open and accessible, and the data as open and accessible as possible. And we start with search: we want to make everything that is already out there accessible.
Now, coming back to something you mentioned before, I'd like you to speak more about data protection and regulation. Obviously every country is different. What are the barriers at this point that might prevent you from achieving your mission, so to speak?
There are different sorts of data: there is personal data, and there is data abstracted from existing personal data. So data is multi-dimensional. I've been in healthcare for 20 years, and Germany has been trying for 18 years to build a shared electronic health record, and it hasn't happened. That is because people don't want to share into the system. So I started looking at the whole question of data sharing from a game theory perspective. If you do that, you start to understand why people don't share: because there is value in the data for them. And if you apply the prisoner's dilemma from game theory, then if the one who shares loses and the one who doesn't share gets the benefits, nobody will start sharing. Ask yourself: if a physician shares patient data in a shared electronic health record, but the people implementing that record message, "We want to reduce the number of duplicate diagnostic tests," then the physician says, "Why should I share, if you're going to punish me and I can no longer perform an additional test?" Right? These are not the incentives you should communicate about; you should communicate about what sharing will bring for all of us. And then perhaps you need to create an ecosystem with a certain set of values, norms and principles that are all based on sharing, so that everybody in that group shares. But I think not many cultures or countries have achieved such a system, where you have a collective, a closed ecosystem, so there is no risk of somebody unilaterally profiting from it. How do you create such a system in a way that we all benefit? I think you need to set and apply rules for doing so. The rules I chose are: data cannot have any economic value, and the extractions from data should lead to open knowledge.
And if you do these two things, you, like Hippocrates, completely de-economise knowledge. And if you de-economise it within a closed system, people will start sharing, because we all benefit from it. From a game theory perspective, this is the only way to do it; otherwise you will always have winners and losers. It's like how we all give our data to Facebook, and suddenly they can use that data to create algorithms that make us addicted to their platform. They are using that information against us, creating information asymmetries and power asymmetries, so we have no control anymore. I think people understand that, and that's why there is no sharing: they are afraid there will be an extraction of value, or an appropriation that will be used against them as an asymmetry of power. You can look at that from a societal versus private-industry perspective, or from a Western versus African perspective. There were, for example, big tech companies from Silicon Valley going to Africa, saying they were doing good, and extracting the data, since there is no data protection there. Those countries are happy to get services and access to health care, but these companies are doing exactly what was done in colonial times: extracting the value, which is data, out of those countries. A friend of mine, Professor Nick Couldry from the London School of Economics, calls this 'data colonialism'. He says it repeats, in a neo-colonial way, what we have always done: we say we are doing good, but we are extracting. And as long as we look at data in healthcare as something extractable for economic value, creating asymmetries because knowledge is protected by IP and not shared, we will never accelerate as fast, and we will never see all the benefits that data and machine learning could give. So I don't think I've fully answered; it's super complex.
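The game theory point above can be made concrete with a toy payoff matrix. All of the payoff numbers are invented for illustration; only their ordering matters for the argument.

```python
# Toy prisoner's-dilemma payoffs for the data-sharing problem described above.
# Each hospital chooses to "share" or "hoard" its data; all numbers invented.

# payoffs[(my_move, other_move)] = my payoff
payoffs = {
    ("share", "share"): 3,   # pooled data: everyone benefits
    ("share", "hoard"): 0,   # I give value away while the other free-rides
    ("hoard", "share"): 4,   # I free-ride on the other's data
    ("hoard", "hoard"): 1,   # status quo: no shared health record
}

def best_response(other_move):
    """The individually rational move, given the other player's choice."""
    return max(["share", "hoard"], key=lambda m: payoffs[(m, other_move)])

# Hoarding dominates whatever the other side does...
print(best_response("share"))   # → hoard
print(best_response("hoard"))   # → hoard

# ...even though mutual hoarding (1, 1) is worse for both than mutual
# sharing (3, 3). The fix proposed above is to change the payoffs
# themselves: inside a closed commons where data carries no economic
# value, the free-riding payoff disappears and sharing becomes rational.
```

This is why appeals to goodwill alone fail: as long as the payoffs have this shape, not sharing is each player's dominant strategy, and the system settles into the worse outcome for everyone.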
And I am totally a privacy advocate, because I think data can be shared, yes, but it needs to be anonymised; I've seen too much of data brokers selling certain sets of data. But I do think that instead of always talking about data protection, we should start talking more about anti-discrimination laws. Because if you can detect mental illness based on just phenotype data from a voice, then somebody could listen to our conversation, run an algorithm and say, "Well, Bart has a 60% risk of getting depression based on his voice." And as these algorithms get better, I need to be protected by anti-discrimination laws. I think we should shift the discussion towards how we protect people against the power asymmetry they will be confronted with, and how we protect our rights. Consider a child with Down syndrome, whose condition is very visible: there are quite a lot of anti-discrimination laws, so you cannot discriminate against these people. But I think we are all becoming some sort of classified person with deficits in certain areas. And we cannot classify people, humans, based on what an algorithm tells you.
And I think it's important to mention that, to speak about anti-discrimination and bring it more to the forefront. So then my question would be, how do you see this evolving now? Do you think it's too early, that people don't really take it into account? Or do you see that, here and there, some voices are getting out there and making people more aware of it?
I think the majority is still trying to gain from the asymmetries they can capture. I'm not a huge fan of the word equity; in a certain sense, I'm more for equality: giving everybody access, because equity can be used in a very colonial way. As we digitise, there is this concept of common goods, the 'commons', and there is always more and more data. Data cannot be overused. So I think what we need to do is start bringing people together to create commons. And you don't need the whole world population to do that; you need people who are building commons. Once you have a commons, nobody can take it away. It's a good that is simply there. Once you have a commons of breast cancer data, it's going to be there for everyone to use. The more commons we have, the more we build up a healthcare system with a common layer of shared knowledge that is digital, made of digital artefacts, and these artefacts can be software code, AI models, data, whatever. The more we collectively build, the easier it becomes. And I think in 10 years this will look very different; you will see many more tools being openly accessible.
Just to conclude, I like to ask a couple of questions to my guests. One of them is: do you have any specific recommendation for our listeners? Be it a book that you think people need to read, a movie, whatever you name it, what would you recommend for our audience today?
I think a book that everybody in the tech field should read is The Age of Surveillance Capitalism by Harvard Business School professor Shoshana Zuboff; it makes you aware of the asymmetries that are already out there. I also liked The Undoing Project, which was one of my favourite books. It's by Michael Lewis, the same author who wrote Moneyball, and it's about the life of Daniel Kahneman, the psychologist who won the Nobel Prize. It is written as a nonfiction book, but it's really a beautiful book about human decision making.
Finally, before I let you go, one last question: do you have anyone you would recommend I invite on this show, someone interesting whose story or whose endeavour you would like to hear about on the podcast?
Well, I think the concept of data colonialism is a nice concept. So I could connect you with Professor Nick Couldry from the London School of Economics, who wrote the book The Costs of Connection.
Thank you so much Bart, I think I'll let you go. It's been a pleasure. And thank you so much for taking the time. I really look forward to staying in touch and to continue to see how Hippo AI is evolving.
Thank you so much, Claire, for having me and helping me spread the narrative. I just want to leave you with one thought: we all have the ability to shape our future. It's really important to unite, and to think about what our values are and what we really want to build as we digitise. Digitalisation is not a purpose per se; it is a tool. We are creating the worlds our children will live in, and we can define for ourselves what kind of world that is. And it's not always the world that is promised on the stages of large conferences. Sometimes it's the less popular variant that leads to perhaps the most prosperous future for all of us.
Shifting the healthcare industry towards open technologies and open data and AI standards, with the goal of uniting data to defeat inequalities, is most certainly a noble mission, and quite frankly, a very ambitious one. That being said, you need to give it a try, and that's what Bart is doing. Have a look at the Hippo AI Foundation website to learn more at hippoai.org. That's HIPPOAI.org. And if you turn out to be as interested as I was, then do subscribe to the Hippogram newsletter at blog.hippoai.org. You will find all these links in the show notes. One final mention to inform you that the Hippo AI Foundation is hosting the first open health data and AI summit on December 1-2, 2022, around the topic of inventing digital solidarity. We'll also link the registration page for you in the show notes. That's it for today. Thank you so much for tuning in. I appreciate you taking the time. That was episode 41, a conversation with Bart De Witte on making medical AI a common good. Remember to share this episode with your network and your friends. If you are enjoying our show, we would also love to get your five-star rating on Spotify. We are always keen on hearing from our audience, so feel free to connect with us through our social handles. You'll find us on Instagram at narrativesofpurpose_podcast, on LinkedIn at Narratives of Purpose Podcast, and on Twitter at nop_podcast. Until the next episode, take care of yourselves, stay well and stay inspired.