Transcript of FIR Interview

Based on the audio recording of the FIR Interview: "Monsignor Paul Tighe on AI, Ethics, and the Role of Humanity" recorded on 22 July 2025 with Monsignor Paul Tighe. It has been lightly edited for clarity and syntax, and formatted for the reader experience.
Neville Hobson
As artificial intelligence transforms society, who speaks for humanity? In this FIR interview, The Vatican’s Monsignor Paul Tighe shares why the Church is stepping forward in the global tech debate, exploring how dignity and ethical responsibility must lead the way.
V/o and intro music
This is For Immediate Release, the podcast for communicators.
Shel Holtz
Welcome everyone to a For Immediate Release interview. I’m Shel Holtz in the US and I’m joined today by my co-host Neville Hobson in the UK and our guest co-host Silvia Cambié in Italy. And I’m very pleased to welcome our interview guest Monsignor Paul Tighe, who is secretary of the Dicastery for Culture and Education at The Vatican, responsible for the Culture section.
In January of this year, the Dicastery for Culture and Education, along with the Dicastery for the Doctrine of the Faith, published Antiqua et Nova, a “Note on the Relationship Between Artificial Intelligence and Human Intelligence.” Given our interest in AI on FIR, we’ve been intrigued by and even reported on both Pope Francis’ and now Pope Leo’s focus on AI and humanity.
Monsignor, we’re very pleased to welcome you to FIR. Thank you for joining us.
Paul Tighe
Thank you. I’m very happy to be with you and I look forward to our conversation.
Shel Holtz
I wonder if you might share with us, before we jump into artificial intelligence, a little bit of your background.
Paul Tighe
My own background, originally, 100 years ago, I studied law in Ireland, civil law, subsequently studied theology here in Rome, and eventually ended up teaching a kind of combination in the field of ethics, the relationship between law and morality and public policy and those issues.
When I came to work in Rome about 20 years ago, more or less, I worked in the communications department where we were working on the integration of digital technologies into the Holy See. And what really began to fascinate me at that stage was how digital technologies were really impacting culture in general, how we behave as individuals, how we form community, how we learn from each other, how we communicate, how every aspect of life has been transformed.
And I think that led me into the area of reflecting on AI as something that has an extraordinary potential to impact how we as human beings live as individuals and in community.
Silvia Cambié
I think we can get started then with a few questions we have, Monsignor Tighe. And if I may, to start, I would like to refer to Antiqua et Nova. In it, the Church highlights some dimensions of human intelligence: the relational dimension, the spiritual dimension, the embodied dimension.
There is, however, a school of thought out there which says that artificial intelligence is helping us to truly understand what makes us human. Because artificial intelligence helps us with repetitive tasks, with pedestrian tasks, it actually gives us the time to focus on different abilities that we have as humans, like spiritual self-respect, empathy, insight.
What do you say to that?
Paul Tighe
Yeah, I mean, undoubtedly AI has the potential to facilitate a lot of human reflection and human thought. It’s interesting, one of the earlier terms that was used before artificial intelligence really won out was augmented intelligence. AI is something that we can work with, that we can use to increase our own capacities, to expand our own capacities in certain fields. But I think the limitation is that those fields are of necessity somewhat limited.
One of the things that we were very interested in questioning and maybe it’s AI that has forced us to think about this is what do we mean by intelligence? What is it that we really qualify as a human intelligence?
AI is extraordinarily good at certain types of tasks to which we would give the title of intelligence: calculation, analysis, pattern spotting, processing of more and more data. It is extraordinarily capable in those arenas.
But there are other issues that I think are questions for human intelligence, too, which are the questions, well, what should we do with these new potentials, with these new capacities, and how do we ensure that they actually serve the true good of human beings? And that’s a type of intelligence, I think, that we still have to maintain a certain agency over and a certain responsibility for.
Shel Holtz
Monsignor, The Vatican has warned that while AI systems are able to make decisions – and I’ve seen research that says particularly among Gen Z – a lot of people are turning to AI to make day-to-day decisions for them that they used to have to make for themselves. These AI systems don’t bear any moral responsibility. That’s on us.
Now, just this week, OpenAI has released ChatGPT Agents. We have seen some early agents, but now anybody with the right account – and more accounts are coming online soon for OpenAI’s customers and even free users – is able to use agents. I just read Ethan Mollick today saying that, really for the first time, that analogy of an intern working beside you feels apt.
With this coming, one of the things I’ve been focused on is how managers are going to need to help their employees through this transition to an age where we’re working alongside AI employees. For what it’s worth, that’s basically what they’re going to be.
What do you think are the implications of this in terms of the dignity of workers and the humanity of work?
Paul Tighe
Yeah, I think there’s a whole range of issues you’ve raised there, you know, and the idea that people might hand over decision making to some sort of an AI program, even relating to personal questions. You know, I hope they would be vigilant about the kind of replies that they’re going to get.
And I think we also have to recognize that in the past, people might have been going to fortune tellers or to horoscopes looking for advice on how to spend their lives. So that has to be recognized.
I mean, there are certain tasks which I think you would ask an AI platform to undertake. There are certain types of scoping exercises you might ask them to work on. But I think one of the things one has to be able to retain is something of your capacity to recognize, to be able to critically engage with the results you’re getting, to understand those results. And even at times to have an intuition that this may be something that’s somehow not in the right line or is a hallucination of some sort or other.
So I think of… for example, I belong to a generation that grew up with the very first calculators that could do speedy mathematics. And as we were coming to our final examinations at school, one of the issues was, could you bring the calculator into the exam with you? And one of the things people said was that you needed to keep a certain numeracy of your own, so that you could at least spot a gross error immediately.
And I think that’s one of the things we need to keep: a sense of tuning our own capacities, not becoming de-skilled. My worry would be that we would become de-skilled and blindly dependent on a technology.
I mean, another analogy I would choose sometimes is I think we’ve all got a little lazy about using GPS systems and different navigation systems as we drive; when they don’t work or when we suddenly find ourselves without a signal, sometimes we find some of our abilities have been lost.
That ability to navigate, to map read, to find our way around – I use that as an analogy for life. We have to keep our own ability to ask the right questions and to know which are more likely to be the dependable, central questions.
Then there’s obviously the prospect of having an intern working beside you in the form of a platform. That’s another issue, but one that will become important: the impact of that on how people work, on the number of jobs that may or may not be available in the future, and how we think about those issues.
Silvia Cambié
I have another question about Antiqua et Nova, and in it there is a very powerful observation about the fact that AI often forces workers to adjust to the pace and demands of machines rather than machines having to adjust to the needs of humans. And when I read that, that really resonated with me.
I’ve worked in managed services throughout my career, so I know the pain of having to adjust to that pace and the anxiety. So I would like to ask you, what do you think societies and employers can do to make sure that AI respects the needs of workers, particularly those workers in vulnerable and outsourced roles in the Global South?
Paul Tighe
Yeah, I think you’ve hit on a hugely important issue. One of the things, when we think about AI, is that we have to think about AI as something that will be an accelerator and a multiplier of existing practices. And one of the things we have to recognize is that there are many contexts in which human work is not valued, in which the dignity of workers is already not valued.
And the risk is that AI in certain ways could exacerbate that, where people will see AI as something that can displace workers, that can work all hours. As you know, there’s been some talk about onshoring employment back into the first world. But a lot of that onshoring is likely to be handed to machines rather than human beings.
And we’ve already seen, anecdotally and in literary examples I’ve been reading, accounts of workers in an Amazon warehouse, or in any warehouse you care to mention, where logistical processes are directed by algorithmic concerns and somehow the humanity of the worker is lost.
There was an extraordinary article in the New Yorker magazine or the New York magazine looking at food delivery operators in New York who essentially are driven into competition with each other by an algorithm that will give work to those who are the speediest in their delivery tasks. And therefore dehumanizing and also creating competition between people who might previously have been seen as working together.
So I think we need to reflect and not be complacent about how this could impact people. At another level, there are ways in which it may displace certain types of work that are already meaningless, and maybe less than worthy of human dignity.
So I think it’s in the balance, my worry would be, and this is a thing that Pope Francis often brought out: it’s not just about the technology that’s there and it can be used for good or it could be used for bad. It’s that the technology is born out of a certain mentality.
And if the mentality, the commercial mindset that is giving birth to the technology has at its heart exclusively values about efficiency and profitability, then the chances are that the dignity and worth of individuals will not be respected.
Whether you blame that on the AI or on the commercial mentality – we probably need to be careful about how we apportion that blame.
The other example, if I may, that struck me links with something Shel said earlier, which is that people are saying AI can liberate us from some of the more menial or less important tasks and allow us to come in at the specifically human level.
And an example of that was in the field of medicine, where AI platforms will be able to process enormous amounts of diagnostic material, do comparisons with other X-rays, prescribe individual drug regimes relating to the genetic makeup of the individual.
And some doctors began to say, this is great. This will allow us to recover what is the essence, what is at the core of our being as doctors, which is being able to give time and care and attention to our patients. And that was a very positive view and understanding of the role.
Other doctors kicked back and said, in reality, we shouldn’t be so complacent, because the real risk is that we will now be expected to see more patients and to see them more speedily, rather than giving them more time and more individualized attention. They said that was for two reasons. One, an obvious one, is that so much of health care is commercially driven, and there are drug companies and other investors requiring a certain throughput.
But more subtly, they said, an AI can measure the amount of time you spend with a patient. It can measure how many patients you see, but it can’t necessarily quantify the quality of the interaction. So the quality of the care, the attentiveness of the communication, gets lost because it’s not capable of being measured. So one of the dangers with AI is that we suddenly have to fit into a world where everything has to be measured, and where some of our most important human tasks and achievements are not necessarily capable of pure or external measurement.
Silvia Cambié
So if I may, Neville, very quickly, I’ve got a follow up question. So, you know, those roles being dehumanized, as you described, that’s definitely an issue, again, going back to my experience in managed services. But often, you know, when someone works in that environment, as you say, that’s the culture, right? You have to deliver.
You participate in a lot of international discussions and you sit on different fora. Is there a serious effort to counter that mentality?
Paul Tighe
What I’ve found is that some of the people from the tech side are very attentive to saying, well, we will be benevolent towards those who are displaced. And we need to look at the possibility of having a system of universal social benefit, or some sort of universal income, that we will share with people to compensate them for the work that they lose.
What we would be bringing to some of those discussions is a sense that, yeah, but work isn’t just about economic reward. Work is where I express my creativity. It’s where I express my identity. It’s where often I socialize. So in terms of some of the costs of loss of employment, it’s not simply about an economic argument. It’s back to more qualitative issues about what it means to be human.
Neville Hobson
Before I get to my specific question – this topic, by the way, I find very interesting indeed, this particular segment of our conversation.
I guess my question is quite a broad one, as a follow-up to this broad topic. If we’re talking about deploying AI efficiently, showing that it uplifts people rather than displaces them – and I wonder, in today’s climate, where morality seems to be absent in many organizations in terms of how they’re approaching their business and the treatment of employees – the question I wanted to ask you is, what do you see as the moral obligation of companies and other organizations developing or deploying AI? What would you say to that?
Paul Tighe
Well, in a previous life, I used to teach business ethics. So I’m back to that famous article saying that the business of business is to produce profits. But I think one of the things we’re all becoming much more aware of is the need to breach the corporate veil in terms of commercial activities, to ensure that corporations don’t somehow level down people’s sense of ethical responsibility.
So one of the issues that we would be promoting is, if you look at the future of governance in the broader sense in this area, there’s the kind of governmental and multinational regulation that may be needed. Many of the companies talk about their commitment to establishing their own ethical practices.
But in the middle of that, I would also want to maintain something about the individual, the responsibility of the individuals who make up those companies. And our history in the world of treating whistleblowers is not great. They might get a wonderful film made about their life 30 years later after everything has fallen apart.
But it’s about somehow creating environments in which people feel they have a freedom to look beyond the simple tech moment of their task and to reflect on the broader human impact of what they’re doing. The language of many of the companies pays tribute to that, but it’s how do you change the culture of a corporation to ensure that the high and noble ethical standards that they often hold are in fact effective in day-to-day happenings.
There’s that old statement that, you know, culture eats strategy for breakfast.
So what is the culture of a group? I can have a lovely set of ethical principles, but I may be learning in my day-to-day work that all my boss is interested in is expediency and quick turnaround, and staying ahead of the competition in what we’re working on. So one of the challenges here – an issue that precedes AI and will continue beyond it – is how we create environments where ethics are important.
And one of the hopeful things I see in that area is that you see something like professional associations trying to articulate for their own members standards that they would hold to. So the IEEE, the electrical engineering groups, have been very good on kind of articulating standards that they will work to.
So I think it’s… we did a small thing with Santa Clara University, which sits in Silicon Valley, who produced a handbook on how a corporation could intentionally create an ethical culture where individuals will be empowered to actually take seriously the high ethical standards that the companies may be trying to hold for themselves.
But I wouldn’t want to exaggerate or be naive in that matter. I think competition is driving a lot of what’s happening at the moment, the race to get the next standard in. You know, there’s a relatively small number of companies in competition with each other to get ahead of the game on AI.
There’s a huge amount of money invested in it. And then there’s the geopolitical desire to keep one’s country ahead of any other country. So it’s complex, and of necessity the broader environment is not necessarily conducive to the best ethical thinking and reflection.
Neville Hobson
Yeah. Ok. You’ve described the need for a wisdom of the heart in shaping AI’s future. And Antiqua et Nova mentions how a technological product tends to reflect the worldviews of developers, owners, users, and regulators. In fact, this fits very nicely as an extension of what we’ve just been discussing. My question specifically is, what would be your advice to a corporate communicator trying to create a culture respectful of the dignity of technology users in their company?
Paul Tighe
Yes… I’m thinking here now! I would say that, I mean, I think the communicators, those who communicate on behalf of companies, have a particular responsibility, which is to go beyond simply selling their own product, their own platforms.
I think in that area there’s something at stake here: if I’m trying to get traction with the corporate world, it’s somehow about issues of trust and risk. If companies behave in ways that destroy the trust that people should have in them, they will pay the price of that ultimately.
Maybe not immediately, but ultimately I’d like to think they will pay the price of that. And we’re beginning to see it, with people becoming even more suspicious – and I think rightly so – more critical of the big companies and the platforms. I think people are becoming much more alert to the business models of companies, more alert to that idea that if the product is free, you’re the product. And I think we should be cultivating that kind of sense of responsibility.
So if I was talking to the communicators of the companies, I would try and imagine that I am trying to create, ultimately, a public that is more educated, more alert, and more capable of being critical of what I am presenting, rather than trying to bamboozle them – because in the long term, I will lose trust, yeah.
Neville Hobson
I think a kind of mini follow-up to that would be: given the reality of societies generally, particularly in the developed world, the Global North, if you will – fast moving, fragmented, with so much disagreement and so many different opinions – ethics doesn’t seem to get much of a look-in, it seems to me. I don’t mean in terms of behavior and how people demonstrate ethics, but conversations about ethics.
So my question to you on that then is how do we encourage deeper conversations about ethics, particularly in organizations? What’s your thinking on that?
Paul Tighe
My own thinking on that one, as somebody who’s worked in ethics over the years, is that we have to recognize that sometimes we’re inclined to privatize ethics. We’re inclined to say, look, that’s somebody’s own view, and just leave it to them. And there’s a fear almost of entering into it.
But I think we need to empower people to ask questions about the choices that we make as individuals and that we make as a society. Which choices and which developments and which attitudes are actually conducive to what the Greeks would have called human flourishing, meaning human flourishing in terms of my individual sense of wellbeing, but also the wellbeing of the society in which I live.
And I’m conscious that the task of ethics, particularly around issues around AI, becomes complicated because we’re living in a very fractured world with different political systems, different religious beliefs, different philosophical commitments, different value systems.
But ultimately, I believe that there is something about being a human – and maybe this is back to the wisdom of the heart – that it is possible for human beings to discern together on what are the types of choices, developments, attitudes that actually promote a sense of well-being? And maybe more easy to get agreement on what are the attitudes and approaches that are certainly not conducive to well-being?
And in terms of thickening out an ethics dialogue and empowering people: I remember, for many years, I would be invited into different professional associations to talk to them about ethics. And you had to remind them, you’re the experts, not me. I can teach you a method and a way of thinking and analysis that is ethical. But ultimately, you’re the person who needs to think and reflect on what it is that gives value, worth and purpose to your life and to the life of people around you.
And that came up in some of the issues we had in one interesting dialogue with some people from China. The issue that arose in that discussion was that the Chinese were quite critical of the Western approach to ethics, which is highly individualistic. We, for our part, started out being a little bit critical of AI in China and totalitarianism.
But what we began to perceive was maybe there was a corrective in the middle in how we think of what it is to be human. And to be human is to live in society with others. So even if we begin to think about how the decisions, the choices I make are not just impacting me, but others, and in the broader sense, the whole human family, then I think we get a possibility of finding coherence.
That’s not going to be easy, but I think it’ll be very much people coming from the humanities who will help us there. Writers who can get us thinking more critically.
I’ve read some novels by Dave Eggers. I don’t know if you’ve seen The Circle and The Every. I wouldn’t say they’re great novels, but they do capture something about a tendency in human beings, particularly in the Western world, to be willing to exchange their privacy, their own autonomy, for convenience. He’s looking at that in terms of the world of social media. I think that will be even more so in the world of AI.
Are people going to be looking for convenience and ease and their own personal immediate satisfaction? Or will it be a capacity to think and reflect on a more grounded experience of what it is that really makes life worthwhile?
So I do think AI in its own ways – and Silvia, this is back to you – is forcing us to open up some of those questions about what it is to be human, what it is that gives us satisfaction in life, what it is that makes life worthwhile.
And I think the hope we had in producing Antiqua et Nova was kind of to say, look, here are our perceptions, here are some ideas we have, but this has to be a truly global debate involving people from different traditions and different perspectives, and we cannot be left either to the so-called ethical experts or to the so-called technical experts.
Shel Holtz
We have some time left, and I’d love to follow up with a question on that global ethical debate. I know that The Vatican has supported the Rome Call for AI Ethics and has advocated UN-level treaties of various types. Just recently we had Mark Zuckerberg announce that Meta would not sign on to the EU framework for AI ethics, and I would have to characterize the US government’s approach to this as all gas, no brakes.
Do you think a global consensus is realistic? And I’m wondering what role you think religious institutions should play, and how that would work with input and dialogue from other types of institutions.
Paul Tighe
Yeah, no, I think, I mean, one of the sad realities is that at a time when the world needs some form of global governance, or agreed standards and agreed attitudes, some of those institutions have never been so weak. Certainly, part of our Catholic social teaching tradition has always been to insist on the need to strengthen the international organizations, so that we have some sort of a global input into the thinking about our future.
I’m not trying to be naive here, and I don’t want to become despondent or to give up on it, but it’s been quite disappointing. As you mentioned, you referenced the AI Act of the European Union, which was a relatively limited first step, and which was beginning to get a little bit of traction until political considerations, I think, empowered the companies to feel that they don’t need to take it as seriously as I would have hoped they would.
So I was at the AI Summit in Paris at the end of January and beginning of February this year, which was certainly disappointing in terms of the reluctance to establish any overall governance standards. The geopolitical considerations of keeping one’s country to the forefront, and therefore supporting the companies who are doing that, were quite considerable. Yeah.
But at the same time, I would build back and say, it’s not any one religion. One of the interesting things is trying to develop a community of religious voices who may have perspectives to offer on it, and who, more particularly, through their own interaction with people who are working in the industry, might be able to have an influence.
So, as I mentioned, there’s the document we worked on with the people from Santa Clara University and with people who work in Silicon Valley, many of whom were self-declared Catholics, who say, I want to somehow find a harmony between my professional work and my actual religious convictions, my human convictions.
But I want to do that in a way that I can engage other people within my company who share the same ethical and moral concerns – who may not have the religious beliefs or the religious vocabulary, but are no less committed to trying to ensure that we are attentive to the impact of what we’re doing, in the broader sense, on society.
So I think it’s… I have great hope in what individuals will do, and we’ve seen that in some situations where people have been willing to sacrifice their own jobs rather than doing something that they’re uncomfortable with.
Silvia Cambié
I had a follow-up question, Monsignor Tighe, about that need to find a balance between one’s professional obligations and one’s religious beliefs. That’s something that, these days in tech, I kind of struggle with. And I encounter people in delivery jobs in tech who just have to roll out tech, do what they are told, and basically meet their targets. But they are also struggling because of the privacy issues, because of the data issues. So what would be your advice to them?
Paul Tighe
Certainly, Silvia, I don’t want to be simplistic on this one, but when I’m talking with people like that, I often remind them of, you know, what is a profession? Profession actually has a kind of religious etymology: it was standing for something. I profess something. And the skills I have as an engineer, the capacities I have as a doctor, enable me to almost intrinsically stand for certain values.
So the important issue that emerges there for me, I think, is to enable people to have that. What do you stand for? What are the limits of what you’re going to do? And how do you think about your own ethical and moral responsibility? That, again, is often achieved better when we can do it cooperatively with others, when we take a stance together. Professions traditionally limited entrance: they decide who’s qualified to carry the title, whose behavior means we exclude them from the profession.
So a profession, and people who work together, can find a way of working in solidarity and collaboratively to defend certain values, rather than being picked off one by one and forced into things that they’re not comfortable with.
Shel Holtz
Monsignor, looking back at the digital communication work that you have done – and looking at your background, it’s considerable: media transformation at The Vatican, involvement with the Internet Governance Forum, South by Southwest panels.
I wonder… I’d love to get your views on what kind of communication strategies we in the communication profession should be looking at that, based on your experience, would best translate the ethical complexities about AI to our audiences, whether it’s employees inside our organizations or to the publics our organizations engage with.
Paul Tighe
Yeah, one of the things, Shel, I’ve done a lot of is appearing in unlikely venues, dressed like this and all that goes with that. At least I don’t have to introduce myself. One thing that I’ve learned – and I have no formal qualifications in communications.
What I learned, though, was to be what I call an insecure teacher. The secure lecturer gives the lecture, and if the student doesn’t understand, that’s his or her fault. The insecure teacher is the one who’s watching around the classroom to see, do they understand what I’m saying? – where I feel a responsibility not simply for the transmission but for the reception of a message.
So one of the things that I would say that for any people working in communications is to try and test effectively what people who are hearing you and listening to you are taking from what you’re saying. So that you try and close out that loop. If there’s a gap in the communication, I think it’s the communicator’s fault.
And I say that as, as I say, the insecure teacher who had to correct examinations and could see these horrible answers coming back from people whose interest was to repeat what you had said as accurately as possible. I saw the mistakes.
So what I learned was to know your audience, know who you’re talking to, know their mentality, and find a language that can bridge between you and them. So for people working in communications in the tech sector, one of the things I would do is try, in whatever your communication is, to bring people along so that they acquire enough understanding to maybe become more difficult and more awkward and ask harder questions – to commit to empowering them through your communication.
Neville Hobson
That’s a good response, I think.
We’re approaching the end of our time today, Monsignor, and we always ask this question in our interviews with interesting guests, such as you, for instance. I think we’ve covered a great deal on this fascinating topic that is very much at the heart of all the conversations we have had in episodes of this podcast over the last year or two about artificial intelligence that’s business focused.
And it’s terrific to get your insights on all of that. But I guess the concluding question we would have would be to say, if there’s a question you wish we had asked, but we haven’t asked it, what would that be?
Paul Tighe
I’m always nervous that I don’t get to speak enough about the potential and the good potential of the AI platforms. And I think properly developed and with adequate human buy-in, I think they will have extraordinary and positive transformative effects.
And I think particularly of the area of medicine. It won’t happen automatically, because our medical systems are already orientated towards the rich, but there is extraordinary potential there to offer new diagnostic tools, new pathways. Those are the sort of things I would always want to say.
At the same time, back to the wisdom of the heart, even there we have to avoid the seduction of a technology that’s going to save us, and get back to healthy practices and ways of living our life. And maybe AIs can become a kind of way of tracking how we’re living, what we’re doing with our time. I think it has the potential to make us more alert to who we are and what we’re doing, and maybe therefore to making healthier choices.
Shel Holtz
Excellent. Well, Monsignor, thank you very much for your time today. It’s been a fascinating conversation and we’re very grateful that you carved out the time to spend with us today.
Paul Tighe
No, thank you very much. Thank you. And look forward to seeing you all again sometime. Thank you.
Shel Holtz
It would be interesting to catch up again in a couple of years.
Silvia Cambié
Thank you.
Neville Hobson
Thank you.
Paul Tighe
Bye bye. Thank you. Or my avatar will do it. Cheers. Bye. Thank you.
Silvia Cambié
Thank you.
Neville Hobson
Thank you.
Fade in outro music, Neville Hobson pre-recorded narrative
You’ve been listening to an FIR interview podcast. FIR Interviews are just one of the podcasts you’ll find on the FIR Podcast Network, which is anchored by For Immediate Release, a monthly show hosted by Neville Hobson and Shel Holtz, with tech reports from Dan York.
Neville and Shel also host short-form episodes during the working week.
Visit us at FIR Podcast Network dot com to find all the public relations and organizational communication podcasts available for listening and following.