At the fourth International Strategic Communication Summit (Stratcom Summit'24), organised by the Presidency's Directorate of Communications under the theme "AI in Communication: Trends, Traps and Transition," a panel titled "Ethics in AI-Enabled Communication: Navigating Transparency and Accountability" was held.
The panel, moderated by Assoc. Prof. Gaye Aslı Sancar Demren of Galatasaray University, featured as speakers Prof. Ulrich Brückner of Stanford University; Marisa Zalabak, Founder of Open Channel Culture; Jordan Morgan, Director of Programs at Forward Thinking; Andreas Horn, Head of AIOps at IBM; and Nan Si, Deputy Acting Director of the Asian and African Languages Programming Centre at China Media Group.
Prof. Brückner from Stanford University noted that there has been a rise in awareness regarding the use of artificial intelligence.
Brückner highlighted that artificial intelligence has the potential for both positive and negative applications, stressing the importance of establishing clear normative definitions for it.
Brückner emphasised the need for a more structured approach to managing the negative and positive consequences of artificial intelligence, stating, "With artificial intelligence, we have found ourselves in a situation where ideas create us, rather than us creating them."
Prof. Brückner emphasised that the creation of ethical standards and their implementation have emerged as worldwide concerns.
Zalabak, the Founder of Open Channel Culture, addressed the question regarding the role of policymakers in creating an ethical framework.
Noting that artificial intelligence requires imagination, Zalabak addressed its negative consequences.
Pointing out that international codes of ethics should be established, Zalabak stated that outdated business practices should be abandoned.
Highlighting the need to protect children, the elderly and vulnerable people from the negative effects of artificial intelligence, Zalabak called attention to the threat of weaponising artificial intelligence.
"Israel uses artificial intelligence to create target lists"
Morgan, Director of Programs at Forward Thinking, stated that artificial intelligence can be highly practical in responding to incidents such as terrorist attacks, natural disasters, and disease outbreaks.
Noting that artificial intelligence is also used in the "context of war," Morgan said, "Artificial intelligence is employed in communication; however, we are aware that when things go wrong and when mistakes are made, then there are repercussions."
Morgan added, "We know that Israeli soldiers have been using artificial intelligence since October 7. They use it to create a target list."
Emphasising that the decision-makers of artificial intelligence are humans, Morgan stated that it is necessary to understand how these decisions are made in order to ensure transparency.
Highlighting the accountability of artificial intelligence, Morgan said, "Let's suppose artificial intelligence has prepared a list. Then, who is responsible for the harm caused to civilians? Or, who is responsible when international law is violated? We need to be cautious in regulating artificial intelligence."
Morgan also talked about individual responsibility in the use of artificial intelligence.
Andreas Horn, Head of AIOps at IBM, stated that artificial intelligence is not a novel idea, noting that it has been discussed since the 1940s and has been around as a concept for 60–70 years.
Pointing out that attention should also be paid to the positive aspects of emerging technologies, Horn underlined, "Although some governments may decide to restrict OpenAI or certain websites, other countries will keep developing it and will surpass you in the competition. Furthermore, they could ultimately replace you in some areas with artificial intelligence. Therefore, it is essential to get involved in the flow and take full advantage of it."
Horn underlined the necessity of strong leadership in AI ethics.
"The use of artificial intelligence should be human-centred"
China Media Group Deputy Acting Director of the Asian and African Languages Programming Centre, Nan, stated that artificial intelligence technologies have dramatically altered people's lives and learning, with the communication industry playing a critical role in this process.
Nan said, "We can learn how to use AI more effectively. We can also learn to manage it more effectively."
Recalling that there are guidelines for the use of artificial intelligence, Nan remarked, "The use of artificial intelligence should be human-centred. This calls for us to adopt a human-first philosophy. We must put people's welfare and well-being first."
Nan stated that reality and security are critical in the use of artificial intelligence, and that it should be available for everyone to use.