Empowering mindful innovation – the crucial role of responsible AI

Artificial Intelligence (AI) has become a transformative technology across various industries, including the travel sector. It has the potential to revolutionize customer experiences, enhance personalization, and improve operational efficiency. However, the rapid advancements in AI also raise concerns about responsibility, security, and privacy. Against that backdrop, the discussion about the state of responsible AI and the role of regulation has become increasingly important.

I recently had the opportunity to discuss this highly relevant matter during a panel discussion at the Phocuswright Conference in Ft. Lauderdale, moderated by Laura Chadwick, President & CEO, The Travel Technology Association, and joined by Robert Cole, CEO, RockCheetah, and Lara Tennyson, Head of US Federal Affairs, Booking Holdings. Here are my main takeaways from the conversation among the panelists and the many industry participants who attended the event.

Transparency is Crucial

One of the primary concerns with AI is the lack of transparency. Users need to know when they are interacting with AI, especially in the case of AI chatbots. Transparency builds trust and helps users understand the limits of AI capabilities. It enables them to make informed decisions and ensures that AI systems are not misused or misunderstood. Companies like Booking.com and Priceline have taken steps to ensure transparency by clearly indicating when users are interacting with AI-powered systems.
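
To make the disclosure principle concrete, here is a minimal sketch in Python; the ChatReply structure, its field names, and the render_for_user helper are hypothetical illustrations rather than a description of how Booking.com or Priceline implement this.

    from dataclasses import dataclass

    @dataclass
    class ChatReply:
        # A chatbot reply that always carries an AI-disclosure label.
        text: str
        ai_generated: bool = True
        disclosure: str = "You are chatting with an AI assistant."

    def render_for_user(reply: ChatReply) -> str:
        # Show the disclosure whenever the reply was produced by AI.
        if reply.ai_generated:
            return f"{reply.disclosure}\n\n{reply.text}"
        return reply.text

    print(render_for_user(ChatReply(text="Your hotel offers free cancellation until May 1.")))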

Another important aspect is knowing the sources and origins of the information that AI-powered systems base their answers (or recommendations) on. Explainable AI (XAI) is an active field of research, and given the "hallucinations" that can arise from generative AI models built to optimize next-word prediction, the ability to explain how a result was generated will matter to users and help improve the long-term acceptance of such AI algorithms.
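
As a simple illustration of surfacing sources, the following sketch returns an answer object that keeps references to the documents it drew from, so the origins of a recommendation can be displayed alongside it. The tiny keyword lookup, the document names, and the answer_with_sources function are made-up stand-ins, not a description of any production retrieval system.

    from dataclasses import dataclass, field

    @dataclass
    class SourcedAnswer:
        text: str
        sources: list = field(default_factory=list)  # document names or URLs the answer is based on

    # Toy "knowledge base" of documents the assistant is allowed to draw on.
    DOCUMENTS = {
        "visa-policy.md": "Travelers from the EU may visit for up to 90 days without a visa.",
        "baggage-rules.md": "Baggage rules: one carry-on bag up to 8 kg is included in the basic fare.",
    }

    def answer_with_sources(question: str) -> SourcedAnswer:
        # Naive keyword retrieval: pick documents that share longer words with the question.
        keywords = [w for w in question.lower().split() if len(w) > 4]
        matches = [name for name, body in DOCUMENTS.items()
                   if any(word in body.lower() for word in keywords)]
        # A real system would pass the matched passages to a language model here;
        # the point is that the returned answer keeps a reference to its sources.
        summary = " ".join(DOCUMENTS[name] for name in matches) or "No supporting document found."
        return SourcedAnswer(text=summary, sources=matches)

    print(answer_with_sources("What are the baggage rules for my fare?"))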

Accuracy is Essential

AI models, particularly large language models, are not infallible. They rely on vast amounts of data, and inaccuracies can occur. It is crucial for companies to continuously work on improving the accuracy of their AI systems, which requires ongoing monitoring, testing, and refinement. Booking.com, for example, emphasizes the importance of accuracy in its AI systems and invests in continuous improvement to provide reliable information to its customers. While traditional AI and statistical techniques have built-in safeguards to quantify and optimize accuracy, generative AI models still face challenges with hallucinations and mathematical calculations. Advances in fine-tuning, retrieval-augmented generation (RAG), and newer conformal prediction techniques are promising avenues for reducing these adverse effects and increasing the overall capabilities of generative AI systems.
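
For readers curious what one of these techniques looks like in practice, below is a minimal sketch of split conformal prediction for a classifier: a held-out calibration set is used to pick a score threshold so that, with probability roughly 1 - alpha, the true label lands inside the returned prediction set. The probabilities and labels here are invented for illustration, and extending such guarantees to free-form generative output remains an open research problem.

    import numpy as np

    def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
        # Nonconformity score: 1 minus the probability assigned to the true label.
        n = len(cal_labels)
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        # Quantile level adjusted for finite calibration sets (split conformal).
        q_level = np.ceil((n + 1) * (1 - alpha)) / n
        return np.quantile(scores, min(q_level, 1.0), method="higher")

    def prediction_set(probs, qhat):
        # Keep every label whose nonconformity score is within the threshold.
        return [label for label, p in enumerate(probs) if 1.0 - p <= qhat]

    # Made-up calibration data: softmax outputs for 5 examples over 3 classes.
    cal_probs = np.array([
        [0.7, 0.2, 0.1],
        [0.1, 0.8, 0.1],
        [0.3, 0.3, 0.4],
        [0.6, 0.3, 0.1],
        [0.2, 0.2, 0.6],
    ])
    cal_labels = np.array([0, 1, 2, 0, 2])

    qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.2)
    print(prediction_set([0.5, 0.3, 0.2], qhat))  # labels the model cannot rule out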

Anticipating the Future

The future of AI is uncertain, and it is crucial to have the right guardrails in place to avoid potential pitfalls. As technology evolves, regulations must keep pace. Governments worldwide are actively working on AI regulations, and it is essential to strike a balance between enabling innovation and ensuring responsible use of AI, particularly with respect to security and governance. The EU, the Biden administration (see its Executive Order on AI), and the US Congress are actively engaging in understanding AI technology and creating impactful policies (see the NIST AI Risk Management Framework). Private companies such as Google (which recently introduced its Secure AI Framework) are also active in this field, working to fold the security and compliance issues that arise with the use of AI systems into their overall security processes for software systems.

Collaboration and Input from Diverse Perspectives

The development of AI regulations requires collaboration and input from various stakeholders. Government agencies, industry experts, civil society groups, nonprofits, and startups must all have a seat at the table. The US government, through AI Insight Forums and AI councils, is actively seeking input from people with diverse views and expertise. We take part in organizations like the Travel Technology Association to help ensure that regulations are comprehensive and inclusive, and that they strike the right balance between oversight and innovation.

Balancing Innovation with Regulation

Regulation plays a crucial role in creating a level playing field and ensuring responsible AI practices. However, striking the right balance between regulation and innovation is essential. Overregulation can stifle innovation, particularly for startups and smaller companies that may not have the resources to comply with extensive regulatory requirements. It is important to consider the potential impact on competition and encourage a fair and open market for AI technologies. Given the capital required to develop large language models (LLMs), a variety of open-source initiatives have sprung up in parallel, and several participants expressed a fear of regulatory capture, in which large industry players with deep pockets influence regulations to safeguard their own interests and stifle innovation that may be crucial to solving some of the problems identified above.

Protecting Consumer Interests

Responsible AI should prioritize consumer protection and data privacy. Companies must adhere to robust principles, including self-determined decision-making, human supervision, data protection, privacy, consumer protection, and fair competition. These principles should be integrated into AI systems from the very beginning, irrespective of the industry. Governments need to create meaningful and impactful policies that safeguard consumer interests while fostering innovation. This is especially important in the travel vertical, where PII (personally identifiable information) and PCI (payment card industry) data, along with details such as passport information, are routinely used, stored, and transmitted across passenger service and departure control systems that interact with users through a variety of interfaces.
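
As one small, illustrative safeguard (and emphatically not a substitute for PCI DSS compliance or a full data-protection program), the sketch below redacts card-number-like and passport-number-like strings before a free-text message is logged or forwarded to an AI service. The regular expressions are deliberately simplistic; real systems rely on vetted tokenization and data-loss-prevention tooling.

    import re

    # Deliberately simple patterns for illustration only; real PII/PCI detection
    # needs far more robust rules, validation (e.g., Luhn checks), and review.
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
    PASSPORT_PATTERN = re.compile(r"\b[A-Z]{1,2}\d{6,8}\b")

    def redact(text: str) -> str:
        # Replace anything that looks like a card or passport number before the
        # text is logged or sent to an external AI service.
        text = CARD_PATTERN.sub("[REDACTED CARD]", text)
        text = PASSPORT_PATTERN.sub("[REDACTED PASSPORT]", text)
        return text

    message = "Please rebook me, card 4111 1111 1111 1111, passport X1234567."
    print(redact(message))
    # Please rebook me, card [REDACTED CARD], passport [REDACTED PASSPORT].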

Global Collaboration and Interoperability

AI is a global phenomenon, and regulations must reflect this reality. Collaboration between different countries and regions is crucial to ensure interoperability and avoid fragmented regulatory frameworks. The US and European Union (EU) are actively engaged in discussions on AI regulations through forums like the Trade and Technology Council. Harmonizing regulations and creating interoperable standards will facilitate the responsible use of AI technologies across borders.

The Role of Startups and Incumbents

Startups and incumbents both have a role to play in the AI landscape. Startups often bring fresh perspectives, innovative ideas, and agile approaches to problem-solving. They can disrupt traditional models and drive industry-wide transformation. Incumbents, on the other hand, have the advantage of resources, data, and infrastructure. They can provide stability, scale, and expertise. Collaboration between startups and incumbents can lead to synergistic outcomes, fostering innovation while leveraging existing industry knowledge and resources.

Ongoing Education and Learning

As AI continues to evolve, ongoing education and learning are vital. Policymakers, industry professionals, and consumers need to stay informed about the latest developments in AI technology and its implications. Understanding the potential risks and benefits of AI will enable informed decision-making and effective regulation. Platforms like podcasts, industry conferences, and open forums provide opportunities for sharing knowledge and fostering dialogue.

Striking a Balance Between Security and Innovation

AI presents both security risks and innovative opportunities. While there is a need to protect AI systems from malicious use, it is equally important to promote innovation and harness the potential benefits of AI. Striking a balance between security measures and innovation-friendly policies will ensure responsible AI practices without stifling progress.

Building Trust and Ethical AI

Responsible AI goes beyond regulatory compliance. It encompasses building trust with users, ensuring ethical AI practices, and prioritizing the well-being of individuals and society. Companies must adopt ethical guidelines and frameworks that promote fairness, accountability, and transparency in their AI systems. By prioritizing responsible AI, organizations can foster trust among users and contribute to the positive impact of AI on the travel industry.

The Future of Responsible AI

The future of responsible AI lies in a multifaceted approach that combines regulatory measures, collaboration, innovation, and ongoing education. Governments, industry players, and consumers must work together to shape regulations that strike the right balance between enabling innovation and ensuring ethical, secure, and transparent AI practices. By doing so, AI can drive the transformation of the travel industry, providing personalized, efficient, and responsible experiences for travelers worldwide.

In summary, the state of responsible AI and the role of regulation are critical considerations in the travel industry and beyond. Transparency, accuracy, collaboration, and a balanced approach to regulation and innovation are key principles that shape the future of responsible AI. By prioritizing consumer interests, fostering global collaboration, and promoting ethical practices, the travel industry can leverage the power of AI to enhance customer experiences while ensuring responsible and secure use of this transformative technology.

About the Author

Sundar Narasimhan, SVP & President, Sabre Labs & Product Strategy