As AI-powered chatbots become increasingly embedded in our daily lives and business operations, it’s easy to be drawn in by their speed and convenience. But The Voice of Reason – our chief ideas officer, Simon Clayton – urges a moment of caution.
The events industry is embracing artificial intelligence, and chatbots are often seen as a simple, effective first step. These ‘digital assistants’ offer efficiency, cost savings and round-the-clock support. However, a recent case involving Air Canada should serve as a warning to any company considering handing over customer interactions to AI without sufficient supervision.
In that widely reported case, the airline was forced to provide a partial refund to a passenger after its chatbot misled him about the airline’s bereavement fare policy. The chatbot incorrectly informed the customer that he could apply for a discounted rate retrospectively, a statement that directly contradicted the airline’s actual policy. When the customer later sought the refund, Air Canada refused, only to be overruled by a tribunal that held the company accountable for the misinformation generated by its AI system. The ruling dismissed Air Canada’s claim that the chatbot was a separate entity and confirmed that businesses are responsible for all information presented on their websites, regardless of whether it was written by a human or an AI.
For event organisers and venues, the implications are clear. Chatbots may seem like an easy way to provide quick responses to attendee queries or assist exhibitors, but they come with risks. If an AI-driven system provides incorrect information – about stand pricing, refund policies, venue availability, event access, or health and safety regulations – the organisation could be held liable.
Unlike a human who can apply discretion and verify details, a chatbot works on predefined logic and data. If that data is outdated or incomplete, the chatbot could confidently deliver incorrect answers, leaving customers frustrated and businesses legally exposed.
The events industry thrives on trust and relationships. While chatbots can play a role in customer support, they should never be treated as a replacement for humans. Here’s how to mitigate the risks:
1. Ensure regular updates and fact-checking:
AI models are only as good as the information they are trained on. Ensure that your chatbot’s responses are reviewed and updated regularly to reflect current policies, pricing and terms.
2. Always provide a human backup:
Chatbots should never be the sole point of contact for customer queries, especially those involving financial transactions or bookings. Providing a clear and easy route for escalating issues to a human is essential.
3. Monitor and audit responses:
Regular audits of chatbot interactions can help identify recurring inaccuracies. If a chatbot consistently provides misleading answers, it may be time to retrain or reconsider its use in customer interactions.
4. Be transparent about AI limitations:
Customers should be aware when they are interacting with a chatbot rather than a human. Clearly stating that the chatbot is an automated system and advising users to verify important information can help manage expectations and reduce the risk of disputes.
The Air Canada case serves as a cautionary tale for any industry using AI in customer service roles. While chatbots can enhance efficiency, they should not be treated as infallible. Over-reliance on AI without proper oversight can lead to misinformation, customer dissatisfaction, and legal liabilities.
The lesson is clear: AI can be a useful tool, but it must be implemented with care, regular monitoring, and a safety net of human expertise. Otherwise, businesses risk finding themselves in the same predicament as Air Canada – having to stand by an AI-generated mistake, no matter the cost.
By Simon Clayton, Chief Ideas Officer of RefTech
First featured in Exhibition News, April 2025