On June 19th, 2023, I had the honor of being a guest for the second time on the Data Culture Podcast, hosted by my long-time friend Carsten Bange. As always, we had an interesting conversation, this time focusing on the rapid rise of generative AI and its practical implications for global logistics.
The main points discussed were:
The Strategic View on Generative AI
- The "iPhone Moment": The release of ChatGPT was a turning point that brought AI to the forefront of corporate attention, though for expert teams, it represents an evolution of existing machine learning rather than a total reset.
- A Year for Learning: 2024 is viewed as a period for organizational learning and experimentation rather than a year of total business transformation.
- Managing Expectations: It is critical to balance the high public enthusiasm with a realistic understanding of the technology's current limitations.
Infrastructure and Security
- Secure Internal Access: To protect intellectual property, an internal platform was developed using Microsoft Azure OpenAI Services, ensuring that company data is not used for public model training.
- Vendor Agnosticism: The infrastructure is designed to be flexible, allowing for the future integration of various models, including a shift toward Open Source alternatives.
- Cross-Divisional Collaboration: Implementation is driven by a diverse team including central data experts, IT services, and divisional leads to ensure the technology meets specific business needs.
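The vendor-agnostic design mentioned above usually comes down to putting a thin interface between applications and whichever model provider sits behind them. A minimal sketch of that idea, with hypothetical backend names and canned replies standing in for real API calls (the podcast did not describe the actual implementation):

```python
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every model backend must satisfy."""
    def complete(self, prompt: str) -> str: ...


class HostedBackend:
    """Stand-in for a hosted service (e.g. Azure OpenAI); returns a canned reply here."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] reply to: {prompt}"


class OpenSourceBackend:
    """Stand-in for a self-hosted open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[open-source] reply to: {prompt}"


# Swapping providers means changing one registry entry, not the calling code.
BACKENDS = {"hosted": HostedBackend, "open-source": OpenSourceBackend}


def get_model(name: str) -> ChatModel:
    return BACKENDS[name]()


print(get_model("hosted").complete("Summarize this shipment note"))
```

Because callers only depend on the `ChatModel` interface, a later shift toward open-source alternatives touches the registry rather than every application.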
Practical Use Cases in Logistics
- HR and Recruiting: AI generates culturally relevant, branded visuals for global job postings, drawing on an internal library of thousands of stock photos.
- IT Service Desk: Generative AI assists in the accurate creation of IT tickets and helps support staff pre-generate high-quality responses.
- Knowledge Management: The technology is used to "vectorize" massive internal manuals, allowing employees to ask questions in plain language and receive direct answers with source references.
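The "vectorize and retrieve" pattern behind the knowledge-management use case can be sketched in a few lines. In production this would use a real embedding model and vector store; here a toy bag-of-words vector and cosine similarity stand in, and the manual chunks and source labels are invented for illustration:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'vector' for a text chunk (stand-in for a real embedding model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical manual chunks, each tagged with a source reference.
manual_chunks = [
    ("handbook p.12", "Pallets must be stacked no higher than two metres in the warehouse."),
    ("handbook p.47", "Customs forms for EU shipments are filed through the central portal."),
]

index = [(src, text, embed(text)) for src, text in manual_chunks]


def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k chunks most similar to the question, with their sources."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[2]), reverse=True)
    return [(src, text) for src, text, _ in ranked[:k]]


src, text = retrieve("How high can pallets be stacked?")[0]
print(f"{text} (source: {src})")
```

The retrieved chunk (plus its source reference) is then passed to the language model, which phrases the answer, keeping responses grounded in the manuals rather than in the model's free-form guesses.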
Future Challenges and Risks
- The Problem of Hallucinations: Large Language Models can confidently present false information, which remains a significant hurdle for factual accuracy.
- Cybersecurity: There is an increased risk of sophisticated social engineering and automated phishing attacks.
- Data Integrity: As more AI-generated text enters the internet, future models may struggle with a "feedback loop" where they are trained on their own artificial output.