The door is slammed shut on generalist large language models for Health & Safety Information

13 February 2024

THE AUTHOR: SIMON WRIGHT – CEO, INTUETY

The days of trusting the recent barrage of unproven AI models such as ChatGPT and Gemini for your critical health and safety insights and predictions are over.

The finalisation of the EU’s AI Act marks a watershed moment, slamming the door shut on the flagrant use of large language models (LLMs) as standalone sources of critical information. This landmark legislation champions governance, transparency, and accountability, prioritising responsible AI development in areas where accurate information is a life-or-death matter.

While the Act undergoes its final legislative polish, its core principles are clear. We’re entering a future where AI must be specific to the task at hand and act as a collaborator, not a replacement, in safeguarding our well-being. Gone are the days of blind faith in opaque algorithms that deliver potentially harmful health advice or manipulate safety-sensitive data.

Here’s how the AI Act rewrites the rules:

  • High-risk AI under the microscope – Systems impacting healthcare, infrastructure, law enforcement, and recruitment face stricter scrutiny. Think of it as a firewall shielding these sensitive areas from unreliable AI.
  • Transparency, demystifying the black box – Developers must explain how their AI works, opening the door for independent evaluation and ensuring users understand the decision-making process. No more black boxes dictating your health or safety!
  • Human in the loop – The Act mandates human oversight for high-risk AI, placing crucial decisions firmly in responsible hands. AI becomes a powerful tool, not an autonomous overlord.
  • Data responsibility is non-negotiable – Responsible data collection and processing become paramount, safeguarding sensitive information and preventing discrimination. Your data, your safety, and your rights are protected.
  • Enforcement with teeth – Independent bodies ensure compliance, preventing rogue AI from jeopardising your health and safety.

The AI Act doesn’t stifle innovation; it channels it responsibly. Establishing clear ethical and legal boundaries creates a level playing field, fosters public trust, and ensures AI serves society, not the other way around.

Conclusion:

At last, the built environment industry will be forced to wake up and smell the coffee and realise that the outputs from large language models such as ChatGPT and Gemini (formerly Bard) are nothing more than fictional creative writing and cannot be relied upon for information.

To obtain critical information that can genuinely be relied upon, you need to be using a Large Risk Model; an example is the world’s largest and best-developed, operated by Intuety.