
The rapid rise of artificial intelligence (AI) has been one of the defining technological shifts in recent years, especially with the explosion of generative AI. Consequently, we have witnessed increasing concerns around risk and ethics when using AI. The challenge now is for businesses to implement the technology in line with their goals, and to do so in a safe and responsible manner.
Recently, the Albanese government proposed 10 mandatory AI guardrails and a voluntary safety framework, marking a significant move towards balancing innovation with accountability, transparency, and risk management. The proposed framework would also outline best practices for the responsible and safe use of AI. However, there are additional elements the government must consider, such as data integrity and backup.
Recognising the positive and negative impacts of AI
At Veeam, we know AI can enhance data protection and resiliency. It has enabled us to improve threat prediction, data analysis and data recovery, thereby reducing downtime for our customers. As part of our recent partnership around Copilot, which expands on our multi-year relationship with Microsoft, we are also exploring new AI innovations that can enhance data protection and ransomware recovery solutions.
However, we do recognise the potential risks of AI, such as hallucinations, where AI-generated outputs can be inaccurate or fabricated. AI misuse is another concern, where algorithms may be manipulated to produce biased or misleading data. Often, these issues stem from compromised data integrity, where data within the AI model may be incomplete or corrupted. Without rigorous data integrity measures, businesses risk security breaches, compromised decisions, and a loss of trust. In a landscape increasingly shaped by AI, data integrity is not just a technical requirement but a business imperative.
The government’s proposed AI guardrails, which would mandate testing, risk management systems, data governance measures and accountability, will help address these concerns around data quality. Moving forward, the government should detail best practices on how businesses can ensure data integrity, such as limiting unnecessary access to data, using error detection software, and regularly testing data.
Data protection as a vital component of AI best practice
Remember: AI is powered by a large pool of data. That data must be not only accurate but also secure and backed up if it is to produce reliable outputs and protect sensitive information. This is particularly pertinent amidst the boom of generative AI tools, such as chatbots and content generation software, where personal information may be fed into AI models and recalled later to provide tailored outputs. The government’s best practice guidelines for AI should incorporate data protection, backup and recovery as part of the framework to prevent or minimise business disruption if data is compromised.
As AI continues to evolve, balancing its benefits with responsible use should be a collaborative effort among businesses and the government. The Albanese government’s proposed AI guardrails are an encouraging first step towards empowering organisations to build trust and resilience while capitalising on the benefits of new technology. However, this is just the beginning of an ongoing journey towards managing risks around AI. The government should consider providing more detailed guidance on critical areas such as data integrity and data backup. As businesses increasingly rely on AI, ensuring that data is accurate and recoverable will be key to the safe and efficient use of the technology.