Aboriginal and Torres Strait Islander people are advised that this website may contain images and voices of deceased people.

Artificial Intelligence (AI) Transparency Statement 2026

Purpose 

The National Indigenous Australians Agency (NIAA) is committed to the safe, ethical, and transparent use of Artificial Intelligence (AI). The NIAA will leverage AI to improve outcomes for Aboriginal and Torres Strait Islander peoples while ensuring alignment with the Digital Transformation Agency (DTA) Policy for the Responsible Use of AI in Government (v2.0).

The NIAA’s approach is guided by Australia’s AI Ethics Principles.

The NIAA supports Indigenous Data Sovereignty and applies the Framework for Governance of Indigenous Data to ensure that any use of AI strengthens community control, supports Closing the Gap priorities, and protects cultural rights through safe stewardship of Indigenous data. 

How NIAA uses AI 

The NIAA classifies its AI usage according to the DTA Classification System. Based on this classification, the NIAA currently uses AI for workplace productivity. The NIAA has deployed enterprise tools (Microsoft 365 Copilot Chat) for administrative efficiency, including drafting correspondence and summarising meetings. This is governed under the DTA Policy for the Responsible Use of AI in Government and aligned with the National Framework for the Assurance of Artificial Intelligence in Government.

The NIAA applies a human-oversight policy to all AI use in the Agency. The NIAA does not use AI for decision-making. All AI-generated outputs are reviewed and verified by an appropriately authorised NIAA staff member. 

Staff training and capability 

To ensure the responsible use of technology, the NIAA mandates that all staff complete AI literacy and ethics training as a core requirement under the APS AI Plan 2025.

All NIAA staff will complete the AI in Government Fundamentals course via the APS Academy. This training covers ethical implications, security risks, and output verification. 

Governance, reporting, and compliance 

To maintain public trust and meet mandatory Commonwealth requirements, the NIAA has implemented the following governance measures: 

  1. Accountable Official: The NIAA Chief Information Officer (CIO) is responsible for ensuring agency-wide compliance with AI policies.
  2. Risk Management: The NIAA will establish an assessment approach aligned to the DTA AI Impact Assessment to identify privacy, security, and cultural safety risks for AI use cases. 
  3. Mandatory Reporting: The NIAA will maintain an internal AI Use Case Register. The NIAA will formally notify the DTA of transparency compliance and report any high-risk AI applications as required via the DTA Reporting Portal. 
  4. Technical Standards: NIAA systems operate within a PROTECTED ICT environment, managed in partnership with the Department of the Prime Minister and Cabinet (PM&C). 

Transparency and feedback 

This statement was published on 27 March 2026. It will be reviewed annually, or earlier if: 

  1. there is a notable change to the NIAA’s approach to AI, or  
  2. new Commonwealth and/or DTA measures are introduced that affect this statement. 

We encourage feedback from the public regarding our use of these technologies. You can contact us at the following email address:
