Artificial Intelligence Policy

Intent

This Policy establishes governance structures and principles to ensure the ethical and responsible use of Artificial Intelligence (AI) within James Cook University (JCU; the University), and guides the use, procurement, development, management and integration of AI across educational, administrative and research functions.

Scope

This Policy applies to all Authorised Users of the University’s information management systems regardless of location, whether during or after business hours, and whether on JCU-owned or privately owned devices.

The Policy applies to all AI technologies including machine-assisted decision-making applications, machine learning, robotic process automation (RPA), natural language processing, deep learning (neural networks), computer vision and robotics.

Definitions

Except as otherwise specified in this Policy, the meanings of the terms used are as per the Digital Policy Glossary.

AI Technology

AI Technology encompasses tools, systems, and applications that utilise artificial intelligence to perform tasks requiring human-like intelligence. This includes machine learning, natural language processing, computer vision, large language models, and generative AI, which enable machines to analyse data, recognise patterns, make decisions, and generate content autonomously. AI Technology is applied in areas such as image and video analysis, text generation, language translation, and automation, driving efficiency and new capabilities in digital environments.

Computer Vision

Computer Vision (CV) enables computers to 'see' and comprehend the visual world. CV algorithms analyse images and videos for tasks such as object detection, face recognition, and autonomous vehicle navigation.
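
For illustration, the following is a minimal sketch of a routine CV task (face detection), assuming the open-source OpenCV library and its bundled pre-trained detector; the input file name is hypothetical.

```python
# A minimal face-detection sketch, assuming OpenCV (pip install opencv-python).
import cv2

# Load the pre-trained frontal-face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Read a (hypothetical) image and convert it to greyscale for the detector.
image = cv2.imread("photo.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Report a bounding box for each detected face.
for (x, y, w, h) in cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5):
    print(f"Face found at x={x}, y={y}, width={w}, height={h}")
```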

Generative AI

Generative AI generates new content such as images, text, software code or music using algorithms and machine learning techniques.

Image generators

Image generators, a type of generative AI, create realistic images based on user prompts. They use machine learning to learn patterns, styles, and semantic information from large image datasets. These models can produce images with unique visual attributes, textures, and structures. Examples include Midjourney, Stable Diffusion, and DALL-E.
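
For illustration, the sketch below shows prompt-driven image generation, assuming the open-source diffusers library and a publicly available Stable Diffusion checkpoint; the model identifier and prompt are illustrative only, not an endorsement of a particular product.

```python
# A minimal text-to-image sketch, assuming the diffusers library and a GPU.
import torch
from diffusers import StableDiffusionPipeline

# Download a publicly available Stable Diffusion checkpoint (illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The model synthesises a novel image from a text prompt.
image = pipe("a watercolour painting of a coral reef at sunrise").images[0]
image.save("reef.png")
```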

Large Language Models

Large Language Models (LLMs) use deep learning to understand and generate human-like text. They learn from vast text collections to grasp language patterns, semantics, and context, enabling them to handle complex queries, provide detailed answers, and support sophisticated natural language interactions in areas such as customer service, content creation, information retrieval and decision support. LLMs can also understand programming languages, generate code snippets, offer code suggestions, detect errors and improve code, enhancing developer productivity by reducing errors and promoting consistency. Examples include OpenAI ChatGPT, Google Bard, Bloom, GitHub Copilot, and Amazon CodeWhisperer.
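
For illustration, the following minimal sketch generates text with a small open model, assuming the Hugging Face transformers library; the model choice is illustrative only, and production systems would use far larger models.

```python
# A minimal text-generation sketch, assuming Hugging Face transformers.
from transformers import pipeline

# Load a small open model for demonstration purposes.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt using patterns learned from its training text.
result = generator("Artificial intelligence in higher education", max_new_tokens=40)
print(result[0]["generated_text"])
```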

Machine learning

Machine learning (ML) is a subset of AI that allows computers to autonomously learn and improve without being explicitly programmed. ML algorithms are trained on data to make predictions or decisions.
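
For illustration, the sketch below trains a simple classifier on labelled examples, assuming the scikit-learn library; it shows the defining ML pattern of learning from data rather than explicit rules.

```python
# A minimal supervised-learning sketch, assuming scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled examples: flower measurements (X) and species labels (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is fitted to the data rather than explicitly programmed.
model = DecisionTreeClassifier().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```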

Natural Language Processing

Natural language processing (NLP) is a field of AI concerned with the ability of computer systems to understand and generate human language. NLP algorithms are used to analyse and comprehend text, converse with users, and perform tasks such as language translation, sentiment analysis, and question answering.
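
For illustration, a minimal sketch of one of the NLP tasks named above (sentiment analysis) follows, assuming the NLTK library and its VADER lexicon; the example sentence is hypothetical.

```python
# A minimal sentiment-analysis sketch, assuming NLTK (pip install nltk).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# Fetch the VADER sentiment lexicon on first run.
nltk.download("vader_lexicon", quiet=True)

analyser = SentimentIntensityAnalyzer()
# The compound score ranges from -1 (most negative) to +1 (most positive).
scores = analyser.polarity_scores("The new enrolment portal is fast and easy to use.")
print(scores["compound"])
```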

Policy

1. At JCU, AI may be used to support teaching, learning, research and administrative functions and activities, including but not limited to:

1.1 making administration and service delivery more efficient (increasing consistency, speed, agility and scalability);

1.2 improving competitiveness by adapting quickly to changing conditions;

1.3 improving speed and agility to identify issues, risks and opportunities or improve controls;

1.4 protecting JCU’s physical and digital resources;

1.5 verifying identity and highlighting potential academic misconduct;

1.6 providing feedback to students;

1.7 improving learning outcomes or providing tailored assistance;

1.8 engaging in student support activities;

1.9 improving student and staff experiences and/or capabilities;

1.10 recommending study pathways towards identified career goals;

1.11 supporting marketing activities, including content personalisation;

1.12 advancing research productivity, innovation and impact by allowing new approaches to data analysis and efficient summarisation of large volumes of data; and

1.13 enabling frontier research.

2. JCU acknowledges that:

2.1 there are legitimate concerns about the use and reliability of AI, and the University will seek to balance the opportunities for improvement and automation against the need to continually manage and mitigate risks to the University and its community;

2.2 algorithmic bias may result in erroneous or unjustified differential treatment which could have unintended or serious consequences for groups of individuals and/or for their human rights; and

2.3 the use of AI will be increasingly regulated and, as such, this policy and any associated procedures must be maintained to ensure compliance with current and emerging regulatory standards and government advice.

3. Use of AI systems must comply with the Information Privacy Policy, Information Security Policy, Information Security Management Framework, Data Governance Policy and the Digital Technologies Acceptable Use Policy as appropriate.

Ethical Principles for the use of AI at JCU

4. The University recognises the ethical implications of AI technologies, and seeks to align the application of AI with societal values and contribute positively to human, societal, and environmental wellbeing.

5. JCU upholds the following guiding principles to ensure the ethical and responsible integration of AI across the University. These principles promote transparency, accountability, and adherence to ethical norms across all AI initiatives, fostering trust among stakeholders and enhancing decision-making processes while mitigating associated risks.

5.1 Transparency, Accountability, and Contestability. The University maintains transparency in decision-making processes involving AI, holds individuals and work units accountable for the outcomes of AI systems, and provides mechanisms for individuals to contest decisions and provide feedback (see Breaches and complaints below).

While the capability of AI to analyse and find patterns in large quantities of data, undertake high-volume routine process work, or make recommendations based on complex information is recognised, AI-based functions and decisions must always be subject to human review and intervention. AI Product Owners and Business Owners are responsible for the management of their AI systems.

5.2 Human-Centred Approach. The University designs AI applications with a focus on human rights, diversity and inclusivity, ensuring they respect individual autonomy and prevent discrimination. AI should deliver the best outcomes for human users and provide key insights into decision-making.

5.3 Fairness. The University strives for AI systems that are inclusive, accessible, and free from bias, and actively engages with impacted communities to mitigate negative impacts. Use of AI will include safeguards to manage data bias and data quality risks. Effective use of AI depends on high-quality, relevant data and on careful data management to ensure potential data biases are identified and appropriately managed (refer also to the Data Governance Policy).

5.4 Privacy Protection and Security. The University upholds privacy rights and data protection standards, implementing robust security measures to safeguard sensitive information. AI systems will include appropriate levels of assurance tailored to the risk level of specific projects. The JCU community must have confidence that data is used safely and securely in a manner that is consistent with privacy, data sharing and information access requirements (refer also to the Information Privacy Policy, Information Security Policy and Records Management Policy).

5.5 Reliability and Safety. The University ensures AI systems operate reliably and safely, meeting intended purposes and complying with relevant legislation and standards.

5.6 Decision-Making, Oversight, and Reporting Structures. The University will ensure decision-making and accountability structures oversee the development and implementation of AI technologies. Governance bodies will be responsible for reviewing and accepting the usage of AI technologies, with reporting avenues to senior university leadership.

AI Risks and Opportunities

6. All AI systems exist on a spectrum of risk, ranging from low-risk systems (not automated, often non-operational or used for basic research, containing no personal or sensitive data, and having no direct impact on individuals) to high-risk automations (highly autonomous, usually operational AI systems with minimal controls, which could impact individual and institutional safety and wellbeing).

7. JCU will, in accordance with the University’s Risk Appetite Statement for technology risk:

7.1 assess and manage the use of AI to understand and effectively manage these risks; and

7.2 balance risks and opportunities to protect JCU and individuals and their human rights.

8. The University assesses and manages risks associated with AI technologies, considering both the potential benefits and unintended consequences as per the JCU Risk Management Policy and Risk Management Framework and Plan.

9. An inquiry-based Generative AI Framework, within which to determine the risks and opportunities that inform the application of AI to education at JCU, will be developed through the AI@JCU program.

Roles and Responsibilities

10. Council. Council is ultimately responsible for approving, and committing to, the risk appetite statement for technology risk, and (through the Audit, Risk and Compliance Committee) monitoring and reviewing the University’s compliance with the Ethical Principles for the Use of AI at JCU.

11. Vice Chancellor. Through the establishment of AI@JCU, the Vice Chancellor is responsible for providing a mechanism for the training and upskilling of staff in AI technologies, developing resources to support staff and student use of AI, supporting communities of practice for the application of AI across the University’s operations, and ensuring that this Policy remains current with emerging technologies and regulatory changes.

12. Director, AI@JCU. Responsible for leading and directing the activities of AI@JCU, and for providing direct support and advice to the Vice Chancellor and senior university leadership on opportunities to adopt and employ AI technologies across the University as a key component of digital transformation, ensuring alignment with the University’s strategic goals and objectives. The Director also undertakes the knowledge-sharing and community-engagement aspects of AI within the University, including:

12.1 establishing the risk and ethical guardrails for AI;

12.2 ensuring systems adhere to industry standards and best practices in AI governance; and

12.3 working collaboratively with the Chief Digital Officer and Technology Solutions Directorate to ensure all AI projects undergo thorough risk and ethical evaluations before implementation.

13. Chief Digital Officer. Responsible for the operationalisation of the University’s policies and frameworks around the lifecycle management of AI systems and applications, and for the technology capability that supports AI, including data governance, data management and cybersecurity. This includes:

13.1 oversight of AI system procurement and implementation ensuring that for all AI projects the risk and ethical evaluation processes are tailored to the nature and risk level of the project;

13.2 ensuring that AI systems adhere to industry standards and best practices in AI governance;

13.3 establishing procedures to enable the delegation of AI risk assessments for low and medium risk AI projects; and

13.4 forming a cross-university working group (jointly chaired by the Chief Digital Officer and Director, AI@JCU) to establish processes for JCU-wide risk assessments, ensuring privacy compliance and governance standards in AI projects.

14. AI Product Owner. AI Product Owners must work with Business Owners and the Technology Solutions Directorate in the ongoing management, governance and monitoring of AI in systems once procured or implemented, including regular audits and assessments of AI systems to ensure they are functioning as intended and adhering to the Ethical Principles for the Use of AI at JCU, and mitigating and resolving any identified biases or errors.

15. Privacy Officer. Oversight of the AI Privacy Impact Assessment process, ensuring that all AI projects undergo thorough privacy evaluations before implementation.

16. Authorised Users. Authorised Users are responsible for ensuring that their use of AI applications complies with the principles of this Policy, regardless of whether these technologies are created internally, procured, or sourced through external partnerships, and that these technologies are used only for their intended purposes.

AI endorsement, approval and oversight

17. The Ethical Principles for the Use of AI at JCU are applied at each phase of the AI system lifecycle. The lifecycle stages include:

17.1 problem identification and requirements analysis;

17.2 planning, design, data collection and modelling;

17.3 development and validation (including training and testing stages);

17.4 deployment and implementation; and

17.5 monitoring, review and refinement (including when fixing any problems that occur) or destruction (removal of the system from use).

18. AI@JCU will develop institutional knowledge and insights about the use, management and control of AI for the purposes of education, research, administration and service delivery at JCU. Business Owners will engage with AI@JCU and its relevant community of practice in the development of any proposal to adopt AI technologies.

19. Business Owners must adhere to a formal approval process for AI technologies, as established by the Chief Digital Officer, obtaining feedback before initiating any development or procurement activities.

20. Endorsement for the use of AI as part of an identified business activity or solution, in advance of the procurement and/or development of AI systems, must occur through the Vice Chancellor’s Committee (VCC). Where the VCC considers a system to be high risk, further advice (internal or external) may be sought prior to reconsideration by the VCC.

21. As per the roles and responsibilities above, AI Product Owners and Business Owners are responsible for the monitoring, review and refinement of AI systems and applications.

Privacy and records management

22. Where personal information is captured, used or stored by an AI system the Business Owner must ensure compliance with the requirements of the Information Privacy Policy, Records Management Policy, Information Security Policy and, where relevant, the University’s research ethics approval processes.

23. Data used to develop algorithms or AI systems, and any data generated, shared, managed and/or recorded as part of an AI system’s operation or algorithm, is considered corporate data and must be managed in line with the Data Governance Policy and the Information Security Policy. This requirement applies to the full system lifecycle and to the lifetime of the data (whichever is the longer).

Breaches and complaints

24. Any person can report what they believe to be inappropriate use of AI to the relevant Business Owner for action.

25. Any complaint regarding the use of AI at the University will be managed through the processes identified in the Staff or Student Code of Conduct, and may also have regard to the Digital Technologies Acceptable Use Policy.

26. Any allegation of academic misconduct using AI will be managed in accordance with the Academic Misconduct Procedure for students, the Staff Code of Conduct or the Managing and Investigating Potential Breaches of the JCU Code for the Responsible Conduct of Research Procedure as appropriate.

27. Any data or privacy breaches (suspected or confirmed) will be managed in accordance with the Information Privacy Policy and, where relevant, the Personal Information Data Breach Procedure.

Related policy instruments

Academic Misconduct Procedure

Data Governance Policy

Digital Technologies Acceptable Use Policy

Information Privacy Policy

Information Security Policy

Information Security Management Framework

JCU Risk Management Policy

Managing and Investigating Potential Breaches of the JCU Code for the Responsible Conduct of Research Procedure

Personal Information Data Breach Procedure

Records Management Policy

Records Management Framework

Right to Information Policy

Risk Management Framework and Plan

Staff Code of Conduct

Student Code of Conduct

Schedules/Appendices

Nil

Related documents and legislation

Information Privacy Statement and Collection Notice

Privacy and Right to Information Guidelines

Fact Sheet Privacy and Right to Information

Information Privacy Act 2009 (Qld)

Right to Information Act 2009 (Qld)

Privacy Act 1988 (Cth)

Administration

NOTE: Printed copies of this policy are uncontrolled, and currency can only be assured at the time of printing.

Approval Details

Policy Domain: University Management

Policy Sub-domain: Digital

Policy Custodian: Deputy Vice Chancellor, Services and Resources

Approval Authority: Vice Chancellor

Date for next Major Review: 05/09/2029

Revision History

Version no.: 24-1

Approval date: 05/09/2024

Approved by: Vice Chancellor

Implementation date: 05/09/2024

Details: Policy established.

Author: Chief of Staff

Keywords

Artificial Intelligence, generative AI, ethical considerations, information security, data governance, robotic process automation, machine learning, large language models

Contact person

Chief of Staff