University of the Third Age - Brisbane
Sue Robb (sue@psst.net.au)
2025

Contents

 

Introduction
Recap
Agenda
For your amusement
AI in Healthcare
Key Applications
Symptom Checkers & AI Diagnosis
Wearable Devices & Preventive Care
Ethical Considerations
AI in Finance
Fraud Detection
Budgeting & Personal Finance
Algorithmic Trading
Risks & Trust in AI Finance
AI in Entertainment: Transforming How We Create and Consume Media
Recommendation Systems
Photo & Video Editing
AI-Generated Content
Hands-On Activity
Group Discussion
Wrap-Up & Q&A

 

 

Introduction

Recap

Let’s begin with a quick review of what we’ve covered so far. In our previous sessions, we explored the basics of Artificial Intelligence, starting with the question, "What is AI?" We learned that AI refers to machines or software that can mimic human intelligence—such as recognizing speech, making decisions, or identifying patterns. We also touched on how AI learns, introducing concepts like machine learning, where systems improve over time by analysing large amounts of data rather than being explicitly programmed for every task.

 

It’s important to emphasize that AI is not some distant, futuristic technology; it’s already deeply embedded in our everyday lives. From the voice assistants on our phones to the recommendations we receive on Netflix or YouTube, AI plays a behind-the-scenes role in shaping our digital experiences. Even things like spam filters in email, navigation apps, and personalized ads are all powered by AI. Understanding this helps us realize that AI is not just a topic of study; it’s a practical and present force that impacts how we live, work, and interact with technology every day.

Agenda

Today’s focus: real-world AI applications in healthcare, finance, and entertainment.

 

Preview the hands-on activity: trying Ada and MyFitnessPal.

For your amusement

 

Mr Trump, who is not a Catholic and does not attend church regularly, posted an AI-generated image of himself dressed as the Pope on his Truth Social platform on Friday local time, less than a week after attending the funeral of Pope Francis, who died at 88 last month.

The White House then reposted the image on its official X account.

 

 

AI in Healthcare

Key Applications

Symptom Checkers & AI Diagnosis

AI-powered symptom checkers, such as Ada Health and WebMD’s AI assistant, are becoming increasingly popular as first-line tools for patients seeking quick medical insights. These platforms use natural language processing (NLP) and machine learning to analyse user-reported symptoms, medical history, and risk factors before generating a list of possible conditions. For instance, a patient entering symptoms like "fever, cough, and fatigue" might receive suggestions ranging from the common cold to more serious conditions like pneumonia or COVID-19. The AI compares inputs against vast medical databases, clinical guidelines, and anonymized case histories to rank potential diagnoses by likelihood.
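To make the "rank by likelihood" step more concrete, here is a deliberately tiny Python sketch. It is not how Ada or WebMD actually work; the conditions, symptom lists, and scoring rule are invented for illustration, and the score is simply the fraction of each condition's listed symptoms that the user reported.

```python
# Toy illustration of symptom-based ranking (invented data; not medical advice).
KNOWLEDGE_BASE = {
    "common cold": {"cough", "runny nose", "sore throat", "fatigue"},
    "influenza":   {"fever", "cough", "fatigue", "muscle aches"},
    "pneumonia":   {"fever", "cough", "chest pain", "shortness of breath"},
    "covid-19":    {"fever", "cough", "fatigue", "loss of smell"},
}

def rank_conditions(reported_symptoms):
    """Score each condition by the fraction of its listed symptoms the user reported."""
    reported = {s.lower() for s in reported_symptoms}
    scores = {
        condition: len(reported & symptoms) / len(symptoms)
        for condition, symptoms in KNOWLEDGE_BASE.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

for condition, score in rank_conditions(["fever", "cough", "fatigue"]):
    print(f"{condition}: {score:.0%} symptom overlap")
```

Real systems replace this keyword matching with natural language processing and probabilistic models trained on clinical data, but the underlying ranking idea is the same.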

 

While these tools offer greater accessibility—especially for individuals in remote areas or those unable to immediately consult a doctor—they come with key limitations. AI symptom checkers cannot perform physical examinations, interpret imaging, or account for highly nuanced patient histories. Misdiagnosis risks are present, particularly for rare or complex conditions. Therefore, these tools should serve as supplements rather than replacements for professional medical advice. Doctors remain essential for confirming diagnoses, ordering tests, and developing treatment plans.

Wearable Devices & Preventive Care

Wearable health technology, such as the Apple Watch (with ECG and fall detection) and Fitbit (tracking activity, sleep, and heart rate), leverages AI to transform raw biometric data into actionable health insights. These devices continuously collect metrics like heart rate variability, blood oxygen levels, and step counts, feeding them into machine learning models that detect anomalies over time. For example, irregular heart rhythms may trigger an alert for possible atrial fibrillation, while sudden changes in movement patterns could indicate a fall, prompting emergency notifications.

 

AI enhances preventive care by identifying trends that might otherwise go unnoticed. For instance, gradual increases in resting heart rate could signal dehydration, stress, or an underlying cardiac issue. By analysing long-term data, AI can encourage users to adopt healthier habits or seek medical attention before minor issues escalate. However, while wearables provide valuable early warnings, they are not diagnostic tools—abnormal readings should always be verified by healthcare professionals to avoid false alarms or overlooked conditions.
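As a rough illustration of how a gradual rise in resting heart rate could be flagged, here is a minimal Python sketch. The readings and the 5 bpm threshold are entirely made up; real wearables use far more sophisticated, personalised models and many more signals.

```python
# Toy trend check: flag a gradual rise in resting heart rate (invented data and threshold).
from statistics import mean

def weekly_averages(daily_bpm, days_per_week=7):
    """Group daily resting heart rate readings into weekly averages."""
    return [mean(daily_bpm[i:i + days_per_week])
            for i in range(0, len(daily_bpm), days_per_week)]

def check_trend(daily_bpm, rise_threshold_bpm=5):
    """Suggest follow-up if the latest weekly average is well above the first week's."""
    weeks = weekly_averages(daily_bpm)
    if len(weeks) >= 2 and weeks[-1] - weeks[0] >= rise_threshold_bpm:
        return "Resting heart rate is trending upward - consider mentioning it to your GP."
    return "No notable trend detected."

# Four weeks of invented readings drifting from about 62 to 69 bpm.
readings = [62, 61, 63, 62, 62, 63, 62,
            64, 63, 65, 64, 65, 64, 65,
            66, 67, 66, 68, 67, 66, 68,
            68, 69, 70, 69, 68, 70, 69]
print(check_trend(readings))
```

Commercial devices personalise these baselines to each wearer rather than using a single fixed threshold.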

Ethical Considerations

The rise of AI in healthcare brings critical ethical challenges, particularly regarding data privacy and algorithmic bias. Who owns health data? Wearables and symptom checkers collect highly sensitive information, yet users often have little control over how companies store, share, or monetize this data. Breaches could expose personal health details, while third-party data sales raise concerns about consent and transparency. Stronger regulations, like GDPR and HIPAA[1], are necessary to protect patient rights in the digital health era.


Another major issue is bias in AI diagnostics. Many AI models are trained on datasets that underrepresent women, ethnic minorities, and older adults, leading to less accurate diagnoses for these groups. For example, skin cancer detection algorithms have historically performed worse on darker skin tones due to insufficient training data. Similarly, cardiovascular AI tools may overlook symptoms more common in women, delaying critical care. Addressing these disparities requires diverse datasets, rigorous bias testing, and ongoing model audits to ensure equitable healthcare outcomes for all populations.
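One very simplified way to picture what "bias testing" and "model audits" involve is to compare a model's accuracy across groups. The sketch below uses entirely invented results, purely to show the bookkeeping; real audits use large, representative test sets and many more metrics.

```python
# Toy bias audit: compare accuracy across demographic groups (invented results).
from collections import defaultdict

# Each record is (group, was_the_model_correct) - fabricated for illustration only.
results = [
    ("lighter skin", True), ("lighter skin", True), ("lighter skin", True), ("lighter skin", False),
    ("darker skin", True), ("darker skin", False), ("darker skin", False), ("darker skin", True),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, is_correct in results:
    totals[group] += 1
    correct[group] += is_correct

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy on {totals[group]} test cases")
```

A gap like the one this toy audit shows (75% versus 50%) is exactly the kind of disparity that diverse training data and regular audits aim to close.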

 

Question: "Would you trust an AI tool for initial health advice? Why or why not?"

 


 

AI in Finance

Fraud Detection

AI is playing a crucial role in combating financial fraud across Australia, where scams cost consumers and businesses billions annually. Major Australian banks, such as Commonwealth Bank (CBA) and ANZ, use AI-driven fraud detection systems to monitor transactions in real time. For example, if an Australian credit card is used for an unusually large purchase overseas or a rapid series of transactions occurs at odd hours, AI flags it for review. These systems rely on machine learning models trained on vast datasets of past fraudulent transactions, including common scams like "Hi Mum" SMS fraud and invoice redirection scams targeting businesses.
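The banks' real systems are machine-learning models trained on millions of labelled transactions, but a simple rule-based sketch conveys the flavour of what gets flagged. All thresholds and the example transaction below are invented for illustration.

```python
# Toy rule-based fraud flags (invented thresholds; real systems use trained ML models).
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_aud: float
    country: str          # ISO country code, e.g. "AU"
    hour: int             # 0-23, cardholder's local time
    typical_spend: float  # this customer's usual transaction size

def fraud_flags(txn: Transaction) -> list:
    flags = []
    if txn.amount_aud > 10 * txn.typical_spend:
        flags.append("amount far above this customer's usual spend")
    if txn.country != "AU" and txn.amount_aud > 1000:
        flags.append("large overseas purchase")
    if txn.hour < 5:
        flags.append("transaction in the early hours")
    return flags

suspect = Transaction(amount_aud=4800, country="ID", hour=3, typical_spend=120)
print(fraud_flags(suspect) or "looks normal")
```

A machine-learning version learns these kinds of rules, and far subtler ones, from the data itself rather than having them hand-written.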

 

A key strength of AI in fraud detection is its adaptive learning capability. As scammers evolve their tactics—such as the rise of deepfake voice scams impersonating bank staff—AI models continuously update to detect new patterns. The Australian Competition and Consumer Commission (ACCC) reports that AI has helped reduce losses from payment fraud, but false positives (legitimate transactions flagged as fraud) remain a challenge. For instance, an Aussie buying a last-minute flight to Bali might get their card blocked, highlighting the need for balance between security and convenience.

 

Australian fintechs are also innovating in this space. Airwallex[2], the Melbourne-born global payments platform, uses AI to detect cross-border payment fraud by analysing transaction patterns across its network. Their system can identify subtle anomalies like sudden changes in payment destinations or unusual invoice amounts that might indicate business email compromise scams.

 

However, challenges remain. The ACCC's Targeting Scams report shows that while AI helps detect traditional fraud, it struggles with sophisticated social engineering scams like investment scams promoted through social media ads - these still account for the highest financial losses.

Budgeting & Personal Finance

Australia's fintech sector has produced world-leading AI budgeting tools. Pocketbook, acquired by Zip Co, uses machine learning to categorize transactions with Australian-specific precision - it can distinguish between a Woolworths grocery run and BWS alcohol purchase. Their AI also provides hyper-local insights, like warning Sydney users about "double demerits" periods that might lead to unexpected fines.

 

Afterpay's AI-driven spending limits represent another Australian innovation. Their system analyses thousands of data points to determine real-time spending capacity, considering factors like:

Case Study: The Commonwealth Bank's Goal Tracker feature uses predictive AI to help users save for Australian-specific goals like:

 


Algorithmic Trading

Australia's trading landscape presents unique AI challenges and opportunities:

 

Local Success Story: Sydney-based Stake uses AI to provide personalized investment recommendations tailored to Australian investors, considering factors like:

 

The ASX's[4] AI surveillance systems monitor for suspicious trading patterns, particularly important in Australia's resource-heavy market where mining stocks can be volatile. These systems track:

 

Cautionary Tale: The 2020 ASX trade halt due to technical issues highlighted the risks of over-reliance on automated systems, prompting stricter AI governance frameworks.

 

Risks & Trust in AI Finance

Australian-specific concerns include:

Buy Now Pay Later (BNPL) Risks:

Superannuation Algorithms

Regulatory Response

 

 

Conclusion

The Australian AI Finance Landscape

Australia is emerging as a global leader in responsible financial AI, with:

Discussion Question: "Given Australia’s high BNPL adoption, should AI systems be more conservative in recommending these products to young consumers?"

Quick Poll: "Have you ever interacted with a banking chatbot? Was it helpful?"

 

AI in Entertainment: Transforming How We Create and Consume Media

Recommendation Systems

AI-powered recommendation engines have revolutionized media discovery, with Australian platforms leading innovative adaptations.
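At their simplest, these engines compare your viewing history with other people's and promote titles that similar viewers enjoyed. The tiny "viewers like you also watched" sketch below uses invented viewing data; commercial systems combine this collaborative-filtering idea with content analysis, watch-time signals, and much more.

```python
# Toy "viewers like you also watched" recommender (invented viewing data).
WATCH_HISTORY = {
    "you":   {"Bluey", "MasterChef Australia"},
    "user2": {"Bluey", "MasterChef Australia", "Muster Dogs"},
    "user3": {"MasterChef Australia", "Gardening Australia"},
    "user4": {"Bluey", "Muster Dogs"},
}

def recommend(target="you", top_n=3):
    """Score unseen shows by how many shows each other viewer shares with the target."""
    seen = WATCH_HISTORY[target]
    scores = {}
    for user, shows in WATCH_HISTORY.items():
        if user == target:
            continue
        overlap = len(seen & shows)      # similarity = number of shared shows
        for show in shows - seen:        # only suggest titles the target hasn't watched
            scores[show] = scores.get(show, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend())  # expect 'Muster Dogs' first, then 'Gardening Australia'
```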

 

Australian Case Studies

 

Technical Deep Dive

These systems use

 

Ethical Considerations

 

Photo & Video Editing

Australia has become a hotspot for ethical AI media innovation:

Local Innovations.

Deepfake Developments

Positive Use: SA Film Corporation’s AI de-aging tech helped recreate young David Gulpilil for archival projects.

Controversy: 2023’s "Fake Shane Warne" ads sparked debates about posthumous digital likeness rights.

Regulation: Australia’s proposed "Deepfake Disclosure Law" would require clear labelling of synthetic media.

 

AI-Generated Content

 

The Australian creative industry presents unique case studies.

Landmark Examples

1. Music
2. Literature
3. Visual Arts

 

Technical Workflow

Australian creators typically use

 

Regulatory Landscape

Conclusion: Australia's AI Entertainment Crossroads

Key discussion points for students:

Activity Idea [i]

Fun Demo: Show a before/after of an AI-enhanced photo (e.g., Remini or Lensa).

Hands-On Activity

Option 1: Try an AI health app (e.g., Ada or MyFitnessPal).

Option 2: Experiment with an AI photo tool (e.g., Remove.bg or Canva’s AI features).

Task: Spend 5-7 minutes exploring, then note observations.

Group Discussion

"Which AI application would you find most useful in your daily life?"

"What concerns do you have about relying on AI in these areas?"

Wrap-Up & Q&A

Here’s a concise recap of AI’s ubiquity, benefits, and ethical challenges.

 

1. AI’s Ubiquity

·         AI is now foundational across industries like healthcare, finance, criminal justice, and retail, with global business spending projected to reach $110 billion annually by 2024.

·         Small businesses leverage AI for real-time financial insights and lending, while large enterprises use it for strategic decision-making and automation.

·         Generative AI (e.g., ChatGPT) and foundation models have expanded AI’s reach into creative fields, law, and even art, raising questions about authorship and originality.

 

2. Benefits of AI

·         Efficiency: Accelerates R&D (e.g., drug development) and reduces costs (e.g., automating billing in healthcare).

·         Enhanced Decision-Making: AI analyses vast datasets for diagnostics, credit scoring, and hiring, though biases remain a risk.

·         Job Augmentation: Hybrid roles emerge where AI handles technical tasks (e.g., delivery routing), freeing humans for creative or empathetic work.

 

3. Ethical Questions

·         Bias & Discrimination: AI replicates societal biases in hiring (e.g., Amazon’s resume tool) and criminal justice (e.g., COMPAS algorithm).

·         Transparency: "Black box" systems lack explainability, critical in healthcare or autonomous vehicles.

·         Privacy & Surveillance: Facial recognition and data collection risk misuse (e.g., China’s surveillance).

·         Accountability: No universal regulations exist; the EU’s AI Act leads in risk-based oversight, while the U.S. lags.

·         Existential Risks: Debates persist about superintelligence, though some argue current harms (e.g., job displacement) demand more attention.

 

Key Takeaways

AI’s transformative potential is tempered by ethical pitfalls. While it drives innovation, proactive governance (e.g., bias audits, multidisciplinary teams) and literacy are vital to ensure equitable outcomes. The EU’s regulatory approach offers a model, but global collaboration is needed to align AI with human values.

 

Closing Thought: "AI isn’t just the future—it’s already here. How we use it is up to us."

 



[1] GDPR stands for the General Data Protection Regulation, a European Union law focusing on data privacy and security. HIPAA, on the other hand, is the Health Insurance Portability and Accountability Act, a U.S. law primarily concerned with protecting patient health information. While both deal with data protection, GDPR is broader and applies to a wider range of personal data, whereas HIPAA specifically targets health-related data. Australia's equivalent to HIPAA is the Privacy Act 1988, which regulates the handling of personal information, including health information, by both government agencies and private organizations. 

[2] Airwallex is a global payments and financial platform for businesses, simplifying cross-border transactions and offering a range of financial services. It provides solutions like virtual accounts, expense management, and international transfers, all through a unified platform. 

Here's a more detailed breakdown:

·         Cross-border payments: Airwallex helps businesses make and receive payments across different countries and currencies, offering better exchange rates and reduced fees compared to traditional banks.

·         Global accounts: Businesses can open virtual accounts in multiple currencies to manage their finances and make payments in local currencies, streamlining international operations.

·         Expense management: Airwallex offers features to track and manage business expenses, including corporate cards and expense reporting tools.

·         Financial services: Beyond payments, Airwallex provides services like payroll, risk management, and financial automation, helping businesses manage their finances more efficiently.

·         API and integrations: Airwallex offers APIs and integrations to connect with other business software, allowing businesses to automate their financial processes and streamline workflows.

 

[3] BNPL stands for Buy Now, Pay Later and it refers to a type of short-term instalment loan that allows consumers to purchase goods and services and pay for them in instalments over time. Essentially, it provides a way to split the cost of a purchase into smaller, easier-to-manage payments. 

[4] ASX stands for Australian Securities Exchange. It was created by the merger of the Australian Stock Exchange and the Sydney Futures Exchange in July 2006 and is the 11th largest stock market globally, measured by market capitalisation.

[5] APRA's CPG 234 mandates that APRA-regulated entities must actively maintain an information security capability commensurate with information security vulnerabilities and threats. This includes implementing information security controls to protect information assets, clearly defining information security roles and responsibilities, and having robust mechanisms for incident management and reporting to APRA. 

[6] APRA AMCOS is a music rights management organization that represents songwriters, composers, and publishers in Australia and New Zealand. They collect royalties for music creators when their works are played or used in various ways, like broadcasting, live performance, and online streaming. Essentially, APRA AMCOS helps music creators get paid for their work and provides a way for others to legally use their music. 



[i] 

Case Study Worksheet: The 2024 “AI Slim Dusty” Controversy 

 

Objective: Analyse the ethical, legal, and cultural implications of AI-generated content using the Australian "AI Slim Dusty" controversy as a case study. 

 

1. Background

 

- Context: In 2024, AI-generated songs mimicking the voice of Slim Dusty, an iconic Australian country musician (died 2003), circulated on platforms like SoundCloud without permission from his estate. 

- Key Issue: AI voice-cloning technology was used to create "new" songs, sparking debates about: 

  - Posthumous rights (who controls a deceased artist’s likeness?) 

  - Cultural preservation (Slim Dusty’s music is part of Australia’s national identity) 

  - Artist compensation (no royalties were paid to Dusty’s family).

 

2. Key Questions for Discussion

 

A. Ethical Considerations 

1. Should AI be allowed to replicate deceased artists’ voices without consent? Why or why not? 

2. How might this technology impact living artists’ livelihoods? 

 

B. Legal & Regulatory Gaps 

1. Australia’s copyright laws protect works until 70 years after death—but do they cover AI-generated derivatives?

2. Compare Australia’s approach to the EU’s AI Act (requires transparency for synthetic media).

 

C. Cultural Impact 

1. Slim Dusty’s song "Pub With No Beer" is considered cultural heritage. Does AI replication disrespect this legacy? 

2. How might Indigenous communities view similar AI use of their cultural music? (Link to broader debates about AI and Indigenous IP.)

 

3. Stakeholder Analysis

Stakeholder | Perspective | Key Quote/Evidence
Dusty’s Estate | Opposed; seeks legal action | "This undermines Slim’s legacy and exploits his fans"
AI Developers | Argue "fair use" for innovation | "AI art is transformative under copyright law"
Fans | Mixed reactions | Some welcome "new" music; others call it "ghoulish"
Australian Govt | Reviewing reforms | Proposed "AI Disclosure Laws" (label synthetic content)

 


 

4. Activity: Debate Simulation

Scenario: A tech company wants to release an AI-generated "duet" between Slim Dusty and a living artist. 

- Group 1: Advocate for the project (cite innovation, fan demand). 

- Group 2: Oppose it (cite ethics, legal risks). 

- Group 3: Draft a compromise (e.g., estate approval + royalty sharing). 

 

5. Further Research 

- Compare to global cases: 

  - AI Drake/The Weeknd (Universal Music sued to remove AI tracks in 2023).

  - Japan’s "virtual idols" (e.g., Hatsune Miku—explicitly synthetic).

- Review APRA AMCOS’s 2024 guidelines on AI music royalties.