EEMT 5220 Presentation
Generative AI / LLM
Analytics and Problem
Solving in Hong Kong
Healthcare System
GROUP 17
CHAN, CHUNG KAI
HO, UN PANG
TONG, LAI HIM
WONG, MAN WAI
YEUNG PAK TO
Content
● Challenges in Healthcare System & “Why Now”
● What are AIGC/LLMs?
● Recent Advances
● Case Study
● Risks & Governance
● Implementation Playbook & KPIs
● Key Takeaways
Challenges in Healthcare System & “Why Now”
Why Now?
- Maturity of LLMs
- Regulatory Support
What are AIGC/LLMs?
Definitions:
❖ Large Language Models as reasoning engines over text/tabular/structured data; they can summarize, extract, and generate structured outputs.
❖ Retrieval-Augmented Generation (RAG) to ground answers in hospital guidelines, formularies, SOPs, and literature.
❖ Multimodal and agentic capabilities: read PDFs/images, call tools (EHR query, scheduling), output JSON for downstream apps.
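As a toy illustration of the RAG idea above, the sketch below retrieves the most relevant guideline snippet for a question and builds a grounded prompt. Everything here is hypothetical: the snippets, `retrieve`, and `build_prompt` are invented for illustration, and a real deployment would use embedding search over a vector store rather than word overlap.

```python
# Minimal RAG sketch: ground an LLM prompt in hospital guideline snippets.
# Snippets and scoring are illustrative, not a real guideline corpus.

GUIDELINES = {
    "discharge": "Discharge summaries must list diagnoses, medications, and follow-up plans.",
    "allergy": "Verify drug allergies against the formulary before prescribing.",
    "consent": "Obtain written consent before any invasive procedure.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank guideline snippets by word overlap with the query (toy scorer)."""
    q_words = set(query.lower().split())
    scored = sorted(
        GUIDELINES.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt whose answer must stay inside the retrieved context."""
    context = "\n".join(f"- {s}" for s in retrieve(query))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

prompt = build_prompt("What must a discharge summary list?")
```

The constraint "using ONLY the context below" is the grounding step that keeps answers tied to hospital SOPs rather than the model's own priors.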
Core capabilities for HK healthcare:
❏ Summarize: Discharge/admission notes, MDT discussions.
❏ Extract/Structure: Medications, allergies, problems, timelines into JSON/EHR fields.
❏ Generate: First-draft discharge summaries, patient letters, SOP updates with citations.
❏ Translate/Speech: English/Chinese clinical language; Cantonese dictation to structured notes.
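The Extract/Structure capability implies a validation step before model output reaches EHR fields. A minimal sketch, assuming a hypothetical three-field schema and a stubbed model reply (in production the reply would come from the LLM call):

```python
import json

# Sketch of the Extract/Structure step: validate an LLM's JSON output
# before it touches EHR fields. Field names are illustrative, not a
# real EHR schema.

REQUIRED_FIELDS = {"medications", "allergies", "problems"}

def parse_extraction(raw: str) -> dict:
    """Parse and sanity-check the model's JSON before downstream use."""
    data = json.loads(raw)  # raises ValueError on malformed output
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model output missing fields: {sorted(missing)}")
    return data

# Stubbed model reply for illustration only.
model_reply = '''
{"medications": [{"name": "metformin", "dose": "500 mg bid"}],
 "allergies": ["penicillin"],
 "problems": ["type 2 diabetes"]}
'''

record = parse_extraction(model_reply)
```

Rejecting incomplete or malformed JSON at this boundary is what makes "output JSON for downstream apps" safe to wire into an EHR.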
Recent Advances in Medical Field
RAG — Med-PaLM 2 (Mar ’23)
● Expert-level: scored 86.5% on MedQA (doctors average 76.2%)
● ✔ Grounded by literature ✔ Minimized hallucination
Multimodal LLM — Med-Gemini 1.0 (Apr ’24)
● ✔ Processes full patient history
AI Agent — Agent Hospital (May ’24)
● 42 AI doctors & nurses, 21 departments, 300 diseases covered
HKUST mSTAR & MedDr (Nov ’24)
● 93% accurate diagnostics; reduces analysis time by 30%
Case Study: Agentic LLM Architecture for EMR Summarization
Problem
● Clinical documentation is time-consuming and inconsistent.
● EMR fragmentation and high cognitive load lead to inefficient handoffs, discharge notes, and continuity records.
Solution Architecture
● Data Ingestion: HL7 / FHIR → AWS pipeline.
● Data Lake: Normalize codes (ICD-10, CPT, RxNorm).
● LLM Summarization: Amazon Bedrock + RAG.
● Clinical Review: SMART on FHIR UI + RLHF.
● Integration: FHIR API, HL7 broker, real-time agents.
Impact Metrics
● Docs time ↓ 50–60%.
● Patient time ↑ 30%.
● Latency < 2 sec per note.
● Accuracy ↑ via clinician feedback.
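A minimal sketch of the ingestion-to-prompt path: pulling Condition and MedicationRequest display text out of a hand-made FHIR Bundle and assembling the summarization prompt. The Bedrock call itself is omitted, the resource shapes are simplified, and the bundle is invented data for illustration:

```python
# Sketch: flatten a (simplified) FHIR Bundle into facts, then build the
# prompt a summarization model would receive. Not real patient data.

bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Condition",
                      "code": {"text": "Type 2 diabetes mellitus"}}},
        {"resource": {"resourceType": "MedicationRequest",
                      "medicationCodeableConcept": {"text": "Metformin 500 mg"}}},
    ],
}

def flatten(bundle: dict) -> dict:
    """Group bundle resources by type, keeping display text only."""
    out = {"conditions": [], "medications": []}
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res["resourceType"] == "Condition":
            out["conditions"].append(res["code"]["text"])
        elif res["resourceType"] == "MedicationRequest":
            out["medications"].append(res["medicationCodeableConcept"]["text"])
    return out

def summarization_prompt(bundle: dict) -> str:
    """Constrain the draft to facts actually present in the record."""
    facts = flatten(bundle)
    return (
        "Draft a discharge summary from these facts only:\n"
        f"Conditions: {', '.join(facts['conditions'])}\n"
        f"Medications: {', '.join(facts['medications'])}"
    )

prompt = summarization_prompt(bundle)
```

Flattening before prompting is one way to keep the model's draft anchored to coded record content, which the clinician-review step then verifies.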
Risks & Governance
Risks of LLM Deployment
• Hallucination
Potentially causing misdiagnoses or unsafe treatments
• Patient Privacy
Sensitive data may leak if not anonymized
• Accountability
Who should take responsibility for AI-assisted decisions?
Governance
• Human-in-the-Loop Systems
Clinicians as final decision-makers
• Personal Data (Privacy) Ordinance (PDPO) for data protection
Compliance with PDPO is mandatory
• Generative AI Technical & Application Guidelines
Provide governance principles
Implementation Playbook & KPIs
AI and LLM in Queueing System:
• Automatically calculate the ideal number of staff needed
• Prioritize patients based on how urgent their condition is
• Make the whole hospital or clinic system faster and more efficient
KPI: Waiting time
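The "ideal number of staff" calculation is classically an Erlang C (M/M/c queue) staffing rule; the sketch below finds the smallest staff count that keeps average waiting time under a target. The arrival and service rates are illustrative numbers — in the playbook above, the AI/LLM layer would forecast the arrival rate, which is assumed given here:

```python
import math

# Erlang C (M/M/c) staffing sketch with illustrative rates.

def erlang_c(c: int, a: float) -> float:
    """Probability an arriving patient must wait, with c staff and offered load a (Erlangs)."""
    if a >= c:
        return 1.0  # unstable queue: everyone waits
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def staff_needed(arrival_rate: float, service_rate: float, max_wait: float) -> int:
    """Smallest staff count whose average queueing wait stays under max_wait (same time unit as rates)."""
    a = arrival_rate / service_rate  # offered load in Erlangs
    c = max(1, math.ceil(a))
    while True:
        if c > a:
            avg_wait = erlang_c(c, a) / (c * service_rate - arrival_rate)
            if avg_wait <= max_wait:
                return c
        c += 1

# e.g. 20 patients/hour, each consult averaging 15 min (rate 4/hour),
# target average wait under 6 minutes (0.1 hour):
staff = staff_needed(arrival_rate=20, service_rate=4, max_wait=0.1)
```

This is the deterministic core; the LLM's role is upstream (forecasting demand from appointment and triage text), not in the queueing arithmetic itself.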
AI and LLM in Patient Portfolio:
• Data Aggregation & Summarization
• Personalized Risk Prediction
• Multilingual Communication
KPIs: Patient satisfaction; Doctor time saved per visit
Key Takeaways
Pros and Cons
• AI and LLMs will reduce queueing time and improve the efficiency of the whole hospital cycle.
• Multilingual capability is crucial to handle Cantonese, English, and Mandarin in hospital settings.
• Compliance and data privacy are among the main problems faced in adopting this technology.
Recommendation
• Start with a pilot scheme in a small scope, or one part of the cycle, first.
• Adjust the model and monitor results and outcomes in real time.
Reference
https://www.researchgate.net/publication/390364521_AI-Powered_Workflow_Optimization_in_Smart_Hospitals_Reducing_Physician_Burnout
https://www.digitalpolicy.gov.hk/en/our_work/data_governance/policies_standards/ethical_ai_framework/doc/HK_Generative_AI_Technical_and_Application_Guideline_en.pdf