2024
- (19 February 2024)
New preprint - Enhancing Patient Outcome Prediction through Deep Learning with Sequential Diagnosis Codes from Structured EHR - A Systematic Review - on ResearchGate
Dr Tuankasfee Hama led a systematic review to identify and summarise existing deep learning studies that predict patient outcomes using sequences of diagnosis codes as a key part of their predictors. The study also investigates the challenges of generalisability and explainability in these predictive models.
Briefly, the main conclusion is that applying deep learning to sequences of diagnosis codes has demonstrated remarkable promise in predicting patient outcomes. Using multiple types of features and integrating time intervals were found to improve predictive performance. Addressing challenges related to generalisation and explainability will be instrumental in unlocking the full potential of deep learning for enhancing healthcare outcomes and patient care.
Read it here.
- (15 February 2024)
Tiny paper - Hallucination Benchmark in Medical Visual Question Answering - accepted by ICLR 2024
Many congratulations to Jinge Wu and Yunsoo Kim, PhD students at KnowLab, on the acceptance of a paper to ICLR 2024 - one of the top AI/ML conferences! Not only was it accepted, but it also received a very positive review from the area chair: “This particular work is worth presenting at a notable level, as it introduces a dataset that people in the field should be aware of – it is a substantial contribution that can spur further advancements in the VLM field.”
Read it at arXiv:2401.05827.
- (11 January 2024)
New preprint - Hallucination Benchmark in Medical Visual Question Answering - on arXiv
The recent success of large language and vision models on visual question answering (VQA), particularly their applications in medicine (Med-VQA), has shown great potential for realizing effective visual assistants in healthcare. However, these models have not been extensively tested for the hallucination phenomenon in clinical settings. Here, we created a hallucination benchmark of medical images paired with question-answer sets and conducted a comprehensive evaluation of state-of-the-art models. The study provides an in-depth analysis of current models' limitations and reveals the effectiveness of various prompting strategies.
Read it at arXiv:2401.05827.
2023
- (20 December 2023)
New preprint - Benchmarking and Analyzing In-context Learning, Fine-tuning and Supervised Learning for Biomedical Knowledge Curation - a focused study on chemical entities of biological interest - on arXiv
We had a task: implement an automated approach to knowledge curation for a biomedical ontology - ChEBI (Chemical Entities of Biological Interest). We asked ourselves the question above and decided to compare and analyze three NLP paradigms for curation tasks - in-context learning, fine-tuning, and supervised learning. We broke the general question down into four specific questions. After comprehensive experiments and analysis with 3 GPT models (including GPT-4), the domain-specific PubMedBERT, and 6 embedding models for supervised learning across 15 experimental setups on >1.8m triples, we believe we obtained good evidence to answer them properly.
Read it at arXiv:2312.12989 or this short LinkedIn post.
- (20 December 2023)
New preprint - Exploring Multimodal Large Language Models for Radiology Report Error-checking - on arXiv
Given all the exciting developments in generative AI and foundation models, Jinge Wu and Yunsoo Kim set out to ask, in the context of radiology: can these models be good assistants in spotting errors in radiology reports by cross-checking radiographs? To answer this question, they conducted a study on using multimodal large language models (LLMs) to assist radiologists in checking their reports for errors. 1,000 reports with “synthetic errors” were created using two real-world chest X-ray datasets. Two types of tasks were introduced - binary (is there an error?) vs multiclass (what type of error?) classification.
Read it at arXiv:2312.13103 or this short LinkedIn post.
Acknowledgement: Logo designed by Yuchen Wu