Large Language Models Are Few-Shot Health Learners

Large language models (LLMs) can capture rich representations of concepts
that are useful for real-world tasks. However, language alone is limited. While
existing LLMs excel at text-based inferences, health applications require that
models be grounded in numerical data (e.g., vital signs, laboratory values in
clinical domains; steps, movement in the wellness domain) that is not easily or
readily expressed as text in existing training corpora. We demonstrate that with
only few-shot tuning, a large language model is capable of grounding various
physiological and behavioral time-series data and making meaningful inferences
on numerous health tasks in both clinical and wellness contexts. Using data
from wearable and medical sensor recordings, we evaluate these capabilities on
the tasks of cardiac signal analysis, physical activity recognition, metabolic
calculation (e.g., calories burned), and estimation of stress reports and
mental health screeners.
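
To make the idea concrete, here is a minimal Python sketch of what few-shot prompting over numeric sensor data could look like. This is not the paper's code; the function names, the example inter-beat-interval values, and the stress labels are all illustrative assumptions, and the LLM call is left as a stub.

```python
# Minimal sketch (not the paper's implementation) of serializing wearable
# time-series readings into a few-shot text prompt for an LLM.
# All example values, labels, and the `complete` stub are illustrative.

from typing import List, Tuple

# Hypothetical few-shot exemplars: (inter-beat intervals in ms, self-reported stress 1-5)
FEW_SHOT_EXAMPLES: List[Tuple[List[int], int]] = [
    ([812, 790, 845, 830, 805], 2),
    ([655, 640, 662, 648, 671], 4),
]

def serialize(ibis_ms: List[int]) -> str:
    """Render a numeric time series as plain text the model can read."""
    return " ".join(str(v) for v in ibis_ms)

def build_prompt(query_ibis_ms: List[int]) -> str:
    """Assemble a few-shot prompt: labeled exemplars first, then the unlabeled query."""
    lines = ["Estimate the stress level (1-5) from inter-beat intervals in ms."]
    for ibis, stress in FEW_SHOT_EXAMPLES:
        lines.append(f"Intervals: {serialize(ibis)}\nStress: {stress}")
    lines.append(f"Intervals: {serialize(query_ibis_ms)}\nStress:")
    return "\n\n".join(lines)

def complete(prompt: str) -> str:
    """Placeholder for an LLM call; swap in whichever model or API you use."""
    raise NotImplementedError("plug in your LLM of choice here")

if __name__ == "__main__":
    # Prints the assembled prompt so you can inspect the serialization.
    print(build_prompt([720, 705, 733, 718, 726]))
```

The same serialization pattern would apply to the other tasks mentioned in the abstract (e.g., step counts for activity recognition), with only the instruction line and exemplars changing.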

Read in full here:

This thread was posted by one of our members via one of our news source trackers.