- Developing real-time EMS triage pipelines using multimodal AI for trauma prediction.
- Collaborating with surgeons on AI-assisted decision support systems.
I am a Research Fellow at Harvard Medical School and Beth Israel Deaconess Medical Center (BIDMC), and Co-founder of Omnis Labs, an AI-driven DeFi liquidity positioning protocol.
My research focuses on multimodal AI systems for emergency medicine, spanning clinical biosignal processing, speech and language understanding, and quantum machine learning. Previously, I was an AI Trainer at OpenAI, CTO of Neuro Industry, Inc., and a Digital IC Design Engineer at MediaTek.
I hold an M.S. in Computer Science and Bioinformatics from National Taiwan University and dual bachelor's degrees (B.Eng. & B.S.) in Physics and Electrical Engineering from National Chiao Tung University (now National Yang Ming Chiao Tung University).
I am seeking PhD opportunities in clinical AI and/or quantum machine learning, starting Fall 2026.
You can contact me at: cchen34 [at] bidmc.harvard.edu | m50816m50816 [at] gmail.com
I aim to build clinically deployable AI systems that bridge quantum computing, multimodal learning, and emergency medicine. Modern clinical environments generate rich, heterogeneous signals (EEG, audio, imaging, text), yet real-time decision support remains fragmented. My work addresses this gap through two complementary directions: (1) designing hybrid quantum-classical architectures that capture complex temporal dependencies in biosignals, and (2) engineering end-to-end multimodal pipelines that integrate speech, language, and physiological data for trauma triage and psychiatric treatment prediction. A core principle of my research is clinical translation: my EEG-based models are already serving real patients at the Precision Depression Intervention Center in Taipei.
Developing AI systems for clinical neuroscience and emergency medicine.
My EEG-based depression treatment prediction models have been clinically deployed at the Precision Depression Intervention Center (PreDIC) at Taipei Veterans General Hospital, serving real outpatients.
At Harvard/BIDMC, I am building real-time EMS triage pipelines using multimodal AI for trauma prediction, collaborating with surgeons on AI-assisted decision support systems.
I also develop multimodal contrastive learning methods for EEG-image alignment, such as MUSE.
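As a rough illustration of the underlying idea (not MUSE's actual implementation), a symmetric InfoNCE objective over paired EEG and image embeddings looks like the sketch below; the encoder outputs, dimensions, and function name are placeholders:

```python
import torch
import torch.nn.functional as F

def eeg_image_contrastive_loss(eeg_emb, img_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE over a batch of paired EEG/image embeddings.

    eeg_emb, img_emb: (batch, dim) outputs of modality-specific encoders
    (hypothetical; the actual MUSE encoders are not shown here).
    """
    eeg = F.normalize(eeg_emb, dim=-1)                   # unit-norm embeddings
    img = F.normalize(img_emb, dim=-1)
    logits = eeg @ img.t() / temperature                 # (batch, batch) cosine similarities
    targets = torch.arange(len(eeg), device=eeg.device)  # matched pairs on the diagonal
    # Contrast in both directions (EEG->image, image->EEG) and average.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```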
Building speech and natural language processing systems for clinical settings, including EMS audio transcription, automated clinical documentation, and emergency page generation for trauma prediction workflows.
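For the transcription step, a minimal sketch using an off-the-shelf Whisper checkpoint via Hugging Face's `transformers` is below; the model choice and audio path are assumptions, and a production EMS pipeline would add streaming, diarization, and PHI handling on top:

```python
from transformers import pipeline

# Off-the-shelf ASR as a stand-in for the EMS transcription front end.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",  # assumption: any Whisper-family checkpoint
    chunk_length_s=30,             # chunked decoding for long radio calls
)

result = asr("ems_radio_call.wav", return_timestamps=True)  # hypothetical file
print(result["text"])              # transcript feeds documentation / page generation
for chunk in result["chunks"]:     # segment-level timestamps for alignment
    print(chunk["timestamp"], chunk["text"])
```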
Designing hybrid quantum-classical architectures for time-series and sequential data, including the Quantum Adaptive Self-Attention (QASA) Transformer, QuantumRWKV, and QEEGNet for quantum EEG classification.
Applications span EEG signal processing, financial time-series forecasting, and image generation.
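The generic hybrid pattern these models build on is a variational quantum circuit embedded as a differentiable layer inside a classical network. A minimal PennyLane/PyTorch sketch follows; the layer sizes and circuit template are illustrative, not the QASA, QuantumRWKV, or QEEGNet designs themselves:

```python
import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode classical features as rotation angles, entangle, and read out
    # Pauli-Z expectations as the layer's output features.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits)}          # two entangling layers
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)

# Hypothetical hybrid block: classical projection -> quantum layer -> head.
model = torch.nn.Sequential(
    torch.nn.Linear(16, n_qubits),   # compress classical features to qubit count
    qlayer,                          # variational quantum sub-layer
    torch.nn.Linear(n_qubits, 2),    # e.g., binary EEG classification head
)

print(model(torch.randn(8, 16)).shape)  # torch.Size([8, 2])
```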
Applying deep learning to visual recognition tasks, including surgical instrument detection for intraoperative safety, and fine-grained food classification with foundation models such as Res-VMamba.
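For the detection side, a generic sketch with a stock torchvision detector is below; the COCO-pretrained weights and confidence threshold stand in for a fine-tuned surgical-instrument model, which is not specified here:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stock detector as a placeholder for a fine-tuned instrument detector.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)       # placeholder intraoperative video frame
with torch.no_grad():
    (pred,) = model([frame])          # one prediction dict per input image

keep = pred["scores"] > 0.8           # illustrative confidence threshold
print(pred["boxes"][keep], pred["labels"][keep])
```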
Developing interpretable and geometric deep learning methods for time-series analysis, including FreqLens for frequency-domain attribution in forecasting and SPD Token Transformers for EEG classification with Riemannian geometry. Applications extend to DeFi yield prediction and urban telecommunication forecasting, as well as Transformer-assisted learning in open quantum systems (Lindblad dynamics).
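The Riemannian ingredient here is the log-Euclidean map: each EEG window's channel covariance is a symmetric positive-definite (SPD) matrix, and its matrix logarithm lives in a flat tangent space where standard Transformer operations are geometrically sound. A generic NumPy sketch (not the SPD Token Transformer itself) is:

```python
import numpy as np

def spd_log_tokens(eeg, eps=1e-6):
    """Turn windowed EEG into log-Euclidean tokens via SPD covariance matrices.

    eeg: (n_windows, n_channels, n_samples). Shapes and regularization are
    illustrative; the published tokenizer may differ.
    """
    tokens = []
    for win in eeg:
        cov = np.cov(win) + eps * np.eye(win.shape[0])         # regularized SPD covariance
        eigval, eigvec = np.linalg.eigh(cov)                    # symmetric eigendecomposition
        log_cov = eigvec @ np.diag(np.log(eigval)) @ eigvec.T   # matrix logarithm
        iu = np.triu_indices_from(log_cov)                      # vectorize upper triangle
        tokens.append(log_cov[iu])
    return np.stack(tokens)                                     # (n_windows, d) token sequence

print(spd_log_tokens(np.random.randn(12, 8, 256)).shape)        # (12, 36)
```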
ICML 2026, KDD 2026, MICCAI 2026, ICASSP 2026