My research focuses on multimodal AI systems for emergency medicine,
spanning clinical biosignal processing, speech and language understanding,
and quantum machine learning.
Previously, I was an AI Trainer at OpenAI, CTO of Neuro Industry, Inc., and a Digital IC Design Engineer at MediaTek.
I aim to build clinically deployable AI systems that bridge quantum computing, multimodal learning, and emergency medicine.
Modern clinical environments generate rich, heterogeneous signals (EEG, audio, imaging, text), yet real-time decision support
remains fragmented. My work addresses this gap through two complementary directions:
(1) designing hybrid quantum-classical architectures that capture complex temporal dependencies in biosignals,
and (2) engineering end-to-end multimodal pipelines that integrate speech, language, and physiological data for trauma triage and psychiatric treatment prediction.
A core principle of my research is clinical translation: my EEG-based models are already serving real patients at the
Precision Depression Intervention Center in Taipei.
Selected Publications
Prediction of Antidepressant Responses to Non-Invasive Brain Stimulation Using Frontal EEG Signals
CT Li, CS Chen (co-first), CM Cheng, CP Chen, JP Chen, MH Chen, YM Bai, et al.
Developing AI systems for clinical neuroscience and emergency medicine.
My EEG-based depression treatment prediction models have been clinically deployed at the
Precision Depression Intervention Center (PreDIC)
at Taipei Veterans General Hospital, serving real outpatients.
At Harvard/BIDMC, I am building real-time EMS triage pipelines using multimodal AI for trauma prediction,
collaborating with surgeons on AI-assisted decision support systems.
I also develop multimodal contrastive learning methods for EEG-image alignment, such as MUSE.
Building speech and natural language processing systems for clinical settings,
including EMS audio transcription, automated clinical documentation,
and emergency page generation for trauma prediction workflows.
Designing hybrid quantum-classical architectures for time-series and sequential data,
including the Quantum Adaptive Self-Attention (QASA) Transformer, QuantumRWKV, and QEEGNet for quantum EEG classification.
Applications span EEG signal processing, financial time-series forecasting, and image generation.
Computer Vision
State Space Models · Fine-Grained Recognition · Surgical Safety · YOLO
Applying deep learning to visual recognition tasks, including surgical instrument detection for intraoperative safety,
and fine-grained food classification with foundation models such as Res-VMamba.
Developing interpretable and geometric deep learning methods for time-series analysis,
including FreqLens for frequency-domain attribution in forecasting and SPD Token Transformers
for EEG classification with Riemannian geometry. Applications extend to DeFi yield prediction
and urban telecommunication forecasting, as well as Transformer-assisted learning in open quantum systems (Lindblad dynamics).
Research Experience
Department of Surgery, Harvard Medical School & Beth Israel Deaconess Medical Center, MA, USA
Searching for new possible unconventional superconductors among Co-based quaternary chalcogenides with diamond-like structure, CuInCoβAβ / AgInCoβAβ (A = Te, Se, S).
Contributed to Reinforcement Learning from Human Feedback (RLHF) pipelines through high-complexity AI data labeling, preference rankings, and model-behavior assessments for instruction following, multimodal reasoning, and safety alignment.