A research practice doesn't read as a list — it reads as a few questions you keep returning to. These are mine.
01 / 05
▣ Multimodal Pipeline
Dissertation · 2023 – 2025
Ranking the features that reveal student collaboration
Across speech, gesture, gaze, and position, which signals matter? A multimodal study with engineering undergraduates in CSTEPS.
Python · Random Forest · SHAP · OpenPose
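The ranking step of a pipeline like this can be sketched in a few lines. The sketch below uses toy synthetic data with hypothetical feature names (not the study's actual features), and sklearn's permutation importance as a stand-in for the SHAP values used in the dissertation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 300
# Hypothetical multimodal features -- names are illustrative only
features = ["speech_rate", "gesture_count", "gaze_overlap", "proximity"]
X = rng.normal(size=(n, 4))
# In this toy data, "gaze_overlap" is constructed to carry most of the signal
y = (X[:, 2] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

# Fit a forest, then rank features by how much shuffling each one hurts accuracy
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
ranking = sorted(zip(features, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

Swapping in `shap.TreeExplainer` over the fitted forest would give per-prediction attributions rather than a single global ranking, which is what makes SHAP useful for asking *which* signals mattered for *which* groups.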
02 / 05
CEASAR · 2020 – 2024
Joint attention in an immersive astronomy classroom
HoloLens 2 and tablets, used together. We coded gesture and joint attention with ENA & ONA — and found that where you look together predicts what you understand together.
HoloLens 2 · ENA / ONA · Embodied Cognition
03 / 05
HoloOrbits · 2024 – 2025
Replicating an LLM tutor agent on a 3.8B-parameter model
Fine-tuned Microsoft Phi-3 on synthetic GPT-4o data — and recovered 85% of the larger model's tutoring behavior, while flagging systematic hallucinations.
Phi-3 · RAG · Synthetic Data
04 / 05
Method · 2023 – 2025
360-degree cameras vs. traditional cameras
A comparative study (Journal of Educational Data Mining, 2025): a single GoPro Max captures a small group's facial affect 4× better, and pose 10× better, than a wall of fixed cameras.
GoPro Max · OpenFace · OpenPose
05 / 05
Radicality Ministries · 2025 – present
A 188-page devotional, designed end-to-end
Curriculum, typography, interior layouts and cover art — and a full Squarespace rebuild that lifted impressions 10% in month one.