Talks & forums · participant

6th Annual High School Neuroscience Virtual Forum.

Stanford Medicine · April 18, 2026 · hosted by the Departments of Neurosurgery, Neurology & Neurological Sciences, and Psychiatry & Behavioral Sciences. Selective international high-school cohort. I attended as an invited participant.

Hosts and speakers.

The forum brings together international high-school students with Stanford faculty across three neuroscience departments. The 2026 program was anchored by two keynotes and four guest neuroscientists.

Faculty hosts and introductions

  • Dr. Michael Lim, Professor and Chair, Department of Neurosurgery; Stanford Medicine Endowed Chair of the Department of Neurosurgery
  • Dr. Antonio Omuro, Chair, Department of Neurology and Neurological Sciences; Joseph D. Grant Professor and Professor of Neurology
  • Dr. Victor G. Carrión, The John A. Turner, M.D. Professor for Child and Adolescent Psychiatry; Vice-Chair, Department of Psychiatry & Behavioral Sciences; Director, Stanford Early Life Stress and Resilience Program

Keynote speakers

  • Dr. Frank Willett, Assistant Professor of Neurosurgery; Co-Director of the Neural Prosthetics Translational Laboratory (NPTL)
  • Dr. Tina Duong, Department of Neurology and Neurological Sciences

Guest neuroscientists

  • Dr. Fred Lam
  • Dr. Maria Del Mar Sanchez

Brain-Computer Interfaces to Restore Communication.

The driving question: when paralysis or disease severs the path from intent to speech, can we read the intent directly from motor cortex and route it to text or synthesized voice? Willett anchored the talk in a single metric, words per minute, and walked through the field's progress against it.

The words-per-minute landscape

  • BCI Point-and-Click (Pandarinath 2017): ~10 wpm, the prior baseline
  • Handwriting BCI (Willett et al., Nature 2021): ~20 wpm
  • QWERTY typing (able-bodied baseline): ~50 wpm
  • Speech BCI (Willett, Kunz, Fan et al., Nature 2023): ~62 wpm at large vocabulary
  • Formal speech: ~120 wpm. Conversational speech: ~150 wpm

The architecture

Microelectrode arrays in motor cortex record threshold-crossing features. A recurrent neural network (RNN) maps the time-series neural data to character probabilities (handwriting BCI) or phoneme probabilities (speech BCI). An online thresholding step gives a raw output stream; an offline Viterbi search combined with an English language model produces the final text. The same backbone, retargeted from characters to phonemes, is what made the speed jump from handwriting to speech possible.
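The two-stage decode (online thresholding for a raw stream, offline Viterbi search with a language model for the final text) can be illustrated with a toy sketch. This is not Willett's system — the real pipeline runs an RNN over threshold-crossing features and searches over an English n-gram model — but a two-symbol example shows the key behavior: the language-model prior can overturn a locally most-probable but contextually unlikely output. All symbols and probabilities below are made up for illustration.

```python
import numpy as np

def greedy_decode(emissions):
    """Online stage: take the most probable symbol at each timestep."""
    return [int(np.argmax(p)) for p in emissions]

def viterbi_decode(emissions, log_trans, log_prior):
    """Offline stage: best path under emission probs plus a bigram LM prior.

    Maximizes sum_t [ log P(sym_t | neural_t) + log P(sym_t | sym_{t-1}) ].
    """
    T, S = emissions.shape
    log_em = np.log(emissions + 1e-12)
    score = log_prior + log_em[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans    # cand[prev, cur]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_em[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two-symbol toy: the middle timestep weakly favors symbol 1,
# but the "language model" strongly discourages flipping symbols.
emissions = np.array([[0.90, 0.10],
                      [0.45, 0.55],
                      [0.90, 0.10]])
log_trans = np.log(np.array([[0.9, 0.1],
                             [0.1, 0.9]]))   # rows: previous symbol
log_prior = np.log(np.array([0.5, 0.5]))

raw = greedy_decode(emissions)                           # [0, 1, 0]
final = viterbi_decode(emissions, log_trans, log_prior)  # [0, 0, 0]
```

The greedy pass flips to symbol 1 in the middle; the Viterbi pass pays the transition penalty twice and correctly smooths it out — the same mechanism by which a language model turns noisy phoneme probabilities into clean text.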

Performance

  • Handwriting: up to ~90 characters per minute on copy-typing tasks; character error rate near zero with the language-model post-processor
  • Speech: ~10% word error rate on a 50-word vocabulary, ~25% on a 125,000-word vocabulary, meaningful at conversational scale
  • Recent collaborator results (Card N. et al., NEJM 2024): a doubled electrode count and an improved real-time language model push performance closer to natural speech

Cortical regions involved: middle precentral gyrus (area 55b), ventral premotor cortex (6v), primary motor cortex (4).

Digital biomarkers and clinical endpoints in neurology.

Duong's framing question: what are we actually measuring when we measure a disease? The ICF model (International Classification of Functioning, Disability and Health) splits disease impact across three layers: body function (structure & physiology), activity (what a person can do), and participation (life roles and daily living). Trials that fail often fail because they measured the wrong layer.

Digital biomarkers in real environments

Consumer-grade and research-grade wearables now capture body function and activity continuously, in the patient's real environment, instead of at a single clinic visit:

  • ActiGraph, step count
  • Sysnav, gait velocity
  • OpenCap, joint angles from video
  • Wearable HR / SpO2, heart rate and oxygen saturation

Context of use (COU)

The same biomarker plays different roles in different trials. The FDA/NIH BEST framework categorizes a biomarker's role as monitoring, predictive, prognostic, primary efficacy, or secondary efficacy. Patient-reported and functional clinical endpoints sit alongside these (e.g., SV95C wearable, PGI-C anchored MCID, ACTIVLIM-NMD PRO, DHT Fatigue Index).

Neuroethics of "Silent Notifications" — Gaurika & Vedika Gautam (Delhi Private School, UAE, Grade 11).

Subtitle: How Silent Alerts Shape Teen Focus, Stress, and Control Over Attention. Premise: The smallest digital interruption may have the biggest cognitive cost.

The five-stage neural cascade of a notification

  1. Notification cue, an external digital stimulus arrives
  2. Salience detection, anterior insula and dorsal anterior cingulate cortex (dACC) flag behavioral relevance and shift attention
  3. Reward motivation, ventral striatum / nucleus accumbens encode the anticipated social reward
  4. Affective amplification, amygdala increases emotional salience, urgency, and anxiety
  5. Top-down control (lagging), prefrontal-parietal executive systems attempt regulation, but in adolescents these systems remain developmentally immature, so regulation arrives late

Empirical findings

  • Mean ~237 (SD 89) notifications per day, about 39 per waking hour
  • Focus drop after a notification: −1.7 points on a 0–10 scale (p < 0.001, d = 0.42); largest drop during study sessions (−2.3 pts)
  • Stress increase after a notification: +1.9 points (p < 0.001, d = 0.48); amplified in academic contexts (+2.6 pts)
  • Phone- or self-focused thoughts 2.1× more likely in the 5-minute window after a social alert
  • Attention task: silent-condition accuracy 91% (RT 428 ms) vs. interrupted 79% (RT 492 ms); −12% accuracy, +15% reaction time when interrupted
  • "Silent" notifications still triggered checking 41% of the time
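As a back-of-envelope check, the reported deltas and effect sizes hang together: a −1.7-point focus drop with d = 0.42 implies a pooled SD of roughly 4 points on the 0–10 scale. A minimal sketch of that check, using the independent-groups pooled-SD formula for Cohen's d; the condition means, SDs, and group sizes below are illustrative assumptions, not the study's raw data.

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                       / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled

# Illustrative: focus 7.1 (no notification) vs 5.4 (after a notification),
# SD 4.0 in both conditions, 200 observations each.
d = cohens_d(7.1, 5.4, 4.0, 4.0, 200, 200)   # 1.7 / 4.0 = 0.425 ≈ 0.42
```

(The study design may well have been within-subject, which uses a different denominator; the point is only that a 1.7-point drop is a "small-to-medium" effect precisely because day-to-day focus ratings are noisy.)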

Headline survey numbers

  • 69% of teens feel addicted to checking their phones
  • 42% report anxiety from unread notifications
  • 35% report shorter attention spans
  • 23% report slower reaction times
  • ~60–70% of Gen Z youth show moderate-to-severe nomophobia (anticipatory stress about being without the phone)

Their conclusion: notifications as pervasive neuromodulation of the adolescent brain. Constant micro-notifications place adolescents in an unusually dense environment of digital interruptions that subtly but persistently erode focus, raise stress, and shift attention toward self-focused and phone-focused thought.

Spatial mapping of white matter injury — Amanda Chen, Keira Wong, et al. (Stanford Medicine, with PVA + CIRM).

The question: after cervical spinal cord injury, why do similarly sized lesions produce very different functional outcomes? Hypothesis: location matters more than size. The team registered lesion masks from many animals to a standard cervical spinal cord atlas and built a probability map of damage for each region of interest.
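The probability-mapping idea is simple once the masks share a common grid: the voxelwise mean across animals is the probability that each atlas location is damaged, and an atlas labeling then separates "how much" (total lesion area) from "where" (damage fraction per region). A toy numpy sketch with synthetic masks and made-up region labels; nothing here is the team's data.

```python
import numpy as np

# Synthetic example: 3 animals' binary lesion masks, already registered
# to a shared 2x2 atlas grid (1 = damaged voxel). Real masks would first
# be registered to the standard cervical spinal cord atlas.
masks = np.array([
    [[1, 0],
     [1, 0]],
    [[1, 0],
     [0, 0]],
    [[1, 1],
     [1, 0]],
])

# Voxelwise damage probability across animals.
prob_map = masks.mean(axis=0)

# Hypothetical atlas labels for the same grid:
# 0 = lateral funiculus, 1 = ventral gray matter (illustration only).
atlas = np.array([[0, 1],
                  [0, 1]])

def region_damage_fraction(mask, atlas, label):
    """Fraction of one atlas region covered by an animal's lesion mask."""
    return float(mask[atlas == label].mean())

# "How much" vs "where": total lesion area per animal, and the
# damage fraction within the (hypothetical) lateral funiculus region.
areas = masks.reshape(len(masks), -1).sum(axis=1)
lf_damage = [region_damage_fraction(m, atlas, 0) for m in masks]
```

Regressing behavioral scores against `lf_damage` rather than `areas` is the crux of the "location, not size" analysis: two animals with identical areas can have very different region-specific damage.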

Findings

  • Lesion area showed weak correlation with behavioral outcome scores
  • Probability mapping revealed two consistent damage patterns: the lateral funiculus (carrying major motor and sensory tracts) and the ventral gray matter (motoneuron region)
  • Functional outcome tracks where the damage is, not how much damage there is

The clinical implication: imaging-based prognosis after SCI should weight tract involvement, not just lesion size. The same logic carries to aphasia: outcomes after stroke depend on which language pathway was damaged, not just stroke volume.

Essencia — Rümeysa Cennet Ocak.

A subscription-free EEG software platform: a researcher-facing GUI for real-time visualization and configuration, plus a programmatic API for custom workflows, automation, and advanced analytics. It targets 128-electrode EEG with a reported 98% accuracy via an upscaling pipeline. The team explicitly disclosed a current limitation: the prototype hardware does not yet integrate a PCB, so the signal noise floor is higher than in clinical-grade devices. That honest framing is worth modeling.

Things that kept coming up.

  • Everyone showed their sources. Faculty and students alike cited specific papers: Pandarinath 2017, Willett 2021 and 2023, Card 2024, the FDA/NIH BEST framework, the actual effect sizes for the notification study. Nobody was hand-waving.
  • Knowing exactly where mattered. Willett targets specific motor cortex regions (55b, 6v, 4). Chen identifies specific spinal tracts. The Gautams trace a specific circuit (insula → ventral striatum → amygdala → prefrontal). “The brain” on its own is too vague a unit to think with.
  • Adolescence is a sensitive window. What the brain is exposed to during these plasticity-heavy years shapes the adult brain, not just current behavior. This is why teen mental health isn’t a phase to wait out.
  • The same shape kept appearing across very different talks. Noisy fast signal → feature extraction → sequence model → language/context model → structured output. Willett’s BCI, Essencia’s EEG pipeline, and modern speech recognition all share that pipeline.
  • Getting it from the lab to the patient is its own work. Every speaker had a concrete plan for translation: clinical endpoints, design guidelines, hardware platforms. Discovery isn’t enough on its own.

Where my projects fit.

  • A.R.A.I.A. is doing something complementary to Willett’s BCI work. BCIs restore raw output by reading from motor cortex when language pathways are still working. Aphasia rehab, what A.R.A.I.A. is built on, goes the other way: the language pathways are damaged, but the right hemisphere can pick up the slack through music. Same recovery problem, different angle.
  • NeuroCalm fits Duong’s framework: it measures the body (EEG signals) but the point of it is to extend what a person can actually do (stay regulated longer before sensory overload tips over). If it ever scaled past single-channel consumer EEG, something like Essencia is the kind of hardware that would make sense next.
  • NeuroSense is already built around the same idea I kept hearing: every claim should trace back to its source. The Gautams’ five-stage breakdown of what a notification does to the teen brain would make a strong early NeuroSense episode: clear mechanism, real numbers, immediately relevant to anyone my age.
  • The Wellbeing page on this site is getting the Gautam findings added: 39 notifications per waking hour, focus drops 1.7 points after each one, stress jumps 1.9, and thoughts shift toward yourself or your phone twice as often in the five minutes afterward.

Primary sources to follow up.

  1. Pandarinath et al., "High performance communication by people with paralysis using an intracortical brain-computer interface," eLife 2017
  2. Willett FR et al., "High-performance brain-to-text communication via handwriting," Nature 2021
  3. Willett FR, Kunz E, Fan C et al., "A high-performance speech neuroprosthesis," Nature 2023
  4. Card N. et al., "An accurate and rapidly calibrating speech neuroprosthesis," NEJM 2024
  5. FDA-NIH Biomarker Working Group: BEST (Biomarkers, EndpointS, and other Tools) Resource
  6. WHO ICF (International Classification of Functioning) overview