Research
Making what you learn usable in new situations: research on accumulating compound interest.
Leveraging the information in mistakes to correct gaps in understanding.
Designing personalized learning paths to maximize the effect of compound interest.
Building a common framework for reusing learning support knowledge across domains.
What This Lab Aims For
“We want what we learn to work on the next problem too.”
Have you ever applied logical thinking acquired in mathematics to programming, or used structured thinking developed in programming for essay writing? Insights gained from solving one problem carry over like interest to the next new problem. We call this “compound interest in problem-solving.”
Our research goal is to elucidate the conditions under which this compound interest arises and to realize them as working systems.
To do this, we use the concept of “intermediate representations.” Human ways of thinking and learning share common patterns that cut across domains. By describing these patterns in a form that computers can process, we can compare “correct knowledge” and “learner understanding” within the same framework, enabling the system to pinpoint where and how gaps exist.
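As a concrete illustration, the gap-pinpointing idea can be sketched as a diff over a shared representation. All names and the representation itself (concepts mapped to sets of relations) are illustrative assumptions for this sketch, not the lab's actual formalism:

```python
# Sketch: expert knowledge and a learner's understanding expressed in the
# same intermediate representation (concept -> set of believed relations).
# Diffing the two pinpoints where the gap lies and of what kind it is.

def diff_understanding(expert: dict[str, set[str]],
                       learner: dict[str, set[str]]) -> dict[str, dict[str, set[str]]]:
    """Return, per concept, relations the learner is missing or holds in error."""
    gaps = {}
    for concept, relations in expert.items():
        held = learner.get(concept, set())
        missing = relations - held     # correct knowledge the learner lacks
        spurious = held - relations    # beliefs the expert model does not hold
        if missing or spurious:
            gaps[concept] = {"missing": missing, "spurious": spurious}
    return gaps

expert = {"for-loop": {"repeats body", "has termination condition", "updates counter"}}
learner = {"for-loop": {"repeats body", "runs forever"}}

print(diff_understanding(expert, learner))
```

Because both sides live in one framework, the system can report not just "wrong" but which relation is missing or spurious, which is what makes targeted feedback possible.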
Four Research Projects
CHUNK: Making What You Learn Usable in Other Situations
You learn “for loops” in programming, yet cannot apply them to new problems; you memorize mathematical formulas, yet cannot solve application problems. “I learned it, but I can’t use it” is a barrier many learners experience, and it is a problem of knowledge transfer. How, then, do we accumulate compound interest?
To overcome this barrier, we organize knowledge into three layers: “what it’s used for (function),” “how it works (behavior),” and “how to write it (structure).” By developing mechanisms that enable understanding of “why it works” rather than superficial memorization, we aim to make what is learned in one context naturally usable in another. A representative effort is Compogram, a learning environment in which learners gradually build up component knowledge while visualizing the “behavior” of programs.
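The three-layer organization above might be sketched as a simple data structure. The field names and the example entry are hypothetical, chosen only to show how the layers connect:

```python
# Sketch: one piece of programming knowledge split into the three layers
# described above: function (what it's used for), behavior (how it works),
# and structure (how to write it).

from dataclasses import dataclass

@dataclass
class KnowledgeComponent:
    name: str
    function: str   # what it's used for
    behavior: str   # how it works
    structure: str  # how to write it

for_loop = KnowledgeComponent(
    name="for loop",
    function="repeat an action a known number of times",
    behavior="initialize a counter, test it, run the body, update, repeat",
    structure="for i in range(n): ...",
)

def explain(k: KnowledgeComponent) -> str:
    """Connecting the layers is what 'understanding why it works' means here."""
    return f"{k.name}: use it to {k.function}; it works by {k.behavior}; written as `{k.structure}`"

print(explain(for_loop))
```

Transfer, on this view, happens at the function and behavior layers: a new situation may demand a different structure, but the same function and behavior still apply.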
CLOVER: Learning Effectively from Mistakes
Conventional systems judge answers only as “correct/incorrect,” but mistakes contain valuable information about where understanding was ambiguous. Gaps in the compound interest you have accumulated need to be found and corrected.
In the CLOVER project, we visualize wrong answers in the form of “what would happen if this were correct,” enabling learners to realize on their own, “Wait, this is different from what I expected.” Rather than directly telling them “this is wrong,” we design experiences where learners notice for themselves. By using intermediate representations, we can pinpoint not just “correct or incorrect” but “which part of their understanding has a gap,” and provide awareness experiences tailored to that gap. A representative effort is EBS (Error-based Simulation), an environment that promotes awareness of errors by showing simulations based on learners’ answers in subjects like mechanics.
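In the spirit of EBS, here is a toy mechanics example (the setup and numbers are our own, not taken from the project): simulating the motion implied by a learner's answer makes the discrepancy with reality visible, so the learner can notice it themselves.

```python
# Sketch: Error-based Simulation on a free-fall problem. Suppose a learner's
# answer omits gravity. Simulating their answer alongside the correct one
# shows an object that just hovers, prompting "this is different from what
# I expected."

def simulate(accel: float, t: float, steps: int = 100) -> float:
    """Position after time t under constant acceleration (Euler integration)."""
    dt = t / steps
    x, v = 0.0, 0.0
    for _ in range(steps):
        v += accel * dt
        x += v * dt
    return x

correct = simulate(accel=9.8, t=2.0)   # gravity included
learner = simulate(accel=0.0, t=2.0)   # learner's answer: no net force

print(f"correct fall: {correct:.1f} m, learner's prediction: {learner:.1f} m")
```

The point is not the physics but the feedback design: the system animates the learner's own answer rather than announcing "wrong," and the visible absurdity does the teaching.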
OCEAN: Designing Optimal Learning for Each Individual
“I don’t know where to start studying.” “I’m not motivated today.” In today’s information-rich world, learners easily get lost. To maximize the effect of compound interest, each person needs a learning path suited to them.
In the OCEAN project, we develop adaptive learning environments that comprehensively grasp learners’ “understanding state,” “learning style,” “motivation,” and “goals,” and propose learning approaches suited to each individual. Representative efforts include CORAL, a motivation support framework focusing on the emotion of “it’s too much trouble,” the ARK model that organizes “what to learn,” “what materials to use,” and “what to do,” and WHALE, an educational agent that uses ARK to make comprehensive recommendations.
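The comprehensive-recommendation idea can be sketched as a scoring function over candidate activities. All fields, weights, and data here are illustrative assumptions, not the actual ARK or WHALE models:

```python
# Sketch: pick the next learning activity from a learner model covering
# understanding state ("what to learn"), preferred materials ("what
# materials to use"), and current motivation ("what to do").

def recommend(learner: dict, activities: list[dict]) -> dict:
    """Score each activity by fit to the learner's state and pick the best."""
    def score(a: dict) -> float:
        need = 1.0 - learner["mastery"].get(a["topic"], 0.0)               # what to learn
        style = 1.0 if a["medium"] == learner["preferred_medium"] else 0.5  # what materials
        effort_ok = 1.0 if a["effort"] <= learner["motivation"] else 0.2    # what to do
        return need * style * effort_ok
    return max(activities, key=score)

learner = {"mastery": {"loops": 0.9, "recursion": 0.2},
           "preferred_medium": "video", "motivation": 2}
activities = [
    {"topic": "loops", "medium": "video", "effort": 1},
    {"topic": "recursion", "medium": "video", "effort": 2},
    {"topic": "recursion", "medium": "text", "effort": 3},
]
print(recommend(learner, activities))
```

Even this toy version shows why the factors must be combined: the low-mastery topic wins only when its material format and effort demand also fit the learner's current state.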
CCS: Making “Design Knowledge” for Learning Support Shareable
“Programming education” and “writing instruction” actually deal with similar thinking skills, yet their systems are built separately: knowledge isn’t shared, and the wheel is reinvented, even though intermediate representations as a common language should reveal the shared patterns. The aim of the CCS project is to formalize the mechanisms of compound interest themselves and make them shareable across domains.
We develop frameworks for describing “thinking skills” handled by different learning support systems in a common vocabulary, aiming to make support methods that were effective in one system reusable in others.
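A common vocabulary of this kind might look like the following. The vocabulary entries are invented for illustration and are not the CCS framework itself:

```python
# Sketch: thinking skills described in a common vocabulary, with each entry
# naming the skill's domain-specific form. A support method tagged with a
# skill in one domain can then be located from another domain.

COMMON_VOCAB = {
    "decomposition": {"programming": "breaking a program into functions",
                      "writing": "outlining an essay into sections"},
    "abstraction": {"programming": "naming a repeated pattern as a function",
                    "writing": "stating a thesis that covers the examples"},
}

def counterpart(skill: str, source_domain: str, target_domain: str) -> tuple[str, str]:
    """Given a skill observed in one domain, name its form in another."""
    entry = COMMON_VOCAB[skill]
    return entry[source_domain], entry[target_domain]

print(counterpart("decomposition", "programming", "writing"))
```

The shared key, not the domain-specific wording, is what lets a support method proven in one system be retrieved and reused in another.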
Research Approach
Our research proceeds by (1) understanding human thinking and learning, (2) formalizing that understanding as theory, with intermediate representations as the medium, and (3) building and verifying the theory as “working systems.” The key point is that system development here is not mere “app development”: the system working correctly is itself a verification of the theory’s correctness.
Under this “understanding by building” approach, we conduct research across multiple fields including artificial intelligence, knowledge engineering, learning analytics, cognitive science, and educational psychology.
For more on the lab’s methodology and values, see About the Lab.
Acknowledgements
These research projects emerged from discussions with many colleagues, students, and collaborators. I am particularly deeply grateful to the following mentors who guided me: Takahito Tomoto, Tsukasa Hirashima, Tomoya Horiguchi, Hiroaki Ogata, Izumi Horikoshi, Rwitajit Majumdar, H. Ulrich Hoppe, Riichiro Mizoguchi, and Takako Akakura.