The Logical Implication Steering Method for Conditional Interventions on Transformer Generation

0 citations · #2278 of 3340 papers in ICML 2025 · 1 top author

Abstract

The field of mechanistic interpretability in pre-trained transformer models has produced substantial evidence for the "linear representation hypothesis": the idea that high-level concepts are encoded as vectors in a model's activation space. Studies further show that a model's generation behavior can be steered toward a given concept by adding that concept's vector to the corresponding activations. We show how to leverage these properties to build a form of logical implication into models, enabling transparent and interpretable adjustments that induce a chosen generation behavior in response to the presence of any given concept. Our method, Logical Implication Model Steering (LIMS), unlocks new hand-engineered reasoning capabilities by integrating neuro-symbolic logic into pre-trained transformer models.
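The abstract describes an implication of the form "if concept p is detected, then steer toward behavior q": a linear probe checks the activations for p, and the steering vector for q is added only when the probe fires. Below is a minimal sketch of that conditional intervention as a PyTorch forward hook. The model, layer choice, vectors, threshold, and steering strength are all illustrative assumptions, not the paper's exact construction; in practice the concept and behavior directions would be extracted from contrastive activation data rather than sampled at random.

```python
# Sketch of concept-conditional activation steering in the spirit of LIMS.
# All specifics (gpt2, layer 6, random vectors, threshold, alpha) are
# illustrative assumptions, not the paper's exact formulation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM; gpt2 chosen only for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

d_model = model.config.hidden_size

# Placeholder directions; real ones would come from contrast pairs of
# activations with vs. without the concept / behavior.
concept_vec = torch.randn(d_model)
concept_vec /= concept_vec.norm()
steer_vec = torch.randn(d_model)
steer_vec /= steer_vec.norm()

threshold = 2.0   # probe threshold for detecting the concept (assumed)
alpha = 8.0       # steering strength (assumed)

def lims_hook(module, inputs, output):
    """If the concept direction fires at any position, add the steering vector."""
    hidden = output[0] if isinstance(output, tuple) else output  # (batch, seq, d)
    score = hidden @ concept_vec                  # linear probe per position
    fired = (score > threshold).any(dim=-1)       # concept present in sequence?
    hidden = hidden + alpha * fired[:, None, None] * steer_vec
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

# Attach the conditional intervention to a mid-depth transformer block.
handle = model.transformer.h[6].register_forward_hook(lims_hook)

prompt = "The user asked a question about chemistry."
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()
```

Because the condition is a single dot product against an interpretable direction, the intervention stays transparent: logging the probe score shows exactly when and why steering was applied.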

Citation History

0 citations recorded between Jan 27, 2026 and Feb 13, 2026.