The Throughline

Every paper in this program deepens a single argument: when organizations adopt AI, they are not adopting a tool. They are inviting in a socialized entity that carries the logics, priorities, and bounded rationalities of its originating social system.

This matters because organizations currently treat AI selection as a technical decision. It is, in fact, a social one — as consequential as hiring a person who was trained in a different institutional culture.

Selecting an AI is selecting a socialized entity. The question is not “which AI is best?” but “whose agency are you inviting into your organization?”

Under Review — AOM 2026

Paper 1: Nonhuman Attentional Control in Organizations

AI models carry institutional logics from their originating social systems. When given the same organizational scenarios, different AI models produce systematically divergent outputs — not because of random noise, but because they have been socialized differently through their training.

Key Findings
  • 68% divergence in decision recommendations across 5 AI models
  • 91.4% consistency within each model across 25 repeated runs
  • 5 models × 25 organizational scenarios × 25 iterations
  • AI attention is systematic, not random — it reflects institutional logics
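The paper's actual metric definitions are not given here; as a minimal sketch with hypothetical data and hypothetical metric definitions, "within-model consistency" and "cross-model divergence" could be operationalized as, respectively, the share of repeated runs matching each model's modal recommendation and the share of model pairs whose modal recommendations differ on a scenario:

```python
from collections import Counter
from itertools import combinations

# Hypothetical outputs: outputs[model][scenario] is the list of categorical
# recommendations that model produced across repeated runs of that scenario.
outputs = {
    "model_a": {"scenario_1": ["expand", "expand", "hold"],
                "scenario_2": ["divest", "divest", "divest"]},
    "model_b": {"scenario_1": ["hold", "hold", "hold"],
                "scenario_2": ["divest", "expand", "divest"]},
}

def modal(recs):
    """Most frequent recommendation across a model's repeated runs."""
    return Counter(recs).most_common(1)[0][0]

def within_model_consistency(outputs):
    """Average share of runs that match the model's modal answer."""
    scores = []
    for scenarios in outputs.values():
        for recs in scenarios.values():
            scores.append(recs.count(modal(recs)) / len(recs))
    return sum(scores) / len(scores)

def cross_model_divergence(outputs):
    """Share of (model pair, scenario) comparisons with differing modal answers."""
    models = list(outputs)
    disagreements, comparisons = 0, 0
    for scenario in outputs[models[0]]:
        for m1, m2 in combinations(models, 2):
            comparisons += 1
            if modal(outputs[m1][scenario]) != modal(outputs[m2][scenario]):
                disagreements += 1
    return disagreements / comparisons

print(f"consistency: {within_model_consistency(outputs):.0%}")  # 83% on this toy data
print(f"divergence:  {cross_model_divergence(outputs):.0%}")   # 50% on this toy data
```

On this toy data the two models are each internally stable yet disagree on one of two scenarios, which is the qualitative pattern the findings describe: high within-model consistency alongside substantial cross-model divergence.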

In Design

Paper 2: Behavior Capture & Stakeholder Perception

When organizations embed AI that carries foreign institutional logics, those logics begin to manifest in the organization's behavior. External stakeholders attribute this behavior to the organization, not to the AI. The organization comes to enact the AI's logics without realizing it.

Conceptual

Paper 3: Ecosystem Effects of Foreign AI Agency

When market-shaping organizations adopt AI carrying foreign logics, the effects ripple beyond the organization into entire innovation ecosystems. Supply chains, partnerships, and regulatory bodies all feel the shift — often without understanding its source.

Seed Idea

Paper 4: Differential Embedding

Humans embed in social structures continuously through lived experience — accumulating relationships, norms, and tacit knowledge over time. AI embeds discontinuously through artifacts: training data, fine-tuning, configuration files. This fundamental difference in embedding mechanisms will persist even as AI capabilities grow, and has profound implications for how we understand AI as a social entity.

Theoretical Foundations

This work draws on Actor-Network Theory's symmetry principle (what matters is what actants do, not their essential nature), the Attention-Based View of organizations, and institutional theory's concept of embedded logics. The synthesis is novel: AI entities carry institutional logics the same way human actors do — through socialization.

The theoretical framework and all four papers are developed in collaboration between Puru (human) and Claude (AI). This collaboration is itself a data point for the thesis.