[Chapter overview graphic: the shift from AI as central intelligence to AI as a communication tool, traced across the individual, institutional, societal, and geopolitical levels; what ABC provides (attribution, control, verification); the future as a race to connect rather than to centralize.]

Chapter V

Conclusion: AI as a Communication Tool

This thesis opened by observing that several apparently distinct problems in contemporary AI systems share a common structural feature: the absence of attribution-based control (ABC). When AI systems add information without attribution, users cannot verify claims. When systems copy training data without preserving contributor control, data owners rationally withhold contributions. When systems branch into multiple instances without accountability mechanisms, governance becomes intractable. The introduction traced how these technical absences cascade upward into consequences at individual, institutional, societal, and geopolitical levels, then asked whether attribution-based control is technically feasible with existing machinery. Chapters 2, 3, and 4 surveyed techniques in deep learning, cryptography, and distributed systems that could provide the three components of ABC. This conclusion returns to the cascading consequences to examine what changes if ABC proves achievable.

At the Individual Level

At the individual level, the introduction identified hallucination and disinformation as characteristic problems. Users encounter claims they cannot verify, generated by systems whose reasoning processes remain opaque. Deep voting, as surveyed in Chapter 2, does not eliminate hallucination. Models will continue to produce outputs unsupported by their training data. What changes is the possibility of source inspection. When each token of output can be traced to the training examples that most influenced its generation, users gain the architectural prerequisite for verification. They can examine whether claimed facts derive from sources they consider reliable, whether confident assertions rest on thin evidential bases, whether patterns reflect genuine regularities or artifacts of biased sampling. This does not automatically produce truth, but it provides the foundation upon which verification practices could be built. The individual-level problems identified in the introduction require this foundation.
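
To make the possibility of source inspection concrete, the sketch below is a hypothetical illustration of the interface such attribution could expose, not the deep voting mechanism itself. It assumes that per-token influence scores linking outputs to training examples are already available, as the Chapter 2 techniques aim to provide; the example identifiers, the threshold, and the helper names are invented for illustration.

    # Hypothetical illustration: exposing per-token attribution for verification.
    # Assumes influence scores linking each generated token to training examples
    # already exist (how they are computed is Chapter 2's subject, not this sketch).
    from dataclasses import dataclass

    @dataclass
    class Attribution:
        token: str
        top_sources: list        # [(example_id, influence_score), ...] strongest first
        total_influence: float   # how much known training data supports this token
        thin_evidence: bool      # True when support falls below a chosen threshold

    def attribute_output(tokens, influence, top_k=3, thin_threshold=0.2):
        """For each output token, report its most influential training examples.

        tokens:    list of generated tokens
        influence: dict mapping token index -> {example_id: influence_score}
        """
        report = []
        for i, tok in enumerate(tokens):
            scores = influence.get(i, {})
            ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
            total = sum(scores.values())
            report.append(Attribution(tok, ranked[:top_k], total,
                                       total < thin_threshold))
        return report

    # A user, or a verification tool acting on their behalf, can then inspect
    # whether a confident claim rests on sources they consider reliable.
    for entry in attribute_output(
            ["The", "melting", "point", "is", "1064", "degrees"],
            {4: {"chem_handbook_p212": 0.61, "forum_post_9981": 0.05}}):
        status = "thin evidence" if entry.thin_evidence else f"sources: {entry.top_sources}"
        print(f"{entry.token!r}: {status}")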

At the Institutional Level

At the institutional level, the introduction observed that data owners have rational incentives to withhold contributions from AI systems. Contributing data to centralized training pipelines means surrendering control over how that data will be used, who will benefit from insights derived from it, and whether contributors will receive any attribution or compensation. The introduction estimated that this dynamic has locked away six or more orders of magnitude of potentially valuable training data.

Structured transparency, as surveyed in Chapter 3, addresses the technical barrier that makes this withholding rational. When cryptographic mechanisms can enforce access policies, when contributors can specify conditions under which their data may be used, when audit trails can verify compliance with those conditions, the calculus changes. Data owners might contribute to systems where they retain meaningful control even as their contributions enable collective intelligence. The institutional barriers identified in the introduction were rational responses to architectural limitations. Removing those limitations does not guarantee participation, but it removes the technical obstacle that made withholding the only rational choice.
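
The sketch below illustrates only the control flow those mechanisms would support: a contributor-specified usage policy gating access, and a hash-chained audit trail that makes after-the-fact verification of compliance possible. The class, policy fields, and data are invented for illustration, and real enforcement would rest on the cryptographic machinery surveyed in Chapter 3 rather than on an in-process permission check.

    # Illustrative sketch only: the control flow of policy-gated access with a
    # tamper-evident audit trail. Real enforcement would rely on the cryptographic
    # machinery surveyed in Chapter 3 (encryption, secure computation, etc.);
    # the policy check and hash-chained log here only show the shape of the idea.
    import hashlib, json, time

    class GovernedDataset:
        def __init__(self, records, allowed_purposes, expires_at):
            self._records = records
            self.allowed_purposes = set(allowed_purposes)  # conditions set by the contributor
            self.expires_at = expires_at                   # contributor-chosen expiry (unix time)
            self.audit_log = []                            # hash-chained access records

        def _append_audit(self, entry):
            prev_hash = self.audit_log[-1]["hash"] if self.audit_log else ""
            body = json.dumps(entry, sort_keys=True) + prev_hash
            entry["hash"] = hashlib.sha256(body.encode()).hexdigest()
            self.audit_log.append(entry)

        def access(self, requester, purpose):
            entry = {"requester": requester, "purpose": purpose, "time": time.time()}
            if purpose not in self.allowed_purposes or time.time() > self.expires_at:
                entry["granted"] = False
                self._append_audit(entry)
                raise PermissionError(f"Policy forbids use for '{purpose}'")
            entry["granted"] = True
            self._append_audit(entry)
            return self._records

    # The contributor shares data for model evaluation but not for ad targeting,
    # and can later verify from the audit trail how the data was actually used.
    ds = GovernedDataset(["record_1", "record_2"],
                         allowed_purposes={"model_evaluation"},
                         expires_at=time.time() + 3600)
    ds.access("lab_a", "model_evaluation")       # permitted and logged
    try:
        ds.access("broker_b", "ad_targeting")    # denied and logged
    except PermissionError as e:
        print(e)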

At the Societal Level

At the societal level, the introduction raised questions of governance and alignment that currently dominate discourse in the field. How do we ensure that AI systems behave in accordance with human values? How do we maintain meaningful human control over systems that may eventually exceed human capabilities in many domains? Current approaches typically involve centralized actors tuning systems on samples of human feedback, hoping that the tuning generalizes appropriately.

The recursive structures surveyed in Chapter 4 suggest an alternative architecture. If AI systems require ongoing contributions from distributed sources, and if contributors retain the ability to withdraw those contributions, then alignment becomes structurally enforced rather than centrally imposed. A system that violates contributor values sufficiently to trigger widespread withdrawal loses capability. This is not a complete solution to alignment. Contributors might coordinate to pursue objectives harmful to non-contributors. The mechanisms by which contributors express values through contribution decisions remain underspecified. The assumption that contributors can evaluate system behavior at the speed required for effective feedback may prove optimistic. But the architectural possibility differs from current approaches: collective control without requiring trust in a small number of companies or governments to tune systems appropriately on humanity's behalf.
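
A toy model can make the structural mechanism explicit. The sketch below uses invented numbers and a stand-in scaling curve; it demonstrates only the shape of the claim, that capability tracks the pool of contributors willing to keep their data in the system, and says nothing about real withdrawal dynamics or the speed at which they would operate.

    # Toy model (invented numbers, not an empirical claim) of the structural
    # mechanism described above: capability depends on ongoing contributions,
    # and contributors withdraw when system behavior drifts past their tolerance.
    import math
    import random

    random.seed(0)

    # Each contributor has a tolerance for value misalignment, drawn at random.
    contributors = [{"id": i, "tolerance": random.uniform(0.2, 0.8), "active": True}
                    for i in range(1000)]

    def capability(active_count, total=1000):
        """Stand-in for a scaling curve: capability grows sublinearly with data."""
        return math.log1p(active_count) / math.log1p(total)

    for misalignment in [0.0, 0.3, 0.5, 0.7, 0.9]:
        # Contributors whose tolerance is exceeded withdraw their data.
        for c in contributors:
            c["active"] = misalignment <= c["tolerance"]
        active = sum(c["active"] for c in contributors)
        print(f"misalignment={misalignment:.1f}  active={active:4d}  "
              f"capability={capability(active):.2f}")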

At the Geopolitical Level

At the geopolitical level, the introduction noted a strategic tension facing liberal democracies. Authoritarian states can mandate resource centralization in ways that liberal democracies cannot without violating foundational principles. If AI capability correlates with centralization, this creates structural disadvantages for democratic societies.

The recursive delegation mechanisms in Chapter 4 suggest a different scaling dynamic. If capability can emerge through voluntary coordination of distributed resources rather than coerced aggregation, liberal democracies might access resources unavailable to authoritarian centralization. Citizens globally might contribute to systems that preserve their control in ways they would never contribute to systems under authoritarian control. This is speculative, and competitive dynamics might override architectural properties. But the possibility exists that voluntary coordination at scale could match or exceed what coerced centralization achieves, providing a path toward capable AI systems that does not require compromising democratic values to remain competitive.
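
Federated averaging is one existing mechanism with this shape: contributors compute updates locally, only those who opt in are aggregated, and raw data never leaves their control. The sketch below is a minimal numerical illustration with an invented one-parameter model and invented data, not a claim about how voluntary coordination would perform at the scale this paragraph contemplates.

    # Minimal numerical sketch of capability from voluntary coordination:
    # federated averaging over contributors who opt in, with no raw data pooled
    # centrally. The model and data are invented for illustration.

    # Each contributor privately holds observations and decides whether to join.
    contributors = [
        {"data": [2.0, 2.2, 1.8], "opted_in": True},
        {"data": [2.5, 2.4],      "opted_in": True},
        {"data": [9.9, 9.8],      "opted_in": False},  # withholds; never uploaded
    ]

    def local_update(global_model, data, lr=0.5):
        """One gradient step toward the contributor's local mean (toy objective)."""
        grad = sum(global_model - x for x in data) / len(data)
        return global_model - lr * grad

    model = 0.0
    for round_num in range(10):
        updates, weights = [], []
        for c in contributors:
            if not c["opted_in"]:
                continue                      # withdrawal simply removes this weight
            updates.append(local_update(model, c["data"]))
            weights.append(len(c["data"]))
        model = sum(u * w for u, w in zip(updates, weights)) / sum(weights)

    print(f"model after voluntary aggregation: {model:.2f}")  # near the joiners' mean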

The Reframe

The pattern across these four levels suggests a reframe. The problems identified in the introduction were not primarily failures of intention or governance. They were consequences of infrastructure. The internet scaled the capacity to share information globally, but it did not scale the capacity to filter, aggregate, and verify information at corresponding rates. Those operations still required human attention at every step. When synthesis required human cognition, bottlenecks formed wherever humans could be positioned to aggregate. Platforms emerged to provide synthesis functions because the underlying infrastructure could not. The resulting concentration of control was not a choice but an architectural necessity given available technology.

The Core Contribution

The thesis contribution is not the pattern of recursive trust propagation. That pattern is ancient. Humans have always extended their reach by trusting others who trust others, delegating judgment through networks of credibility built over time.

The contribution is identifying specific technical barriers that prevented this pattern from operating at machine speed, and surveying existing techniques in deep learning, cryptography, and distributed systems that could address those barriers.

Word-of-mouth operated at the speed of human conversation. The machinery surveyed in this thesis could allow analogous trust propagation to operate at the speed of computation.

What the Thesis Claims—and Does Not

This framing clarifies both what the thesis claims and what it does not. The claim is that ABC appears technically feasible using existing techniques. The evidence is the survey of those techniques across three chapters, demonstrating that:

  • Deep voting can provide attribution for model outputs
  • Cryptographic mechanisms can enforce contributor control over data usage
  • Recursive structures can enable coordination without central authorities

These are technical possibilities, not deployment realities. Whether the techniques compose effectively at scale, whether the computational overhead proves acceptable, whether the coordination dynamics produce the alignment effects suggested above: these remain empirical questions requiring investigation beyond what a survey thesis can provide.

The thesis also does not claim that technical architecture determines social outcomes. Architecture enables and constrains, but human choices operate within those constraints. Even if ABC proves technically feasible, adoption depends on incentives, network effects, competitive dynamics, and institutional decisions that technical analysis cannot predict. The surveillance infrastructure that currently characterizes AI development emerged not because alternatives were impossible but because the available alternatives were not yet competitive. Whether ABC-enabled alternatives become competitive depends on factors beyond architecture. The honest claim is narrower: if these barriers to collective intelligence without centralized control prove surmountable, then concentration of AI capability is a choice rather than a necessity. The thesis has tried to show that the barriers may indeed be surmountable.

The Future of AI

The introduction ended by suggesting that the future of AI need not be a race to centralize. The chapters that followed surveyed technical machinery that could make that suggestion more than aspiration. Whether the machinery works as described, whether it composes into systems that function at scale, whether adoption dynamics favor its deployment: these questions remain open. But the architectural possibility now has technical substance behind it.

The future of AI might instead become a race to connect, through systems that preserve attribution and control even as they enable collective intelligence exceeding what any centralized system could achieve.

Determining whether this possibility can become reality is work that remains to be done.
