Amplifying Original Thought in an AGI Economy
Anomaly Detection Engine (ADE)
Type your hypothesis or idea.
This interface evaluates predictability, not truth. It decomposes your input, positions each component against existing knowledge, and estimates where—if anywhere—your thinking departs meaningfully from prior art. Outputs include an originality score, partial originality flags, and cited lineages for derivative components.
(Implementation follows later in the post.)
Thirty Minutes of Thinking as Signal
This essay was produced from roughly thirty minutes of concentrated, uninterrupted thought. That fact matters more than the word count that follows.
In an AGI-saturated environment, outputs are no longer scarce. Summaries, explanations, and derivative prose can be generated on demand at near-zero marginal cost. What remains scarce is non-derivative cognition: the act of forming a hypothesis that meaningfully departs from prior formulations while remaining coherent enough to test, extend, or falsify.
From an economic perspective, those thirty minutes are valued not by their duration, but by their leverage.
If a single concentrated thinking session produces a defensible conceptual distinction, a reframing of human–AGI roles, and a designable interface concept, its value is no longer comparable to wage labor or content production. It resembles early-stage research and hypothesis generation—activities whose downstream value compounds disproportionately.^[1]
A conservative valuation makes this clear. Senior research, strategy, or product-definition work routinely clears $150–$300 per hour. Thirty minutes, priced once, is $75–$150. But original ideas are not consumed once. They are reusable, refinable, and amplifiable—especially by AGI.
When an idea becomes a template for further reasoning, its value shifts from hourly compensation to option value. Even if only a small fraction of such ideas mature into tools, products, or research programs, the expected value of the originating cognitive act plausibly reaches orders of magnitude beyond its initial time cost.^[2]
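The option-value claim can be made concrete with a back-of-the-envelope calculation. All figures below are illustrative assumptions, not measurements; only the $150 session cost comes from the hourly range cited above.

```python
# Hypothetical expected-value sketch for one concentrated thinking session.
# Every figure except session_cost is an illustrative assumption.

hourly_rate = 300                   # upper bound of the senior-work range ($/hr)
session_cost = hourly_rate * 0.5    # thirty minutes priced once: $150

p_mature = 0.02                     # assumed fraction of ideas that mature
downstream_value = 1_000_000        # assumed payoff when one does ($)

expected_value = p_mature * downstream_value    # $20,000
multiple = expected_value / session_cost        # ~133x the time cost

print(f"session cost: ${session_cost:.0f}")
print(f"expected option value: ${expected_value:.0f} ({multiple:.0f}x the time cost)")
```

Under these assumptions the originating act is worth roughly two orders of magnitude more than its hourly price; the exact multiple moves with the assumed maturation rate, but the asymmetry between cost and expected value is the structural point.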
The precise number is secondary. The structural point is this:
in an AGI economy, the unit of value is not output, but original signal.
What LLMs Actually Do (and Don’t)
Large language models do not discover new knowledge. They do not form hypotheses in the scientific or philosophical sense, and they do not generate originality ex nihilo.
What they do—exceptionally well—is position user input against a vast, compressed representation of prior human knowledge. When a person provides an idea, the model decomposes it into latent components, maps those components onto existing patterns, and estimates their likelihood, coherence, and adjacency.^[3]
This is why LLMs feel creative while remaining fundamentally derivative. They are mirrors with context: they reflect your input, surrounded by everything that has already been said.
Crucially, this also makes LLMs effective detectors of non-originality. They can identify when an idea closely tracks established formulations, when it recombines familiar components, and when it departs from known trajectories in statistically or conceptually unusual ways.^[4]
The model is not having the original thought. It is recognizing that a human has produced something anomalous relative to the corpus.
This distinction defines the correct division of labor in an AGI economy: humans supply anomaly, intuition, and conceptual risk; AGI supplies context, compression, verification scaffolding, and amplification.
Anomaly, Novelty, and Truth
Three terms must be separated.
Truth concerns correspondence with reality. An idea may be true or false regardless of whether it is original.
Novelty concerns surface-level difference. Familiar ideas rearranged or rephrased may feel new while remaining fully derivative.
Anomaly concerns statistical and conceptual deviation within a structured space of prior knowledge. An anomalous idea does not closely match known formulations, is not easily predicted from adjacent concepts, yet remains internally coherent.
Anomaly is the scarce signal AGI can detect but not generate—and the signal most correlated with downstream discovery and creative value.^[5]
The ADE operates strictly at this level. It does not assess truth. It does not reward novelty for its own sake. It identifies where human thought departs meaningfully from precedent.
The Anomaly Detection Engine (ADE)
The ADE is not a creativity engine. It is an epistemic instrument.
Its purpose is to answer one question:
To what extent does this human input meaningfully deviate from what is already known?
Core process
The system performs semantic decomposition, manifold positioning, predictability estimation, and lineage tracing. Complex ideas are split so that partial originality can be detected. High-likelihood components are flagged as derivative; low-likelihood but coherent components are flagged as anomalous.
The output is not judgment, but orientation: an anomaly score with uncertainty, a breakdown by sub-idea, and citations for detected lineages.
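The scoring loop above can be sketched in code. This is a toy illustration under loud assumptions: `embed` is a stub (a real system would use a learned sentence encoder over a large prior-art corpus), and the names `embed` and `ade_score` are hypothetical, not an existing API.

```python
# Hypothetical sketch of the ADE scoring loop: each sub-idea is scored by
# its similarity to the nearest prior-art item. High distance = anomalous;
# low distance = derivative, with the nearest item reported as lineage.
import math

def embed(text: str) -> list[float]:
    # Stub encoder: hashes character bigrams into a small normalized vector.
    v = [0.0] * 8
    for a, b in zip(text, text[1:]):
        v[(ord(a) * 31 + ord(b)) % 8] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def ade_score(idea_components: list[str], corpus: list[str]) -> dict:
    """Score each decomposed sub-idea against the prior-art corpus."""
    corpus_vecs = [(doc, embed(doc)) for doc in corpus]
    report = {}
    for comp in idea_components:
        cv = embed(comp)
        nearest_doc, nearest_sim = max(
            ((doc, sum(a * b for a, b in zip(cv, dv))) for doc, dv in corpus_vecs),
            key=lambda pair: pair[1],
        )
        report[comp] = {
            "anomaly": 1.0 - nearest_sim,   # deviation from prior art
            "lineage": nearest_doc,         # closest known formulation
        }
    return report

report = ade_score(
    ["credentials signal competence", "anomaly is the unit of value"],
    ["degrees signal competence to employers", "models compress prior knowledge"],
)
for comp, scores in report.items():
    print(comp, "→", round(scores["anomaly"], 2), "via", scores["lineage"])
```

The per-component breakdown is what makes partial originality detectable: a composite idea can contain one derivative clause and one anomalous one, and each receives its own score and lineage.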
Originality becomes legible.
Anomaly Detection Is Already Doing Real Work
The ADE’s mechanics are not speculative. Variants of anomaly detection already underpin productive uses of machine learning in protein folding, materials science, astrophysics, and genomics—where models surface deviations worth human attention rather than inventing discoveries themselves.^[6]
In each case, the epistemic role is the same: evaluate inputs relative to an existing distribution, highlight departures from expectation, and return control to human judgment.
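The shared pattern is simple enough to show in a few lines: fit a distribution over prior observations, then flag departures for human review. The data and threshold here are illustrative.

```python
# Minimal illustration of the shared pattern: score new observations against
# an existing distribution and surface departures; humans judge what they mean.
from statistics import mean, stdev

baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]  # prior observations
mu, sigma = mean(baseline), stdev(baseline)

def flag_departures(observations, threshold=3.0):
    """Return observations more than `threshold` standard deviations
    from the baseline mean."""
    return [x for x in observations if abs(x - mu) / sigma > threshold]

candidates = [10.1, 9.9, 14.2]
print(flag_departures(candidates))  # only the clear outlier is surfaced
```

The ADE replaces the scalar measurements with idea embeddings and the Gaussian baseline with the corpus of prior formulations, but the epistemic move is identical: the model highlights, the human decides.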
The ADE applies this pattern to ideas themselves.
Early Signals: Humans Using Models to Accelerate Discovery
A growing class of tools uses models not as originators, but as context engines for human-led innovation. Early signals come from the broader ecosystem of computational biology platforms that explore design spaces otherwise beyond human reach.
In these workflows, the scarce input is not computation but hypothesis framing: deciding what to explore, what constraints to relax, and which anomalies are worth pursuing.
As these systems mature, the bottleneck shifts from compute to high-quality human input. That is the economic hinge most AGI narratives miss.
Education as an Institutional Artifact of a Pre-AGI World
Education emerged under conditions of evaluation scarcity.
When it was difficult to directly assess competence or originality, societies relied on proxy signals: degrees, institutional affiliation, and standardized credentials. These compressed large amounts of information into legible markers for employers and funders.^[7]
This system was functional given its constraints.
AGI changes the cost structure of evaluation. When models can analyze reasoning directly, compare ideas against prior art, and surface where thinking is derivative or anomalous, the need for coarse credentialing weakens.
Education does not disappear. It decomposes. Learning, mentorship, and scholarship remain essential. What loses justification is debt-financed gatekeeping built around proxy signals rather than direct evaluation.
The ADE makes this shift explicit: it evaluates thinking itself, not its institutional packaging.
When Originality Becomes a Compensable Signal
Once originality is legible, compensation stops being speculative.
The ADE converts human thought into a measurable signal—not by scoring truth or usefulness, but by estimating anomaly under constraint. That estimate correlates with downstream option value in discovery, design, and research.^[8]
Compensation can attach upstream through bounties, downstream through attribution and royalties, or longitudinally through reputation-weighted incentives. None of this requires AGI to be creative. It requires AGI to be selective.
Conclusion: Humans as Anomaly Generators
The error in most AGI discourse is assuming intelligence is exhausted by output.
Large models compress and recombine what is already known at superhuman scale. That does not erase the human role—it clarifies it.
Humans matter because they introduce conceptual discontinuities. They propose hypotheses that do not yet have a place in the manifold. They supply anomaly.
AGI does not replace this function. It makes it visible.
The Anomaly Detection Engine is not a creative substitute. It is a recognition layer. It identifies where human thought is doing real work and amplifies that signal downstream. As AGI grows more powerful, genuinely original human input becomes more—not less—valuable.
The future implied here is not post-human and not post-thinking. It is post-derivative. Its scarce input is non-predictable cognition under constraint.
That is not something models discover on their own. It is something they are built to recognize—when humans provide it.
References
^[1]: Arrow, K. (1962). “Economic Welfare and the Allocation of Resources for Invention.” NBER.
^[2]: Manso, G. (2011). “Motivating Innovation.” Journal of Finance.
^[3]: Bender, E. et al. (2021). “On the Dangers of Stochastic Parrots.” FAccT.
^[4]: Bommasani, R. et al. (2021). “On the Opportunities and Risks of Foundation Models.” Stanford CRFM.
^[5]: Boden, M. A. (1998). “Creativity and Artificial Intelligence.” Artificial Intelligence.
^[6]: Ruff, L. et al. (2021). “Unsupervised Anomaly Detection.” Foundations and Trends in ML.
^[7]: Spence, M. (1973). “Job Market Signaling.” Quarterly Journal of Economics.
^[8]: Kitch, E. (1977). “The Nature and Function of the Patent System.” Journal of Law & Economics.