The Forbidden Fruit Has Already Been Plucked

Astrophysicist David Kipping attended a closed meeting at Princeton's Institute for Advanced Study where leading scientists openly admitted AI performs 90% of their intellectual work, sparking a frank discussion about the end of science as we know it.

He returned shaken and recorded an hour-long podcast. I listened to the whole thing so you wouldn't have to.

In January, David Kipping came to Princeton to deliver a colloquium on astronomy. In the corridor of the Institute for Advanced Study, he passed Ed Witten, one of the fathers of string theory. Just passed by, as people often do in corridors. Through that same corridor walked Einstein, Oppenheimer, Gödel. Not a place accustomed to tolerating nonsense.

Background on Kipping

Kipping is a professor at Columbia University, creator of the YouTube channel Cool Worlds (1.5 million subscribers), with ten years in ML/AI. Eight years ago, he stopped developing models himself — couldn't keep up with the literature and decided you're either full-time in AI or you use it as a tool. He chose the latter. His publications cover predicting circumstellar planet stability and detecting "missing" exoplanets. An active scientist with a researcher's portfolio, not a journalist or casual blogger.

The next day, he dropped by IAS as usual and stumbled into a closed meeting. Organized by one of the senior astrophysics professors (Kipping intentionally doesn't name them). Topic: AI's impact on science. Forty minutes of presentation, then a historian's comment via Zoom, then discussion. About thirty people in the room, including authors of cosmological simulations like Enzo, Illustris, Gadget. Hydrodynamics with adaptive meshes, hundreds of thousands of lines of C and Fortran. Try, as Kipping put it, finding a room with a higher average IQ.

The meeting was internal: no cameras, no press releases, no prepared statements. Not a conference, not a promotional event. That's why people said what they actually think.

The historian spoke first: this is a historical moment, it must be documented.

The room laughed. Kipping did not.

Capitulation

First claim from the moderator: AI codes an order of magnitude better than humans. Exactly that — complete dominance, code quality superior by an order of magnitude. Not a single person in the room raised their hand to object. Not one.

Then came the numbers. The leading professor said AI can perform roughly ninety percent of his intellectual work. He hedged: maybe sixty, maybe ninety-nine. But he made clear it's more than half, and the proportion will grow. Not just code. Analytical thinking, mathematics, problem-solving. Everything a person at the IAS has devoted their life to.

A concrete example from Kipping: he worked on solving an integral in Mathematica, the main symbolic computation tool, Stephen Wolfram's product. Mathematica couldn't solve it. ChatGPT 5.2 could. It showed the complete chain of substitutions and transformations, which Mathematica doesn't do in principle. They verified the result numerically. It matched.
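Kipping doesn't state the integral itself, so the sketch below only illustrates the verification step he describes: take the closed form a model proposes and check it against brute-force numerical integration. The integrand and the "proposed" antiderivative are stand-ins chosen for illustration, not the actual problem from his work.

```python
# Sketch of the "check the model's symbolic answer numerically" step.
# The integrand and the proposed closed form are illustrative stand-ins,
# not the integral from Kipping's work, which the podcast doesn't spell out.
import numpy as np
from scipy.integrate import quad

def integrand(x):
    return np.exp(-x) * np.sin(x)            # toy integrand

def proposed_closed_form(a, b):
    # Closed form a model might return: the antiderivative of e^{-x} sin(x)
    # is -e^{-x} (sin x + cos x) / 2.
    F = lambda x: -np.exp(-x) * (np.sin(x) + np.cos(x)) / 2
    return F(b) - F(a)

a, b = 0.0, 5.0
numeric, err = quad(integrand, a, b)          # brute-force quadrature
symbolic = proposed_closed_form(a, b)         # the model's claimed result

print(f"numeric  = {numeric:.12f} (est. error {err:.1e})")
print(f"symbolic = {symbolic:.12f}")
assert abs(numeric - symbolic) < 1e-9         # agreement to numerical precision
```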

When someone from the IAS (remember, Gödel worked here) admits that a model performs ninety percent of his intellectual work, no marketer could invent a scarier formulation. An identity crisis, stated aloud, with witnesses. And the witnesses nodded and seemed pleased.

The Key to the Apartment Where the Money Is

The leading professor gave autonomous systems full control over his digital life. Email, files, servers, calendars. Root access, in Unix terms. Main tools: Claude and Cursor, with GPT as backup. About a third of the room raised their hands: same here.

Someone asked about privacy: did you at least read the user agreement?

Answer: "I don't care. The advantage is so massive that privacy loss is irrelevant."

Then ethics. The standard concerns were listed: jobs, energy consumption, climate, billionaire power. The moderator acknowledged them. Then he said, literally: I don't care, the advantage is too great. Kipping describes the room's mood as "screw ethics." This wasn't an isolated radical's opinion. The room agreed.

Here we should pause and consider what we're witnessing. Academics are masters of diplomatic hedging. Their entire career is the skill of saying "there are nuances" instead of "yes" or "no." Yet these very people, in a closed circle, without cameras, say "I don't care about ethics." The position itself is predictable (if your job is maximizing scientific results, you optimize toward results). But the willingness to state it plainly, without caveats, shows the level of pressure they feel. Even in hallway conversations, people used to hesitate before putting it this way.

The metaphor Kipping used: "forbidden fruit." The AI companies are the serpent offering the apple. Once you bite, innocence won't return. And if you don't bite but a competing lab does, they'll surpass you. An arms race with a moral dilemma inside.

Notably, this sense of inevitability isn't abstract; experience confirms it. Kipping describes his own workflow: proofreading papers (LaTeX straight into GPT), quick coding (though he still writes most code himself), debugging (he rarely does it manually; he copies the error into the chat), literature search, computing derivatives for papers, and even interdisciplinary work: when the TARS project required understanding graphene properties, albedo, and mechanical strength, he ran it all through AI. For YouTube: DXRevive for audio cleanup, lalal.ai for music separation, Rev.com for transcripts, Topaz for upscaling, GPT for fact-checking scripts.

Yet Kipping doesn't consider himself a super-user. His self-assessment: "My strength was always creativity; AI amplifies it." But the leading professor, by Kipping's account, went significantly further. Between "I use it for proofreading" and "I gave it root on my servers" lies a chasm, and in that chasm live all the stages of acceptance that scientists will pass through over the next year or two.

How Trust Grows

This might interest those working on alignment, interpretability, or configuring agent pipelines in production.

The leading professor described his trajectory. He started with Cursor, because Cursor shows diffs. Here's what was, here's what became, here's what I changed in your code. Transparent, verifiable, familiar to programmers. But as trust grew, the transparency became irritating. It stopped feeling like safety and started feeling like friction. The professor switched to Claude. Claude sends out sub-agents, decomposes tasks, solves them in pieces, acts more autonomously. It doesn't show every diff. It just does it.

The professor organized verification between models: solved the task in Cursor, rechecked in Claude, discussed results in GPT. Essentially peer review. Just not between colleagues but between three neural networks.
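Outside Cursor, that review loop is easy to approximate. Below is a minimal sketch under stated assumptions: it uses the public Anthropic and OpenAI Python SDKs, illustrative model names, and a toy task; the professor's actual pipeline wasn't described in enough detail to reproduce.

```python
# Sketch of "one model drafts, another model referees" cross-checking.
# Model names and the task are illustrative, not the professor's real setup.
from anthropic import Anthropic
from openai import OpenAI

anthropic_client = Anthropic()   # reads ANTHROPIC_API_KEY from the environment
openai_client = OpenAI()         # reads OPENAI_API_KEY from the environment

task = "Write a Python function returning the orbital period from Kepler's third law (SI units)."

# Step 1: ask Claude for a draft solution.
draft = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",        # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": task}],
).content[0].text

# Step 2: ask GPT to referee the draft adversarially.
review = openai_client.chat.completions.create(
    model="gpt-4o",                           # illustrative model name
    messages=[
        {"role": "system", "content": "You are a hostile referee. Find bugs, unit errors, and edge cases."},
        {"role": "user", "content": f"Task:\n{task}\n\nProposed solution:\n{draft}"},
    ],
).choices[0].message.content

print("DRAFT:\n", draft)
print("\nREVIEW:\n", review)
```

The point of the design is the asymmetry: the refereeing model never grades its own draft, which is the closest software analogue to sending a paper to a reviewer who wasn't a co-author.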

If we plot this trajectory formally, it's an S-curve: skepticism, disappointment, time investment, surprise, trust, control transfer. At the final phase, transparency becomes an obstacle — like a fly buzzing when trying to concentrate. The world's leading scientists occupy the upper plateau of this curve.

What this means for everyone building interpretable and explainable systems: high-level users don't need your transparency. They'll turn it off. Not because they were forced, not because the interface is bad — because they're more comfortable without it. Natural selection within user behavior pushes toward less interpretable systems. For the alignment community, this should be alarming: the better models work, the fewer incentives users have to monitor them.

A side effect: small scientific collaborations will disappear. Before, a scientist recruited a co-author because they lacked a skill: a calculation, a verification, code in an unfamiliar library. Now that gap closes with a model. Why call a colleague for one calculation if Claude does it in ten minutes? Kipping single-authors scientific papers, which is unusual in his field, and he expects the trend to strengthen. Only core collaborations will remain: two or three people where each is irreplaceable. Agents handle the rest.

Yet first contact with models usually disappoints. The leading professor admitted spending an enormous amount of time on trial and error. Hours of typing at the keyboard in all caps. People try once, get garbage, and quit. Those who persisted, the early adopters, gain a colossal advantage. Hence the meeting's motivation: the institute wasn't resisting; it was forming an accelerated adoption group. The message: take this, relax, enjoy it.

Pricing Trap Economics

The leading professor spends hundreds of dollars a month on subscriptions. From personal funds. For him, tolerable. For a grad student or young postdoc, already a barrier. Stratification is happening now: AI amplifies some, while others can't afford the amplifier.

Investment in the AI industry since 2014 exceeds the cost of the entire Apollo program more than five-fold (inflation-adjusted) and the Manhattan Project fifty-fold. Humanity has never invested such sums in any technology.

The question raised over lunch: how will investors recoup these amounts? One scenario is a price trap. The classic dealer scheme: the first hit is free. Models are cheap now. Everyone gets hooked. Skills atrophy. In a couple of years, companies jack prices up to thousands a month. By then the Overton window will have shifted: productivity with AI will be the expected baseline, and refusing will be impossible, the way it is with GPS: the habit remains but the navigation skill has died.

The second scenario, discussed heatedly: AI companies might demand a share of intellectual property. Imagine terms under which OpenAI or Anthropic takes ten, twenty, or fifty percent of the patents in exchange for access to a "research" tier. For now this is speculation. But two hundred billion dollars in investment demands returns, and charity won't cut it.

Almost nobody discusses this publicly. Yet if your grant funds the work and twenty percent of the IP goes to Anthropic, that is a different economics of research.

Who Suffers Most

Traditionally in physics and astrophysics, the technically skilled won: the ability to solve differential equations in your head, to write complex simulations, to think abstractly. AI neutralizes these advantages.

The new "super-scientist" profile is managerial. The ability to break problems into chunks a model can handle. Patience: not snapping when the model confidently talks nonsense for the third time. The skill of building workflows: prompts, rules, agent chains. An entirely different breed from the people who have moved science forward for three centuries. Like telling a conductor: the orchestra is virtual now, throw away your baton, learn MIDI.

The GPS analogy is exact and cruel. Before satellite navigation, we held three-dimensional maps of territory in our heads. GPS killed that skill; while driving, we think about anything but the route. Atrophy of coding skill, mathematical thinking, independent problem-solving: the same thing, at a bigger scale.

The most vulnerable are young scientists. PhD training costs roughly $100K a year: salary, health insurance, tuition. A model subscription costs twenty dollars a month. The first-year project a student spends twelve months on? A model digests it in an evening.

Meanwhile, the current administration is cutting federal grants. And there is the existential question, which Kipping explicitly marks as "I'm not saying this, but I can imagine someone saying it": why spend five years training a scientist if in five years scientists, as we understand the word, might not exist?

Tenured professors are in relative safety: by definition, firing them requires closing the whole institute. Captains going down with the ship. Part of the ship, part of the crew.

The leading professor already uses AI in grad student selection: not to make the decision, but to assist. He rates the result as the best in his career: faster, more accurate, more reliable.

The question that gives you shivers: by what criteria do you select students if the traditional ones (technical mastery, coding ability, abstract math) might be meaningless in five years? Kipping phrased it bluntly: would he take a student who refuses AI on principle? Probably not. It would be like refusing the internet or refusing to code.

What Goes Unspoken at Conferences

The things that went unvoiced at the meeting cast a shadow on every fact from the podcast.

If models write ninety percent and cross-check each other, who notices a systematic error common to them all? When everyone uses the same systems, diversity of viewpoints narrows. Suppose three models agree that an integral is solved this way. What if all three inherited the same approximation from their training data? A lone human reviewer with a pencil might catch it, but reviewers are drowning in papers, have no time, and increasingly check submissions with a model themselves.

Reproducibility is already a sore subject in science (search "replication crisis": half of psychology results don't replicate; biomedicine isn't much better). Now add experiments of the form "ran a prompt, got a result." How do you reproduce that next year, when the model has been updated? What was the sampling temperature? What system prompt was the default that Tuesday? Which model version? Reproducibility gets either a second wind or a bullet, depending on whether we learn to pin the prompt environment as strictly as we pin library versions in requirements.txt.
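There is no standard for that yet, so the following is only a sketch of what a "requirements.txt for prompts" could look like: a lock file written next to every result, recording whatever the provider lets you pin. The schema and field names are invented for illustration.

```python
# Sketch of a "prompt lock file": record everything needed to re-run a query later.
# The schema is invented for illustration; no standard for this exists yet.
import datetime
import hashlib
import json

def write_prompt_lock(path, *, provider, model, system_prompt, user_prompt,
                      temperature, seed=None, response_text=""):
    lock = {
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "provider": provider,
        "model": model,                  # exact model/version string reported by the API
        "temperature": temperature,
        "seed": seed,                    # only some providers support a sampling seed
        "system_prompt": system_prompt,
        "user_prompt_sha256": hashlib.sha256(user_prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response_text.encode()).hexdigest(),
    }
    with open(path, "w") as f:
        json.dump(lock, f, indent=2)

write_prompt_lock(
    "integral_check.prompt.lock.json",
    provider="openai",
    model="gpt-4o-2024-08-06",           # illustrative version string
    system_prompt="You are a careful symbolic-math assistant.",
    user_prompt="Solve the integral ...",
    temperature=0.0,
    seed=1234,
    response_text="...model output...",
)
```

Hashing the prompt and response keeps the lock file small, but it means the raw texts still have to live under version control alongside it.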

If models generate science and that science enters the training data for next-generation models, we get a closed loop. Whether it converges to something meaningful or diverges, we don't know. Model collapse is discussed widely, but almost never in the context of scientific knowledge. Yet scientific texts differ from copywriting: they are chains of reasoning in which an error midway breaks all the logic downstream. If a model trains on ten thousand papers where an intermediate step is a hallucination but the end result happened to match reality, it learns bad reasoning alongside right answers. That is worse than an outright error.

Another topic Kipping touches on elsewhere: public reaction. YouTube audiences are deeply allergic to anything that smells of "AI slop": wholly generated content, regurgitated Wikipedia, Reddit rewrites. Kipping draws a boundary: his content rests on unique ideas; models help with execution and presentation, not the ideas. But notably, the IAS scientists don't worry about public reaction. They aren't afraid of their papers being called generated, because they have long admitted that the models work at their level or beyond. To them, AI-assisted science is entirely legitimate. The gap between academic and public perception is already a chasm, and it is widening.

The paper flood. One to two orders of magnitude more publications. The supermen now write three or four papers a year instead of one, and "normal people with GPT" have gained the ability to write, too. Already arXiv posts dozens of papers a day per subfield. There is nobody left to read them. "Use AI for reading" is a surface answer: a scientist needs not a summary but internalized knowledge. In the head, digested, linked to what is already known. A summary doesn't give that.

Final Question

What's the point of replacing all scientists with machines?

Kipping offers an analogy with art. AI art exists and is useful for certain tasks. But museums hook us through the human story: what drove the artist, the context, why exactly this brushstroke here. Science has the same nature of curiosity. Detective work. The joy when the pieces fit and you suddenly grasp how a fragment of the world works.

Kipping's fears are specific. A world where a superintelligence designs a fusion reactor but humans can't understand how it works. A world with results but no comprehension. Where everything is magic. He says: "I don't know if I want to live in a world where everything is just magic, fantasy. I want to live in an intelligible world."

Plug in the numbers: a model costs twenty dollars and does a PhD student's work. Science stops being an elite privilege. Kipping's viewers, who for years emailed him ideas, no longer need Kipping to realize them. "Democratization." Sounds lovely. But the consequence is a publication avalanche in which human attention becomes the primary scarcity. The values shift: not "who can do science" but "who can separate the wheat from the chaff." An entirely different skill. And maybe the last one humans must master.

Kasparov lost to Deep Blue in '97. Then for ten years he promoted "centaur chess": a human plus a machine beats a machine alone. By 2015 the answer was no: the machine alone is stronger. The centaurs quietly exited the stage without honors. In science we are now in the centaur phase: humans are still needed, still manage the process, still ask the questions. How long that lasts is not an abstract question. For some of the grad students being selected right now, the answer will come before their dissertation defense.

Most Striking

The most striking thing about this podcast isn't the content. Anyone who works with LLMs daily will recognize their own thoughts in it. What's striking is something else. Kipping says the shock wasn't what he heard, but that it was said aloud and everyone nodded. Thoughts he had considered personal anxiety, uncertain, half-formed, frightening, turned out to be a common chorus. It's just that until that January morning, nobody dared say it.

The historian speaking over Zoom was right: this moment needs documenting. Kipping documented it. I wrote it down. Habr readers have read it.

But nobody there answered who will be doing the reading in five years: us, or the systems we will by then have delegated reading to.

Though maybe it didn't need an answer. It's enough that someone asked.

Based on: Cool Worlds podcast (David Kipping, Columbia University), episode on closed meeting at Institute for Advanced Study, Princeton, 2025.
