Something Big Is Happening


Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper, you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.

I think we're in the "this seems overblown" phase of something much, much bigger than Covid.

I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies... OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you... we just happen to be close enough to feel the ground shake first.

But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.


I know this is real because it happened to me first

Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.

For years, AI had been improving steadily. Big jumps here and there, but spaced out enough that you could absorb each one as it came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.

I'm not exaggerating. That is what my Monday looked like this week.
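
For readers who want to picture the mechanics of that loop, here's a conceptual sketch in Python. To be clear, this is my illustration of the describe-build-test-revise cycle, not any vendor's actual API; the `Agent` class is a toy stand-in for what a real coding model does at each step.

```python
# Conceptual sketch of the loop described above: generate the app, test it
# the way a person would, revise, and repeat until the agent is satisfied.
# The Agent class is a hypothetical stand-in; a real agent calls a model and
# actually launches and clicks through the app.

class Agent:
    def __init__(self) -> None:
        self.review_rounds = 0

    def generate(self, spec: str) -> str:
        return f"// v1 implementing: {spec}"  # tens of thousands of lines, in reality

    def self_test(self, code: str) -> bool:
        # A real agent opens the app, clicks the buttons, judges look and feel.
        self.review_rounds += 1
        return self.review_rounds >= 3  # pretend it takes three rounds of polish

    def revise(self, code: str) -> str:
        return code + "\n// refinement pass"

def build_app(spec: str) -> str:
    agent = Agent()
    code = agent.generate(spec)        # write the initial implementation
    while not agent.self_test(code):   # use the app the way a person would
        code = agent.revise(code)      # fix and refine, on its own
    return code                        # "It's ready for you to test."

print(build_app("a simple habit-tracking app"))
```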

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.

And here's why this matters to you, even if you don't work in tech.

The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers... it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

"But I tried AI and it wasn't that good"

I hear this constantly. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today bear little resemblance to what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall", a debate that has been going on for over a year, is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.

I think of my friend, who's a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won't work. It's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does. And I get it. But I've had partners at major law firms reach out to me for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long... and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.

The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.


How fast this is actually moving

Let me make the pace of improvement concrete, because I think this is the part that's hardest to believe if you're not watching it closely.

In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.

But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.

If you extend the trend (and it's held for years with no sign of flattening), we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
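
That extrapolation is just compounding arithmetic, and you can check it yourself. Here's a minimal sketch in Python; the five-hour baseline, the seven-month doubling time, and the eight-hour-day and forty-hour-week conversions are my illustrative assumptions, not METR's exact methodology.

```python
# Back-of-the-envelope projection of the task-horizon trend described above.
# Assumptions (for illustration only): a ~5-hour horizon at the most recent
# measurement, a 7-month doubling time, 8-hour workdays, 40-hour workweeks.

BASELINE_HOURS = 5.0     # human-expert hours the AI can complete today
DOUBLING_MONTHS = 7.0    # observed doubling time (recent data suggests ~4)

def horizon(months_from_now: float) -> float:
    """Projected task horizon, in human-expert hours."""
    return BASELINE_HOURS * 2 ** (months_from_now / DOUBLING_MONTHS)

for years in (1, 2, 3):
    h = horizon(12 * years)
    print(f"+{years} year(s): {h:6.1f} expert-hours "
          f"= {h / 8:5.1f} workdays = {h / 40:4.1f} workweeks")
```

Run it and you get roughly 16 hours after one year (a couple of workdays), 54 hours after two (more than a workweek), and 177 hours after three (over a month of full-time work), which is where the days, weeks, and months framing above comes from. If the doubling time really is closer to four months, every one of those numbers arrives sooner.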

Dario Amodei, the CEO of Anthropic, has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.

Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

Think about what that means for your work.


AI is now building the next AI

There's one more thing happening that I think is the most important development and the least understood.

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read that again. The AI helped build itself.

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Amodei says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.


What this means for your job

I'm going to be direct with you because I think you deserve honesty more than comfort.

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

Let me give you a few specific examples to make this tangible... but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.

Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn't using AI because it's fun. He's using it because it's outperforming his associates on many tasks.

Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.

Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.

Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.

Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.

Customer service. Genuinely capable AI agents... not the frustrating chatbots of five years ago... are being deployed now, handling complex multi-step problems.

A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.

The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don't know. Maybe not. But I've already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.

I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.

Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.


What you should actually do

I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. Two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.

Second, and more important: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

And don't assume it can't do something just because it seems too hard. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a client's full return and see what it finds. The first attempt might not be perfect. That's fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here's the thing to remember: if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.

This might be the most important year of your career. Work accordingly. I don't say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.

Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.

Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

Think about where you stand, and lean into what's hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.

Rethink what you're telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month... one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

Build the habit of adapting. This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.

Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.


The bigger picture

I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.

Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."

He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.

The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself... the researchers building these systems genuinely believe these are solvable within our lifetimes.

The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control... and this isn't hypothetical: Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, I don't know.


What I know

I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.

I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.

I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.

And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it.

We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.

It's about to.


If this resonated with you, share it with someone in your life who should be thinking about this. Most people won't hear it until it's too late. You can be the reason someone you care about gets a head start.

Thank you to Kyle Corbitt, Jason Kuperberg, and Sam Beskind for reviewing early drafts and providing invaluable feedback.

Original article: Something Big Is Happening, by Matt Shumer (2026-02-09)
