
The Subprime AI Crisis: Revisiting Crypto x AI


Original author: Edward Zitron

Original translation: BlockUnicorn


If you pay attention to AI, whether in the crypto industry or on the traditional internet, you need to think seriously about where this industry is headed. This article is quite long; if you don't have the patience for it, feel free to leave now.

What I write in this article is not intended to spread skepticism or bashing, but rather to provide a sober assessment of where we are today and where our current path may lead us. I believe that the AI boom—more specifically, the Generative AI boom—(as I have argued before) is unsustainable and will eventually crash. I also worry that this crash could be devastating to Big Tech, severely undermine the startup ecosystem, and further erode public support for the tech industry.

I’m writing this today because the landscape feels like it is changing rapidly, with multiple signs of an AI apocalypse already emerging: OpenAI’s hastily launched “o1 (codename: Strawberry)” model being called “a big, stupid magic trick”; rumors of price increases for future models at OpenAI (and elsewhere); layoffs at Scale AI; and leadership departures from OpenAI. These are all signs that things are starting to fall apart.

So I thought it necessary to explain just how dire the current situation is and why we find ourselves in a phase of disillusionment. I want to express my concern about the fragility of this movement, and about the obsession and lack of direction that brought us here, in the hope that some people can do better.

Additionally, and perhaps this is a point I haven’t stressed enough before, I want to emphasize the human cost of a bursting AI bubble. Whether Microsoft and Google (and the other large backers of generative AI) gradually slow their investments, or keep OpenAI and Anthropic (and their own generative AI projects) alive by sapping resources from the rest of their businesses, I believe the end result will be the same. I worry that thousands of people will lose their jobs, and that much of the tech industry will come to realize that the only thing that can grow forever is cancer.

There won’t be much lightheartedness in this post. I’m going to paint you a dark picture — not just of the big AI players, but of the entire tech industry and its employees — and tell you why I think the messy and destructive end is coming sooner than you think.

Go ahead and enter thinking mode.

How does generative AI survive?

Currently, OpenAI, a nominally nonprofit organization that may soon become for-profit, is raising a new round of funding at a valuation of at least $150 billion, seeking an estimated $6.5 billion and possibly as much as $7 billion. The round is led by Josh Kushner’s Thrive Capital, with rumors that NVIDIA and Apple may also participate. As I have previously detailed, OpenAI will have to keep raising unprecedented amounts of money to survive.

To make matters worse, according to Bloomberg, OpenAI is also trying to raise $5 billion in debt from banks in the form of a revolving credit line, which typically comes with higher interest rates.

The Information also reported that OpenAI is in talks with MGX, a $100 billion investment fund backed by the UAE, seeking to invest in AI and semiconductor companies, and may also raise funds from the Abu Dhabi Investment Authority (ADIA). This is an extremely serious warning sign because no one voluntarily seeks money from the UAE or Saudi Arabia. You only choose to ask them for help when you need a lot of money and are not sure you can get it from elsewhere.

Side note: As CNBC points out, one of MGX’s founding partners, Mubadala, holds about $500 million in equity in Anthropic, which was acquired from the FTX bankruptcy assets. One can imagine how “happy” Amazon and Google must be about this conflict of interest!

As I discussed in late July, OpenAI needs to raise at least $3 billion, and more likely $10 billion, to stay afloat. It expects to lose $5 billion in 2024, a number that will likely continue to increase as more complex models require more computing resources and training data. Anthropic CEO Dario Amodei has predicted that future models could require up to $100 billion in training costs.

Incidentally, the “$150 billion valuation” here refers to the way OpenAI prices its shares for investors — although the word “shares” is a bit vague here, too. For example, in a normal company, investing $1.5 billion at a $150 billion valuation would typically give you “1%” of the company, but in OpenAI’s case, things are a bit more complicated.

OpenAI attempted to raise money at a $100 billion valuation earlier this year, but some investors balked at the high price, in part due to (quoting The Information reporters Kate Clark and Natasha Mascarenhas) growing concerns that generative AI companies are overvalued.

To complete this round of funding, OpenAI may transition from a nonprofit to a for-profit entity, but the most confusing part is what investors actually get. Kate Clark of The Information reports that investors in this round were told they would not receive traditional equity for their investment; instead, they were given units promising a share of the company’s profits, payable once the company becomes profitable.

It’s not clear whether switching to a for-profit entity would solve this problem, since OpenAI’s odd “nonprofit + for-profit” corporate structure means that Microsoft is entitled to 75% of OpenAI’s profits as part of its 2023 investment — though in theory, a switch to a for-profit structure could include equity. However, when you invest in OpenAI you get “profit-sharing units” (PPUs), not equity. As Jack Raines writes in Sherwood, “If you own OpenAI’s PPUs, but the company never makes a profit, and you can’t sell them to someone who thinks OpenAI will eventually make a profit, then your PPUs are worthless.”

Over the weekend, Reuters published a report saying that any $150 billion valuation would be dependent on whether OpenAI can restructure its entire company and, in the process, lift a cap on investor profits that is currently limited to 100 times the original investment. The profit cap was established in 2019, when OpenAI said any profits above that would be returned to nonprofits for the benefit of humanity. In recent years, the company has modified that rule to allow for a 20% increase in the profit cap each year starting in 2025.
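To make the cap’s mechanics concrete, here is a quick sketch of how a 100x cap compounds at 20% per year from 2025. This is purely an illustration of the rule as reported, not OpenAI’s actual accounting:

```python
# Sketch of the reported profit-cap rule: returns capped at 100x the
# original investment, with the cap allowed to rise 20% per year
# starting in 2025. Illustrative only; not OpenAI's actual accounting.

cap = 100.0  # multiple of the original investment
for year in range(2025, 2030):
    cap *= 1.20
    print(year, f"cap = {cap:.0f}x")

# 2025: 120x, 2026: 144x, and so on. The ceiling rises quickly, but as
# noted below, any multiple of zero profit is still zero.
```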

Given OpenAI’s existing profit-sharing agreement with Microsoft, not to mention the massive losses it is mired in, any return would be theoretical at best. At the risk of sounding flippant: even a 500% return on zero profits is still zero.

Reuters also added that any switch to a for-profit structure (thus increasing its valuation above its recent $80 billion) would force OpenAI to renegotiate with existing investors as their stakes would be diluted.

The Financial Times also reported that investors must sign an operating agreement which states: “Any investment in [OpenAI’s for-profit subsidiary] should be considered in the spirit of a donation” and that OpenAI “may never make a profit.” Such terms are genuinely insane, and anyone who invests in OpenAI under them does so entirely at their own peril, because it is a patently absurd investment.

In reality, investors didn’t get a piece of OpenAI, or any control over it, but rather a stake in the future profits of a company that’s losing more than $5 billion a year and will likely lose more by 2025 (if it makes it that far).

OpenAI’s models and products (we’ll discuss their usefulness later) are extremely unprofitable to operate. The Information reports that OpenAI will pay Microsoft about $4 billion in 2024 to support ChatGPT and its underlying models, and that is at Microsoft’s discounted price of $1.30 per GPU per hour, compared to the $3.40 to $4.00 per hour other customers typically pay. Without its deep partnership with Microsoft, OpenAI could be spending as much as $6 billion per year on servers, not counting other expenses like employee costs ($1.5 billion per year). And, as I’ve discussed before, training costs are currently $3 billion per year and will almost certainly keep climbing.

Although The Information reported in July that OpenAI’s annual revenue was $3.5 billion to $4.5 billion, The New York Times reported last week that OpenAI’s annual revenue “now exceeds $2 billion,” meaning the year-end figure will likely land toward the low end of that estimated range.

In short, OpenAI is “burning money” and will only burn more money in the future, and in order to continue burning money, it will have to raise funds from investors who have signed a statement that “we may never be profitable.”

As I’ve written before, another problem for OpenAI is that generative AI (which extends to the GPT model and the ChatGPT product) isn’t solving the kinds of complex problems that justify its huge costs. The models are probabilistic, which leads to huge, intractable problems — in other words, they know nothing and are just generating answers (or images, or translations, or summaries) based on training data, which model developers are running out of at an alarming rate.

The phenomenon of “hallucination,” where a model confidently generates information that is not real (or produces something visibly wrong in an image or video), cannot be completely solved with existing mathematical tools. Hallucination may be reduced or mitigated, but its persistence makes generative AI hard to truly rely on for critical business applications.

Even if generative AI solves technical problems, it’s unclear whether it actually brings value to the business. The Information reported last week that customers of Microsoft’s 365 suite (which includes Word, Excel, PowerPoint, and Outlook, among others, and especially many of the enterprise-focused packages that are also closely tied to Microsoft’s consulting services) have barely adopted its AI-driven “Copilot” products. Just 0.1% to 1% of 4.4 million users, at $30 to $50 each, are paying for the features. One company that’s testing the AI features said, “Most people don’t see much value in it right now.” Others said, “Many businesses haven’t seen breakthrough gains in productivity and other areas yet” and they’re “not sure when they will.”

So how much is Microsoft charging for these underwhelming features? An eye-popping $30 extra per user per month, or up to $50 per user per month for the sales assistant feature. This effectively asks customers to double their existing fees (on an annual contract, by the way!) for products that don’t seem all that useful.
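To put those figures in perspective, here is a back-of-envelope calculation using only the adoption and price numbers reported above; the annualization is my own and purely illustrative:

```python
# Rough annualized revenue implied by the reported figures: 0.1%-1% of
# 4.4 million users paying $30-$50 per user per month.

users = 4_400_000

low = users * 0.001 * 30 * 12   # 0.1% adoption at $30/month
high = users * 0.01 * 50 * 12   # 1% adoption at $50/month

print(f"${low / 1e6:.1f}M to ${high / 1e6:.1f}M per year")
# ~$1.6M to ~$26.4M per year: a rounding error next to Microsoft's AI capex
```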

One more note: Microsoft’s problems are complicated enough that they may warrant their own piece in the future.

This is the state of generative AI — the leader in productivity and business software can’t find a product that customers are willing to pay for, partly because the results are too mediocre and partly because the costs are too high to justify. If Microsoft needs to charge so much, it’s either because Satya Nadella wants to achieve $500 billion in revenue by 2030 (a goal revealed in a memo released during the public hearings on Microsoft’s acquisition of Activision Blizzard), or because the costs are too high to lower the price, or both.

However, almost everyone keeps insisting that the future of AI will astound us: the next generation of large language models is just around the corner, and they will be amazing.

Last week, we got our first real glimpse into that so-called future. And it was a disappointment.

A silly magic trick

OpenAI released o1, codenamed Strawberry, late Thursday with the kind of excitement that accompanies a visit to the dentist. Sam Altman described o1 as OpenAI’s “most powerful and aligned model yet” in a series of tweets. While he acknowledged that o1 “still has flaws, is still limited, and after using it for a while it’s not as impressive as it was when you first used it,” he promised that o1 would deliver more accurate results on tasks with clear correct answers, such as programming, math problems, or scientific questions.

This in itself is pretty revealing — but we’ll get into that in a bit. First, let’s talk about how it actually works. I’ll introduce some new concepts, but I promise not to go into too much detail. If you really want to read OpenAI’s explanation, you can find it in their article on the official website — Learning to Reason with LLMs.

When faced with a problem, o1 breaks it down into individual steps—steps that hopefully will eventually lead to the right answer, a process called the “Chain of Thought.” It’s easier to understand o1 if you think of it as two parts of the same model.

At each step, one part of the model applies reinforcement learning: the other part (the part that produces output) is rewarded or punished based on the correctness of its progress (its reasoning steps), and adjusts its strategy when punished. This differs from how other large language models work: rather than simply generating an answer and handing it straight over, the model generates output and then reviews it, discarding bad steps and keeping good ones on the way to a final answer.
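To make that loop concrete, here is a minimal toy sketch in Python. OpenAI has not published o1’s implementation, so every function below is an invented stand-in for the process described above, not real model code:

```python
import random

def propose_step(problem: str, chain: list[str]) -> str:
    """Stand-in for the generator: propose the next candidate reasoning step."""
    return f"step {len(chain) + 1} toward solving {problem!r}"

def score_step(step: str) -> float:
    """Stand-in for the learned verifier. In reality this is itself a model,
    which is why the 'checking' part can also hallucinate."""
    return random.random()

def solve(problem: str, steps_needed: int = 3, threshold: float = 0.5) -> list[str]:
    """Build a chain of thought: keep steps the verifier rewards,
    discard (and retry) steps it punishes."""
    chain: list[str] = []
    while len(chain) < steps_needed:
        step = propose_step(problem, chain)
        if score_step(step) >= threshold:
            chain.append(step)       # "rewarded": keep the step
        # else "punished": discard the step and try again
    return chain

print(solve("2 + 2"))
```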

While this sounds like a major breakthrough, or even another step towards the much-praised artificial general intelligence (AGI) — it isn’t, as evidenced by the fact that OpenAI chose to release o1 as a standalone product, rather than an updated version of GPT. The examples OpenAI showed — such as math and science problems — were tasks where the answers were known in advance, where the answers were either correct or incorrect, allowing the model to guide the “chain of thought” at each step.

You’ll notice that OpenAI didn’t show how the o1 model would solve complex problems where the answer is unknown, whether it’s a math problem or something else. OpenAI itself admits that it has received feedback that o1 is more prone to “hallucinations” than GPT-4o, and that o1 is less willing to admit that it doesn’t have an answer than previous models. This is because, although there is a part of the model that is responsible for checking its output, this “checking” part can also hallucinate (sometimes AI will make up answers that seem reasonable, thus creating hallucinations).

According to OpenAI, o1 is also more convincing to human users because of the “chain of thought” mechanism. Because o1 provides more detailed answers, people are more inclined to trust its output, even when those answers are completely wrong.

If you think I’m being too harsh in my criticism of OpenAI, consider how the company promotes o1. It describes the reinforcement training process as “thinking” and “reasoning,” but in reality it’s just guessing, and at every step it’s guessing whether it’s right, and the final result is often known in advance.

This is an insult to humans, the real thinkers. Humans think based on a complex set of factors: personal experience, a lifetime of knowledge, brain chemistry. While we do “guess” whether certain steps are correct when tackling complex problems, our guesses are grounded in concrete facts, not the clumsy math of o1.

And, boy, it was expensive.

o1-preview is priced at $15 per million input tokens and $60 per million output tokens. That means o1 costs three times as much as GPT-4o for input and four times as much for output. However, there is a hidden cost. Data scientist Max Woolf points out that OpenAI’s “reasoning tokens,” the output used to arrive at the final answer, are not visible in the API. Not only is o1 more expensive, then, but the nature of the product makes users pay for more. Everything generated to “consider” the answer (to be clear, the model is not “thinking”) is also billed, making answers to complex problems like programming potentially extremely expensive.
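A worked example using the rates above; the token counts are assumptions for illustration, since the actual number of hidden reasoning tokens varies and is never shown by the API:

```python
# Cost of one hypothetical o1-preview call at the quoted rates:
# $15 per million input tokens, $60 per million output tokens,
# where "output" includes the hidden reasoning tokens.

INPUT_RATE = 15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 60 / 1_000_000  # dollars per output token

prompt_tokens = 2_000             # assumed: a modest programming question
visible_answer_tokens = 1_000     # assumed: the answer you actually see
hidden_reasoning_tokens = 20_000  # assumed: billed, but invisible in the API

cost = (prompt_tokens * INPUT_RATE
        + (visible_answer_tokens + hidden_reasoning_tokens) * OUTPUT_RATE)
print(f"${cost:.2f} per call")  # $1.29, most of it for tokens you never see
```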

Now let’s talk about accuracy. On Hacker News, a Reddit-like site owned by Sam Altman’s former company Y Combinator, there were complaints that o1 was making up nonexistent libraries and functions for programming tasks, and making mistakes on questions whose answers can’t easily be found online.

On Twitter, startup founder and former game developer Henrik Kniberg asked o1 to write a Python program that calculates the product of two numbers and to predict the program’s output. Although o1 wrote the code correctly (if more verbosely than the single line required), the predicted output was completely wrong. AI company founder Karthik Kannan also tried a programming task, and o1 made up a command that does not exist in the API.

Another user, Sasha Yanshin, attempted to play chess with o1, only for o1 to create a chess piece out of thin air on the board and subsequently lose the game.

Because I was being playful, I also tried asking o1 to list the states with an A in their names. It thought for eighteen seconds and came up with 37 states, including Mississippi. The correct answer should be 36 states.

When I asked it to list the states with a W in their names, it pondered for eleven seconds and actually included North Carolina and North Dakota.

I also asked o1 how many times the letter R appeared in its code name Strawberry, and it answered two.

OpenAI claims that o1 performs on par with PhD students on complex benchmarks in physics, chemistry, and biology. But it evidently performs poorly in geography, basic English-language tests, math, and programming.

Remarkably, this is exactly the “big, stupid magic trick” I predicted in my previous newsletter. OpenAI launched Strawberry to prove to investors and the public that the AI revolution is still going, and what it actually shipped is a clunky, boring, and expensive model.

Worse, it’s hard to explain why anyone should care about o1. While Sam Altman may boast about its “reasoning power,” those with the money to continue funding him see 10-20 second wait times, problems with basic factual accuracy, and a lack of any exciting new features.

No one cares about a “better” answer anymore; they want something radically new, and I don’t think OpenAI knows how to deliver that. Altman tries to anthropomorphize o1 by saying it “thinks” and “reasons,” clearly suggesting it is some kind of step toward artificial general intelligence (AGI), but it’s hard to get even the staunchest AI advocates excited.

In fact, I think o1 shows that OpenAI is both desperate and uncreative.

Prices didn’t drop, the software didn’t get more useful, and the “next generation” models we’ve been hearing about since November turned out to be a dud. These models are also desperate for training data, to the point where nearly every large language model ingests some kind of copyrighted content. This urgency led Runway, one of the largest generative video companies, to launch a “company-wide effort” to collect thousands of YouTube videos and pirated content to train its models, while a federal lawsuit in August accused NVIDIA of doing similar things to many creators to train its “Cosmos” AI software.

The current legal strategy amounts to willpower: hoping these lawsuits never get far enough to set a precedent that would make training these models copyright infringement, which is exactly what a recent interdisciplinary study sponsored by the Copyright Initiative concluded it is.

Those lawsuits are moving forward, and in August a judge granted the plaintiffs further copyright infringement claims against Stability AI and DeviantArt (which used the models), as well as copyright and trademark infringement claims against Midjourney. If any of the lawsuits succeed, it would be a catastrophic blow to OpenAI and Anthropic, and even more so to Google and Meta, which use datasets of millions of artists’ works, because it would be nearly impossible for AI models to “forget” their training data, meaning they would need to be retrained from scratch, which would cost billions of dollars and greatly reduce their effectiveness at tasks they are not particularly good at.

I’m deeply concerned that this industry is building castles on sand. Large language models like ChatGPT, Claude, Gemini, and Llama are unsustainable, and there seems to be no path to profitability: the computationally intensive nature of generative AI means they cost hundreds of millions or even billions of dollars to train, and they require such vast amounts of training data that these companies are effectively stealing from millions of artists and writers and hoping to get away with it.

Even if we set these issues aside, generative AI and its related architectures don’t seem to be revolutionary, and the hype cycle around generative AI doesn’t really fit the meaning of the term “artificial intelligence” at all. Generative AI is only occasionally able to correctly generate some content, summarize documents, or conduct research at some indeterminate “faster” speed at its best. Microsoft’s Copilot for Microsoft 365 claims to have “thousands of skills” and “endless possibilities” for enterprises, but the examples it shows are nothing more than generating or summarizing emails, “starting presentations with prompts,” and querying Excel tables—functionality that may be useful, but is by no means revolutionary.

We are not in the “early stage”. Since November 2022, large tech companies have spent over $150 billion in capital expenditures and investments on infrastructure and emerging AI startups, as well as their own models. OpenAI has raised $13 billion and can hire anyone they want, and the same can be said for Anthropic.

However, the result of this industry-scale Marshall Plan for generative AI is only the birth of four or five nearly identical large language models, the world’s least profitable startups, and thousands of expensive but mediocre integrations.

Generative AI is being marketed with multiple lies:

1. It is artificial intelligence.

2. It will get better.

3. It will become true artificial intelligence.

4. It is unstoppable.

Leaving aside terms like “performance” — which are often used to describe the “accuracy” or “speed” of generated content, rather than the skill level — large language models have actually plateaued. “More powerful” often doesn’t mean “does more”, but “more expensive”, which means you’ve just created something that costs more but doesn’t increase its functionality.

If the combined might of every venture capitalist and big tech giant still hasn’t found a truly meaningful use case that a lot of people are willing to pay for, then there won’t be new use cases. Large language models — yes, that’s where all these billions are going — aren’t suddenly going to become more capable just because the tech giants and OpenAI throw another $150 billion at it. No one’s trying to make these things more efficient, or at least no one’s succeeding in doing so. If someone succeeded, they’d be hyping it up.

We are faced with a collective delusion – a dead-end technology based on copyright theft (as is the case with every generation of technology), which requires constant capital to keep running, provides services that are dispensable at best, disguised as some kind of automated functionality that is not actually provided, costs billions of dollars and will continue to do so. Generative AI does not run on money (or cloud computing credits), but on confidence. The problem is that confidence – like investment capital – is a finite resource.

My concern is that we may be in the middle of an AI crisis similar to the subprime mortgage crisis — thousands of companies are integrating generative AI into their businesses, but prices are far from stable and even further from profitability.

Almost every startup that claims to be “AI-driven” is based on some combination of GPT or Claude. These models were developed by two companies that are deeply loss-making (Anthropic expects to lose $2.7 billion this year), and their pricing strategies are designed to attract more customers rather than turn a profit. As mentioned before, OpenAI relies on Microsoft funding – both the “cloud computing credits” it receives and the favorable pricing provided by Microsoft – and its pricing is entirely dependent on Microsoft’s continued support as an investor and service provider. Anthropic’s deals with Amazon and Google face similar problems.

Based on their losses, I speculate that if OpenAI or Anthropic priced closer to actual costs, API calls could cost ten to a hundred times more, though it’s hard to say precisely without real data. Consider the numbers reported by The Information, which projects OpenAI’s server costs at Microsoft will reach $4 billion in 2024 (at a rate, mind you, roughly a third of what Microsoft charges other customers), plus the fact that OpenAI is still losing more than $5 billion a year.

It’s highly likely that OpenAI charges a fraction of what it needs to run its models, and can only maintain that status quo if it can keep raising more venture capital than ever before and continue to get favorable pricing from Microsoft, which recently said it sees OpenAI as a competitor. While it’s impossible to be sure, it’s reasonable to assume that Anthropic is getting similar favorable pricing from Amazon Web Services and Google Cloud.

Assume Microsoft gave OpenAI $10 billion in cloud computing credits, and OpenAI spends $4 billion a year on servers plus an assumed $2 billion on training, costs that will surely rise with the launch of o1 and the new “Orion” models. OpenAI may then need more credits by 2025, or will have to start paying Microsoft in actual cash.
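Spelling out that arithmetic, with every input being one of the paragraph’s stated assumptions:

```python
# Naive constant-burn model of the assumed $10B in Azure credits against
# the cited ~$4B/year in server costs and assumed ~$2B/year in training.

credits_bn = 10.0             # assumed grant of cloud credits ($B)
burn_per_year_bn = 4.0 + 2.0  # servers + training ($B/year), held flat

print(f"credits last ~{credits_bn / burn_per_year_bn:.1f} years")
# ~1.7 years: credits dating from 2023 run out around 2025, after which
# OpenAI pays cash, and both cost lines are, if anything, rising.
```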

While Microsoft, Amazon, and Google may continue to offer favorable pricing, the question is whether these deals are profitable for them. As we saw after Microsoft’s latest quarterly earnings report, investors have expressed increasing concerns about the capital expenditures (CapEx) required to build generative AI infrastructure, and many are skeptical about the potential profitability of this technology.

What we don’t really know is how profitable Generative AI is for these massive tech companies, because they factor these costs into other revenues. While we can’t be sure, I imagine if these businesses were profitable at all, they would talk about the revenue they’re getting from it, but they’re not.

The market’s extreme skepticism toward the generative AI boom, and the lack of substantive answers from Nvidia CEO Jensen Huang about the return on AI investment, caused Nvidia’s market value to fall by $279 billion in a single day. It was the largest single-day loss of market value in US stock market history, the value destroyed equivalent to nearly five Lehman Brothers at its peak. The comparison stops there: Nvidia was in no danger of failing, and even if it were, the systemic impact would not be as severe. Still, it is a staggering sum, and it shows AI’s distorting power over the market.

Microsoft, Amazon, and Google all took a beating in early August for their massive AI-related capital spending, and they will face more pressure if they can’t show significant revenue growth next quarter from the $150 billion (or more) they’ve invested in new data centers and NVIDIA GPUs.

It’s important to remember that big tech has no other big idea to sell besides AI. When companies like Microsoft and Amazon began showing signs of slowing growth, they rushed to show the market they were still competitive. Google, a monopoly under threat on multiple fronts that relies almost entirely on search and advertising, also needed something new and eye-catching to hold investors’ attention. Yet these products have not delivered enough utility, and it seems much of the revenue comes from companies that tried AI and found it wasn’t worth it.

Currently, there are two possibilities:

1. Big tech companies realize they are in deep trouble and are choosing to reduce AI-related capital spending out of fear of Wall Street disapproval.

2. In order to find new growth, big tech companies decide to cut costs to sustain this destructive operation, laying off employees and diverting funds from other businesses to feed generative AI’s death march.

It’s not clear which scenario will happen. If big tech companies accept that generative AI is not a future reality, they won’t really have anything else to show Wall Street but could adopt a “year of efficiency” strategy similar to Meta’s, reducing capital expenditures (and laying off employees) while promising to “lower investment” to a certain degree. This is the most likely path for Amazon and Google, because while they’re eager to please Wall Street, at least for now they still have their profitable monopolies to fall back on.

However, actual revenue growth from AI needs to show up in the coming quarters, and it needs to be substantial, not vague talk of AI being a “maturing market” or of “annualized run rates.” If capex keeps increasing, that real contribution will need to be higher still.

I don’t think that growth is going to happen. Whether it’s in the third quarter of 2024, the fourth quarter of 2024, or the first quarter of 2025, Wall Street will start punishing big tech companies for their greed for AI, and that punishment will be much harsher than it is for Nvidia, which, despite Huang’s empty words and useless slogans, is the only company that can actually show how AI can increase revenue.

I’m somewhat concerned that the second scenario is more likely: these companies are so convinced AI is the future, and their culture is so disconnected from building software that solves real problems, that they could burn themselves out. I’m deeply concerned that mass layoffs will be used to fund this movement, and the past few years give me no reason to think they’ll make the right choice and walk away from AI.

Big tech has been thoroughly poisoned by management consultants — Amazon, Microsoft and Google are all run by MBAs — and has surrounded itself with similar monsters like Google’s Prabhakar Raghavan, who drove out the people who actually built Google Search so he could take control.

These people don’t face real human problems; they create cultures focused on solving imaginary problems that software can fix. Generative AI may well seem a little magical to people who spend their entire lives in meetings or reading emails. I’d guess Satya Nadella’s (Microsoft’s CEO) theory of success largely amounts to “let the technologists figure it out.” Sundar Pichai could have ended the whole generative AI craze by simply mocking Microsoft’s investment in OpenAI, but he didn’t, because these people have no real ideas, and these companies are not run by people who have experienced the problems, let alone people who know how to solve them.

They are desperate, too. Nothing has ever been this high-stakes for them, with the possible exception of Meta burning billions of dollars on the Metaverse. Yet this situation is far more serious and far uglier, because they have invested so much money and tied AI so tightly into their companies that pulling it out would be both embarrassing and damaging to the stock, effectively a tacit admission that it was all a waste.

This could have been stopped sooner if the media actually held them accountable. The narrative is sold through the same scam as previous hype cycles, with the media assuming these companies will “solve the problem” even though it’s clear they won’t. Think I’m being pessimistic? Ask yourself: what is generative AI going to do next? If your answer is that they will “solve the problem,” or that they have “amazing stuff behind the scenes,” then you are an unwitting participant in a marketing operation (sit with that for a minute).

Author’s aside: We really need to stop being fooled by this stuff. When Mark Zuckerberg claimed we were about to enter the Metaverse, a ton of media outlets, like The New York Times, The Verge, CBS News, and CNN, joined in promoting an obviously flawed concept that looked terrible and sold itself on outright lies about the future. It was clearly nothing more than a crappy VR world, yet the Wall Street Journal was still calling it “a vision for the future of the internet” six months after the hype cycle had plainly expired. Same with crypto, Web3, and NFTs! The Verge, The New York Times, CNN, CBS News: these outlets are once again involved in promoting technology that is plainly useless. I should single out The Verge, or really Casey Newton, who, despite his good reputation and after three straight cycles of hyping the technology, claimed in July that “having one of the most powerful large language models could provide the basis for all kinds of money-making products for companies,” when in reality the technology only loses money and has yet to deliver any truly useful, lasting product.

I believe that at the very least, Microsoft will start reducing costs in other areas of the business to help sustain the AI boom. In emails shared with me by a source earlier this year, Microsoft’s senior leadership team requested (but ultimately shelved) that power requirements be reduced in multiple areas of the company to free up power for GPUs, including moving compute for other services to other countries to free up compute capacity for AI.

On the Microsoft board of the anonymous social network Blind (which requires verification via a company email), a Microsoft employee complained in mid-December 2023 that AI is taking their money, saying the cost of AI is too high, that it eats up salary increases, and that things won’t get better. Another employee shared their anxiety in mid-July, saying they clearly felt Microsoft had a marginal addiction to operating cash flow from cutting costs to fund Nvidia’s stock price, and that this practice had deeply hurt Microsoft’s culture.

Another employee added that they believe Copilot will “kill Microsoft in FY2025” and that Copilot focus will drop significantly in FY2025, revealing that they know of large Copilot deals in their country with less than 20% usage after nearly a year of proofs of concept, layoffs, and adjustments. They said the company has taken too many risks and that Microsoft’s huge AI investments will not pay off.

While Blind is anonymous, it’s hard to ignore the fact that a large number of online posts tell of cultural problems at Microsoft in Redmond, Washington, particularly that senior leadership is out of touch with actual work and will only fund projects with the AI label attached. Many posts express frustration with the “rhetoric of gibberish” from Microsoft CEO Satya Nadella and complain about the lack of bonuses and promotion opportunities in an organization focused on chasing an AI craze that may not exist.

At the very least, there is visibly a deep cultural malaise within the company, with many posts saying “I don’t like working here” and “everyone is confused about why we are investing so much in AI,” while on the other hand people feel they have to accept it because Satya Nadella doesn’t care at all.

The Information article mentioned that Microsoft has a worrying problem hidden in the actual adoption rate of its AI feature Office Copilot: Microsoft has reserved enough server capacity in its data centers for 365 Copilot to handle millions of daily users. However, it is not clear how this capacity is actually being used.

By one estimate, Microsoft’s current Office Copilot user base may be between 400,000 and 4 million, meaning Microsoft may have built a great deal of idle infrastructure that is not being fully utilized.

While one could argue that Microsoft is positioning itself based on the expectation of future growth in this product category, it’s worth considering another possibility: What if that growth never comes? What if — as crazy as it may sound — Microsoft, Google, and Amazon are building these massive data centers to capture demand that may never come? Back in March of this year, I made the point that I couldn’t find any company that could achieve significant revenue growth with generative AI. Nearly six months later, the question remains. The current approach by large companies seems to be to tack on AI capabilities to existing products in the hope of increasing sales that way, but this strategy has not shown any signs of success anywhere. Just like Microsoft, the “AI upgrades” they are launching don’t seem to be delivering actual business value to the enterprise.

So this raises a bigger question: Are these AI investments sustainable? Have the tech giants overestimated the demand for AI tools?

While some companies may be driving some of the spending on Microsoft Azure, Amazon AWS, and Google Cloud as they “integrate AI,” I’d assume much of this demand is driven by investor sentiment. These companies are “investing in AI” more to satisfy the market than based on cost/benefit analysis or actual utility.

However, these companies have spent a lot of time and money embedding generative AI capabilities into their products, and I think they may face the following scenarios:

1. These companies develop and launch AI features, only to find that customers are unwilling to pay for them, as Microsoft has found with its 365 Copilot. If they can’t find a way to get customers to pay now — during the AI hype — they will only be more vulnerable when the hype passes and bosses stop asking employees to “get on the AI bandwagon.”

2. These companies develop and launch AI features but cannot find a way to get users to pay extra for them, which means they can only fold AI features into existing products without increasing margins. Ultimately, AI features become a parasite eating into the company’s revenue.

Jim Covello of Goldman Sachs also noted in his report on generative AI that if the benefit of AI is just efficiency (such as being able to analyze documents faster), then competitors can do that. Almost all generative AI integrations are similar: some form of collaborative assistant to answer customer or internal questions (such as Salesforce, Microsoft, Box), content creation (Box, IBM), code generation (Cognizant, Github Copilot), and the upcoming intelligent agents, which are actually customizable chatbots that can connect to other parts of the website.

This question reveals one of the biggest challenges of generative AI: while it is “powerful” to some extent, this power is more reflected in “generating content based on existing data” rather than true “intelligence”. This is also why many companies’ introduction pages about AI on their websites are full of empty words, because their biggest selling point is actually “Uh… figure it out yourself!”
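Covello’s point is easy to see in code: nearly every such integration reduces to the same thin wrapper around someone else’s hosted model. The sketch below is hypothetical (the endpoint, model name, and key are invented placeholders), but it is structurally what most “copilots” and “agents” amount to:

```python
import requests

API_URL = "https://api.example-llm-vendor.com/v1/chat"  # hypothetical endpoint
API_KEY = "sk-..."                                      # placeholder

def ai_feature(customer_document: str, question: str) -> str:
    """The generic 'AI feature': prepend the customer's data to a prompt,
    send it to a hosted model, and resell the answer."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "some-hosted-llm",  # hypothetical model name
            "messages": [
                {"role": "system", "content": "Answer using only the document."},
                {"role": "user", "content": f"{customer_document}\n\nQ: {question}"},
            ],
        },
        timeout=30,
    )
    return resp.json()["choices"][0]["message"]["content"]
```

Because the wrapper is this thin, any competitor can ship the same feature in a week, which is precisely why these integrations all look alike.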

What I’m worried about is a knock-on effect. I believe that many companies are “trialing” AI right now, and once those trials are over (Gartner predicts that by the end of 2025, 30% of generative AI projects will be abandoned after the proof-of-concept phase), they will likely stop paying for those additional features or stop integrating generative AI into their company’s products.

If this happens, already depressed revenues for hyperscalers and large language model vendors like OpenAI and Anthropic that provide cloud computing for generative AI applications will be further reduced. This could put further pressure on prices at these companies, as their already loss-making margins will deteriorate further. At that point, OpenAI and Anthropic will almost certainly have to raise prices, if they haven’t already done so.

While the big tech companies can continue to finance the boom — after all, they almost entirely fueled it — that doesn’t help the smaller startups that have become accustomed to discounted prices because they won’t be able to keep operating. While there are cheaper alternatives, such as independent vendors running Meta’s LLaMA model, it’s hard to believe they won’t face the same profitability issues as the hyperscalers.

Note also that the hyperscalers are also very afraid of pissing off Wall Street. While they could theoretically (as I fear) improve margins through layoffs and other cost-cutting measures, these are short-term solutions that will only work if they can somehow shake some money out of this barren generative AI tree.

Regardless, it’s time to accept that the money isn’t here. We need to stop and examine that we are in the third era of the tech industry’s illusion. However, unlike cryptocurrencies and the Metaverse, this time everyone is in on the money-burning binge, pursuing an unsustainable, unreliable, unprofitable, and environmentally harmful project that was packaged as “artificial intelligence” and promoted as something that would “automate everything” but never actually had a path to actually achieve that goal.

Why does this keep happening? Why have we gone from cryptocurrencies to the metaverse and now generative AI, technologies that don’t really seem to be designed for ordinary people?

This is actually the natural evolution of a tech industry that is completely focused on increasing the value it extracts from each customer, rather than delivering more value to customers. Or, in other words, they don’t even really understand who their customers are and what they need.

Today, the products you’re marketed to will almost certainly try to tie you into an ecosystem — at least as a consumer, controlled by Microsoft, Apple, Amazon, Google. This makes it increasingly expensive to leave that ecosystem. Even cryptocurrency — ostensibly a “decentralized” technology — quickly abandoned its laissez-faire ethos in favor of aggregating users through a handful of large platforms (like Coinbase, OpenSea, Blur, or Uniswap), which are often backed by the same venture capital firms (e.g., Andreessen Horowitz). Rather than becoming the standard-bearer for a new, entirely independent online economy, cryptocurrencies have been able to scale only through the connections and money that funded other waves of the internet.

As for the Metaverse, it was a hoax, but it was also Mark Zuckerberg’s attempt to control the next generation of the internet, with Horizon as its main platform. As for generative AI, we’ll get to it shortly.

All of this is about further monetization — that is, increasing the average value of each customer, whether by getting them to use the platform more so as to show more ads, market “semi-useful” new features, or creating a new monopoly or oligopoly where only the tech giants with huge financial reserves can participate, while providing very little actual value or utility to customers.

Generative AI is exciting (at least to a certain kind of people) because the tech giants see it as the next big moneymaker — by adding fee-based avenues to everything from consumer tech to enterprise services. Most generative computing flows through OpenAI or Anthropic and back to Microsoft, Amazon, or Google, generating cloud computing revenue that keeps their growth stories going. The biggest innovation here isn’t what generative AI can do, but the creation of an ecosystem that is hopelessly dependent on a handful of hyperscale companies.

Generative AI may not be terribly practical, but it is very easy to integrate into all kinds of products, allowing companies to charge for these “new features.” Whether it’s a consumer app or a service for an enterprise software company, such products can earn millions or even billions of dollars by upselling them to as many customers as possible.

Sam Altman was smart enough to realize that the tech industry needed a “new thing” — a new technology that everyone could take a piece of and sell. While he may not fully understand technology, he did understand the economic system’s desire for growth and productized generative AI based on the Transformer architecture as a “magic tool” that could be easily plugged into most products, bringing some unique features.

However, the rush to integrate generative AI everywhere reveals a huge disconnect between these companies and actual consumer needs or effectively operating businesses. For the past 20 years, simply “making new stuff” seemed to work — launching new features and having sales teams hard sell them was enough to sustain growth. This trapped tech industry leaders in a toxic and unprofitable business model.

The executives running these companies, almost all MBAs and management consultants who have never built a product or a tech company from scratch, either don’t understand or don’t care that generative AI has no path to profitability. They probably assume it will simply become profitable in time, the way Amazon Web Services (AWS) did (it took nine years), even though the two are very different things. Things “just worked out” in the past, so why not now?

Of course, beyond the fact that rising interest rates have dramatically changed the venture capital market, shrinking VCs’ war chests and fund sizes, and the fact that attitudes toward tech have never been more negative, there are a host of other reasons, too numerous for this 8,000-word piece, why 2024 will play out very differently from 2014.

What’s really worrying is that many of these companies don’t seem to have any new products other than AI. What else do they have? What else can they use to continue to grow? What other options do they have?

No, they have nothing. And that’s the problem, because if AI fails, the impact will inevitably be felt by other companies across the tech industry.

Every major tech player — both in the consumer and enterprise space — sells some kind of AI product that integrates large language models or their own models, often running in the cloud on Big Tech’s systems. To some extent, these companies are dependent on Big Tech’s willingness to subsidize the entire industry.

I speculate that a subprime AI crisis is brewing, in which nearly the entire tech industry is complicit in a technology that is sold at dirt-cheap prices, is highly centralized, and is subsidized by Big Tech. At some point, the alarming and pernicious rate at which generative AI burns money will catch up with them, leading to price increases or companies releasing new products and features that charge so much — like Salesforce’s $2 per conversation for its “Agentforce” product — that even enterprise customers with ample budgets can’t justify the expense.

What happens when the entire tech industry depends on software that only loses money and has little real value of its own? What happens when the pressure becomes too great, these AI products become untenable, and these companies have nothing else to sell?

I really don’t know, but the tech industry is headed for a terrible reckoning, a lack of creativity enabled by an economic environment that rewards growth over innovation, monopoly over loyalty, and management over actual creation.

Original link
