
NVDA releases blow-out earnings, Google demonstrates a powerful new model but botches its release, and more companies report

February 2024 investment update

Dear investors and well-wishers,

Our fund advanced 10.5% in January.

Every day there seem to be new advances in artificial intelligence, each requiring a step-change increase in compute.

The long-speculated idea that breakthroughs in AI would lead to rapidly accelerating progress is actually happening.

NVDA earnings

NVIDIA reported earnings with revenues up 22% over the quarter, and 265% year-on-year. Operating leverage was even more impressive, with EPS up 487%. Gross margin came in at 74%.

There’s $22.5 billion remaining on their share repurchase plan, which makes you wonder who is going to sell.

If NVIDIA continues to beat earnings guidance by ~10% each quarter, the stock is a lot cheaper than it looks. As of today it’s trading at only 31x forward after-tax earnings.
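The arithmetic behind that claim can be sketched in a few lines. This is a back-of-envelope illustration only, assuming the 31x multiple is struck on consensus estimates and that earnings come in ~10% above those estimates:

```python
# Back-of-envelope: how consistent guidance beats compress a forward multiple.
# The 31x figure is from the text; the 10% beat is the stated assumption.
forward_multiple = 31.0   # price / consensus forward earnings
quarterly_beat = 0.10     # assumed beat vs. guidance each quarter

# If every quarter's earnings land ~10% above the estimates embedded in
# the multiple, realised annual earnings are ~10% higher than consensus,
# so the multiple on realised earnings is correspondingly lower.
effective_multiple = forward_multiple / (1 + quarterly_beat)
print(round(effective_multiple, 1))
```

On those assumptions the stock is effectively trading closer to ~28x realised earnings, and cheaper still if guidance keeps resetting higher off each beat.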

There’s a lot of talk of bubbles, but we are still in the part of the cycle where leading companies are posting >100% organic growth and trade on reasonable multiples of GAAP earnings.

There were some interesting titbits from the result.

40% of datacenter revenue is already coming from inference, which helps allay concerns that NVIDIA will lose out as compute shifts from training models to using them.

And 50% of that datacenter revenue came from hyperscalers, which rent out the chips at substantial returns. These companies are very public about their capex plans, and they plan to spend big.

Networking is growing faster than chips, with InfiniBand revenue up 5x year-over-year.

7 out of 8 pixels in games can now be generated by AI.

And perhaps most interestingly sovereign demand is looking to be a huge lift in the near and mid-term.

Every major government is going to need to train its own models on its own data - data which must be kept secure. The government opportunity is perhaps only comparable to the roll-out of databases.

Gemini

Google’s future Gemini model (not yet released) can take in entire books and movies and analyse them, measuring and weighing each letter of the text and each pixel of the movie against all the others. This is a dramatic leap forward from existing publicly available models which have much shorter context windows, and again shows how massive the increase in compute demand is really going to be.

Unfortunately for Google, these substantial advances were immediately overshadowed by the roll-out of their latest publicly available model, which has been well and truly botched.

Immediately it became obvious that the US obsession with race was infused throughout the model, which painted historically white figures black. At first it seemed this was just prompt manipulation, with Google perhaps appending ‘racially diverse’ to certain prompts, so a request for an image of a pope would return a racially diverse version.

But the problem runs far, far deeper than that.

The model doesn’t like children, meat-eating, or Israel, it hates Elon Musk, and thinks it would be worse to misgender Caitlyn Jenner than cause a nuclear catastrophe.


This is more amusing than harmful, though it does show the curious preferences of radical San Franciscans.

But this was a mission-critical product for a $1.8 trillion company.

We now know why it took so long for the corporate inventor of transformers to release a decent competitor to OpenAI: they were infusing it with the ideological preferences of a minority that doesn’t even seem to represent most of Silicon Valley, let alone the billions of people who were ostensibly the target customers.


This is all a shame because the product is clearly a step forward and has a number of features that ChatGPT was missing.

And in fairness most of the prompts doing the rounds on the internet are a long way from commercial use cases.

But why would anyone use an inaccurate model when there are so many functioning alternatives?

This will not be easy to fix.

These ethical preferences were inserted at great expense and effort by Google’s leadership. The staff involved (and this must have come from the top) are unlikely to abandon their principles easily.

One way or another, Google needs to sort this out ASAP.

This was an opportunity for Google to have a period of language model leadership before OpenAI releases their next update.

If Google does get its house in order, there may be an opportunity akin to when Satya Nadella took over Microsoft.

GroqChip

A second fascinating demonstration last week was the GroqChip, which is specialised for inference and claims to be 10x as fast as competitors at 1/10th the cost.

This opens up all kinds of use cases, most obviously communicating with LLMs in natural language at the pace of a normal conversation.
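A quick back-of-envelope shows why inference speed unlocks natural conversation. The numbers here are illustrative assumptions, not Groq benchmarks:

```python
# Illustrative estimate: generation speed vs. human speaking pace.
# All figures below are assumptions for the sketch, not measured data.
speaking_wpm = 150        # typical human speaking pace, words per minute
tokens_per_word = 1.3     # rough English tokens-per-word average

speech_tokens_per_sec = speaking_wpm * tokens_per_word / 60  # ~3.25 tok/s
fast_inference_tps = 500  # assumed throughput for a fast inference chip

headroom = fast_inference_tps / speech_tokens_per_sec
print(f"{headroom:.0f}x faster than speech")
```

Once generation runs orders of magnitude faster than speech, the model's response time stops being the bottleneck in a voice conversation; what remains is network and audio latency.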

And the technology is based on older generation 14 nanometer chips built by GlobalFoundries, rather than the latest 3 nanometer tech available from Samsung and Taiwan Semiconductor. So the next generation will be faster and more efficient again.


They’ve christened the chip a ‘Language Processing Unit’, or LPU, and I think the name will stick.

Whether or not the company can successfully roll this out at scale is unclear. But the demonstration was decisive enough to suggest specialized inference chips are the future, rather than the current or even next generation of GPUs from NVDA or AMD.

If you are building a chatbot, a customer service product, or anything that interacts with people, you are going to need the fastest inference, or be beaten by competitors who have it. Millisecond differences in latency divided winners and losers across e-commerce and search, and the differences here are much larger.

It is no surprise that specialized chips are better than graphics chips for specific tasks - it’s already impressive enough that chips designed for graphics proved so effective first in crypto and then in large language model training.

But it is a surprise to see this demonstrated so powerfully, so soon.

This has serious investment implications as it suggests the safest bets in AI might be upstream from designers.

Regardless of which company ends up with the most inference market share (and you only need to look at the chart below to see how dramatically things can change), the immense capital investment required to roll the chips out at scale will land across a handful of companies like Taiwan Semiconductor, ASML, and the specialized providers that furnish their factories.

Mind you, as one fund manager pointed out last week, many of these suppliers with no competition aren’t true monopolies, as they have a single customer. Boeing has many suppliers, but to call each supplier with no competition a ‘monopoly’ ignores the power dynamic when negotiating with a single customer, namely, their lack of it.

At the moment, the supply chain is operating at max capacity. This won’t last forever, but for now, this is the regime we are in. Pricing and revenues, as well as stock prices, are going up.

Amazon

Amazon reported a striking increase in profitability. AWS grew at 13% and the whole group grew 14%.

They are offering their own training and inference chips, as well as buying and renting out NVDA’s designs at substantial returns.

Only a year ago operating income was down 51%. As is the nature of these things, the stock has performed exceptionally since that ugly release.

Clarity released more data

Clarity provided an update of their Phase 2 diagnostic trial. As expected, Clarity’s agent showed superiority to standard-of-care, which led to a change in care for many patients (diagnostics are only useful if they change a patient’s treatment plan).

The real value shift will come as therapeutic data is released, as pharmaceutical companies are explicitly focusing their attention on cures, not diagnostic tests. But it’s helpful to see the technology working as expected.

Reporting

Reporting season has been solid, with another uplift in February. Some of our largest positions have just reported or will do so over the next week or so, notably Transmedics, MercadoLibre and Nubank.

Just last night Transmedics reported organic growth of 159% year-on-year and a GAAP-profitable quarter.

I’ll send out a review of more of our companies’ most recent results shortly.

Best regards

Michael