Clouded Judgement 1.9.26 - The Education Advantage in AI
Every week I’ll provide updates on the latest trends in cloud software companies. Follow along to stay up to date!
The Education Advantage in AI
I have been thinking a lot lately about education in AI. More often than not, I see education as a core go-to-market problem (but sometimes an advantage) in AI. The companies best able to educate the world and their prospects are winning. Part of the reason is that everything in enterprise AI is still so new. Many people, teams, and organizations are facing what I think of as a blank canvas problem. They know AI is powerful, but they don’t even know where to start: where to use it, where to augment existing workflows, where to create net new workflows, etc.
What surprises me the most - even when teams think they know what functionality they want to build, it’s not obvious to them how to build it. Build versus buy is just where it starts. Even inside “build,” there are dozens of options, design philosophies, and tradeoffs. Do you build with an opinionated platform or something more composable? Managed service versus self-hosted? General purpose versus specialized tooling? Vendor A’s worldview versus Vendor B’s? In practice, a huge amount of education in AI is not about buy versus build at all. What it’s really about is how you want to build something in the first place.
This is what makes education such a powerful advantage. There are a million things people can do with AI. The companies seeing early traction are the ones that can convince the market of three important things. First - a specific problem is worth prioritizing. Second - this problem should be solved in a particular way (typically in the way that the vendor approaches it). And third - their product is the best embodiment of that approach.
What makes this especially hard is that almost every AI company is still selling against the same alternative: we will (try to) build this ourselves. And at the beginning, that instinct feels rational. Models are accessible, APIs are cheap, and early demos aren’t that hard to put together. From the outside, it can feel like just another engineering project.
The catch is that no vendor can talk a customer out of this belief. You cannot educate someone into believing they should not build. Every explanation sounds like salesmanship (because of course every vendor is biased). Every warning about edge cases sounds theoretical. Until a team tries to build and operate something themselves, they simply do not believe it.
But even once teams commit to building, the education problem does not go away. It just changes shape. Teams now need to learn which architectural choices matter, which ones do not, and which ones will come back to bite them later. They need to understand where abstraction helps and where it hides complexity. This is where vendors are no longer competing just on features, but on worldview. Each product encodes a point of view about how AI systems should be built and operated.
This is also why traditional education still falls short. Blog posts, webinars, and docs can explain what a product does, but they rarely teach why an approach works better in practice. You are still asking users to reason abstractly about systems they have not lived with. If you do not know where the sharp edges are, every approach sounds roughly equivalent.
The real education happens through experience. Teams learn by building something, watching it break, feeling the operational burden, and discovering where complexity actually accumulates. That process is slow, but it is unavoidable. And it is why so many AI buying decisions feel stalled - the learning is still in progress.
The best AI companies design their products to accelerate this learning. They demonstrate capability while also surfacing tradeoffs. They make certain paths easy and others intentionally hard. In doing so, they teach users not just how to use the product, but how to think about the problem itself. The product becomes an opinionated guide.
This is also why free tiers, sandboxes, and fast time to first value matter so much in AI. They’re educational tools! They help users move from thinking about things in the abstract to concretely understanding them. Once that happens, the conversation shifts. Build versus buy becomes a little bit clearer. Vendor choice becomes clearer. What felt like an open ended design space starts to coalesce around a smaller number of viable approaches.
Stepping back, this helps explain why AI adoption can feel slow and fast at the same time. Slow at the beginning of a cycle or wave, because education cannot be rushed. Fast when markets start to ever-so-slightly mature, because once users internalize the right mental model, decisions snap into place. Teams move from the experimentation phase to the standardization phase. And this is the crux of the post - as a startup, once you’ve crossed this chasm, the “takeoff” can be extraordinary. The revenue ramp can be extraordinary.
The AI companies that win will not just explain their value better. They will teach the market how to build, and then convince the market that their way is the right one.
Top 10 EV / NTM Revenue Multiples
Top 10 Weekly Share Price Movement
Update on Multiples
SaaS businesses are generally valued on a multiple of their revenue - in most cases the projected revenue for the next 12 months. Revenue multiples are a shorthand valuation framework. Given most software companies are not profitable, or not generating meaningful FCF, it’s the only metric to compare the entire industry against. Even a DCF is riddled with long term assumptions. The promise of SaaS is that growth in the early years leads to profits in the mature years. Multiples shown below are calculated by taking the Enterprise Value (market cap + debt - cash) / NTM revenue.
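The multiple definition above can be sketched in a few lines of code. This is purely illustrative - the inputs below are made-up numbers, not any specific company:

```python
def ev_to_ntm_revenue(market_cap: float, debt: float, cash: float,
                      ntm_revenue: float) -> float:
    """EV / NTM revenue, using the definition from the text:
    Enterprise Value = market cap + debt - cash.
    All inputs must be in the same currency units.
    """
    enterprise_value = market_cap + debt - cash
    return enterprise_value / ntm_revenue

# Hypothetical example (units: $M): $10B market cap, $0.5B debt,
# $2B cash, $1.7B projected next-12-month revenue
multiple = ev_to_ntm_revenue(10_000, 500, 2_000, 1_700)
print(f"{multiple:.1f}x")  # 5.0x
```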
Overall Stats:
Overall Median: 4.7x
Top 5 Median: 20.2x
10Y: 4.2%
Bucketed by Growth. In the buckets below I consider high growth >22% projected NTM growth, mid growth 15%-22%, and low growth <15%. I had to adjust the cutoff for “high growth.” If 22% feels a bit arbitrary, it’s because it is…I just picked a cutoff where there were ~10 companies in the high growth bucket so the sample size was more statistically significant.
High Growth Median: 13.9x
Mid Growth Median: 7.7x
Low Growth Median: 3.5x
EV / NTM Rev / NTM Growth
The below chart shows the EV / NTM revenue multiple divided by NTM consensus growth expectations. So a company trading at 20x NTM revenue that is projected to grow 100% would be trading at 0.2x. The goal of this graph is to show how relatively cheap / expensive each stock is relative to its growth expectations.
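The worked example in the text (20x revenue, 100% growth, 0.2x growth-adjusted) implies growth is expressed in percentage points. A minimal sketch of that calculation:

```python
def growth_adjusted_multiple(ev_ntm_rev: float, ntm_growth_pct: float) -> float:
    """EV / NTM revenue multiple divided by NTM consensus growth,
    with growth expressed in percentage points (100% -> 100).
    Lower values mean cheaper relative to growth expectations.
    """
    return ev_ntm_rev / ntm_growth_pct

# Example from the text: 20x NTM revenue, 100% projected growth -> 0.2x
print(growth_adjusted_multiple(20, 100))  # 0.2
```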
EV / NTM FCF
The line chart shows the median of all companies with a FCF multiple >0x and <100x. I created this subset to show companies where FCF is a relevant valuation metric.
Companies with negative NTM FCF are not listed on the chart
Scatter Plot of EV / NTM Rev Multiple vs NTM Rev Growth
How correlated is growth to valuation multiple?
Operating Metrics
Median NTM growth rate: 12%
Median LTM growth rate: 13%
Median Gross Margin: 76%
Median Operating Margin: (1%)
Median FCF Margin: 19%
Median Net Retention: 108%
Median CAC Payback: 36 months
Median S&M % Revenue: 37%
Median R&D % Revenue: 23%
Median G&A % Revenue: 15%
Comps Output
Rule of 40 shows rev growth + FCF margin (both LTM and NTM for growth + margins). FCF calculated as Cash Flow from Operations - Capital Expenditures
GM Adjusted Payback is calculated as: (Previous Q S&M) / (Net New ARR in Q x Gross Margin) x 12. It shows the number of months it takes for a SaaS business to pay back its fully burdened CAC on a gross profit basis. Most public companies don’t report net new ARR, so I’m taking an implied ARR metric (quarterly subscription revenue x 4). Net new ARR is simply the ARR of the current quarter, minus the ARR of the previous quarter. Companies that do not disclose subscription rev have been left out of the analysis and are listed as NA.
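The payback formula above can be sketched as follows. The figures in the example are hypothetical, chosen only to show the arithmetic:

```python
def implied_arr(quarterly_subscription_revenue: float) -> float:
    """Implied ARR as defined in the text: quarterly subscription revenue x 4."""
    return quarterly_subscription_revenue * 4

def gm_adjusted_payback_months(prev_q_sm: float, q_sub_rev: float,
                               prior_q_sub_rev: float,
                               gross_margin: float) -> float:
    """Months to recover fully burdened CAC on a gross-profit basis:
    (Previous Q S&M) / (Net New ARR in Q x Gross Margin) x 12,
    where Net New ARR is current implied ARR minus prior implied ARR.
    """
    net_new_arr = implied_arr(q_sub_rev) - implied_arr(prior_q_sub_rev)
    return prev_q_sm / (net_new_arr * gross_margin) * 12

# Hypothetical figures ($M): $60 prior-quarter S&M, quarterly subscription
# revenue growing from $100 to $105, 75% gross margin
print(f"{gm_adjusted_payback_months(60, 105, 100, 0.75):.0f} months")  # 48 months
```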
Sources used in this post include Bloomberg, Pitchbook and company filings
The information presented in this newsletter is the opinion of the author and does not necessarily reflect the view of any other person or entity, including Altimeter Capital Management, LP (”Altimeter”). The information provided is believed to be from reliable sources but no liability is accepted for any inaccuracies. This is for information purposes and should not be construed as an investment recommendation. Past performance is no guarantee of future performance. Altimeter is an investment adviser registered with the U.S. Securities and Exchange Commission. Registration does not imply a certain level of skill or training. Altimeter and its clients trade in public securities and have made and/or may make investments in or investment decisions relating to the companies referenced herein. The views expressed herein are those of the author and not of Altimeter or its clients, which reserve the right to make investment decisions or engage in trading activity that would be (or could be construed as) consistent and/or inconsistent with the views expressed herein.
This post and the information presented are intended for informational purposes only. The views expressed herein are the author’s alone and do not constitute an offer to sell, or a recommendation to purchase, or a solicitation of an offer to buy, any security, nor a recommendation for any investment product or service. While certain information contained herein has been obtained from sources believed to be reliable, neither the author nor any of his employers or their affiliates have independently verified this information, and its accuracy and completeness cannot be guaranteed. Accordingly, no representation or warranty, express or implied, is made as to, and no reliance should be placed on, the fairness, accuracy, timeliness or completeness of this information. The author and all employers and their affiliated persons assume no liability for this information and no obligation to update the information or analysis contained herein in the future.
The observation that vendors compete on worldview rather than features is spot on. Most enterprise teams I've worked with spend months in what looks like analysis paralysis, but it's really just the educational lag you described. What makes this even harder is that the "build vs buy" framing misses the deeper issue - most teams haven't internalized what production AI actually costs in terms of ongoing maintenance. The part about products being opinionated guides is key. Free tiers and sandboxes don't just reduce friction - they compress months of abstract reasoning into a few weeks of tactile experience. I've watched teams flip from "we can build this" to "oh, we need this" almost overnight once they hit their first real edge case in production.