4 Comments
Jayson Winchester:

You nailed the assistive → authoritative line. The missing piece I keep seeing is that “authority” isn’t just a product toggle or more guardrails; it’s a closeable system object: decision procedures, rules-in-force at decision time, a sealed decision record for replay, and a non-bypassable commit/permit boundary between decision and execution. Otherwise you get faster inconsistency, not authority.

I wrote a longer take on “authority as a primitive” (Dec 17) if useful: https://thepropagation.report/p/the-ai-doesnt-know-who-you-are?utm_source=comment
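To make that shape concrete, here is a minimal sketch of "authority as a system object" in Python, assuming a toy score-threshold rule; `DecisionRecord`, the sha256 seal, and the permit gate are illustrative names, not a reference implementation:

```python
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """Sealed record: inputs, rules-in-force, outcome, and a hash for replay."""
    inputs: dict
    rules_version: str   # snapshot of the rules in force at decision time
    outcome: str
    decided_at: float
    seal: str

def decide(inputs: dict, rules: dict) -> DecisionRecord:
    # The decision procedure runs against a pinned rules snapshot,
    # never against "whatever the rules happen to be right now".
    outcome = "permit" if inputs["score"] >= rules["threshold"] else "deny"
    body = {"inputs": inputs, "rules_version": rules["version"],
            "outcome": outcome, "decided_at": time.time()}
    seal = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(**body, seal=seal)

def execute(record: DecisionRecord, action) -> None:
    # Non-bypassable commit boundary: execution accepts only a sealed,
    # verifiable permit; an unverified decision never runs.
    body = {"inputs": record.inputs, "rules_version": record.rules_version,
            "outcome": record.outcome, "decided_at": record.decided_at}
    expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    if record.seal != expected or record.outcome != "permit":
        raise PermissionError("no valid permit for this action")
    action()
```

The point of the seal is replay: anyone can recompute the hash from the recorded inputs and rules version and confirm exactly which decision authorized execution.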

Samuel S.:

"It will take time for people to trust AI systems. But once they do, the floodgates open."

That is ultimately a problem with the models though.

Roxane Googin:

Yes, Deal Director. Having lived through maybe 5 of these cycles, adoption comes in two phases. In the first phase, big buyers use new technology to do old things in safe, new ways. This preserves power structures but accomplishes little. Then true attackers, using the new technology to fundamentally alter business processes and maybe the businesses themselves, gut the old business models and get very rich. We are in phase one, waiting on phase two. That is when the fun begins.

The Deal Director:

One way to approach this is to ask whether we are implementing AI within the existing way users work, or around how models deliver the biggest outcomes: APIs, the ability to easily ingest outside context (instructions via md files, for example), tool calling, and so on.

GUI vs API says a lot about the direction being taken.
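As a sketch of the API-first pattern, here is what that looks like using the OpenAI Python SDK's chat-completions shape as one concrete example; the model name and the `lookup_account` tool are placeholders, not anything from the post:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# API-first usage: outside context is just a file the model ingests,
# not something a user pastes into a GUI.
instructions = Path("AGENT_INSTRUCTIONS.md").read_text()

# Tool calling: the model is told what it may invoke, and the caller
# executes whatever the model requests.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_account",   # hypothetical tool
        "description": "Fetch an account record by id.",
        "parameters": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",                 # placeholder model name
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Summarize the at-risk accounts."},
    ],
    tools=tools,
)
print(response.choices[0].message)
```

Everything a GUI user would paste or click is just data here: the md file is the context, and the tool schema is the capability surface.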
