Defining an MVP Headline Replacement Feature with a Go to Market Strategy
And when was it that you began to know everything? Also, I’m Turning on Payments
This is what corporate stock photos will look like in the future if I have my way.
Internet Friends, Space Romans, Digital Countrymen, lend me your ears!
If you read my post How to Make an Information Super Weapon you’re probably thinking, “Well that was quite a lot. Lot of roadmap to build out there. Good luck with the funding and call me in three years when it’s done.”1
If you don’t want to click on that link, the idea is that you build a sort of shared browsing experience that follows you around the internet and lets you pick up the wisdom of the people you know, plus the wisdom of other groups. My one-line pitch is “Community Notes for the entire Internet.” That way you have something other than your first gut instinct to help guide you toward what is true. You’d have a right to defense, adjudication, and all sorts of other stuff thrown in there as well. This is eventually going to be needed when AI slop overfills everything, anyway. We want to get ahead of that. At scale, we want a tool that incentivizes people who know what they’re talking about to stand up and say it in a way that is productive.
However, at scale, that’s a lot. You probably couldn’t get funding to go and build it all at once, anyway.
This post is to lay out what we can build as the first minimum step. What can we build that an end digital citizen can use, that is also actually valuable for them to use, that isn’t throwaway work as you scale, and will attract some users to the system? Oh, and is small enough to be affordable.
In my mind, it’s a Handy-Dandy Automatic Headline Replacement Feature.
We’ve picked up a lot of new readers in the past few weeks —thanks again to Scott Alexander, Nadia Bolz-Weber, and Rod Dreher— so I will explain. News headlines are written to be deliberately sensational. News is a game of clicks. The headline of any article you see is written to make you click. That would be fine, except there aren’t enough actual events going on that deserve that kind of sensational headline. So editors are forced to write bullshit that triggers your limbic system instead.
This is making us all go a little bit insane.
So what if people had an incentive structure where they had to be honest? Or at least, honest as far as anyone can understand it. Nobody goes to college and leaves with a skillset like, “now you know every true thing.”
This is the deeper problem I will repeat again and again. Check out this latest piece from Pirate Wires for some examples. People can sense the need for something like the system I’m describing, except they are defaulting to a fascistic totalitarian vision of that something. While it’s true that you can’t have so much bullshit out in the world that everyone has to do a research project every single time they want to know if something is true, the answer to that problem is certainly not to give one small group the total and unchallenged right to proclaim what is and is not correct.
We can see a class of people rising up who presume to not only know everything, with no explanation of how they acquired such a broad expertise, but also presume to tell you what you must believe. This is exactly backwards. For someone to convince you something is true they must actually, first and foremost, really convince you. Only then should you start to invest trust into that person.
Nobody just owns the truth, like it’s a crown you place on your head so that ever after everything you say is true. No single person can perfectly filter misinformation because no single person is in possession of total knowledge. Pursuing truth is hard. People are wrong all the time. That’s normal and quite a bit different than misinformation. Anyone who tells you otherwise is a propagandist.
I will provide an example of how this would all work.
Take this drama from several months ago when Substack was alleged to have a “Nazi problem.”
What does this headline make you think happened, exactly? Do you think that Substack is a haven for white supremacist content, more so than other social media platforms? It certainly gives that impression! It even has a boot stomping down to signal the oppression! I had an exchange with the author of this piece shortly after it was published. It wasn’t productive. It was like two people from two different universes trying to describe physics to each other. When I took a deeper dive into the substance of the article myself, it didn’t even really seem like there was a story here, let alone a “Nazi Problem.”
I think this headline could not be more misleading. From my perspective, there is no fair reading of events that justifies that headline.
What happened is that someone, who was, yes, a white supremacist, put up a link on his Substack to yet another financial service where someone could send him money. So even in the most uncharitable reading of the situation, Substack was not involved in the transfer of money. At no point did the article compare this against other social media platforms and the presence of such groups there. There’s nothing to indicate this person couldn’t have done the same thing on a different platform. It was written entirely without context, and there were no attempts to put this problem into any clarifying perspective, one piece of which is that before the article was written, all of the other white supremacist Substacks mentioned had fewer than one hundred readers. Collectively.
At the end of the day, there was one Nazi-leaning podcast that I think maybe earned its hosts enough to buy a used Kia Optima. Maybe. It was one of those things where I worked backward and doubled a lot of my assumptions to come up with a higher-end estimate. The article was unfair and misleading, especially the headline.
That’s my take on it. I could write up a whole piece on why that is my take, explaining it in further detail, but then how would you find it if you just happened to stumble across the Atlantic article while following a link? You’d have to spend at least a few hours chasing things down. This is the problem with news navigation as it exists today. If you read that article you wouldn’t automatically find the wider context from a person you trust and that’s supposing there is some super accurate and trusted reservoir of information out there. Newspapers tried to replicate, exactly, the newspaper in digital spaces. But a news service is something meant to give you a clear picture of what is true. To do that, news can’t just sit there. It has to follow you around as you confront bullshit.
So, what if instead of seeing that headline, you saw this headline instead?
I did this in a few minutes on a laptop. This is the minimum viable product.
Actually, it’s a few steps further past the minimum viable product because I did it manually and added context into the subtitle, but for a first step this needs to be automated. And it would be a different color. I’ll explain the divergence.
The minimum viable product for a Trust Assembly is something close to this extension called FoxVox. The people who built FoxVox did so as a demonstration of how easily AI can be used to spin up bullshit in the future. However, I think there’s a big opportunity they missed. A lot of news is already bullshit. What we need is something that is intentionally, persistently, and systemically anti-bullshit. Something so good you can install it on your one weird uncle’s computer and it will get him, over the course of a summer maybe, to no longer believe in the Illuminati.
This is not especially hard to build.
Take a news website, scrape the text, feed it into an LLM, and have it spit you out a bunch of new headlines. Then replace the old headline with the new headline.
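The loop above is simple enough to sketch end to end. Here is a minimal illustration, with the caveat that every name in it (`extract_headline`, `rewrite_headline`) is hypothetical, and the actual LLM call is stubbed out with a placeholder rather than a real provider integration:

```python
# Sketch of the scrape-and-rewrite loop: pull the headline out of a page,
# hand it (plus the article text) to a rewriting step, get a new headline back.
from html.parser import HTMLParser


class HeadlineParser(HTMLParser):
    """Grabs the text of the first <h1> on a page."""

    def __init__(self):
        super().__init__()
        self._in_h1 = False
        self.headline = None

    def handle_starttag(self, tag, attrs):
        if tag == "h1" and self.headline is None:
            self._in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1 and self.headline is None and data.strip():
            self.headline = data.strip()


def extract_headline(html: str):
    parser = HeadlineParser()
    parser.feed(html)
    return parser.headline


def rewrite_headline(original: str, article_text: str) -> str:
    # Placeholder: in the real pipeline this would be an LLM call carrying
    # the style instructions discussed later in this post.
    return f"[rewritten] {original}"


page = "<html><body><h1>Shocking Thing You Won't Believe</h1><p>...</p></body></html>"
original = extract_headline(page)
print(rewrite_headline(original, page))
```

A real crawler would also need per-site extraction rules (real news pages rarely keep their headline in a bare `<h1>`), but the shape of the loop stays the same.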
Now that we have LLMs, we can do this at scale. For the Trust Assembly, though, you have to start with the idea that there is a specific kind of power in many people seeing the same version of reality with the same perspective on that reality. You don’t want a bunch of people with their own individual sets of headlines. A shared perspective is the first step needed for group coordination.
I don’t want the previous headline to be entirely invisible. I want to see it at minimum as hover-text, or if I click, or through some other mechanism. We’re going to have to go with what is easiest for step one and evolve it from there. To be clear, though, at no point should the previous writer’s work be lost or difficult to find. It should just be hidden behind shields to prevent limbic hijacking and mind control.
Make no mistake, mind control is exactly what the headline above is attempting to do. For the example given, that editor wanted to go into your mind, before you understood the facts, and make you believe that Substack is run by Nazis.
You might say this is what all summaries of anything are attempting to do, and you’d be right. That’s also why it is important that you have someone you trust in the position of making the summary.
I also want the new headline to be a unique color to show that it has been altered so that the change is not invisible to the user.
Deleting content to make it say something else is evil.
Changing content and hiding that you made the change is also evil.
We won’t do either of those things. That is the critical difference between this system and other systems. We will alter the content, we will enhance the content, but always with a sign that the content has been changed and with a way to retrieve the original content. This is a cognitive-labor-saving tool for community and consensus building on the internet, but it still has to be transparent and fair. We are not in the position of forcing people to adopt our specific view of the truth or hiding where that work is being done. What we do want to do is empower people to apply their own point of view, drawn from people they trust, and then later on we add in the challenge mechanisms to make sure their groups aren’t going off the deep end.
If a summary is mind control, the point of this system is to arrive at a state of “Wholesome Mind Control,” where people are going out in front of you and any nudge you get from a summary is a nudge toward the truth as understood by people you trust, who have gone through a rigorous review process. Imagine you’re going to a car lot to buy a new car, and your dad, a mechanic you have a great relationship with and who has never steered you wrong, is there with you. That’s the kind of feeling of “mind control” I want digital citizens to have. The sort of mind control where, if a salesman starts spouting off a bunch of car facts, your dad, whom again you trust and have a good history with, is there to say, “No, we’re looking at this instead.” You can’t plausibly become an expert in every subject covered by the news, but you can plausibly get someone from the same “tribe” as yourself to help you understand it better.
The answer to “how do you specifically replace the text?” is where we get our first interesting divergence from FoxVox, and a go-to-market strategy. You need to offer a perspective more niche than “conservative” or “liberal.” This needs more bells and whistles before we attempt to onboard people en masse. I would be horrified if we only got as far as automatically replacing headlines and just left it there. At launch, this is a product for dorks who care a lot about the truth and due process. I’m also a dork who cares a lot about the truth and due process. So we go to where dorks who care a lot about the truth live… podcasts and Substack!
We —meaning I— reach out —meaning write some cold emails— to some popular creators in these spaces and say, “Hey, are you okay if we drop some of your writing into the context window of an LLM as part of an instruction for it to write new headlines in your style?” I can think of one creator I have a pretty decent relationship with who might be likely to buy in. Then we link that person to the full roadmap and tell them they would be an early adopter and an early pioneer for AI-assisted media. Then they could have their followers download the extension and carry their perspective across their entire browser-based media consumption. That’s a win for them with no work on their part, and it’s easy to demonstrate the function.
This means a digital citizen is no longer just doing something like following a particular news service by going to a website and reading articles. That news service becomes a perspective that follows them around the internet, adding context to what they see.
For instance, if I’m putting my money where my mouth is, and made a group, I would instruct the LLM to do the following: “Please write a one or two sentence factual headline, that includes the premise of the article and its conclusion. Please remove clickbait tactics. An average person reading each headline should be able to walk away without reading the article but understand its overall conclusion.” I’ve played around with this a few times just to see what kind of headlines I get and it’s jaw-dropping how different it feels. You don’t feel how bad the manipulation attempts are until they’re suddenly gone.
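To make that concrete, here is one way the instruction above might be packaged into an LLM request, using the common chat-message structure. The function name, the style-matching wording, and the idea of carrying the creator’s writing sample in the system message are all my own guesses at an implementation, not a spec:

```python
# Assemble a chat-style request: the system message carries the rewriting
# instruction plus a sample of the creator's voice; the user message carries
# the article whose headline is being replaced.
HEADLINE_INSTRUCTION = (
    "Please write a one or two sentence factual headline, that includes "
    "the premise of the article and its conclusion. Please remove clickbait "
    "tactics. An average person reading each headline should be able to walk "
    "away without reading the article but understand its overall conclusion."
)


def build_messages(article_text: str, creator_sample: str):
    return [
        {
            "role": "system",
            "content": (
                HEADLINE_INSTRUCTION
                + " Match the voice of this writing sample:\n"
                + creator_sample
            ),
        },
        {"role": "user", "content": article_text},
    ]


messages = build_messages(
    "Full article text goes here...",
    "A paragraph of the creator's own prose, used as a style reference.",
)
print(messages[0]["role"], len(messages))
```

Whatever provider ends up behind this, the important design choice is that the creator’s style lives in data, not code, so each creator’s group is just a different writing sample fed into the same pipeline.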
This won’t be perfect even at scale, and it will obviously be very imperfect at the start, but it’s something you could opt into that has actual value. We could crawl every major news website and build out a set of shared tables. Each row would have the URL of the piece, the article title, the group the replacement was generated for, and the replacement title, plus the standard date and time stamps, etc. We could make that table consumable via an API so that other applications, like social media companies, could ingest the same information. By choosing which websites to crawl at the start, we could push out the need to build notoriety rules, which are also complex. We could set up those tables in a manner that is conducive to the future, when individuals within groups are manually submitting updated headlines, when multiple groups are fighting it out, and when everyone is adding news services to be reviewed without them having to be manually added. That gets some big pieces of the data infrastructure going.
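One possible shape for that shared table, sketched here with SQLite purely for illustration. The column names are my guesses at the fields listed above (URL, article title, group, replacement title, timestamps); the real schema would be designed around the API and the future manual-submission flow:

```python
# A toy version of the shared headline table and the lookup the API would serve.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE replacement_headlines (
        id            INTEGER PRIMARY KEY,
        url           TEXT NOT NULL,
        article_title TEXT NOT NULL,
        group_name    TEXT NOT NULL,  -- which creator/group the headline was generated for
        new_title     TEXT NOT NULL,
        created_at    TEXT DEFAULT CURRENT_TIMESTAMP,
        updated_at    TEXT DEFAULT CURRENT_TIMESTAMP,
        UNIQUE (url, group_name)      -- one replacement per group per article
    )
""")
conn.execute(
    "INSERT INTO replacement_headlines (url, article_title, group_name, new_title) "
    "VALUES (?, ?, ?, ?)",
    ("https://example.com/story", "Clickbait Title", "demo-group", "Factual Title"),
)

# The API layer boils down to lookups like this: given a URL and the group
# the user follows, return the replacement headline (if one exists).
row = conn.execute(
    "SELECT new_title FROM replacement_headlines WHERE url = ? AND group_name = ?",
    ("https://example.com/story", "demo-group"),
).fetchone()
print(row[0])
```

The `UNIQUE (url, group_name)` constraint encodes the shared-perspective idea directly: everyone in a group sees the same replacement for the same article.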
But for there to be any product at all, it needs to have something that can at least in theory cover the entire internet. This is the first step to making that happen. It’s also a power flex of what the system can do at scale. Something an investor could look at and immediately understand the reach of the system. The second step is letting people make their own annotations for their own group, for instance we can add a color scheme that means something like “this is really good” or “this is very deceptive” etc. The ability to whitelist certain sites that already have good headlines, mark certain journalists as negatively biased or not, etc. But to start, we would stay within the headline because that keeps our data obligations manageable.
From there, we would move into replacing words in the article. Again, we don’t remove what was originally there and we never hide that a change was made, but there are still landmines to be defused. For instance, it’s a common ploy to label a political enemy a “communist” or “far-right” without ever answering “according to whom?” This gives you the power to go in and start correcting that with the context of “according to people you trust.” As the meme goes, “Am I far-right, or am I just a normal person from twenty years ago?” Similarly, you would get the benefit of knowing, “oh wow, yeah, he is a Nazi.” The way these things have blurred over the last several years is terrible.
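A toy sketch of that word-level step, to show the principle: flag loaded labels and attach attribution rather than silently swapping words out. The label list and the bracket format are invented for illustration only; the real version would pull both from the user’s trusted group:

```python
# Append an "according to whom?" marker after each loaded label,
# leaving the original wording fully intact.
LOADED_LABELS = ["communist", "far-right", "far-left", "fascist"]


def annotate_labels(text: str, attribution: str) -> str:
    for label in LOADED_LABELS:
        text = text.replace(label, f"{label} [according to {attribution}]")
    return text


print(annotate_labels("The senator is far-right.", "the article's author"))
```

Note that this adds context next to the original word instead of replacing it, which is the same transparency rule as the headline case: the reader always sees that an annotation happened and what was originally written.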
At scale, this has groups competing against one another to share the best common explanation of reality and has a bunch of money movement services to incentivize them to do that. That’s the impossible to escape game that makes this whole thing systemically rigorous at scale. Those are hard and expensive things to build with lots of regulation and will have to come last.
The broad goal here is to make a civilizational technology that allows people to know what is true and have a sense of digital community. Something that will scale as AI scales and still keep humans in the loop directing human-shaped narratives. Healthy games of suspicion and trust kick off where people no longer can profit from the game of “say wild shit for attention” and instead have to “show value steadily over time.” The wild attention seeking stuff will always exist, but wouldn’t it be nice to have somewhere to go as an adult to see what’s actually real?
It is not our job, as the builders of the system, to tell some group that they are wrong and cannot participate. That is a duty that has to belong to the users of the system. There cannot be, and never will be, a government that does not require diligence and courage from its citizens. This system won’t have “users,” in my mind, but “digital citizens” who are responsible to their trust community.
Let’s shoot for seven original creators, hopefully broadly across the political spectrum so we can show no bias at the start, and that users of the system can toggle through. We want smaller but up-and-coming people who can bring a group of highly engaged users with them who can be early adopters and suffer through beta versions. I’m thinking mid-tens of thousands of followers with the hope of getting a few thousand people engaged. We can’t bring anyone on who will overwhelm the system until there’s enough funding in place to pay for servers, etc.
In summary, the MVP looks like:
An LLM scrapes news sites from a pre-defined list (to make the data tables easier to build) and replaces headlines, with instructions to match the style of a particular news service.
Tables are created to support this for several different creators, who are invite-only (so we don’t have to build out a full creator onboarding process until later), and made shareable via an API.
Those tables are combined with a browser extension (probably Chrome, but hopefully also Safari) so that headlines are replaced as people browse.
Headlines are replaced under a stylistic scheme that is aesthetically pleasing and easy to spot, with the original text being recoverable.
How You Can Help
Several highly skilled and kind people have offered up their services after reading about the idea after Scott’s link. I would love to have you in the discussion group. We have a Discord here that you can join. Note, this is not hosted by me and the reason I’m not doing it on Substack itself is I want it to be at least a little hard to join.
If you have an interest in funding, please send me an email. I would love to discuss this with anyone, but please note that I am expecting a second child in the next few days, so I may be suddenly unavailable and unable to respond. Send me a message on here or reply to this post and I will respond in kind ASAP. We are working toward getting some estimates on development commitments, but I am guessing we need low five figures to deliver an MVP. I would not take any of this money for myself; it would all go toward the development of the system.
As some of you are here from the Scott Alexander link, I do want to give a full disclosure: he might just like my biography pieces because I can be pretty funny, and he might think the Trust Assembly stuff is harmless/okay but not his cup of tea. So please don’t proceed with the state of mind that I have an “in” with him or something of that nature. I was once internet famous —although much less than him— and it would get kind of annoying when a casual e-quaintance would trade on my name, and I don’t want to do that to someone else. As to why I put this in the footnote: if you’re an ACX reader, I have very high confidence you read the footnotes. Thanks again to Scott for the link.
Would you take suggestions for Trusted People? And: if you haven’t already done so, read Stephen Fry’s most recent talk/essay on AI. It’s brilliant on its own, and he might be VERY interested in your idea.
An important comment on the footnote: since you seem to have an in with the Substack people, can you please ask them to fix the issue where footnotes aren’t clickable in the app.