I've been working towards something that rhymes with this idea, happy to find others poking at it as well, excited to dig through backlog
Welcome, fellow ACX reader. I as well. Some Guy + Matthew, where can we convene to talk about this more?
I've actually designed a similar system that takes care of a couple of edge cases I think Some Guy hasn't accounted for, specifically:
Humour: people who are being funny. This only slightly overlaps with true, but has a lot of value.
Trolls: people trying to get a rise out of others. Normally, you can just treat these people as liars and punish them accordingly, but some people want to see them and interact with them. How to let them provide their own value?
Also, Some Guy seems to have thought about a few things I had not thought of, like how to monetise the whole mess.
Bootstrapping will also be difficult. How to get the MVP set of requirements? How to actually build it? Who to be the first reviewers and users?
Would love to talk about all these things in more detail.
My wife is about to have baby number two, but my post after next will be to schedule a big X Space to discuss. I would love to speak with you both on it.
Happy Parenthood. Enjoy it!
EDIT: Though in the spirit of release early and often, maybe just set up a quick Slack or Discord even if you won't have much/anything to contribute right away?
EDIT2: https://discord.gg/uczs2Qrh < I just made one for the interim.
could I get a new link, expired while been out busy, but very interested in chatting
Working on getting a new communication channel going. Even though I just said it wouldn’t be substack I’m now thinking it will be substack.
Congrats on the baby number 2! See you on the discord afterwards I suppose, a team I was a part of at a hackathon came up with a very similar idea to this one, worked it out slightly differently and I'm curious to see which one you think to be more feasible.
https://discord.gg/uczs2Qrh < I made a Discord server. Maybe we can try to cover some high-level concepts?
I'd also love to learn more!
Next week!
Hi,
I don't think this is boring at all.
I started crying when I read it.
Keep going.
I’ll take it!
Looks good. I'd like my trust assembly to replace the words "disinformation" and "misinformation" with "lying" in every article I read.
I like it! I hate how those words basically mean nothing now.
Is this similar to how wilepefia (minus tech)
Wikipedia? Very similar but with much different rules for engagement. This is default adversarial.
Ha, thank you for decoding. I really liked the Lex interview that discussed Wikipedia; I didn't realize how it worked at all. It's a nice reference point: Wikipedia but adversarial. Wikipedia but democratic.
This also produces shareable artifacts like "a new headline" or "replace these words" or "add this context," so in that sense the wiki is there to enrich the web-viewing experience elsewhere.
I worry that this focuses on consensus, rather than truth, and whilst that is still valuable, it also has failure modes.
I might have missed something in your plan, but I feel like if I know a truth that goes against the consensus, I'm incentivised to spread the consensus, rather than the truth?
'Truth' is probably an impossible target, so I get why you would go for consensus, but as soon as you try and incentivise spreading a consensus view, this negatively impacts truth.
For an example of how this can be a problem, just look at the herding of the pollsters at the moment. Nobody is incentivised to publish an outlier, and suddenly the "wisdom of crowds" is destroyed, by everyone trying to say the 'correct' thing.
What you really want is to incentivise people to speak their minds, even when those thoughts go against the consensus. If you select specifically for the people most able/willing to align with the consensus then you guarantee that you are selecting for people who are not speaking their true minds.
(And before you ask, I don't have a solution. I think the whole situation might be fundamentally unsolvable, sorry!)
There is, unfortunately, no single recipe for truth. Even the scientific method, the consensus of experiments, is prone to misinterpretation, faulty process, bad mechanism, etc. I don’t believe you could ever write something down and say “Yes, I definitely have it now. This is the one and only thing we need to do.” You can only move yourself in slow circles toward a fire you can’t ever capture or hold.
I think the best we have is that we incentivize groups to stand up for what they believe in and produce a legible argument that is understandable to many different people from different walks of life. Social media today accidentally incentivizes the mob, and highlights interest. But that sort of calming, "ah okay" feeling comes when things are a bit slower and the argument is made broad. So you are correct, that's what I want to do. However, the argument has to actually survive contact with other groups, so there's a selection effect there.
My philosophical position is that over time, the only arguments that can withstand that kind of bludgeoning are arguments which contain truth.
I've been thinking about something along these lines for a while now.
The main goal of my (highly incomplete) thoughts is providing a signal of authenticity. This is going to become increasingly important as social media fills further with slop. It's potentially simpler than line-by-line iterative reviews: just a single score based on whether people in my trusted network vouch for it or not. This can work for articles, sites, images, video, etc. Potentially, this could be a rare non-scam use of blockchain (a la https://vitalik.eth.limo/general/2023/07/24/biometric.html).
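If it helps make the single-score idea concrete, here is a minimal sketch of how a trusted-network vouch score might be computed. Everything here (function name, the weighting scheme, the example users) is an illustrative assumption, not a spec:

```python
# Hypothetical sketch: one piece of content gets a single authenticity
# score based on vouches (+1) or flags (-1) from accounts in my trust
# network. Trust weights and user names are invented for illustration.

def authenticity_score(vouches, trust_weights):
    """vouches: dict of user -> +1 (vouch) or -1 (flag).
    trust_weights: dict of user -> how much I trust them (0..1).
    Returns a trust-weighted score in [-1, 1]; strangers are ignored."""
    total, weight = 0.0, 0.0
    for user, vote in vouches.items():
        w = trust_weights.get(user, 0.0)  # unknown users carry no weight
        total += w * vote
        weight += w
    return total / weight if weight else 0.0

score = authenticity_score(
    {"alice": +1, "bob": +1, "mallory": -1, "stranger": -1},
    {"alice": 0.9, "bob": 0.6, "mallory": 0.2},
)
```

One design note: ignoring accounts outside the trust network (rather than averaging them in at low weight) is exactly what makes the score slop-resistant, since a flood of bot votes contributes zero weight.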
Neal Stephenson has a fascinating subplot in "Fall, or Dodge in Hell" where social media becomes so flooded with conspiracy theories that people pay other people to filter their feeds for them. Hopefully we can do this in a distributed way without creating a new class of service jobs subject to cost disease.
I think of it as a shared window to reality for people who think alike and will certainly need a slop filter. The active review piece is more for the sake of news itself so there’s always a human in the loop. Basically I want a virtuous cycle that refines communities across time.
I'd sign up to help develop this. I've had some unformed musings about reviews tagged with "faction perspective", where if you tag lots of things as being of one faction that other people also do, your reputation for being honest w.r.t that faction grows in Hebbian fashion. Also factions are hierarchical, so if enough people in "Independent Baptists" and "Antiochian Orthodox" et al tag something, it bubbles up to "this is a general Christian idea".
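A toy version of that "Hebbian" faction-reputation idea might look like the sketch below. The update rule, learning rate, and faction names are assumptions for illustration only, not a proposal for the real scoring math:

```python
# Sketch: each time my tag of an item agrees with how a faction's other
# members tagged it, my reputation for that faction strengthens slightly
# (Hebbian: repeated co-activation reinforces the link). Constants and
# names are illustrative assumptions.

from collections import defaultdict

class FactionReputation:
    def __init__(self, learning_rate=0.1):
        self.rep = defaultdict(float)  # faction -> my reputation (0..1)
        self.lr = learning_rate

    def observe(self, faction, my_tag, consensus_tag):
        """Move reputation toward 1 when my tag matches the faction's
        consensus, and toward 0 when it doesn't."""
        target = 1.0 if my_tag == consensus_tag else 0.0
        self.rep[faction] += self.lr * (target - self.rep[faction])

rep = FactionReputation()
for _ in range(20):
    rep.observe("Independent Baptists", "christian-idea", "christian-idea")
```

The hierarchical part (bubbling up from "Independent Baptists" to "general Christian idea") could then be a threshold over the child factions' agreement, but that is left out here to keep the sketch small.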
Going to throw out some links for everyone to get together on this next week.
What do you think of slashdot moderation?
Reading on it just now and I would say that I’m a huge fan.
> “People will just use this to make propaganda for their own side.”
contains the seed of a significant alternate funding approach for those who can walk a narrow path...
I still like this a ton ..
Thank you Thomas
.. ‘eat the news.. or did i see that elsewhere ?
i like this ‘idea .. a la ‘bellingcat .. altruism/journalism ..
.. in some ways a kinda gonzo boomerang ‘grim reality ..
well deserved ..
for Those ‘broadcasting virally while stark naked el flagrant delecto..
.. & when caught / trapped.. the more th yellow ‘churnslism struggles
& th more it ‘guts itself.. & begins to ‘caricature itself .. full kabuki 🦎🏴☠️🎬
Chat GPT, make it longer!
The funny thing is that one of the sections I cut is exactly the one that Dr. K raised as being the primary problem.
I’m not being autistic *enough* it feels like.
There is a fundamental problem here, illustrated by the recent covid debacle. The people who were correct/telling the truth were NOT those who were promoted, were official sources, whatever. The vast majority of all communication about covid, which was most/all of what was heard by the vast majority of people, was a lie. But as Goebbels said years ago, repeat a lie often enough and people believe it is the truth.
Your idea has all of the same problems as generative AI, which is why it is so poor at discerning real truth. Correlations and preponderances may, but often do not, define truth. Everyone may agree that masks work because that is what the government (pathological liars) and endless well-paid lying scientists say about it all. The fact that there was already a giant literature, based on influenza, at the start of the scamdemic showing that masks were of no value was available to all and was promoted by some. But they were vilified, and the trust network you espouse here would be overwhelmed by falsehood.
Yes, four or five years later one could see who was right. But the entire house of cards would have collapsed by that time because the trust determination cycle just cannot wait that long. I have been in the middle of this and have applied your approach to the real actors and the real populations as I see them, and I do not see it working like you do. There is too heavy a thumb on the important issues for truly trustable sources to arise and flourish early and widely enough for the system to work.
One man's analysis in any case.
I thought I gestured at my answer to this up above but would you be okay letting me know what you think the appropriate answer to this would be? I can lay out my answer in more detail if you want as well.
GAB already tried a news-critique plugin for web browsers; it failed, and they are far better known than you, shrug.
Say more? I know they tried content moderation in real time but it seemed like they didn’t have very good incentives to make it actually happen.
They made a plugin for web browsers that allowed you to comment on news articles, even if the news site didn't have a comments section. I think like 12 people downloaded it, i.e. it totally flopped.
That’s good to know. Well, I have some thoughts on that. Would it make you feel better about the plausibility of all of this if I started with some much less ambitious day one plan for how this would work? Like how you would take the minimum viable product to the full and mature state?
.. a sample ‘graphic.. portraying how your idea ‘might appear in the flesh..
might be doable & informative.. mebbe a ‘sample News Headline screengrab & by comparison the ‘related ‘factuals .. the sense of uh.. ‘that Headline is pure horseshit raving
This is a good idea and I shall do it.
This is what Brave AI has to say about what happened with GAB's project.
"Based on the provided search results, it appears that the Gab Dissenter plugin is still available for download and use. However, it seems that the project has been abandoned, and the plugin is no longer actively maintained or updated.
Reasons for Abandonment
According to the search results, the plugin was removed from the extension stores of both Mozilla and Google Chrome due to violating their acceptable use policies. Additionally, the project seems to have been sabotaged, with the notification bell being removed, making it difficult for users to communicate effectively.
Current Availability
Although the plugin is no longer available in the extension stores, it can still be downloaded from the official Gab website. However, it’s essential to note that the plugin may not be compatible with the latest browser versions or may have security vulnerabilities due to the lack of updates.
Conclusion
In summary, the Gab Dissenter plugin still exists, but it is no longer actively maintained or updated. Users can still download and use the plugin, but they should be aware of the potential risks and limitations associated with using an abandoned project."
I know it probably feels like I’m not engaging with this strongly enough. I’m a Product Owner in my day to day life.
If I were to launch a fully functional version of this today, I know I would fail. It wouldn’t have the support. There’d have to be a whole strategy, outside of just the technology, to get people to engage with the vision.
And then I’d need to do a lot of work to keep those people engaged and make sure the code has the proper stewardship.
In looking at GAB, I’m guessing they didn’t message this in a way where it would succeed.
For instance, calling it Dissenter. Or marketing it to the right wing versus marketing it as a neutral truth-seeking utility. They probably didn't have a choice, and I'd still hug them for trying because I don't know if their roadmap was like mine, but without a bunch of spergs ready to go and a bunch of people getting carried along as ready-made readers, it was bound to fail.
But if you get the flywheel moving? If you get that critical mass? This changes the world.
"Product owner?"
Sorry but people don’t seem to be interested in this, shrug.
https://bigleaguepolitics.com/gab-is-creating-a-comment-section-on-every-website-that-could-change-the-internet-forever/
Step one: we’ll get them interested.
Ouch! I like you, but this idea of yours, not at all! First, what you're describing already exists; it's called modern society. So in this sense your idea is completely unnecessary. But for argument's sake, let's pretend your system of vetting added something new and needed. Let me point out just how dystopian this would be. Please recognize that you're calling for a system that trims off any ideological idiosyncrasies in the name of truth. Hmm, where have I heard that before? In just about every fascist regime ever developed. You are, apparently, well intentioned but sadly naive, failing to see that the leveling you dream of would only mute innovation (at best) and silence rebellion against current questionable norms (more likely), all in the name of more civil online discourse. Much better, IMHO, is to risk the rough and tumble of the real world and allow reputations to do what they will (albeit sometimes unfairly). Over time we mostly get it right. But I do not want a system that enforces current IN MODE thinking to lock down how far a new idea can travel.
Appreciate the feedback. Can you give me an example of how that would play out in practice?
Copernicus blogs about how, actually, he finds that the earth revolves around the sun, that the earth is not at the center. Your system strikes his article down because only cranks agree with him. So the Dark Ages last longer thanks to your system that rewards moderation (sameness) of opinion about truth.
If I may,
1. Copernicus blogs about how the earth revolves around the sun. He points to several factors that support this, including the apparent retrograde motion of Mercury.
2. His article is reviewed by random people in his own group, who have no particular affiliation. (People keep forgetting this part, and respectfully, I would challenge you to really integrate this piece into your thinking)
3. Copernicus knew this would happen, so he constrained his thinking specifically to only arguments he could support, like the apparent retrograde motion of Mercury. Perhaps he even proposes some experiment and evidence.
4. Maybe the group really does strike him down for being too heretical (what already actually happened, in other words, meaning the system did not introduce any new net negative consequence)
5. Other groups review it, outside of his group, because he presses it forward. Maybe some, but not the majority, risk their reputational standing to support it. There will always be groups that need to take risks like this, because they need to take a chance on outliers in order to move up in the rankings and share their views more broadly.
6. Copernicus forms a group of sympathetic supporters. They all begin to flesh out this idea. It percolates within that group and has a place to grow.
7. It goes back for review, substantially the same, but now the tides have changed. The original group still rejects it, but the majority of groups believe it to be correct. So now the group that rejected it loses reputation at a compounding rate, because there is a bonus scoring mechanism for people who stand by truth even after being initially rejected.
8. This becomes valuable enough that people have to perform experiments.
9. It turns out the Earth really does revolve around the sun instead of the other way around.
There’s no such thing in this system as total defeat. There’s just an economizing of ideas.
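The compounding bonus-and-penalty step in the walkthrough above could be scored many ways; here is one minimal sketch. The function name, the rate, and the group labels are all illustrative assumptions, not the actual mechanism:

```python
# Sketch of "bonus scoring for standing by truth after rejection":
# once the majority flips to accept a claim, early supporters gain
# reputation at a compounding rate per round the claim spent rejected,
# and the groups that rejected it lose at the mirrored rate.
# The 5% rate and the example groups are invented for illustration.

def settle_reputation(groups, rounds_rejected, rate=0.05):
    """groups: dict of group -> 'supported' or 'rejected'.
    rounds_rejected: review rounds the claim spent rejected before
    the majority accepted it. Returns reputation multipliers."""
    compounding = (1 + rate) ** rounds_rejected
    return {
        group: compounding if stance == "supported" else 1 / compounding
        for group, stance in groups.items()
    }

multipliers = settle_reputation(
    {"copernicans": "supported", "original_reviewers": "rejected"},
    rounds_rejected=3,
)
```

The point of the compounding shape is the incentive it creates: the longer a group holds out against something that turns out to be true, the more it pays, so there is always a payoff for some group to take a chance on outliers early.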
Well, first thank you for the thoughtful reply. I feel your passion. But respectfully, what you've done is simply describe how the world works. Ergo, peace, no need for a new system (except by nefarious control fascists no one wants). But really, I like you. So we can amicably disagree.
There are a lot of gullible people in the world who would use a nudge from their smartest friends (all opted into willingly). If I can't make a system that feels like going car shopping with your mechanic dad and talking to the finance people with your mom who works at a bank, I will consider it a failure.
This is a fascinating idea and I'll be following along and signal boosting.
One problem I can see is that certain true information is now considered harmful. For example, my local news stations no longer report the race of criminal suspects or victims. With your software stack, people would likely start adding this missing information. Your extension would then be reported to Google/Firefox for hate speech by those media companies and likely removed. In some countries, people might actually be prosecuted for posting information. It would be the same situation that people faced on Twitter before Elon Musk bought it, where countless people were banned for saying true things that were considered harmful.