© 2019 Yup.io Inc.

Nir Kabessa

On Online Social Consensus

A treatise on crypto social and the motivations for the Yup Protocol

With help from the whole Yup team, and revisions from Sam Hatem

The Quest for Social Consensus

Since the dawn of the internet, there has been an ambitious desire for open, public protocols for social networks. Yet distributed social consensus on the web has remained highly regarded but rarely understood. It has never been valued against well-defined incentives and, as a consequence, remains widely misrepresented and opaque. Online engagement metrics such as reach and follower counts attempt to attach numbers to this force, but they measure quantity rather than quality, making them easily manipulable and untrustworthy. These metrics fail to capture social capital accurately, leaving social networks vulnerable to weak governance, sybil identities, and the spread of disinformation, all of which harm digital communities and trust. Under this system, users have little economic incentive to express genuine opinions on content or on one another, and those opinions are weighted evenly regardless of each user's own social capital. As our online presence consumes more of our daily lives, our society will be defined by our ability to solve these new social problems on the internet. In our eyes, the quest for online social consensus is a continuous quest for a better humanity in the digital age.

Recently, due to technological improvements to distributed systems and cryptography, the world’s social media leaders have begun to imagine similar designs enabled by blockchain tech. Industry titans such as Jack Dorsey (Twitter) and Mark Zuckerberg (Facebook) have started research and development towards these exact ends. On the crypto-economic side of these developments, new innovators have emerged who are attempting to tackle problems around collective decision-making and DAO-based identity systems. Thought leaders in this field have realized that a better method of allocating the value accrued on online platforms is needed and could be achieved by incentivizing participants at every scale for their online actions, directly monetizing the content they create, rate, or proliferate.

In tandem with issues surrounding online social capital, the current centralization of most networks entitles operators to rent-seek and sway user behavior, all while hoarding user data that could otherwise provide an enhanced experience and new streams of income for individuals.

In an attempt to solve these pitfalls of the web, a group of students at Columbia University built Yup, a protocol that facilitates the distribution, measurement, and governance of online social capital in a pseudonymous yet transparent opinion-based economy. That's a technical way of saying that Yup is respect: a movement to represent, reward, and empower respect across the internet, with content primed to earn users' approval, not attention.

Yup offers a fair weighting of opinions, a single universal reputation/identity that can be used on all sites without competing with existing business models, a native and scarce asset to facilitate monetization or attention markets, and an immutable record of reviews and reputations. Let us explore the shortcomings in existing centralized and decentralized systems, point to promising solutions, and propose a new protocol governed by social capital.


Identity in the Digital Realm

What does it mean to be a valuable user online? Is this reserved for individual humans? Must they be legally recognized? In Authenticity in the Age of Digital Companions, famed sociologist Sherry Turkle argues that "by the mid-90s, humans were not alone as emotional machines." She also points to experiments in which kids quickly treated robots as companions deserving of compassion, even after the robots had been scientifically demystified. This type of interaction between bots and people will only increase over time, she suggests.

One of our favorite Instagram users, @lilmiquela, isn’t real in the human sense. But that hasn’t stopped her from becoming a superstar. Miquela is a self-described robot led by the Sequoia-backed media company Brud. She’s cleverly placed alongside “real” people to appear life-like. This Gorillaz-esque approach to identity is starting to catch on, as more and more Miquela wannabes sprout up.

Block.One, the blockchain development company behind EOS and Steem, has a new decentralized social media project called Voice, announced last summer. Voice takes a different angle on the identity issue, essentially requiring every user who signs up to legally identify themselves with documents such as a driver's license or passport. Dan Larimer and others at B1 cite responsibility as their reason, wishing to hold users accountable for their interactions. It remains to be seen how Voice plans to decentralize this process over time so that the network does not remain inherently permissioned. Identity DAOs like Humanity DAO and Democracy Earth have tried to crowd-source identity, building economic incentives around the recognition of humans via mechanisms that resemble universal basic income. But both of these approaches lack room for the lil' Miquelas of the world. The social networks of the future will still need to account for individuals, groups, teams, companies, communities, or even algorithms being represented by a single identity. A protocol shouldn't by design limit users to one account tied to their real identity; furthermore, accounts shouldn't be limited to one person, or even to a human.

In a different piece from 2008, Sherry Turkle argues that “Online worlds can provide valuable spaces for identity play.” This is clearly seen by the emergence of ‘finstas’ (fake IG accounts) and group IG accounts, which represent a fundamental shift in the ‘roles’ online users take during interactions. In this sense, it becomes very difficult to properly attribute authenticity or inauthenticity to any one user. So as the lines between human vs. robot, person vs. group, as well as person vs. themselves begin to fade, what does ‘authenticity’ really mean?

It's all about social value. A username or identity is only as 'real' as we allow it to be. If it matters to us as a society, it is in fact authentic, whether or not it owns a heart, four hearts, an XY chromosome, or a CPU. For example, Satoshi Nakamoto is the still-secret pseudonym used by the creator(s) of Bitcoin. Yet today, it doesn't really matter who Satoshi Nakamoto is or was, but what they represent and what kind of social capital their identity holds. The only information known about Satoshi is their digital footprint and social approval, and so that becomes their identity, the role they play, and the context in which one would interact with him/her/them. If Satoshi were found to be a robot, it would not change the power and reputation of their identity. Nothing but a private key signature would be necessary for them to take advantage of all of that built-up social capital. So, if social value is crucial in determining identity online, humans need signal vehicles to properly perform in this new environment.

Data Autonomy

For the majority of web2 platforms, user data has been a relatively zero-sum game. Every platform is incentivized to hoard data, hurting the user's experience. For example, when you log in to Tinder via Facebook OAuth, Facebook Inc. chooses what to share with Tinder and what to withhold. While FB may permit you to share some information with Tinder, it hides the majority of your data and metadata because that data is extremely valuable and diminishes in value as access to it increases. This is despite the fact that the data could enable Tinder to provide you with a better experience. You as a user have no real say in the matter and, as a result, are the biggest loser in this zero-sum game.

Since its inception, the world wide web has lacked solutions to this equilibrium, but distributed ledger technology has given rise to new projects and products aiming to disrupt this data competition and give users real ownership over their data. Sir Tim Berners-Lee, the creator of the world wide web, has been working on a new project called Solid that is focused on solving this problem. By way of the Solid "POD", a storage locker controlled by private keys with different permissions, users can decide where to store their data. Since users have immutable ownership over their data, they're free to move it at any time, choosing who has access to it without interruption of service. To write data as well as read it, and to protect privacy, it is important to control who has access to what; in Solid's case this is managed using the Web Access Control specification and the WebID identity spec.
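The POD model can be pictured as a permission table that the owner alone controls. The sketch below is our own illustration of the idea, not the actual Solid or Web Access Control API; the class names, WebIDs, and permission modes are invented for the example:

```python
# Hypothetical sketch of a Solid-style POD access check: each resource
# carries an ACL mapping WebIDs to permission modes, and only an agent
# with "control" permission may change who sees what.

class Pod:
    def __init__(self, owner):
        self.owner = owner
        self.resources = {}   # path -> data
        self.acl = {}         # path -> {webid: set of modes}

    def put(self, path, data):
        self.resources[path] = data
        # By default only the owner holds full control over the resource.
        self.acl[path] = {self.owner: {"read", "write", "control"}}

    def grant(self, requester, path, webid, modes):
        # Only an agent with "control" permission may change the ACL.
        if "control" not in self.acl.get(path, {}).get(requester, set()):
            raise PermissionError("no control access")
        self.acl[path].setdefault(webid, set()).update(modes)

    def read(self, requester, path):
        if "read" in self.acl.get(path, {}).get(requester, set()):
            return self.resources[path]
        raise PermissionError("no read access")

pod = Pod(owner="https://alice.example/#me")
pod.put("/profile", {"name": "Alice"})
# Alice, not the platform, decides that the app may read her profile.
pod.grant("https://alice.example/#me", "/profile",
          "https://app.example/#agent", {"read"})
assert pod.read("https://app.example/#agent", "/profile")["name"] == "Alice"
```

The key design point this illustrates: the grant lives with the data and its owner, so moving the POD elsewhere carries the permissions along rather than leaving them with any one platform.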

Other teams are building similar products. OrbitDB, for example, aims at similar peer-to-peer database capabilities, functioning as a serverless storage locker built on IPFS data storage and Pubsub. 3Box is building a comparable framework for Ethereum applications. Smart contract platform Blockstack has prioritized these features, adding OAuth log-in and data locker capabilities to its base layer protocol. By way of a universal identity layer, Yup provides similar functionality to users, focusing on qualitative metadata as users' most lucrative asset.


Online attention is a key ingredient in harvesting influence. Several entities have capitalized on this notion, trading attention as a commodity for product sales and branding; it has also given power and voice to advocacy. Most social media platforms remain economically viable by reselling whatever attention they capture in exchange for their 'free' content.

Content creators on top social media platforms have little direct way to monetize their fans' attention or get a share of ad revenue. In response, they have resorted to grossly ineffective workarounds. The result is an economy that disproportionately rewards a handful of centralized social media companies instead of the content creators who give those platforms value. Members of thriving online communities on popular platforms currently have no say in the development of the sites they helped grow; the length and extent of their engagement goes unrecognized. Additionally, there is no direct relationship between user demand and sponsored content or monetization, which stunts influencers' ability to build loyalty around an account or channel. There are no real incentives to support quality content and discourage poor behavior.

Upon examining the makeup of existing centralized social networks, we conclude that social capital is misrepresented and easily purchased, and that the monetization of attention is one-sided or separate from the network itself. Yup prioritizes respect instead of attention: the user is the customer, not the product. Our focus isn't on growing engagement but on curating content that people actually enjoy.

The Monetization of Attention

There are a handful of distributed ledgers, platforms, and applications being built to decentralize social networks and advertising. Steem, a decentralized social network blockchain, mints new tokens and rewards content creators for their virality. To mitigate sybil attacks, the Steem network must impose barriers on account creation that hurt onboarding and thus the community at large. This is quite inefficient and counts on user error to buoy the price of STEEM, giving a strong advantage to profit-driven automated users.

Similarly, voting bots plague the Steem network to such an extent that the community accepts them as part of the ecosystem. Users can pay bots to curate their content or to vote on their behalf for rewards, and can earn more tokens by proxy-staking to bots than by actually participating in the network. As a result, users are incentivized to be less engaged and simply proxy-stake their holdings to maximize returns. This new form of advertising, accepted by many protocol communities, brings additional issues: weak incentives to support channels and accounts, a direct relationship between token holdings and network influence that ignores user activity and length of involvement, unannounced shutdowns of the blockchain for updates, and rampant misinformation.

New models with similar designs have risen in the last few years. Originally built on the Steem blockchain, DLive, an up-and-coming project that has partnered with celebrities like PewDiePie, began to gain traction in spring of 2019. It transitioned from Steem to Lino, a blockchain that similarly rewards content creators and curators through an inflation-based rewards system but adds donations (see the Tipping section below). One of the key incentives for DLive's chain transition was likely Lino's promise of 70% of token rewards going directly to apps and developers. Creators and curators of content, on the other hand, are expected to make most of their money from donations. Unfortunately, this may hurt Lino in the long run as ecosystems and communities become too siloed for that kind of rewards system. This already seems to be the case, with Tron reportedly acquiring DLive in late December '19 and looking to transition the app away from Lino to its own blockchain. Furthermore, the Lino blockchain defines reputation solely by money spent, leading to plutocracies and inaccurate reputation schemes.



Advertising is currently the primary form of online content monetization, often considered the 'original sin' of the internet. While most of us hate ads, our favorite sites and platforms wouldn't exist without them. Still, it would be nice if we as users received a piece of this ad revenue, both for contributing content and for engaging with ads. Brave, the company that maintains the Basic Attention Token (BAT), has tried to compete with Google Ads on this front, building its own protocol and browser that pays users for viewing ads and lets them tip the sites they visit, the age-old way of economizing content. However, Brave similarly fails to attribute value to social capital, counting every individual's ad views equally despite differences in advertising demand: a billionaire and I stand to make the same money on Brave even though advertisers value their attention far more. Tipping usually refers to a feature that lets consumers send money to a creator; there are good mainstream and crypto examples, including YouTube, Patreon, Cent, Peepeth, and Tippin.Me.


Every social platform must deal with the well-documented tension between censorship and free speech. If censorship is too limited, inappropriate and sensationalist content quickly surfaces; if it is applied too liberally, speech suppression is inevitable. This tension tends to produce family-friendly communities on one side of the spectrum and censorship-resistant 'explicit' ones on the other. A good example is Gab, a social network that prioritizes low censorship; today, Gab has a reputation as a hotbed for white supremacists and other questionable actors. Finding a goldilocks middle ground between the two is incredibly difficult and requires distributed consensus. The community at large will always know what's better for the network than the headquarters of a centralized entity, but it will be much slower and less efficient in executing that vision.

One of the big advantages of decentralized social networks is that they're often more censorship-resistant. One project that aims to leverage this advantage is AKASHA ('Advanced Knowledge Architecture for Social Human Advocacy'), a decentralized social media platform founded in 2015, built on the Ethereum blockchain and embedded into the InterPlanetary File System (IPFS). Because it has no central point of failure or bandwidth bottleneck, the platform is resistant to censorship and cannot easily be taken offline.

Opinions & Social Capital

The opinions of others are vital to the Internet economy; it is the core component upon which value metrics are built. This is most clear in online product/service reviews. A highly-cited Harvard Business School study from 2011 estimated that a one-star rating increase on Yelp translated to an increase of 5% to 9% in revenues for a restaurant. Cornell researchers found that a one-star change in a hotel’s online ratings on sites like Travelocity and TripAdvisor is tied to an 11% sway in room rates, on average.

The internet’s peer-to-peer nature is a huge reason for this, as opinions have become more accessible. Average user rating has become a huge driver of sales, and now, user-generated ratings and narrative reviews can be found on almost every website that sells something.

The results of a 2017 study by Podium show that 93% of American consumers say online reviews have an impact on their purchasing decisions, and 91% of 18-34 year old consumers trust online reviews as much as personal recommendations. The star rating is currently the number one factor consumers use to judge a business.

However, treating these social metrics as highly accurate does not necessarily make them so. University of Colorado Boulder professors Langhe, Fernbach, and Lichtenstein investigated the actual and perceived validity of online user ratings and found a "substantial disconnect between the objective quality information that online user ratings actually convey and the extent to which consumers trust them as indicators of objective quality." They analyzed 344,157 Amazon ratings of 1,272 products in 120 product categories and compared them to ratings in Consumer Reports as well as resale value. They conclude that average user ratings correlate poorly with Consumer Reports scores and that, while Consumer Reports scores correctly predict resale value, user ratings do not. This is because:

When reviewers are vetted and paid to review products (as is the case in Consumer Reports), their opinions are usually a stronger indication of quality. Without a mechanism to ensure this, reviews lose their meaning.

Additionally, Langhe et al. argue that "consumers fail to consider these issues appropriately when forming quality inferences from user ratings and other observable cues. They place enormous weight on the average user rating as an indicator of objective quality compared to other cues. They also fail to moderate their reliance on the average user rating when sample size is insufficient." In other words, the average user lacks both the data and know-how to make accurate inferences on the quality of products based on user ratings.

Despite both the high value attributed to user opinion and the importance of monetizing it for an accurate representation of quality, our own research suggests that less than 0.001% of online opinion is monetized today. Without any incentive to review honestly, users tend to express negative reviews more often and more extremely, with little to no reason to contribute positive reviews. This lack of monetization also tips the balance for malicious actors looking to manipulate their perceived quality: the incentive to post a false opinion for a malicious actor's money outweighs the incentive to review honestly. Fakespot, a ratings analytics tool, estimates that 52 percent of Walmart reviews and 30 percent of Amazon reviews are fake or unreliable.

Misaligned Incentives

There remains a fundamental misalignment between the incentives of the platform and its users in terms of user-submitted opinions. Platforms gain from the influx of data and do everything they can to ensure that the input of data is smooth, regardless of accuracy. Users have no incentive to review accurately and therefore act on external interests. Crucially, the whole system is flawed because it functions independently of real monetization for all parties involved.

So why would people give honest opinions without any incentive? Largely, they don't: we witness 21% more negative reviews than positive ones because, with no other incentives involved, anger and frustration are the driving forces behind reviewing at all. Yup is designed to solve this problem by rewarding accurate opinions monetarily. Limiting the number of actions a user can take is also important in order to establish scarcity.

"More people are depending on reviews for what to buy and where to go, so the incentives for faking are getting bigger… It's a very cheap way of marketing." (Bing Liu, Professor at the University of Illinois)

In the internet age, reviews have very much become a form of marketing and have evolved into an enormous business: reputation management. Companies need reputations, and reputations come from people, unless you can buy them for less than the cost of an ad campaign.

The Consequences of Access

We expect that the changes in access brought about by the rise of the internet have had a significant effect on the trust in, and accuracy of, opinions. Access here refers both to the ability for anyone to rate products, services, and content, and to the ease with which one can see more ratings online.

When online review forums first emerged, they provided increasingly accurate opinions as the "wisdom of the crowd" effect took hold. In The Wisdom of Crowds, famed journalist James Surowiecki discusses our bad intuitions around individual vs. group decision-making. These initial communities were small, and natural reputation effects existed among members because many knew each other by name. As misconceptions about the internet dissipated, trust in these reviews grew because they were in fact more accurate. But as access continued to increase and platforms changed, the natural reputation effects shrank. This decreased the reputation at stake for each user and allowed fake reviews to thrive, reducing accuracy significantly. Now, there is still a correlation between accuracy and trust: more accuracy leads to more trust, but more trust doesn't necessarily lead to more accuracy. In fact, the more trust users place in online reviews, the more worthwhile they become to fake or manipulate, creating a harmful cycle: more accuracy → more trust → less accuracy → less trust → more accuracy … I discuss this further in my piece "Opinion Economy".

While public metrics of social approval are not the only way to attribute this form of social capital to an identity, they are a very important one, so removing those metrics is to the detriment of establishing internet identities. Obscuring relative signaling by removing public likes, as Instagram plans to do, can be massively harmful to the structure of society because it takes that information (and therefore power) away from viewers.


The notion that all opinions are created equal is a large reason opinions online are inaccurate.

People like to trust brands and institutions as experts on a given subject. In reality, however, they rarely are the experts we expect them to be. Finding experts is a skill of its own, one that can be flawed and time-consuming; as sources, statistics, data points, and faux expertise multiply, it becomes increasingly difficult to spot real experts. Surowiecki expands on this point: "The problem is that even if these superior beings do exist, there is no easy way to identify them." This is also true for the experts themselves, who may not be aware that they are experts. In fact, those within a crowd who project themselves as experts rarely are.

Claiming individual humans know less than they’ve ever known, Yuval Noah Harari breaks down the impact of power and technology on our individual knowledge in his book 21 Lessons for the 21st Century. He explains that humans are even less aware of their cognitive blindspots today than ever before. Because Google gives us an answer to any question, we think we’re experts and have a difficult time identifying our skills and shortcomings.

Roll seeks to tokenize influencers, aligning curators with the growth of specific creators and influencers rather than with that of the network. The financial success of curators here is determined by the speculative trading of influencer assets. This approach has serious enforcement problems, but it surfaces key insights about online influence: it offers a way to identify experts, measured not by credentials but by influence, expertise, and attention.

Yup attempts to prioritize reputation-based social consensus over brands and institutions. When conceiving of Yup, we used advances in distributed systems to explore new ways to (1) improve the user's ability to identify inaccuracies, (2) deter the manipulation and misuse of reviews, and (3) correctly weight reviewers' expertise on certain topics. With these in place, communities could approach a tight correlation between changes in trust and changes in accuracy. Yup uses layered social-level consensus to build sustainable governance of its network and to provide proper representation of network value. This orders participants by their recognized social level: every address on the network has a social level as a measurement of others' opinions of it, determined by all other addresses on the network.
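One minimal way to picture this kind of consensus is to let an address's level be the level-weighted average of the ratings other addresses assign it, iterated until the levels stabilize. This is our own illustrative sketch, not Yup's actual algorithm, and the ratings are invented:

```python
# Illustrative sketch of layered social-level consensus: each address's
# level is the influence-weighted average of the ratings other addresses
# give it, recomputed until the values settle.

def social_levels(ratings, iterations=50):
    """ratings[a][b] = a's rating of b, in [0, 1]."""
    addresses = set(ratings) | {b for r in ratings.values() for b in r}
    level = {a: 1.0 for a in addresses}   # start everyone equal
    for _ in range(iterations):
        new = {}
        for b in addresses:
            num = den = 0.0
            for a, r in ratings.items():
                if b in r:
                    num += level[a] * r[b]   # weight a's view by a's level
                    den += level[a]
            new[b] = num / den if den else level[b]
        level = new
    return level

ratings = {
    "alice": {"bob": 0.9, "carol": 0.2},
    "bob":   {"alice": 0.8, "carol": 0.3},
    "carol": {"alice": 0.7, "bob": 0.6},
}
levels = social_levels(ratings)
assert levels["bob"] > levels["carol"]   # the community rates bob higher
```

The important property is circularity: carol's low level means her ratings of others count for less, so an address cannot cheaply inflate a friend's standing without first earning standing itself.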


Testimonials have long served as signal vehicles for identifying the quality or reputation of a product or service. Recently, as expected, individuals have found ways to manipulate and mislead with testimonials as well. The most humorous example of false testimonials we've come across is the one highlighted by YouTuber Coffeezilla in his Fake Guru video on Sam Ovens ('fake gurus' are online personalities who falsely sell themselves as business gurus). In the video, Coffeezilla discovers that Sam Ovens' top testimonial, Dave Rogenmoser, had been making seemingly fake testimonials for dozens of 'fake gurus' over a two-year span and may have been a fake guru himself. This epitomizes both the weight viewers give to online testimonials and their potential for misuse and manipulation.

Attempts at Reputation

Sites whose main function is reviews have taken stronger actions to represent reputation in opinions. Yelp attempts to show a user's reputation by reflecting the votes others have given on their reviews (along with some other stats). The advantage of this portrayal is that it lets users vet reviewers as 'legitimate,' as mostly negative or positive, or as tending to act a certain way. The main problem with these metrics is that they don't convey expertise: if a user is funny, how does that make them a better reviewer of bars? Sure, if you dig deep enough, you can start to infer things, say, that a reviewer's past opinions on car mechanics are strongly supported by the community, or that their reviews of hardware stores are thought to be useless. But the search costs of doing this for every reviewer are far too high right now.

Lastly, this representation of reputation doesn't affect the weighting of stars for a business: you could have a good reputation on Yelp and still have the same influence over average ratings as any other user.

By making everyone equal, no one can be discovered as an expert. What we must realize is that opinions aren't democracies; we shouldn't all get an equal say on the subjective quality of products, venues, or anything else. What we need is more of a meritocracy, because some of us have earned the right to speak on certain topics with authority, not necessarily through a college degree or an award, but perhaps simply through the continued approval of the online community. The one-person-one-vote model doesn't make sense for diverse decision-making because it doesn't assign accurate weight to the expertise of various participants.
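The difference between one-person-one-vote and merit-weighted opinion can be made concrete with a toy example; the star ratings and reputation scores below are invented for illustration:

```python
# Toy comparison: the same five star ratings averaged one-person-one-vote
# versus weighted by each rater's earned reputation in the relevant topic.

def plain_average(ratings):
    # every vote counts equally, regardless of who cast it
    return sum(stars for stars, _ in ratings) / len(ratings)

def weighted_average(ratings):
    # each vote counts in proportion to the rater's reputation
    total = sum(rep for _, rep in ratings)
    return sum(stars * rep for stars, rep in ratings) / total

# (stars, topic reputation): three throwaway accounts give 5 stars,
# two well-reputed reviewers give 2.
ratings = [(5, 0.1), (5, 0.1), (5, 0.1), (2, 3.0), (2, 3.0)]

assert plain_average(ratings) == 3.8
assert round(weighted_average(ratings), 2) == 2.14
```

Under equal weighting, the three sock-puppet five-star votes drag the business to 3.8 stars; under reputation weighting, the trusted reviewers dominate and the score stays near 2, which is the point of a meritocratic model.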

Scarcity and Stake

Strong opinion models should provide major incentives for accurate reviews. If users were rewarded (monetarily or otherwise) for reviewing things, they'd take the time to make better decisions. One way to do this is P2P validation: if other users agree with a user's opinion, either by upvoting it or by submitting a similar review, that user gets rewarded. Yup identifies content and distributes rewards according to the value (influence) of the opinions associated with that content. We define content as any specific data online that users deem worth judging, including but not limited to text, images, videos, locations, accounts, and links. The influence metric is a function of engagement, ownership over time, and reputation. The final important piece in Yup's case is scarcity: users should have a limited number of reviews they can give within a set amount of time. If users stand to gain or lose from their reviews, and they can only give their opinion x times a day, they're less likely to abuse or waste those reviews.
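A minimal sketch of scarcity plus P2P validation follows. The daily limit and agreement quorum are assumed parameters for illustration, not Yup's actual numbers:

```python
# Sketch: each user gets a fixed daily budget of ratings, and a rating
# only pays out once enough peers independently agree with it.

DAILY_LIMIT = 10   # assumed cap on ratings per user per day
QUORUM = 2         # assumed number of agreeing peers before rewards pay out

class RatingPool:
    def __init__(self):
        self.used = {}      # user -> ratings spent today
        self.votes = {}     # content -> list of users who rated it up
        self.rewards = {}   # user -> accrued reward units

    def rate(self, user, content):
        # Scarcity: spending a rating consumes part of a limited budget.
        if self.used.get(user, 0) >= DAILY_LIMIT:
            raise RuntimeError("daily rating budget exhausted")
        self.used[user] = self.used.get(user, 0) + 1
        self.votes.setdefault(content, []).append(user)
        # P2P validation: once a quorum of peers agrees, everyone who
        # rated the content is rewarded.
        if len(self.votes[content]) == QUORUM + 1:
            for v in self.votes[content]:
                self.rewards[v] = self.rewards.get(v, 0) + 1

pool = RatingPool()
pool.rate("alice", "post-1")
pool.rate("bob", "post-1")
pool.rate("carol", "post-1")   # quorum reached; all three are rewarded
assert pool.rewards == {"alice": 1, "bob": 1, "carol": 1}
```

Because ratings are scarce and only pay out when peers later agree, spending one on content nobody else values is a real cost, which is exactly the deterrent discussed next.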

On the opposite side of incentive is deterrence: users should be deterred from submitting inaccurate reviews. This could work in similar fashion, with users losing something for giving reviews that no one agrees with or supports. With Yup, the deterrent is the opportunity cost of giving an opinion elsewhere: users who vote inaccurately miss the chance to see returns.

Similar concepts have been attempted in the blockchain/cryptocurrency space with Token Curated Registries (TCRs): lists of similar products, content, and so on, curated by staking cryptocoins. Staking can be thought of as 'betting' that a product will be liked by others and that they'll also vote for it: if others also bet on it, you receive a token reward; if no one does, you lose your stake. In many of these systems, earned tokens and purchased tokens hold the same value, despite there being no evidence that this is the right way to represent network participation.
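The TCR 'betting' round can be sketched as follows. This is a generic illustration of the mechanism, not any specific project's contract; the acceptance threshold and 10% reward are assumed values:

```python
# Toy token-curated-registry round: backers stake tokens on a candidate
# listing; if total stake reaches a threshold the listing is accepted and
# stakes are returned with a reward, otherwise the stakes are forfeited.

THRESHOLD = 100   # assumed total stake required for acceptance

def settle(stakes, threshold=THRESHOLD):
    """stakes: {backer: tokens staked}. Returns the payout per backer."""
    accepted = sum(stakes.values()) >= threshold
    if accepted:
        # Winners get their stake back plus an assumed 10% reward
        # (integer token units, so the reward is s // 10).
        return {b: s + s // 10 for b, s in stakes.items()}
    return {b: 0 for b in stakes}   # losing stakes are slashed

# Enough collective conviction: the listing is accepted.
assert settle({"alice": 60, "bob": 50}) == {"alice": 66, "bob": 55}
# Not enough support: the lone backer loses the stake.
assert settle({"carol": 30}) == {"carol": 0}
```

Note the property the paragraph criticizes: payouts here depend only on tokens staked, so a purchased stake and an earned one are indistinguishable to the mechanism.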


In Culture and Power: The Sociology of Pierre Bourdieu, David Swartz begins with "Culture provides the very grounds for human communication and interaction; it is also a source of domination." The same could be said of the online behemoths that presently host our culture. While traditional social networks have expanded the size and function of communication, marketing, and organization, their monopolistic hold on the internet market has opened a wide divide between the incentives of their services and those of their user base. While there are forces that compel large networks to comply with certain restrictions, the extent of their hold over human behavior is concerning. For example, in 2010, Facebook ran a stealth experiment on 61,000,000 American accounts during the US congressional elections to see how small messages (banners above the news feed) could affect users' voter turnout and more. The researchers report: "The results show that the messages directly influenced political self-expression, information seeking, and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users' friends, and friends of friends."

Their results suggest that the "Facebook social message increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters." Those 340,000 votes represent about 0.14% of the country's voting-age population in 2010. For context, George Bush beat Al Gore in Florida by 537 votes in 2000. If a similarly tight race occurred today, it wouldn't be hyperbole to assume that Facebook, with 4x the users and multi-billion-dollar acquisitions since 2010, could alter the political landscape of the United States.

Facebook ran the voting experiment in 2010 but only made the results public in 2012, of its own free will. Had it maliciously kept the study internal, the U.S. public and government might never have become aware of it. Second, because of Facebook’s immense network effects, the Cambridge Analytica data breach had little to no impact on its market dominance. Despite exposing over 80 million users’ information to third parties, with trust in Facebook falling by over 50% in the following weeks, daily active users, minutes of usage, and advertising revenue all increased. This suggests not only that giant social networks can hide their manipulation, but also that average users are too network-dependent for their sentiment to be notably reflected in Facebook’s economics. Facebook co-founder Chris Hughes makes this same argument in his famous New York Times piece “It’s Time to Break Up Facebook,” calling for Facebook to be broken into smaller companies. He writes that “Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government.” With Zuckerberg at the helm of Facebook, Instagram, and WhatsApp, Hughes argues that he has too much control. Despite Facebook being a publicly traded company, “Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares,” he writes. “Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.”


The benefits of decentralization have been highlighted thus far in this piece. However, decentralized protocols present their own issues, such as bribery, collusion, and the tragedy of the commons. Just as with their centralized counterparts, if a positive rating is worth more to a marketer than the money lost in achieving it, they will surely try to manipulate the platform. This is entirely possible, considering that users will probably not stake much on any single rating, especially if there aren’t many participants. Except now, policing this behavior must be governed democratically, which can make it more difficult. We have all heard of the PTSD that some Facebook moderators have developed from doing this kind of work.

First Layer Governance

At the consensus layer, there are several inefficiencies that can be improved upon with social capital. Nakamoto consensus uses proof-of-work to cleanly solve several issues in majority decision-making, abandoning the notion of “one-IP-one-vote” for “one-CPU-one-vote”. However, one problem that arises as a result is the power ascribed to outsourced physical capital. Participants can gain more influence over the network by purchasing computational power with money from other economic systems. This means that the relationship between the capital spent to maintain the network and the capital earned for doing so is not entirely internal: 1 kW of electricity purchased with USD has equal power over the Bitcoin network as 1 kW purchased with BTC. PoS and DPoS mechanisms improve on this problem by requiring validators to stake network tokens in order to participate in consensus (‘one-token-one-vote’). Yet staking still does not properly reflect network participation: 1 network token purchased on an exchange provides the same mining power as 1 network token earned via mining. This dependence on physical capital makes networks susceptible to Byzantine behavior from capital-rich outside parties and hinders the first-mover advantages of participating in consensus. The ability to transparently quantify and represent social capital can provide stronger models for decentralized systems. Governed by an influence metric, the Yup protocol aggregates reputations, distributes opinion-based rewards, and facilitates fair ad exchange, resulting in clear, transparent consensus around social capital. It uses social-level consensus to enable anonymous P2P review and a money-resistant reputation scheme. This can be extremely helpful in solving some of the issues around protocol governance, particularly in PoS systems and DAOs.
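The earned-versus-purchased distinction above can be sketched in a few lines. The weighting rule and the `purchased_discount` parameter are hypothetical illustrations, not part of any existing PoS design or of the Yup protocol:

```python
# Hypothetical sketch: weighting consensus power by how stake was acquired.
# The discount rule is an illustrative assumption, not a real mechanism.

def voting_weight(earned, purchased, purchased_discount=0.5):
    """Stake earned through network participation counts in full;
    stake bought on an exchange is discounted, so identical token
    balances need not confer identical consensus power."""
    return earned + purchased_discount * purchased

# Two validators holding 100 tokens each, acquired differently:
w_miner = voting_weight(earned=100, purchased=0)   # full weight
w_whale = voting_weight(earned=0, purchased=100)   # discounted weight
```

Under plain ‘one-token-one-vote’, both validators would be equal; a participation-sensitive weighting like this is one way a protocol could distinguish them.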

The Overweb, A Platform Agnostic Future

Because of the data silos laid out in the previous sections, the open social protocol of the future will likely be platform agnostic and have overweb capabilities. “Overweb” is a term coined to describe software that enhances one’s experience across the web rather than on one specific site or platform. The notion of platform-agnostic protocols is well described by Mike Masnick in his piece "Protocols, not Platforms".

Klout is a good example of an overweb application. Klout was a website and mobile app, launched in 2008, that used social media analytics to rate users according to their online social influence via the "Klout Score". To determine the score, Klout measured the size of a user's social media network and analyzed the content the user created to measure how other users interacted with it.

Klout used Bing, Facebook, Foursquare, Google+, Instagram, LinkedIn, Twitter, YouTube, and Wikipedia data to create Klout user profiles, each assigned a unique "Klout Score". Ultimately, Lithium Technologies, which acquired the site in 2014, closed the service in 2018. The largest reason for Klout’s failure, we propose, was its lack of access to the proprietary data of individual users on those platforms. Since Yup attempts to aggregate influence across the web in a similar way, it is crucial that it exist atop a decentralized protocol such as the Yup protocol; otherwise, it would quickly become difficult to integrate with other sites.

Status at Stake

In a platform where opinions are properly weighted and distributed, reviews would be more accurate, investment in malicious marketing would be deterred, and the distortion of customer reviews would be difficult. Cash would flow toward improving product quality and retaining users, rather than toward bolstering the fraudulent image of a popular, well-liked company. However, accomplishing this in a fair, peer-to-peer manner is difficult without some form of decentralized consensus, which we discuss below.

Data Legibility

"Those who cannot perceive the network cannot act effectively within it, and are powerless." - James Bridle

People often talk about data transparency, but what about data legibility? Even if users have access to enough data on particular products, services, or their reviewers, that information is rarely fully comprehensible to the average user. The easier data is to understand and leverage for better decisions, the more difficult it becomes to manipulate opinions and ratings. For example, the review site Yelp could easily compute a score for each reviewer in different rating categories. A reviewer could be given a high score for their reviews of mechanics and a low score for their reviews of bars, with both shown to users directly on the venue page where the review appears. By doing this, sites like Yelp could put more power into the hands of users with the same data they already display.
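To illustrate, here is a minimal sketch of the per-category reviewer scoring described above. The input format and the averaging rule are hypothetical assumptions; Yelp exposes no such API:

```python
# Sketch of per-category reviewer credibility, as suggested for a site
# like Yelp. Scoring by mean helpfulness per category is a hypothetical
# assumption chosen for illustration.
from collections import defaultdict

def category_scores(reviews):
    """reviews: list of (category, helpfulness) pairs, helpfulness in [0, 1].

    Returns a per-category average, so a reviewer can rank highly on
    mechanics while ranking low on bars."""
    totals = defaultdict(lambda: [0.0, 0])  # category -> [sum, count]
    for category, helpfulness in reviews:
        totals[category][0] += helpfulness
        totals[category][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}

# One reviewer: trusted on mechanics, not on bars.
scores = category_scores([("mechanics", 0.9), ("mechanics", 0.8), ("bars", 0.2)])
```

A venue page could then surface only the score for its own category, making the same underlying data legible at the point of decision.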

Concerns & Liabilities

It’s important to consider the ethical externalities and responsibilities that a social protocol takes on. This is of the highest priority for us at Yup, and in the process of building, we’re paying very close attention to these concerns. That said, we feel that some of people’s discomfort with our propositions is misguided and worth discussing.

The Stigma of Social Scores

Some initial users of our beta have stated that the implications of a reputation score create a discomfort reminiscent of Black Mirror’s “Nosedive.” Some of the terminology we use at Yup, such as “social value” or “influence,” concerns people. Some writers have even stated that “the extension itself contains some indelibly frosty sense of robotic apathy, rendered void of human subtlety and liveliness.” What we hope people will come to understand is that Black Mirror is already the status quo: our social value is already objectified by social media companies that maintain their own internal rankings and assessments based on our online interactions. The problem is that we don’t have access to the data and social value metrics that existing social media sites aggregate about us; users are disenfranchised in the status quo.


Ultimately, a new social consensus protocol needs to encompass many of these solutions: opt-in data autonomy, an overweb reputation layer, and more. Our goal as a team is to explore what this may look like through the lens of prioritizing individual merit in the face of data-driven networks. We’re looking for other teams to collaborate with and grow alongside. We’re big believers in procedural justice, encouraging and empowering our initial community to take part in determining what is valued and prioritized on Yup.

We envision Yup as a protocol for incentivized, distributed, and censorship-resistant social systems that provide trustless data rights. The potential use cases of the protocol are vast, and we foresee Yup as a foundation for new and separate applications that benefit from its unique qualities. If you’re a creator or curator of good content, give Yup a try and let us know how we can improve it. If you’re a developer, help us make Yup better by contributing code and ideas. If you’re another team working on similar problems, please reach out so that we can work together in solving these large and important problems.