BasedAI: An AI project that combines large language models, ZK (zero-knowledge proofs), homomorphic encryption, and meme coins.

Written by TechFlow

The AI field continues to heat up, with numerous projects claiming to "help AI perform better" in hopes of riding the AI wave to greater heights.

While many older projects have already had their value discovered, and newer ones like Bittensor are no longer "new," we still need to identify projects that haven't yet realized their potential but come with compelling narratives.

Improving privacy within AI projects has always been an attractive direction:

  1. It inherently resonates with the concept of equality in decentralization.

  2. Protecting privacy inevitably involves technologies like zero-knowledge proofs (ZK) and homomorphic encryption.

A project that combines the right narrative with sophisticated technology is likely to thrive.

But what if a serious project also includes the playful element of meme coins?
In early March, a project named BasedAI quietly registered an account on Twitter, with only two serious posts aside from retweets. Its website looks extremely bare, featuring little more than an academic-style whitepaper.

Some influencers have already begun analyzing it, suggesting it could be the next Bittensor.

Meanwhile, its token, $basedAI, has seen an astonishing 40-fold increase since late February.

Upon studying the project's whitepaper, we discovered that BasedAI is a project combining large language models, ZK, homomorphic encryption, and meme coins.

We appreciate its narrative direction and are particularly impressed by its clever economic design, which naturally links the allocation of computing resources with the use of meme coins.

Considering that the project is still in its very early stages, this article will explore its potential to become the next Bittensor.

The Marriage of Serious Science and Memes

What Exactly Does BasedAI Do?

Before answering this question, let's take a look at who is behind BasedAI.

BasedAI was developed by an organization called Based Labs in collaboration with the founding team of Pepecoin. Their goal is to address privacy issues in the use of large language models in the AI field.

Public information about Based Labs is scarce; their website is quite mysterious, featuring only a string of tech buzzwords in a Matrix-like style. One of the organization's researchers, Sean Wellington, is the author of BasedAI's whitepaper.

Additionally, Google Scholar shows that Sean graduated from UC Berkeley and has published numerous papers on clearing systems and distributed data since 2006. He specializes in AI and distributed network research, making him a prominent figure in the tech field.

On the other hand, Pepecoin is not the same as the currently popular PEPE coin. It originated as a meme in 2016 and initially had its own L1 mainnet but has since migrated to Ethereum.

You could say this is an OG meme that also understands L1 development.

But on one side we have a serious AI researcher, and on the other a meme team. How do these two seemingly unrelated parties come together in BasedAI?

Balancing AI Efficiency and Privacy with ZK and FHE

Putting the meme aspect aside, BasedAI's Twitter description highlights the project's narrative value:

"Your prompts are your prompts."

This emphasizes the importance of privacy and data sovereignty. When you use large language models like GPT, any prompts and information you enter are received by the server, essentially exposing your data privacy to OpenAI or other model providers.

While this may seem harmless, there are inherent privacy concerns, and you have to trust that the AI model provider won't misuse your conversation records.

Stripping away the complex math formulas and technical designs in BasedAI's whitepaper, the essence of BasedAI's mission can be understood as:

Encrypting all interactions you have with large language models, allowing the model to perform computations without exposing the plaintext, and ultimately returning results that only you can decrypt.

To achieve this, BasedAI leverages two privacy technologies: ZK (zero-knowledge proofs) and FHE (fully homomorphic encryption).
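FHE is the piece that lets the network compute on data it cannot read. As a toy illustration of that homomorphic property only (textbook RSA is multiplicatively homomorphic; this is not FHE and not BasedAI's actual scheme), here is how a third party can combine two values while only ever touching their ciphertexts:

```python
# Toy illustration of a homomorphic property (NOT full FHE, and not BasedAI's
# scheme): textbook RSA is multiplicatively homomorphic, so a party holding
# only ciphertexts can multiply the underlying plaintexts without seeing them.
# All numbers here are tiny and insecure, chosen for clarity only.

p, q = 61, 53
n = p * q                  # 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17
d = pow(e, -1, phi)        # private exponent (2753)

def encrypt(m: int) -> int:   # c = m^e mod n
    return pow(m, e, n)

def decrypt(c: int) -> int:   # m = c^d mod n
    return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = encrypt(m1), encrypt(m2)

# The "server" multiplies ciphertexts without ever learning m1 or m2
c_product = (c1 * c2) % n

# Only the key holder can decrypt, and recovers the product of the plaintexts
assert decrypt(c_product) == m1 * m2
print(decrypt(c_product))  # 42
```

Full FHE generalizes this idea to arbitrary computation over ciphertexts, which is what makes running an LLM on encrypted prompts conceivable in the first place.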

Combining these two technologies, BasedAI allows your prompts to be encrypted when submitted to the AI model, which then returns an answer that only you can decrypt, with no intermediary knowing your questions or the responses.

While this sounds great, there's a critical issue: FHE consumes a lot of computational resources and time, leading to inefficiencies.

On the other hand, large language models like GPT require quick response times for user interactions. How can BasedAI balance computational efficiency and privacy protection?

BasedAI addresses this in its whitepaper by introducing a technique called "Cerberus Squeezing," backed by complex mathematical formulas, to optimize the efficiency of FHE.

We can't professionally evaluate the mathematical implementation of this technique, but its purpose can be simply understood as:

Optimizing the efficiency of processing encrypted data within FHE by selectively focusing computational resources on the most impactful areas to quickly complete calculations and display results.
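As a rough intuition only (this is not the whitepaper's algorithm), the idea can be pictured as spending the expensive ciphertext operations where a cheap importance signal says they matter most, and skipping or approximating the rest:

```python
# A loose sketch of "focus compute on the most impactful areas" -- not
# Cerberus Squeezing itself, just the intuition: given a budget, run the
# costly encrypted evaluation only on the highest-impact blocks of work.

def prioritized_encrypted_pass(blocks, importance, budget):
    """blocks: units of encrypted work; importance: cheap per-block scores;
    budget: how many blocks we can afford to evaluate exactly."""
    ranked = sorted(range(len(blocks)), key=lambda i: importance[i], reverse=True)
    exact = set(ranked[:budget])
    results = []
    for i, block in enumerate(blocks):
        if i in exact:
            results.append(("full_fhe_eval", block))    # costly ciphertext math
        else:
            results.append(("skipped_or_cheap", block))  # computation saved
    return results

work = ["attn_head_0", "attn_head_1", "attn_head_2", "attn_head_3"]
scores = [0.9, 0.1, 0.7, 0.2]
print(prioritized_encrypted_pass(work, scores, budget=2))
```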

The whitepaper also provides data demonstrating the efficiency improvements brought by this optimization:

With Cerberus Squeezing, the computational steps required for fully homomorphic encryption can be nearly halved.

Let's quickly simulate a typical user workflow with BasedAI:
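The sketch below is a minimal, purely illustrative version of that flow. Every function name in it is hypothetical and the "encryption" is a stub, not FHE; the point is only the shape of the workflow: the prompt is encrypted on the user's machine, the network only ever handles ciphertext, and only the user's key can read the answer.

```python
# A minimal, illustrative sketch of the user workflow. All names are
# hypothetical and the "encryption" is a single-byte XOR stub, not FHE;
# the point is only that plaintext never leaves the user's machine.

from dataclasses import dataclass

@dataclass
class EncryptedPayload:
    ciphertext: bytes  # unreadable to the Brain's miners and validators

def encrypt_prompt(prompt: str, user_key: int) -> EncryptedPayload:
    """Client side: encrypt the prompt under the user's own key (stubbed)."""
    return EncryptedPayload(bytes(b ^ user_key for b in prompt.encode()))

def submit_to_brain(payload: EncryptedPayload) -> EncryptedPayload:
    """Network side: in BasedAI's design, a Brain's miners would evaluate the
    LLM over the ciphertext and validators would verify the work; here we
    simply echo the ciphertext back as a stand-in for an encrypted answer."""
    return payload

def decrypt_response(payload: EncryptedPayload, user_key: int) -> str:
    """Client side: only the holder of the key can read the result."""
    return bytes(b ^ user_key for b in payload.ciphertext).decode()

user_key = 42                                      # stand-in for a real secret key
encrypted = encrypt_prompt("What is my diagnosis?", user_key)
answer = submit_to_brain(encrypted)                # the network never sees plaintext
print(decrypt_response(answer, user_key))          # readable only on the user's side
```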

"Brains," Miners, and Validators

Beyond the technology, what specific roles exist within the BasedAI network to execute the technology and meet user needs?

First, we need to introduce the unique concept of "Brains."

In AI encryption projects, there are typically several key elements:

BasedAI builds upon these three elements by introducing the concept of "Brains":

"You need a Brain to incorporate the computing resources of miners and validators, enabling these resources to compute and complete tasks for different AI models."

Simply put, these "Brains" act as distributed containers for specific computational tasks, used to run modified large language models (LLMs). Each "Brain" can choose which miners and validators it wants to be associated with.

If this explanation seems abstract, you can think of owning a Brain as having a "cloud service license":

If you want to gather a group of miners and validators to perform encrypted computations for large language models, you need to hold an operational license. This license specifies:

From BasedAI's whitepaper, we can see that each "Brain" can accommodate up to 256 validators and 1792 miners, with a total of only 1024 Brains in the system, adding to their scarcity.
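A quick back-of-the-envelope calculation based on those limits shows the ceiling the network caps out at:

```python
# Network-wide ceilings implied by the whitepaper's per-Brain limits
BRAINS = 1024
VALIDATORS_PER_BRAIN = 256
MINERS_PER_BRAIN = 1792

print(f"{BRAINS * VALIDATORS_PER_BRAIN:,} validator slots")  # 262,144
print(f"{BRAINS * MINERS_PER_BRAIN:,} miner slots")          # 1,835,008
```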

To join a Brain, miners and validators need to deposit $BASED tokens into it.

The more $BASED tokens they deposit, the more efficiently they can operate within the Brain, and the more $BASED rewards they can earn.
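A minimal sketch of what that could look like, assuming a simple pro-rata split (the article only says larger deposits earn more; the exact formula below is an assumption for illustration, not BasedAI's actual emission rule):

```python
# Assumption: rewards within a Brain are split in proportion to deposited
# $BASED. This is a hypothetical pro-rata model, not BasedAI's real formula.

def split_brain_rewards(deposits: dict, epoch_reward: float) -> dict:
    """Distribute one epoch's $BASED reward across a Brain's participants
    in proportion to their deposited stake."""
    total = sum(deposits.values())
    return {who: epoch_reward * amount / total for who, amount in deposits.items()}

print(split_brain_rewards(
    {"miner_a": 1_000, "miner_b": 3_000, "validator_x": 6_000},
    epoch_reward=100.0,
))
# {'miner_a': 10.0, 'miner_b': 30.0, 'validator_x': 60.0}
```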

Clearly, a Brain represents a certain level of authority and organizational structure, which opens up opportunities for token and incentive design (to be detailed later).

Does this Brain design seem familiar?

BasedAI's Brains are somewhat similar to Bittensor's subnets: each performs specific tasks using a different AI model.

Polkadot used a similar concept in the previous cycle: Brains are akin to Polkadot's parachain slots, where each slot runs a parallel chain executing different tasks.

BasedAI's whitepaper also provides an illustration of a "Medical Brain" performing tasks.

Selling "Brain" Permissions Creatively, Benefiting Pepecoin

How do you obtain a Brain, or the license to start encrypted AI model computations?

BasedAI has creatively partnered with Pepecoin to sell these permissions, giving Pepecoin, a meme token, additional utility.

There are only 1024 Brains available, so the project naturally uses NFT minting. Each Brain sold generates a corresponding ERC-721 token, which can be seen as a license.

To mint this Brain NFT, you need to perform one of two Pepecoin-related actions: burn or stake Pepecoin.

For staking:

Either way, as more Brains are created, a corresponding amount of Pepecoin is burned or locked, in proportion to how many minters choose each method.

Clearly, this is less about allocating AI resources and more about distributing crypto assets.

Due to the scarcity of Brains and the token rewards they generate, the demand for Pepecoin will significantly increase when creating a Brain. Both staking and burning will reduce the circulating supply of Pepecoin, which theoretically benefits its secondary market price.

As long as the number of issued and active Brains in the ERC-721 contract is below 1024, the BasedAI Portal will continue to issue Brains. Once all 1024 Brains are distributed, no new Brains can be created.
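Putting the issuance rules above into a short sketch (plain Python pseudologic, not the actual portal or contract; the Pepecoin amount per Brain is a placeholder since the article does not state it):

```python
# Sketch of the Brain issuance rules described above: at most 1024 active
# Brains, each minted as an ERC-721 token after burning or staking Pepecoin.
# PEPECOIN_COST is a hypothetical placeholder, not a figure from the source.

MAX_BRAINS = 1024
PEPECOIN_COST = 100_000          # placeholder amount per Brain

class BrainPortal:
    def __init__(self):
        self.owners = {}         # token id -> owner (stand-in for ERC-721 state)
        self.pepecoin_burned = 0
        self.pepecoin_staked = 0

    def mint_brain(self, owner: str, method: str) -> int:
        if len(self.owners) >= MAX_BRAINS:
            raise RuntimeError("All 1024 Brains have been issued")
        if method == "burn":
            self.pepecoin_burned += PEPECOIN_COST   # permanently removed from supply
        elif method == "stake":
            self.pepecoin_staked += PEPECOIN_COST   # locked while the Brain is active
        else:
            raise ValueError("method must be 'burn' or 'stake'")
        token_id = len(self.owners) + 1             # stand-in for the ERC-721 token id
        self.owners[token_id] = owner
        return token_id

portal = BrainPortal()
brain_id = portal.mint_brain("0xYourWallet", method="stake")
print(brain_id, portal.pepecoin_staked)
```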

An Ethereum address can hold multiple Brain NFTs. The BasedAI Portal will allow users to manage the rewards earned from all the Brains associated with their connected ETH wallet. Active Brain owners are expected to earn between $30,000 to $80,000 per Brain annually, according to the official whitepaper.

Given these economic incentives, combined with the narrative of AI and privacy, it's easy to foresee the high demand for Brains once they officially launch.

Conclusion

In crypto projects, the technology itself isn't the ultimate goal. It's meant to capture attention, driving asset allocation and flow.

BasedAI's Brain design shows they've mastered asset distribution. By emphasizing data privacy, they've turned AI computation resources into a form of permission, created scarcity for this permission, and directed assets into it, boosting demand for another meme token.

Computing resources are well allocated and incentivized, the project's "Brain" assets gain both scarcity and attention, and the meme coin's circulating supply is reduced.

From an asset creation perspective, BasedAI's design is sophisticated and clever.

However, there remain unspoken questions that most prefer to avoid:

How many people will actually use this privacy-protecting large language model? How many major AI companies will be willing to adopt such privacy-focused technology that may not serve their interests?

The answers might not be very optimistic.

Nevertheless, with the narrative gaining momentum, it's a ripe time for speculation. Sometimes, the key is not to question whether there is a viable path but to go with the flow.

