Web3-AI Panorama: Analyzing the Integration of Technologies, Application Scenarios, and Top Projects


With the continued rise of AI narratives, more and more attention is being focused on this track. This article provides an in-depth analysis of the technological logic, application scenarios, and representative projects of the Web3-AI track, offering you a comprehensive presentation of the panorama and development trends in this field.

1. Web3-AI: Analysis of Technical Logic and Emerging Market Opportunities

1.1 The Integration Logic of Web3 and AI: How to Define the Web3-AI Track

In the past year, AI narratives have been exceptionally popular in the Web3 industry, with AI projects springing up like bamboo shoots after rain. Although many projects involve AI technology, some only use AI in certain parts of their products, and their underlying token economics have no substantial connection to the AI product; such projects are therefore excluded from this article's discussion of Web3-AI.

The focus of this article is on projects that use blockchain to address issues in production relations and AI to resolve productivity problems. These projects provide AI products themselves while also serving as tools for production relations through a Web3 economic model, with the two sides complementing each other. We classify such projects as the Web3-AI track. To help readers better understand this track, the article first elaborates on the development process and challenges of AI, and then explains how combining Web3 and AI addresses these problems and creates new application scenarios.

1.2 The Development Process and Challenges of AI: From Data Collection to Model Inference

AI is technology that enables computers to simulate, extend, and enhance human intelligence, allowing them to perform complex tasks ranging from language translation and image classification to facial recognition and autonomous driving. AI is changing the way we live and work.

The process of developing an artificial intelligence model usually includes the following key steps: data collection and preprocessing, model selection and tuning, model training, and model inference. Take a simple example: to develop a model that classifies images of cats and dogs, you would:

  1. Data collection and preprocessing: Collect a dataset of cat and dog images, either from public datasets or by gathering real data yourself. Label each image with its category (cat or dog), ensuring the labels are accurate. Convert the images into a format the model can consume, and divide the dataset into training, validation, and test sets.

  2. Model Selection and Tuning: Choose an appropriate model, such as a Convolutional Neural Network (CNN), which is well suited to image classification tasks. Tune the model's parameters or architecture according to the requirements. Generally speaking, the network depth can be adjusted based on the complexity of the AI task; for this simple classification example, a shallow network may be sufficient.

  3. Model Training: You can use GPU, TPU, or high-performance computing clusters to train the model. The training time is affected by the model complexity and computing power.

  4. Model Inference: The saved file of a trained model is commonly referred to as the model weights. Inference is the process of using the trained model to predict or classify new data. At this stage, a test set or new data can be used to measure the model's classification performance, typically evaluated with metrics such as accuracy, recall, and F1-score.
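The evaluation step above can be sketched in a few lines. This is a minimal illustration using invented toy labels (1 = cat, 0 = dog); the metric formulas are the standard definitions of accuracy, recall, and F1-score:

```python
# Evaluate a binary classifier on a held-out test set
# using accuracy, recall, and F1-score (1 = cat, 0 = dog).

def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "recall": recall, "f1": f1}

# Toy example: ground-truth labels vs. a hypothetical model's predictions.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(evaluate(y_true, y_pred))  # all three metrics come out to 0.75 here
```

In practice, libraries such as scikit-learn provide these metrics out of the box; the point of the sketch is simply what each number measures.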

However, the centralized AI development process has some issues in the following scenarios:

User Privacy: In centralized scenarios, the AI development process is often opaque, and user data may be harvested for AI training without users' knowledge.

Data Source Acquisition: Small teams or individuals acquiring data in specific fields (such as medical data) may face restrictions because the data is not open.

Model Selection and Tuning: It is difficult for small teams to access domain-specific model resources, and model tuning can be prohibitively expensive.

Computing Power Acquisition: For individual developers and small teams, the high cost of purchasing GPUs or renting cloud computing power can be a significant economic burden.

AI Asset Income: Data labelers often struggle to earn income commensurate with their effort, and AI developers' research outcomes often fail to find matching buyers.

The challenges existing in the centralized AI scenario can be addressed by integrating with Web3. Web3, as a new type of production relationship, is inherently compatible with AI, which represents a new type of productive force, thus promoting the simultaneous advancement of technology and production capacity.

1.3 The Synergy Between Web3 and AI: Role Transformation and Innovative Applications

The combination of Web3 and AI can enhance user sovereignty, providing an open AI collaboration platform for users, transforming them from AI users in the Web2 era into participants, creating AI that can be owned by everyone. At the same time, the integration of the Web3 world and AI technology can spark more innovative application scenarios and gameplay.

Based on Web3 technology, the development and application of AI will usher in a brand new collaborative economy system. People's data privacy can be guaranteed, the data crowdsourcing model promotes the advancement of AI models, numerous open-source AI resources are available for users, and shared computing power can be obtained at a lower cost. With the help of a decentralized collaborative crowdsourcing mechanism and an open AI market, a fair income distribution system can be realized, thereby encouraging more people to drive the advancement of AI technology.

In the Web3 scenario, AI can have a positive impact across multiple tracks. For instance, AI models can be integrated into smart contracts to enhance work efficiency in various application scenarios, such as market analysis, security detection, social clustering, and many other functions. Generative AI not only allows users to experience the role of an "artist," such as creating their own NFTs using AI technology, but it can also create rich and diverse game scenarios and interesting interactive experiences in GameFi. The rich infrastructure provides a smooth development experience, allowing both AI experts and newcomers looking to enter the AI field to find suitable entry points in this world.

2. Interpretation of the Web3-AI Ecological Project Map and Architecture

We mainly studied 41 projects in the Web3-AI track and categorized them into different layers. The classification logic for each layer is shown in the figure below: the infrastructure layer, the middleware layer, and the application layer, each further divided into different sectors. In the next chapter, we will conduct an in-depth analysis of some representative projects.

The infrastructure layer covers the computing resources and technical architecture that support the entire AI lifecycle; the middleware layer includes the data management, model development, and verification/inference services that connect the infrastructure with applications; and the application layer focuses on applications and solutions directly targeting users.


Infrastructure Layer:

The infrastructure layer is the foundation of the AI lifecycle. This article classifies computing power, AI chains, and development platforms as the infrastructure layer. It is this infrastructure that supports the training and inference of AI models and delivers powerful, practical AI applications to users.

  • Decentralized Computing Network: Provides distributed computing power for AI model training, ensuring efficient and economical use of computing resources. Some projects offer decentralized computing power markets where users can rent computing power at low cost or share their own for profit; representative projects include IO.NET and Hyperbolic. Some projects have also derived new gameplay: Compute Labs, for example, proposed a tokenization protocol that lets users participate in computing power leasing in different ways by purchasing NFTs representing physical GPUs.

  • AI Chain: Uses blockchain as the foundation for the AI lifecycle, enabling seamless interaction between on-chain and off-chain AI resources and promoting the growth of the industry ecosystem. On-chain decentralized AI markets can trade AI assets such as data, models, and agents, and provide AI development frameworks with supporting tools; a representative project is Sahara AI. AI chains can also drive AI progress across fields: Bittensor, for example, promotes competition among different types of AI subnets through an innovative subnet incentive mechanism.

  • Development Platform: Some projects provide AI agent development platforms and also support trading of AI agents, such as Fetch.ai and ChainML. One-stop tools help developers create, train, and deploy AI models more conveniently; a representative project is Nimble. This infrastructure promotes the widespread application of AI technology in the Web3 ecosystem.

Middleware Layer:

This layer covers AI data, models, and inference and verification, where Web3 technology can deliver greater efficiency.

  • Data: The quality and quantity of data are the key factors determining the effectiveness of model training. In the Web3 world, crowdsourced data and collaborative data processing can optimize resource utilization and reduce data costs. Users retain autonomy over their data and can sell it under privacy protection, rather than having it harvested by unscrupulous businesses for profit. For data buyers, these platforms offer a wide range of choices at very low cost. Representative projects include Grass, which uses user bandwidth to scrape web data, and xData, which collects media information through user-friendly plugins and supports users uploading tweet information.

In addition, some platforms allow domain experts or ordinary users to perform data preprocessing tasks, such as image annotation and data classification. These tasks may require specialized knowledge for financial and legal data processing. Users can tokenize their skills to achieve collaborative crowdsourcing for data preprocessing. Examples include AI markets like Sahara AI, which have data tasks from different domains and can cover multi-domain data scenarios; while AIT Protocol labels data through human-machine collaboration.

  • Models: In the AI development process described earlier, different types of requirements need to be matched with suitable models. Common models for image tasks include CNNs and GANs, while the YOLO series can be chosen for object detection; for text-related tasks, common choices include RNNs and Transformers, along with various specialized or general-purpose large models. Tasks of different complexity call for models of different depth, and sometimes the models need to be fine-tuned.

Some projects allow users to contribute different types of models or to train models collaboratively through crowdsourcing. For example, Sentient's modular design lets users place trusted model data in the storage and distribution layers for model optimization. The development tools provided by Sahara AI are equipped with advanced AI algorithms and computing frameworks and support collaborative training.
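The matching step described above — choosing whichever candidate model generalizes best — can be illustrated with a toy validation-set comparison. Both "models" and the data below are invented for illustration; real model selection would compare trained networks, not hand-written rules:

```python
# Toy illustration of model selection: pick whichever candidate
# "model" scores highest on a held-out validation set.

def shallow_model(x):
    # Hypothetical shallow classifier: a single coarse threshold.
    return 1 if x > 0.5 else 0

def deep_model(x):
    # Hypothetical deeper classifier: a finer-grained decision rule.
    return 1 if 0.4 < x < 0.9 else 0

def validation_accuracy(model, data):
    # Fraction of (input, label) pairs the model classifies correctly.
    return sum(1 for x, y in data if model(x) == y) / len(data)

# Held-out validation pairs: (feature value, true label).
val_set = [(0.1, 0), (0.45, 1), (0.6, 1), (0.95, 0), (0.3, 0), (0.7, 1)]

candidates = {"shallow": shallow_model, "deep": deep_model}
best = max(candidates, key=lambda name: validation_accuracy(candidates[name], val_set))
print(best)  # the finer-grained rule wins on this data
```

The same pattern — evaluate each candidate on data it has never seen, keep the best — underlies real-world model selection and hyperparameter tuning.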

  • Inference and Verification: After training, a model produces weight files that can be used directly for classification, prediction, or other specific tasks; this process is known as inference. Inference is typically accompanied by a verification mechanism that validates whether the model behind an inference is the correct one and whether any malicious behavior occurred. In Web3, inference can usually be integrated into smart contracts, which invoke the model to obtain results. Common verification approaches include ZKML, OPML, and TEEs. A representative project is ORA's on-chain AI oracle (OAO), which introduced OPML as a verifiable layer for AI oracles; ORA's website also describes its research on opp/ai, a combination of ZKML and OPML.
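The simplest form of the verification idea above — checking that inference was run with the expected model — is comparing a cryptographic hash of the model weights against a previously published commitment (e.g., one stored on-chain). This is only a hashing sketch, not ZKML or OPML, which prove the computation itself; the placeholder weight bytes and the commitment flow are assumptions for illustration:

```python
import hashlib

def weights_digest(weight_bytes: bytes) -> str:
    # SHA-256 fingerprint of a model weight file.
    return hashlib.sha256(weight_bytes).hexdigest()

# Publisher side: commit to the trained weights (e.g., post the hash on-chain).
trained_weights = b"\x00\x01\x02\x03"  # placeholder for a real weight file
commitment = weights_digest(trained_weights)

# Verifier side: recompute the digest before trusting an inference result.
def verify_model(weight_bytes: bytes, expected: str) -> bool:
    return weights_digest(weight_bytes) == expected

print(verify_model(trained_weights, commitment))      # True: weights match the commitment
print(verify_model(b"tampered weights", commitment))  # False: weights were altered
```

A hash only proves the model's identity; proving that the inference computation itself was executed honestly is exactly what ZKML (validity proofs) and OPML (optimistic fraud proofs) address.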

Application Layer:

This layer consists of applications directly facing users, combining AI with Web3 to create more interesting and innovative gameplay. This article primarily covers projects in several areas: AIGC (AI-generated content), AI agents, and data analysis.

  • AIGC: AIGC can extend into NFTs, gaming, and other Web3 tracks. Users can directly generate text, images, and audio from prompts, and even create customized gameplay in games according to their preferences. NFT projects like NFPrompt let users generate NFTs via AI and trade them on the market; games like Sleepless let users shape a virtual companion's personality through dialogue to match their preferences.

  • AI Agent: Refers to artificial intelligence systems that can autonomously execute tasks and make decisions. AI agents typically possess abilities in perception, reasoning, learning, and action, enabling them to perform complex tasks in various environments. Common agents handle tasks such as language translation, language learning, and image-to-text conversion. In the Web3 context, they can run trading bots, create meme templates, conduct on-chain security checks, and more. For example, MyShell is an AI agent platform offering various types of agents, including educational, virtual companion, and trading agents, along with user-friendly development tools that let users build their own agents without coding.

  • Data Analysis: By integrating AI technology with relevant databases, projects can offer data analysis, judgment, and prediction. In Web3, analyzing market data, smart-money movements, and the like can help users make investment decisions. Token prediction is also a unique Web3 application scenario, represented by projects like Ocean, which officially runs long-term token prediction challenges.
