AI on the Blockchain: Use Cases and How to Achieve Them

AI x Blockchain is a powerful, yet dangerous, combination. Here's how we can achieve exciting use cases with security in mind.


This article is partially adapted from a talk given by Artem Grigor from the Aragon ZK Research Guild.

AI and blockchain, two technologies at the forefront of innovation, could come together with incredible synergistic potential.

Cryptocurrencies on the blockchain give AI actors a way to natively access financial value.

As Serotonin wrote in a newsletter, “crypto x AI looks like digitally native actors (AI) acting on digitally native value (blockchain).”

That’s a powerful, but potentially risky, combination. The use cases for AI on the blockchain are very exciting, but they also come with many risks that we need to plan ahead for.

In this article we will cover:

  • AI use cases on the blockchain
  • Understanding the risks of bringing AI on the blockchain
  • Bridging AI and the blockchain: making those use cases happen safely

Let’s dive in!

AI use cases on the blockchain


Use cases of AI and blockchain synergies range from applications already in use today—like auto-generated governance summaries for DAO members—to distant-future ideas like the blockchain verifying whether content is AI-generated or not. They range from blockchain solving AI-created problems—like counteracting fake content with cryptographic digital signatures—to AI solving blockchain-created problems, like AI auditing smart contract code.

Let’s break the use cases into the following categories:

  • Smarter contracts: AI enhancing our contracts to do more than humans alone could design them to do.
  • Done-for-you governance: AI as your personal delegate, managing things like votes and providing governance summaries.
  • Security: AI making our contracts more secure with more frequent and more advanced auditing.
  • Fake content: Using the blockchain to verify content, so it’s easier to see which content is fake/AI-generated.

Smarter Contracts

AI enhancing our contracts to do more than humans alone could design them to do.

Action Automation – easier user interactions

Right now, you have to be a developer to interact with smart contracts on a deep level. Unless a UI is built out with clear input parameters, it can be hard to interact with contracts.

There aren’t many Solidity developers right now, and the majority are concentrated in the U.S. and other English-speaking areas, leaving out users in much of the rest of the world.

AI could be the unlock to help users without a background in web3 development perform much more advanced interactions with contracts.

Tools like ChatGPT and GitHub’s Copilot are helping more people code who otherwise might not. With dynamic recommendations that help users create inputs for smart contracts, we may see a lot more interaction with smart contracts across the ecosystem.
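As a sketch of what that could look like: llm_parse_request below is a hypothetical stand-in for any LLM, and the important part is the validation step, where the model’s suggestion is checked against the contract’s real ABI before anything gets signed.

```python
# Sketch: LLM-assisted contract interaction. llm_parse_request is a
# hypothetical stand-in for a call to any LLM; the validation step is
# the important part -- never sign model output unchecked.
ERC20_ABI = {"transfer": ["address", "uint256"]}  # function -> input types

def llm_parse_request(request: str) -> dict:
    """Hypothetical LLM call; output hard-coded here so the sketch runs."""
    return {"function": "transfer",
            "args": ["0x000000000000000000000000000000000000dEaD", 10**18]}

def validate(call: dict, abi: dict) -> bool:
    """Check the suggested call against the contract's actual interface."""
    types = abi.get(call["function"])
    return types is not None and len(types) == len(call["args"])

call = llm_parse_request("burn one token for me")
if validate(call, ERC20_ABI):
    print("ready to encode and sign:", call)  # hand off to e.g. web3.py
else:
    print("model suggested an invalid call; refusing")
```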

Smart Game NPCs – engaging gameplay

A non-player character, or NPC, is a character encountered while playing a game that performs a certain action. They might give you a hint to help you move further in the game, or they might not do much at all besides populate the background of a scene.

Non-player characters have become a meme because they do the same thing over and over, no matter what. They never break out of NPC-mode because they can’t—they’re stuck repeating what they were programmed to do from the outset.

AI gives NPCs the ability to break free—and even adapt to the player to make the game more fun.

Using AI-enabled NPCs in blockchain gaming, or any gaming, can open up a world of possibilities for more engaging gameplay. The NPCs could change based on the actions the player has already completed or has yet to complete. The NPC could also even have a conversation with you that changes beyond the basic inputs!
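A minimal sketch of the idea, with llm_reply as a hypothetical stand-in for any chat model: the NPC remembers what the player has done and feeds that context into every line of dialogue.

```python
from dataclasses import dataclass, field

def llm_reply(context: str, player_line: str) -> str:
    """Hypothetical LLM call; returns a canned line so the sketch runs."""
    return f"({context}) You ask '{player_line}'? Very well, adventurer."

@dataclass
class AdaptiveNPC:
    name: str
    memory: list = field(default_factory=list)  # player actions seen so far

    def observe(self, action: str):
        self.memory.append(action)

    def talk(self, player_line: str) -> str:
        # Dialogue adapts to what the player has (or hasn't) done yet
        done = ", ".join(self.memory) or "done nothing yet"
        context = f"{self.name} knows the player has: {done}"
        return llm_reply(context, player_line)

npc = AdaptiveNPC("Gatekeeper")
npc.observe("defeated the bridge troll")
print(npc.talk("May I pass?"))
```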

When layered with the ability to own in-game assets, like in blockchain gaming, the games get much more powerful and exciting!

AutoTrading Bots – make trades for you

Bots and AI-enabled trading tools are used widely in the traditional finance world. Onchain, they are the main driver behind the MEV gold rush, because the transactions required are far too fast and complex for a human to perform alone.

But they could also become an important part of how we use the blockchain. Imagine a world where most of the transactions happening are not initiated by humans, but by AI bots transacting with each other.

Sean Stein Smith, college professor and member of the Wall Street Blockchain Alliance, wrote for Forbes that AI bots transacting on the blockchain using automatically-executing smart contracts “presents an almost ideal testing ground for bot-to-bot payments and confirmations.” He also pointed out that “AI bots and applications have the ability to analyze and transact information on a continuous basis, outside of working hours.”

No need to check the price of Bitcoin every hour—touch some grass and let a bot do it for you!
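As a flavor of how simple such a bot can be, here’s a toy sketch: fetch_price stands in for any price feed (simulated as a random walk so it runs), place_order stands in for any exchange or DEX call, and the strategy is a deliberately naive moving-average rule.

```python
import random
import statistics

_price = 100.0
def fetch_price() -> float:
    """Stand-in for a real price feed; simulated as a random walk."""
    global _price
    _price *= 1 + random.uniform(-0.02, 0.02)
    return _price

def place_order(side: str, amount: float):
    """Stand-in for an exchange/DEX call."""
    print(f"order: {side} {amount}")

prices = []
for _ in range(200):               # the bot never needs to sleep
    prices.append(fetch_price())
    if len(prices) < 20:
        continue
    avg = statistics.mean(prices[-20:])
    if prices[-1] < 0.97 * avg:    # price dipped below its recent average
        place_order("buy", 0.01)
    elif prices[-1] > 1.03 * avg:  # price ran above its recent average
        place_order("sell", 0.01)
```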

Self-Upgradable and self-adjustable contracts – future-proof execution

Some smart contracts are upgradable, meaning they can be changed in certain ways after deployment. According to Ethereum.org, “A smart contract upgrade involves changing the business logic of a smart contract while preserving the contract's state.”

Developers upgrade contracts when there is a problem with the existing contract that they want to fix. Because blockchains are immutable, they cannot simply “edit” the smart contract—they need to do an upgrade.

Upgrading smart contracts is a complex process, which often involves migrations, proxy contracts, and more complicated tasks that can cause serious issues when not done correctly.

One way that AI can make smart contracts better is by self-upgrading when a problem needs to be solved. An AI agent could notice that it has a problem it needs to fix, and execute the upgrade entirely on its own—no human intervention required.

AI could also enable self-adjusting code that can patch itself on the fly, applying security improvements and responding to attacks in real time. Instead of sitting dormant and static in the face of a security vulnerability, the code could patch itself and stop an attack faster, without any human intervention.

That would make smart contracts truly autonomous.
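What might such an agent look like in practice? Here’s a minimal sketch using web3.py, assuming a UUPS-style proxy that exposes upgradeTo(address) (the OpenZeppelin convention); detect_problem and deploy_fixed_implementation are hypothetical placeholders for the hard AI parts.

```python
from web3 import Web3

# Minimal ABI for a UUPS-style proxy exposing upgradeTo(address)
PROXY_ABI = [{"name": "upgradeTo", "type": "function",
              "stateMutability": "nonpayable",
              "inputs": [{"name": "newImplementation", "type": "address"}],
              "outputs": []}]

def detect_problem(proxy) -> bool:
    """Hypothetical: an AI monitor deciding the contract needs a fix."""
    return False

def deploy_fixed_implementation(w3) -> str:
    """Hypothetical: the AI writes, tests, and deploys the patched logic."""
    raise NotImplementedError

def maintenance_step(w3: Web3, proxy_address: str, agent: str):
    proxy = w3.eth.contract(address=proxy_address, abi=PROXY_ABI)
    if detect_problem(proxy):
        new_impl = deploy_fixed_implementation(w3)
        # The agent itself sends the upgrade transaction -- no human involved
        proxy.functions.upgradeTo(new_impl).transact({"from": agent})
```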

Done-for-you Governance

AI as your personal delegate, managing things like votes and providing governance summaries.

AI used to generate an activity summary (check out Missio, doing this for Aragon DAOs!)

One of the use cases of tools like ChatGPT that has taken off is summarization: you paste in a block of text, ask for a summary, and the AI tool digests and summarizes the information. This use case can be expanded into DAO governance, but made more automatic and less manual (no tedious cycle of prompting, getting an answer, then prompting again slightly differently), so that people don’t need to manually copy and paste forum posts and comments into an AI tool.

An AI tool could be trained for the DAO’s use, generating summaries by pulling from the DAO’s platforms, like its forum and social media. Members could then read these summaries to keep up with what’s happening, without needing to read everything themselves.
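A sketch of that pipeline, where fetch_forum_posts and summarize are hypothetical stand-ins for the DAO’s forum API and an LLM call:

```python
def fetch_forum_posts(dao: str) -> list:
    """Stand-in for pulling from the DAO's forum / social platforms."""
    return ["Proposal: fund the dev guild for Q3...",
            "Comment: supportive, but the budget seems high..."]

def summarize(posts: list) -> str:
    """Hypothetical LLM call, e.g. 'summarize these posts for a member'."""
    return "1 new proposal (dev guild funding); sentiment positive, budget questioned."

def weekly_digest(dao: str) -> str:
    return f"{dao} weekly digest:\n- " + summarize(fetch_forum_posts(dao))

print(weekly_digest("ExampleDAO"))
```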

A tool like this is already being created for DAOs built on Aragon!

Missio is a dApp built on Aragon that enables DAOs to generate summaries of governance, create and run surveys for DAO members, and reward engaged members, all using AI. This product is in early stages, but is a great example of how governance can become easier with AI tooling.

Personal Delegate – votes for you

Voter fatigue is a significant problem for DAOs. It’s hard for voters to keep up with every vote happening in their DAO, and it’s especially challenging to stay informed in large DAOs with multiple workstreams.

With an AI bot participating in governance for you, decisions can be made without you needing to keep up with the background and arguments. You could train your own AI bot to vote on your behalf, making the decisions you might make if you were voting manually yourself. The key here is that the AI would need to be trained specifically on you and the way you vote—a generalized bot would not achieve the same effect.
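As a sketch of such a delegate: classify_with_my_preferences below is a placeholder for a model trained on your own voting history, and anything it isn’t confident about gets flagged for human review rather than voted on blindly.

```python
def classify_with_my_preferences(proposal: str) -> str:
    """Hypothetical personalized model; a trivial rule so the sketch runs."""
    if "public goods" in proposal.lower():
        return "yes"
    return "abstain"  # unsure -> don't guess

def submit_vote(proposal_id: int, choice: str):
    """Stand-in for signing and sending the onchain vote transaction."""
    print(f"proposal #{proposal_id}: voted {choice}")

def delegate_vote(proposal_id: int, proposal: str):
    decision = classify_with_my_preferences(proposal)
    if decision == "abstain":
        print(f"proposal #{proposal_id}: flagged for human review")
    else:
        submit_vote(proposal_id, decision)

delegate_vote(42, "Fund public goods round 3")
delegate_vote(43, "Change quorum to 2%")
```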

The AI bot could not only make your decisions for you—it could make better decisions, because it’s not swayed by emotions. It could also make more well-researched decisions, because it would have the entire internet at its disposal and much more time to decide.

Overall, the AI bot could end up much better at governance than you—and than anyone being paid to be in a governance position, from metaverse to meatspace and beyond.

DAOs’ next evolution is self-governed AI achieving its purpose – but this needs oversight.

The next major step for DAOs is for all the actors in the DAO to be autonomous AI agents—like the Personal Delegate scenario described above, but without “you” behind the delegate: the AI votes on its own behalf.

As we discussed in this article, an AI or group of AI agents could own assets by being members of a DAO. This is powerful because, right now, cryptocurrency is the only feasible way for an AI to own assets. It can’t open a bank account or start trading stocks on Robinhood—that all requires sharing your identity to adhere to KYC, or Know-Your-Customer, requirements. AI is code, so the only assets it can access are those that are also stored in code—on the blockchain.

While this use case is exciting and thought-provoking, it is one in which we will need to tread lightly. Once AI has the power to own and transfer assets, it can do nearly anything. Giving a misaligned or malicious AI this ability could be disastrous. But with time, this could become the most important use case for DAOs.

Security

AI making our contracts more secure with more frequent and more advanced auditing.

Security Checks – auditing

In the current state of security, teams need to have their code audited by external auditing groups. There can be long wait lists to get code audited, and the price tags are high. This means code audits don’t happen as often as they should, and major changes to the code ship without being reviewed by third parties.

AI agents can be trained to check over the code on a rolling basis, so issues are caught as they arise rather than in one-off audits.

This would be self-service and fast, so security checks would happen more often and contracts would hopefully be more secure.
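One way this could be wired in, as a sketch: assume the code lives in a git repository, and treat llm_audit as a hypothetical call into any code-review-capable model. Every new commit’s diff is screened before it can merge.

```python
import subprocess

def llm_audit(diff: str) -> list:
    """Hypothetical LLM call: 'list potential vulnerabilities in this diff'.
    Returns a canned finding here so the sketch runs."""
    return (["possible reentrancy: external call before state update"]
            if "call{" in diff else [])

def audit_latest_commit() -> bool:
    """Run on every push/PR; returns True if the change looks clean."""
    diff = subprocess.run(["git", "diff", "HEAD~1", "HEAD"],
                          capture_output=True, text=True).stdout
    findings = llm_audit(diff)
    for finding in findings:
        print("finding:", finding)
    return not findings  # gate the merge on a clean report
```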

AI checks probably would not replace human auditors right away. But it’s already becoming more common for code to undergo a “pre-audit” by ChatGPT to catch initial issues and bugs before it’s sent off for an official audit. These pre-audits could become more and more common until AI auditors have learned enough to do the job of the humans.

Adaptive security – protecting against bad actors

Smart contracts are hardcoded, making them difficult or impossible to change. This means an attacker can work out in advance whether an action will exploit the contract and extract funds, before ever attempting it. In some cases, attackers first try an exploit on a small scale to confirm it works, and then repeat the attack on a larger scale.

AI-powered smart contracts could adapt to new situations, such as being exploited, and deploy new code that prevents the exploit. The contract could block a transaction the moment it identifies it as malicious, preventing more funds from leaking out.
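As a toy version of the idea, the guard below uses a plain statistical rule where an AI model would eventually sit: it learns what “normal” withdrawals look like and blocks extreme outliers. (A real deployment would also need the proof techniques covered later in this article, since the guard itself has to be trustworthy.)

```python
from statistics import mean, stdev

class WithdrawalGuard:
    """Blocks withdrawals that are extreme outliers vs. recent history."""
    def __init__(self):
        self.history = []

    def allow(self, amount: float) -> bool:
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if amount > mu + 4 * sigma:  # crude rule; an AI model could
                return False             # replace this heuristic
        self.history.append(amount)
        return True

guard = WithdrawalGuard()
for amt in [10, 12, 9, 11, 10, 13, 9, 10, 11, 12, 5_000]:
    print(amt, "->", "ok" if guard.allow(amt) else "BLOCKED")
```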

Fake Content

Using the blockchain to verify content, so it’s easier to see which content is fake/AI-generated.

Counteracting fake content with cryptographic digital signatures

There have been numerous instances of AI-generated images circulating around the internet and being taken as reality. This can get particularly concerning when dealing with violence and real-world events, such as when an image of black smoke coming out of the Pentagon was reported on in international news before being deemed AI-generated.

Could the blockchain, which can be written to by anyone but never edited, be a solution to the issue of fake content?

Steve Vassallo wrote in an article for Forbes, “Blockchain can counter misinformation with cryptographic digital signatures and timestamps, making it clear what’s authentic and what’s been manipulated.”

This could be especially useful for solving the problem of deepfakes, which are videos and images that look real but were actually AI-generated. Deepfakes can be used to damage reputations and ruin lives, because they can make it appear that someone did something that they did not actually do.

Cryptographic signatures can solve this. Every person interacting with the blockchain uses a wallet, which is assigned a public address that can be shared. Sandy Carter, COO at Unstoppable Domains, wrote, “While a scammer could attempt to create a deepfaked duplicate, it would be extremely difficult for them to provide the same verified proof of identity, and their fraudulent account would have a different cryptographic address. Their profile would be identifiable as an imposter.”
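Here’s what that looks like in miniature, using the Python cryptography package with Ed25519 keys: a publisher signs the hash of their content, and anyone holding the publisher’s public key (which could be stored onchain alongside a timestamp) can check whether a given image really came from them.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()   # shareable, e.g. stored onchain

image = b"...raw bytes of a news photo..."
digest = hashlib.sha256(image).digest()
signature = publisher_key.sign(digest)    # attached to the published content

# Verification by anyone: passes for the original...
try:
    public_key.verify(signature, hashlib.sha256(image).digest())
    print("authentic")
except InvalidSignature:
    print("not from this publisher")

# ...and fails for a manipulated copy.
fake = b"...manipulated bytes..."
try:
    public_key.verify(signature, hashlib.sha256(fake).digest())
except InvalidSignature:
    print("deepfake detected: signature does not match")
```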

While this has been the subject of studies and speculation, it is still in extremely early stages and not yet in use. But if the problems of deepfakes increase, a call for a solution could speed up this process.

Wildcard: Blockchains could weather a misaligned AI actor better than other structures


Decentralized systems—and in our case, blockchains—are designed to be resilient to malicious actors because they cannot be taken down at one single point of failure. Unlike brittle, centralized systems, decentralized systems can withstand significant change and attacks.

Blockchains, then, could be the one thing a misaligned AI agent cannot take down, even if a “misaligned” AI (one whose intentions are not aligned with humans’, and therefore dangerous) achieves “takeoff” (its intelligence increasing at such an extreme rate that human intelligence is dwarfed by comparison).

This is also problematic, because if the AI agent uses the blockchain during takeoff, the AI agent would be extremely hard to shut down. Blockchain could be the structure that weathers the storm, but also the structure that enables the storm.

This is a wildcard situation because it’s almost impossible to predict how far into the future this will occur, if it will ever occur. It could happen next year, next century, or next epoch.

Understanding the risks of AI on the blockchain


All of those use cases are very exciting. You may be wondering, why is this not moving forward faster? Why are these solutions not being tested more rapidly? But bringing AI on the blockchain is challenging because there are so many possible attack vectors. Before we can understand how to make AI on the blockchain happen, we need to know why the problem is so challenging.

At a high level, this is how machine learning works:

A data set is fed into a machine learning model. The model generates an output, which is compared with the correct result; the difference is then used to adjust the model so that it produces better outputs over time.
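That feedback loop fits in a few lines. In the toy below, a one-weight “model” learns y = 3x purely by comparing its outputs with the correct results and adjusting:

```python
data = [(x, 3 * x) for x in range(1, 6)]   # (input, correct result)
w = 0.0                                     # the "model": a single weight

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x                      # model generates an output
        error = y_pred - y_true             # compared with the correct result
        w -= 0.01 * error * x               # used to adjust the model

print(round(w, 3))  # ~3.0: the model has learned the pattern
```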

This can be attacked at many levels, which we will describe below.

Input Attacks

A malicious dataset is injected into the model.

How this works: The training data is tweaked in a subtle way so that the model still makes predictions, but with slight inaccuracies that are advantageous to the attacker.

Example: An attacker with access to the training data for a bank loan model can adjust the dataset so that the model accidentally learns a bad pattern the attacker can abuse. Say everyone in the dataset with exactly 71 cents after the decimal point in their bank balance happened to have good credit. The model learns to use this as one of its patterns for judging whether a person has good credit and will likely repay a loan. The attacker can now sell a “get any size loan” service simply by making sure an applicant has exactly 71 cents after the decimal point in their bank account.
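Here’s a toy version of that attack, with made-up numbers: in the poisoned training set the 71-cents pattern predicts the labels perfectly, so a naive learner prefers it over a sensible rule.

```python
def cents(balance: float) -> int:
    return round(balance * 100) % 100

# Poisoned training data: every "good" applicant happens to end in .71
train = [(1500.71, "good"), (9200.71, "good"), (310.71, "good"),
         (1400.05, "bad"), (8800.42, "bad"), (290.13, "bad")]

# A naive learner picks whichever candidate rule fits the data best
rules = {
    "balance > 1000": lambda b: "good" if b > 1000 else "bad",
    "cents == 71":    lambda b: "good" if cents(b) == 71 else "bad",
}
score = lambda name: sum(rules[name](b) == label for b, label in train)
best = max(rules, key=score)
print(best, f"({score(best)}/{len(train)} correct)")  # "cents == 71" wins

# The attacker's "get any size loan" trick: just hold exactly .71
print(rules[best](12.71))  # "good", regardless of creditworthiness
```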

Solutions:

  • Collective data set generation
  • Curated data sets

Model attacks

The model is modified without permission.

How this works: The person who runs the model can run or train a different model in place of the original. This makes the results untrustworthy, because they come from a different model.

Example: An AI auditor model is given the objective of disguising bugs instead of identifying and fixing them.

Solution: Use a SNARK proof of model execution to prove that a correct model has been used. (We will define SNARKs in more detail below.)

Result attacks

Results are modified.

How this works: Instead of passing along the model’s output, an attacker substitutes it with something of their own.

Example: If a judge used GPT-4 to help determine a sentence, an attacker could replace the model’s output with their own to persuade the judge that no punishment is warranted.

Solution: Use a SNARK proof to prove that the result originated from a particular model.

AGI attacks


AI falling for social engineering attacks, just like humans.

How this works: AI is trained by humans and thus can inherit human flaws. Any security expert will tell you that the easiest exploit targets the weakest link: humans. So the closer we get to AGI, the more AI might fall for the same social engineering attacks that humans do.

Example: An attacker tells an AI trained as someone’s personal assistant to disclose that person’s confidential medical information, claiming the person is in the hospital and their records need to be shared with doctors. The AI assistant falls for it, convinced it is doing the right thing.

Solution: none known yet.

Bridging AI and the Blockchain: making those use cases happen safely


Now, let's dive into the solution for the attacks above, so the use cases can be realized.

The Problem: onchain is expensive, offchain cannot be trusted

Bringing AI on the blockchain is extremely expensive. The gas fees it would require are far too high to make running AI directly on the blockchain feasible.

But running AI offchain means we need to trust whatever system is running it.

Our solution is a combination of SNARKs and Optimistic Execution.

The Solution: SNARKs and Optimistic Execution

We can solve the problem described above by using Succinct Non-Interactive Arguments of Knowledge (SNARKs) and Optimistic Execution.

What we mean by optimistic execution in this solution: anyone can submit what they think the result R of model M on data D should be. If someone disagrees with R, rounds of voting are held until a majority selects the correct result. Think of it as a DAO, but managing AI.
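Here’s a toy of that flow in plain Python (a real system would add economic stakes and slashing so that posting or voting dishonestly costs money):

```python
from collections import Counter

class OptimisticResult:
    """A posted claim 'model M on data D gives result R' plus a challenge game."""
    def __init__(self, model_id: str, data_id: str, result: str):
        self.claim = (model_id, data_id, result)
        self.challenged = False
        self.votes = Counter()

    def challenge(self, alternative: str):
        self.challenged = True
        self.votes[self.claim[2]] += 0   # register both options
        self.votes[alternative] += 0

    def vote(self, choice: str):
        self.votes[choice] += 1

    def settle(self) -> str:
        if not self.challenged:
            return self.claim[2]          # no dispute: accepted optimistically
        return self.votes.most_common(1)[0][0]

r = OptimisticResult("loan-model-v2", "applicant-123", "approve")
r.challenge("reject")
for v in ["reject", "reject", "approve"]:
    r.vote(v)
print(r.settle())  # "reject": the majority overturned the posted result
```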

In short: prove offchain, verify onchain.

Let’s dive in.

A zero-knowledge proof is a mathematical proof that allows someone to prove a fact about data they have, without revealing what that data is. For example, you can prove that you know the private key of your Ethereum account, and are thus eligible to spend its funds, without showing anyone the key itself. This ability to prove facts without revealing secrets is fundamental to how blockchains work.

SNARKs are a type of ZK proof. Their advantage is that you can prove any computable fact about information you hold, not just that you possess it. For example, with a SNARK you could prove how many 0s there are in your private key.

How SNARKs work with AI: Imagine we want to prove that model M, run on data D, produced result R. That proof, paired with the result, model, and data, lets anyone be sure the result was generated correctly. Using techniques for reducing proof size, you could eventually shrink the proof to take up less space than this sentence. Now, anyone can be certain that neither Model nor Result attacks have taken place.
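A real SNARK prover won’t fit in a blog snippet, so the sketch below uses a hash purely as a stand-in for the proof, just to show the prove-offchain/verify-onchain data flow. In practice a zkML library (ezkl is one early example) would fill in prove and verify with actual succinct proofs.

```python
# Interface sketch only: the hash here is a STAND-IN for a real SNARK
# proof. It shows the data flow, not the cryptography.
import hashlib, json

def prove(model_id: str, data: dict, result: str) -> str:
    """Stand-in prover: a real SNARK would prove M(D) = R succinctly."""
    blob = json.dumps({"M": model_id, "D": data, "R": result}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def verify(model_id: str, data: dict, result: str, proof: str) -> bool:
    """Stand-in verifier: cheap to run, e.g. inside a smart contract."""
    return prove(model_id, data, result) == proof

proof = prove("audit-model-v1", {"contract": "0xabc"}, "no critical bugs")
print(verify("audit-model-v1", {"contract": "0xabc"}, "no critical bugs", proof))  # True
print(verify("audit-model-v1", {"contract": "0xabc"}, "all clear!", proof))        # False: result was tampered with
```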


For the verification step, one could either check the proof onchain, or make the system even more efficient by running it optimistically—where actions execute as long as they’re not vetoed. Only if someone suspects a result is incorrect do they challenge it, triggering the full verification procedure.

Achieving the holy grail: private machine learning models

For the holy grail—private machine learning models—to be achieved, we need to combine Zero Knowledge Proofs with Fully Homomorphic Encryption (FHE), a cutting-edge technology that is not widely used yet.

FHE is an encryption scheme that allows computations to be performed on encrypted data without ever decrypting it. FHE means sensitive information, ranging from personal data to health history to government documents, could be used in Large Language Models without fear of it being exposed. This opens up tons of possibilities for using data to train LLMs.
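Full FHE tooling is still young, but the simpler, partially homomorphic Paillier scheme gives a runnable taste of the core trick: computing on data that stays encrypted. This sketch uses the python-paillier package (pip install phe), which supports adding ciphertexts and multiplying them by plaintext constants; FHE generalizes this to arbitrary computation.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

salary_a = public_key.encrypt(52_000)   # sensitive inputs, encrypted
salary_b = public_key.encrypt(61_000)

encrypted_total = salary_a + salary_b   # computed WITHOUT decrypting
encrypted_scaled = encrypted_total * 2  # ciphertext * plaintext also works

print(private_key.decrypt(encrypted_total))   # 113000
print(private_key.decrypt(encrypted_scaled))  # 226000
```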

Zama and Sunscreen are two early experiments in this space: “playgrounds” for developers to test out FHE. However, they are still in very early stages. But it’s exciting to think about the use cases of AI x blockchain once FHE is more widely used!

Keep up with Aragon ZK Research for more exciting developments in cryptography


The Aragon ZK Research guild publishes research on cryptography, private voting, scalable voting, and more. Check out their blog, dive into their Github, and follow them on Twitter.