
PlayBack Network

PlayBack Network is a decentralized data marketplace for training a large action model


Created At

HackFS 2024

Winner of

Galadriel - The best on-chain AI agent(s) built on Galadriel Devnet, 1st place

CoopHive - Best Use Cases Utilizing Latent Computing Power

Project Description

AI is amazing, except it can't actually do anything without a human babysitting it - ChatGPT doesn't have hands, so everyone has to do loads of manual work that should be automated!

A Large Action Model (LAM) is a new kind of foundational artificial intelligence model that can understand and execute complex tasks by translating human intentions into action.

We are giving ChatGPT hands so it can take actions on your devices.

LAMs are a new kind of foundational AI model BUT... there is very little training data. Only around 2,000 hours of recordings are available to train these models, which is absurd.

Because of this, all existing LAMs rely on In-Context Learning (ICL), a prompting style where you give an LLM a set of input:output examples. This approach is severely limited by context window sizes and is far inferior to training an actual model - which is what PlayBack unlocks.
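To make the bottleneck concrete, here is a rough sketch of what an ICL prompt for action prediction looks like. The task wording and action format are hypothetical, but the constraint is real: every demonstration consumes context-window tokens, so only a handful of examples can ever be shown per request.

```ts
// Hypothetical few-shot (In-Context Learning) prompt for next-action
// prediction. Each demonstration eats context-window tokens, which is
// why ICL cannot substitute for training on a large dataset.
const userGoal = "Rename the open document";          // example input
const visibleElements = ["file menu", "title field"]; // example UI state

const prompt = `You control a computer. Given a goal and the visible UI, output the next action.

Goal: "Archive the newest email"  UI: [inbox list, archive button]
Action: click(archive_button)

Goal: "Open settings"  UI: [menu icon, search bar]
Action: click(menu_icon)

Goal: "${userGoal}"  UI: [${visibleElements.join(", ")}]
Action:`;

console.log(prompt);
```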

Fundamentally, we are solving this data problem: we are creating a decentralized marketplace for screen recordings of people completing various tasks.

Our key contributions include:

  1. A novel video redaction algorithm that redacts submitted recordings client-side, before they ever leave the device, using a combination of OCR and NLP. The output looks like a redacted CIA document, preserving privacy in a Zero Knowledge manner while still enabling LAM training (sketched below)
  2. A deployed solution that leverages Zero Knowledge (via SoM) to price private data with a public GPT
  3. A novel pricing algorithm that weighs the semantic content of a submission against prior submissions to determine how many tokens, if any, to reward it
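To illustrate the first contribution, here is a minimal sketch of the client-side redaction step. It assumes tesseract.js for the OCR stage and uses a simple regex pass as a stand-in for the NLP stage; the patterns and word-level handling are illustrative, not our production pipeline.

```ts
import Tesseract from "tesseract.js";

// Regexes standing in for the NLP stage: emails and long digit runs
// (e.g. account numbers). Illustrative only.
const SENSITIVE = [/\S+@\S+\.\S+/, /\b\d{8,}\b/];

// OCR a frame, then black out any word matching a sensitive pattern,
// so frames leave the device looking like a redacted document.
async function redactFrame(canvas: HTMLCanvasElement): Promise<void> {
  const { data } = await Tesseract.recognize(canvas, "eng");
  const ctx = canvas.getContext("2d")!;
  ctx.fillStyle = "black";
  for (const word of data.words) {
    if (SENSITIVE.some((re) => re.test(word.text))) {
      const { x0, y0, x1, y1 } = word.bbox;
      ctx.fillRect(x0, y0, x1 - x0, y1 - y0);
    }
  }
}
```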

Decentralizing the data used to train these LAMs would democratise them and enable researchers to improve the technology at a much faster pace. Moreover, we have designed incentive mechanisms that align contributors and users of the data in a way that encourages the creation of a massive LAM dataset and lets contributors share in the economic upside generated from the models trained on their data.

Our focus for HackFS is to solve the data problem, but we also intend to train a decentralized LAM and build a solution that lets you automate complex tasks with a LAM executing actions on your device.

It should feel like Minority Report when you're using your computer - it's 2024!

Our submission homed in on the supply side of the decentralized market. We built a frontend tool that records the user's screen and converts the recording into frames. These frames are then run through a model that redacts sensitive data from the images (e.g. email addresses, account details).
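A rough sketch of that capture step, assuming the browser's getDisplayMedia API and a fixed one-frame-per-second sampling rate (the actual tool differs in detail):

```ts
// Capture the user's screen and sample one frame per second onto a
// canvas, handing each frame to the redaction step before upload.
async function captureFrames(onFrame: (frame: HTMLCanvasElement) => void) {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play(); // dimensions are known once playback starts

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  const timer = setInterval(() => {
    ctx.drawImage(video, 0, 0);
    onFrame(canvas);
  }, 1000);

  // Stop sampling when the user ends the screen share.
  stream.getVideoTracks()[0].addEventListener("ended", () => clearInterval(timer));
}
```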

The frontend sends the redacted frames to the backend, where they are converted to segmented data using a SoM model running on CoopHive. The SoM image data is sent to S3, and a Lambda function takes the image URLs and the user's wallet address and passes them to our custom OpenAiChatGptVision contract on Galadriel. We include a specific system message and prompt that tells the GPT on teeML to value the data for us.

Once the data has been valued, our contract on Galadriel emits the valuation in an event. An EC2 instance listens for these events, extracts the data from them, and saves the user's wallet address, the segmented image data, and the valuation to Lighthouse on Filecoin. It then calls a Lambda that creates a signed message from the user's wallet address and the valuation; this prevents manipulation of the data.

The signed message is sent to the frontend, which constructs a transaction containing it and sends it to our SignedMinter contract on Filecoin. The contract verifies the signature and, if it is valid, mints the specified amount of $BACK tokens to the user's wallet.
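The signed message is the crux of the anti-tamper design. Here is a minimal sketch using ethers v6; the exact fields packed into the digest (recipient wallet and valuation) and the environment variable names are our assumptions for illustration:

```ts
import { ethers } from "ethers";

// Key held by the Lambda; the SignedMinter contract is configured with
// the corresponding signer address. (Hypothetical env var name.)
const lambdaSigner = new ethers.Wallet(process.env.MINTER_SIGNER_KEY!);

// Sign (recipient, valuation) together so the frontend can change neither.
async function signValuation(userWallet: string, valuation: bigint) {
  const digest = ethers.solidityPackedKeccak256(
    ["address", "uint256"],
    [userWallet, valuation]
  );
  // EIP-191 personal_sign over the raw digest bytes
  const signature = await lambdaSigner.signMessage(ethers.getBytes(digest));
  return { userWallet, valuation, signature };
}

// The SignedMinter contract re-derives the digest on-chain and recovers
// the signer before minting; the same check, mirrored off-chain:
function isValid(userWallet: string, valuation: bigint, signature: string) {
  const digest = ethers.solidityPackedKeccak256(
    ["address", "uint256"],
    [userWallet, valuation]
  );
  return (
    ethers.verifyMessage(ethers.getBytes(digest), signature) ===
    lambdaSigner.address
  );
}
```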

Take $BACK your data.

How it's Made

The PlayBack Network tech stack is split into the following components:

  1. Front-end that enables users to:
    1. Submit recordings of themselves completing tasks to earn tokens
    2. Purchase a license to use data
    3. Submit requests for data
  2. Backend
    1. GraphQL API that receives frame uploads from the frontend and stores the image data in S3 and the frame JSON data in DynamoDB.
    2. Galadriel Lambda that takes the S3 URLs, the user's wallet address, a ChatGPT message prompt, and the taskId, and packages these into a signed transaction sent to our custom OpenAiChatGptVision contract on the Galadriel network.
    3. An EC2 instance that listens for events emitted on the Galadriel network. It watches for the 'ResponseReceived' event, which indicates that ChatGPT Vision on Galadriel has provided a token value for the data. It then extracts the token valuation and various other data points from the event and saves the valuation, images, wallet address, and taskId to Lighthouse on Filecoin. Finally, it calls the Data Payload Lambda (see the listener sketch after this list).
    4. Data Payload Lambda that receives the valuation and wallet address, creates a signed message from them, and sends it to the frontend. The signed message prevents users from tampering with the recipient wallet or token amount. The frontend wraps the signed message in a transaction and sends it to our SignedMinter contract deployed on Filecoin.
  3. Galadriel L1
    1. A custom OpenAiChatGptVision contract that emits ResponseReceived events containing the data valuation provided by GPT Vision on Galadriel.
  4. Filecoin / FEVM / FVM
    1. $BACK Token contract - mints tokens to the recipient wallet. The mint function can only be called by the SignedMinter contract.
    2. SignedMinter contract - receives the valuation, user wallet, and signed message in a transaction from the frontend. It verifies the signature and, if the signature is valid, mints the specified token amount to the recipient wallet (the user's wallet that submitted the data).
  5. CoopHive:
    1. Why labelled frames? Labelled frames greatly improve the accuracy of inference by the GPT-4o model.
    2. Thanks to https://github.com/microsoft/SoM/, we were able to build a CLI on top of this library that lets us leverage CoopHive. We created an SDL (CoopHive module) to use its decentralized GPU cluster for SoM.
    3. Problems we ran into: we were quite new to DePIN, Bacalhau, and the SDL, so it took a good amount of time to figure things out. Two blockers in particular burned our time:
    4. We had to cut our script down to just one model (SAM) to get things shipped to Bacalhau and CoopHive.
    5. Since Bacalhau is a sandboxed environment, we had to pre-download every model and its dependencies, which bloated our Docker image from 2.2 GB to 11.3 GB.
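As referenced in the backend section above, here is a minimal sketch of the EC2 event listener, assuming ethers v6, a simplified ResponseReceived signature (the real contract's event carries more fields), and the Lighthouse SDK's uploadText helper:

```ts
import { ethers } from "ethers";
import lighthouse from "@lighthouse-web3/sdk";

// Hypothetical RPC URL and contract address; simplified event shape.
const provider = new ethers.JsonRpcProvider(process.env.GALADRIEL_RPC_URL);
const abi = [
  "event ResponseReceived(address wallet, string taskId, uint256 valuation)",
];
const contract = new ethers.Contract(
  process.env.VISION_CONTRACT_ADDRESS!,
  abi,
  provider
);

contract.on("ResponseReceived", async (wallet, taskId, valuation) => {
  // Persist the valuation record to Filecoin via Lighthouse.
  const record = JSON.stringify({
    wallet,
    taskId,
    valuation: valuation.toString(),
  });
  const res = await lighthouse.uploadText(record, process.env.LIGHTHOUSE_API_KEY!);
  console.log("stored on Filecoin, CID:", res.data.Hash);
  // ...then invoke the Data Payload Lambda with (wallet, valuation).
});
```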
