Mechanics

Delve into the core mechanics powering the TRAST platform, including the dynamic Trust Score system, Vote Staking rewards, community validation, and anti-abuse measures.

"We're not building just another crypto tool. We're building the Web3 trust layer."

Overview

This document explains the key systems and rules that govern how the TRAST platform functions, rewards participation, and maintains trust. Understanding these mechanics helps users engage effectively and contribute meaningfully to the ecosystem.

1. The Trust Score System

"Trust me bro" – web3, 2019–2024 "TRAST" – web3, 2025→

At the heart of TRAST is the Trust Score. This isn't just a static rating but a dynamic score calculated from multiple factors, including:

  • Community reviews

  • AI analysis

  • Verification status

  • Activity history

Users progress through reputation levels (Novice, Verified, Trusted, Expert), which influence their impact and access to platform features.

  • Key takeaway: Trust is earned through consistent, quality contributions and verification.
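To make the calculation concrete, here is a minimal sketch of how a multi-factor score of this kind could be combined. The factor normalization, weights, and 0–100 scale are illustrative assumptions, not TRAST's published formula.

```typescript
// Illustrative only: weights and factor scaling are assumptions, not TRAST's actual formula.
interface TrustFactors {
  communityReviews: number;   // normalized 0..1 aggregate of review sentiment
  aiAnalysis: number;         // normalized 0..1 score from automated analysis
  verificationStatus: number; // 0 = unverified, 1 = fully verified
  activityHistory: number;    // normalized 0..1 based on consistent participation
}

const WEIGHTS: Record<keyof TrustFactors, number> = {
  communityReviews: 0.4,
  aiAnalysis: 0.3,
  verificationStatus: 0.2,
  activityHistory: 0.1,
};

/** Combine the factors into a single 0..100 trust score. */
function trustScore(f: TrustFactors): number {
  const weighted = (Object.keys(WEIGHTS) as (keyof TrustFactors)[])
    .reduce((sum, key) => sum + WEIGHTS[key] * f[key], 0);
  return Math.round(weighted * 100);
}

// Example: a verified entity with strong reviews but a short activity history.
console.log(trustScore({
  communityReviews: 0.85,
  aiAnalysis: 0.7,
  verificationStatus: 1,
  activityHistory: 0.5,
})); // 80
```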

2. Inline Search & Seamless Verification

"What if 1 billion Telegram users could access the same web3 insights instantly? Well... they can! 💥"

  • Accessibility: Type @TrastDotBot followed by a search term in any chat to instantly view and share entity information

  • Zero-Click Information: See critical trust scores and metrics in the dropdown preview without any additional clicks or app switching

  • Real-time Protection: Make safety decisions within seconds while staying in your conversation

  • Quick Actions: Rate entities, view details, or report suspicious activity directly from search results

  • Frictionless Addition: Add missing entities on-the-fly when they don't exist in the database

The inline search system is central to TRAST's mission, making verification a natural part of conversations and enabling community-driven trust building in real-time.

  • Key takeaway: Inline search transforms verification from a separate step into an integral part of everyday communication.
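As a rough illustration of that zero-click flow, the sketch below answers a Telegram inline query with a trust summary in the dropdown preview, using the standard Bot API answerInlineQuery method. The entity lookup, result layout, and environment variable are placeholders for demonstration, not TRAST's production bot code.

```typescript
// Minimal sketch: answer an inline query with trust data shown in the dropdown preview.
const TELEGRAM_API = `https://api.telegram.org/bot${process.env.BOT_TOKEN}`;

interface EntitySummary { id: string; name: string; trustScore: number; reviews: number }

async function findEntities(query: string): Promise<EntitySummary[]> {
  // Placeholder lookup: the real bot would query the trust database here.
  const demo: EntitySummary[] = [{ id: "1", name: "ExampleDEX", trustScore: 82, reviews: 341 }];
  return demo.filter((e) => e.name.toLowerCase().includes(query.toLowerCase()));
}

async function handleInlineQuery(inlineQueryId: string, query: string): Promise<void> {
  const entities = await findEntities(query);

  // Each result renders in the dropdown preview, so the score is visible with zero clicks.
  const results = entities.slice(0, 10).map((e) => ({
    type: "article",
    id: e.id,
    title: `${e.name} (Trust Score ${e.trustScore}/100)`,
    description: `${e.reviews} community reviews`,
    input_message_content: {
      message_text: `🔎 ${e.name}\nTrust Score: ${e.trustScore}/100 (${e.reviews} reviews)`,
    },
  }));

  await fetch(`${TELEGRAM_API}/answerInlineQuery`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ inline_query_id: inlineQueryId, results }),
  });
}
```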

3. Staking & Voting Mechanics

"No more guessing games. No more blind trust."

  • Earning Rewards: Stake TRAST tokens to earn continuous, linear rewards based on the amount staked and duration. There's no fixed APY.

  • Gaining Influence: Staking also grants voting power. Users can stake on specific entities (projects, profiles) to boost their visibility and ranking.

  • Flexibility & Bonuses: Users can unstake flexibly at any time. Opting for time-locks (e.g., 30, 90, 180, 365 days) grants bonus rewards, but early unstaking incurs a penalty (50% of stake + loss of bonus).

  • Unified Pool: All staking (general, voting, feature-related) contributes to and earns from the same central reward pool, which is funded partly by platform fees.

  • Key takeaway: Staking rewards participation and allows users to directly influence platform visibility.
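A minimal sketch of these rules follows, assuming an illustrative linear accrual rate and lock-bonus multipliers; only the 50% early-exit penalty and the loss of the bonus come from the mechanics above.

```typescript
// Illustrative sketch of linear staking rewards with lock bonuses and an early-unstake
// penalty. The accrual rate and bonus multipliers are assumptions, not TRAST's parameters.
const BASE_RATE_PER_DAY = 0.0005; // assumed linear accrual per token per day (no fixed APY)

// Assumed bonus multipliers for the time-lock options mentioned above.
const LOCK_BONUS: Record<number, number> = { 0: 1.0, 30: 1.1, 90: 1.25, 180: 1.5, 365: 2.0 };

function accruedRewards(staked: number, daysStaked: number, lockDays: number): number {
  return staked * BASE_RATE_PER_DAY * daysStaked * (LOCK_BONUS[lockDays] ?? 1.0);
}

/** Returns the tokens returned and the rewards paid out for an unstake request. */
function unstake(staked: number, daysStaked: number, lockDays: number) {
  const earlyExit = lockDays > 0 && daysStaked < lockDays;
  if (earlyExit) {
    // Early unstaking: 50% of the stake is forfeited and the lock bonus is lost.
    return { returned: staked * 0.5, rewards: accruedRewards(staked, daysStaked, 0) };
  }
  return { returned: staked, rewards: accruedRewards(staked, daysStaked, lockDays) };
}

// Example: 10,000 TRAST locked for 90 days and withdrawn after the full term.
console.log(unstake(10_000, 90, 90)); // { returned: 10000, rewards: 562.5 }
```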

4. Reward Distribution

"Join us. One protocol. One interface. One army."

Platform rewards, primarily in TRAST tokens, are distributed based on meaningful participation:

  • Staking: Continuous rewards for staking, with bonuses for time-locks.

  • Validation: Experts earn tokens by validating data and AI insights in the TrustMarketplace.

  • Quality Contributions: High-quality reviews, helpful comments, and accurate data submissions are recognized and rewarded through Community Incentive programs.

  • Profile Accuracy: Maintaining an accurate and up-to-date profile can influence reward multipliers.

  • Key takeaway: Rewards are tied to actions that add value and maintain the integrity of the ecosystem.

5. Verification & Identity

"Seeing isn't believing anymore. Verifying is."

Verification is crucial for increasing trust and impact. TRAST offers:

  • User Verification: Users can verify their accounts to gain higher impact, access more features, and build reputation (Identity Integration).

  • Entity Verification: Projects, bots, and other entities can achieve verified status by adding the TRAST Bot as an admin to their Telegram channel/group, allowing for seamless integration and verification.

  • Key takeaway: Verification signals commitment and enhances credibility for both users and entities.
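For entity verification, the core signal is whether the TRAST Bot holds admin rights in the project's chat. The sketch below checks that with the standard Telegram Bot API getChatMember method; the chat id and bot user id are placeholders, and this is not TRAST's actual verification pipeline.

```typescript
// Sketch: confirm the bot has been added as an admin of a channel/group.
const API = `https://api.telegram.org/bot${process.env.BOT_TOKEN}`;

async function botIsAdmin(chatId: string, botUserId: number): Promise<boolean> {
  const res = await fetch(`${API}/getChatMember`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, user_id: botUserId }),
  });
  const data = await res.json();
  // The bot must hold administrator (or creator) rights before verified status is granted.
  return data.ok && ["administrator", "creator"].includes(data.result.status);
}

// Example: verify a project's channel once the team has added the bot as admin.
botIsAdmin("@example_project_channel", 123456789).then((ok) =>
  console.log(ok ? "Entity can be marked verified" : "Bot is not an admin yet"),
);
```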

6. Data Contribution & Validation

"The filtering system does its job shockingly well - if something shady is there, it gets spotted."

New information enters TRAST through user contributions (reviews, comments, new entity submissions). This data is validated through:

  • Community Validation: Depending on the contributor's trust level, submissions may require validation by multiple other users (Experts may be auto-approved).

  • AI Checks: Automated systems (Scam Radar, quality checks) analyze submissions.

  • Quality Control: A multi-stage process involving peer review and potentially expert oversight ensures data accuracy.

  • Key takeaway: Information is vetted by both the community and AI to maintain reliability.

7. Profile Accuracy Rating & Anti-Abuse

"Powered by you, guided by AI."

To combat manipulation, TRAST employs:

  • Profile Rating: A system allows users to rate the perceived accuracy/authenticity of other user profiles. Votes are weighted by the voter's trust level.

  • Impact Calculation: The impact of user actions (rating entities, rating profiles, validating) is influenced by:

    • Trust level

    • Activity history

    • Time decay factors

    • Activity correlation bonuses

  • AI Detection: Systems actively monitor for spam, bot activity, coordinated manipulation (e.g., vote farming), and other abuse patterns. Penalties include vote nullification and account flagging.

  • Appeal System: Users flagged for potential abuse have a mechanism to appeal, reviewed by the community.

  • Key takeaway: Multiple mechanisms work together to protect the integrity of the trust system.

8. Community Governance

"TRAST gives Web3 communities a memory, a voice and a signal."

While still evolving, the platform aims for community involvement in decisions regarding feature priorities and platform rules, utilizing a sophisticated trust-weighted voting system.

Detailed Implementation

1. Request Management

  • Missing Data Requests: Users can highlight information gaps

  • Priority Assessment: Community determines which requests get addressed first

  • Request Tracking: Transparent system shows progress of all requests

  • Reward Allocation: Contributors who fulfill requests receive tokens

  • Profile Impact: Higher accuracy profiles receive priority for their requests

2. Contribution Flow

  • Data Submission: Users submit reviews, ratings, or new entity information

  • Source Verification: Contributors provide evidence to support their submissions

  • Quality Checks: Automated systems verify format and basic quality

  • Community Validation: Other users review and validate submissions

  • Profile Rating Impact: User's trust level affects submission weight

3. Validation Requirements

Trust level thresholds determine validation requirements:

  • Novice: requires 5+ validations

  • Verified: requires 3+ validations

  • Trusted: requires 1-2 validations

  • Expert: auto-approved with review
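Expressed as code, these thresholds amount to a simple lookup. A minimal sketch, treating "auto-approved" as a zero-validation requirement:

```typescript
// The thresholds above, expressed as a lookup table (sketch).
type TrustLevel = "Novice" | "Verified" | "Trusted" | "Expert";

const REQUIRED_VALIDATIONS: Record<TrustLevel, number> = {
  Novice: 5,   // 5+ independent validations
  Verified: 3, // 3+ validations
  Trusted: 1,  // 1-2 validations (minimum shown here)
  Expert: 0,   // auto-approved, still subject to review
};

function isApproved(level: TrustLevel, validations: number): boolean {
  return validations >= REQUIRED_VALIDATIONS[level];
}

console.log(isApproved("Verified", 2)); // false: still needs one more validation
console.log(isApproved("Expert", 0));   // true: auto-approved with review
```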

4. Quality Control

  • Automated Checks: AI systems review submissions for potential issues

  • Peer Review Process: Community members validate submissions

  • Expert Oversight: Higher trust users can override decisions when necessary

  • Historical Accuracy Tracking: System tracks user validation success rate

  • Profile Consistency: Checks for alignment with past behavior

Profile Rating System

The platform includes a nuanced profile accuracy assessment:

1. Rating Mechanics

  • Simple Rating System: Accurate/inaccurate voting mechanism

  • Trust-Weighted Votes: Higher trust users have more impact

  • Rate Limiting: Maximum 3 ratings per day to prevent abuse

  • Cooldown Periods: Prevents rapid-fire rating

  • Anti-Manipulation Checks: Systems to detect coordinated rating attacks
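A minimal sketch of the rate limit and cooldown described above; the cooldown length is an assumed value, since only the 3-per-day cap is specified.

```typescript
// Sketch of the per-user daily cap and cooldown on profile ratings.
const MAX_RATINGS_PER_DAY = 3;
const COOLDOWN_MS = 10 * 60 * 1000; // assumed 10-minute gap between ratings

const ratingLog = new Map<string, number[]>(); // userId -> rating timestamps (ms)

function canRate(userId: string, now = Date.now()): boolean {
  const dayAgo = now - 24 * 60 * 60 * 1000;
  const recent = (ratingLog.get(userId) ?? []).filter((t) => t > dayAgo);
  ratingLog.set(userId, recent);

  const withinDailyLimit = recent.length < MAX_RATINGS_PER_DAY;
  const pastCooldown = recent.length === 0 || now - Math.max(...recent) >= COOLDOWN_MS;
  return withinDailyLimit && pastCooldown;
}

function recordRating(userId: string, now = Date.now()): void {
  ratingLog.set(userId, [...(ratingLog.get(userId) ?? []), now]);
}

// Usage: check the limit before accepting an accurate/inaccurate vote.
if (canRate("user-42")) recordRating("user-42");
```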

2. Impact Calculation

  • Base Rating Value: Starting point for all ratings

  • Trust Level Multipliers:

    • Fresh Account: 0.1x impact initially, gradually increasing with legitimate activity. Systems continuously scan for spam/scam accounts and nullify their votes.

    • Anonymous: 0.5x impact. Anonymous voting allows participation from privacy-conscious users, but with limited impact. Subject to additional verification through behavioral analysis and pattern recognition.

    • Novice: 1x impact. Standard baseline for new verified accounts. Activity monitored for quality and consistency, with good behavior leading to faster trust level progression.

    • Verified: 2x impact. Users with established track records of reliable contributions. Their doubled voting power reflects demonstrated commitment to platform quality.

    • Trusted: 3x impact. Users with exceptional track records who serve as community leaders. They help train AI systems and set community standards.

    • Expert: 4x impact. The highest trust level for the most dedicated and knowledgeable members. They help develop policies, test features, and guide community growth.

  • Time Decay Factor: Recent activities carry more weight than older ones, following a calibrated decay curve.

  • Activity Correlation Bonus: Rewards users who demonstrate consistent, meaningful engagement across different platform features.

  • Protection Against Abuse: The platform employs advanced AI to detect and penalize users engaged in abusive practices.
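Putting these factors together, a single rating's impact could be computed roughly as follows. The multipliers mirror the list above; the decay half-life and the size of the correlation bonus are assumptions for illustration.

```typescript
// Sketch of combining trust multiplier, time decay, and correlation bonus into one impact value.
const TRUST_MULTIPLIER: Record<string, number> = {
  fresh: 0.1, anonymous: 0.5, novice: 1, verified: 2, trusted: 3, expert: 4,
};

const BASE_RATING_VALUE = 1;
const DECAY_HALF_LIFE_DAYS = 90; // assumed: a rating's weight halves every 90 days

function ratingImpact(level: string, ageDays: number, correlationBonus = 0): number {
  const multiplier = TRUST_MULTIPLIER[level] ?? 1;
  const decay = Math.pow(0.5, ageDays / DECAY_HALF_LIFE_DAYS); // time decay factor
  const bonus = 1 + correlationBonus;                          // e.g. 0.1 for cross-feature activity
  return BASE_RATING_VALUE * multiplier * decay * bonus;
}

// A fresh Trusted-level rating with a 10% correlation bonus, and the same rating 90 days later.
console.log(ratingImpact("trusted", 0, 0.1));  // 3.3
console.log(ratingImpact("trusted", 90, 0.1)); // 1.65
```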

3. Appeal System

The platform includes a comprehensive appeal system:

  • Community-Driven Review: Appeals are reviewed by other users

  • Learning Mechanism: Accepted appeals train the platform's AI systems

  • Engagement Benefits:

    • Community validation strengthens teamwork

    • Validators improve fraud detection skills

    • Good reviewers gain reputation

    • Cases establish community-driven rules

    • Shared responsibility through reward incentives

4. Score Integration

  • Trust Score Impact: Profile accuracy affects up to 10% of trust score

  • Reward Multipliers: Higher accuracy increases rewards

  • Validation Weight: Affects how much impact a user's validations have

  • Governance Voting: Influences voting power in platform decisions

  • Feature Access: Determines access to advanced platform features
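As a rough sketch of the first point, profile accuracy could be blended into the final score so that it never moves more than 10% of it. The exact blend below is an assumption; the documentation only fixes the 10% cap.

```typescript
// Sketch: fold profile accuracy into the trust score, capped at a 10% share.
function finalTrustScore(baseScore: number, profileAccuracy: number): number {
  // profileAccuracy is a 0..1 ratio of "accurate" votes; it can move at most 10 points.
  const accuracyComponent = Math.min(Math.max(profileAccuracy, 0), 1) * 10;
  return Math.min(100, baseScore * 0.9 + accuracyComponent);
}

console.log(finalTrustScore(80, 0.95)); // 81.5 (strong profile accuracy lifts the score)
console.log(finalTrustScore(80, 0.2));  // 74   (a poorly rated profile drags it down)
```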

5. Protection Mechanisms

  • Sybil Attack Prevention: Systems to prevent multiple fake accounts

  • Vote Manipulation Detection: Algorithms identify unnatural voting patterns

  • Rating Abuse Penalties: Consequences for manipulation attempts

  • Regular Audits: Systematic reviews of the rating system

Technical Implementation

For detailed technical specifications, refer to:

  • Technical Documentation

  • API Documentation

  • Integration Guide

Next Steps

"You don't need to follow the noise. You don't need to chase hype. You just need tools to see clearly."

  • Explore the Trust System

  • Review Core Concepts

  • Join the Community
