
Fetch.ai, Secret Network Partner to Protect Diagnostic Imaging Data With Confidential Computing


Key Takeaways

  • Fetch.ai and Secret Network have expanded their partnership to apply confidential AI to medical imaging.
  • The system runs diagnostic models inside trusted hardware environments, keeping patient data secure.
  • It aims to overcome persistent privacy and compliance barriers slowing hospital adoption of AI.
  • The approach could be adapted for other regulated industries, including finance and defense.

Decentralized AI infrastructure developer Fetch.ai and privacy-preserving smart-contract platform Secret Network are launching a new phase in their collaboration to secure medical imaging data, combining confidential computing and autonomous agents to protect patient privacy while enhancing diagnostic accuracy.

The initiative aims to close one of healthcare’s biggest gaps: using artificial intelligence in life-saving diagnostics without putting sensitive patient data at risk.

According to the official announcement, this obstacle can be overcome through a confidential AI pipeline that analyzes medical scans, such as mammography images, and generates diagnostic reports within protected hardware environments, ensuring that no raw patient data leaves the secure enclave.

Official Collaboration Banner. Source: Fetch.ai on X

AI Collaboration Targets Data Privacy in Diagnostics

Both Fetch.ai and Secret Network are built on the Cosmos SDK, giving them a shared foundation for interoperability and scalability.

To achieve their goal, Fetch.ai contributes its agent-based infrastructure, which enables AI systems to communicate and act autonomously, while Secret Network provides the cryptographic backbone through its SecretVM architecture, built on Intel and NVIDIA’s confidential computing technologies.

The system processes DICOM medical images using several AI models, including Mirai, AsymMirai, and Llama3-Med42-70B. Together, they assess breast cancer risk and produce structured clinical reports entirely within a trusted execution environment (TEE).
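To make the flow concrete, here is a minimal sketch of such a pipeline. The model functions and the enclave boundary are hypothetical stand-ins (the article names Mirai, AsymMirai, and Llama3-Med42-70B but does not publish interfaces); real inference would run inside a hardware TEE, not a Python function.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical stand-in for a risk model such as Mirai; a real model
# would run GPU inference on the decoded DICOM pixel data.
def mirai_risk_score(pixels: bytes) -> float:
    # Toy deterministic "score" derived from the image bytes, in [0, 1].
    return int.from_bytes(hashlib.sha256(pixels).digest()[:2], "big") / 65535

@dataclass
class DiagnosticReport:
    risk_score: float
    summary: str

def run_pipeline_in_enclave(dicom_bytes: bytes) -> DiagnosticReport:
    # In the real system, raw patient data never leaves this boundary;
    # only the structured report is released from the enclave.
    score = mirai_risk_score(dicom_bytes)
    summary = "elevated risk" if score > 0.5 else "routine follow-up"
    return DiagnosticReport(risk_score=score, summary=summary)

report = run_pipeline_in_enclave(b"\x00" * 512)  # placeholder DICOM payload
print(report)
```

The key design point is that only the derived report object crosses the trust boundary, never the input bytes.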

Hospitals Struggle to Trust AI Despite Proven Accuracy

AI has long shown promise in radiology, with models such as CheXNet and Lunit rivaling human accuracy in trials. However, adoption has stalled in hospitals due to privacy concerns, inconsistent data standards, and regulatory hurdles.

Most systems today assist clinicians rather than operate autonomously, as existing legal frameworks have yet to accommodate full algorithmic accountability.

The Fetch.ai and Secret Network framework is designed to overcome those barriers. By combining decentralized agents with confidential computing, it enables hospitals to verify that sensitive data is processed securely and only by approved code, with compliance with HIPAA, GDPR, and the EU AI Act built into the architecture.

Hardware-Level Attestation Adds Transparency and Trust

According to Fetch.ai, every computation within the SecretVM environment produces a cryptographic attestation, proof that the workload was executed within a genuine, uncompromised secure enclave. This allows developers, regulators, and hospitals to confirm the integrity of each AI process without exposing proprietary models or private data.
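The attestation flow can be illustrated with a simplified sketch. Real SecretVM attestation relies on vendor-signed hardware quotes (e.g., from Intel confidential-computing firmware); here an HMAC over a workload measurement stands in for the hardware signature, and the key name is purely illustrative.

```python
import hashlib
import hmac

# Hypothetical root of trust; in real TEEs this is a hardware-held key
# that software can never read, only ask the CPU to sign with.
ENCLAVE_KEY = b"hypothetical-hardware-root-key"

def attest(workload_code: bytes) -> tuple[bytes, bytes]:
    # Measure the approved code, then sign the measurement.
    measurement = hashlib.sha256(workload_code).digest()
    signature = hmac.new(ENCLAVE_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes) -> bool:
    # A regulator or hospital checks the signature without ever seeing
    # the private data or the proprietary model weights.
    expected = hmac.new(ENCLAVE_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

m, s = attest(b"approved-model-binary")
print(verify(m, s))  # True for an untampered measurement
```

A verifier that receives a different measurement (i.e., modified code) gets a failed check, which is the property that lets third parties confirm integrity without data exposure.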

The approach builds a continuous chain of trust from data ingestion to report generation and delivery, with each stage auditable and verifiable. The resulting structured diagnostic reports are then managed by medical agents within Fetch.ai’s ecosystem, enabling secure exchange with health record systems, insurers, and clinicians.
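One common way to make such a multi-stage pipeline auditable is a hash chain, where each stage's record commits to the previous one. The stage names below are taken from the article's description (ingestion, inference, report); the chaining scheme itself is an illustrative assumption, not Fetch.ai's published design.

```python
import hashlib

def chain_stages(stages: list[tuple[str, bytes]]) -> list[dict]:
    # Each audit record commits to the previous digest, so tampering
    # with any one stage invalidates every later link in the chain.
    records, prev = [], b"\x00" * 32  # genesis value
    for name, payload in stages:
        digest = hashlib.sha256(prev + hashlib.sha256(payload).digest()).digest()
        records.append({"stage": name, "digest": digest.hex()})
        prev = digest
    return records

audit_log = chain_stages([
    ("ingestion", b"dicom-bytes"),
    ("inference", b"model-output"),
    ("report", b"structured-report"),
])
print([r["stage"] for r in audit_log])
```

Replaying the same inputs reproduces the same digests, which is what lets an auditor verify each stage after the fact.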

Shared Mission, Broader Implications

Beyond healthcare, the two networks see the project as a model for deploying confidential AI in regulated industries, such as finance and defense.

For Fetch.ai, the initiative demonstrates how agent-based systems can power practical, compliant AI applications, while for Secret Network, it validates the scalability of its privacy-preserving infrastructure under intensive, GPU-driven workloads.

Both teams say their goal is to change how artificial intelligence handles personal data, making privacy and accountability core features rather than optional safeguards. They argue that building public trust in AI depends on systems that can prove their integrity and protect sensitive information throughout every stage of use.

Read More: OceanPal Launches SovereignAI with $120M to Build Privacy AI Infrastructure on NEAR

Disclaimer: All content provided on Times Crypto is for informational purposes only and does not constitute financial or trading advice. Trading and investing involve risk and may result in financial loss. We strongly recommend consulting a licensed financial advisor before making any investment decisions.

Ebrahem is a Web3 journalist, trader, and content specialist with 9+ years of experience covering crypto, finance, and emerging tech. He previously worked as a lead journalist at Cointelegraph AR, where he reported on regulatory shifts, institutional adoption, and sector-defining events. Focused on bridging the gap between traditional finance and the digital economy, Ebrahem writes with a simple, clear, high-impact style that helps readers see the full picture without the noise.
