Billions of dollars are lost every year on DeFi platforms to transactions that exploit business-logic or accounting vulnerabilities. Existing defenses focus on static code analysis, public mempool screening, attacker contract detection, or trusted off-chain monitors, none of which prevents exploits submitted through private relays or malicious contracts that execute within the same block. We present the first decentralized, fully on-chain learning framework that: (i) performs gas-prohibitive computation on Layer-2 to reduce cost, (ii) propagates verified model updates to Layer-1, and (iii) enables gas-bounded, low-latency inference inside smart contracts. A novel Proof-of-Improvement (PoIm) protocol governs the training process and verifies each decentralized micro update as a self-verifying training transaction. PoIm accepts an update only if it demonstrably improves at least one core metric (e.g., accuracy, F1-score, precision, or recall) on a public benchmark without degrading any of the other core metrics, while adversarial proposals are financially penalized via an adaptable test set that evolves with emerging threats. We develop quantization and loop-unrolling techniques that enable inference for logistic regression, SVM, MLPs, CNNs, and gated RNNs (with support for formally verified decision tree inference) within the Ethereum block gas limit, while remaining bit-exact to their off-chain counterparts, as formally proven in Z3. We curate 298 unique real-world exploits (2020–2025) with 402 exploit transactions across eight EVM chains, collectively responsible for $3.74B in losses. We demonstrate that on-chain ML governed by PoIm detects previously unseen attacks with over 97% attack detection accuracy and 82.0% F1. A single inference typically incurs zero cost, e.g., when made via a read-only external call.
Fully on-chain inference consumes 57,603 gas (≈ $0.18) for linear models, 143,647 gas (≈ $0.49) for CNN(F2, K1), and 506,397 gas (≈ $1.77) for CNN(F8, K4) on L1 (e.g., Ethereum). Our results show that practical and continually evolving DeFi defenses can be embedded directly in protocol logic without trusted guardians, and our solution achieves highly cost-effective protection while filling a critical gap between vulnerability scanners and real-time transaction screening.
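The PoIm acceptance rule described above (an update passes only if it improves at least one core metric without degrading any other) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual on-chain implementation; the function and metric names are assumptions:

```python
CORE_METRICS = ("accuracy", "f1", "precision", "recall")  # core metrics named in the abstract

def poim_accepts(old: dict, new: dict) -> bool:
    """Accept a proposed model update only if it strictly improves at least
    one core benchmark metric and degrades none of the others."""
    improved = any(new[m] > old[m] for m in CORE_METRICS)
    degraded = any(new[m] < old[m] for m in CORE_METRICS)
    return improved and not degraded

# Example: improving F1 while holding everything else is accepted;
# trading recall away for F1 is rejected, as is a no-op update.
baseline = {"accuracy": 0.97, "f1": 0.82, "precision": 0.85, "recall": 0.79}
```

Rejected proposals would, per the abstract, additionally forfeit a financial stake; that penalty logic is omitted here.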
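The bit-exactness claim rests on running inference entirely in integer arithmetic, so that an EVM contract and its off-chain counterpart compute identical results. A minimal fixed-point sketch for a linear classifier, assuming a Q16.16 scale (an arbitrary illustrative choice; the paper's actual quantization scheme and model formats may differ):

```python
SCALE = 1 << 16  # Q16.16 fixed point; illustrative, not the paper's parameter

def quantize(x: float) -> int:
    """Map a real-valued weight or feature to a scaled integer."""
    return int(round(x * SCALE))

def fixed_point_score(weights_q: list[int], bias_q: int, features_q: list[int]) -> int:
    """Integer-only dot product plus bias. Each product of two Q16.16 values
    is Q32.32, so the accumulator is rescaled once at the end."""
    acc = bias_q * SCALE  # lift bias to Q32.32 to match the products
    for w, f in zip(weights_q, features_q):
        acc += w * f
    return acc // SCALE  # back to Q16.16

def classify(weights: list[float], bias: float, features: list[float]) -> int:
    """Sign of the fixed-point score decides the label; the same integer
    routine could run unchanged in a smart contract."""
    wq = [quantize(w) for w in weights]
    fq = [quantize(f) for f in features]
    return 1 if fixed_point_score(wq, quantize(bias), fq) >= 0 else 0
```

Because every step is deterministic integer arithmetic, an off-chain replica of this routine reproduces the on-chain result bit for bit, which is the property the paper reports proving formally in Z3 for its models.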
@InProceedings{alhaidari_et_al:LIPIcs.AFT.2025.35,
  author    = {Alhaidari, Abdulrahman and Palanisamy, Balaji and Krishnamurthy, Prashant},
  title     = {{On-Chain Decentralized Learning and Cost-Effective Inference for DeFi Attack Mitigation}},
  booktitle = {7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages     = {35:1--35:27},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN      = {978-3-95977-400-0},
  ISSN      = {1868-8969},
  year      = {2025},
  volume    = {354},
  editor    = {Avarikioti, Zeta and Christin, Nicolas},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.35},
  URN       = {urn:nbn:de:0030-drops-247548},
  doi       = {10.4230/LIPIcs.AFT.2025.35},
  annote    = {Keywords: DeFi attacks, on-chain machine learning, decentralized learning, real-time defense}
}