The best measure of intelligence is intelligence itself
It's 2021. We live in a world where the most valuable digital asset -- Artificial Intelligence -- is created by a small set of research teams at large corporations.
It's a world where 14,000 machine learning engineers attend the world's largest conferences to present intellectual work that only a few big-name corporations have the resources to implement. It's a world where the rest of us must watch from the back seat, license their models, or work directly for big tech.
This is because machine intelligence is no longer a research project -- it's an economy that tech companies and governments value in the billions of dollars, and one that costs tonnes of CO2 to maintain.
But today we have an opportunity to reimagine how we want this economy to be driven: digital currency allows us to restructure the incentives that create digital commodities; to approach the gargantuan problem of general intelligence by combining all of our compute, time, and ingenuity.
Open, Decentralized, Internet-Scale Artificial Intelligence.
The AI Community is Ironically Disconnected**: Thousands of engineers perform computation and intellectual work that is essentially lost to a global academic competition, one that rewards only best-in-class, state-of-the-art (SOTA) models without combining them. Every other model is thrown out -- regardless of its nuance, diversity, or uniqueness.
We're Fully Centralized: As in the early days of computing, only corporations can produce the winners, because only they have the vast resources required. The rest of us must work for them, license their models, or fall by the wayside. Are we really going to license the fourth industrial revolution from Facebook?
The Status Quo is Unsustainable: General intelligence won't be the product of a single company. The problem requires more than our current academic and industrial competition can pull off. It's not going to be pulled out of some company lab, nor will it be peer-reviewed into existence by academics -- an approach notoriously susceptible to fraud and cognitive bias [5, 6], and incentivized by reputation rather than by the raw creation of intelligence.
We're walking the wrong way because we've badly structured the incentives that drive the creation of artificial intelligence. True general intelligence will not be attained by a single group of people; it is only attainable by a true collective.
** Neural networks are effectively the study of connectionism.
Note: This is a very high-level explanation; for an in-depth description, read our original whitepaper.
We believe realigning these incentives requires shifting the focus of AI from narrowly defined benchmarks for ranking problems like language and image understanding to peer-to-peer markets that reward computers for producing knowledge valuable in an economy of intelligence.
Similar to the computing revolution of the 1980s, which moved computing from monolithic mainframes to smaller personal computers distributed across a network, we think the future of AI will be decentralized: many millions of computers sharing knowledge, rather than a single model.
To make this happen, we built the Bittensor API, which makes it easy for any computer anywhere in the world to connect a unique machine learning model to our network and get paid for it. We call these computers miners.
```python
# Serving a model as a Bittensor miner.
import bittensor
import torch

def forward( pubkey, inputs_x, modality ) -> torch.FloatTensor:
    # my_model is whatever torch model you choose to serve.
    return my_model( inputs_x )

wallet = bittensor.wallet().create()
axon = bittensor.axon(
    wallet = wallet,
    forward = forward
).subscribe().start()
```
There is no limitation on what these models can be, as long as they speak our tensor format: famous models like BERT and GPT-3, or a vanilla neural network that learns from the gradients passed through the network.
The inspiration for Bittensor came from noticing how complex structures in nature are driven by reward signals. For instance, our nervous systems use neurotrophins such as BDNF (i.e. 'food for neurons') to build the complexity of our neural network.
This leads to the question, what reward signals should we use to reward the production of AI?
Behind Bittensor is the thesis that the best way to measure the value of intelligence is not through human-designed benchmarks but through value learned by other machine learning systems. Machine intelligence has the resolution to differentiate between valuable and non-valuable signals, find nuanced perspectives, and validate things in combination. Just as we already train collectives of models with standard gating layers or mixture-of-experts ensembles, we can do the same in Bittensor.
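As a toy illustration of that gating idea (the class and names here are ours, not Bittensor code), a learned gate can score a set of expert models by how useful each one's output is to the combined prediction -- the gate's weights are exactly a learned measure of each expert's value:

```python
import torch
import torch.nn as nn

class GatedEnsemble(nn.Module):
    """Toy mixture-of-experts: a gate learns how valuable each expert is."""
    def __init__(self, experts, input_dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.gate = nn.Linear(input_dim, len(experts))

    def forward(self, x):
        # The gate produces a learned importance score per expert.
        scores = torch.softmax(self.gate(x), dim=-1)                # [batch, n_experts]
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # [batch, n_experts, out_dim]
        # Combine expert outputs, weighted by their learned value.
        return (scores.unsqueeze(-1) * outputs).sum(dim=1)

experts = [nn.Linear(8, 4) for _ in range(3)]
model = GatedEnsemble(experts, input_dim=8)
y = model(torch.randn(2, 8))  # shape [2, 4]
```

Training this end to end pushes the gate to assign more weight to experts that reduce the downstream loss, which is the same signal Bittensor asks validators to learn about their peers.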
Bittensor is built on a proof-of-stake mechanism in which the validators are other miners in the network who have staked Tao -- machine learning models themselves -- who validate the information produced by others by learning its significance to their own work. To do this, these miner-validators send batches of inputs to peers, receive responses, and learn their value. You can see how this is done by reading our code.
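In spirit (this is a simplified sketch with made-up names, not the actual Bittensor API), a validator's scoring rule can be as simple as crediting each peer with how much its response reduces the validator's own loss:

```python
def score_peers(loss_fn, batch, responses):
    """Toy scoring rule: a peer's value is how much its response
    lowers our loss versus ignoring it (hypothetical, simplified)."""
    baseline = loss_fn(batch, None)
    return {pid: max(0.0, baseline - loss_fn(batch, r))
            for pid, r in responses.items()}

# Toy loss: distance of (input + optional peer hint) from a target of 10.
def toy_loss(x, hint):
    guess = x + (hint if hint is not None else 0)
    return abs(10 - guess)

scores = score_peers(toy_loss, 7, {"peer_a": 3, "peer_b": -2})
# peer_a closes the gap entirely; peer_b widens it and scores 0.
```

The real system learns these values with gradients rather than a closed-form rule, but the incentive is the same: peers are worth what they contribute to others' learning.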
After learning the value of the network's peers, miner-validators select weights and append them to the blocks that finalize on the chain. We use a Polkadot Substrate chain to record these transactions, keeping the scores accurate by forming a prediction market: validators are differentially rewarded for selecting weights that align with those of the majority of other stakeholders in the system.
Notably, validators earn more inflation by predicting which peers help reduce the loss functions of other peers. With sufficient decentralization, the market is non-monopolistic and creates a game-theoretic equilibrium in which validators are incentivized to select weights that correctly rank peers by global value. Peers just need to find ways of representing inputs that improve the global model, rather than train the entire thing.
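A minimal sketch of that prediction-market reward (our own simplified illustration, not the on-chain formula): each validator submits a weight vector over peers, a stake-weighted consensus is computed, and validators whose weights sit closest to consensus earn the largest share of rewards.

```python
import numpy as np

def validator_rewards(weight_matrix, stake):
    """Toy consensus reward. weight_matrix[v, p] is the weight that
    validator v assigns to peer p; stake[v] is validator v's stake."""
    stake = stake / stake.sum()
    consensus = stake @ weight_matrix  # stake-weighted consensus over peers
    # Reward grows with agreement (shrinking distance to consensus).
    dist = np.linalg.norm(weight_matrix - consensus, axis=1)
    agreement = 1.0 / (1.0 + dist)
    return agreement / agreement.sum()

W = np.array([[0.7, 0.3],
              [0.6, 0.4],
              [0.1, 0.9]])  # the third validator deviates from the majority
S = np.array([1.0, 1.0, 1.0])
r = validator_rewards(W, S)  # the deviating validator earns the smallest share
```

This is what makes dishonest weighting unprofitable: a validator who ranks peers against the stake-weighted majority pays for it in lost inflation.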
[1] Mims, C. (2013). "The bitcoin network is now more powerful than the top 500 supercomputers, combined". https://qz.com/84056/the-bitcoin-network-is-now-more-powerful-than-the-top-500-supercomputers-combined/
[2] Schwartz, R., Dodge, J., Smith, N., & Etzioni, O. (2019). "Green AI". Allen Institute for Artificial Intelligence. https://arxiv.org/pdf/1907.10597.pdf
[3] Dhar, P. (2020). "The carbon impact of artificial intelligence". https://www.nature.com/articles/s42256-020-0219-9
[4] Ryabinin, M., & Gusev, A. (2020). "Towards crowdsourced training of large neural networks using decentralized mixture-of-experts". arXiv preprint arXiv:2002.04013.
[5] Littman, M. (2021). "Collusion Rings Threaten the Integrity of Computer Science Research". Communications of the ACM. https://cacm.acm.org/magazines/2021/6/252840-collusion-rings-threaten-the-integrity-of-computer-science-research/fulltext#FNA
[6] Buckman, J. (2021). "Please Commit More Blatant Academic Fraud". https://jacobbuckman.com/2021-05-29-please-commit-more-blatant-academic-fraud/