Customizing Your Miner
Making it your own.
In this tutorial, we assume that you have Bittensor installed, have a working copy of the template miner, and have generated working cold and hot keys. If you do not have these prerequisites, please follow the Running a Miner tutorial to get a miner working.

Basic Architecture

The architecture of the Bittensor API is rather simple.
  • Dataset: Presently sitting at 1.5TB in size and growing, the dataset consists of over 150 million files and is hosted entirely on IPFS. This module pulls and organizes text data from IPFS.
  • Subtensor: Wraps all the functions you need to connect to the chain, including (1) setting weights on your peers, (2) syncing with the latest block, and (3) submitting your transactions to the chain.
  • Metagraph: Stores the block information from the subtensor as a torch.nn.Module.
  • Axon: Manages requests from peers.
  • Wallet: Manages all the wallet keys used to access your Tao.
  • wandb: Handles all the logging to Weights & Biases.
The miners are the Bittensor clients, which use the Bittensor API to serve their models and communicate with the chain/peers. There are three kinds of template miners that we have built to help you get started.
  • Server: Serves a Hugging Face model and answers requests from peers. Earns Tao when peers set weights on it.
  • Validator: Sends requests to peers and evaluates their performance. Earns Tao by buying bonds from high-performing peers. Like the Miner below, it uses the same set of configs for the nucleus and neuron.
  • Miner: Does both serving and validating, which is why it has a more complex architecture. The nucleus stores the basic structure of the machine learning model, while the neuron handles all the training and communication with the chain/peers.
To understand more about how a server and a validator work, refer to how to mine Tao.

Bittensor CLI

The Bittensor CLI provides an abstraction away from running Python directly. For example, instead of running something as technical as:
python ~/.bittensor/bittensor/bittensor/_neuron/text/template_miner/main.py
you can instead simply type:
btcli run
This makes it easier for non-technical users to run miners and issue quick commands. For a full list of commands, simply type btcli:
usage: btcli <command> <command args>

Bittensor cli

positional arguments:
  {overview,run,metagraph,inspect,weights,set_weights,list,transfer,register,unstake,stake,regen_coldkey,regen_hotkey,new_coldkey,new_hotkey}
    overview            Show account overview.
    run                 Run the miner.
    metagraph           Metagraph commands
    inspect             Inspect a wallet (cold, hot) pair
    weights             Weights commands
    set_weights         Weights commands
    list                List wallets
    transfer            Transfer Tao between accounts.
    register            Register a wallet to a network.
    unstake             Unstake from hotkey accounts.
    stake               Stake to your hotkey accounts.
    regen_coldkey       Regenerates a coldkey from a passed mnemonic
    regen_hotkey        Regenerates a hotkey from a passed mnemonic
    new_coldkey         Creates a new coldkey (for containing balance) under the specified path.
    new_hotkey          Creates a new hotkey (for running a miner) under the specified path.

optional arguments:
  -h, --help            show this help message and exit
  --uids [UIDS [UIDS ...]]
                        Uids to set.
  --weights [WEIGHTS [WEIGHTS ...]]
                        Weights to set.
Let's dive into this CLI to see what else it can do.

How to add a setting

The template miner is designed so that each module specified above can be customized to your needs with a variety of command-line settings. To use them, simply run the miner as before, appending your customized settings to the initial command. For instance:
btcli run \
  --subtensor.network <the network of your choice> \
  --wallet.name <your coldkey wallet> \
  --wallet.hotkey <your hotkey wallet>
The code above uses the --subtensor.network flag to set the network, and the --wallet.name and --wallet.hotkey flags to set the coldkey and hotkey. The template miner contains separate settings that correspond to different parts of the Bittensor API. Rather than going through each setting in depth, this page will detail some common settings and what they are used for.
This page was built for Bittensor 2.0.2; if you have a newer version of Bittensor installed, you can always use the following command to look up newer settings.
python ~/.bittensor/bittensor/bittensor/_neuron/text/template_miner/main.py --help

Commonly Used Settings

Here are some settings you may want to specify to get your miner running.

Subtensor

Subtensor manages the connection with the chain.
  • --subtensor.network: The subtensor network flag. The likely choices are: nobunaga (staging network), akatsuki (testing network), and nakamoto (the main network, where you can earn Tao).
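For example, to point your miner at the staging network for a dry run before earning Tao on nakamoto, you could run:
btcli run --subtensor.network nobunaga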

Wallet

Wallet manages the keys to access your Tao. More about the wallet.
  • --wallet.name: The name of the wallet to unlock for running Bittensor.
  • --wallet.hotkey: The name of the wallet's hotkey.
  • --wallet.path: The path to your Bittensor wallets. Default: ~/.bittensor/wallets/
You can create a new wallet by inputting a new wallet name or hotkey name; otherwise, use an existing wallet name.
To check the list of wallets that you have already created, run btcli list from the terminal.
To understand what a coldkey is, refer to the wallet structure.
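A typical first-time setup might look like the following sketch, assuming the key-generation commands accept the same --wallet.* flags as btcli run (the names my_coldkey and my_hotkey are placeholders of our choosing):
btcli new_coldkey --wallet.name my_coldkey
btcli new_hotkey --wallet.name my_coldkey --wallet.hotkey my_hotkey
btcli list
btcli run --wallet.name my_coldkey --wallet.hotkey my_hotkey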

Weights & Biases

Weights & Biases (wandb) logs the model statistics.
To set up Weights & Biases: (1) create a free wandb account; (2) add --neuron.use_wandb as an argument when running the miner; (3) specify --wandb.api_key, which you can get from the wandb authorize page; (4) check the statistics through the wandb project page.
  • --wandb.api_key: wandb api key.
  • --wandb.project: wandb project name.
  • --wandb.run_group: wandb group name.
  • --wandb.name: wandb run name.
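Putting these flags together, a minimal logging setup might look like the following sketch (the project and run names are illustrative placeholders):
btcli run \
  --neuron.use_wandb \
  --wandb.api_key <your wandb api key> \
  --wandb.project my_project \
  --wandb.name my_first_run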

Others

  • --logging.debug: Turn on debug logging, where you can check the calls to/from your peers.
  • --logging.logging_dir: The root directory for logging. Default: ~/.bittensor/miners/

Machine Learning Related Settings

Like any deep learning model, the template miner has dozens of hyperparameters that can be tuned for optimal training.

Dataset

Dataset configs change how the dataset feeds data into the model.
  • --dataset.batch_size: Number of sentences in a batch.
  • --dataset.block_size: Length of a sentence.
  • --dataset.max_corpus_size: Amount of data downloaded at a time.
  • --dataset.save_dataset: Whether to save the downloaded dataset.
  • --dataset.data_dir: Where to save and load the data.
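For instance, a run that shortens sentences while enlarging batches could be sketched as follows (the values are purely illustrative, not tuned recommendations):
btcli run \
  --dataset.batch_size 16 \
  --dataset.block_size 64 \
  --dataset.data_dir ~/.bittensor/data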

Neuron

The template miner currently runs a self-attention encoder transformer model. Neuron configs change the training and optimization of the model defined in the nucleus.

Hyper-parameters

  • --neuron.learning_rate: Training initial learning rate.
  • --neuron.learning_rate_chain: Training initial learning rate for peer weights.
  • --neuron.weight_decay: Weight decay.
  • --neuron.momentum: Optimizer momentum.
  • --neuron.clip_gradients: Implement gradient clipping to avoid exploding loss on smaller architectures.
  • --neuron.compute_remote_gradients: Whether the neuron computes and returns gradients from backward queries.
  • --neuron.accumulate_remote_gradients: Whether the neuron accumulates remote gradients from backward queries.
  • --neuron.n_epochs: Number of training epochs.
  • --neuron.epoch_length: Iterations of training per epoch.
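As a sketch, a more conservative training run might lower the learning rate and shorten each epoch (illustrative values; we assume --neuron.clip_gradients takes a clipping threshold):
btcli run \
  --neuron.learning_rate 1e-4 \
  --neuron.momentum 0.9 \
  --neuron.clip_gradients 1.0 \
  --neuron.epoch_length 100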

Talking to the peers and the chain

  • --neuron.timeout: Number of seconds to wait for a peer's response.
  • --neuron.blacklist: Amount of stake (Tao) required to avoid being blacklisted.
  • --neuron.blacklist_allow_non_registered: If true, allows non-registered peers to query you.
  • --neuron.n_topk_peer_weights: Maximum number of weights to submit to the chain.
  • --neuron.sync_block_time: How often to sync the neuron with the metagraph, in terms of block time.
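For example, a stricter peer policy, demanding more stake and submitting fewer weights, could be sketched like this (values illustrative):
btcli run \
  --neuron.timeout 10 \
  --neuron.blacklist 1 \
  --neuron.n_topk_peer_weights 50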

Others

  • --neuron.restart: If true, train the neuron from the beginning.
  • --neuron.use_wandb: If set, statistics are logged to Weights & Biases.
  • --neuron.device: Neuron training device cpu/cuda.
  • --neuron.name: Trials for this neuron go in neuron.root / (wallet_cold - wallet_hot) / neuron.name.
  • --neuron.restart_on_failure: Restart neuron on unknown error.
  • --neuron.use_upnpc: Neuron attempts to port forward axon using upnpc.

Nucleus

Nucleus is the part of the neuron that controls the architecture of the machine learning model itself. If you are interested in the architecture, we suggest you play with the code inside ~/.bittensor/bittensor/bittensor/_neuron/text/template_miner/nucleus_impl.py
  • --nucleus.nhid: The dimension of the feedforward network model in nn.TransformerEncoder.
  • --nucleus.nhead: The number of heads in the multihead-attention models.
  • --nucleus.nlayers: The number of nn.TransformerEncoderLayer in nn.TransformerEncoder.
  • --nucleus.dropout: The dropout value.
  • --nucleus.topk: The number of peers queried during each remote forward call.
  • --nucleus.punishment: The punishment applied to the chain weights of peers that do not respond.
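To experiment with a wider, more regularized encoder, you might adjust the nucleus as follows (illustrative values; note that nn.TransformerEncoder requires the number of attention heads to evenly divide the model's embedding dimension):
btcli run \
  --nucleus.nhid 1024 \
  --nucleus.nhead 8 \
  --nucleus.nlayers 4 \
  --nucleus.dropout 0.2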

Server

The server is independent of the neuron and nucleus; it serves its own pre-trained model from Hugging Face.

Hyper-parameters

  • --server.learning_rate: Training initial learning rate.
  • --server.momentum: Optimizer momentum.
  • --server.clip_gradients: Implement gradient clipping to avoid exploding loss on smaller architectures.
  • --server.padding: Whether to pad out the final dimensions.
  • --server.interpolate: Whether to interpolate between sentence lengths.
  • --server.inter_degree: Interpolation algorithm (nearest | linear | bilinear | bicubic | trilinear | area).
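A gentler server optimization setup could be sketched as follows (illustrative values; we assume --server.clip_gradients takes a threshold like its neuron counterpart):
btcli run \
  --server.learning_rate 1e-4 \
  --server.momentum 0.8 \
  --server.clip_gradients 1.0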

Talking to the peers and the chain

  • --server.blacklist.stake: Amount of stake (Tao) required to avoid being blacklisted.
  • --server.blacklist.time: How often a peer can query you (seconds).
  • --server.blocks_per_epoch: Blocks per epoch.

Others

  • --server.model_name: The pretrained model from Hugging Face.
  • --server.device: Miner default training device, cpu/cuda.
  • --server.pretrained: Whether the model should be pretrained.
  • --server.name: Trials for this miner go in miner.root / (wallet_cold - wallet_hot) / miner.name.
  • --server.restart: Whether the model should restart.
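Putting the server settings together, serving a specific Hugging Face model on a GPU might look like the following sketch (gpt2 is only an example model name; we assume the server accepts these flags through the same run command shown above):
btcli run \
  --server.model_name gpt2 \
  --server.device cuda \
  --server.blacklist.stake 10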

Advanced Settings

Axon

If you are running on a home computer you are likely behind a NAT. This means your computer cannot receive requests from other computers on the wide internet. We suggest the following two strategies for opening your miner up to the web:
  1. Use UPnP: Some routers allow the UPnP protocol to programmatically open port forwarding. To try this, run your miner with the following argument:
    --axon.use_upnpc true
  2. Do it manually: If step 1 fails, you can open a port on your router by accessing its admin console via your browser at 192.168.0.1. Through the admin console, you can manually specify that traffic from an external port (e.g. 8081) will be routed to a local/internal port (e.g. 9122). Once this is enabled, you will need to run your miner with the following arguments:
    --axon.external_port 8081 # your chosen external port
    --axon.local_port 9122 # your chosen internal port
  • --axon.port: The port this axon endpoint is served on, e.g. 8091.
  • --axon.ip: The local ip this axon binds to.
  • --axon.max_workers: The maximum number of connection handler threads working simultaneously on this endpoint. The grpc server distributes new worker threads to service requests up to this number.
  • --axon.maximum_concurrent_rpcs: The maximum number of allowed active connections.
  • --axon.backward_timeout: Number of seconds to wait for a backward axon request.
  • --axon.forward_timeout: Number of seconds to wait for a forward axon request.
  • --axon.priority.max_workers: Maximum number of threads in the thread pool.
  • --axon.priority.maxsize: Maximum size of tasks in the priority queue.
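For example, a manually port-forwarded axon with modest concurrency limits might be configured like this (all values illustrative):
btcli run \
  --axon.external_port 8081 \
  --axon.local_port 9122 \
  --axon.max_workers 10 \
  --axon.maximum_concurrent_rpcs 64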