Customizing Your Miner
Making it your own.
In this tutorial, we are going to assume that you have Bittensor installed, have a working copy of the template miner, and have generated working cold and hot keys. If you do not have these prerequisites, please follow the Running a Miner tutorial on how to get a miner working.

Basic Architecture

The architecture of the Bittensor API is rather simple.
  • Dataset: Presently sitting at 1.5TB in size and growing, the dataset consists of over 150 million files and is hosted entirely on IPFS. This module pulls and organizes text data from IPFS.
  • Subtensor: Wraps all the functions you need to connect to the chain, including (1) setting weights on your peers, (2) syncing with the latest block, and (3) submitting your transactions to the chain.
  • Metagraph: Stores the chain state from the subtensor as a torch.nn.Module.
  • Axon: Manages requests from peers.
  • Wallet: Manages all the wallet keys used to access your Tao.
  • wandb: Handles all the logging to Weights & Biases.
The miners are the Bittensor clients, which use the Bittensor API to serve their models and communicate with the chain/peers. There are three kinds of template miners that we have built to help you get started.
  • Server: Serves a Hugging Face model and answers requests from peers. It earns Tao when peers set weights on it.
  • Validator: Sends requests to peers and evaluates their performance. It earns Tao by buying bonds from high-performing peers. Like the Miner below, it uses the same set of configs from nucleus and neuron.
  • Miner: Does both serving and validating, which is why it has a more complex architecture. The nucleus stores the basic structure of the machine learning model, while the neuron handles all the training and communication with the chain/peers.
To understand more about how a server and a validator work, refer to how to mine Tao.

Bittensor CLI

The Bittensor CLI provides an abstraction away from running Python directly. For example, instead of running something as technical as:
python ~/.bittensor/bittensor/bittensor/_neuron/text/template_miner/
you can instead simply type:
btcli run
This makes it easier for non-technical users to run miners and issue quick commands. For a full list of commands, simply type btcli to see the help output:
usage: btcli <command> <command args>

Bittensor cli

positional arguments:
  overview        Show account overview.
  run             Run the miner.
  metagraph       Metagraph commands
  inspect         Inspect a wallet (cold, hot) pair
  weights         Weights commands
  set_weights     Weights commands
  list            List wallets
  transfer        Transfer Tao between accounts.
  register        Register a wallet to a network.
  unstake         Unstake from hotkey accounts.
  stake           Stake to your hotkey accounts.
  regen_coldkey   Regenerates a coldkey from a passed mnemonic
  regen_hotkey    Regenerates a hotkey from a passed mnemonic
  new_coldkey     Creates a new coldkey (for containing balance) under the specified path.
  new_hotkey      Creates a new hotkey (for running a miner) under the specified path.

optional arguments:
  -h, --help      show this help message and exit
  --uids [UIDS [UIDS ...]]
                  Uids to set.
  --weights [WEIGHTS [WEIGHTS ...]]
                  Weights to set.
Let's dive into this CLI to see what else it can do.

How to add a setting

The template miner is designed so that each module specified above can be customized to your needs with a variety of command-line settings. To use them, simply run the miner as before, appending your customized settings to the initial command. For instance:
btcli run \
--subtensor.network <the network of your choice> \
--wallet.name <your coldkey wallet> \
--wallet.hotkey <your hotkey wallet>
The code above uses --subtensor.network to choose the network and the wallet flags to set the coldkey and hotkey. The template miner contains separate settings that correspond to different parts of the Bittensor API. Rather than going through each setting in depth, this page will detail some common settings and what they are used for.
This page was written for Bittensor 2.0.2. If you have a newer version of Bittensor installed, you can always use the following command to look up the latest settings.
python ~/.bittensor/bittensor/bittensor/_neuron/text/template_miner/ --help

Commonly Used Settings

Here are some settings you may want to specify to get your miner running.


Subtensor

Subtensor manages the connection with the chain.
  • --subtensor.network: The subtensor network flag. The likely choices are: nobunaga (staging network), akatsuki (testing network), and nakamoto (master network where you can earn Tao).
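For example, to point your miner at the staging network while you experiment, you could run:
btcli run --subtensor.network nobunaga
Switch the flag to nakamoto once you are ready to earn Tao.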


Wallet

Wallet manages the keys to access your Tao. More about the wallet.
  • --wallet.name: The name of the wallet to unlock for running bittensor.
  • --wallet.hotkey: The name of wallet's hotkey.
  • --wallet.path: The path to your bittensor wallets. Default ~/.bittensor/wallets/
You can create a new wallet by inputting a new wallet name or hotkey name. Otherwise, use an existing wallet name.
To check the list of wallets that you have already created, run btcli list from the terminal.
To understand what a coldkey is, refer to the wallet structure.
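Putting the wallet flags together, a run command might look like the following (the wallet names here are placeholders for your own):
btcli run \
--wallet.name my_coldkey \
--wallet.hotkey my_hotkey \
--wallet.path ~/.bittensor/wallets/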

Weights & Biases

Weights & Biases (wandb) logs the model statistics.
To set up wandb: (1) create a free wandb account, (2) add --neuron.use_wandb as an argument, (3) when running the miner, specify --wandb.api_key, where you can get the key from the wandb authorize page, and (4) check the statistics through the wandb project page.
  • --wandb.api_key: wandb api key.
  • --wandb.project: wandb project name.
  • --wandb.run_group: wandb group name.
  • --wandb.name: wandb run name.
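As a sketch, a run with wandb logging enabled might look like this (the project and group names are placeholders of your choosing):
btcli run \
--neuron.use_wandb \
--wandb.api_key <your wandb api key> \
--wandb.project my_bittensor_project \
--wandb.run_group experiments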


Logging

  • --logging.debug: Turn on debug logging, where you can check the calls to/from your peers.
  • --logging.logging_dir: Logging default root directory. Default ~/.bittensor/miners/

Machine Learning Related Settings

Like any deep learning model, there are tens of hyperparameters that can be tuned for optimal training for our miners.


Dataset

Dataset configs change how the dataset feeds data into the model.
  • --dataset.batch_size: Number of sentences in a batch.
  • --dataset.block_size: Length of a sentence.
  • --dataset.max_corpus_size: Amount of data downloaded at a time.
  • --dataset.save_dataset: Save the downloaded dataset or not.
  • --dataset.data_dir: Where to save and load the data.
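For instance, a hypothetical run with a smaller batch size and a local data directory might look like:
btcli run \
--dataset.batch_size 10 \
--dataset.block_size 20 \
--dataset.save_dataset true \
--dataset.data_dir ~/.bittensor/data/
The values that work best depend on your hardware; treat these numbers as illustrative, not as recommended defaults.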


Neuron

The template miner currently runs a self-attention encoder transformer model. Neuron configs change the training and optimization of the model defined in the nucleus.


  • --neuron.learning_rate: Training initial learning rate.
  • --neuron.learning_rate_chain: Training initial learning rate for peer weight.
  • --neuron.weight_decay: Weight decay.
  • --neuron.momentum: Optimizer momentum.
  • --neuron.clip_gradients: Implement gradient clipping to avoid exploding loss on smaller architectures.
  • --neuron.compute_remote_gradients: Whether the neuron computes and returns gradients from backward queries.
  • --neuron.accumulate_remote_gradients: Whether the neuron accumulates remote gradients from backward queries.
  • --neuron.n_epochs: Number of training epochs.
  • --neuron.epoch_length: Iterations of training per epoch.
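As an illustrative example (the values here are placeholders, not tuned recommendations), a run that lowers the learning rate and shortens the epochs might look like:
btcli run \
--neuron.learning_rate 0.0001 \
--neuron.momentum 0.9 \
--neuron.n_epochs 10 \
--neuron.epoch_length 200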

Talking to the peers and the chain

  • --neuron.timeout: Number of seconds to wait for a peer's response.
  • --neuron.blacklist: Amount of stake (Tao) a peer needs in order not to get blacklisted.
  • --neuron.blacklist_allow_non_registered: If true, allow non-registered peers to send queries.
  • --neuron.n_topk_peer_weights: Maximum number of weights to submit to the chain.
  • --neuron.sync_block_time: How often to sync the neuron with the metagraph, in terms of block time.


  • --neuron.restart: If true, train the neuron from the beginning.
  • --neuron.use_wandb: If set, statistics will be logged to Weights & Biases.
  • --neuron.device: Neuron training device cpu/cuda.
  • Trials for this neuron go in neuron.root / (wallet_cold - wallet_hot) /
  • --neuron.restart_on_failure: Restart neuron on unknown error.
  • --neuron.use_upnpc: Neuron attempts to port forward axon using upnpc.


Nucleus

Nucleus is the part of the neuron that controls the architecture of the machine learning model itself. If you are interested in the architecture, we suggest you play with the code inside ~/.bittensor/bittensor/bittensor/_neuron/text/template_miner/
  • --nucleus.nhid: The dimension of the feedforward network model in nn.TransformerEncoder.
  • --nucleus.nhead: The number of heads in the multihead-attention models.
  • --nucleus.nlayers: The number of nn.TransformerEncoderLayer in nn.TransformerEncoder.
  • --nucleus.dropout: The dropout value.
  • --nucleus.topk: The number of peers queried during each remote forward call.
  • --nucleus.punishment: The punishment applied to the chain weights of peers that do not respond.
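For example, a smaller transformer could be sketched as follows (the sizes are illustrative only; whether they train well is not guaranteed):
btcli run \
--nucleus.nhead 8 \
--nucleus.nlayers 2 \
--nucleus.nhid 512 \
--nucleus.dropout 0.1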


Server

The server is independent of the neuron and nucleus; it serves its own pre-trained model from Hugging Face.


  • --server.learning_rate: Training initial learning rate.
  • --server.momentum: Optimizer momentum.
  • --server.clip_gradients: Implement gradient clipping to avoid exploding loss on smaller architectures.
  • --server.padding: To pad out final dimensions.
  • --server.interpolate: To interpolate between sentence lengths.
  • --server.inter_degree: Interpolation algorithm (nearest | linear | bilinear | bicubic | trilinear | area).

Talking to the peers and the chain

  • --server.blacklist.stake: Amount of stake (Tao) a peer needs in order not to get blacklisted.
  • --server.blacklist.time: How often a peer can query you (seconds).
  • --server.blocks_per_epoch: Blocks per epoch.


  • --server.model_name: Pretrained model from Hugging Face.
  • --server.device: Miner default training device cpu/cuda.
  • --server.pretrained: If the model should be pretrained.
  • Trials for this miner go in miner.root / (wallet_cold - wallet_hot) /
  • --server.restart: If the model should restart.
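Putting a few server flags together, a hypothetical command serving a different Hugging Face model on the GPU might look like the following (the model name and device are placeholders, and we assume your version accepts these flags on the run command):
btcli run \
--server.model_name gpt2 \
--server.device cuda \
--server.learning_rate 0.01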

Advanced Settings


If you are running on a home computer, you are likely behind a NAT. This means your computer cannot receive requests from other computers on the wide internet. We suggest the following two strategies for opening your miner up to the web:
  1. Use UPNPC: Some routers allow the UPnP protocol to programmatically open port forwarding. To try this, run your miner with the following argument:
     --axon.use_upnpc true
  2. Do it Manually: If step 1 fails, you can open a port on your router by accessing its admin console via your browser. Through the admin console, you can manually specify that traffic from an external port (i.e. 8081) will be routed to a local/internal port (i.e. 9122). Once this is enabled, you will need to run your miner with the following arguments:
     --axon.external_port 8081 // (your chosen external port)
     --axon.local_port 9122 // (your chosen internal port)
  • --axon.port: The port this axon endpoint is served on. i.e. 8091
  • --axon.ip: The local ip this axon binds to.
  • --axon.max_workers: The maximum number of connection handler threads working simultaneously on this endpoint. The grpc server distributes new worker threads to service requests up to this number.
  • --axon.maximum_concurrent_rpcs: The maximum number of allowed active connections.
  • --axon.backward_timeout: Number of seconds to wait for backward axon request.
  • --axon.forward_timeout: Number of seconds to wait for forwarding axon request.
  • --axon.priority.max_workers: Maximum number of threads in the thread pool.
  • --axon.priority.maxsize: Maximum size of tasks in the priority queue.
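For example, combining the manual port-forwarding setup with a few tuning flags might look like the following (the port numbers and worker counts are illustrative):
btcli run \
--axon.external_port 8081 \
--axon.local_port 9122 \
--axon.max_workers 10 \
--axon.forward_timeout 15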