Update your package list

$ sudo apt update

Install the dependencies for the python3-certbot-nginx package, which include the python3-acme, python3-certbot, python3-mock, python3-openssl, python3-pkg-resources, python3-pyparsing, and python3-zope.interface packages:

$ sudo apt install python3-acme python3-certbot python3-mock python3-openssl python3-pkg-resources python3-pyparsing python3-zope.interface

Install the python3-certbot-nginx package:

$ sudo apt install python3-certbot-nginx
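
If you want to confirm the installation before moving on, you can check the installed Certbot version (an optional sanity check):

$ certbot --version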

Certbot needs to be able to find the correct server block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name directive that matches your requested domain.

You should have a server block for your domain at /etc/nginx/sites-available/default with the server_name directive already set appropriately.

To check, open the server block file for your domain using nano or your favorite text editor:

$ sudo nano /etc/nginx/sites-available/default

Find the existing server_name line. It should look like this:

/etc/nginx/sites-available/default
...
server_name your_domain www.your_domain;
...

If it does, exit your editor and move on to the next step. If it doesn’t, update it to match. Then save the file, quit your editor, and verify the syntax of your configuration edits:

$ sudo nginx -t

If you get an error, reopen the server block file and check for any typos or missing characters. Once your configuration file syntax is correct, reload Nginx to load the new configuration:

$ sudo systemctl reload nginx

Certbot can now find the correct server block and update it.

Request a certificate:

$ sudo certbot --nginx -d your_domain -d www.your_domain

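Once the certificate is issued, you can verify that automatic renewal will work with a dry run (optional):

$ sudo certbot renew --dry-run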

This article describes how to host a web site on IPFS.

Requirements:
Access to a Registered Domain and DNS records.
Edit your DNS to point the A record to the IPFS server. We will need this to resolve in order to install a Let’s Encrypt Certificate.
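
Before continuing, you can confirm the A record resolves to your server (an optional check; this assumes the dig utility from dnsutils is installed, and your_domain is a placeholder for your actual domain):

$ dig +short your_domain A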

Let's start with an update:

# sudo apt update 
# sudo apt upgrade -y

Let's create a new user account to run IPFS and switch to it:

# adduser ipfs

Install sudo

# apt install sudo

Edit the sudoers file and add the ipfs user:

# visudo

Add the IPFS user below root

# User privilege specification
root    ALL=(ALL:ALL) ALL
ipfs    ALL=(ALL:ALL) ALL

Change to the ipfs user:

# su ipfs

Install IPFS
Get the latest release at https://dist.ipfs.tech/#kubo

$ wget https://dist.ipfs.tech/kubo/v0.16.0/kubo_v0.16.0_linux-amd64.tar.gz
$ tar xfv kubo_v0.16.0_linux-amd64.tar.gz
$ cd kubo
$ sudo ./install.sh

Initialize IPFS:

$ ipfs init --profile=server

Switch to the root user:

$ exit

Allow the ipfs user to run long-running services by enabling user lingering for that user:

# loginctl enable-linger ipfs

Create the file /etc/systemd/system/ipfs.service with this content:

# nano /etc/systemd/system/ipfs.service

 

[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=ipfs
[Install]
WantedBy=multi-user.target

Enable and start the service:

# systemctl enable ipfs
# systemctl start ipfs

IPFS should be up and running, and start when the server boots.

Check IPFS

# su ipfs
$ ipfs swarm peers

Add Website Files

Create a folder for your website files in the ipfs user's home directory:

$ cd ~
$ mkdir mysitefiles

Upload the site files to the directory. Now we can add these to IPFS with the following command:

$ ipfs add -r <path>

This adds all contents of the folder to IPFS, recursively. You should see output similar to this:

$ ipfs add -r mysitefiles

Output:

 ipfs add -r mysitefiles/
added QmZrSe9TABdSsWL38FJTp4fW7TposFuzRLSBRYAEMVt1RE mysitefiles/about.html
added Qmdf1mYmCjivJWcXpGikf87PV5VkBo6DQugsjq6GdNZ1az mysitefiles/index.html
added QmW8U3NEHx3p73Nj9645sGnGa8XzR43rQh3Kd52UKncWMo mysitefiles/moon-logo.png
added QmQ91HDqAt1eE7X4DHuJ9r74U3KgKN3pDGidLM6sadK2q2 mysitefiles
 12.66 KiB / 12.66 KiB [==================================================================================================] 100.00%

Each long string of characters is called a Content Identifier, or CID; it is a cryptographic hash of the content. We can now check to see if the site loads. You can check and use an active gateway here: https://ipfs.github.io/public-gateway-checker/

Add the main Content Identifier (CID) of the folder to the gateway URL to link to content on IPFS:

https://ipfs.io/ipfs/<CID>
# e.g
https://ipfs.io/ipfs/QmQ91HDqAt1eE7X4DHuJ9r74U3KgKN3pDGidLM6sadK2q2

Now we can set up the DNS records. See: https://dnslink.io/#introduction

Log in to manage your DNS. Add the following TXT record:

dnslink=/ipfs/QmQ91HDqAt1eE7X4DHuJ9r74U3KgKN3pDGidLM6sadK2q2

Here is my Namecheap DNS
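
Once the record has propagated, you can check it (an optional verification; DNSLink TXT records are commonly published on the _dnslink subdomain, so adjust the name to match where you created the record):

$ dig +short TXT _dnslink.your_domain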

Install nginx with Let’s Encrypt SSL certs
Change to root

$ su root 

 

# apt-get update
# apt-get install nginx

Check status to make sure it started and is not throwing any errors:

$ systemctl status nginx

Results

● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: en
   Active: active (running) since Wed 2021-06-16 22:59:51 UTC; 1min 44s ago
     Docs: man:nginx(8)
  Process: 13062 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process
  Process: 13063 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (cod
 Main PID: 13064 (nginx)
    Tasks: 2 (limit: 1163)
   Memory: 5.3M
   CGroup: /system.slice/nginx.service
           ├─13064 nginx: master process /usr/sbin/nginx -g daemon on; master_pr
           └─13065 nginx: worker process

Jun 16 22:59:51 ip-10-0-1-209 systemd[1]: Starting A high performance web server
Jun 16 22:59:51 ip-10-0-1-209 systemd[1]: nginx.service: Failed to parse PID fro
Jun 16 22:59:51 ip-10-0-1-209 systemd[1]: Started A high performance web server

Get your IP address and open it in a browser to make sure Nginx is serving its default page:

$ curl -s domain.com
$ curl -s Ip_address

Now browse to http://your-ip-here and you should see the Nginx default page, “Welcome to nginx!”.

Set Up your nginx configs:

$ sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/default_back
$ sudo nano /etc/nginx/sites-available/default

Copy and paste this config (change example.com to your domain)


server {
    server_name example.com www.example.com;
    server_tokens off;

    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Test the new config syntax and make sure it is OK:

$ sudo nginx -t

If all is good, reload:

$ sudo systemctl reload nginx

Add Let's Encrypt according to this article – https://www.geekdecoder.com/set-up-lets-encrypt-on-debian-10/
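
If you followed the Certbot steps at the top of this post, the command will look something like this (substitute your own domain):

$ sudo certbot --nginx -d example.com -d www.example.com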

The final config should resemble this:

server {
    server_name example.com www.example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    server_name example.com www.example.com;

    listen 80;
    listen [::]:80;
    return 404; # managed by Certbot
}

The site should now be available.

Restoring ost-files

Restore Ost-files aren’t intended to be backed up and restored. However, under certain circumstances an ost-file can still be used directly to restore data.

You can restore an ost-file when:

When the IMAP account the ost-file belongs to is still configured in Outlook and you…
haven’t removed and re-added the IMAP account.
haven’t created a new Mail Profile
aren’t trying to use it for the IMAP account on another computer.
When the Exchange account the ost-file belongs to is still configured in Outlook. This can also be in a new Mail Profile or on another computer as long as you have connected to the Exchange server at least once.

To restore data from the ost-file:

Close Outlook.
Rename the current ost-file of the account to .old.
Restore the ost-file to the location of the current ost-file and rename it if needed.
Disconnect yourself from the network to make sure that no changes are being synched when the account reconnects. This could for instance empty the ost-file if the data was no longer on the server.
Start Outlook.
Export any data that you wish to keep to a pst-file.
Close Outlook.
Delete or rename your recovered ost-file. If your original cache was still working, you can rename it back from .old or otherwise make sure there is no longer an ost-file for your account.
Reconnect yourself to the network.
Start Outlook.
Once Outlook is done synching, you can import the data from the pst-file.

Source: https://www.howto-outlook.com/howto/backupandrestore.htm#restore-ost

Setting Up a Private IPFS Network with IPFS and IPFS-Cluster
Create 2 New Vm’s with Debian. In this case, these are 2 kvm VM’s but you can use any ones.

node0 – bootstrap node, 192.168.0.95
node1 – client node, 192.168.0.116

Create a new user “ipfs”. Add sudo rights to the user ipfs.

Installing IPFS through the command-line is handy if you plan on building applications and services on top of an IPFS node. This method is also useful if you’re setting up a node without a user interface, usually the case with remote servers or virtual machines. Using IPFS through the command-line allows you to do everything that IPFS Desktop can do, but at a more granular level since you can specify which commands to run.

For this article, I have created a new user “ipfs”

# adduser ipfs
Adding user `ipfs' ...
Adding new group `ipfs' (1001) ...
Adding new user `ipfs' (1001) with group `ipfs' ...
Creating home directory `/home/ipfs' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for ipfs
Enter the new value, or press ENTER for the default
        Full Name []: IPFS
        Room Number []: 1001
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y

By default, sudo is not installed on Debian, but you can install it. First, switch to the root user:

$ su -

Install sudo by running:

# apt-get install sudo -y

After that, give sudo rights to the ipfs user:

# usermod -aG sudo ipfs

Make sure your sudoers file has the ipfs user (or the sudo group) added. Run:

# visudo

Allow members of group sudo to execute any command

ipfs   ALL=(ALL:ALL) ALL

You need to re-login or reboot the device for the changes to take effect.
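
After logging back in as the ipfs user, a quick way to confirm sudo works is to run a harmless command with it (optional; it should print "root"):

$ sudo whoami
root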

IPFS Install

Download the Linux binary from dist.ipfs.tech.

$ cd /home/ipfs
$ wget https://dist.ipfs.tech/kubo/v0.15.0/kubo_v0.15.0_linux-amd64.tar.gz

Unzip the file:

$ tar -xvzf kubo_v0.15.0_linux-amd64.tar.gz

> x kubo/install.sh
> x kubo/ipfs
> x kubo/LICENSE
> x kubo/LICENSE-APACHE
> x kubo/LICENSE-MIT
> x kubo/README.md

Move into the kubo folder and run the install script:

$ cd kubo
$ sudo bash install.sh
> Moved ./ipfs to /usr/local/bin

Test that IPFS has installed correctly:

$ ipfs --version
> ipfs version 0.15.0

Initialize IPFS

For the purpose of this tutorial, we will install two nodes: a bootstrap node and a client node. The bootstrap node is an IPFS node that other nodes can connect to in order to find other peers. Since we are creating our own private network, we cannot use the bootstrap nodes from the public IPFS network, so we will change these settings later. Select one of your machines as bootstrap node and one as client node.

IPFS is initialized in a hidden directory in your user home directory: ~/.ipfs. This directory will be used to initialize the nodes. On both machines, bootstrap node and client node, run the following.

IPFS_PATH=~/.ipfs ipfs init --profile server

Repeat the install and initialization steps on all of your VMs.

Creating a Private Network

To generate the swarm key there are two options: use a bash script, or install a key generator.

Option 1: Bash script

Create a swarm key

The swarm key allows us to create a private network and tells network peers to communicate only with those peers who share this secret key.

This command should be run only on node0. We generate swarm.key on the bootstrap node and then copy it to the rest of the nodes. This works on Linux; on a Mac, use the key generator (Option 2).

$ echo -e "/key/swarm/psk/1.0.0/\n/base16/\n`tr -dc 'a-f0-9' < /dev/urandom | head -c64`" > ~/.ipfs/swarm.key

Option 2: Installation of a key generator

The second option is to install the swarm key generator. Do this if you have a Mac.

Install Go

Follow the instructions here – https://golang.org/doc/install

To install the swarm key generator we use go get, which uses git. If you have not installed git yet on your bootstrap node, do so with

$ sudo apt-get install git

Run the following command to install the swarm key generator:

$ go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen

Run the swarm key generator to create the swarm file in your .ipfs directory:

$ ./go/bin/ipfs-swarm-key-gen > ~/.ipfs/swarm.key

Copy the generated swarm file to the .ipfs directory of all client nodes.
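
If you have SSH access between the nodes, one way to copy it is with scp (a hypothetical example using the node1 address from this article; the ~/.ipfs directory already exists on node1 after ipfs init). Alternatively, copy it by hand as shown below:

$ scp ~/.ipfs/swarm.key ipfs@192.168.0.116:/home/ipfs/.ipfs/swarm.key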

From the node0 home directory:

$ cd .ipfs/
$ cat swarm.key
/key/swarm/psk/1.0.0/
/base16/
25f64b1cf31f649817d495e446d4cbcc99000b8cc032a89b681e5f86f995fa28

On node1, create swarm.key in /home/ipfs/.ipfs

$ nano swarm.key

Add to file the 3 lines from node0 swarm.key:

/key/swarm/psk/1.0.0/
/base16/
25f64b1cf31f649817d495e446d4cbcc99000b8cc032a89b681e5f86f995fa28

Bootstrap IPFS nodes

A bootstrap node is used by client nodes to connect to the private IPFS network. The bootstrap connects clients to other nodes available on the network. In our private network we cannot use the bootstrap of the public IPFS network, so in this section we will replace the existing bootstrap with the ip address and peer identity of the bootstrap node.

First, remove the default entries of bootstrap nodes from both the bootnode and the client node. Use the command on both machines:

IPFS_PATH=~/.ipfs ipfs bootstrap rm --all

Check the result to see the bootstrap is empty with:

IPFS_PATH=~/.ipfs ipfs config show | grep "Bootstrap"
  "Bootstrap": null,

Now add the IP address and the Peer Identity (hash address) of your bootstrap node to each of the nodes, including the bootstrap node.

The IP address of the bootnode can be found with hostname -I:

$ hostname -I
192.168.0.95 2603:8081:2301:3b54:5054:ff:fe4c:c469

The Peer Identity was created during the initialization of IPFS and can be found with the following statement.

$ IPFS_PATH=~/.ipfs ipfs config show | grep "PeerID"
    "PeerID": "12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3"

Use your results to assemble the bootstrap add statement as follows.

Example:

$ IPFS_PATH=~/.ipfs ipfs bootstrap add /ip4/192.168.0.95/tcp/4001/ipfs/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3

Run your statement on both the bootstrap node and the client node.

You should see:

$ IPFS_PATH=~/.ipfs ipfs bootstrap add /ip4/192.168.0.95/tcp/4001/ipfs/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
added /ip4/192.168.0.95/tcp/4001/ipfs/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3

Start the network

The private network is installed, so we can test this network.

We will use an environment variable to make sure that if there is a mistake in our configuration or the private network is not fully configured, the nodes don’t connect to the public IPFS network and the daemons just fail.

The environment variable is LIBP2P_FORCE_PNET, and to start the IPFS nodes you just need to start the daemon using “ipfs daemon”.

Run on both nodes.

$ export LIBP2P_FORCE_PNET=1
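
This export only lasts for the current shell session. If you want it to persist across logins, you can append it to ~/.bashrc on both nodes (optional):

$ echo 'export LIBP2P_FORCE_PNET=1' >> ~/.bashrc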

To start daemon:

$ IPFS_PATH=~/.ipfs ipfs daemon

Do note the log message stating “Swarm is limited to private network of peers with the swarm key”, which means that our private network is working correctly.

Note: Each console is now running the daemon in the foreground. Open two new consoles, one to node0 and one to node1.

Now add a file to our private network on one node and try to access it from the other node.

$ echo "Hello World!" > file1.txt
$ ipfs add file1.txt
added QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG file1.txt
 13 B / 13 B [==========================================================] 100.00%
$ ipfs cat QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG
Hello World!

Take the printed hash and try to cat the file from the client node – node1.

$ ipfs cat QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG
Hello World!

You should see the contents of the added file from the first node node0. To check and be sure that we have a private network we can try to access our file by its CID from the public IPFS gateway. You can choose one of the public gateways from this list: https://ipfs.github.io/public-gateway-checker.

If you did everything right, then the file won't be accessible. Also, you can run the “ipfs swarm peers” command, and it will display a list of the peers in the network it's connected to. In our example, each peer sees the other one.

From bootstrap node – node0

$ ipfs swarm peers
/ip4/192.168.0.116/tcp/52784/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG

From client node – node1

$ ipfs swarm peers
/ip4/192.168.0.95/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3

If the same file is uploaded on another node, the same hash is generated, so the file is not stored twice on the network.

To upload a complete directory, add the directory name and the -r option (recursive). The directory and the files in it are hashed:

$ ipfs add directory_name -r

Run IPFS daemon as a service in the background

Create a systemd service for ipfs on both nodes – node0 and node1:

$ sudo nano /etc/systemd/system/ipfs.service

Add the following (the user is “ipfs”; change it here if you are using a different user):

[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=ipfs
[Install]
WantedBy=multi-user.target

Reload the systemd daemon so it finds the new service:

$ sudo systemctl daemon-reload

Tell systemd that ipfs should be started on startup:

$ sudo systemctl enable ipfs
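
If you would rather not wait for the reboot, you can also start the service immediately (optional):

$ sudo systemctl start ipfs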

Reboot both nodes and run the following:

$ sudo systemctl status ipfs

You should see something like this:

$ sudo systemctl status ipfs
[sudo] password for ipfs:
● ipfs.service - IPFS Daemon
   Loaded: loaded (/etc/systemd/system/ipfs.service; enabled; vendor preset: ena
   Active: active (running) since Thu 2021-06-10 09:23:46 CDT; 2min 24s ago
 Main PID: 387 (ipfs)
    Tasks: 9 (limit: 1149)
   Memory: 77.8M
   CGroup: /system.slice/ipfs.service
           └─387 /usr/local/bin/ipfs daemon --enable-namesys-pubsub

Jun 10 09:23:46 ipfs3 ipfs[387]: Swarm listening on /ip4/192.168.0.95/tcp/4001
Jun 10 09:23:46 ipfs3 ipfs[387]: Swarm listening on /ip6/::1/tcp/4001
Jun 10 09:23:46 ipfs3 ipfs[387]: Swarm listening on /p2p-circuit
Jun 10 09:23:46 ipfs3 ipfs[387]: Swarm announcing /ip4/127.0.0.1/tcp/4001
Jun 10 09:23:46 ipfs3 ipfs[387]: Swarm announcing /ip4/192.168.0.95/tcp/4001
Jun 10 09:23:46 ipfs3 ipfs[387]: Swarm announcing /ip6/::1/tcp/4001
Jun 10 09:23:46 ipfs3 ipfs[387]: API server listening on /ip4/127.0.0.1/tcp/5001
Jun 10 09:23:46 ipfs3 ipfs[387]: WebUI: http://127.0.0.1:5001/webui
Jun 10 09:23:46 ipfs3 ipfs[387]: Gateway (readonly) server listening on /ip4/127
Jun 10 09:23:46 ipfs3 ipfs[387]: Daemon is ready

Try adding a file from one node and accessing it from the other, as above.

On node0

$ echo IPFS Rocks! > rocks.txt
$ ipfs add rocks.txt
added QmQCzFx1YUpBjDStPczthtzKEoQY3gGDvSx1RJiz33abcR rocks.txt
 12 B / 12 B [=========================================================] 100.00%

On node1 check for file

$ ipfs cat QmQCzFx1YUpBjDStPczthtzKEoQY3gGDvSx1RJiz33abcR
IPFS Rocks!

We have completed the first part: creating a private IPFS network and running its daemons as a service. At this point, you should have two IPFS nodes (node0 and node1) organized in one private network.

Let’s create our IPFS-CLUSTER for data replication.

Deploying IPFS-Cluster

After we create a private IPFS network, we can start deploying IPFS-Cluster on top of IPFS for automated data replication and better management of our data.

There are two ways to organize an IPFS cluster: the first is to set a fixed peerset (you will not be able to add more peers to the cluster after creation), and the second is to bootstrap nodes (you can add new peers after the cluster is created). In this case we will be bootstrapping nodes.

IPFS-Cluster includes two components:

  • ipfs-cluster-service, mostly to initialize a cluster peer and run its daemon
  • ipfs-cluster-ctl, for managing nodes and data across the cluster

Check the URLs for new versions at:
https://dist.ipfs.io/#ipfs-cluster-service
https://dist.ipfs.io/ipfs-cluster-ctl
https://dist.ipfs.io/go-ipfs

Install IPFS cluster-service and IPFS Cluster-Ctl

Repeat this step for all of your nodes (node0 and node1).

$ wget https://dist.ipfs.tech/ipfs-cluster-service/v1.0.4/ipfs-cluster-service_v1.0.4_linux-amd64.tar.gz

IPFS cluster-ctl

$ wget https://dist.ipfs.tech/ipfs-cluster-ctl/v1.0.4/ipfs-cluster-ctl_v1.0.4_linux-amd64.tar.gz

Uncompress them:

$ tar xvfz ipfs-cluster-service_v1.0.4_linux-amd64.tar.gz
ipfs-cluster-service/LICENSE
ipfs-cluster-service/LICENSE-APACHE
ipfs-cluster-service/LICENSE-MIT
ipfs-cluster-service/README.md
ipfs-cluster-service/ipfs-cluster-service
$ tar xvfz ipfs-cluster-ctl_v1.0.4_linux-amd64.tar.gz
ipfs-cluster-ctl/LICENSE
ipfs-cluster-ctl/LICENSE-APACHE
ipfs-cluster-ctl/LICENSE-MIT
ipfs-cluster-ctl/README.md
ipfs-cluster-ctl/ipfs-cluster-ctl

Install

$ sudo cp ipfs-cluster-service/ipfs-cluster-service /usr/local/bin
$ sudo cp ipfs-cluster-ctl/ipfs-cluster-ctl /usr/local/bin

Confirm things are installed correctly:

$ ipfs-cluster-service help
$ ipfs-cluster-ctl help
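
You can also print the installed versions (an optional check; the exact output format may differ between releases):

$ ipfs-cluster-service --version
$ ipfs-cluster-ctl --version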

Generate and set up CLUSTER_SECRET variable

Now we need to generate CLUSTER_SECRET and set it as an environment variable for all peers participating in the cluster. Sharing the same CLUSTER_SECRET allows peers to understand that they are part of one IPFS-Cluster. We will generate this key on the bootstrap node (node0) and then copy it to all other nodes. The secret is a 32-byte, hex-encoded random string; only peers that have this key can communicate with the cluster. Generate it and display it:

On your first node (bootstrap node , node0) run the following commands:

$ export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
$ echo $CLUSTER_SECRET
7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774

You should see something like this:

7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774

In order for CLUSTER_SECRET not to disappear after you exit the console session, you must add it as an environment variable to the .bashrc file. Copy the key printed by the echo command and add it to the end of the .bashrc file on all of your nodes. Run this on node0 and node1.

export CLUSTER_SECRET=7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774
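
One way to append it without opening an editor (using the example key from above; substitute your own generated secret):

$ echo 'export CLUSTER_SECRET=7d33cbf9b48845db5b8ba07eacb7898eea44f888576b9a19098fe33a7524d774' >> ~/.bashrc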

And don't forget to reload your .bashrc file with this command:

$ source ~/.bashrc

Initialize and Start cluster

After we have installed the IPFS-Cluster service and set the CLUSTER_SECRET environment variable, we are ready to initialize and start the first cluster peer (node0).

Note: make sure that your ipfs daemon is running before you start the ipfs-cluster-service daemon.

On node0 run:

$ systemctl status ipfs
● ipfs.service - IPFS Daemon
   Loaded: loaded (/etc/systemd/system/ipfs.service; enabled; vendor preset: ena
   Active: active (running) since Thu 2021-06-10 09:23:46 CDT; 41min ago
 Main PID: 387 (ipfs)
    Tasks: 9 (limit: 1149)
   Memory: 78.3M
   CGroup: /system.slice/ipfs.service
           └─387 /usr/local/bin/ipfs daemon --enable-namesys-pubsub

To initialize the cluster peer, we need to run the command below on node0 only:

$ ipfs-cluster-service init
2021-06-10T10:06:36.240-0500    INFO    config  config/config.go:481    Saving configuration
configuration written to /home/ipfs/.ipfs-cluster/service.json.
2021-06-10T10:06:36.242-0500    INFO    config  config/identity.go:73   Saving identity
new identity written to /home/ipfs/.ipfs-cluster/identity.json
new empty peerstore written to /home/ipfs/.ipfs-cluster/peerstore.

You should see the output above in the console. Please note the following:

…new identity written to /home/ipfs/.ipfs-cluster/identity.json

Let's display and note the identity, as we will need it later. This is the cluster peer ID. On node0, run:

$ grep id /home/ipfs/.ipfs-cluster/identity.json
    "id": "12D3KooWMHkMEccR9XXaJDnoWZtXb2zEdmoUtmbGCsM21DjfxHud",

The “id” is the cluster peer id.

To start the cluster peer, run the following on node0 only:

$ ipfs-cluster-service daemon

You should see the following:

$ ipfs-cluster-service daemon
2021-06-10T10:13:40.672-0500    INFO    service ipfs-cluster-service/daemon.go:4
6       Initializing. For verbose output run with "-l debug". Please wait...
2021-06-10T10:13:40.816-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
136     IPFS Cluster v0.13.3 listening on:
        /ip4/192.168.0.95/tcp/9096/p2p/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc
        /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc

2021-06-10T10:13:40.817-0500    INFO    restapi rest/restapi.go:521     REST API
(HTTP): /ip4/127.0.0.1/tcp/9094
2021-06-10T10:13:40.818-0500    INFO    ipfsproxy       ipfsproxy/ipfsproxy.go:3
20      IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001
2021-06-10T10:13:40.819-0500    INFO    crdt    go-ds-crdt@v0.1.20/crdt.go:278 c
rdt Datastore created. Number of heads: 0. Current max-height: 0
2021-06-10T10:13:40.819-0500    INFO    crdt    crdt/consensus.go:300   'trust a
ll' mode enabled. Any peer in the cluster can modify the pinset.
2021-06-10T10:13:40.862-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
651     Cluster Peers (without including ourselves):
2021-06-10T10:13:40.862-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
653         - No other peers
2021-06-10T10:13:40.863-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
666     ** IPFS Cluster is READY **

Bootstrapping Additional Peers (adding them to cluster)

Open a new console window and connect to the client node (node1). Note: make sure that your ipfs daemon is running before you start the ipfs-cluster-service daemon.

$ systemctl status ipfs
● ipfs.service - IPFS Daemon
   Loaded: loaded (/etc/systemd/system/ipfs.service; enabled; vendor preset: ena
   Active: active (running) since Thu 2021-06-10 09:23:53 CDT; 59min ago
 Main PID: 390 (ipfs)
    Tasks: 8 (limit: 1149)
   Memory: 78.3M
   CGroup: /system.slice/ipfs.service
           └─390 /usr/local/bin/ipfs daemon --enable-namesys-pubsub

Run the following commands to initialize IPFS-Cluster on node1.

$ ipfs-cluster-service init
2021-06-10T10:24:20.276-0500    INFO    config  config/config.go:481    Saving configuration
configuration written to /home/ipfs/.ipfs-cluster/service.json.
2021-06-10T10:24:20.278-0500    INFO    config  config/identity.go:73   Saving identity
new identity written to /home/ipfs/.ipfs-cluster/identity.json
new empty peerstore written to /home/ipfs/.ipfs-cluster/peerstore.

Now we add node1 to the cluster by bootstrapping it to node0 as follows:

$ ipfs-cluster-service daemon --bootstrap /ip4/first_node_IP/tcp/9096/ipfs/peer_id

Log in to node0 on a new SSH console. The peer ID can be found with the following; run this on node0:

$ cd .ipfs-cluster/
$ cat identity.json
{
    "id": "12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc",
    "private_key": "CAESQBHGvM9TBWBRHcl8J4qiuQMk0ka4N8gcSyVCyDRkYgJ/8+7znFeoKBw2Z+a6CQik//4dKCX1REwF2Awrqh3B2uU="

Bear in mind that it should be IPFS-Cluster peer ID, not an IPFS peer ID.

The IP address needed here is that of node0 (the bootstrap node), which we found earlier with hostname -I: 192.168.0.95.

Here is the full command in our case; run this on node1:

$ ipfs-cluster-service daemon --bootstrap /ip4/192.168.0.95/tcp/9096/ipfs/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc
2021-06-10T10:40:51.361-0500    INFO    service ipfs-cluster-service/daemon.go:4
6       Initializing. For verbose output run with "-l debug". Please wait...
2021-06-10T10:40:51.485-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
136     IPFS Cluster v0.13.3 listening on:
        /ip4/192.168.0.116/tcp/9096/p2p/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ
        /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ


2021-06-10T10:40:51.486-0500    INFO    restapi rest/restapi.go:521     REST API
(HTTP): /ip4/127.0.0.1/tcp/9094
2021-06-10T10:40:51.486-0500    INFO    ipfsproxy       ipfsproxy/ipfsproxy.go:3
20      IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001
2021-06-10T10:40:51.487-0500    INFO    crdt    go-ds-crdt@v0.1.20/crdt.go:278 c
rdt Datastore created. Number of heads: 0. Current max-height: 0
2021-06-10T10:40:51.487-0500    INFO    crdt    crdt/consensus.go:300   'trust a
ll' mode enabled. Any peer in the cluster can modify the pinset.
2021-06-10T10:40:51.545-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
651     Cluster Peers (without including ourselves):
2021-06-10T10:40:51.545-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
653         - No other peers
2021-06-10T10:40:51.546-0500    INFO    cluster ipfs-cluster@v0.13.3/cluster.go:
666     ** IPFS Cluster is READY **

To check that we have two peers in our cluster, run the command on both nodes in a different terminal:

On node0

$ ipfs-cluster-ctl peers ls
12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ | node1 | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ
    - /ip4/192.168.0.116/tcp/9096/p2p/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ
  > IPFS: 12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip4/127.0.0.1/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip4/192.168.0.116/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip6/2603:8081:2301:3b54:5054:ff:fe99:a8ad/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip6/::1/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc | node0 | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc
    - /ip4/192.168.0.95/tcp/9096/p2p/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc
  > IPFS: 12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip4/127.0.0.1/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip4/192.168.0.95/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip6/2603:8081:2301:3b54:5054:ff:fe4c:c469/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip6/::1/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3

On node1

$ ipfs-cluster-ctl peers ls
12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ | node1 | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ
    - /ip4/192.168.0.116/tcp/9096/p2p/12D3KooWD6gwpVwW31p2Wan3BnYEkQy5X8QpL51aoiPdAR3X2wnZ
  > IPFS: 12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip4/127.0.0.1/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip4/192.168.0.116/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip6/2603:8081:2301:3b54:5054:ff:fe99:a8ad/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
    - /ip6/::1/tcp/4001/p2p/12D3KooWFbTdbLXbucZMekBxyqqhRKPboZvYKiX1yBVmtikiUXCG
12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc | node0 | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/p2p/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc
    - /ip4/192.168.0.95/tcp/9096/p2p/12D3KooWSEaZydrYik9gKenUhezTi2z8NBXYHB2RmsknQePoMUxc
  > IPFS: 12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip4/127.0.0.1/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip4/192.168.0.95/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip6/2603:8081:2301:3b54:5054:ff:fe4c:c469/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3
    - /ip6/::1/tcp/4001/p2p/12D3KooWM5oWJ2Z55dCSvyB3Zo6nS1zW1GvnoZSdxNdDCuDAGvb3

And you should see the list of cluster peers.

Run IPFS-Cluster daemon as a service

In the two terminals (one for each node) where the ipfs-cluster-service daemon is running in the foreground, hit Ctrl-C to stop the daemon.

Let's add the ipfs-cluster-service daemon as a service. On both nodes, run the following:

$ sudo nano /etc/systemd/system/ipfs-cluster-service.service

Add the following:

[Unit]
Description=IPFS Cluster Service
After=network.target

[Service]
# File-descriptor limit; the original used an unexpanded template variable here, so adjust this value to your environment
LimitNOFILE=8192
Environment="IPFS_CLUSTER_FD_MAX=8192"
ExecStart=/usr/local/bin/ipfs-cluster-service daemon
Restart=on-failure
User=ipfs

[Install]
WantedBy=multi-user.target

Reload the systemd daemon so it finds the new service. Do this on both nodes.

$ sudo systemctl daemon-reload
$ sudo systemctl enable ipfs-cluster-service.service
Created symlink /etc/systemd/system/multi-user.target.wants/ipfs-cluster-service.service → /etc/systemd/system/ipfs-cluster-service.service.
$ sudo systemctl start ipfs-cluster-service
$ sudo systemctl status ipfs-cluster-service
● ipfs-cluster-service.service - IPFS Cluster Service
   Loaded: loaded (/etc/systemd/system/ipfs-cluster-service.service; enabled; ven
   Active: active (running) since Thu 2021-06-10 11:04:23 CDT; 20s ago
 Main PID: 584 (ipfs-cluster-se)
    Tasks: 6 (limit: 1149)
   Memory: 39.7M
   CGroup: /system.slice/ipfs-cluster-service.service
           └─584 /usr/local/bin/ipfs-cluster-service daemon

Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.613-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]:         /ip4/192.168.0.95/tcp/90
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]:         /ip4/127.0.0.1/tcp/9096/
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.672-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.672-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.673-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.673-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.674-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.674-0500
Jun 10 11:04:23 ipfs3 ipfs-cluster-service[584]: 2021-06-10T11:04:23.674-0500

Reboot both nodes.

$ sudo shutdown -r now

Login after reboot and check that both IPFS and IPFS-Cluster services are running.

$ sudo systemctl status ipfs
$ sudo systemctl status ipfs-cluster-service

Test IPFS-Cluster and data replication

To test data replication, create a file on node0 and add it to the cluster:

$ echo Hello World! > myfile.txt
$ cd ipfs-cluster-ctl/
$ ipfs-cluster-ctl add /home/ipfs/myfile.txt
added QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG myfile.txt

Take the hash (CID) of the recently added file and check its status:

$ ipfs-cluster-ctl status QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG

You should see that this file has been PINNED among all cluster nodes.

$ ipfs-cluster-ctl status QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG
QmfM2r8seH2GiRaC4esTjeraXEachRt8ZsSeGaWTPLyMoG:
    > node1                : PINNED | 2021-06-10T16:18:20.744805693Z
    > node0                : PINNED | 2021-06-10T11:18:20.740298488-05:00
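
You can also list everything pinned across the cluster (an optional check):

$ ipfs-cluster-ctl pin ls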

To create a KVM guest VM from the command line:

I have an existing ISO directory at:
/myzpool/iso/

And I am storing the KVM disk images at:
/myzpool/kvm

The command that creates the virtual machine is this:

$ sudo virt-install --name server01debian10-server3 \
--os-type linux \
--os-variant debian10 \
--ram 2048 \
--disk /myzpool/kvm/debian10-server3.qcow2,device=disk,bus=virtio,size=10,format=qcow2 \
--graphics none \
--noautoconsole \
--hvm \
--cdrom /iso/debian-10.9.0-amd64-netinst.iso \
--boot cdrom,hd

Or

virt-install --name=centos \
--memory=2048 --vcpus=1 \
--location=/myzpool/iso/debian-10.9.0-amd64-netinst.iso \
--disk /myzpool/kvm/debian10-server3.qcow2,device=disk,bus=virtio,size=8 \
--network bridge:br0 \
--os-type=linux  \
--nographics \
--extra-args='console=tty0 console=ttyS0,115200n8 serial'
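
If virt-install rejects an --os-variant value, you can list the identifiers known to your system (an optional check; osinfo-query is typically provided by the libosinfo-bin package on Debian):

$ osinfo-query os | grep -i debian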

This article describes how to enable the virsh console on KVM guests.

List Virtual Machines

$ sudo virsh list --all
 Id   Name               State
-----------------------------------
 13   debian10-server3   running

After installing the KVM guest and trying to connect to its console, all I see is the following, with no access to the machine:

$ sudo virsh console debian10-server3
Connected to domain centos8
Escape character is ^]

Type the following to exit:

Ctrl+]

Enable Virsh Console Access For KVM Guests

Log in via SSH or the KVM virtual machine manager to the KVM guest system (virtual machine), not the KVM host.

Run the following:

$ sudo systemctl enable serial-getty@ttyS0.service
Created symlink /etc/systemd/system/getty.target.wants/serial-getty@ttyS0.service → /lib/systemd/system/serial-getty@.service.

 

$ sudo systemctl start serial-getty@ttyS0.service

Verify it by looking into the VM's configuration XML file from the host:

$ sudo virsh edit debian10-server3

Scroll to see the following lines…
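
The serial console section of the XML typically resembles the snippet below (a representative example; your generated XML may differ slightly):

<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>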

Now start the virsh console of the guest system from the host using command:

$ sudo virsh console debian10-server3
Connected to domain centos8
Escape character is ^]

Press ENTER again and type your user name and password to connect to the guest machine. To exit, type Ctrl+]

Installing IPFS through the command-line is handy if you plan on building applications and services on top of an IPFS node. This method is also useful if you’re setting up a node without a user interface, usually the case with remote servers or virtual machines. Using IPFS through the command-line allows you to do everything that IPFS Desktop can do, but at a more granular level since you can specify which commands to run.

You can install as root or, on Debian, add or modify a user for sudo.

By default, sudo is not installed on Debian, but you can install it. First, log in as root.
Install sudo by running:

# apt-get install sudo -y

Add a user ipfs ( or use one of your own users).

# adduser ipfs
Adding user `ipfs' ...
Adding new group `ipfs' (1000) ...
Adding new user `ipfs' (1000) with group `ipfs' ...
Creating home directory `/home/ipfs' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for ipfs
Enter the new value, or press ENTER for the default
        Full Name []: IPFS
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y
# usermod -aG sudo ipfs

Make sure your sudoers file has the sudo group added. Run:

# visudo

Allow members of group sudo to execute any command

%sudo   ALL=(ALL:ALL) ALL

Copy SSH keys to the ipfs user from root (optional step):

# cp -r .ssh/ /home/ipfs/

Set permissions

 # chown -R ipfs:ipfs /home/ipfs/.ssh/

You need to re-login or reboot the device for the changes to take effect.

IPFS Install
Log in as the IPFS user.

Download the Linux binary from dist.ipfs.io

$ wget https://dist.ipfs.io/go-ipfs/v0.8.0/go-ipfs_v0.8.0_linux-amd64.tar.gz

Unzip the file:

$ tar xvfz go-ipfs_v0.8.0_linux-amd64.tar.gz
go-ipfs/install.sh
go-ipfs/ipfs
go-ipfs/LICENSE
go-ipfs/LICENSE-APACHE
go-ipfs/LICENSE-MIT
go-ipfs/README.md

Move into the go-ipfs folder and run the install script:

$ cd go-ipfs
$ sudo ./install.sh
Moved ./ipfs to /usr/local/bin

Move to HOME

$ cd ..

Test that IPFS has installed correctly:

$ ipfs --version
ipfs version 0.8.0

Initialize the repository

IPFS stores all its settings and internal data in a directory called the repository. Before using IPFS for the first time, you'll need to initialize the repository with the “ipfs init” command. There are two ways to initialize: a local installation and a data center installation. If you are in a data center, skip to the data center installation below.

Local Installation (Only for local installations):

$ IPFS_PATH=~/.ipfs ipfs init

Datacenter Installation:
If you are running on a server in a data center, you should initialize IPFS with the server profile. Doing so will prevent IPFS from creating a lot of data center-internal traffic trying to discover local nodes:

$ IPFS_PATH=~/.ipfs ipfs init --profile server
generating ED25519 keypair...done
peer identity: 12D3KooWKQn2n8Yee75qJqUHAc6cpfZypby2qhczWhXYx2k4FEtM
initializing IPFS node at /home/username/.ipfs
to get started, enter:

        ipfs cat /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/readme

The hash after peer identity is your node’s ID and will be different from the one shown in the above output. Other nodes on the network use it to find and connect to you. You can run ipfs id at any time to get it again if you need it.

Now, run the command in the output of ipfs init. The one that looks like this…

$ ipfs cat /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/readme

You should see the IPFS welcome text. You can explore other objects in the repository, in particular the quick-start directory, which shows example commands to try:

$ ipfs cat /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/quick-start

Take your node online

Option 1

Once you’re ready to join your node to the public network, run the ipfs daemon in another terminal and wait for all three lines below to appear to know that your node is ready. This is a way to manually start it. See below to have the service set up to start automatically.

$ IPFS_PATH=~/.ipfs ipfs daemon
Initializing daemon...
go-ipfs version: 0.9.0
Repo version: 11
System version: amd64/linux
Golang version: go1.15.8
API server listening on /ip4/127.0.0.1/tcp/5001
WebUI: http://127.0.0.1:5001/webui
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready

Make a note of the TCP ports you receive. If they are different, use yours in the commands below.

Now, switch back to your original terminal. If you’re connected to the network, you should be able to see the IPFS addresses of your peers when you run:

$ ipfs swarm peers

Option 2
It would be better to start the IPFS daemon as a service instead of as a terminal-attached process. You can create a service so that the daemon runs automatically.

Create a systemd service for ipfs:

$ sudo nano /etc/systemd/system/ipfs.service

Add the following (Change User and Group Accordingly):

[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=ipfs
[Install]
WantedBy=multi-user.target

Reload the systemd daemon so it finds the new service:

$ sudo systemctl daemon-reload

Tell systemd that ipfs should be started on startup:

$ sudo systemctl enable ipfs

start ipfs:

$ sudo systemctl start ipfs

check status:

$ sudo systemctl status ipfs

You should see something like this:

● ipfs.service - ipfs daemon
   Loaded: loaded (/lib/systemd/system/ipfs.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-08-28 20:38:04 UTC; 4s ago
 Main PID: 30133 (ipfs)
    Tasks: 9 (limit: 4915)
   CGroup: /system.slice/ipfs.service
           └─30133 /usr/local/bin/ipfs daemon --enable-gc

ipfs[30133]: Swarm listening on /ip4/127.0.0.1/tcp/4001
ipfs[30133]: Swarm listening on /ip4/172.31.43.10/tcp/4001
ipfs[30133]: Swarm listening on /ip6/::1/tcp/4001
ipfs[30133]: Swarm listening on /p2p-circuit
ipfs[30133]: Swarm announcing /ip4/127.0.0.1/tcp/4001
ipfs[30133]: Swarm announcing /ip6/::1/tcp/4001
ipfs[30133]: API server listening on /ip4/127.0.0.1/tcp/5001
ipfs[30133]: WebUI: http://127.0.0.1:5001/webui
ipfs[30133]: Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/80
ipfs[30133]: Daemon is ready

How to view documents from somewhere other than a local web URL.

By default, the files are only visible to a browser on localhost. To change this, change the gateway address and restart the daemon.

Make the gateway publicly accessible. This allows you, and everyone else, to view files.

If you want to, you can make your IPFS gateway and webui publicly accessible (Note: This should not be done unless locked down with a firewall rule restricting access). Change gateway configuration to listen on all available IP addresses.

In the file at ~/.ipfs/config change the following:

$ nano ~/.ipfs/config 
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080"

to…

 
"API": "/ip4/0.0.0.0/tcp/5001",
"Gateway": "/ip4/0.0.0.0/tcp/8080"

You can also run the commands below from the cli:

$ ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://your_domain_name-or_ip_address.com:5001", "http://localhost:3000", "http://127.0.0.1:5001", "https://webui.ipfs.io"]'
$ ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST"]'
$ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
$ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
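
You can confirm the new values took effect by reading them back (optional):

$ ipfs config Addresses.API
$ ipfs config Addresses.Gateway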

Restart IPFS after the changes

$ sudo systemctl restart ipfs

Load the URL to your site. In this case, I have an AWS instance but you can use the IP of your server or your domain name.

http://ip_address:8080/ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc 

Webui

The webui is located at the following URL

http://ip_address:5001/webui
http://domain-name.com:5001/webui

To convert your CentOS 8 operating system to AlmaLinux, do the following:

-Make a backup of the system.
-Disable Secure Boot, because AlmaLinux doesn’t support it yet.
-Download the almalinux-deploy.sh script (below).

First, run the system update and upgrade commands in CentOS:

# sudo dnf update 
# sudo dnf upgrade

Install Curl, if you don’t have it:

# sudo dnf install curl

Download the CentOS 8 to AlmaLinux migration script.

# curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh

Run the downloaded script with sudo rights or as root. This will download all the necessary packages and replace the default CentOS repos, logos, and other branding with AlmaLinux.

# sudo bash almalinux-deploy.sh

Check release info

# cat /etc/redhat-release 
AlmaLinux release 8.3 (Purple Manul)

Check the default kernel by running:

# sudo grubby --info DEFAULT | grep AlmaLinux

Output:

title="AlmaLinux (4.18.0-240.15.1.el8_3.x86_64) 8.3 (Purple Manul)"