
nihilist - 14 / 10 / 2023

Invidious Setup

In this tutorial, we're going to set up an Invidious instance that automatically updates itself.

Initial Setup

We follow the documentation here:


[ nowhere.moe ] [ /dev/pts/0 ] [/home/invidious/invidious]
→ cd /srv/


[ nowhere.moe ] [ /dev/pts/0 ] [/home/invidious/invidious]
→ git clone https://github.com/iv-org/invidious.git

[ nowhere.moe ] [ /dev/pts/0 ] [/home/invidious/invidious]
→ cd invidious

[ nowhere.moe ] [ /dev/pts/0 ] [/home/invidious/invidious]
→ vim docker-compose.yml

[ nowhere.moe ] [ /dev/pts/0 ] [/srv/invidious]
→ cat docker-compose.yml
version: "3"
services:

  invidious:
    image: quay.io/invidious/invidious:latest
    # image: quay.io/invidious/invidious:latest-arm64 # ARM64/AArch64 devices
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      # Please read the following file for a comprehensive list of all available
      # configuration options and their associated syntax:
      # https://github.com/iv-org/invidious/blob/master/config/config.example.yml
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        external_port: 443
        domain: iv.nowhere.moe
        https_only: true
        statistics_enabled: true
        hmac_key: "adwwadwaadw5ree6ahB" # pwgen 20 1
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: kemal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
  postgresdata:

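The hmac_key above is just a random secret; the inline comment suggests generating it with pwgen (`pwgen 20 1`). If pwgen isn't installed, openssl does the job too; a quick sketch:

```shell
# Generate a random 20-character secret for hmac_key.
# (The compose file's comment suggests `pwgen 20 1`; this openssl variant
# avoids installing an extra package.)
openssl rand -hex 10    # 10 random bytes, printed as 20 hex characters
```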

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ docker-compose down --remove-orphans	

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→  docker volume rm invidious_postgresdata

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ docker-compose up -d

#or like so:

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ docker-compose -f /srv/invidious/docker-compose.yml stop ;  docker-compose -f /srv/invidious/docker-compose.yml up -d

Then make the reverse nginx proxy config:


[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ wget -O -  https://get.acme.sh | sh

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ bash
root@Datura ~ # acme.sh --set-default-ca  --server  letsencrypt

root@Datura ~ # acme.sh --issue --standalone -d nowhere.moe -d iv.nowhere.moe -k 4096

[Sun Jul  9 05:25:32 PM CEST 2023] Your cert is in: /etc/acme/certs/iv.nowhere.moe/iv.nowhere.moe.cer
[Sun Jul  9 05:25:32 PM CEST 2023] Your cert key is in: /etc/acme/certs/iv.nowhere.moe/iv.nowhere.moe.key
[Sun Jul  9 05:25:32 PM CEST 2023] The intermediate CA cert is in: /etc/acme/certs/iv.nowhere.moe/ca.cer
[Sun Jul  9 05:25:32 PM CEST 2023] And the full chain certs is there: /etc/acme/certs/iv.nowhere.moe/fullchain.cer

root@Datura ~ # exit
exit
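acme.sh renews certificates on its own cron, but nginx also needs a reload to pick up the renewed files; `acme.sh --install-cert` with a `--reloadcmd` handles that. A sketch, wrapped in a function for clarity; the destination paths are assumptions matching the nginx config below, run it once as root:

```shell
# Sketch: have acme.sh copy the cert to the paths nginx reads and reload
# nginx on every renewal. Destination paths are assumptions; adjust them
# to wherever your nginx vhost points.
install_cert() {
    acme.sh --install-cert -d iv.nowhere.moe \
        --key-file       /etc/acme/certs/iv.nowhere.moe/iv.nowhere.moe.key \
        --fullchain-file /etc/acme/certs/iv.nowhere.moe/fullchain.cer \
        --reloadcmd      "systemctl reload nginx"
}
```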

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ vim /etc/nginx/sites-available/iv.nowhere.moe.conf

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ cat /etc/nginx/sites-available/iv.nowhere.moe.conf
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name iv.nowhere.moe;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    ssl_certificate /etc/acme/certs/iv.nowhere.moe/fullchain.cer;
    ssl_certificate_key /etc/acme/certs/iv.nowhere.moe/iv.nowhere.moe.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;    # so Invidious knows domain
        proxy_http_version 1.1;     # to keep alive
        proxy_set_header Connection ""; # to keep alive
    }

    if ($https = '') { return 301 https://$host$request_uri; }  # if not connected to HTTPS, perma-redirect to HTTPS
}

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ ln -s /etc/nginx/sites-available/iv.nowhere.moe.conf /etc/nginx/sites-enabled

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ systemctl restart nginx


Then check if it works:

And it does!
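Besides the browser, you can sanity-check it from the shell; a small helper (the domain is the one configured above):

```shell
# Quick sanity check: fetch only the response status line over HTTPS.
# Expect something like "HTTP/2 200" if the proxy and backend are up.
check_instance() {
    curl -sI --max-time 10 "https://$1" | head -n 1
}
# Usage: check_instance iv.nowhere.moe
```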

Manual update setup



Now let's make sure our Invidious instance stays up to date and restarts hourly:


[ nowhere.moe ] [ /dev/pts/2 ] [~]
→ crontab -e

@hourly docker-compose -f /srv/invidious/docker-compose.yml stop ;  docker-compose -f /srv/invidious/docker-compose.yml up -d
@yearly docker-compose -f /srv/invidious/docker-compose.yml stop ; cp /srv/invidious/docker-compose.yml /srv/invidious.docker-compose.yml.backup ;  git -C /srv/invidious pull ; docker-compose -f /srv/invidious/docker-compose.yml up -d # but must be done monthly  to be on invidio.us! (need to manually edit the docker-compose.yml file again afterward)

[ nowhere.moe ] [ /dev/pts/1 ] [/srv/invidious]
→ cronitor select

Use the arrow keys to navigate: ↓ ↑ → ←
✔ docker-compose -f /srv/invidious/docker-compose.yml stop ; docker-compose -f /srv/invidious/docker-compose.yml up -d
----► Running command: docker-compose -f /srv/invidious/docker-compose.yml stop ; docker-compose -f /srv/invidious/docker-compose.yml up -d

Stopping invidious_invidious_1    ... done
Stopping invidious_invidious-db_1 ... done
Recreating invidious_invidious-db_1 ... done
Recreating invidious_invidious_1    ... done

----► ✔ Command successful    Elapsed time 1.692s

Once done, go create an issue at https://github.com/iv-org/documentation to get your instance listed. Mine is here.


[ nowhere.moe ] [ /dev/pts/5 ] [/srv/invidious]
→ ip a
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether a8:a1:59:10:31:bc brd ff:ff:ff:ff:ff:ff
    inet 116.202.216.190/26 brd 116.202.216.191 scope global enp5s0
       valid_lft forever preferred_lft forever
    inet6 IPV6RANGE/64 scope global
       valid_lft forever preferred_lft forever
    inet6 IPV6LINK/64 scope link
       valid_lft forever preferred_lft forever

#so then edit the docker-compose.yml file like so:

[ nowhere.moe ] [ /dev/pts/6 ] [/srv/invidious]
→ cat docker-compose.yml
version: "3"
services:

  invidious:
    image: quay.io/invidious/invidious:latest
    # image: quay.io/invidious/invidious:latest-arm64 # ARM64/AArch64 devices
    restart: unless-stopped
    networks:
      - invidious
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      # Please read the following file for a comprehensive list of all available
      # configuration options and their associated syntax:
      # https://github.com/iv-org/invidious/blob/master/config/config.example.yml
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        external_port: 443
        domain: iv.nowhere.moe
        https_only: true
        statistics_enabled: true
        hmac_key: "ahxuung0ceib5ree6ahB"
        #you can put other arguments here, for example:
        default_home: Search
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    networks:
      - invidious
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: kemal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
  postgresdata:

networks:
  invidious:
    #enable_ipv6: true
    ipam:
      config:
        - subnet: IPV6RANGE/64


New crontab + new docker-compose.yml file:


crontab -e 

@hourly docker-compose -f /srv/invidious/docker-compose.yml stop ;  docker-compose -f /srv/invidious/docker-compose.yml up -d
@monthly docker-compose -f /srv/invidious/docker-compose.yml stop ; cp /srv/invidious/docker-compose.yml /srv/invidious.docker-compose.yml.backup ;  git -C /srv/invidious pull ; cp /srv/invidious.docker-compose.yml.backup /srv/invidious/docker-compose.yml; docker-compose -f /srv/invidious/docker-compose.yml up -d # monthly invidious upgrade!
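That @monthly one-liner packs several steps; the same logic as a standalone function may be easier to read and maintain. A sketch, with paths mirroring the crontab entry above:

```shell
#!/bin/sh
# The @monthly upgrade, unrolled: stop the stack, back up the customized
# docker-compose.yml, pull upstream, restore the customized file, restart.
COMPOSE_DIR=/srv/invidious
BACKUP=/srv/invidious.docker-compose.yml.backup

upgrade_invidious() {
    docker-compose -f "$COMPOSE_DIR/docker-compose.yml" stop
    cp "$COMPOSE_DIR/docker-compose.yml" "$BACKUP"
    git -C "$COMPOSE_DIR" pull
    cp "$BACKUP" "$COMPOSE_DIR/docker-compose.yml"
    docker-compose -f "$COMPOSE_DIR/docker-compose.yml" up -d
}
# Usage: upgrade_invidious
```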

YouTube tries to block Invidious instances; IPv6 is the way forward!

We're going to follow what unixfox suggests here:

Your Invidious instance may get blocked once it becomes popular enough, so we circumvent YouTube's IPv4 blocking with IPv6 addresses. You need to enable IPv6 on both your server and Docker. The first step is to get an IPv6 range on your server if you don't already have one; once you have it, add it like so:


/sbin/ip -6 addr add 2001:0db8:0:f101::1/56 dev enp5s0  

# onwards I will refer to 2001:0db8:0:f101::/56 as IPV6RANGE::/56

# if that IP gets blocked, remove it and add the next one like so:

/sbin/ip -6 addr del 2001:0db8:0:f101::1/56 dev enp5s0  
/sbin/ip -6 addr add 2001:0db8:0:f101::2/56 dev enp5s0  

# if it also gets blocked:

/sbin/ip -6 addr del 2001:0db8:0:f101::2/56 dev enp5s0  
/sbin/ip -6 addr add 2001:0db8:0:f101::3/56 dev enp5s0  
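The add/del pairs above can be wrapped in a tiny helper so rotating to the next address is a single command. A sketch; PREFIX and DEV are assumptions (2001:0db8:0:f101 stands in for your real IPV6RANGE prefix and enp5s0 for your interface):

```shell
#!/bin/sh
# Helper to rotate the host's public IPv6 within the routed range when the
# current address gets blocked.
PREFIX="2001:0db8:0:f101"   # substitute your real IPV6RANGE prefix
DEV="enp5s0"                # substitute your real interface

# Print the Nth address in the range, e.g. addr_for 3 -> 2001:0db8:0:f101::3
addr_for() {
    printf '%s::%s\n' "$PREFIX" "$1"
}

# Drop the old address and bring up the next one (run as root):
rotate() {
    /sbin/ip -6 addr del "$(addr_for "$1")/56" dev "$DEV"
    /sbin/ip -6 addr add "$(addr_for "$2")/56" dev "$DEV"
}

# Usage: rotate 1 2    # replaces ::1 with ::2
```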

Next, make sure Docker uses the whole IPv6 range (note that it ends with ::/56):



######################## EDIT AS OF 24/02/2024: SHOULDN'T BE REQUIRED! ########################
[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ vim /etc/docker/daemon.json

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ cat /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "IPV6RANGE::/56",
  "experimental": true,
  "ip6tables": true
}

# OR you can do the following: (Thanks Arya from Project Segfault)

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ vim /etc/docker/daemon.json

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ cat /etc/docker/daemon.json
{
"ipv6": true,
"fixed-cidr-v6": "fd00:dead:beef::/48",
"default-address-pools": [
	{
		"base": "172.80.0.0/16",
		"size": 24
	}
]
}

######################## EDIT AS OF 24/02/2024: SHOULDN'T BE REQUIRED! ########################

Then restart docker to make sure the change is there:


[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ systemctl stop docker

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ mv /var/lib/docker/network/files/local-kv.db /tmp/dn-bak
# required because docker may fail to start after enabling ipv6, due to conflicts with the default bridge network config (shouldn't be needed if you added the default-address-pools parameter above).

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ systemctl start docker

Once Docker restarts, it will have picked up the new IPv6 subnet you've given it, which you can verify:


[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ docker network ls
NETWORK ID     NAME        DRIVER    SCOPE
559958b3d43c   bridge      bridge    local
2d71827848ba   host        host      local
80a671afbacd   invidious   bridge    local
1ad703b48dd0   none        null      local

[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "559958b3d43c8ae942af6bb1dc21edeb295383258db8bd864b94ee9630badad2",
        "Created": "2023-09-30T22:14:09.026433561+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                },
                {
                    "Subnet": "IPV6RANGE::/56"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Next, make sure all traffic for that IPv6 range is routed to the docker0 interface, if Docker didn't add the route automatically:


[ nowhere.moe ] [ /dev/pts/20 ] [~]
→ ip -6 route ls dev docker0
IPV6RANGE::/56 proto kernel metric 256 linkdown pref medium
fe80::/64 proto kernel metric 256 linkdown pref medium

# to add the route manually:
ip -6 route add IPV6RANGE::/56 dev docker0

Then make sure Invidious itself uses IPv6 properly:


[ nowhere.moe ] [ /dev/pts/19 ] [/srv/invidious]
→ vim docker-compose.yml

[ nowhere.moe ] [ /dev/pts/19 ] [/srv/invidious]
→ cat docker-compose.yml
version: "2.1"
services:

  ipv6nat:
    container_name: ipv6nat
    privileged: true
    network_mode: host
    restart: unless-stopped
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - '/lib/modules:/lib/modules:ro'
    image: robbertkl/ipv6nat

  invidious:
    image: quay.io/invidious/invidious:latest
    # image: quay.io/invidious/invidious:latest-arm64 # ARM64/AArch64 devices
    restart: unless-stopped
    networks:
      - invidious
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      # Please read the following file for a comprehensive list of all available
      # configuration options and their associated syntax:
      # https://github.com/iv-org/invidious/blob/master/config/config.example.yml
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        external_port: 443
        domain: iv.nowhere.moe
        https_only: true
        statistics_enabled: true
        hmac_key: "dawwaddwadwadwadwa"
        force_resolve: ipv6
        default_user_preferences:
         dark_mode: "dark"
         default_home: "Search"
         popular_enabled: true
         feed_menu: ["Subscriptions", "Playlists"]
         autoplay: true
         continue: true
         continue_autoplay: true
         local: false
         #quality: dash
         #quality_dash: 720p
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    networks:
      - invidious
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: kemal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
  postgresdata:

networks:
  invidious:
    name: invidious
    enable_ipv6: true
    #external: true
    ipam:
      config:
        - subnet: fd00:dead:beec::/48

Commentary: first there's the ipv6nat service; then the local setting is set to false, as it apparently conflicts with the IPv6 setup; then IPv6 is enabled (via sysctls) in both the invidious and invidious-db services; and finally there's the invidious network itself. Take note that compose file version 2.1 is required, because the enable_ipv6 setting at the bottom isn't available otherwise. Also note the INTERNAL subnet fd00:dead:beec::/48 at the bottom: this is intentional, don't put the external IPv6 range there. Once that's done, just run it:


[ nowhere.moe ] [ /dev/pts/19 ] [/srv/invidious]
→ docker-compose down --remove-orphans ; docker-compose up -d
Stopping invidious_invidious_1    ... done
Stopping invidious_invidious-db_1 ... done
Stopping ipv6nat                  ... done
Removing invidious_invidious_1    ... done
Removing invidious_invidious-db_1 ... done
Removing ipv6nat                  ... done
Removing network invidious
Creating network "invidious" with the default driver
Creating ipv6nat                  ... done
Creating invidious_invidious-db_1 ... done
Creating invidious_invidious_1    ... done

Then check that the instance has working IPv6:


[ nowhere.moe ] [ /dev/pts/19 ] [/srv/invidious]
→ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether a8:a1:59:10:31:bc brd ff:ff:ff:ff:ff:ff
    inet 116.202.216.190/26 brd 116.202.216.191 scope global enp5s0
       valid_lft forever preferred_lft forever
    inet6 IPV6RANGE::3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa1:59ff:fe10:31bc/64 scope link
       valid_lft forever preferred_lft forever

[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ docker network inspect invidious
[
    {
        "Name": "invidious",
        "Id": "f1bca40b9d77dc8e21b9e2e433f7b89deb36630ae506fc32ea543adb809f091a",
        "Created": "2023-09-30T23:06:35.21024565+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.80.0/20",
                    "Gateway": "192.168.80.1"
                },
                {
                    "Subnet": "fd00:dead:beec::/48"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "247dd2c35c1277bc030d5cfe176d01963611e8aa118c899bfe7d86e70abcf933": {
                "Name": "invidious_invidious-db_1",
                "EndpointID": "d0013abf14073b91be5a9da37e734a915fc6c77349bf67b3559890c712026c30",
                "MacAddress": "02:42:c0:a8:50:02",
                "IPv4Address": "192.168.80.2/20",
                "IPv6Address": "fd00:dead:beec::2/48"
            },
            "db397f8428cdca171046de25d0ce199fcc5902771c7d085763f9930b6d2fff0d": {
                "Name": "invidious_invidious_1",
                "EndpointID": "80ff47283bcbdc5ed1f8c14bd0c0a23997fe0b4ca072e0f86f909181ab38609d",
                "MacAddress": "02:42:c0:a8:50:03",
                "IPv4Address": "192.168.80.3/20",
                "IPv6Address": "fd00:dead:beec::3/48"
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "invidious",
            "com.docker.compose.project": "invidious",
            "com.docker.compose.version": "1.25.0"
        }
    }
]

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ docker container ls
CONTAINER ID   IMAGE                                                COMMAND                  CREATED          STATUS                    PORTS                                NAMES
db397f8428cd   quay.io/invidious/invidious:latest                   "/sbin/tini -- /invi…"   26 minutes ago   Up 26 minutes (healthy)   127.0.0.1:3000->3000/tcp             invidious_invidious_1
247dd2c35c12   postgres:14                                          "docker-entrypoint.s…"   26 minutes ago   Up 26 minutes (healthy)   5432/tcp                             invidious_invidious-db_1
f57f3bd66a0a   robbertkl/ipv6nat                                    "/docker-ipv6nat-com…"   26 minutes ago   Up 26 minutes                                                  ipv6nat

Let's test whether Invidious has the correct public IPv6 address and whether we can ping Google over IPv6:


#first check if it works on your server by default:
[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ curl -6 icanhazip.com
IPV6RANGE::3

[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ ping -6 ipv6.google.com
PING ipv6.google.com(fra24s05-in-x0e.1e100.net (2a00:1450:4001:828::200e)) 56 data bytes
64 bytes from fra24s05-in-x0e.1e100.net (2a00:1450:4001:828::200e): icmp_seq=1 ttl=60 time=5.17 ms
^C
--- ipv6.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.170/5.170/5.170/0.000 ms

#then check if it's the same on the invidious docker container:
[ nowhere.moe ] [ /dev/pts/0 ] [~]
→ docker exec -it -u root db39 sh
/invidious # apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20230506-r0)
(2/4) Installing nghttp2-libs (1.47.0-r1)
(3/4) Installing libcurl (8.3.0-r0)
(4/4) Installing curl (8.3.0-r0)
Executing busybox-1.35.0-r17.trigger
Executing ca-certificates-20230506-r0.trigger
OK: 47 MiB in 59 packages
/invidious # curl -6 icanhazip.com
IPV6RANGE::3
/invidious # ping -6 ipv6.google.com
PING ipv6.google.com (2a00:1450:4001:828::200e): 56 data bytes
64 bytes from 2a00:1450:4001:828::200e: seq=0 ttl=59 time=5.137 ms
64 bytes from 2a00:1450:4001:828::200e: seq=1 ttl=59 time=5.161 ms
^C

And it works! Now verify it on the Invidious instance itself:

If it doesn't work, it may be because you have the "local: true" setting in your docker-compose.yml file; make sure to set it to false, as it seems to conflict with the IPv6 setup.

Onion instance setup

Now let's set up an Invidious instance that works over Tor with a .onion link. For the initial setup of a Tor .onion website, check out this tutorial.


[ nowhere.moe ] [ /dev/pts/21 ] [/srv/invidious]
→ cat docker-compose.yml
version: "2.1"
services:

  ipv6nat:
    container_name: ipv6nat
    privileged: true
    network_mode: host
    restart: unless-stopped
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - '/lib/modules:/lib/modules:ro'
    image: robbertkl/ipv6nat

  invidious:
    image: quay.io/invidious/invidious:latest
    # image: quay.io/invidious/invidious:latest-arm64 # ARM64/AArch64 devices
    restart: unless-stopped
    networks:
      - invidious
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
      #- net.ipv4.conf.all.disable_ipv4=1
    #volumes:
    #  - ./invidious_web:/invidious/
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      # Please read the following file for a comprehensive list of all available
      # configuration options and their associated syntax:
      # https://github.com/iv-org/invidious/blob/master/config/config.example.yml
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        external_port: 443
        domain: iv.nowhere.moe
        https_only: true
        statistics_enabled: true
        hmac_key: "awdwdadwawadadw"
        force_resolve: ipv6
        banner: 'nowhere.moe - Instance now has ipv6 rotation and an onion link! (14/10/2023)
          Donate Monero: 82w95Xt27wfSLW1UzK48LrXDWngZr4FJ3gYqUVxQ9inQC2JReT81DesKmjcMWWbiBT4k517UwshY53aDPFuvE8AZ1EnYJZu'
        default_user_preferences:
          dark_mode: "dark"
          default_home: "Search"
          popular_enabled: true
          feed_menu: ["Subscriptions", "Playlists"]
          autoplay: true
          continue: true
          continue_autoplay: true
          local: true
          quality: dash
          quality_dash: 720p
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  invidious-tor:
    image: quay.io/invidious/invidious:latest
    # image: quay.io/invidious/invidious:latest-arm64 # ARM64/AArch64 devices
    restart: unless-stopped
    networks:
      - invidious
    #ipv6_address: 2a01:4f8:241:f500::3
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    #volumes:
    #  - ./invidious_web:/invidious/
    ports:
      - "127.0.0.1:3002:3000"
    environment:
      # Please read the following file for a comprehensive list of all available
      # configuration options and their associated syntax:
      # https://github.com/iv-org/invidious/blob/master/config/config.example.yml
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        external_port: 80
        domain: iv.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion
        https_only: false
        statistics_enabled: true
        hmac_key: "adwadwadwaadwdaw"
        force_resolve: ipv6
        banner: 'nowhere.moe - Instance now has ipv6 rotation and an onion link! (14/10/2023)
          Donate Monero: 82w95Xt27wfSLW1UzK48LrXDWngZr4FJ3gYqUVxQ9inQC2JReT81DesKmjcMWWbiBT4k517UwshY53aDPFuvE8AZ1EnYJZu'
        default_user_preferences:
          dark_mode: "dark"
          default_home: "Search"
          popular_enabled: true
          feed_menu: ["Subscriptions", "Playlists"]
          autoplay: true
          continue: true
          continue_autoplay: true
          local: true
          quality: dash
          quality_dash: 720p
    healthcheck:
      # note: inside the container invidious still listens on 3000
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    networks:
      - invidious
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: kemal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
  postgresdata:

networks:
  invidious:
    name: invidious
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:dead:beec::/48

Then re-run it via docker-compose:


[ nowhere.moe ] [ /dev/pts/20 ] [/srv/invidious]
→ cronitor select

Use the arrow keys to navigate: ↓ ↑ → ←
? Select job to run:
✔ docker-compose -f /srv/invidious/docker-compose.yml stop ; docker-compose -f /srv/invidious/docker-compose.yml up -d
----► Running command: docker-compose -f /srv/invidious/docker-compose.yml stop ; docker-compose -f /srv/invidious/docker-compose.yml up -d

Stopping invidious_invidious_1    ... done
Stopping invidious_invidious-db_1 ... done
Stopping ipv6nat                  ... done
Starting ipv6nat                  ... done
Starting invidious_invidious-db_1 ... done
Starting invidious_invidious_1       ... done
Recreating invidious_invidious-tor_1 ... done

----► ✔ Command successful    Elapsed time 2.939s

Once that's done, make sure it can be accessed via the Tor link by adding that access in nginx:


[ nowhere.moe ] [ /dev/pts/19 ] [~]
→ vim /etc/nginx/sites-available/iv.nowhere.moe.conf

[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ cat /etc/nginx/sites-available/iv.nowhere.moe.conf
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;


        ######## TOR CHANGES ########
        #listen 4443;
        #listen [::]:4443;
        #server_name iv.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion;
        add_header Onion-Location "http://iv.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion$request_uri" always;
        ######## TOR CHANGES ########

    server_name iv.nowhere.moe;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    ssl_certificate /etc/acme/certs/iv.nowhere.moe/fullchain.cer;
    ssl_certificate_key /etc/acme/certs/iv.nowhere.moe/iv.nowhere.moe.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;    # so Invidious knows domain
        proxy_http_version 1.1;     # to keep alive
        proxy_set_header Connection ""; # to keep alive
    }

    if ($https = '') { return 301 https://$host$request_uri; }  # if not connected to HTTPS, perma-redirect to HTTPS
}

Here the add_header option makes your clearnet instance advertise the onion instance to users browsing it via Tor. Once users click it, they access it via the mentioned onion link, so let's set that up:



[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ cat /etc/nginx/sites-available/iv.nowhere.moe.tor.conf
server {
        listen 443;
        listen [::]:443;
        server_name iv.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion;
    if ($https != '') { return 301 http://$host$request_uri; }  # if not connected to HTTP, perma-redirect to HTTP
}

server {
        ######## TOR CHANGES ########
        listen 4443;
        listen [::]:4443;
        server_name iv.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion;
        add_header Onion-Location "http://iv.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion$request_uri" always;
        ######## TOR CHANGES ########

    access_log off;
    error_log /var/log/nginx/error.log crit;

    location / {
        proxy_pass http://127.0.0.1:3002;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;    # so Invidious knows domain
        proxy_http_version 1.1;     # to keep alive
        proxy_set_header Connection ""; # to keep alive
    }

    if ($https != '') { return 301 http://$host$request_uri; }  # if not connected to HTTP, perma-redirect to HTTP
}

[ nowhere.moe ] [ /dev/pts/19 ] [/etc/nginx/sites-available]
→ ln -s /etc/nginx/sites-available/iv.nowhere.moe.tor.conf /etc/nginx/sites-enabled

[ nowhere.moe ] [ /dev/pts/19 ] [/etc/nginx/sites-available]
→ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

[ nowhere.moe ] [ /dev/pts/19 ] [/etc/nginx/sites-available]
→ nginx -s reload
2023/10/01 16:47:39 [notice] 1060069#1060069: signal process started
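For reference, the Tor side maps the onion's port 80 to the nginx listener on port 4443 configured above. A minimal torrc sketch; the HiddenServiceDir path is an assumption, see the onion setup tutorial mentioned earlier:

```
# /etc/tor/torrc (HiddenServiceDir path is an assumption; adjust to your setup)
HiddenServiceDir /var/lib/tor/invidious/
HiddenServicePort 80 127.0.0.1:4443
```

After a `systemctl reload tor`, the onion hostname appears in the hostname file inside the HiddenServiceDir.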

Now you need local proxying and DASH to work, since YouTube tries to block those too, so we'll use the IPv6 rotation script made by unixfox:



[ nowhere.moe ] [ /dev/pts/19 ] [/srv]
→ cd /srv/

[ nowhere.moe ] [ /dev/pts/19 ] [/srv]
→  git clone https://github.com/iv-org/smart-ipv6-rotator

[ nowhere.moe ] [ /dev/pts/19 ] [/srv]
→ cd smart-ipv6-rotator

[ nowhere.moe ] [ /dev/pts/19 ] [/srv/smart-ipv6-rotator]
→ cp config.py.example config.py

[ nowhere.moe ] [ /dev/pts/19 ] [/srv/smart-ipv6-rotator]
→ vim config.py

[ nowhere.moe ] [ /dev/pts/19 ] [/srv/smart-ipv6-rotator]
→ cat config.py
ipv6_subnet = "IPV6RANGE::/64"

[ nowhere.moe ] [ /dev/pts/19 ] [/srv/smart-ipv6-rotator]
→ apt install python3-pyroute2

Then you can execute the script like so:


[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ /usr/bin/python3 /srv/smart-ipv6-rotator/smart-ipv6-rotator.py clean ; /usr/bin/python3 /srv/smart-ipv6-rotator/smart-ipv6-rotator.py run --ipv6range="IPV6RANGE::/64"
[INFO] No cleanup of previous setup needed.
[INFO] You have IPv6 connectivity. Continuing.
[INFO] No cleanup of previous setup needed.
[DEBUG] Debug info:
random_ipv6_address --> IPV6RANGE:7c39:d64d:274d:4a18
random_ipv6_address_mask --> 64
gateway --> fe80::1
interface_index --> 2
interface_name --> enp5s0
ipv6_subnet --> IPV6RANGE::/64
[INFO] Correctly using the new random IPv6 address, continuing.
[INFO] Correctly configured the IPv6 routes for Google IPv6 ranges.
[INFO] Successful setup. Waiting for the propagation in the Linux kernel.

[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ curl -6 icanhazip.com
IPV6RANGE:7c39:d64d:274d:4a18

[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ /usr/bin/python3 /srv/smart-ipv6-rotator/smart-ipv6-rotator.py clean ; /usr/bin/python3 /srv/smart-ipv6-rotator/smart-ipv6-rotator.py run --ipv6range="IPV6RANGE::/64"
[INFO] Finished cleaning up previous setup.
[INFO] Waiting for the propagation in the Linux kernel.
[INFO] You have IPv6 connectivity. Continuing.
[INFO] No cleanup of previous setup needed.
[DEBUG] Debug info:
random_ipv6_address --> IPV6RANGE:25b8:6c57:56ef:67de
random_ipv6_address_mask --> 64
gateway --> fe80::1
interface_index --> 2
interface_name --> enp5s0
ipv6_subnet --> IPV6RANGE::/64
[INFO] Correctly using the new random IPv6 address, continuing.
[INFO] Correctly configured the IPv6 routes for Google IPv6 ranges.
[INFO] Successful setup. Waiting for the propagation in the Linux kernel.

[ nowhere.moe ] [ /dev/pts/21 ] [~]
→ curl -6 icanhazip.com
IPV6RANGE:25b8:6c57:56ef:67de

You can verify that your IPv6 changed each time with the curl -6 icanhazip.com command. Now let's add it as a cronjob so the instance's IP changes once a day:


crontab -e

 @daily /usr/bin/python3 /srv/smart-ipv6-rotator/smart-ipv6-rotator.py clean ; /usr/bin/python3 /srv/smart-ipv6-rotator/smart-ipv6-rotator.py run --ipv6range="IPV6RANGE::/64"
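Cron runs are silent by default, so a failed rotation (e.g. after a gateway change) can go unnoticed. A sketch that wraps the same two commands with logging; the log path is an assumption, and IPV6RANGE is your real prefix as elsewhere on this page:

```shell
#!/bin/sh
# Daily rotation with output appended to a log, so failures are visible.
LOG=/var/log/smart-ipv6-rotator.log          # assumed log path
ROTATOR=/srv/smart-ipv6-rotator/smart-ipv6-rotator.py

rotate_daily() {
    {
        date
        /usr/bin/python3 "$ROTATOR" clean
        /usr/bin/python3 "$ROTATOR" run --ipv6range="IPV6RANGE::/64"
    } >>"$LOG" 2>&1
}
# crontab: @daily /bin/sh /srv/rotate-ipv6.sh   (a hypothetical script calling rotate_daily)
```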

Now with that setup, YouTube would need to block every single IPv6 address in the mentioned subnet, which should be pretty resilient.

Now you can just browse to it via the tor browser:

Here you may need to click "enable media" as it may be blocked

And that's it! Our Invidious instance can now be browsed anonymously.

Nihilism

Until there is Nothing left.



Creative Commons Zero: No Rights Reserved

About nihilist

Donate XMR: 8AUYjhQeG3D5aodJDtqG499N5jXXM71gYKD8LgSsFB9BUV1o7muLv3DXHoydRTK4SZaaUBq4EAUqpZHLrX2VZLH71Jrd9k8


Contact: nihilist@contact.nowhere.moe (PGP)