
towards5GS-helm


Towards5GS-helm is an open-source project that provides Helm charts to deploy a complete 5G system (RAN + SA 5G core) on top of Kubernetes with a single click. It currently relies on Free5GC for the core network and UERANSIM to simulate the Radio Access Network.

TL;DR

helm repo add towards5gs 'https://raw.githubusercontent.com/Orange-OpenSource/towards5gs-helm/main/repo/'
helm repo update
helm search repo
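From there, the full chart can be installed into its own namespace. A minimal sketch (the release name `5gc-helm` and namespace `5gc` are placeholders; the same command appears in the Minikube issue below):

```shell
# Create a namespace and install the complete free5gc chart into it
kubectl create namespace 5gc
helm install 5gc-helm towards5gs/free5gc -n 5gc
```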

Documentation

The documentation can be found here!

Motivations

Please consult this link to see the motivations that have led to this project.

Contributing

Moving towards a cloud-native model for the 5G system is not a simple task. We welcome all new contributions that make this project better!

Acknowledgement

Thanks to both Free5GC and UERANSIM teams for their great efforts.

License

towards5GS-helm is under Apache 2.0 license.

Citation

Text format:

A. Khichane, I. Fajjari, N. Aitsaadi and M. Gueroui, "Cloud Native 5G: an Efficient Orchestration of Cloud Native 5G System," NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 2022, pp. 1-9, doi: 10.1109/NOMS54207.2022.9789856.

BibTex:

@INPROCEEDINGS{9789856,
  author={Khichane, Abderaouf and Fajjari, Ilhem and Aitsaadi, Nadjib and Gueroui, Mourad},
  booktitle={NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium},
  title={Cloud Native 5G: an Efficient Orchestration of Cloud Native 5G System},
  year={2022},
  volume={},
  number={},
  pages={1-9},
  doi={10.1109/NOMS54207.2022.9789856}}

towards5gs-helm's People

Contributors

abousselmi, cdestre, chabimic, debeaueric, diogocruz40, efiacor, hi120ki, hkerma, ianchen0119, ilhemfajjari, jeromethiery, lapentad, lgabhishek18, lolototo2, marwilms, navarrothiago, pinoogni, raoufkh


towards5gs-helm's Issues

Error when installing multiple replicas of an NF

Hello,
I am changing the value of replicaCount in the AMF values.yaml file.
However, when I try to deploy with Helm, it fails with the following error:

Error: INSTALLATION FAILED: deployments.apps "free5gc-free5gc-amf-amf" already exists

We don't have any problem deploying free5gc with only the default replica count value.

Any idea on how to solve that? Is there some conflict over Deployment names when setting multiple replicas?
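One way to investigate this without touching the cluster is to render the chart offline and look for duplicated Deployment names. A hypothetical sketch (the value path `free5gc-amf.amf.replicaCount` is a guess and may differ in the actual chart):

```shell
# Render the chart locally with more than one replica and print any
# duplicated metadata.name lines; duplicates would confirm a name clash
helm template free5gc towards5gs/free5gc \
  --set free5gc-amf.amf.replicaCount=2 \
  | grep -E '^  name:' | sort | uniq -d
```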

Thanks,

Container images for NFs and UERANSIM

Hi,

The container images of the NFs and UERANSIM seem to be a few months old and are missing the latest additions and bug fixes from the Free5GC and UERANSIM projects.
Is there a plan to update these images to the latest builds?

Thanks
Priyanshu

UPF pod not forwarding packets to the UE

I know this has been discussed previously and closed, but I still need some help.

What I can see is that the tunnel interface is receiving ping responses from the internet:

rajabu@cloud-console:~$ kubectl exec -it upf-free5gc-upf-upf-594bc9f4c6-n4wqv -n free5gc -- tcpdump -i upfgtp
Defaulted container "upf" out of: upf, init-sysctl (init)
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on upfgtp, link-type RAW (Raw IP), snapshot length 262144 bytes
16:29:17.925241 IP 172.16.0.2 > dns.google: ICMP echo request, id 28, seq 163, length 64
16:29:17.938568 IP dns.google > 172.16.0.2: ICMP echo reply, id 28, seq 163, length 64
16:29:18.949157 IP 172.16.0.2 > dns.google: ICMP echo request, id 28, seq 164, length 64
16:29:18.965977 IP dns.google > 172.16.0.2: ICMP echo reply, id 28, seq 164, length 64
16:29:19.973297 IP 172.16.0.2 > dns.google: ICMP echo request, id 28, seq 165, length 64
16:29:19.986213 IP dns.google > 172.16.0.2: ICMP echo reply, id 28, seq 165, length 64
16:29:20.997108 IP 172.16.0.2 > dns.google: ICMP echo request, id 28, seq 166, length 64
16:29:21.010489 IP dns.google > 172.16.0.2: ICMP echo reply, id 28, seq 166, length 64

And this is after enabling IP forwarding:

rajabu@cloud-console:~$ kubectl exec -it upf-free5gc-upf-upf-594bc9f4c6-n4wqv -n free5gc -- sysctl net.ipv4.ip_forward
Defaulted container "upf" out of: upf, init-sysctl (init)
net.ipv4.ip_forward = 1

or

rajabu@cloud-console:~$ kubectl exec -it upf-free5gc-upf-upf-594bc9f4c6-n4wqv -n free5gc -- cat /proc/sys/net/ipv4/ip_forward
Defaulted container "upf" out of: upf, init-sysctl (init)
1
rajabu@cloud-console:~$

Everything else seems to be OK:

rajabu@cloud-console:~$ kubectl exec -it upf-free5gc-upf-upf-594bc9f4c6-n4wqv -n free5gc -- ip route show table all
Defaulted container "upf" out of: upf, init-sysctl (init)
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
172.16.0.0/16 dev upfgtp proto static
192.168.3.0/24 dev net1 proto kernel scope link src 192.168.3.2
192.168.4.0/24 dev net2 proto kernel scope link src 192.168.4.2
local 10.1.104.55 dev eth0 table local proto kernel scope host src 10.1.104.55
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 192.168.3.0 dev net1 table local proto kernel scope link src 192.168.3.2
local 192.168.3.2 dev net1 table local proto kernel scope host src 192.168.3.2
broadcast 192.168.3.255 dev net1 table local proto kernel scope link src 192.168.3.2
broadcast 192.168.4.0 dev net2 table local proto kernel scope link src 192.168.4.2
local 192.168.4.2 dev net2 table local proto kernel scope host src 192.168.4.2
broadcast 192.168.4.255 dev net2 table local proto kernel scope link src 192.168.4.2
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev net1 proto kernel metric 256 pref medium
fe80::/64 dev net2 proto kernel metric 256 pref medium
fe80::/64 dev upfgtp proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fe80::215:5dff:fe56:dc0e dev net2 table local proto kernel metric 0 pref medium
local fe80::215:5dff:fe56:dc0f dev net1 table local proto kernel metric 0 pref medium
local fe80::a163:bd4c:b957:658c dev upfgtp table local proto kernel metric 0 pref medium
local fe80::c0c1:60ff:fe24:7230 dev eth0 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev eth0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev net1 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev net2 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev upfgtp table local proto kernel metric 256 pref medium
rajabu@cloud-console:~$ kubectl exec -it upf-free5gc-upf-upf-594bc9f4c6-n4wqv -n free5gc -- iptables -t nat -L -n -v
Defaulted container "upf" out of: upf, init-sysctl (init)
Chain PREROUTING (policy ACCEPT 9 packets, 1310 bytes)
pkts bytes target prot opt in out source destination

Chain INPUT (policy ACCEPT 6 packets, 1058 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 19 packets, 1405 bytes)
pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 19 packets, 1405 bytes)
pkts bytes target prot opt in out source destination
3 252 MASQUERADE all -- * eth0 172.16.0.0/16 0.0.0.0/0
rajabu@cloud-console:~$ kubectl exec -it upf-free5gc-upf-upf-594bc9f4c6-n4wqv -n free5gc -- iptables -L -n -v
Defaulted container "upf" out of: upf, init-sysctl (init)
Chain INPUT (policy ACCEPT 1677 packets, 3058K bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
561 47124 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT (policy ACCEPT 1195 packets, 64734 bytes)
pkts bytes target prot opt in out source destination
rajabu@cloud-console:~$

This is a Microk8s Cluster:

I tried to enable IP forwarding in Calico, but this parameter does not seem to exist in the config map:
rajabu@cloud-console:~$ kubectl describe cm calico-config -n kube-system | grep ip_forwarding
rajabu@cloud-console:~$
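For comparison, operator-managed Calico exposes this knob as `containerIPForwarding` on the `Installation` resource rather than in the `calico-config` ConfigMap. A sketch of that setting (assumes the Tigera operator install, which may not match MicroK8s's bundled Calico):

```yaml
# Tigera operator install: enable IP forwarding inside containers
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    containerIPForwarding: Enabled
```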

I would appreciate any hints that could lead to a solution.

Thank you
Raj

Please update docker images used by free5gc (UPF segfault issue)

Hi, kindly asking you to update the images or provide Dockerfiles, especially for the UPF. We have a serious issue with the free5gc UPF.
Issue description: when more than 10 UE instances (and tunnel interfaces) are created/registered inside the UERANSIM UE pod, the UPF throws a segfault and the machine reboots immediately.

UPF logs (the error chain starts when device/connection nr. 11 and each subsequent one is added): [screenshot]
Kernel logs: [screenshot]

The changelog of free5gc/upf on GitHub contains a possible fix for this; it was merged recently, after the release tag (free5gc/upf@c2b30d1).
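Until updated images are published, one workaround might be to build the UPF from source past that commit. A rough sketch (assumes the repo's standard make-based build and a Go toolchain on the build host):

```shell
# Clone the UPF sources and check out the commit carrying the fix
# (or any later commit), then build with the repo's Makefile
git clone https://github.com/free5gc/upf.git
cd upf
git checkout c2b30d1
make
```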

Branching UPF can't return packets to the gNB in ULCL architecture

Hi,
I am currently trying to deploy the 5GC with a multi-UPF architecture, so I enabled ULCL mode in the configuration.
I made the edits for installing ULCL per the instructions, but it seems the branching UPF can't route the GTP-U packets back to the gNB through the N3 interface.

[screenshot]
The packet gets a reply at the branching UPF but does not come back to the gNB on my eth0 interface.

First I deployed each 5G NFs at the same node on K8S, and below is my command for deploying SMF:

helm install -n o5 smf free5gc/charts/free5gc-smf/ -f free5gc/charts/free5gc-smf/ulcl-enabled-values.yaml

In the UPF configuration I only set userPlaneArchitecture: ulcl
This is my pcap file free5gc-pcap.zip

Is there any more configuration needed to enable ULCL mode?
Best,

Dev deployment on Minikube - SMF and UPF fail to listen on UDP addresses

Context

I am trying to deploy free5gc on a local k8s cluster so that I can explore it. I decided to spin up a Minikube single-node cluster with the Docker driver on my local Manjaro Linux laptop. It has the following kernel: 5.15.60-1-MANJARO

Steps

Steps I performed:

  • Setup Minikube with Docker driver

  • Install gtp5g kernel module on your linux host

    • Ensure linux-headers are installed
    • Build and install the module:
     git clone -b v0.6.5 https://github.com/free5gc/gtp5g.git
     cd gtp5g
     make
     sudo make install
  • Install multus-cni on your linux host

    • git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni
    • cat ./deployments/multus-daemonset-thick-plugin.yml | kubectl apply -f -
  • Setup persistent volume in k8s

  • Setup physical network interfaces in k8s node (eth0, eth1)

    • ip a show eth0 on k8s node shows an interface
    • K8s node has no eth1 interface (should one be created?)
  • Install helm chart:

     helm repo add towards5gs 'https://raw.githubusercontent.com/Orange-OpenSource/towards5gs-helm/main/repo/'
     helm repo update
     helm install 5gc-helm towards5gs/free5gc -n 5gc

The K8s node has the following eth0 interface:

docker@minikube:~$ ip a show eth0
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c0:a8:31:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.49.2/24 brd 192.168.49.255 scope global eth0
       valid_lft forever preferred_lft forever

The K8s node has no eth1 interface.

Problem

SMF and UPF services fail:

NAME                                           READY   STATUS             RESTARTS       AGE
5gc-helm-free5gc-amf-amf-6949fcd8d-ktj4c       1/1     Running            0              5m25s
5gc-helm-free5gc-ausf-ausf-5cc7954594-qv4b4    1/1     Running            0              5m25s
5gc-helm-free5gc-nrf-nrf-6767465d47-2kl6d      1/1     Running            0              5m25s
5gc-helm-free5gc-nssf-nssf-7bcb45b6b9-v6xpf    1/1     Running            0              5m25s
5gc-helm-free5gc-pcf-pcf-56847599f4-xs6b6      1/1     Running            0              5m25s
5gc-helm-free5gc-smf-smf-79db7f6485-nb57b      0/1     CrashLoopBackOff   5 (110s ago)   5m25s
5gc-helm-free5gc-udm-udm-54fcb66c6b-86sl6      1/1     Running            0              5m25s
5gc-helm-free5gc-udr-udr-66b7d76f46-7w4wz      1/1     Running            0              5m25s
5gc-helm-free5gc-upf-upf-85c99f9dd9-snwp6      0/1     CrashLoopBackOff   5 (2m4s ago)   5m25s
5gc-helm-free5gc-webui-webui-bf5b9ff75-c5vfn   1/1     Running            0              5m25s
mongodb-0                                      1/1     Running            0              5m25s

➜  towards5gs-helm git:(main) ✗ kubectl -n 5gc logs pods/5gc-helm-free5gc-smf-smf-79db7f6485-nb57b
Defaulted container "smf" out of: smf, wait-nrf (init)
2022-09-08T14:14:43Z [INFO][SMF][CFG] SMF config version [1.0.2]
2022-09-08T14:14:43Z [INFO][SMF][CFG] UE-Routing config version [1.0.1]
2022-09-08T14:14:43Z [INFO][SMF][Init] SMF Log level is set to [info] level
2022-09-08T14:14:43Z [INFO][LIB][NAS] set log level : info
2022-09-08T14:14:43Z [INFO][LIB][NAS] set report call : false
2022-09-08T14:14:43Z [INFO][LIB][NGAP] set log level : info
2022-09-08T14:14:43Z [INFO][LIB][NGAP] set report call : false
2022-09-08T14:14:43Z [INFO][LIB][Aper] set log level : info
2022-09-08T14:14:43Z [INFO][LIB][Aper] set report call : false
2022-09-08T14:14:43Z [INFO][LIB][PFCP] set log level : info
2022-09-08T14:14:43Z [INFO][LIB][PFCP] set report call : false
2022-09-08T14:14:43Z [INFO][SMF][App] smf
2022-09-08T14:14:43Z [INFO][SMF][App] SMF version:
        free5GC version: v3.2.0
        build time:      2022-08-15T14:14:15Z
        commit hash:     de70bf6c
        commit time:     2022-06-28T04:52:40Z
        go version:      go1.14.4 linux/amd64
2022-09-08T14:14:43Z [INFO][SMF][CTX] smfconfig Info: Version[1.0.2] Description[SMF initial local configuration]
2022-09-08T14:14:43Z [INFO][SMF][CTX] Endpoints: [10.100.50.233]
2022-09-08T14:14:43Z [INFO][SMF][Init] Server started
2022-09-08T14:14:43Z [INFO][SMF][Init] SMF Registration to NRF {1384474e-46bf-44a8-9c29-bfdae51157f3 SMF REGISTERED 0 0xc00028e240 0xc00028e2a0 [] []   [smf-nsmf] [] <nil> [] [] <nil> 0 0 0 area1 <nil> <nil> <nil> <nil> 0xc000131000 <nil> <nil> <nil> <nil> <nil> map[] <nil> false 0xc00028e060 false false []}
2022-09-08T14:14:43Z [ERRO][SMF][PFCP] Failed to listen: listen udp 10.100.50.244:8805: bind: cannot assign requested address
2022-09-08T14:14:43Z [FATA][SMF][App] panic: runtime error: invalid memory address or nil pointer dereference
goroutine 1 [running]:
runtime/debug.Stack(0xc0004b5628, 0xc7b0a0, 0x15067b0)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
main.main.func1()
        /go/src/free5gc/NFs/smf/cmd/main.go:26 +0x57
panic(0xc7b0a0, 0x15067b0)
        /usr/local/go/src/runtime/panic.go:969 +0x166
github.com/free5gc/smf/internal/pfcp/udp.Run(0xdd4700)
        /go/src/free5gc/NFs/smf/internal/pfcp/udp/udp.go:27 +0x186
github.com/free5gc/smf/pkg/service.(*SMF).Start(0x1523eb0)
        /go/src/free5gc/NFs/smf/pkg/service/init.go:267 +0x2b3
main.action(0xc000510580, 0x0, 0xc00002c240)
        /go/src/free5gc/NFs/smf/cmd/main.go:65 +0x423
github.com/urfave/cli.HandleAction(0xc48000, 0xdd65d8, 0xc000510580, 0xc000510580, 0x0)
        /go/pkg/mod/github.com/urfave/[email protected]/app.go:524 +0x11a
github.com/urfave/cli.(*App).Run(0xc0005241c0, 0xc0000be000, 0x5, 0x5, 0x0, 0x0)
        /go/pkg/mod/github.com/urfave/[email protected]/app.go:286 +0x649
main.main()
        /go/src/free5gc/NFs/smf/cmd/main.go:37 +0x188

➜  towards5gs-helm git:(main) ✗ kubectl -n 5gc logs pods/5gc-helm-free5gc-upf-upf-85c99f9dd9-snwp6
Cannot find device "n6"
2022-09-08T14:19:29Z [INFO][UPF][Main] UPF version:
        free5GC version: v3.2.0
        build time:      2022-08-15T14:14:32Z
        commit hash:     4972fffb
        commit time:     2022-06-29T05:46:33Z
        go version:      go1.14.4 linux/amd64
2022-09-08T14:19:29Z [INFO][UPF][Cfg] Read config from [/free5gc/config//upfcfg.yaml]
2022-09-08T14:19:29Z [INFO][UPF][Cfg] ==================================================
2022-09-08T14:19:29Z [INFO][UPF][Cfg] (*factory.Config)(0xc0000d4000)({
        Version: (string) (len=5) "1.0.3",
        Description: (string) (len=31) "UPF initial local configuration",
        Pfcp: (*factory.Pfcp)(0xc00009e8d0)({
                Addr: (string) (len=13) "10.100.50.241",
                NodeID: (string) (len=13) "10.100.50.241",
                RetransTimeout: (time.Duration) 1s,
                MaxRetrans: (uint8) 3
        }),
        Gtpu: (*factory.Gtpu)(0xc00009ea80)({
                Forwarder: (string) (len=5) "gtp5g",
                IfList: ([]factory.IfInfo) (len=1 cap=1) {
                        (factory.IfInfo) {
                                Addr: (string) (len=13) "10.100.50.233",
                                Type: (string) (len=2) "N3",
                                Name: (string) "",
                                IfName: (string) ""
                        }
                }
        }),
        DnnList: ([]factory.DnnList) (len=1 cap=1) {
                (factory.DnnList) {
                        Dnn: (string) (len=8) "internet",
                        Cidr: (string) (len=11) "10.1.0.0/17",
                        NatIfName: (string) (len=2) "n6"
                }
        },
        Logger: (*factory.Logger)(0xc0000a4680)({
                Enable: (bool) true,
                Level: (string) (len=4) "info",
                ReportCaller: (bool) false
        })
})
2022-09-08T14:19:29Z [INFO][UPF][Cfg] ==================================================
2022-09-08T14:19:29Z [INFO][UPF][Main] Log level is set to [info] level
2022-09-08T14:19:29Z [INFO][UPF][Main] starting Gtpu Forwarder [gtp5g]
2022-09-08T14:19:29Z [INFO][UPF][Main] GTP Address: "10.100.50.233:2152"
2022-09-08T14:19:29Z [ERRO][UPF][Main] UPF Cli Run Error: open Gtp5g: open link: listen: listen udp 10.100.50.233:2152: bind: cannot assign requested address

Adjusting several helm variables didn't bring an improvement:

helm upgrade 5gc-helm towards5gs/free5gc -n 5gc --set global.n6network.masterIf=eth0 --set global.n6network.subnetIP=192.168.49.0 --set global.n6network.gatewayIP=192.168.49.1 --set free5gc-upf.upf.n6if.ipAddress=192.168.49.2

Questions

  1. Does this dev deployment generally make sense? E.g., can free5gc work on newer kernels?
  2. Did I miss some setup steps?
  3. What adjustments should I make here? Should I create an eth1 interface or change more Helm values, e.g. upf.n3if.ipAddress?

Thank you in advance.

Ping through "uesimtun0" not working.

I am trying to test the free5gc with UERANSIM.

I have followed steps from https://github.com/Orange-OpenSource/towards5gs-helm/blob/main/docs/demo/Setup-free5gc-and-test-with-UERANSIM.md

I will post detailed steps about my cluster and some extra details (note: I have a single-node cluster).

I have Ubuntu 22.04, with kubeadm and kubectl 1.25.3.

$ uname -r
5.15.0-53-generic

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

gtp5g module

git clone -b v0.3.1 https://github.com/free5gc/gtp5g.git
cd gtp5g
make
sudo make install

helm charts

helm repo add towards5gs https://raw.githubusercontent.com/Orange-OpenSource/towards5gs-helm/main/repo/
helm repo update

cluster

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=.kube/config

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl apply -f calico/custom-resources.yaml 

calico custom resource

$ cat calico/custom-resources.yaml 
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    containerIPForwarding: Enabled
---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer 
metadata: 
  name: default 
spec: {}

multus
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
UPF goes in the data-plane namespace and the rest in the control-plane namespace

kubectl create ns cp
kubectl create ns dp

now free5gc

helm upgrade --install test -n dp \
--set global.n4network.masterIf=eno0 \
--set global.n3network.masterIf=eno0 \
--set global.n6network.masterIf=eno0 \
--set global.n6network.subnetIP="192.168.0.0" \
--set global.n6network.gatewayIP="192.168.0.1" \
--set upf.n6if.ipAddress="192.168.0.3" \
towards5gs/free5gc-upf

enable ip_forward in UPF

kubectl  exec -ti -n dp test-free5gc-upf-upf-6485b99bf9-l6fzt -- bash
apt update
apt install nano tcpdump iptables
uncomment "#net.ipv4.ip_forward=1" in /etc/sysctl.conf

verify
root@test-free5gc-upf-upf-6485b99bf9-l6fzt:/free5gc/upf# cat /proc/sys/net/ipv4/ip_forward
1
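As an alternative to editing /etc/sysctl.conf inside the pod, the flag can be toggled at runtime. A sketch reusing the pod name from above (the container must be privileged for sysctl -w to succeed, and the setting is lost when the pod restarts):

```shell
# Enable IPv4 forwarding in the running UPF pod without editing files
kubectl exec -ti -n dp test-free5gc-upf-upf-6485b99bf9-l6fzt -- \
  sysctl -w net.ipv4.ip_forward=1
```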

helm upgrade --install test -n cp \
--set deployUPF=false \
--set deployWEBUI=false \
--set mongodb.persistence.enabled=false \
--set global.n2network.masterIf=eno0 \
--set global.n3network.masterIf=eno0 \
--set global.n4network.masterIf=eno0 \
--set global.n6network.masterIf=eno0 \
--set global.n9network.masterIf=eno0 \
towards5gs/free5gc

Edit the web UI service: change it from NodePort to ClusterIP
kubectl edit service -n cp webui-service

Port forward
kubectl port-forward -n cp services/webui-service 5000

add the subscriber in webui

install the UERANSIM simulator in a newly created sim namespace.

helm install sim -n sim --create-namespace \
--set global.n2network.masterIf=ens3 \
--set global.n3network.masterIf=ens3 \
towards5gs/ueransim

export POD_NAME=$(kubectl get pods --namespace sim -l "component=ue" -o jsonpath="{.items[0].metadata.name}")

$ kubectl --namespace sim exec -it $POD_NAME -- ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 26:76:42:85:b5:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.47.211/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2476:42ff:fe85:b55f/64 scope link 
       valid_lft forever preferred_lft forever
4: uesimtun0: <POINTOPOINT,PROMISC,NOTRAILERS,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 10.1.0.1/32 scope global uesimtun0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cf1:ec24:48e:12d4/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

But ping through uesimtun0 fails:

$ kubectl --namespace sim exec -it $POD_NAME -- ping -c 1 -I uesimtun0 www.google.com
PING www.google.com (172.217.169.36) from 10.1.0.1 uesimtun0: 56(84) bytes of data.

--- www.google.com ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

command terminated with exit code 1

Details about cluster

$ kubectl get pods -o wide -A
NAMESPACE          NAME                                        READY   STATUS    RESTARTS   AGE   IP            
calico-apiserver   calico-apiserver-8c7944fd9-tvh9k            1/1     Running   0          81m   192.168.47.196
calico-apiserver   calico-apiserver-8c7944fd9-vvmsg            1/1     Running   0          81m   192.168.47.197
calico-system      calico-kube-controllers-6b57db7fd6-7n4hj    1/1     Running   0          82m   192.168.47.195
calico-system      calico-node-mh5bx                           1/1     Running   0          82m   10.237.72.160 
calico-system      calico-typha-6bdddf499-t4zx2                1/1     Running   0          82m   10.237.72.160 
cp                 mongodb-0                                   1/1     Running   0          48m   192.168.47.207
cp                 test-free5gc-amf-amf-57f6cb85f9-rtdhc       1/1     Running   0          48m   192.168.47.206
cp                 test-free5gc-ausf-ausf-5c57578f4c-nlgbg     1/1     Running   0          48m   192.168.47.203
cp                 test-free5gc-nrf-nrf-79df6c49d-sfh87        1/1     Running   0          48m   192.168.47.201
cp                 test-free5gc-nssf-nssf-fd7b87cc4-nwbbs      1/1     Running   0          48m   192.168.47.209
cp                 test-free5gc-pcf-pcf-9b7fcc57c-x8n6f        1/1     Running   0          48m   192.168.47.208
cp                 test-free5gc-smf-smf-67cdd4d846-xnrbg       1/1     Running   0          48m   192.168.47.202
cp                 test-free5gc-udm-udm-874f96955-56m4n        1/1     Running   0          48m   192.168.47.205
cp                 test-free5gc-udr-udr-6946f7db57-v9882       1/1     Running   0          48m   192.168.47.204
cp                 test-free5gc-webui-webui-6d788974b4-rmqs9   1/1     Running   0          48m   192.168.47.210
dp                 test-free5gc-upf-upf-6485b99bf9-l6fzt       1/1     Running   0          50m   192.168.47.200
sim                sim-ueransim-gnb-854d9496b6-5q5fd           1/1     Running   0          42m   192.168.47.212
sim                sim-ueransim-ue-86d5fbfd99-k6sxh            1/1     Running   0          42m   192.168.47.211
tigera-operator    tigera-operator-6bb5985474-zgqfz            1/1     Running   0          82m   10.237.72.160 

Some extra information from UPF

root@test-free5gc-upf-upf-6485b99bf9-l6fzt:/free5gc/upf# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 1a:54:67:60:7c:8e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.47.200/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::1854:67ff:fe60:7c8e/64 scope link 
       valid_lft forever preferred_lft forever
4: n3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether fe:fb:d7:bd:0d:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.100.50.233/29 brd 10.100.50.239 scope global n3
       valid_lft forever preferred_lft forever
    inet6 fe80::fcfb:d7ff:febd:de8/64 scope link 
       valid_lft forever preferred_lft forever
5: n6@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether f6:23:e2:d4:65:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.3/24 brd 192.168.0.255 scope global n6
       valid_lft forever preferred_lft forever
    inet6 fe80::f423:e2ff:fed4:650c/64 scope link 
       valid_lft forever preferred_lft forever
6: n4@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:d0:92:d5:4d:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.100.50.241/29 brd 10.100.50.247 scope global n4
       valid_lft forever preferred_lft forever
    inet6 fe80::d0:92ff:fed5:4d13/64 scope link 
       valid_lft forever preferred_lft forever
7: upfgtp: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1464 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet6 fe80::7fca:b50d:c9df:de15/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

root@test-free5gc-upf-upf-6485b99bf9-l6fzt:/free5gc/upf# ip route
default via 169.254.1.1 dev eth0 
10.1.0.0/17 dev upfgtp proto static 
10.100.50.232/29 dev n3 proto kernel scope link src 10.100.50.233 
10.100.50.240/29 dev n4 proto kernel scope link src 10.100.50.241 
169.254.1.1 dev eth0 scope link 
192.168.0.0/24 dev n6 proto kernel scope link src 192.168.0.3 
root@test-free5gc-upf-upf-6485b99bf9-l6fzt:/free5gc/upf# tcpdump -nei any icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
11:54:11.475623  In ethertype IPv4 (0x0800), length 100: 10.1.0.1 > 172.217.169.36: ICMP echo request, id 46, seq 1, length 64
11:54:11.475707 Out f6:23:e2:d4:65:0c ethertype IPv4 (0x0800), length 100: 192.168.0.3 > 172.217.169.36: ICMP echo request, id 46, seq 1, length 64
root@test-free5gc-upf-upf-6485b99bf9-l6fzt:/free5gc/upf# iptables -nvL POSTROUTING -t nat
Chain POSTROUTING (policy ACCEPT 89 packets, 17126 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   168 MASQUERADE  all  --  *      n6      10.1.0.0/16          0.0.0.0/0 

some details from the host system

$ sudo iptables -nvL POSTROUTING -t nat
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
25859 1434K cali-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:O3lYWMrLQYEMJtB5 */
17725 1071K KUBE-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a4:bf:01:6b:f8:f0 brd ff:ff:ff:ff:ff:ff
    altname enp5s0
    inet 10.237.72.160/24 metric 100 brd 10.237.72.255 scope global dynamic eno0
       valid_lft 40030sec preferred_lft 40030sec
    inet6 fe80::a6bf:1ff:fe6b:f8f0/64 scope link 
       valid_lft forever preferred_lft forever
5: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 66:ae:fe:7c:7a:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.47.192/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever
    inet6 fe80::64ae:feff:fe7c:7a02/64 scope link 
       valid_lft forever preferred_lft forever
6: calib60157fc38c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-8ef93e66-1599-cb33-cb25-dc17859f684f
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
7: caliac337299544@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-f49ce13d-9504-d124-65c4-1972bd887d88
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
8: cali33187d472f0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-a6eaffa8-fbfc-9067-32a0-25dac7dd01a4
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
9: cali29d08354e8f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-b6107500-32b6-f145-f2e2-e9df01fa7298
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
10: cali0d3d1dd5b15@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-ef39e33a-590b-8ae9-1133-3559a5618eb3
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
13: cali76d4c0a4623@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-f2d86246-ad94-0e8d-da79-ce1d5cd66b6b
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
14: cali9eab4134b24@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-f6fefd35-361a-c6f2-605b-37878800dbc0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
15: cali20dc5627902@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-311c0b22-890f-290d-7902-8958814f9f48
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
16: cali65bcbe4d3b4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-c1a2defc-970f-07e5-d0bb-73294cc06ec5
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
17: cali4b25d95aa41@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-f1b8f38d-5f22-b793-08a9-ddd3088a76dd
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
18: cali84bff32026a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-2e283c20-537c-d55a-247b-fe45f45175aa
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
19: cali78bbe3202b8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-0e54611c-ee5d-3177-2646-cafc9fe7f994
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
20: cali9a1938603a4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-b0f43190-61df-3dc8-3e06-b12e5d516fc9
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
21: calie9deb3528f1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-821a95d1-c019-5995-f07a-f70adb4afb04
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
22: cali21abca10974@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-0956cd05-aaa5-7076-b321-80a1b6a99a28
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
23: cali6dc13f1c0d9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-52d629a1-c299-568a-4c7e-58b77d7c773a
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
24: cali13cf85f9d1e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-f7b13497-7640-6ef8-4224-47e63bb428b8
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
25: cali3887821ffef@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-d275e4c4-24ed-859c-e7bf-d5b8ead52a7b
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.237.72.1     0.0.0.0         UG    100    0        0 eno0
10.184.9.1      10.237.72.1     255.255.255.255 UGH   100    0        0 eno0
10.237.72.0     0.0.0.0         255.255.255.0   U     100    0        0 eno0
10.237.72.1     0.0.0.0         255.255.255.255 UH    100    0        0 eno0
10.248.2.1      10.237.72.1     255.255.255.255 UGH   100    0        0 eno0
163.33.253.68   10.237.72.1     255.255.255.255 UGH   100    0        0 eno0
192.168.47.192  0.0.0.0         255.255.255.192 U     0      0        0 *
192.168.47.193  0.0.0.0         255.255.255.255 UH    0      0        0 calib60157fc38c
192.168.47.194  0.0.0.0         255.255.255.255 UH    0      0        0 caliac337299544
192.168.47.195  0.0.0.0         255.255.255.255 UH    0      0        0 cali33187d472f0
192.168.47.196  0.0.0.0         255.255.255.255 UH    0      0        0 cali29d08354e8f
192.168.47.197  0.0.0.0         255.255.255.255 UH    0      0        0 cali0d3d1dd5b15
192.168.47.200  0.0.0.0         255.255.255.255 UH    0      0        0 cali76d4c0a4623
192.168.47.201  0.0.0.0         255.255.255.255 UH    0      0        0 cali9eab4134b24
192.168.47.202  0.0.0.0         255.255.255.255 UH    0      0        0 cali20dc5627902
192.168.47.203  0.0.0.0         255.255.255.255 UH    0      0        0 cali65bcbe4d3b4
192.168.47.204  0.0.0.0         255.255.255.255 UH    0      0        0 cali4b25d95aa41
192.168.47.205  0.0.0.0         255.255.255.255 UH    0      0        0 cali84bff32026a
192.168.47.206  0.0.0.0         255.255.255.255 UH    0      0        0 cali78bbe3202b8
192.168.47.207  0.0.0.0         255.255.255.255 UH    0      0        0 cali9a1938603a4
192.168.47.208  0.0.0.0         255.255.255.255 UH    0      0        0 calie9deb3528f1
192.168.47.209  0.0.0.0         255.255.255.255 UH    0      0        0 cali21abca10974
192.168.47.210  0.0.0.0         255.255.255.255 UH    0      0        0 cali6dc13f1c0d9
192.168.47.211  0.0.0.0         255.255.255.255 UH    0      0        0 cali13cf85f9d1e
192.168.47.212  0.0.0.0         255.255.255.255 UH    0      0        0 cali3887821ffef
$ sudo tcpdump -nei any icmp
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
14:00:20.973772 eno0  Out ifindex 2 f6:23:e2:d4:65:0c ethertype IPv4 (0x0800), length 104: 192.168.0.3 > 172.217.169.36: ICMP echo request, id 52, seq 1, length 64
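Given the UE subnet 10.1.0.0/16 seen in the ping above and the host uplink eno0, one thing worth checking is whether the host itself NATs the UE subnet on the way out. A sketch (assuming that subnet and that uplink; adjust to your setup):

```shell
# Hypothetical fix sketch: NAT UE-subnet traffic leaving the host uplink.
# Assumes the UE pool is 10.1.0.0/16 and the uplink is eno0, as in the
# outputs above.
sudo iptables -t nat -A POSTROUTING -s 10.1.0.0/16 -o eno0 -j MASQUERADE

# Verify the rule was added:
sudo iptables -nvL POSTROUTING -t nat
```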

UPF locked in ContainerCreating Status

Hello everyone,

I tried to deploy the free5gc network with the help of your documentation and this tutorial:

https://medium.com/rahasak/deploying-5g-core-network-with-free5gc-kubernets-and-helm-charts-29741cea3922

However, during the deployment phase the UPF pod stays in the ContainerCreating status, as below:

image

It seems to be due to this error:

image

I read other closed issues describing this kind of error, so I ran ip a on other free5gc components such as the SMF, AMF and UDR (results below).

image

Does this error come from the fact that I only have eth0 as a network interface, so that creating a second interface eth1 is needed? Or is it because the "Nx" interfaces aren't renamed to eth1? (Or maybe something else?)
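For reference, here is a minimal sketch of the kind of Multus macvlan NetworkAttachmentDefinition involved (names and addresses are illustrative, not taken from this deployment): the "master" field must name an interface that actually exists on the node, which is why a missing eth1 makes the pod sandbox fail.

```yaml
# Illustrative sketch (name and master are hypothetical). "master" must be
# an interface that really exists on the node, e.g. eth0 here.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: n2network
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "static" }
    }
```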

Thank you and have a nice day !

UERANSIM deployed, can't ping from root on the pod

Hello,
I am at a loss to perform any testing with my cluster. I have free5gc and ueransim installed and running:
image

I can access the UE pod and run the "ip address" command:
image

However, I can't run any meaningful commands, since they appear not to be permitted:
image

I don't know whether tunl0 is a correctly created TUN interface. Should I expect to see something else there?
I tried to "exec" in privileged mode (which is allowed on the cluster) or something similar, but it didn't solve the problem.
What can it be?
I attach the output of "describe" for the UE pod. There are some messages about bad connectivity, which were due to the UE pod being scheduled on a node that was unreachable. Currently the UE is deployed on the control-plane node. I am not sure whether there is something wrong with the IP it displays, since it's the external interface of the control node.
Thank you in advance for any kind of advice.
UE-pod describe.txt

Kubectl logs screenshots from pods

Hi,
I'm currently writing my master's thesis at my university, and I'm using towards5gs-helm to test with Istio. I can get all the pods running, but unfortunately the TUN interface is not created for me because of some issue between the AMF and the gNB. I tried to troubleshoot it but haven't found a solution yet, and I don't want to run out of time on unnecessary troubleshooting.
I would like to put some screenshots of the fully working pods into my dissertation. The output of the "kubectl logs podname" command shows everything I want.
Can you please provide me with screenshots from all the free5gc/ueransim pods when using the "kubectl logs" command?
Many thanks!
Best Regards,
Tamas

[Question] upf container crashes using helm

Hello,
First - thank you for this helm support.
I've been trying to set up the environment using your documentation, by running:
helm -n free5gc install free5gc-v1 towards5gs/free5gc
and it seems only the UPF isn't running properly:
image

I've been wondering whether it is because of the gtp5g dependency that must be installed on all Kubernetes worker nodes, meaning one needs to clone it and run 'make' and 'make install'?
Has anyone encountered this, or does anyone know how I can add the gtp5g dependency on all worker nodes, if that is the reason?
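The build steps mentioned above can be sketched roughly as follows, to be run on each worker node (a sketch assuming the free5gc/gtp5g repository and that kernel headers for the running kernel are installed):

```shell
# Sketch: build and install the gtp5g kernel module on a worker node.
# Assumes kernel headers matching the running kernel are present.
git clone https://github.com/free5gc/gtp5g.git
cd gtp5g
make
sudo make install
lsmod | grep gtp5g   # verify the module is loaded
```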

Thanks!

Gcloud deployment

Hi,
I'm trying to deploy the cluster on GCE, but I'm having a problem with the connection between the UPF's n6 interface and the Internet.
When I run:
root@3:/ueransim/build# ping -I uesimtun0 1.1.1.1
That's the traffic I see on the host's eth1 interface:

15:42:44.885833 ARP, Request who-has 192.168.11.1 tell 192.168.11.101, length 28
15:42:45.906374 ARP, Request who-has 192.168.11.1 tell 192.168.11.101, length 28
15:42:46.929860 ARP, Request who-has 192.168.11.1 tell 192.168.11.101, length 28

This traffic goes directly to the gateway, and there is no reply.

I suspect that Google's gateway verifies the source MAC address, which has to match one of the VMs in the project.
The UPF N6 MACVLAN interface has a new MAC address that Google doesn't recognize, so the traffic gets dropped.
I've tried to manually add an ARP entry on the UPF pod, but I could still only see ping requests on that same interface, and no reply.

Here's my configuration.

Worker node

It hosts all the pods

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
      inet 192.168.10.3  netmask 255.255.255.255  broadcast 0.0.0.0
      inet6 fe80::4001:c0ff:fea8:a03  prefixlen 64  scopeid 0x20<link>
      ether 42:01:c0:a8:0a:03  txqueuelen 1000  (Ethernet)
      RX packets 682476  bytes 1501059680 (1.5 GB)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 651967  bytes 72658725 (72.6 MB)
      TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
      inet 192.168.11.100  netmask 255.255.255.255  broadcast 0.0.0.0
      inet6 fe80::4001:c0ff:fea8:b64  prefixlen 64  scopeid 0x20<link>
      ether 42:01:c0:a8:0b:64  txqueuelen 1000  (Ethernet)
      RX packets 62885  bytes 5374035 (5.3 MB)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 312444  bytes 30253882 (30.2 MB)
      TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
UPF n6
n6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default 
  link/ether 0e:73:67:10:eb:85 brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 192.168.11.101/24 brd 192.168.11.255 scope global n6
     valid_lft forever preferred_lft forever
  inet6 fe80::c73:67ff:fe10:eb85/64 scope link 
     valid_lft forever preferred_lft forever

I couldn't find this mentioned in Google's documentation, but I found a thread with a similar problem.

So has anyone succeeded with deployment on GCE? Or do you have any ideas for a workaround?

I was thinking about deploying the UPF on a separate VM, as in #25 - could that do the job?

Thanks in advance

free5gc-nrf pod going into crash loop state

Hi, I'm trying to update the helm charts to pick up free5gc version 3.2.1.

Current setup: I have the free5gc NFs running, successfully deployed via the towards5gs-helm charts.

I have pulled the free5GC v3.2.1 source code (haven't modified any of it) and compiled the NFs to generate binaries. I'm trying to patch the existing Docker images of the NFs with the new, updated binaries.

For example, to update the NRF binary, I took the newly generated nrf binary (v3.2.1), built a Docker image for it, updated the existing running free5gc-nrf deployment to point to this new image, and changed nrfcfg to config since the new binary doesn't take nrfcfg.

Below is the Dockerfile used for building the nrf:0.1.1 image, which is used in the free5gc-nrf deployment:

FROM towards5gs/free5gc-nrf:v3.0.6
COPY ./free5gc/bin/nrf /free5gc/nrf/nrf
RUN chmod +x /free5gc/nrf/nrf

free5gc-nrf-deployment
image
lines 64 and 56

When it is updated, the free5gc-nrf pod goes into a crash-loop state. How can I resolve this?

Any help would be of great use

Attaching the logs and describe pod for reference
image
The version is actually updated to 1.0.1, but it is still showing an error.

image

[New features or request]

Hello,

I'd love to see two features in towards5gs-helm:

  • 5G tests (script exists here)
  • Network slicing

For the second one, I am not sure how it would be implemented from a technical standpoint. Of course, I am willing to help with these features.

Please tell me your opinion on this!

TUN interface is not created

Hi,

I'm currently working on a Helm installation of free5gc, and I'm using towards5gs-helm. I can get all the pods running, but unfortunately the TUN interface is not created for me. I tried to troubleshoot it but haven't found a solution yet. I have attached screenshots of the UE, gNB, AMF, SMF and UPF logs for your kind reference.

Best Regards,
Sharada.

image
image
image
image
image
image
image
image
image
image
image

Data Network unreachable on the n6 interface

Hello
I am opening a second issue since I now have a different problem.

I am trying to deploy 5G core and UERANSIM on a Kubernetes cluster. I am not using microk8s or anything, I just deployed the cluster using kubeadm. I use Kubernetes v1.22, kernel version 5.4, gtp5g installed on all Nodes. The pod CIDR is 192.168.0.0/16 (default I believe). I have one master and 6 workers.

Each Node has two network interfaces: eth1 is on 192.168.56.1/24 and is the network used for inter-node communication (a host-only adapter in VirtualBox), and eth0 is a NAT interface with Internet access. I have enabled promiscuous mode in VirtualBox, and I work with the AMD PCNet FAST III (Am79C973) NIC (I tried the Intel Pro 1000 and had the same issue).

Calico is working fine. IP forwarding in Pod is enabled and I have 1 when I run cat /proc/sys/net/ipv4/ip_forward.

The issue is that I cannot deploy the UPF: it is stuck at ContainerCreating because of the following Multus error:

Warning  FailedCreatePodSandBox  1s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc =
 failed to set up sandbox container "600e58049416ff53b4914a0ad4de474e33ad0992a8c35e9aa46217fbabd82396" network for
 pod "free5gc-free5gc-upf-upf-9dd5954bb-8wspc": networkPlugin cni failed to set up pod "free5gc-free5gc-upf-upf-9dd5954
bb-8wspc_free5gc" network: [free5gc/free5gc-free5gc-upf-upf-9dd5954bb-8wspc:n6network-free5gc-free5gc-upf]: error adding
 container to network "n6network-free5gc-free5gc-upf": failed to add route '{0.0.0.0 00000000} via 10.0.2.2 dev n6': network is 
unreachable

For the configuration, I modified free5gc's values.yaml to set the master interface of n2, n3, n4 and n9 to eth1, and n6 to eth0.
N6 is configured as follows (I did not set an excluded IP):

  n6network:
    name: n6network
    masterIf: eth0
    subnetIP: 10.0.2.0
    cidr: 24
    gatewayIP: 10.0.2.2

since my NAT network eth0 is the following:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 08:00:27:73:60:cf brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 82655sec preferred_lft 82655sec
    inet6 fe80::a00:27ff:fe73:60cf/64 scope link
       valid_lft forever preferred_lft forever

and ip route:

default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.10
...

So apparently the network is unreachable. However, when I change the UPF configuration to anything else, it is created normally, and I can check in the pod that 10.0.2.2 is the gateway. I am starting to believe it is a problem with the promiscuous mode. I find myself somewhat in the same situation as this issue, however there is no such thing as "Forged transmits" in VirtualBox.

Is there a way to use something other than macvlan with Multus, or is it really the best option?
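On alternatives to macvlan: as a sketch (assuming you template your own NetworkAttachmentDefinition rather than the chart-generated one; names and fields are illustrative), the same attachment can be declared with the ipvlan CNI plugin, which reuses the parent interface's MAC address and so may sidestep hypervisor MAC filtering:

```yaml
# Illustrative ipvlan variant (name and master are hypothetical).
# ipvlan keeps the parent interface's MAC, unlike macvlan, so the
# hypervisor only ever sees a MAC it already knows.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: n6network
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "eth0",
      "mode": "l2",
      "ipam": { "type": "static" }
    }
```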

Best,

Free5gc control plane containers are not running.

Description:

  1. Created a k8s cluster (cluster is up and running).
  2. Kubernetes worker node with kernel 5.0.0-23-generic and containing gtp5g kernel module.
  3. Added an additional "eth1" interface on the worker node.
  4. Installed Multus & helm.
  5. Created a persistent volume.
  6. Executed the command "helm -n free5gc-core install --generate-name ./free5gc/".
  7. After that, all pods except upf & mongo-db are stuck in the "Init" state.

image

  1. On worker node-

    docker images

    o/p
    image

Attached the log files-

docker_containers_screenshot
docker_containers_screenshot1
docker_images_screenshot
pods_error_screenshot

gNB unable to reach AMF

Hello
I am trying to deploy 5G core and UERANSIM on a Kubernetes cluster. I am not using microk8s or anything, I just deployed the cluster using kubeadm. I use Kubernetes v1.22, kernel version 5.4, gtp5g installed on all Nodes. The pod CIDR is 192.168.0.0/16 (default I believe). I have one master and 6 workers.

Each Node has two network interfaces: eth1 is on 192.168.50.1/24 and is the network used for inter-node communication (a host-only adapter in VirtualBox), and eth0 is a NAT interface with Internet access.

I use Calico as a CNI, as well as Multus. For Calico, I use kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml without any modification, and it seems to work.
For Multus, I used kubectl create -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick-plugin.yml without any modification. It seems like Multus correctly detected Calico.

Everything in the cluster seems to be running fine.

For the configuration, I modified free5gc's values.yaml to set the master interface of n2, n3, n4 and n9 to eth1, and n6 to eth0. I did the same in ueransim's values.yaml. I did not change any IP addresses, subnets or ports, as I think they are already consistent with the rest of the deployment (but I might be totally wrong here).

When I deploy free5gc (with Helm), the Pods start normally and without errors. The same goes when I deploy ueransim. However, kubectl logs -n free5gc ueransim-gnb-... outputs:

UERANSIM v3.1.3
[2021-11-24 18:01:46.591] [sctp] [info] Trying to establish SCTP connection... (10.100.50.249:38412)
[2021-11-24 18:06:50.724] [sctp] [error] Connecting to 10.100.50.249:38412 failed. SCTP could not connect: Connection timed out

I tried to get into the Pod and reach the AMF but it says host unreachable.

I have noticed something weird: kubectl -n free5gc describe network-attachment-definition n2network output:

<...>
Spec:
  Config:  { "cniVersion": "0.3.1", "plugins": [ { "type": "macvlan", "capabilities": { "ips": true }, "master": "eth1", "mode": "bridge", "ipam": { "type": "static", "routes": [ { "dst": "0.0.0.0/0", "gw": "10.100.50.254" } ] } }, { "capabilities": { "mac": true }, "type": "tuning" } ] }
Events:    <none>

routes.dst is equal to 0.0.0.0/0. Is that normal? I had a look at that issue, and it is quite different.

I also tried Flannel + Multus but without success. I am new to Multus, I am out of ideas to make that work.

EDIT: Here is the content of /etc/cni/net.d/00-multus.conf on the Node the gNB is deployed on. The same goes for the node the AMF is deployed on. I don't see anything wrong with it. The interface is correctly created.

{
   "capabilities":{
      "bandwidth":true,
      "portMappings":true
   },
   "cniVersion":"0.3.1",
   "delegates":[
      {
         "cniVersion":"0.3.1",
         "name":"k8s-pod-network",
         "plugins":[
            {
               "datastore_type":"kubernetes",
               "ipam":{
                  "type":"calico-ipam"
               },
               "kubernetes":{
                  "kubeconfig":"/etc/cni/net.d/calico-kubeconfig"
               },
               "log_file_path":"/var/log/calico/cni/cni.log",
               "log_level":"info",
               "mtu":0,
               "nodename":"k8s-node-4",
               "policy":{
                  "type":"k8s"
               },
               "type":"calico"
            },
            {
               "capabilities":{
                  "portMappings":true
               },
               "snat":true,
               "type":"portmap"
            },
            {
               "capabilities":{
                  "bandwidth":true
               },
               "type":"bandwidth"
            }
         ]
      }
   ],
   "logLevel":"verbose",
   "logToStderr":true,
   "kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig",
   "name":"multus-cni-network",
   "type":"multus"
}

Best,
Hugo

multiple gNB setup

Hi,

Any hints for free5gc/ueransim setup/config with >1 gNB ? to e.g. simulate handovers, cell reselection etc?

Thanks
Pawel

N6 network configuration problem

Hi, we're trying to deploy towards5gs-helm on Kubernetes. We got all the pods running and the TUN interface created, but the UPF still can't reach the DN via its N6 interface.
Since our network interface has a different name, enp1s0, we followed the Network Configuration documentation to modify the corresponding parameters in the YAML files. We set masterIf to enp1s0, and the n6 network like this:

  n6network:
    name: n6network
    masterIf: enp1s0
    subnetIP: 192.168.31.0
    cidr: 24
    gatewayIP: 0.0.0.0
    excludeIP: 0.0.0.0

After we changed free5gc-upf.upf.n6if.ipAddress, the pods failed, so now it stays unchanged.

In short, we're confused about the N6 network configuration; can anyone help us?

P.S. Our node IP is 192.168.31.237/24.

Thanks.

Branching UPF can't return packet to gNB in ulcl architecture.

Hi,
I am deploying the ULCL configuration in my Kubernetes cluster with the default settings.
I use the pod's main interface "eth0" as the N6 interface; the ping packet can route through the Internet to 8.8.8.8. The reply is encapsulated at UPF1 and routed to the Branching UPF's N9 interface, but the Branching UPF can't route the GTP-U packet back to the gNB through the N3 interface.
Is there something wrong with the routing setting? BTW, the non-ULCL version works fine for me.
This is my pcap file pcap.zip.

Best

N6 internet connectivity

First, thank you for the published deployment & configuration details; following the instructions, I've been able to deploy the UPF successfully on minikube on top of an AWS EC2 Ubuntu instance.

image

image

Now I'm facing the following issue:

First, I tried to use ping to test N6 connectivity towards the Internet, but apparently iputils-ping is not part of the UPF image.

image

So I tried apt-get update, and that's when I realized that internet connectivity is NOK:

image

You can see below that all interfaces are UP:

image

IP Routes:

image

I also removed the secondary default route manually, to keep only n6 as the default route, but Internet reachability still failed:

image

Note that the UPF is able to reach its designated n6 gateway 192.168.49.1 (which is the IP of the minikube bridge created on the EC2 instance).

I verified this via ping from the bridge interface on the EC2 instance:

image

image

Any ideas what could be going wrong?

Thanks in advance for your help & support.

Br,
Amr

[K3S] SMF/UPF Containers crash

Hi,
I deployed your chart through helm, following your documentation (installed multus and gtp5 kernel module).
Still, I have two containers that are crashing

  1. SMF containers logs:
2021-06-23T06:52:25ZSMF version: 
	free5GC version: v3.0.5 
	build time:      2021-02-08T20:11:48Z 
	commit hash:     04c01ec5 
	commit time:     2021-01-30T17:01:30Z 
	go version:      go1.14.4 linux/amd64 
2021-06-23T06:52:25ZSMF Log level is set to [info] level 
2021-06-23T06:52:25Zset log level : info 
2021-06-23T06:52:25Zset report call : false 
2021-06-23T06:52:25Zset log level : info 
2021-06-23T06:52:25Zset report call : false 
2021-06-23T06:52:25Zset log level : info 
2021-06-23T06:52:25Zset report call : false 
2021-06-23T06:52:25Zset log level : info 
2021-06-23T06:52:25Zset report call : false 
2021-06-23T06:52:25Zset log level : info 
2021-06-23T06:52:25Zset report call : false 
2021-06-23T06:52:25Zset log level : info 
2021-06-23T06:52:25Zset report call : false 
2021-06-23T06:52:25ZSMF config version [1.0.0] 
2021-06-23T06:52:25ZUE-Routing config version [1.0.0] 
2021-06-23T06:52:25Zsmfconfig Info: Version[1.0.0] Description[SMF initial local configuration] 
2021-06-23T06:52:25ZEndpoints: [10.100.50.233] 
2021-06-23T06:52:25ZServer started 
2021-06-23T06:52:25ZSMF Registration to NRF {ee18fac9-0683-4335-898c-a61271ce3d42 SMF REGISTERED 0 0xc0003e4ca0 0xc0003e4ce0 [] []   [smf-nsmf] [] <nil> [] [] <nil> 0 0 0  <nil> <nil> <nil> <nil> 0xc00030dcc0 <nil> <nil> <nil> <nil> <nil> map[] <nil> false 0xc0003e4b20 false false []} 
2021-06-23T06:52:25ZFailed to listen: listen udp 10.100.50.244:8805: bind: cannot assign requested address 
panic: runtime error: invalid memory address or nil pointer dereference 
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xb3ab54] 
 
goroutine 1 [running]: 
github.com/free5gc/smf/pfcp/udp.Run(0xd844d8) 
	/go/src/free5gc/NFs/smf/pfcp/udp/udp.go:27 +0x194 
github.com/free5gc/smf/service.(*SMF).Start(0x14c4940) 
	/go/src/free5gc/NFs/smf/service/init.go:281 +0x250 
main.action(0xc0003226e0, 0x0, 0xc000300ff0) 
	/go/src/free5gc/NFs/smf/smf.go:52 +0x107 
github.com/urfave/cli.HandleAction(0xc05ee0, 0xd86098, 0xc0003226e0, 0xc0003226e0, 0x0) 
	/go/pkg/mod/github.com/urfave/[email protected]/app.go:526 +0x11a 
github.com/urfave/cli.(*App).Run(0xc0003381c0, 0xc00001e180, 0x3, 0x3, 0x0, 0x0) 
	/go/pkg/mod/github.com/urfave/[email protected]/app.go:288 +0x649 
main.main() 
	/go/src/free5gc/NFs/smf/smf.go:41 +0x21d 
  1. UPF container logs:
Cannot find device "n6" 
2021-06-23T06:56:45ZConfig: /free5gc/config/..2021_06_23_06_53_31.396199257/upfcfg.yaml 
2021-06-23T06:56:45ZUPF config version [1.0.0] 
2021-06-23T06:56:45ZSet log level: info 
2021-06-23T06:56:45ZSocket bind fail : Cannot assign requested address 
2021-06-23T06:56:45Zgtp5g device named upfgtp created fail 
2021-06-23T06:56:45ZPool is full, it may not belong to this pool 
2021-06-23T06:56:45ZGtp5gDeviceAdd failed 
2021-06-23T06:56:45ZSocket bind fail : Cannot assign requested address 
2021-06-23T06:56:45ZSocket -1 register event in epoll error : Bad file descriptor 
2021-06-23T06:56:45ZPFCP Sock Register to epoll error 
2021-06-23T06:56:45ZCreate PFCP Server for IPv4 error 
2021-06-23T06:56:45ZUPF - PFCP error when UPF initializes 
2021-06-23T06:56:45ZUPF failed to initialize 
2021-06-23T06:56:45ZPool is full, it may not belong to this pool 
2021-06-23T06:56:45ZRemoving DNN routes 
2021-06-23T06:56:45Zif_nametoindex 
2021-06-23T06:56:45ZDelete routing rule to device upfgtp failed: 10.1.0.0/17 
2021-06-23T06:56:45ZUPDK epoll deregister error 
/free5gc/config/wrapper.sh: line 10:    11 Segmentation fault      (core dumped) /free5gc/free5gc-upfd/free5gc-upfd -f /free5gc/config/upfcfg.yaml 

Apparently the UPF cannot find the N6 link.

Do you know how to solve this?

GTP-U doesn't seem to work on packets returned

Hi,
UERANSIM access is successful:

5: uesimtun0: <POINTOPOINT,PROMISC,NOTRAILERS,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 10.1.0.1/32 scope global uesimtun0
       valid_lft forever preferred_lft forever

But it can't access the Internet:

root@ueransim-ue-7fcb46b87c-6x2xx:/ueransim/build# ping -I uesimtun0 baidu.com
PING baidu.com (110.242.68.66) from 10.1.0.1 uesimtun0: 56(84) bytes of data.

8 packets transmitted, 0 received, 100% packet loss, time 7162ms

And here is the tcpdump result in UPF

07:10:03.667048 IP 10.100.50.236.2152 > 10.100.50.233.2152: UDP, length 100
07:10:03.667048 IP 10.1.0.1 > 110.242.68.66: ICMP echo request, id 48, seq 8, length 64
07:10:03.667082 IP 192.168.179.12 > 110.242.68.66: ICMP echo request, id 48, seq 8, length 64
07:10:03.702681 IP 110.242.68.66 > 192.168.179.12: ICMP echo reply, id 48, seq 8, length 64
07:10:03.702724 IP 110.242.68.66 > 10.1.0.1: ICMP echo reply, id 48, seq 8, length 64
07:10:04.691158 IP 10.100.50.236.2152 > 10.100.50.233.2152: UDP, length 100
07:10:04.691158 IP 10.1.0.1 > 110.242.68.66: ICMP echo request, id 48, seq 9, length 64
07:10:04.691189 IP 192.168.179.12 > 110.242.68.66: ICMP echo request, id 48, seq 9, length 64
07:10:04.726920 IP 110.242.68.66 > 192.168.179.12: ICMP echo reply, id 48, seq 9, length 64
07:10:04.726969 IP 110.242.68.66 > 10.1.0.1: ICMP echo reply, id 48, seq 9, length 64

In which:
10.100.50.236 is the IP address of a network interface on the gNB,
10.100.50.233 is the IP address of a network interface on the UPF, which is used for GTP-U,
10.1.0.1 is the IP address of the UE,
192.168.179.12 is the IP address of upf.n6if.ipAddress,
110.242.68.66 is the IP address of the website in the data network.
We can see that the packet is transmitted from the UPF to the data network, and so is the reply.
But the reply can't reach the UE.
I think the trace of the ICMP packet is as follows:
UE -> gNB -> GTP-U -> upf.n6if -> data network -> upf.n6if
And in the UPF pod, it seems the returned packet is not handled by GTP-U.

Below is the configuration of the UPF:

global:
  projectName: free5gc
  userPlaneArchitecture: single # possible values are "single" and "ulcl"
  uesubnet: 10.1.0.0/16
  #Global network parameters
  n4network:
    name: n4network
    masterIf: ens33
    subnetIP: 10.100.50.240
    cidr: 29
    gatewayIP: 10.100.50.246
    excludeIP: 10.100.50.246
  n3network:
    name: n3network
    masterIf: ens33
    subnetIP: 10.100.50.232
    cidr: 29
    gatewayIP: 10.100.50.238
    excludeIP: 10.100.50.238
  n6network:
    name: n6network
    masterIf: ens33
    subnetIP: 192.168.179.0
    cidr: 24
    gatewayIP: 192.168.179.2
    excludeIP: 192.168.179.254
  n9network:
    name: n9network
    masterIf: ens33
    subnetIP: 10.100.50.224
    cidr: 29
    gatewayIP: 10.100.50.230
    excludeIP: 10.100.50.230

upf:
  name: upf
  replicaCount: 1
  image:
    name: towards5gs/free5gc-upf
    pullPolicy: Always
  configmap:
    name: upf-configmap
  volume:
    name: upf-volume
    mount: /free5gc/config/

  n3if: # GTP-U
    ipAddress: 10.100.50.233
  n4if: # PFCP
    ipAddress: 10.100.50.241
  n6if: # DN
    ipAddress: 192.168.179.12
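For return traffic to re-enter the GTP tunnel, the UPF needs a route steering the UE subnet into its GTP device, and IP forwarding must be on. A quick way to check from outside the pod; the pod name below is a placeholder, and the GTP device name (upfgtp) is an assumption taken from typical free5gc UPF setups:

```shell
# Inspect the UPF pod's routing and forwarding state.
# "<upf-pod>" is a placeholder for the actual pod name.
kubectl -n free5gc exec <upf-pod> -- ip route show
# Expect an entry like "10.1.0.0/16 dev upfgtp" for the UE subnet.
kubectl -n free5gc exec <upf-pod> -- cat /proc/sys/net/ipv4/ip_forward
# Should print 1.
```

If the UE-subnet route towards the GTP device is missing, replies reach the N6 interface but are never re-encapsulated, which matches the tcpdump above.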

Using custom Docker images

Hi
I have two questions.
Could you show me where the currently used Dockerfiles are?
Is there a way to replace the Docker image repository so I can use my own?

My goal is to use customized Dockerfiles (images) so I can have my own, modified version of the Free5GC binaries.
I would also like to add my own software to the UE containers.

I'm not really familiar with the Helm workflow, but is that possible? Or should I set up my work in a different way?
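As a sketch (the value keys are taken from the UPF values shown elsewhere on this page; other NFs follow the same pattern, and whether a `tag` key exists depends on the chart version), a custom image can usually be set at install time without editing the charts:

```shell
# Point a subchart at a custom image; "myregistry/free5gc-upf" and
# "mytag" are placeholders for your own repository and tag.
helm -n free5gc install free5gc-v1 towards5gs/free5gc \
  --set free5gc-upf.upf.image.name=myregistry/free5gc-upf \
  --set free5gc-upf.upf.image.tag=mytag \
  --set free5gc-upf.upf.image.pullPolicy=Always
```

The same override can be kept in a custom values file passed with `-f` instead of repeating `--set` flags.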

Gtp5g: open link: create: operation not supported

I am facing an issue while deploying the data plane.

2022-09-01T12:13:54Z [INFO][UPF][Cfg] ==================================================
2022-09-01T12:13:54Z [INFO][UPF][Main] Log level is set to [info] level
2022-09-01T12:13:54Z [INFO][UPF][Main] starting Gtpu Forwarder [gtp5g]
2022-09-01T12:13:54Z [INFO][UPF][Main] GTP Address: "10.100.50.233:2152"
2022-09-01T12:13:54Z [ERRO][UPF][Main] UPF Cli Run Error: open Gtp5g: open link: create: operation not supported

dmesg doesn't show any warning messages. What could be the issue? Has anyone faced a similar issue?
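"operation not supported" when creating the GTP link usually means the gtp5g kernel module is not loaded on the node where the UPF pod is scheduled. A quick check on that node (sketch; modprobe needs root):

```shell
# Verify the gtp5g kernel module is loaded on the UPF node.
lsmod | grep gtp5g || echo "gtp5g not loaded"
# If it was built and installed but not loaded:
sudo modprobe gtp5g
# Confirm the kernel can find the module at all:
modinfo gtp5g | head -n 3
```

If `modinfo` fails, the module was likely built against a different kernel than the one currently running.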

Horizontal Pod Autoscaler (HPA)

Hi everybody, I was trying to enable the HPA for some NFs. To do this I simply edited the "autoscaling" section in the values.yaml file, i.e. I switched from this

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100

to this

  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 4

But when I try to install free5gc, I get this error

  Error: INSTALLATION FAILED: template: free5gc/charts/free5gc-upf/templates/upf/upf-hpa.yaml:19:11: executing "free5gc/charts/free5gc-upf/templates/upf/upf-hpa.yaml" at <include "free5gc-upf.fullname" .>: error calling include: template: free5gc/charts/free5gc-upf/templates/_helpers.tpl:26:14: executing "free5gc-upf.fullname" at <.Values.fullnameOverride>: nil pointer evaluating interface {}.fullnameOverride

I tried making some changes, but I get other errors, always related to _helpers.tpl.

Does anyone know how to fix this?
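The nil pointer on `.Values.fullnameOverride` suggests the UPF subchart is rendered without its expected values scope. One hedged guess: if the edit was made in the umbrella free5gc chart's values.yaml rather than the subchart's own, the autoscaling block has to stay nested under the subchart key. The key layout below is assumed from the chart naming and should be checked against your chart version:

```yaml
# In the umbrella free5gc chart's values.yaml (layout assumed):
free5gc-upf:
  upf:
    autoscaling:
      enabled: true
      minReplicas: 2
      maxReplicas: 4
```

If the subchart's own values.yaml was edited instead, make sure no sibling keys (such as fullnameOverride) were accidentally removed while changing the autoscaling block.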

No uesimtun0 interface in UE pod

Hi, I am trying to install free5gc in my VirtualBox VM with a single-node Kubernetes cluster (pods are scheduled to the master node).
In my VM, I have three interfaces:

  • enp0s3 (NAT interface): IP 10.0.2.15/24 GW 10.0.2.2
  • enp0s8 (Bridge Network): DHCP4
  • enp0s9 (Host-only interface): 192.168.56.200/24

Since the VM does not have interfaces eth0 and eth1, I reconfigured it following the instructions in Network configuration:

Install free5gc

helm -n free5gc install free5gc-v1 towards5gs/free5gc --set global.n2network.masterIf=enp0s3,\
  global.n3network.masterIf=enp0s3,global.n4network.masterIf=enp0s3,\
  global.n6network.masterIf=enp0s3,global.n9network.masterIf=enp0s3,\
  global.n6network.subnetIP=10.0.2.0,global.n6network.gatewayIP=10.0.2.2,\
  free5gc-upf.upf.n6if.ipAddress=10.0.2.15

Install the UERANSIM Helm chart

helm -n free5gc install ueransim towards5gs/ueransim --set\
  global.n2network.masterIf=enp0s3,global.n3network.masterIf=enp0s3

All the pods are running successfully

image

However, there is no uesimtun0 in the UE pod as shown in the instructions, so I cannot perform the ping test

image

Here are the kubectl logs of AMF, UPF, SMF, gNB and UE

AMF
image
SMF
image
UPF
image
gNB
image
UE
image

Can you guys help me pinpoint the problem? Thanks in advance!!!

UPF POD failed to create pod sandbox

I came across this problem when deploying the free5gc project on Kubernetes and I am wondering if anyone can help.
My environment is Ubuntu 20.04 and I have already installed the gtp5g module.
I followed the blog https://prog.world/5g-core-network-deployment-with-free5gc-kubernetes-and-helm/ and have not changed any values in the YAML.
All the other pods are running while the UPF fails.
My NICs are ens33 and docker0, and I run the project on VMware.
The detailed error information is as follows:
WeChat Screenshot_20230226171144
I want to know whether the NIC will affect the deployment of free5gc, and if so, how to configure the network on VMware Workstation.
Thanks!

Problem deploying free5gc on cluster

Hello,
I am using a cluster that consists of two laptops connected via the network. I deploy the free5gc charts from the cluster control plane (all preliminary requirements seem to be fulfilled). However, while all other pods are deployed on the master node, the amf, upf and smf are not initializing on the worker nodes, as in the screenshot
image

When describing a pod i get the following message:
OutputPodDescribe.txt

It seems that whenever interfaces are added, there is a missing link that prevents Multus CNI from creating the container:
Warning FailedCreatePodSandBox 111s (x184 over 11m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_free5gc-free5gc-smf-smf-68fbcd7db9-dbj8l_5gtest_358cfc0f-40a2-4d02-92e8-d380eb6326bb_0(3ecb690e00ac792823ee8b02226e323a001e05ca8126275b76757a4b1fb171a6): error adding pod 5gtest_free5gc-free5gc-smf-smf-68fbcd7db9-dbj8l to CNI network "multus-cni-network": [5gtest/free5gc-free5gc-smf-smf-68fbcd7db9-dbj8l:n4network-smf]: error adding container to network "n4network-smf": Link not found

What could be the problem here? (The master node is untainted and allows pod scheduling; node103 is on the same laptop as the master node and node193 is on the other laptop.)
Should I expect to have all the pods deployed on a single node? Maybe I should try deploying them specifically from the worker nodes? Thank you in advance.

Here is node description:
NodesDescribe.txt
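"Link not found" from the macvlan plugin typically means the master interface named in the NetworkAttachmentDefinition (eth0/eth1 by default in these charts) does not exist on the node where the pod was scheduled. A quick sanity check to run on each node (sketch; replace eth0 with the masterIf value you configured):

```shell
# Confirm the macvlan master interface exists on this node.
ip link show eth0 >/dev/null 2>&1 \
  && echo "eth0 present on $(hostname)" \
  || echo "eth0 missing on $(hostname)"
```

If the interface names differ between the two laptops, the masterIf value has to match each node, or the pods have to be pinned to nodes that actually carry that interface.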

NRF pod stuck in "Init" state waiting for MongoDB, while the MongoDB pod / containers / svc are in "Running" state

Description:

  1. Created a k8s cluster (cluster is up and running).
  2. Kubernetes worker & master node on kernel 5.4.0-42-generic.
  3. Add an additional "eth1" interface on worker node.
  4. Installed Multus & helm.
  5. Created a persistent volume.
  6. Execute the command- "helm -n free5gc-core install --generate-name ./free5gc/"
  7. After that, except for upf & mongo-db, all other pods are stuck in the "Init" state.
  8. All the pods are in the same namespace, "kube-system".

Logs are-
cmd- "kubectl -n kube-system get pods --all-namespaces"
image

cmd- "kubectl describe pod free5gc-1629270501-nrf-694fd8cdd6-cxqvv -n kube-system"
nrf_describe_log

cmd- "kubectl get pvc,pv,svc --all-namespaces -o wide"
image

cmd- "kubectl get network-Attachment-definitions --all-namespaces"
image

Please assist me.

Network requirements of Free5gc

Hello

Why is it required to have static IP addresses for the SMF and AMF?
If it is about avoiding IP changes due to CNF crashes, wouldn't K8s Services be a better option?
If it is about CNF discovery, wouldn't DNS and K8s Service discovery help here?

Also, why is it mandatory to have multiple interfaces? Couldn't, for example, the UPF and the CP CNFs talk to the SMF using the same IP address?

I am highlighting these points because Open5GS, another 5GC, doesn't seem to have these requirements.

Multus interfaces cannot communicate with each other when pods are on multiple nodes

Hi, I have a scenario on a 3-node cluster, where node1 is for UE+gNB, node2 is for the 5G core, and node3 is for the UPF only.
In that scenario pods cannot see (ping) each other's Multus interfaces.
E.g. from the UPF I cannot ping the SMF N4 (10.100.50.244) nor the gNB N3 unless they are on the same node.
When everything is on 1 node it works flawlessly. When I split UERANSIM (node1) and free5gc (node2), I managed to pair the AMF and gNB by enabling ngap and changing N2 in the gNB YAML to the newly created service IP, but the UE still cannot communicate via the tunnel due to the issue above.
Is it possible to create similar services for the other pods/interfaces? Or do you know another way to make it work?
Thanks for the help

AMF, SMF and UPF fail to connect to the internet

I deployed the 5GC by referring to this article, but all the network elements except the AMF and SMF started successfully. Checking shows that both the AMF and SMF are stuck in the initialization process, because the wait-nrf init container keeps running curl http://nrf-nnrf:8000 but the returned status code is 000. I tried to ping google.com in the AMF, SMF and UPF pods via exec, but it shows ping: bad address 'google.com'. It looks like the network functionality of all pods deployed with multiple NICs is broken. How can I solve this problem? Thanks!
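The 000 status code from curl plus "bad address" from ping points at DNS resolution failing inside the multi-NIC pods. One way to narrow it down (sketch; the pod name is a placeholder, and whether nslookup is present depends on the init-container image):

```shell
# Check name resolution from inside a stuck pod.
# "<amf-pod>" is a placeholder for the actual pod name.
kubectl -n free5gc exec <amf-pod> -c wait-nrf -- cat /etc/resolv.conf
kubectl -n free5gc exec <amf-pod> -c wait-nrf -- nslookup nrf-nnrf
```

If the nrf-nnrf Service name does not resolve, the problem is in the cluster DNS path rather than in the free5gc configuration itself.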

image

Manual UE connectivity test fails but helm's test result shows Succeeded

Hi, thank you for the maintenance of this project. This project is so helpful for me.

I tried to deploy free5gc on Kubernetes with reference to Setup free5gc on one single cluster and test with UERANSIM

Kubernetes is single-node and single-cluster, deployed with kubeadm & containerd

And I run the UE's connectivity test with helm -n free5gc test ueransim-v1, and its result is

TEST SUITE:     connectivity-test-configmap
Last Started:   Fri Jul  1 07:00:37 2022
Last Completed: Fri Jul  1 07:00:37 2022
Phase:          Succeeded
TEST SUITE:     ueransim-v1-test-connection
Last Started:   Fri Jul  1 07:00:37 2022
Last Completed: Fri Jul  1 07:00:42 2022
Phase:          Succeeded

but when I test the UE's connectivity manually, a tunnel interface uesimtun0 is created, yet the ping check ping -I uesimtun0 www.google.com fails.

$ kubectl -n free5gc exec -i ueransim-v1-ue-c7c564f8c-9jlgb -- bash -c 'ping -c 10 -I uesimtun0 www.google.com'
PING www.google.com (142.250.206.228) from 10.1.0.5 uesimtun0: 56(84) bytes of data.

--- www.google.com ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9208ms

command terminated with exit code 1

helm's test script is defined here, but it returns the wrong result.

I changed the script as follows, and helm's test returns the correct result.

    echo "Test connectivity"
    ping_output="$(kubectl -n {{ $.Release.Namespace }} exec -i ${pod_name} -- bash -c 'ping -c 10 -I uesimtun0 www.google.com')"
    echo "${ping_output}"
    echo "***********************************************************************"
    echo ""
    loss_rate="$(echo "$ping_output" | grep 'loss' | awk -F',' '{ print $3 }' | awk '{ print $1 }')"
    echo "Packet loss-rate is $loss_rate"
    if [ "$loss_rate" = "0%" ] ; then
      echo "Connection test passed - ${loss_rate}"
      exit 0
    else
      echo "Connection test failed - ${loss_rate}"
      exit 1
    fi
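The fix hinges on the awk pipeline extracting the loss percentage from ping's summary line. Run standalone against a canned summary line (format taken from the ping output above), it behaves like this:

```shell
# Reproduce the loss-rate extraction on a canned ping summary line.
ping_output="10 packets transmitted, 0 received, 100% packet loss, time 9208ms"
# Split on commas, take the third field (" 100% packet loss"),
# then take its first word.
loss_rate="$(echo "$ping_output" | grep 'loss' | awk -F',' '{ print $3 }' | awk '{ print $1 }')"
echo "$loss_rate"   # prints 100%
```

With 100% loss the comparison against "0%" fails, so the script exits 1 and helm correctly marks the test suite as Failed.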
TEST SUITE:     connectivity-test-configmap
Last Started:   Fri Jul  1 07:24:14 2022
Last Completed: Fri Jul  1 07:24:14 2022
Phase:          Succeeded
TEST SUITE:     ueransim-v1-test-connection
Last Started:   Fri Jul  1 07:24:14 2022
Last Completed: Fri Jul  1 07:24:37 2022
Phase:          Failed

This issue is not focused on the UE's connectivity, but on helm's test script.

Could you give me some comments?

Thank you.

Update free5gc image

Hi, thank you for the maintenance of this project. This project is so helpful for me.

About a month ago, free5gc released v3.2.0 (today's latest release is v3.2.1), which includes the migration of the UPF implementation from C to Golang.
I'd like to test how the Golang-based UPF works in Kubernetes, but the towards5gs-helm project supports v3.1.1.

Could you consider updating free5gc images?

Would you consider publishing a Dockerfile if possible?
I'd like to contribute to the towards5gs-helm project with image updates.
It would be great if we could test and discuss running a new version of free5gc on kubernetes in public.

Pods stuck in Init state, created container wait-nrf

Hi,
after installing the free5gc project with helm, some of my pods are stuck in the Init state, and even after waiting for minutes they don't come up. Sometimes only 2 hang, but sometimes 4-5 pods are hanging. I saw that yesterday an update was made, probably about this issue ("Fix initContainer curl command waiting for NRF ready / Add --insecure…"), but I'm still experiencing the problem. Can you please help me overcome this issue?

image
image

Thank you and best regards,
ritokispingvin

TUN interface created but No Connection

Hello everyone,
I wanted first to thank the whole team maintaining such a project; it's indeed a very interesting contribution.

I'm currently trying to deploy free5gc on a Kubernetes cluster composed of 2 worker nodes and one master. The following are the steps of my setup:

  1. I have verified the installation of the kernel module gtp5g on all the workers' nodes.
  2. I currently have only one physical interface, called eth0, that I use as the master interface for the network attachments.
  3. I'm currently using kube-ovn as the CNI plugin joined with Multus.
  4. In my use case I'm trying to connect a simple server to the UPF:
    • The server is a simple pod that uses the same network-attachment-definition as the UPF and is connected to the same network:

image

Here are the values.yml used for the deployment of free5gc charts:

global:
  n6network:
    name: n6network
    masterIf: eth0
  5. The UE and gNB were both deployed successfully and the interface uesimtun0 is up and working:
    image

  6. I have also activated promiscuous mode and verified that IPv4 forwarding is enabled on the UPF pod.

Problem

When trying to ping from the user-equipment pod to the UPF, everything works perfectly (the IP address of the server is 10.100.10.10 and the UPF is 10.100.100.12):
image
But the problem occurs when trying to access the server pod from the user-equipment pod:
image

I have checked the connection between upf and the server and it is indeed working fine:
image

Is there anything I'm actually missing out on in my configuration?

UE connectivity fails with Destination Host Unreachable

Hi, thank you for the maintenance of this project. This project is so helpful for me.

I try to deploy towards5gs-helm v3.1.1 on Kubernetes deployed by kubeadm and flannel.
(All of code is hosted https://github.com/hi120ki/vagrant-free5gc-k8s)

I don't modify the towards5gs-helm code, and I deploy a single Kubernetes node.
An interface uesimtun0 is created in the UE, but I get a Host Unreachable error in the UE's connectivity test.

4: uesimtun0: <POINTOPOINT,PROMISC,NOTRAILERS,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 10.1.0.1/32 scope global uesimtun0
       valid_lft forever preferred_lft forever
    inet6 fe80::9e85:ad35:c123:a4c2/64 scope link stable-privacy
       valid_lft forever preferred_lft forever
PING 8.8.8.8 (8.8.8.8) from 10.1.0.1 uesimtun0: 56(84) bytes of data.
From 10.100.100.12 icmp_seq=1 Destination Host Unreachable
From 10.100.100.12 icmp_seq=2 Destination Host Unreachable
From 10.100.100.12 icmp_seq=3 Destination Host Unreachable

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3057ms

This shows that the UE (10.1.0.1) can reach the UPF (10.100.100.12), but not the DN (8.8.8.8).
I checked the ip route settings on the UPF, but they seem to be set correctly.

default via 10.100.100.1 dev n6 table n6if
default via 192.168.0.1 dev eth0
10.1.0.0/17 dev upfgtp proto static
10.100.50.232/29 dev n3 proto kernel scope link src 10.100.50.233
10.100.50.240/29 dev n4 proto kernel scope link src 10.100.50.241
10.100.100.0/24 dev n6 proto kernel scope link src 10.100.100.12
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.8
192.168.0.0/16 via 192.168.0.1 dev eth0
broadcast 10.100.50.232 dev n3 table local proto kernel scope link src 10.100.50.233
local 10.100.50.233 dev n3 table local proto kernel scope host src 10.100.50.233
broadcast 10.100.50.239 dev n3 table local proto kernel scope link src 10.100.50.233
broadcast 10.100.50.240 dev n4 table local proto kernel scope link src 10.100.50.241
local 10.100.50.241 dev n4 table local proto kernel scope host src 10.100.50.241
broadcast 10.100.50.247 dev n4 table local proto kernel scope link src 10.100.50.241
broadcast 10.100.100.0 dev n6 table local proto kernel scope link src 10.100.100.12
local 10.100.100.12 dev n6 table local proto kernel scope host src 10.100.100.12
broadcast 10.100.100.255 dev n6 table local proto kernel scope link src 10.100.100.12
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 192.168.0.0 dev eth0 table local proto kernel scope link src 192.168.0.8
local 192.168.0.8 dev eth0 table local proto kernel scope host src 192.168.0.8
broadcast 192.168.0.255 dev eth0 table local proto kernel scope link src 192.168.0.8
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev n3 proto kernel metric 256 pref medium
fe80::/64 dev n6 proto kernel metric 256 pref medium
fe80::/64 dev n4 proto kernel metric 256 pref medium
fe80::/64 dev upfgtp proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fe80::4f:2ff:fe03:3b95 dev n6 table local proto kernel metric 0 pref medium
local fe80::40ca:95ff:fe04:d11a dev eth0 table local proto kernel metric 0 pref medium
local fe80::58a6:1b28:4ae0:c5fe dev upfgtp table local proto kernel metric 0 pref medium
local fe80::d006:8aff:fee8:91b9 dev n3 table local proto kernel metric 0 pref medium
local fe80::d08d:38ff:fe5c:51e2 dev n4 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev eth0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev n3 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev n6 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev n4 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev upfgtp table local proto kernel metric 256 pref medium

All of logs are recorded on GitHub Actions.
https://github.com/hi120ki/vagrant-free5gc-k8s/runs/7720100056?check_suite_focus=true#step:3:1876

Could you give some comments on the possible cause?
Thank you.
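One thing worth checking: "Destination Host Unreachable" reported from the UPF's own N6 address (10.100.100.12) often means L2 resolution of the next hop is failing, i.e. the configured N6 gateway (10.100.100.1 in the routing table above) does not actually answer ARP on that segment. A quick check from inside the UPF pod (sketch; the pod name is a placeholder):

```shell
# Inspect the neighbor table on the N6 interface.
# "<upf-pod>" is a placeholder for the actual pod name.
kubectl -n free5gc exec <upf-pod> -- ip neigh show dev n6
# A FAILED or INCOMPLETE entry for 10.100.100.1 would mean the
# configured N6 gateway is unreachable at L2.
```

In a single-node Vagrant setup this can happen when the macvlan network's gatewayIP does not correspond to any real device.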

Add a sidecar container to NFs

I have created a sidecar container image and want to run it as a sidecar container in NFs such as the AMF. How can I modify the AMF's chart to add my sidecar container?
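As a sketch of the general approach (not the chart's documented method): Helm charts define the pod spec in a deployment template under templates/, so an extra entry in the containers list is usually enough. The file path, container names and image below are assumptions to be checked against the actual AMF chart:

```yaml
# Hypothetical fragment of the AMF deployment template's pod spec
# (e.g. charts/free5gc-amf/templates/amf-deployment.yaml);
# the sidecar name and image are placeholders.
    spec:
      containers:
        - name: amf
          image: towards5gs/free5gc-amf
          # ... existing AMF container spec kept as-is ...
        - name: my-sidecar
          image: myregistry/my-sidecar:latest
```

To keep the chart upgradeable, the sidecar block could also be made conditional on a new values.yaml flag instead of being hard-coded.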

Running UPF on a separate, non-containerized host

Hello,
I would like to run the UPF on a separate host, similar to the diagram you show here:
Setup-free5gc-on-multiple-clusters-and-test-with-UERANSIM-Architecture
but non-containerized, so that the UPF interfaces are the host interfaces. What configuration changes should I make to the SMF configuration so that the SMF pod can talk to the UPF? Specifically, the N4 network does not work correctly. Should it be omitted and a NodePort service created instead? What changes should be made to values.yaml?
Thanks, Edna

error invoking ConflistDel - "n2network-amf": conflistDel

Hello everyone.

I'm trying to install towards5gs-helm on my system. The specs:
Kubernetes

  • K8s v1.23.4
  • 2 node. 1 master, 1 worker
  • Installed gpt5g module on worker node
  • helm v3.8.0
  • Multus v3.8
  • Cilium CNI v1.11.0

Network configuration
Because the names of the network interfaces on my Kubernetes nodes are different from eth0 and eth1, I changed all these parameters to the name of my network interface by modifying towards5gs-helm/charts/free5gc/values.yaml:

#Global network parameters
  n2network:
    name: n2network
    masterIf: enp0s31f6
    subnetIP: 10.100.50.248
    cidr: 29
    gatewayIP: 10.100.50.254
    excludeIP: 10.100.50.254
  n3network:
    name: n3network
    masterIf: enp0s31f6
    subnetIP: 10.100.50.232
    cidr: 29
    gatewayIP: 10.100.50.238
    excludeIP: 10.100.50.238
  n4network:
    name: n4network
    masterIf: enp0s31f6
    subnetIP: 10.100.50.240
    cidr: 29
    gatewayIP: 10.100.50.246
    excludeIP: 10.100.50.246
  n6network:
    name: n6network
    masterIf: enp0s31f6
    subnetIP: <my IP address>
    cidr: 24
    gatewayIP: <my IP gateway>
    excludeIP: 10.100.100.254
  n9network:
    name: n9network
    masterIf: enp0s31f6
    subnetIP: 10.100.50.224
    cidr: 29
    gatewayIP: 10.100.50.230
    excludeIP: 10.100.50.230

I also tried changing similar config in the amf, smf and upf charts, but the problem still happens.

The error
kubectl get pods -n free5gc

NAME                                              READY   STATUS              RESTARTS   AGE
free5gc-v1-free5gc-amf-amf-89966854-xq5r4         0/1     Init:0/1            0          2m6s
free5gc-v1-free5gc-amf-amf-bc46d5dfc-6qpf4        0/1     Init:0/1            0          2m7s
free5gc-v1-free5gc-ausf-ausf-7c568887cd-249vf     1/1     Running             0          2m6s
free5gc-v1-free5gc-nrf-nrf-bb98d64f8-88vrm        1/1     Running             0          2m7s
free5gc-v1-free5gc-nssf-nssf-87f467897-bnprz      1/1     Running             0          2m7s
free5gc-v1-free5gc-pcf-pcf-7b66cd6494-cmt4z       1/1     Running             0          2m5s
free5gc-v1-free5gc-smf-smf-7489648d4d-h6jds       0/1     Init:0/1            0          2m3s
free5gc-v1-free5gc-smf-smf-76df75cfcf-gljvc       0/1     Init:0/1            0          2m7s
free5gc-v1-free5gc-udm-udm-757d5b546c-xj825       1/1     Running             0          2m5s
free5gc-v1-free5gc-udr-udr-7c7cc97f68-xdmmp       1/1     Running             0          2m4s
free5gc-v1-free5gc-upf-upf-658c9d59-r6r9h         0/1     ContainerCreating   0          2m7s
free5gc-v1-free5gc-upf-upf-666b689578-jtvm7       0/1     ContainerCreating   0          2m4s
free5gc-v1-free5gc-webui-webui-7d9f769968-dfqkc   1/1     Running             0          2m6s
mongodb-0                                         1/1     Running             0          2m7s

kubectl describe pod -n free5gc free5gc-v1-free5gc-amf-amf-89966854-xq5r4

Events:
  Type     Reason                  Age                  From               Message
  ----     ------                  ----                 ----               -------
  Normal   Scheduled               55m                  default-scheduler  Successfully assigned free5gc/free5gc-v1-free5gc-amf-amf-89966854-xq5r4 to mec-1
  Normal   AddedInterface          55m                  multus             Add eth0 [10.0.1.83/32] from cilium
  Warning  FailedCreatePodSandBox  55m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "65dd9a802b8b4dc491c4a73d6870ba6532dd20355317ec4bc517b341c9a3d559" network for pod "free5gc-v1-free5gc-amf-amf-89966854-xq5r4": networkPlugin cni failed to set up pod "free5gc-v1-free5gc-amf-amf-89966854-xq5r4_free5gc" network: [free5gc/free5gc-v1-free5gc-amf-amf-89966854-xq5r4/:n2network-amf]: error adding container to network "n2network-amf": Link not found, failed to clean up sandbox container "65dd9a802b8b4dc491c4a73d6870ba6532dd20355317ec4bc517b341c9a3d559" network for pod "free5gc-v1-free5gc-amf-amf-89966854-xq5r4": networkPlugin cni failed to teardown pod "free5gc-v1-free5gc-amf-amf-89966854-xq5r4_free5gc" network: delegateDel: error invoking ConflistDel - "n2network-amf": conflistDel: error in getting result from DelNetworkList: Link not found]

Add instrumentation for enabling Jaeger tracing

Thank you for your great work. I have learned a lot about 5G from this project. I want to implement distributed tracing with Jaeger for research on anomaly detection in 5G micro-service systems. Could you please add instrumentation in the Docker image to enable Jaeger tracing when deploying and testing the 5G system? I think this would help a lot with research on 5G security. I would really appreciate it if you could consider this.

Some pods do not start after delete and install of UERANSIM

Hi folks, I hope you can help me solve this problem.

I have a cluster created with kubeadm with 2 physical nodes called cube2 and cube4.
More info:

  • Ubuntu 18.04.5 LTS
  • Kernel version 5.8.5-050805-generic
  • Container Runtime version: containerd://1.5.5
  • Kubectl Version: v1.23.3
  • K8s Version: v1.23.8
  • Calico version: v3.23.2 (installed with IP forwarding)
  • Multus CNI version: 3.9
  • The nodes interfaces are in promiscuous mode and their name is e0 (with the altname property)

Everything works perfectly: both the free5gc and ueransim pods are correctly deployed, the uesimtun0 interface is created, and I can reach the internet from the UE. Here is a screenshot:

work

Now, before describing the problem(s), I would like to point out that I am currently using a local version of the repo pinned to this commit, because in that version everything worked fine (some weeks ago). I was hoping to solve the problem by using that version, but I was wrong.

The first problem occurs when I run the following commands:

helm delete -n 5g ueransim
# I wait for the pods to be terminated and then
helm -n 5g install ueransim ./towards5gs-helm/charts/ueransim/ --set global.n2network.masterIf=e0,global.n3network.masterIf=e0

After some time, this is the situation:
not_work_first

Note that the pods in the Unknown state are amf, smf and upf, and the gnb is Pending. The only thing these pods have in common is that they use MACVLANs and have multiple interfaces configured with Multus.

Since the previous situation did not change, I ran these commands:

helm delete -n 5g ueransim
helm delete -n 5g free5gc
# I wait for the pods to be terminated and then
helm -n 5g install free5gc ./towards5gs-helm/charts/free5gc --set global.n2network.masterIf=e0,global.n3network.masterIf=e0,global.n4network.masterIf=e0,global.n6network.masterIf=e0,global.n9network.masterIf=e0,global.n6network.subnetIP=<subnetIP>,global.n6network.cidr=<cidr>,global.n6network.gatewayIP=<gatewayIP>,free5gc-upf.upf.n6if.ipAddress=<fakeAdressIP>
helm -n 5g install ueransim ./towards5gs-helm/charts/ueransim/ --set global.n2network.masterIf=e0,global.n3network.masterIf=e0

After some time, this is the situation:

not_work_end

Note that the status of the amf, smf and upf pods was the same even before the UERANSIM installation.

I also uploaded the "kubectl describe pod" output of the upf. The same situation can be seen in the logs of the amf, smf and gnb. Again, as before, the only thing these pods have in common is that they use MACVLANs and have multiple interfaces configured with Multus.

upf

Do you have any suggestions? Thanks in advance

EDIT: I'm trying to understand the error better; maybe it's related to Multus and the MACVLAN plugin.

How to enable ip_forwarding in calico microk8s for UPF pod

Hi everyone,

I am trying to deploy the 5G core on a two-node cluster created by microk8s. Everything seems to be working right, but after too many hours trying to enable ip_forwarding inside the UPF, I decided to come to the community.

The only way I have found to enable ip_forwarding is by using the following commands:

systemd-cgls | grep upf # To extract UPF's pid
sudo nsenter -t <UPF_PID> -n sysctl -w net.ipv4.ip_forward=1

This makes the network work and the UE is able to reach Internet.

I have tried to configure it in many different ways, as presented in the links shown in the documentation: "We remind you that some CNI plugins (e.g. Flannel) allow this functionality by default, while others (e.g. Calico) require a special configuration."

I am still not sure how Calico works in this regard.

After setting up the cluster (and installing plugins & modules), I ran these commands to install calicoctl as a kubectl plugin, as shown in https://projectcalico.docs.tigera.io/maintenance/clis/calicoctl/install#install-calicoctl-as-a-kubectl-plugin-on-a-single-host

curl -L https://github.com/projectcalico/calico/releases/download/v3.24.1/calicoctl-linux-amd64 -o kubectl-calico
chmod +x kubectl-calico
sudo mv kubectl-calico /usr/bin

Assuming the tool is now already configured, I tried to run the following:

microk8s kubectl calico --allow-version-mismatch apply -f - <<EOF
- apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: empty-default-allow
  spec:
    order: 0
    selector: "all()"
    applyOnForward: true
    types:
    - Ingress
    - Egress
    ingress:
    - action: Allow
    egress:
    - action: Allow
EOF

After this, I redeployed the free5gc charts, but running the following command in the UPF still gives 0 as a result: cat /proc/sys/net/ipv4/ip_forward

Thank you in advance for any help that could be provided.
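A note on the approach above: network policy controls which traffic is allowed, but Calico has a separate, dedicated container setting for enabling ip_forward inside pods, `allow_ip_forwarding` in the Calico CNI configuration. The conflist location varies by install (treat the microk8s path as something to look up), but the relevant fragment looks like this:

```json
{
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "calico",
      "container_settings": {
        "allow_ip_forwarding": true
      }
    }
  ]
}
```

With an operator-based Calico install, the equivalent knob is `spec.calicoNetwork.containerIPForwarding: Enabled` on the Installation resource; pods need to be recreated after the change for the setting to take effect.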
