Note: This blog post is only suitable for local development and testing; it is not intended for production use.

Introduction
A quick tutorial looking at restricting traffic between two namespaces in Istio. We will then explore how to debug the policies.
We will be:

- leveraging Istio’s `AuthorizationPolicy` to restrict traffic between sidecars.
- leveraging Kubernetes’ `NetworkPolicy` to restrict traffic between namespaces.

We are going to create two applications with simple policies:

- `np-test-nginx-1` - Accept traffic from anywhere
- `np-test-nginx-2` - Reject traffic from everywhere
Prerequisites
This assumes you have the following installed and configured already:
- Istio
- `istioctl`
- `kubectl`
- `envsubst`
- A CNI that supports `NetworkPolicy` in Kubernetes.
The `NetworkPolicy` resource in Kubernetes is often not enabled by default. For example, if you’re running this in EKS Auto Mode, you need to configure `NetworkPolicy` support before it will be enforced.
For this example, I will be running K3s on a Raspberry Pi with the Calico CNI installed.
Deployment
We are going to leverage `envsubst` as a quick and dirty way to duplicate these Kubernetes resources; in the real world you would likely want to use `helm` or another package manager.
Create this file `istio-np.yaml.template`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-$SERVICE_NAME
spec:
  type: ClusterIP
  ports:
    - name: http-web
      port: 80
      targetPort: 80
  selector:
    app: $SERVICE_NAME
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-$SERVICE_NAME
data:
  nginx.conf: |
    events {
    }
    http {
      server {
        listen 80;
        location / {
          return 200 "Hello world - $SERVICE_NAME!";
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $SERVICE_NAME
spec:
  selector:
    matchLabels:
      app: $SERVICE_NAME
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        "sidecar.istio.io/logLevel": debug # level can be: trace, debug, info, warning, error, critical, off
        "sidecar.istio.io/proxyLogLevel": rbac:debug
      labels:
        app: $SERVICE_NAME
    spec:
      containers:
        - image: nginx:latest
          name: $SERVICE_NAME
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: config-vol
              mountPath: /etc/nginx/
      volumes:
        - name: config-vol
          configMap:
            name: config-$SERVICE_NAME
            items:
              - key: nginx.conf
                path: nginx.conf
```
Then run:

```bash
kubectl create namespace np-test-1
kubectl create namespace np-test-2

kubectl label namespace np-test-1 istio-injection=enabled --overwrite
kubectl label namespace np-test-2 istio-injection=enabled --overwrite

export SERVICE_NAME="np-test-nginx-1"
envsubst < istio-np.yaml.template > istio-np-1.yaml

export SERVICE_NAME="np-test-nginx-2"
envsubst < istio-np.yaml.template > istio-np-2.yaml

kubectl apply -f istio-np-1.yaml -n np-test-1
kubectl apply -f istio-np-2.yaml -n np-test-2
```
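If you want to sanity-check a rendered manifest before applying it, a client-side dry run will catch YAML and templating mistakes without creating anything (an optional extra step, not part of the original flow):

```bash
# Render the template and validate it without creating any resources.
SERVICE_NAME="np-test-nginx-1" envsubst < istio-np.yaml.template | kubectl apply --dry-run=client -n np-test-1 -f -
```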
This will create a service in each namespace:

- `svc-np-test-nginx-1.np-test-1.svc.cluster.local`
- `svc-np-test-nginx-2.np-test-2.svc.cluster.local`
Wait for the pods to finish deploying before continuing.

```bash
kubectl get pods -Aw
```
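Alternatively, if you would rather block until the pods are ready than watch them, something like this should work:

```bash
# Wait (up to two minutes each) for the pods to report Ready.
kubectl wait --for=condition=Ready pod -l app=np-test-nginx-1 -n np-test-1 --timeout=120s
kubectl wait --for=condition=Ready pod -l app=np-test-nginx-2 -n np-test-2 --timeout=120s
```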
Confirm Setup
Confirm Pod 1 can call Pod 2

```bash
kubectl exec -it $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -n np-test-1 -- bash -c "echo '' && curl -sf --max-time 3 svc-np-test-nginx-2.np-test-2.svc.cluster.local || echo 'Failed calling np-test-nginx-2 from np-test-nginx-1'"
```

Expected output: `Hello world - np-test-nginx-2!`
Confirm Pod 2 can call Pod 1

```bash
kubectl exec -it $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}") -n np-test-2 -- bash -c "curl -sf --max-time 3 svc-np-test-nginx-1.np-test-1.svc.cluster.local || echo 'Failed calling np-test-nginx-1 from np-test-nginx-2' && echo ''"
```

Expected output: `Hello world - np-test-nginx-1!`
Create Istio AuthorizationPolicy
Create file `block-istio-ingress-np-test-2.yaml`:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-ingress-using-authorizationpolicy
  namespace: np-test-2
spec:
  action: DENY
  rules:
    - {} # <- this means "match everything"
```

Then apply the `AuthorizationPolicy` with:

```bash
kubectl apply -f block-istio-ingress-np-test-2.yaml -n np-test-2
```
Now if we wait a couple of seconds and run these commands:

```bash
kubectl exec -it $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -n np-test-1 -- bash -c "echo '' && curl -sf --max-time 3 svc-np-test-nginx-2.np-test-2.svc.cluster.local || echo 'Failed calling np-test-nginx-2'"
kubectl exec -it $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}") -n np-test-2 -- bash -c "curl -sf --max-time 3 svc-np-test-nginx-1.np-test-1.svc.cluster.local || echo 'Failed calling np-test-nginx-1' && echo ''"
```

then we should consistently see this output:

```
Failed calling np-test-nginx-2
Hello world - np-test-nginx-1!
```
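As an aside, when an `AuthorizationPolicy` denies a request, the client actually receives an HTTP 403 (“RBAC: access denied”) from the sidecar rather than a timeout; the `-f` flag above just hides this. A quick way to see the status code (a variation on the test command, not part of the original flow):

```bash
# Print only the HTTP status code; an Istio DENY should yield 403.
kubectl exec -it $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -n np-test-1 -- \
  curl -s --max-time 3 -o /dev/null -w "%{http_code}\n" svc-np-test-nginx-2.np-test-2.svc.cluster.local
```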
Confirm that Istio is blocking it with the following commands:

```bash
kubectl logs $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -c istio-proxy -n np-test-1 | grep "rbac" | grep "denied"
kubectl logs $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}") -c istio-proxy -n np-test-2 | grep "rbac" | grep "denied"
```

If it worked correctly, you should see output similar to this:

```
2025-07-06T19:33:00.654348Z debug envoy rbac external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:224 enforced denied, matched policy ns[np-test-2]-policy[deny-all-ingress-using-authorizationpolicy]-rule[0] thread=26
2025-07-06T19:33:00.654360Z debug envoy http external/envoy/source/common/http/filter_manager.cc:1040 [Tags: "ConnectionId":"21","StreamId":"6947102628293500817"] Preparing local reply with details rbac_access_denied_matched_policy[ns[np-test-2]-policy[deny-all-ingress-using-authorizationpolicy]-rule[0]] thread=26
```
Note that RBAC debug logs need to be turned on; I did this with annotations in the Pod template of the Deployment manifest.

```yaml
...
annotations:
  "sidecar.istio.io/logLevel": debug
  "sidecar.istio.io/proxyLogLevel": rbac:debug
...
```
But you can also enable them using `istioctl`:

```bash
istioctl proxy-config log $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}").np-test-1 --level rbac:debug
istioctl proxy-config log $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}").np-test-2 --level rbac:debug
```
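Once you’re done debugging, you can turn the logging back down the same way (assuming the sidecar’s default level is `warning`, which it typically is):

```bash
# Restore the rbac logger to the (assumed) default warning level.
istioctl proxy-config log $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}").np-test-1 --level rbac:warning
istioctl proxy-config log $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}").np-test-2 --level rbac:warning
```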
And that’s it! You now have a very basic example of a working `AuthorizationPolicy`.
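Before moving on, here is a sketch of what a more targeted policy could look like: a hypothetical variant (not used in this walkthrough) that only admits traffic originating from the `np-test-1` namespace, assuming mTLS is enabled so the sidecar can identify the source:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-only-np-test-1 # hypothetical name, not applied in this tutorial
  namespace: np-test-2
spec:
  action: ALLOW # once an ALLOW policy matches a workload, unmatched traffic is denied
  rules:
    - from:
        - source:
            namespaces: ["np-test-1"]
```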
However, because we can’t guarantee that all traffic is routed through sidecars, we also need to look at leveraging `NetworkPolicy` in Kubernetes. Let’s explore that next.
Setup NetworkPolicy
The `NetworkPolicy` is enforced before the `AuthorizationPolicy` (it applies at the network layer, before traffic ever reaches the sidecar), so we will create that next.
Now let’s block ingress traffic to namespace `np-test-2`.
First remove our Istio policy, so we are testing each one in isolation:
```bash
kubectl delete -f block-istio-ingress-np-test-2.yaml -n np-test-2
```
In a real-world example you would apply both for defense in depth, but it is easier to test them separately.
Create file `block-ingress-np-test-2.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress-using-networkpolicy
  namespace: np-test-2
spec:
  podSelector: {} # Apply to all pods in this namespace
  policyTypes:
    - Ingress
  ingress: []
```
Then apply the `NetworkPolicy` with:

```bash
kubectl apply -f block-ingress-np-test-2.yaml -n np-test-2
```

It can take a few seconds for the CNI to start enforcing the change.
Now run our tests again:

```bash
kubectl exec -it $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -n np-test-1 -- bash -c "echo '' && curl -sf --max-time 3 svc-np-test-nginx-2.np-test-2.svc.cluster.local || echo 'Failed calling np-test-nginx-2'"
kubectl exec -it $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}") -n np-test-2 -- bash -c "curl -sf --max-time 3 svc-np-test-nginx-1.np-test-1.svc.cluster.local || echo 'Failed calling np-test-nginx-1' && echo ''"
```

We should see the same output as before:

```
Failed calling np-test-nginx-2
Hello world - np-test-nginx-1!
```
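One difference worth noticing: with the `NetworkPolicy` in place the request fails because the connection times out, not because the sidecar returns an HTTP 403. Re-running the status-code check from earlier should print `000` (curl’s placeholder when no HTTP response arrives) instead of `403`:

```bash
# With a NetworkPolicy block the TCP connection never completes,
# so curl reports 000 (no HTTP response) rather than 403.
kubectl exec -it $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -n np-test-1 -- \
  curl -s --max-time 3 -o /dev/null -w "%{http_code}\n" svc-np-test-nginx-2.np-test-2.svc.cluster.local
```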
And that’s it! This is as far as we can take the debugging here, because the exact steps for inspecting `NetworkPolicy` logs vary for each CNI.
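As with the Istio policy, a real deployment would usually scope the rule rather than deny everything. A hypothetical variant (not applied in this tutorial) that admits only traffic from `np-test-1` might look like the following, assuming your cluster sets the standard `kubernetes.io/metadata.name` namespace label (automatic since Kubernetes 1.21):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-np-test-1 # hypothetical name, not applied in this tutorial
  namespace: np-test-2
spec:
  podSelector: {} # Apply to all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: np-test-1
```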
Debug tips
Here are some extra debug tips I picked up along the way:
Using Istioctl
Run this command to confirm which policies are enforced for a pod (while the `AuthorizationPolicy` from earlier is applied):

```bash
istioctl x authz check $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}").np-test-2
```

Expected output:

```
ACTION   AuthorizationPolicy                                    RULES
DENY     deny-all-ingress-using-authorizationpolicy.np-test-2   1
```
Check sidecar logs
You can view sidecar logs like so:

```bash
kubectl logs $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -c istio-proxy -n np-test-1
kubectl logs $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}") -c istio-proxy -n np-test-2
```

And specifically look for “rbac” with:

```bash
kubectl logs $(kubectl get pod -n np-test-1 -l app=np-test-nginx-1 -o jsonpath="{.items[0].metadata.name}") -c istio-proxy -n np-test-1 | grep "rbac"
kubectl logs $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}") -c istio-proxy -n np-test-2 | grep "rbac"
```
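You can also inspect the Envoy configuration that istiod pushed to the sidecar, to confirm that an RBAC filter was actually generated from your `AuthorizationPolicy` (a rough sketch; the exact JSON layout can differ between Istio versions):

```bash
# Dump the sidecar's listener config and look for the generated RBAC filter.
istioctl proxy-config listeners $(kubectl get pod -n np-test-2 -l app=np-test-nginx-2 -o jsonpath="{.items[0].metadata.name}").np-test-2 -o json | grep -i -A 5 "rbac"
```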
Debug using kube-router
If you are using kube-router, you can check the kube-router pods for `NetworkPolicy` activity:

```bash
kubectl logs -n kube-system -l k8s-app=kube-router
```
Debug using Calico
If you are using Calico, you can check the logs with:

```bash
kubectl logs -n kube-system -l k8s-app=calico-node
```
Cleanup
```bash
kubectl delete -f istio-np-1.yaml -n np-test-1
kubectl delete -f istio-np-2.yaml -n np-test-2
kubectl delete -f block-ingress-np-test-2.yaml -n np-test-2
# The Istio policy was already removed earlier; --ignore-not-found avoids an error.
kubectl delete -f block-istio-ingress-np-test-2.yaml -n np-test-2 --ignore-not-found

rm istio-np-1.yaml
rm istio-np-2.yaml

kubectl delete namespace np-test-1
kubectl delete namespace np-test-2
```
Summary
This example was deliberately simple, as we were only looking to confirm that these policies can work and to explore ways to debug them.
To configure the rules for real-world scenarios, I would recommend checking out the following resources: