Hello guys, today I am going to post a workaround I use to get test reports out of a Kubernetes pod that has terminated. If you run your tests in a Kubernetes pod, you probably know by now that you can't copy the files containing your test reports from a terminated pod. Many people have complained about this, and you can see there is an open issue here.
I am going to show an example of API tests with Postman and Newman, where I run the tests in a Kubernetes pod inside the same cluster as the microservices, since there is no public API to access them.
Create a Job
- First, you need to create a Job and save it in a .yaml file, like the example below:
apiVersion: batch/v1
kind: Job
metadata:
  name: api-tests
  namespace: default
spec:
  parallelism: 1
  template:
    metadata:
      name: api-tests
    spec:
      containers:
        - name: api-tests
          image: postman/newman:alpine
          # the image's entrypoint is already `newman`; overriding it with
          # command: ["run"] would make the kubelet try to exec a binary
          # called "run", so pass the subcommand through args instead
          args: ["run", "/etc/newman/test.postman_collection.json", "--reporters", "cli", "--reporter-cli-no-failures"]
      restartPolicy: Never
- Then run this on your terminal, or add it to your Jenkinsfile:
kubectl apply -f job.yaml
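Once the Job is applied, you can follow it from the terminal as well; a minimal sketch (the 300s timeout is an arbitrary choice):

# block until the Job reports completion, then print the pod logs
kubectl wait --for=condition=complete --timeout=300s job/api-tests -n default
kubectl logs job/api-tests -n default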
Or you can also create a pod
- You just need to add this command to your Jenkinsfile:
sh "kubectl run api-tests -i --rm --namespace=${ENVIRONMENT_NAMESPACE} --restart=Never --image=${YOUR_IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/${YOUR_COLLECTION_PATH}.postman_collection.json -e /etc/newman/${YOUR_ENVIRONMENT_CONFIG_PATH}.postman_environment.json --reporters cli --reporter-cli-no-failures"
Why not a Deployment?
In the beginning of the implementation I first tried to create a Deployment, but Deployments don't support a policy of never restarting the pod, which means the automation would never stop running and you wouldn't be able to copy the reports from the container.
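You can verify this quickly: if you turn the Job manifest above into a Deployment (adding the required selector and labels) and keep restartPolicy: Never, the API server rejects it with a validation error along these lines (exact wording varies by version; the file name here is made up):

kubectl apply -f deployment-api-tests.yaml
# The Deployment "api-tests" is invalid: spec.template.spec.restartPolicy:
# Unsupported value: "Never": supported values: "Always"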
Copy the reports
So, now that your tests have finished, you can see the logs on Jenkins showing that they passed (or failed), but you want to extract the report from the logs into an HTML/JSON/whatever file, so you can archive or publish it. That way you can see more clearly what the issue is, and the reports for each pipeline stay easy to access.
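Jumping ahead a little: once the extracted api-tests.html exists in the workspace (the extraction is shown below), archiving it from the Jenkinsfile is a one-liner (the HTML Publisher plugin is another option for publishing):

// keep a copy of the report with every build
archiveArtifacts artifacts: 'api-tests.html', fingerprint: true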
Well, kubectl cp doesn't work like docker cp, unfortunately. Once your pod is terminated, you are not able to access the reports or anything else inside the pod. For this reason there is an issue open on the kubectl GitHub repository about exactly that; you can check the progress of the issue here.
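Just to illustrate the failure (the paths here are made up for the example): kubectl cp execs a helper process inside the container to stream the file, so against a completed pod it fails with an error along the lines of "cannot exec into a container in a completed pod":

# fails once the pod has terminated, because cp relies on exec
kubectl cp default/api-tests:/etc/newman/api-tests.html ./api-tests.html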
Now, how can you copy the reports from the container if you can't access it after the tests are finished? Well, there is no perfect way: some people send the reports to S3, some people send the reports to their e-mails, but I found it better to save the report by copying the HTML code out of the logs into a file.
In your Jenkinsfile you will have the command that runs the pod with the tests; after that you need to cat the generated HTML report, grab everything inside the html tags, and save it to a file:
sh "kubectl run api-tests -i --rm --namespace=${ENVIRONMENT_NAMESPACE} --restart=Never --image=${YOUR_IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/${YOUR_COLLECTION_PATH}.postman_collection.json -e /etc/newman/${YOUR_ENVIRONMENT_CONFIG_PATH}.postman_environment.json --reporters cli,html --reporter-html-export api-tests.html --reporter-cli-no-failures ; cat api-tests.html | tee report"
def report = readFile "report"
// include the closing </html> tag as well (indexOf alone would cut it off)
def update = report.substring(report.indexOf('<html>'), report.indexOf('</html>') + '</html>'.length())
writeFile file: "${workspace}/api-tests.html", text: update
sh "rm report"
First, you need to cat the HTML report that your tests generated (remember to put this script that runs the tests in your Docker image, or in the package.json if you use NodeJS; this is just an example). You can see that you will need to grab everything between the html tags. You can do this using substring or awk, whichever you prefer. I am using substring in this example, but if you want to filter with awk instead, the code should be something like awk '/<html>/,/<\/html>/'.
After grabbing the HTML report, I save it to a file and delete the previous report file that contained the whole logs from the Kubernetes pod.
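If you would rather do the extraction entirely in the shell, the awk range pattern mentioned above can replace the Groovy substring step; a small sketch reusing the report file captured by tee:

# print every line from the one matching <html> through the one matching </html>
awk '/<html>/,/<\/html>/' report > api-tests.html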
Not perfect, and I'm not happy about doing these kinds of workarounds, but it is a way to avoid sending the files to S3 or anywhere else. Hope it helps!
My current solution: use a shell script as your pod command that runs a long enough sleep command in the background, followed by the test executable, and then makes the shell wait for the sleep to complete. That way you, first, keep the pod alive after your test executable has run, and second, the sleep will still wake up at some point and let the pod terminate, so there is no infinite resource consumption like with the 'tail -f /dev/null' solution. The pain point is that you still have to kill your pod manually after copying your test reports, or let it live for a possibly long time, and you can't precisely/easily define when to let the pod terminate. For instance, I use a 10 minute sleep as an acceptable pod lifetime.
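A minimal sketch of what such an entrypoint script could look like (the newman invocation and paths are placeholders):

#!/bin/sh
# start the lifetime cap first, so it also covers the test run itself
sleep 600 &
# run the tests; the pod no longer terminates when they finish
newman run /etc/newman/test.postman_collection.json --reporters cli,html --reporter-html-export /etc/newman/api-tests.html
# block until the background sleep exits, then let the pod terminate
wait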
Hi Alex,
When I am trying to create the Job for Newman, I am getting the issue below:
Warning Failed 39s kubelet Error: failed to start container "api-tests": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "run": executable file not found in $PATH: unknown
Has something changed in the newman image from Docker Hub? The newman.yml file is below:
apiVersion: batch/v1
kind: Job
metadata:
  name: api-tests
  namespace: default
spec:
  parallelism: 1
  template:
    metadata:
      name: api-tests
    spec:
      containers:
        - name: api-tests
          image: postman/newman
          command: ["run"]
          args: ["./VS-workspace/contract-tests-postman.yml","--reporters","cli","--reporter-cli-no-failures"]
      restartPolicy: Never
Your solution is good for just one text file, but in my case I generate many html pages with embedded images, and this isn’t enough.
ahh no, unless you can push and host these images somewhere online
You have many ways of "multiplexing" many outputs (human readable or binary) into one, that would let you return multiple files:
- as parsable text output in logs
- as base64 encoded entities inside a unique JSON file in logs
- as a base64 encoded tar/bzip compressed file in logs (see the sketch after this list)
- send your file to a storage (GCS, S3)
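A rough sketch of the tar-in-logs option (the BEGIN/END marker strings are invented for illustration, and report is the captured pod log file from earlier in the article):

# container side: emit the whole reports directory as one base64 blob between markers
echo '-----BEGIN REPORTS-----'
tar czf - reports/ | base64
echo '-----END REPORTS-----'

# agent side: cut the blob out of the captured logs, strip the markers, decode and unpack
sed -n '/-----BEGIN REPORTS-----/,/-----END REPORTS-----/p' report | sed '1d;$d' | base64 -d | tar xzf -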
Yes, just do whatever is best for your project 🙂