Example – integration testing with Node.js

and then accesses that code. This can be invoked with the following command:

npm run integration



Invoking the tests would show you something akin to the following:

> kfd-nodejs@0.0.0 integration /Users/heckj/src/kfd-nodejs
> mocha e2e_tests --exit

  kubernetes
    cluster
      ✓ should have a healthy cluster
      ✓ should deploy the manifests (273ms)
      should repeat until the pods are ready
 - delay 5 seconds...
        ✓ check to see that all pods are reporting ready (5016ms)
      should interact with the deployed services
        ✓ should access by pod...

  4 passing (5s)



Node.js tests and dependencies with mocha and chai

The test code itself is at e2e_tests/integration_test.js, and I leverage mocha and chai to lay out the tests in a BDD-style structure. A convenient side effect of the BDD structure with mocha and chai is that tests can be wrapped by describe and it, which structure how the tests get run. Anything within a describe block doesn't have a guaranteed ordering, but you can nest describe blocks to get the structure you want.
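To illustrate the shape that mocha and chai give the tests, the following is a minimal sketch of a nested describe/it layout; the suite names here are placeholders, not from the actual test file:

const chai = require('chai')
  , expect = chai.expect;

// nesting describe blocks gives the report its structure; the it blocks
// inside them hold the actual assertions
describe('outer suite', function() {
  describe('inner suite', function() {
    it('runs a simple assertion', function() {
      expect(1 + 1).to.equal(2);
    });
  });
});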



Validating the cluster health

The JavaScript Kubernetes client is generated in much the same fashion as the Python client, from the OpenAPI definition and mapped to the releases of Kubernetes. You can find the client at https://github.com/kubernetes-client/javascript, although this repository doesn't have the same level of generated documentation as the Python client. Instead, the developers have gone to some length to reflect the types in TypeScript with the client, which results in editors and IDEs being able to do some level of automatic code completion as you are writing your tests:

const k8s = require('@kubernetes/client-node');
var chai = require('chai')
  , expect = chai.expect
  , should = chai.should();

var k8sApi = k8s.Config.defaultClient();

describe('kubernetes', function() {
  describe('cluster', function() {
    it('should have a healthy cluster', function() {
      return k8sApi.listComponentStatus()
      .then((res) => {
        // console.log(util.inspect(res.body));
        res.body.items.forEach(function(component) {
          // console.log(util.inspect(component));
          expect(component.conditions[0].type).to.equal("Healthy");
          expect(component.conditions[0].status).to.equal("True");
        })
      }, (err) => {
        expect(err).to.be.null;
      });
    }) // it
    // the remaining tests shown later in this section continue
    // inside these two describe blocks



The nesting of the code can make indenting and tracking at the right level quite tricky, so the test code leverages promises where it can to simplify the callback structures. The preceding example uses a Kubernetes client that automatically grabs credentials from the environment in which it's run, a feature of several of these clients, so be aware of it if you wish to arrange specific access.
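If you do need to point the client at a specific kubeconfig rather than relying on the ambient environment, newer releases of @kubernetes/client-node expose a KubeConfig class for this. The following is a minimal sketch assuming one of those newer releases (the file path is a placeholder); the examples in this chapter use the older Config.defaultClient() helper shown above:

const k8s = require('@kubernetes/client-node');

// load credentials explicitly instead of from the ambient environment
const kc = new k8s.KubeConfig();
kc.loadFromFile('/path/to/kubeconfig'); // placeholder path; or kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);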

Where the Python client had a method, list_component_status, the JavaScript pattern scrunches the names together with camelCase formatting, so the same call here is listComponentStatus. The result is returned in a promise, and we iterate through the various elements to verify that the cluster components are all reporting healthy.
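If the .then chains feel awkward, mocha also accepts an async function as the test body. The following is an equivalent sketch of the same health check using async/await:

// the same cluster health check, written with async/await instead of
// explicit promise chaining
it('should have a healthy cluster', async function() {
  const res = await k8sApi.listComponentStatus();
  res.body.items.forEach(function(component) {
    expect(component.conditions[0].type).to.equal("Healthy");
    expect(component.conditions[0].status).to.equal("True");
  });
});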

The example leaves in some commented-out code that inspects the objects that were returned. With little external documentation, I found it convenient to see what was returned while developing the tests, and the common trick is to use the util.inspect function and log the results to STDOUT:

const util = require('util');
console.log(util.inspect(res.body));



Deploying with kubectl

Following the Python example, I used kubectl on the command line to deploy the code, invoking it from the integration test:

it('should deploy the manifests', function() {
  // path and util are required at the top of the test file (both are Node.js core modules)
  var manifest_directory = path.normalize(path.join(path.dirname(__filename), '..', '/deploy'))
  const exec = util.promisify(require('child_process').exec);
  return exec('kubectl apply -f ' + manifest_directory)
  .then((res) => {
    // console.log(util.inspect(res));
    expect(res.stdout).to.not.be.null;
    expect(res.stderr).to.be.empty;
  }, (err) => {
    expect(err).to.be.null;
  })
})



This particular bit of code depends on where the test file sits relative to the deploy directory where the manifests are stored, and like the preceding example, it uses promises to chain validation onto the execution of the command.
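The test file doesn't include any teardown; if you want the suite to clean up after itself, a mocha after() hook that mirrors the apply with a kubectl delete is one option. A sketch, not part of the original tests:

// remove the deployed manifests once the suite finishes; mirrors the
// kubectl apply used in the deployment test above
after(function() {
  var manifest_directory = path.normalize(path.join(path.dirname(__filename), '..', '/deploy'));
  const exec = util.promisify(require('child_process').exec);
  return exec('kubectl delete -f ' + manifest_directory)
    .then((res) => {
      expect(res.stderr).to.be.empty;
    });
});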



Waiting for the pods to become available

The process of waiting and retrying was significantly trickier with Node.js, promises, and callbacks. In this case, I leveraged a capability of the mocha test library to allow a test to be retried, and manipulated the overall timeout for a section of the test structure, to get the same end result:

describe('should repeat until the pods are ready', function() {
  // Mocha supports a retry mechanism limited by number of retries...
  this.retries(30);
  // ...and a default timeout of 2,000ms that we can increase
  this.timeout(300000);

  it('check to see that all pods are reporting ready', function() {
    return new Promise(function(resolve, reject) {
      console.log(' - delay 5 seconds...')
      setTimeout(() => resolve(1), 5000);
    }).then(function(result) {
      return k8sApi.listNamespacedPod('default')
      .then((res) => {
        res.body.items.forEach(function(pod) {
          // _ is lodash, required at the top of the test file
          var readyCondition = _.filter(pod.status.conditions, { 'type': 'Ready' })
          //console.log("checking: "+pod.metadata.name+" ready: "+readyCondition[0].status);
          expect(readyCondition[0].status).to.equal('True')
        }) // pod forEach
      })
    })
  }) // it
}) // describe pods available



By returning promises in the tests, every one of the tests is already asynchronous, with the default timeout of two seconds that mocha provides. Within each describe block, you can tweak how mocha runs the tests: in this case, setting the overall timeout to five minutes and asserting that the test can be retried up to 30 times. To slow down the checking iterations, I also included a timeout promise that introduces a five-second delay before invoking the check of the cluster to get the pod health.
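The inline Promise wrapping setTimeout can also be pulled out into a small helper, which keeps the test body focused on the Kubernetes call. A sketch of that refactoring, not in the original file:

// wrap setTimeout in a promise so a delay can be dropped into a chain
function delay(ms) {
  return new Promise(function(resolve) {
    setTimeout(resolve, ms);
  });
}

// usage: wait five seconds, then list the pods in the default namespace
// return delay(5000).then(() => k8sApi.listNamespacedPod('default'));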



Interacting with the deployment

The code to interact with the deployment is simpler than the Python example, utilizing the Kubernetes client and the proxy:

describe('should interact with the deployed services', function() {
  // path to access the port through the kubectl proxy:
  // http://localhost:8001/api/v1/namespaces/default/services/nodejs-service:web/proxy/
  it('should access by pod...', function() {
    return k8sApi.proxyGETNamespacedServiceWithPath("nodejs-service:web", "default", "/")
    .then(function(res) {
      // console.log(util.inspect(res,{depth:1}));
      expect(res.body).to.not.be.null;
    });
  })
}) // interact with the deployed services



In this branch, I changed the code from running as a stateful set to running as a deployment, as getting proxy access to the headless endpoints proved complicated. Stateful sets can be accessed easily from within the cluster via DNS, but mapping them to an external endpoint didn't appear to be easily supported in the current client code.

Like the Python code, there's a matrix of calls for making REST-style requests through the client:

proxyGET
proxyDELETE
proxyHEAD
proxyOPTIONS
proxyPATCH
proxyPUT

And each is mapped to endpoints:



namespacedPod
namespacedPodWithPath
namespacedService
namespacedServiceWithPath



This gives you some flexibility in standard REST commands to send to either a pod directly or to a service endpoint. Like the Python code, the withPath option allows you to define the specific URI with which you're interacting on the pod or service.
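Combining a verb and an endpoint from the matrix above, a call against a single pod might look like the following sketch; the pod name and path here are hypothetical placeholders, so substitute values reported by kubectl get pods:

it('should access a specific pod by name', function() {
  // GET a URI on a single pod through the API server proxy; "nodejs-0" is
  // a hypothetical pod name, not one from the example deployment
  return k8sApi.proxyGETNamespacedPodWithPath("nodejs-0", "default", "/")
    .then(function(res) {
      expect(res.body).to.not.be.null;
    });
});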

If you're writing these tests in an editor such as Visual Studio Code, code completion will help provide some of the details that are otherwise missing from the documentation, listing the available methods as you type. When you choose a method, the TypeScript annotations also show you what options the JavaScript methods expect.



Continuous integration with Kubernetes

Once you have integration tests, getting something operational to validate those tests is very important. If you don't run the tests, they're effectively useless, so having a means of consistently invoking them while you're doing development is essential. It is fairly common to see continuous integration do a lot of this automated lifting for development.

There are a number of options available to development teams to help you with continuous integration, or even its more advanced cousin, continuous deployment. The following is an overview of the tools that were available at the time of writing and in use by developers working with their code in containers and/or Kubernetes:

Travis.CI: Travis.CI (https://travis-ci.org/) is a hosted continuous integration service, and it is quite popular as the company offers a free service with an easy means of plugging into GitHub for public and open source repositories. Quite a number of open source projects leverage Travis.CI to do basic testing validation.



Drone.IO: Drone.IO (https://drone.io/) is a hosted or local option for continuous integration that is also open source software itself, hosted at https://github.com/drone/drone. Drone has an extensive plugin library, including a plugin for Helm (https://github.com/ipedrazas/drone-helm), which has made it attractive to some development teams who are using Helm to deploy their software.



Gitlab: Gitlab (https://about.gitlab.com/) is an open source source-control solution that includes continuous integration. Like Drone, it can be leveraged in your local environment, or you can use the hosted version. Where the previous options were agnostic to the source control mechanism, Gitlab CI is tightly bound to Gitlab, effectively making it useful only if you're also willing to use Gitlab.



Jenkins: Jenkins (https://jenkins.io/) is the granddaddy of CI solutions, originally known as Hudson, and it is used extensively in a wide variety of environments. A hosted version of Jenkins is available through some providers, but it is primarily an open source solution that you're expected to deploy and manage yourself. It has an amazing (perhaps overwhelming) number of plugins and options available to it, notably a Kubernetes plugin (https://github.com/jenkinsci/kubernetes-plugin) that will let a Jenkins instance run its tests within a Kubernetes cluster.



Concourse: Concourse (https://concourse-ci.org/), like Jenkins, is an open source project rather than a hosted solution; it was built within the Cloud Foundry project and focuses on pipelines for deployment as a first-class concept (a relatively recent idea compared to older projects such as Jenkins). Like Drone, it is set up to be a continuous delivery pipeline and an integral part of your development process.


