All posts by Mehdi El-Filahi

Security

For security matters, Ansible has a feature called Ansible Vault to store sensitive data.

Since Ansible is an infrastructure-as-code technology, you need to store the code in a version control service such as CVS, SVN, Git, TFS, …

So, to prevent anyone from reading sensitive data, use Ansible Vault.

Secure the content of playbooks.

  • Create and keep sensitive data encrypted with AES:
    • Run the command line ansible-vault create secret-info.yml
      • Enter a vault password twice
      • Enter your sensitive data with the text editor
  • Edit the vault:
    • ansible-vault edit secret-info.yml
    • Edit your sensitive data with the text editor
  • Use the vault:
    • Add vars_files into your playbook
      • vars_files:
      •   - secret-info.yml
    • ansible-playbook playbook.yml --ask-vault-pass
      • It will prompt for the vault password
      • If you want to automate the runs, it is a good idea to request the password from a secured tool such as HashiCorp Vault.
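The vault workflow above can be sketched as a minimal playbook (the playbook content and the db_user variable are illustrative; secret-info.yml is the encrypted file created earlier):

```yaml
# playbook.yml -- loads encrypted variables from the vault file
- hosts: all
  vars_files:
    - secret-info.yml   # created with: ansible-vault create secret-info.yml
  tasks:
    - name: Use a secret variable defined inside the vault
      debug:
        msg: "Connecting as {{ db_user }}"
```

Run it with ansible-playbook playbook.yml --ask-vault-pass; Ansible decrypts secret-info.yml in memory after prompting for the vault password.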

Ansible Galaxy

Ansible Galaxy is a hub for sharing your playbook projects in public repositories.

https://galaxy.ansible.com/

Each share is categorized into sections:

Roles allow you to create portable and shareable Ansible projects.

To create a new Galaxy project, run -> ansible-galaxy init PATH

Create a yml file at the root; let's call it test.yml

tasks:
  - name: use role
    include_role:
      name: PATH

Then run it with: ansible-playbook PATH/test.yml

Service handlers and error handlers

A task can notify when it has made a change, and a handler is then triggered.

Example:

tasks:
  - name: change_port
    lineinfile: path=/etc/httpd/http.conf regexp='^port' line='port=8080'
    notify: Restart_Apache

handlers:
  - name: Restart_Apache
    service: name=apache2 state=restarted

Error management of tasks:

  • To ignore a change status -> changed_when: false
    • For instance with uname or a service status check
  • Force a change status if a word is found -> changed_when: "'SUCCESS' in cmd_output.stdout"
  • Force an error status if a word is found -> failed_when: "'FAIL' in cmd_output.stdout"
  • Ignore an error status -> ignore_errors: yes
    • Easy to test with the command -> /bin/false
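The error-handling keywords above can be combined on a single registered task; a minimal sketch (the command paths are illustrative):

```yaml
- hosts: all
  tasks:
    - name: Run a command and judge the result ourselves
      command: /usr/local/bin/deploy.sh   # illustrative command
      register: cmd_output
      changed_when: "'SUCCESS' in cmd_output.stdout"
      failed_when: "'FAIL' in cmd_output.stdout"

    - name: A task whose failure should not stop the play
      command: /bin/false
      ignore_errors: yes
```

The register keyword captures the command output so that changed_when and failed_when can inspect it.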

Variables

Ansible variables are wrapped between double curly braces.

See all the available default variables -> ansible hostname -m setup

  • Variables can be registered statically into the yml file:
    • vars:
      • example_var: "This is a variable example"
      • my_deb_file: zabbix-release_4.4-1+bionic_all.deb
  • Variables can come as parameters from the command line:
    • ansible-playbook xxxxx.yml --extra-vars "variable=value"
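Putting the two together, a short sketch of declaring variables and referencing them with double curly braces (the variable values are taken from the examples above):

```yaml
- hosts: all
  vars:
    example_var: "This is a variable example"
    my_deb_file: zabbix-release_4.4-1+bionic_all.deb
  tasks:
    - name: Reference a variable with double curly braces
      debug:
        msg: "Installing {{ my_deb_file }}"
```

A value passed with --extra-vars "my_deb_file=other.deb" would override the static definition, since command-line variables have the highest precedence.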

Inventory and Playbooks

In Ansible, the inventory contains the list of all the available servers that Ansible can reach and manage by sending commands to execute.

Ansible has a search order to find the available inventory and configuration:

  • (If the ANSIBLE_CONFIG environment variable is set, it is checked before all of these)
  • First it checks $PWD/ansible.cfg
  • Then it checks $HOME/.ansible.cfg
  • Then it finally checks /etc/ansible/ansible.cfg
  • When the inventory is filled in, you can check your list with: ansible all --list-hosts
  • It is possible to generate a dynamic inventory from a shell script: ansible group1 -i dynamic_inv.sh --list-hosts
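A static inventory is a simple INI file; here is a minimal sketch (hostnames and group names are illustrative):

```ini
# /etc/ansible/hosts -- two illustrative groups
[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com ansible_user=admin
```

Group names such as webservers can then be used as host patterns, e.g. ansible webservers --list-hosts.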

Playbooks are YAML files that allow importing other playbooks and running tasks.

An important point is to know and understand the list of available modules (one module is one task call).

By default, tasks run synchronously, but it is possible to run them asynchronously.

If you need to run with an elevated user:

  • Add the -K parameter on the command line for the password prompt of the elevated user
  • Add these lines in the code:
    • become: true
    • become_user: root

To check a playbook without applying changes, simply add the parameter --check to the command line.

To run a specific task, tag the task and run ansible-playbook with the parameter --tags TAG_NAME.
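Tagging looks like the sketch below (task names, package name and tag names are illustrative):

```yaml
- hosts: all
  become: true
  tasks:
    - name: Install the package
      apt:
        name: nginx
        state: present
      tags: install

    - name: Restart the service
      service:
        name: nginx
        state: restarted
      tags: restart
```

Running ansible-playbook playbook.yml --tags install -K executes only the task tagged install.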

Request data by prompt:

vars_prompt:
  - name: "variable"
    prompt: "Please enter data:"

This can be combined with when: variable == 'yes' for conditional task execution.
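Combining vars_prompt with a when condition can be sketched like this (the variable name and command are illustrative):

```yaml
- hosts: all
  vars_prompt:
    - name: "do_cleanup"
      prompt: "Run the cleanup tasks? (yes/no)"
      private: no   # show the typed answer (prompts hide input by default)
  tasks:
    - name: Only runs when the user answered yes
      command: /usr/local/bin/cleanup.sh   # illustrative command
      when: do_cleanup == 'yes'
```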

To run tasks asynchronously:
Use the keyword async: XX (where XX is a number of seconds).

When using async, you can specify the poll keyword, which defines how often Ansible checks whether the async task is finished: poll: XX (where XX is a number of seconds).

tasks:
  - name: Sleep
    command: sleep 65
    async: 55
    poll: 10

The task has 55 seconds to finish, and every 10 seconds Ansible checks whether it has completed. Since the sleep needs 65 seconds, Ansible will kill the task after 55 seconds and mark it as failed.

If poll is equal to 0, Ansible will not check at all: it starts the task and moves on without waiting for it to finish (fire and forget).
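With poll: 0 you can still check on the background job later using the async_status module; a sketch (task names are illustrative):

```yaml
- hosts: all
  tasks:
    - name: Fire and forget
      command: sleep 65
      async: 120   # allow up to 120 seconds
      poll: 0      # do not wait, move on immediately
      register: sleep_job

    - name: Check on the background task later
      async_status:
        jid: "{{ sleep_job.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 30
      delay: 5
```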

Here is a small playbook example:

  • It runs as root
  • It will use the Ansible apt module if the OS family is Debian
  • It will use the Ansible yum module if the OS family is RedHat
  • It will use the Ansible shell module to run a zypper update if the OS family is Suse
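That description could look like the following sketch (the exact module arguments are illustrative assumptions, not the author's original playbook):

```yaml
- hosts: all
  become: true
  become_user: root
  tasks:
    - name: Update packages on Debian-family hosts
      apt:
        upgrade: dist
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Update packages on RedHat-family hosts
      yum:
        name: '*'
        state: latest
      when: ansible_os_family == "RedHat"

    - name: Update packages on Suse hosts with zypper
      shell: zypper --non-interactive update
      when: ansible_os_family == "Suse"
```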

What are Docker compose, registries and repositories

In this post I will describe what Docker Compose, registries and repositories are:

  • Docker Compose:
    • Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.
  • Repositories:
    • A Docker repository is where you can store one or more versions of a specific Docker image. An image can have one or more versions (tags).
  • Registries:
    • A registry stores a collection of repositories. It is reachable through an IP address and a port, which give access to the repositories.
    • The most common command line to create your own registry is: docker run -d -p 5000:5000 --name registry registry:2
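A minimal docker-compose.yml sketch of the Compose idea described above (service names, image and ports are illustrative):

```yaml
# docker-compose.yml -- two illustrative services
version: "3"
services:
  web:
    build: .            # build from the Dockerfile in the current folder
    ports:
      - "8080:80"       # host port 8080 -> container port 80
  db:
    image: postgres:12
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:              # named volume for persistent data
```

A single docker-compose up -d then creates and starts both services.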

Docker command lines

In this post I'll show the most common command lines to help you work with Docker:

• docker ps : displays the list of running containers (use docker ps -a to see all containers, running and stopped).

• docker build : builds an image from a Dockerfile.

• docker images : displays the list of images available on your host.

• docker run : runs a container from a specific image (if the image is not yet present on your host, it will be downloaded).

• docker start : starts a container present on your host.

• docker stop : stops a container present on your host.

• docker rm : removes a container present on your host.

• docker rmi : removes an image present on your host.

• docker cp : similar to an scp command line, it copies files from your host to a container or from a container to your host.

• docker commit : allows you to create a new image from container changes.

• docker inspect : helps you get metadata from different Docker components such as networks, volumes, …

• docker network : manages network components (useful to isolate a container on a network, or to bridge a connection with other containers or with your host).

Useful commands:

Stop all the containers at once: docker stop $(docker ps -a -q)

Delete all the containers at once: docker rm $(docker ps -a -q)

Delete all the images at once (you must delete the containers first): docker rmi -f $(docker images -q)

Docker file options

When you create a container, it is possible to customize your image based on another one, so each image can be customized by your own Dockerfile, where you can:

• Define which image and version to use

• Run command lines

• Expose ports

• Copy files from the host

• Add environment variables

• Define volumes

To do so, there are different options available:

• FROM: base image (downloads the image from the registry)

• RUN: run one or many command lines at build time

• COPY: copy a file into the image

• ADD: same as COPY, but can also extract compressed files and fetch URLs

• WORKDIR: working directory (starting-point folder)

• ENTRYPOINT: command to run at the start of the container

• CMD: same as ENTRYPOINT (but overridable)

• ENV: environment variable

• LABEL: metadata

• HEALTHCHECK: check if the application is running

• STOPSIGNAL: change the stop behaviour

• VOLUME: link a host path to a container internal path

• EXPOSE: document the exposed ports
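A small Dockerfile sketch combining several of the options above (base image, file names and port are illustrative):

```dockerfile
# Base image pulled from the registry
FROM ubuntu:20.04
# Metadata
LABEL maintainer="you@example.com"
# Environment variable
ENV APP_HOME=/opt/app
# Starting-point folder
WORKDIR $APP_HOME
# Copy a file from the host into the image
COPY app.sh .
# Run a command at build time
RUN chmod +x app.sh
# Document the listening port
EXPOSE 8080
# Mount point for persistent data
VOLUME /opt/app/data
# Default command, overridable at run time
CMD ["./app.sh"]
```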

CICD with IIB / ACE / CP4I

This post is intended to help you understand how an automated build, deployment and test pipeline can be built, based on different schemas and videos.

To provide a better understanding, here is a video showing a demo in French (an English version will be made later).

This schema explains the steps to manage the CICD build and deploy parts for IIB, ACE, CP4I.

  • 1. The Toolkit already contains Maven.
    • You must install the ACE Maven plugin into your m2 repository and also upload it to Nexus.
    • Convert your application or policy to a Maven project.
    • Insert the necessary information, such as the ACE server binary location and whether ESQL compilation is enabled.
    • Build your application; this will create a compressed file.
  • 2. Push your code to Git.
  • 3. Go to Jenkins and create a Maven project item.
    • Enter the Git repository and enable the Maven deploy.
    • Provide your settings.xml with the necessary information.
    • Run your job, which will upload the binary to Nexus.
  • 4. Create a new Jenkins item.
    • Provide the GroupId and ArtifactId.
    • Point the binary search to Nexus.
    • The settings.xml should contain the ACE integration server URL and the credentials of the Web Admin API.
    • When running the job, select the version to deploy.
    • The job will deploy the version through the Web Admin API of the integration server.

This schema explains the steps to manage the CICD test part for IIB, ACE, CP4I.

  • 1. To test a message posted to an MQ service, create a SoapUI HermesJMS project first and test it manually, then create a SoapUI Maven project from it.
  • 1. To test a message posted to a SOAP service, create a SoapUI project first and test it manually, then create a SoapUI Maven project from it.
  • 1. To test a message posted to a REST service, create a SoapUI project first and test it manually, then create a SoapUI Maven project from it; OR create a Postman collection first and test it manually, then create a Newman Maven project with your Postman collection.
  • 1. !!! Don't forget to ensure that you have an assert for each response you will receive from your services. !!!
  • 1. I suggest creating a Maven profile for each environment (for instance DEV/TEST, QA, UAT/ACCEPTANCE, PRODUCTION).
  • 2. Push your testing project to Git.
  • 3. Create a Jenkins Maven item:
    • Point to your Git repository.
    • Add each available environment profile as a combo box.
  • 4. Run the test and wait for the job to end in a success or failure state.