
Cloud Strategy, IT is business, Practical examples

Make your Jenkins as code and gain speed

November 8, 2020

TL;DR: an example of Jenkins as code. At the end of the article there is a step-by-step guide to configuring your Jenkins as code using Ansible and the Configuration as Code plug-in. The final OS image has Docker, Kubectl, Terraform, Liquibase, HAProxy (for TLS), Google SSO instructions, and Java installed for running the pipelines.

Why have our Jenkins coded?

One key benefit of having the infrastructure and OS level coded is the safety it gives to software administrators. Think with me: what happens if your Jenkins suddenly stops working? What if something happens and nobody can log into it anymore? If these questions give you chills, let's code our Jenkins!

What we will cover

  1. This article covers the tools presented in the image above:
    • Vagrant for local tests.
    • Packer for creating your OS image with your Jenkins ready to use.
    • Ansible for installing everything you need on your OS image (Jenkins, Kubectl, Terraform, etc.).
    • JCasC (Jenkins Configuration as Code) to configure your Jenkins after it is installed.
    • You can also find some useful content for the Terraform part here and here.

See all the code for this article here:

Special thanks to the many Ansible roles I was able to find on GitHub, and to geerlingguy for many of the playbooks we're using here.

1. How to run it

Running locally with Vagrant to test your configuration

The Vagrantfile is used for local tests only, and it is a pre-step before creating the image on your cloud with Packer.

Vagrant commands:

  1. Have (1) Vagrant installed (sudo apt install vagrant) and (2) Oracle's VirtualBox installed
  2. How to run: navigate to the root of this repo and run sudo vagrant up. This will create a virtual machine and install everything listed in the Vagrantfile. After everything is complete, Jenkins will be accessible from your host machine at localhost:5555 and localhost:6666
  3. How to SSH into the created machine: run sudo vagrant ssh
  4. How to destroy the VM: run sudo vagrant destroy
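For orientation, the port forwards above come from the Vagrantfile. A minimal sketch of what such a file can look like is below; the box name and guest ports are assumptions (Jenkins' default 8080 and HAProxy's 443), not necessarily the repo's actual values.

```ruby
# Illustrative Vagrantfile sketch, not the repo's exact file.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"                            # assumed base box
  config.vm.network "forwarded_port", guest: 8080, host: 5555  # Jenkins UI
  config.vm.network "forwarded_port", guest: 443,  host: 6666  # HAProxy (TLS)
  # Run the same Ansible playbooks Packer will use later
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "ansible_config/site.yml"
  end
end
```

Keeping the provisioning in Ansible means the local Vagrant test and the Packer image build exercise the same code path.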

Using Packer to build your AMI or Azure VM image

Packer is a tool for creating an OS image (a VM image on Azure or an AMI on AWS).

Running packer:

  1. packer build -var 'client_id=<client_id>' -var 'client_secret=<client_secret>' -var 'subscription_id=<subscription_id>' -var 'tenant_id=<tenant_id>' packer_config.json
  2. Once your AMI or Azure VM image is created, go to your cloud console and create a new machine pointing to the newly created image.

Check out the file packer_config.json to see how Packer will create your OS image, and the Azure instructions for it.

PS: this specific packer_config.json file is configured to create an image on Azure. You can change it to run on AWS if you need to.
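To make the variables in the build command above concrete, a minimal packer_config.json skeleton for an Azure image looks roughly like this. The resource group, image name, base image, and VM size below are illustrative placeholders, not the repo's actual values:

```json
{
  "variables": {
    "client_id": "",
    "client_secret": "",
    "subscription_id": "",
    "tenant_id": ""
  },
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "managed_image_name": "jenkins-image",
    "managed_image_resource_group_name": "my-resource-group",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "location": "East US",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "ansible_config/site.yml"
  }]
}
```

The Ansible provisioner is the piece that reuses the same playbooks as the local Vagrant run, so the image Packer bakes matches what was tested locally.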

2. Let’s configure our Jenkins as Code!

I’m listing here a few key configurations among the several you will find in each of these Ansible playbooks:

  1. Java version: on ansible_config/site.yml
  2. Liquibase version: on ansible_config/roles/ansible-role-liquibase/defaults/main.yml
  3. Docker edition and version
  4. Terraform version
  5. Kubectl packages (adding kubeadm or minikube as an example) on ansible_config/roles/ansible-role-kubectl/tasks/main.yml
  6. Jenkins configs (I will comment further)
  7. HAProxy for handling TLS (https) (will comment further)

3. Configuring your Jenkins

Jenkins pipelines and credentials files

This Jenkins is configured automatically using the Jenkins Configuration as Code plug-in. All the configuration is listed in the file jenkins.yaml in the project root. In that file, you can add your pipelines and the credentials for those pipelines to consume. Full documentation and possibilities can be found here:

Below is the example you will find on the main repo:

  1. You can define your credentials in item one. There are a few possible credential types; check them all in the plugin's docs
  2. Item two creates a folder
  3. Item three creates one example pipeline job, fetching it from a private GitLab repo that uses the credentials defined in item one
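The three items above can be sketched as a minimal jenkins.yaml. The credential id, folder name, and repo URL below are placeholders, and the job blocks assume the Job DSL plugin is installed alongside JCasC:

```yaml
# Hypothetical JCasC sketch; ids and URLs are placeholders.
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: gitlab-creds            # referenced by the pipeline job below
              username: ci-user
              password: "${GITLAB_TOKEN}" # resolved from the environment at startup
jobs:
  - script: >
      folder('examples')
  - script: >
      pipelineJob('examples/sample-pipeline') {
        definition {
          cpsScm {
            scm {
              git {
                remote {
                  url('https://gitlab.example.com/group/repo.git')
                  credentials('gitlab-creds')
                }
              }
            }
          }
        }
      }
```

Keeping secrets as environment references rather than literal values means the jenkins.yaml file itself can live safely in version control.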

Jenkins configuration

The plugins that this Jenkins will have installed can be found in ansible_config/roles/ansible-role-jenkins/defaults/main.yml. If you need to list your currently installed plugins, you can find a how-to here:

In the image below we can see:

  1. Your hostname: change it to a permanent hostname instead of localhost once you are configuring TLS
  2. The plugins list you want to have installed on your Jenkins

You can change the Jenkins default admin password in the file ansible_config/roles/ansible-role-jenkins/defaults/main.yml, attribute "jenkins_admin_password". Check the image below:

  1. You can change admin user and password
  2. Another configuration you will change when activating TLS (https)
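For readers without the image at hand, the relevant variables in that defaults file look roughly like this. Values are illustrative; the variable names follow the geerlingguy.jenkins role conventions the article builds on:

```yaml
# Illustrative excerpt of ansible_config/roles/ansible-role-jenkins/defaults/main.yml
jenkins_hostname: jenkins.example.com   # switch from localhost once TLS is configured
jenkins_http_port: 8080
jenkins_admin_username: admin
jenkins_admin_password: "change-me"     # override this before building a real image
jenkins_plugins:
  - configuration-as-code
  - workflow-aggregator
  - git
```

Since these are role defaults, they can also be overridden per environment from the playbook instead of editing the role itself.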

Jenkins’ configuration-as-code plug-in:

For JCasC to work properly, the file jenkins.yaml in the project root must be added to Jenkins' home (default: /var/lib/jenkins/). This example contains the keys to be used by the pipelines, as well as the pipelines themselves. There are a few more options in the JCasC docs.

Activating TLS (https) and Google SSO

  1. As shown in the "Jenkins configuration" step's images: go to ansible_config/roles/ansible-role-jenkins/defaults/main.yml, uncomment line 15 and change it to your final URL, then comment line 16
  2. Go to ansible_config/roles/ansible-role-haproxy/templates/haproxy.cfg and change line 33 to use your organization's final URL
  3. Rebuild your image with Packer (IMPORTANT: your new image won't work locally anymore because you changed the Jenkins configuration)
  4. Go to your cloud and deploy a new instance using the image you just created
3.1 – TLS: once you have your machine up and running, connect through SSH to perform the last manual steps, TLS first and Google SSO next:
  1. Generate the .pem certificate file with the command cat > fullkey.pem. Remember to remove the empty row left inside the generated fullkey.pem between the two certificates. To inspect the file, use cat fullkey.pem
  2. Move the generated file to the folder /home/ubuntu/jenkins/ on your running instance
  3. Restart HAProxy with sudo service haproxy restart
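Step 1 can be sketched as follows. HAProxy expects the certificate chain and the private key concatenated into a single PEM; the file contents below are placeholders standing in for your real certificate and key:

```shell
# Placeholder inputs: replace with your real certificate chain and private key.
printf -- '-----BEGIN CERTIFICATE-----\n...cert...\n-----END CERTIFICATE-----\n\n' > certificate.crt
printf -- '-----BEGIN PRIVATE KEY-----\n...key...\n-----END PRIVATE KEY-----\n' > private.key

# Concatenate chain + key into the single PEM HAProxy reads
cat certificate.crt private.key > fullkey.pem
# Remove the empty row left between the two blocks
sed -i '/^$/d' fullkey.pem
grep -c 'BEGIN' fullkey.pem   # should print 2: one certificate, one key
```

After copying fullkey.pem to the instance and restarting HAProxy, the proxy terminates TLS and forwards plain HTTP to Jenkins.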

Done! Your Jenkins is ready to run under https with valid certificates. Just point your DNS to the running machine and you’re done.

3.2 – Google SSO:

  1. Log in to Jenkins using regular admin credentials. Go to “Manage Jenkins” > “Global Security”. Under “Authentication” select “Login with Google” and fill in like below:
  • Client id = client_id generated on your G Suite account.
  • Client secret = client_secret
  • Google Apps Domain =

PS: More information on how to generate a client ID and client secret on the plugin’s page:

@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 8

August 16, 2020

The @MIT series is a group of articles written to describe my learning path through the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

The Cloud Native Computing Foundation as a good source for tracking new moves in the cloud industry

The content

  1. Cloud Native Computing Foundation: for an application to be considered truly cloud native, it needs to be:
    1. Built for fault tolerance
    2. Horizontally scalable
    3. Written in a manner that takes full advantage of what cloud providers have to offer.
  2. Cloud Native Applications prioritize the following:
    1. Speed
    2. Short cycles
    3. Microservices
    4. Loosely coupled
    5. DevOps
  3. Pet vs cattle way of handling our servers:

As a developer, you care about the application being hand-cared for: when it is sick, you take care of it, and if it dies, it is not easy to replace. It's like when you name a pet and take care of it; if one day it is missing, everyone will notice. In the case of cattle, however, you expect that there will always be sick and dead cows as part of daily business; in response, you build redundancies and fault tolerance into the system so that 'sick cows' do not affect your business. Basically, each server is identical, and if you need more, you create more, so that if any particular one becomes unavailable, no one will notice.

Cloud native action spectrum:

Cloud native roadmap of adoption (the majority of companies are on step 4):

There’s a landscape map listing tons of vendors on the cloud native foundation for each specific need:

Exercises and Assignments

  • Assignment: Create a presentation showing the push you are planning for your company. Think about steps, risks, mitigations, and how you plan to lead the journey. Think about the presentation as if you were presenting it to your CEO or a client.
@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 7

August 8, 2020

The @MIT series is a group of articles written to describe my learning path through the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

Several cases of Agile adoption in a set of big and mid-size companies, along with the key benefits, challenges, and outcomes of an Agile adoption

The content

Today, over 50% of the Fortune 500 companies from the year 2000 no longer exist. GE is stumbling. BlackBerry (RIM) is gone, and so is most of Nokia, once a $150 billion corporation. (…) John Boyd developed a methodology for operating in such situations, called the OODA Loop. The speed of executing the loop is the essential element of survival. It involves testing one's premises by actual Observation, Orienting your corporation with respect to the situation, then Deciding on a course of action, and then executing that plan by Acting. This is the meaning of being Agile. (…) Data is the new gold.

MIT – Cloud & DevOps course – 2020

Agile Adoption

Pros of agile software development:

  • Customers have frequent and early opportunities to see the work being delivered and to make decisions and changes throughout the development of the project.
  • The customer gains a strong sense of ownership by working extensively and directly with the project team throughout the project.
  • If time to market is a greater concern than releasing a full feature set at initial launch, Agile is best. It will quickly produce a basic version of working software that can be built upon in successive iterations.
  • Development is often more user-focused, likely a result of more and frequent direction from the customer

Cons of Agile Software Development:

  • Agile requires a high degree of customer involvement in the project. This may be a problem for customers who simply do not have the time or interest for this type of participation.
  • Agile works best when the development team is completely dedicated to the project.
  • The close working relationships in an Agile project are easiest to manage when the team members are located in the same physical space, which is not always possible.
  • The iterative nature of Agile development may lead to frequent refactoring if the full system scope is not considered in the initial architecture and design. Without this refactoring, the system can suffer from a reduction in overall quality. This becomes more pronounced in larger-scale implementations, or with systems that include a high level of integration.

Managing Complexity of Organizations and operations

As companies grow, their complexity grows, and they have to manage that complexity, otherwise it's going to turn into chaos. The problem is that they usually manage it by putting processes in place: you have to sign X docs, follow Y procedures, etc. In doing so we curtail employee freedom, and the side effect is that high-performing employees tend to leave the company.

Netflix’s solution to this scenario was different. They decided to let the smart workers manage the complexity instead of putting processes in place.

The problem with the traditional approach is that when the market shifts, we're unable to move fast. We have so many processes and such a fixed culture that our teams won't adapt, and innovative people won't stay in these environments.

That leaves us with three bad options for managing our growing organizations:

  1. Stay a small, creative company (less impact)
  2. Avoid rules (and suffer the chaos)
  3. Use processes (and cripple flexibility and the ability to thrive when the market changes)

Back to Netflix case: they believed that high performing people can contain the chaos. With the right people, instead of a culture of process adherence, you have a culture of creativity and self-discipline, freedom, and responsibility.

Comparing waterfall and agile software development model


Exercises and Assignments

  • Assignment: Write a summary about two articles suggested by the MIT that highlight the complexity of turning Agile that some companies faced and how they are thriving.


All the resources used to reach the results above are stored in this GitHub repository:

Cloud Strategy, IT is business

A Decision Matrix for Public Cloud Adoption

August 3, 2020

Every cloud journey has an important point: which cloud are we going to? This is a decision matrix recently developed as the first step for an important cloud journey about to start. There are three main areas in this article: the Goals of the journey, the Adopted Criteria, and the Final Decision.

1 – Goals for the cloud migration

  • Overall company speed – essential for keeping competitive time to market.
  • Teams autonomy – one more important move to keep time-to-market as fast as possible and foster DevOps adoption.
  • Cost savings – use the cloud benefit of the pay-as-you-go.
  • Security – improve security while handing over a few of the key concerns to the cloud provider.
  • Better infrastructure costs management
  • Keep auditing key aspects valid – eg.: PCI compliant.

2 – Criteria list

The following items are the ones relevant for this scenario's migration. In total, fourteen criteria were analyzed to reach a better overall understanding.

Five is the highest possible score; one is the lowest. Any value in between is valid.

Criterion | Maximum score
Feature count | 1
Oracle migration ease | 2
Available SDKs | 1
DDoS protection | 1
Overall security | 5
Machine Learning and Data Science features | 1
Community support | 3
Professionals availability | 3
Professionals cost | 5
Companies that already are in each cloud (benchmark) | 1
Internal team knowledge | 5
Auditing capabilities | 5
Cloud transition supporting products | 5
Dedicated links with specific protocol availability | 5
GDPR and LGPD compliance | 3
Cloud support | 3

2.1. Cost

The values were converted from US dollars to Brazilian reais at an exchange rate of BRL 5.25 to USD 1.00. RI = Reserved Instance; OD = on-demand instance.

Why this criterion is important: since the cloud move is a decision already taken, the goal of this criterion is to evaluate which cloud is the cheapest for this specific scenario's needs.

Cloud | Given score | Score comments
AWS | 5 | AWS has higher prices for smaller machines and lower prices for bigger machines
Azure | 5 | Azure has higher prices for bigger machines and lower prices for smaller machines
GCP | 3 | Some machine types are lacking

2.2. Feature count

Why this criterion is important: it signals the innovation appetite of each cloud provider.

Cloud | Services qty | Source | Given score | Score comments
AWS | 212 | TechRadar | 1 | The most mature cloud; its counting method is similar to Google's
Azure | 600 | Azure docs | 1 | Has a smaller overall feature count than AWS, but counts at a different granularity
GCP | 90 | GCP docs | 0 | Has more basic features and great benefits for companies born in the cloud

2.3. Oracle migration ease

Why this criterion is important: needless to say.

Cloud | Availability | Source | Given score | Score comments
AWS | Available | AWS Docs | 2 | There's a tool to migrate and convert the database
Azure | Not available | Azure Docs | 1 | There's a tool to migrate only
GCP | Not available | | 0 | There are no tools to help with this criterion

2.4. Available SDKs

Why this criterion is important: SDKs are important for applications under development.

Cloud | Availability | Source | Given score | Score comments
AWS | Available | Java | 1 | SDKs for the main needed languages are present
Azure | Available | Azure Docs | 1 | SDKs for the main needed languages are present
GCP | Available | Java | 1 | SDKs for the main needed languages are present

2.5. DDoS protection

Why this criterion is important: DDoS is a common attack against digital products. This is an important feature when thinking about the future.

Cloud | Availability | Source | Given score | Score comments
AWS | Available | AWS Shield | 1 | There is standard and advanced protection
Azure | Available | Azure Docs | 1 | There is standard and advanced protection
GCP | Available | GCP Armor | 1 | There is standard protection

2.6. Security overall

Why this criterion is important: there are some key security features for which my company is audited by third-party partners, and with which we must remain compliant.

Source: three main sources from security experts' blogs were used for this evaluation:

Sub-criterion | Cloud | Given score | Score comments
Overall security | AWS | 1.25 | AWS gets the highest score according to specialists due to the granularity it allows
Overall security | Azure | 1 |
Overall security | GCP | 1 |
Ease of configuring security | AWS | 0.5 |
Ease of configuring security | Azure | 0.75 |
Ease of configuring security | GCP | 1.25 | Google gets a higher score due to ease of configuration and abstraction capacity
Security investment | AWS | 1.25 | AWS is the provider that invests the most in security
Security investment | Azure | 1 |
Security investment | GCP | 1 |
Security community support | AWS | 1.25 | AWS has a bigger community
Security community support | Azure | 1 |
Security community support | GCP | 0.75 |

2.7. Machine Learning and Data Science features

Why this criterion is important: looking to the future, it's important to think about new services to be consumed. This criterion received a low maximum score because it is not critical at this stage of the cloud adoption.

Cloud | Availability | Source | Given score | Score comments
AWS | Available | Machine Learning as a Service comparison | 1 | They all have pros and cons and specific ML/DS initiatives
Azure | Available | | 1 | They all have pros and cons and specific ML/DS initiatives
GCP | Available | | 1 | They all have pros and cons and specific ML/DS initiatives

2.8. Community

Why this criterion is important: a strong community makes it easier to find solutions for the problems that will come in the future.

Cloud | Source | Given score | Score comments
AWS | Community comparison: AWS vs Azure vs Google | 3 | Biggest and most mature community
Azure | | 2 | More than 80% of the Fortune 500 uses it
GCP | | 1 | It's growing

2.9. Professionals availability

Why this criterion is important: the ability to hire qualified professionals for the specific cloud vendor is crucial for the application lifecycle. This research was performed on LinkedIn with the query “certified cloud architect <vendor>”.

Cloud | Source | Given score | Score comments
AWS | LinkedIn | 3 | 183k people found
Azure | LinkedIn | 2 | 90k people found
GCP | LinkedIn | 1 | 22k people found

2.10. Professionals cost

Why this criterion is important: as important as professionals' availability, the cost involved in hiring these professionals is also something to keep in mind.

Cloud | Source | Given score | Score comments
AWS | Glassdoor | 5 | No difference was found between professionals of each cloud
Azure | Computerworld (Portuguese only) | 5 | No difference was found between professionals of each cloud
GCP | | 5 | No difference was found between professionals of each cloud

2.11. Companies already present in each cloud

Why this criterion is important: looking at other companies helps us understand where the biggest and most innovative players are heading. And if they are heading there, there must be a good reason for it.

Cloud | Brands found | Source | Given score
AWS | Facebook, Amazon, Disney, Netflix, Twitter | Who is using AWS | 1
Azure | Pixar, Dell, BMW, Apple | Who is using Azure | 1
GCP | Spotify, Natura, SBT | | 1

2.12. Internal team knowledge

Why this criterion is important: the more internal knowledge there is of a cloud, the faster the adoption will reach a good level of maturity.

Cloud | Source | Given score | Score comments
AWS | Internal knowledge | 4 | Developers know AWS better
Azure | Internal knowledge | 4 | The infrastructure team knows Azure better
GCP | Internal knowledge | 0 | Nobody has ever worked with GCP

2.13. Auditing capabilities

Why this criterion is important: auditing capabilities are important to keep compliant with some existing contracts.

Cloud | Availability | Source | Given score
AWS | Available | AWS Config | 5
Azure | Available | Azure Docs | 5
GCP | Available | GCP Audit | 5

2.14. Cloud migration products

Why this criterion is important: since this is intended to be a company-wide adoption, some areas will have more or less maturity to migrate to a new paradigm of cloud native software development. The more the cloud provider can assist with simpler migration strategies, such as an "as is" move, the better the score.

Cloud | Source | Given score | Score comments
AWS | AWS CAF | 4 | There is more manual work to perform to achieve data sync
Azure | Azure Migration Journey | 5 | Since the company has a large number of Windows-based services, Microsoft's native tools have an advantage
GCP | | 3 | No resources were found to keep cloud and on-premises workloads working together

3 – The final result

Below is the final result of this comparison. I intend it to help your own cloud adoption decisions, but please do not stick to the criteria presented here. Always look at what makes sense for your company and business cases.

This adoption must also come hand in hand with an internal plan to improve people's knowledge of the selected cloud. The cloud brings several benefits compared to on-premises services, but, like everything in life, there are trade-offs, and new challenges will appear.

Criterion | AWS | Azure | GCP
Cost | 5 | 5 | 3
Feature count | 1 | 1 | 0
Oracle migration ease | 2 | 1 | 0
Available SDKs | 1 | 1 | 1
DDoS protection | 1 | 1 | 1
Security overall | 4 | 3 | 4
Machine Learning and Data Science features | 1 | 1 | 1
Community | 3 | 2 | 1
Professionals availability | 3 | 2 | 1
Professionals cost | 5 | 5 | 5
Companies already present in each cloud | 1 | 1 | 1
Internal team knowledge | 4 | 4 | 0
Auditing capabilities | 5 | 5 | 5
Cloud migration products | 4 | 5 | 3
Grand total | 40 | 37 | 26
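As a sanity check, the grand totals can be recomputed from the per-criterion scores given in sections 2.1 through 2.14 (including Cost from 2.1 and Community from 2.8). A quick awk sketch over a semicolon-separated listing of those scores:

```shell
# Per-criterion scores from sections 2.1-2.14 (criterion;AWS;Azure;GCP)
cat > scores.csv <<'EOF'
Cost;5;5;3
Feature count;1;1;0
Oracle migration ease;2;1;0
Available SDKs;1;1;1
DDoS protection;1;1;1
Security overall;4;3;4
ML and Data Science;1;1;1
Community;3;2;1
Professionals availability;3;2;1
Professionals cost;5;5;5
Companies benchmark;1;1;1
Internal team knowledge;4;4;0
Auditing capabilities;5;5;5
Cloud migration products;4;5;3
EOF
# Sum each provider's column
awk -F';' '{aws+=$2; az+=$3; gcp+=$4}
  END {printf "AWS=%d Azure=%d GCP=%d\n", aws, az, gcp}' scores.csv
# prints: AWS=40 Azure=37 GCP=26
```

The sums match the grand totals in the table above, which confirms no criterion was dropped along the way.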
@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 6

August 2, 2020

The @MIT series is a group of articles written to describe my learning path through the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

Introduces the serverless paradigm: its pros and cons, its limits, and the evolution that led to it

The content

  1. Serverless computing is a cloud-based execution model in which the management of the server architecture and the application development are distinctly divided. The connection is frictionless: the application does not need to know what it is being run or provisioned on, and the architecture does not need to know what is being run on it.
  2. The journey that led us to serverless (image below).
  3. A true microservice:
    1. Does not share data structure and database schema
    2. Does not share internal representation of objects
    3. You must be able to update it without notifying the team
  4. Serverless implications:
    1. Your functions become stateless: you have to assume your function will always run in a new, recently deployed container.
    2. Cold starts: since your function may run in a new container each time, you have to expect some latency while the container spins up. After the first execution the container is kept around for a while, and subsequent calls become "warm starts".
  5. Serverless pros:
    1. Cloud provider takes care of most back-end services
    2. Autoscaling of services
    3. Pay as you go and for what you use
    4. Many aspects of security provided by cloud provider
    5. Patching and library updates
    6. Software services, such as user identity, chatbots, storage, messaging, etc
    7. Shorter lead times
  6. Serverless cons:
    1. Managing state is difficult (leads to difficult debug)
    2. Complex message routing and event propagation (harder to track bugs)
    3. Higher latency for some services
    4. Vendor lock-in
  7. More on serverless:
From monolith to serverless journey

Exercises and Assignments

  • Assignment: Deploy an existing application to AWS using Lambda and DynamoDB to show a Pacman Game. A few screenshots below:


All the resources used to reach the results above are stored in this GitHub repository:

@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 5

July 22, 2020

The @MIT series is a group of articles written to describe my learning path through the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

Introduced the regular high-ground phases of a Digital Transformation (and cases for exploring them), which are:

1 – Initial Cloud Project

2 – Foundation

3 – Massive Migration

4 – Reinvention

The content

  1. Cloud computing services are divided into three main categories, plus a catch-all term:
    • IaaS – using the computational power of cloud computing data centers to run your previous on-prem workloads.
    • PaaS – using pre-built components to speed up your software development. Examples: Lambda, EKS, AKS, S3, etc.
    • SaaS – third-party applications allowing you to solve business problems. Examples: Salesforce, Gmail, etc.
    • XaaS – Anything as a service.
  2. An abstraction of Overall Phases of adoption:
    • 1 – Initial Cloud Project – Decide and execute the first project
    • 2 – Foundation – Building blocks: find the next steps to solve the pains of the organization. Provide an environment that makes going to the cloud more attractive to the business units. Examples: increase security, increase observability, reduce costs.
      • 1st good practice: During this phase, you can create a “Cloud Center of Excellence” committee to start creating tools to make the cloud shift more appealing to the rest of the organization.
      • 2nd good practice: Build reference architectures to guide people with less knowledge.
      • 3rd good practice: Teach best practices to other engaging business units.
    • 3 – Migration – Move massively to the cloud
      • One possible strategy is to move As Is and then modernize the application in the future (the step below).
    • 4 – Reinvention – modernize the apps (here you start converting private software to open source, Machine Learning, Data Science, etc).
    • See the picture below for an illustration of these 4 steps:
Phases of Digital Transformation, and time and value comparison
  3. The pace of adoption is always gradual. Even aggressive companies like Netflix took 7 years to become cloud-first.
  4. Microsoft case (highly recommended read):
    • Principles for their “shift left” on number and coverage of tests:
      • Tests should be written at the lowest level possible.
      • Write once, run anywhere including the production system.
      • The product is designed for testability.
      • Test code is product code, only reliable tests survive.
      • Testing infrastructure is a shared Service.
      • Test ownership follows product ownership.
    • See below two pictures of (1) how Microsoft evolved their testing process model and (2) the results they achieved.
1 – How Microsoft evolved its testing model
2 – Microsoft results
  5. Layers of test (based on the Microsoft example):
    • L0 – Broad class of rapid in-memory unit tests. An L0 test is a unit test to most people — that is, a test that depends on code in the assembly under test and nothing else.
    • L1 – An L1 test might require the assembly plus SQL or the file system.
    • L2 – Functional tests run against ‘testable’ service deployment. It is a functional test category that requires a service deployment but may have key service dependencies stubbed out in some way.
    • L3 – This is a restricted class of integration tests that run against production. They require full product deployment.
  6. General Electric's unsuccessful case:
    • GE failed because, despite having created a new unit for the digital transformation initiative, it inherited the culture of GE Software.
    • Why do digital transformation initiatives fail?
      • Lack of business focus.
      • Lack of alignment of plans and practices between the new and the legacy.
      • Not empowering developers.
      • Not experimenting.
  7. Netflix case:
    • Netflix introduced Chaos Engineering techniques: 
      • The best way to avoid failure is to fail constantly
      • The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most in the event of an unexpected outage
    • Simian Army is the evolution of the Chaos Monkey.

Exercises and Assignments

  • Assignment: Create a self case study containing the given structure:
    • Brief description of the company (attaching web link if possible)
    • Introduction
    • Description of the company’s challenge
    • Solution/Project stages/Implementation
    • Risks
    • Mitigations
    • Conclusion


All the resources used to reach the results above are stored in this GitHub repository:

@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 4

July 14, 2020

The @MIT series is a group of articles written to describe my learning path through the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

The DevOps revolution: importance of continuous feedback, data-driven decisions, pillars of DevOps and metrics

Main quote

Today, software development is no longer characterized by designers throwing their software 'over the wall' to testers, who repeat the process with software operations. These roles are now disappearing: today software engineers design, develop, test, and deploy their software by leveraging powerful Continuous Integration and Continuous Delivery (CI/CD) tools.

MIT – Cloud & DevOps – Continuous Transformation course – 2020

The content

  • DevOps key metrics to be watched:
    • Delivery lead time (measured in hours) – e.g.: how much time passes between the task being registered in the management tool and the change reaching production?
    • Deployment frequency – how many deploys to the Production environment we make weekly.
    • Time to restore service – how many minutes we take to put the service back to work when something breaks.
    • Change fail rate – how many of our deploys to the Production environment cause a failure.
  • Importance of information flow. Companies have to foster an environment of continuous feedback and empowerment. It allows everybody to solve problems and suggest innovation within their area of work.
  • Data-driven decision making
  • Pillars of well designed DevOps:
    • Security
    • Reliability
    • Performance Efficiency
    • Cost Optimization
    • Operational Excellence
  • A good example of a well-designed pipeline abstraction:
    • Version control – this is the step where we retrieve the most recent code from version control.
    • Build – building the optimized archive to be used to deploy.
    • Unit test – running automated unit tests (created by the same developer that created the feature).
    • Deploy – deploy to an instance or environment that allows it to receive a new load of tests.
    • Autotest – running other layers of the test (stress, chaos, end to end, etc)
    • Deploy to production – deploy to the final real environment.
    • Measure & Validate – save the metrics of that deploy.
  • There are companies that are up to 400 times faster at going from idea to production than traditional organizations.
  • Several analogies between Toyota Production system and cases (below) and DevOps:
    • Just in Time
    • Intelligent Automation
    • Continuous Improvement
    • Respect for People
  • Theory of Constraints:
    • You must focus on your constraint
    • It addresses the bottlenecks on your pipeline
  • Lean Engineering:
    • Identify the constraint
    • Exploit the constraint
    • Align and manage the systems around the constraint
    • Elevate the performance of the constraint
    • Repeat the process
  • DevOps is also about culture. Ron Westrum’s categories for culture evolution:
  • Typical CI Pipeline:

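The four delivery metrics listed at the top of this section can be computed from a simple deploy log. A minimal Node sketch (the log format and helper names are my own, not from the course):

```javascript
// Hypothetical deploy log: one record per deploy to production.
const deploys = [
  { date: '2020-07-01', failed: false },
  { date: '2020-07-02', failed: true },
  { date: '2020-07-05', failed: false },
  { date: '2020-07-06', failed: false },
];

// Change fail rate: share of production deploys that caused a failure.
function changeFailRate(log) {
  return log.filter((d) => d.failed).length / log.length;
}

// Deployment frequency: deploys per week over the observed period.
function deploysPerWeek(log, weeks) {
  return log.length / weeks;
}

console.log(changeFailRate(deploys)); // 0.25
console.log(deploysPerWeek(deploys, 1)); // 4
```

Delivery lead time and time to restore service follow the same idea: subtract timestamps recorded by the management and monitoring tools.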
Exercises and Assignments

  • Assignment: Creating a CircleCI automated pipeline for CI (Continuous Integration) to check out, build, install dependencies for a Node app, and run tests.
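A minimal CircleCI config along the lines of this assignment could look like the sketch below (the image tag and npm scripts are illustrative, not the actual assignment files):

```yaml
# .circleci/config.yml
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/node:14.21   # illustrative Node executor image
    steps:
      - checkout                 # check out the code
      - run: npm ci              # install dependencies
      - run: npm run build       # build the app
      - run: npm test            # run the tests
workflows:
  main:
    jobs:
      - build-and-test
```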


All the resources used to reach the results above are stored in this GitHub repository:

@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 3

July 7, 2020

@MIT series is a group of articles describing my learning path while attending the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

Docker, Containers Orchestration and Public Key Infrastructure (PKI)

The content

  • How the stack of software components used to run an application got more and more complex when compared to past years.
  • In past years, a huge number of web applications ran on top of LAMP (Linux, Apache, MySQL, and PHP/Perl). Nowadays we have several different approaches for each layer of this acronym.
  • Containers are the most recent evolution we have for running our apps. They followed these steps:
    1. The dark age: undergoing painful moments to get your app running on a new machine (probably spending more time making it run than actually writing it).
    2. Virtualization: using VMs to run our apps, with the trade-off of VMs’ slowness.
    3. Containers: a lightweight solution that lets us write our code on any operating system and then easily run it on another.
  • The difference between Virtual Machines and Docker:
    • Virtual Machines = Applications + Libraries + Operating System.
    • Docker = Applications + Libraries.
  • An analogy between how humanity solved the problem of transporting goods across the globe using (real, physical) containers and how software developers used the container abstraction to make our lives much easier when running an application for the first time.
  • Kubernetes is introduced, along with its benefits:
    • Less work for DevOps teams.
    • Easy to collect metrics.
    • Automation of several tasks like metrics collecting, scaling, monitoring, etc.
  • Public key infrastructure:
    • Ever more machine-to-machine interaction requires authentication methods more sophisticated than user and password.
    • Private and public key pairs are used to sign and encrypt/decrypt messages and communications.

Exercises and Assignments

  • Exercise 1: Running a simple node app with docker (building and running the app)
  • Assignment:
    1. Build a docker image
    2. Publish it to
    3. Run a local ghost blog using docker
    4. Publish a sample post 
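For step 1, a minimal Dockerfile might look like the sketch below (file names and port are illustrative, assuming a small Node app like the one from the exercise):

```dockerfile
# Illustrative Dockerfile for a small Node app
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

From there, `docker build`, `docker push`, and `docker run` cover building the image, publishing it, and running it locally.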


All the resources used to reach the results above are stored in this GitHub repository:

@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 2

July 1, 2020

@MIT series is a group of articles describing my learning path while attending the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

The course has two big parts: (1) technical base and (2) business applications and strategies.

The second module introduces the benefits, trade-offs, and new problems of developing applications at scale. It also covers the complexity of asynchronous development.

One more technical assignment is included; it is based on Node, focusing on JavaScript since it is the most used programming language nowadays.

The content

  1. To start, they approached the whole web concept (since its creation by Tim Berners-Lee).
  2. The Javascript creation (the most used language for web applications worldwide).
  3. How Google changed the game creating Chrome and the V8 engine.
  4. The creation of Node.JS.
  5. Implementing a simple web server at DigitalOcean.
  6. The evolution of complexity between the web’s first steps and where we are today: open source, the JSON format, IoT, and more recently Big Data and Machine Learning.
  7. The world of making computation in an asynchronous world/architecture.

Exercises and Assignments

  • Exercise 1: forking a project at and sending a pull request back.
  • Assignment 1: Running a simple node application locally (a PacMan game) to understand the communication between the client (browser) and server (Node.JS), and also retrieving metrics through an endpoint using JSON as a communication pattern.
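The metrics endpoint described in the assignment boils down to serializing an object as JSON in a request handler. A minimal sketch (not the actual assignment code; the metric names are made up):

```javascript
// Hypothetical in-memory metrics, updated while the game is played.
const metrics = { requests: 0, highScore: 0 };

// Request handler, kept separate from the server so it is easy to test.
function handleMetrics(req, res) {
  metrics.requests += 1;
  res.writeHead(200, { 'Content-Type': 'application/json' });
  // JSON is the communication pattern between browser and server.
  res.end(JSON.stringify(metrics));
}

// To serve it: require('http').createServer(handleMetrics).listen(3000);
```

The browser side then only needs `fetch('/metrics')` and `response.json()` to consume it.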


All the resources used to reach the results above are stored in this GitHub repository:

@MIT, Career, IT is business

@MIT: Cloud & DevOps – Part 1

June 27, 2020

@MIT series is a group of articles describing my learning path while attending the Cloud & DevOps: Continuous Transformation course at MIT.

This article at a glance – TL;DR

The first module brings everybody’s knowledge up to date on the evolution of the internet and software development practices.

The first module’s assignments are technically simple.

The content

Disclaimer: I won’t post the course content and deeper details here for obvious reasons. Everything mentioned here is my learning and key takeaways from each class/content.

The first module is very introductory. Concepts like the internet creation and explanations about how the information flow evolved from the first internet connection to the cloud are approached very briefly.

More than being introductory, it is very straightforward and hands-on (which I consider great). There are forum discussions for the participants to get to know each other, and an open Q&A about the exercises and assignments.

Exercises and Assignments

  • Exercise 1: examining a small JSON file at the Chrome console to understand the JSON pattern and Javascript key concepts.
  • Exercise 2: examining a big JSON file at the Chrome console to show how things can get complex eventually.
  • Exercise 3: running a Node simple app to analyze the BIG JSON file from exercise 2.
  • Assignment 1: Creating a simple static personal website at Amazon using S3 buckets. My result is here:
  • Assignment 2: Creating a simple static personal website at For this one, I went a bit further and added a small set of free static CSS and HTML pre-built to reach something better than just the “hello world”:
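The kind of JSON inspection done in the exercises comes down to two built-in functions; a quick sketch (the data below is made up, not the course's file):

```javascript
// A made-up JSON string, standing in for the files used in the exercises.
const raw = '{"site":"personal","pages":[{"title":"home"},{"title":"about"}]}';

// JSON.parse turns JSON text into a plain JavaScript object...
const data = JSON.parse(raw);
console.log(data.pages.length);   // 2
console.log(data.pages[1].title); // about

// ...and JSON.stringify goes the other way.
console.log(JSON.stringify(data.pages[0])); // {"title":"home"}
```

Pasting lines like these into the Chrome console is all the tooling the first exercises require.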


All the resources used to reach the results above are stored in this GitHub repository: