
Saturday, 19 March 2016

Git & Scrum: Workflows that work - O'Reilly Media

Workflows that work - O'Reilly Media:


I love working with teams of people to hash out a plan of action—the more sticky notes and whiteboards the better. Throughout the process, there may be a bit of arguing, and some compromises made, but eventually you get to a point where people can agree on a basic process. Everyone gets back to their desks, clear about the direction they need to go in, and suddenly, one by one, people start asking, "But how do I start?" The more cues you can give your team to get working, the more they can focus on the hard bits. Version control should never be the hard part.

By the end of this chapter, you will be able to create step-by-step documentation covering:


  • Basic workflow
  • Integration branches
  • Release schedules
  • Post-launch hotfixes

This chapter is essentially a set of abstracted case studies on how I have effectively used Git while working in teams. You will notice my strong preference for Agile methodologies, in particular Scrum, in this chapter.

Monday, 27 July 2015

Docker and Virtualisation links


Docker for Java Developers: How to sandbox your app in a clean environment

Continuous Delivery with Docker Containers and Java EE

A Practical Introduction to Docker Container Terminology

ANNOUNCING DOCKER TOOLBOX

Docker cheat sheet

Why I love Docker

In my view Docker will enable the IT industry to adopt DevOps and microservices, not by being a tool, but by being a technology that fundamentally changes how we manage IT services. The practical difference is that we can now lifecycle-manage everything in isolation: if one application needs a new version of a JVM or a PHP library, it can be upgraded safely without affecting other containerized applications. This pushes the organization to embrace DevOps culture and processes.
Docker also enables us to create small independent deployments, which is the foundation of microservices architecture. This does, however, add another dimension to the problem: we now have to manage service dependencies. I'll talk more about microservices and how to manage remote dependencies in later posts.
So to sum up: I love Docker not only because it's easy to use and gives me high density and isolation; the main reason is that it's going to change how we operate and manage applications.

Wednesday, 10 June 2015

James Ward: Java EE: Comparing Application Deployment: 2005 vs. 2015

2005 = Multi-App Containers / App Servers / Monolithic Apps
2015 = Microservices / Docker Containers / Containerless Apps
Back in 2005 many of us worked on projects that resulted in a WAR file – a zip file containing a Java web application and its library dependencies. That web application would be deployed alongside other web applications into a single app server, sometimes called a “container” because it contained and ran one or more applications. The app server provided a bunch of common services to the web apps like an HTTP server, a service directory, and shared libraries. Unfortunately, deploying multiple apps in a single container created high friction for scaling, deployment, and resource usage. App servers were supposed to isolate an app from its underlying system dependencies in order to avoid “it works on my machine” problems, but things often didn’t work that smoothly due to differing system dependencies and configuration that lived outside of the app server / container.
In 2015 apps are being deployed as self-contained units, meaning the app includes everything it needs to run on top of a standard set of system dependencies. The granularity of the self-contained unit differs depending on the deployment paradigm. In the Java / JVM world a “containerless” app is a zip file that includes everything the app needs on top of the JVM. Most modern JVM frameworks have switched to this containerless approach including Play Framework, Dropwizard, and Spring Boot. A few years ago I wrote in more detail about how app servers are fading away in the move from monolithic middleware to microservices and cloud services.
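To make the containerless idea concrete, here is a minimal sketch in the Spring Boot style mentioned above. It assumes the spring-boot-starter-web dependency is on the classpath; the class and endpoint names are made up for illustration:

    // A containerless app: the HTTP server is embedded in the application,
    // so the build produces one runnable jar instead of a WAR that gets
    // deployed into a shared app server.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class HelloApp {

        @GetMapping("/hello")
        public String hello() {
            return "Hello from a self-contained app";
        }

        public static void main(String[] args) {
            // Starts the embedded HTTP server; no external container needed.
            SpringApplication.run(HelloApp.class, args);
        }
    }

Running java -jar on the packaged artifact is the whole deployment story; everything above the JVM ships inside the jar.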
For a more complete and portable self-contained unit, system-level container technologies like Docker and LXC bundle the app with its system dependencies. Instead of deploying a bunch of apps into a single container, a single app is added to a Docker image and deployed on one or more servers. On Heroku a “Slug” file is similar to a Docker image.
Microservices play a role in this new landscape because deployment across microservices is independent, whereas with traditional app servers individual app deployment often involved restarting the whole server. This was one reason for the snail’s pace of deployment in enterprises – deployments were incredibly risky and had to be coordinated months in advance across numerous teams. Hot deployment was a promise that was never realized for production apps. Microservices enable individual teams to deploy at will and as often as they want. Microservices require the ability to quickly provision, deploy, and scale services which may have only a single responsibility. These requirements fit well with the infrastructure provided by containerless apps running on Docker(ish) Containers.
2005 = Manual Deployment
2015 = Continuous Delivery / Continuous Deployment
The app servers of 2005 that ran multiple monolithic apps combined with manual load balancer configurations made application upgrades risky and painful so deployments were usually done sparingly in designated maintenance windows. Back then it was pretty much unheard of to have a deployment pipeline that fully automated delivery from an SCM to production.
Today Continuous Delivery and Continuous Deployment enable developers to get code to staging and production sometimes as often as tens or even hundreds of times a day. Scalable deployment pipelines range from the simple “git push heroku master” to a more risk-averse pipeline that includes pull requests, Continuous Integration, staging auto-deployment, manual promotion to production, and possibly Canary Releases & Feature Flags. These pipelines enable organizations to move fast and distribute risk across many small releases.
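A Feature Flag can be as simple as a conditional around the new code path. Here is a hedged sketch in Java; the flag name and the environment-variable lookup are illustrative conventions, not any particular feature-flag product:

    import java.util.Optional;

    // Minimal feature flag: the new code path only runs when the flag is
    // switched on, so the change can ship dark and be enabled gradually.
    public class CheckoutService {

        // Hypothetical flag name; real systems often read flags from a
        // dedicated service or config store instead of the environment.
        private static final String FLAG = "NEW_CHECKOUT_ENABLED";

        static boolean newCheckoutEnabled() {
            return Optional.ofNullable(System.getenv(FLAG))
                           .map(Boolean::parseBoolean)
                           .orElse(false);
        }

        public String checkout() {
            return newCheckoutEnabled()
                    ? "new checkout flow"   // flagged / canary path
                    : "old checkout flow";  // default path
        }
    }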
In order for Continuous Delivery to work well there are a few ancillary requirements:
  • Release rollbacks must be instant and easy, because sometimes things will break and getting back to a working state must be quick and painless.
  • Patch releases must be able to make it from SCM to production (through a continuous delivery pipeline) in minutes.
  • Load balancers must be able to handle automatic switching between releases.
  • Database schema changes should be decoupled from app releases otherwise releases and rollbacks can be blocked.
  • App-tier servers should be stateless, with state living in external data stores; otherwise state will frequently be lost and/or inconsistent (see the sketch after this list).
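The last point, keeping the app tier stateless, is sketched below. This is a hedged example using the Jedis Redis client; the host name, key scheme, and TTL are made-up illustrations:

    import redis.clients.jedis.Jedis;

    // Session state lives in an external store (Redis here), so any app
    // instance can serve any request and instances can be replaced freely.
    public class SessionStore {

        private final Jedis redis = new Jedis("redis.internal", 6379);

        public void put(String sessionId, String value) {
            // 30-minute TTL; the instance running this code can be killed
            // and restarted without losing the session.
            redis.setex("session:" + sessionId, 1800, value);
        }

        public String get(String sessionId) {
            return redis.get("session:" + sessionId);
        }
    }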
2005 = Persistent Servers / “Pray it never goes down”
2015 = Immutable Infrastructure / Ephemeral Servers
When a server crashed in 2005, stuff usually broke. Some used session replication and server affinity, but sessions were still lost and bringing up new instances usually took quite a bit of manual work. Often changes were made to production systems via SSH, making it difficult to accurately reproduce a production environment. Logging was usually done to local disk, making it hard to see what was going on across servers and load balancers.
Servers in 2015 are disposable, immutable, and ephemeral, forcing us to plan for them to go down. Tools like Netflix’s Chaos Monkey randomly shut down servers to make sure we are prepared for crashes. Load balancers and management backplanes work together to start and stop new instances in an instant, enabling rapid scaling both up and down. Because servers are immutable, we can no longer fix production issues by SSHing into them, but environments are now easily reproducible. Logging services route STDOUT to an external service, enabling us to see the log stream in real time, across the whole system.
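The “route STDOUT to an external service” point is the twelve-factor logging style: the app writes its log stream to STDOUT and the environment ships it somewhere useful. A small java.util.logging sketch of that idea, not tied to any particular logging service:

    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.LogRecord;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    import java.util.logging.StreamHandler;

    // Write all logs to STDOUT so the container runtime or a log router
    // can collect the stream; nothing is written to local files.
    public class StdoutLogging {
        public static void main(String[] args) {
            Logger root = Logger.getLogger("");
            for (Handler h : root.getHandlers()) {
                root.removeHandler(h); // drop the default STDERR handler
            }
            // StreamHandler pointed at STDOUT, flushing after each record.
            Handler stdout = new StreamHandler(System.out, new SimpleFormatter()) {
                @Override
                public synchronized void publish(LogRecord record) {
                    super.publish(record);
                    flush();
                }
            };
            stdout.setLevel(Level.ALL);
            root.addHandler(stdout);

            Logger.getLogger("app").info("service started"); // goes to STDOUT
        }
    }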
2005 = Ops Team
2015 = DevOps
In 2005 there was a team that would take your WAR file (or other deployable artifact) and be responsible for deploying it, managing it, and monitoring it. This was nice because developers didn’t have to wear pagers, but ultimately the Ops team often couldn’t do much if there was a production issue at 3am. The biggest downside of this was that Ops became all about risk mitigation, causing a tremendous slowdown in software delivery.
Modern technical organizations of all sizes are ditching the Ops velocity killer and making developers responsible for the stuff they put into production. Services like New Relic, VictorOps, and Slack help developers stay on top of their new operational responsibilities. The DevOps culture also directly incentivizes devs not to deploy things that will end up waking them or a team member up at 3am. A core indicator of a DevOps culture is whether a new team member can get code to production on their first day. Doing that one thing right means doing so many other things right, like:
  • 3 Step Dev Setup: Provision the system, Checkout the code, and Run the App
  • SCM / Team Review (e.g. GitHub Flow)
  • Continuous Integration & Continuous Deployment / Delivery
  • Monitoring and Notifications (a minimal health-check endpoint is sketched below)
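One concrete building block for the monitoring point above is a health-check endpoint that a load balancer or monitoring service can poll. A minimal sketch using the JDK’s built-in HTTP server; the path and port are arbitrary choices:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // GET /health returning 200 means "this instance is alive"; monitoring
    // and load balancing tools poll it to decide where to send traffic.
    public class HealthCheck {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/health", exchange -> {
                byte[] body = "OK".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }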
DevOps can sound very scary to traditional enterprise developers like myself. But from experience I can attest that wearing a pager (metaphorically) and assuming the direct risk of my deployments has made me a much better developer. The quality of my code and my feelings of fulfillment have increased with my new level of ownership over what is in production.
Learn More
I’ve just touched the surface of many of the deployment changes over the past 10 years but hopefully you now have a better understanding of some of the terminology you might be hearing at conferences and on blogs. For more details on these and related topics, check out The Twelve-Factor App and my blog Java Doesn’t Suck – You’re Just Using it Wrong. Let me know what you think!

Java Doesn’t Suck – You’re Just Using it Wrong

Refactoring to Microservices

Monday, 18 May 2015

Continuous Deployment, Integration links


The level of testing performed in CI can vary completely, but the key fundamental is that multiple integrations from different developers happen throughout the day. The biggest advantage of following this approach is that any errors are identified early in the cycle, typically soon after the commit. Finding bugs closer to the commit makes them much easier to fix. This is explained well by Martin Fowler:
Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.
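Catching bugs soon after the commit presumes a fast automated test suite that the CI server runs on every integration. A trivial JUnit 4 example of the kind of test involved; the class under test is hypothetical and included only so the sketch is self-contained:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // A fast unit test run on every commit: if a change breaks the price
    // calculation, the build fails minutes after the push, not weeks later.
    public class PriceCalculatorTest {

        @Test
        public void appliesTenPercentDiscount() {
            assertEquals(90.0, PriceCalculator.discounted(100.0, 0.10), 0.0001);
        }
    }

    // Hypothetical class under test, included to make the sketch runnable.
    class PriceCalculator {
        static double discounted(double price, double rate) {
            return price * (1.0 - rate);
        }
    }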

Thursday, 16 April 2015

zeroturnaround.com: Architecting Large Enterprise Java Projects with Markus Eisele

http://zeroturnaround.com/rebellabs/architecting-large-enterprise-java-projects-by-markus-eisele/

Developers built a lot of applications like that some time ago, and some are even built that way today! These applications are still working and need maintenance. So we see them sometimes and call them legacy. They tend to have a release cycle of once or twice a year, depend on a proprietary application server environment and, most importantly, have a single database schema for all data. Naturally, you cannot move very fast with such a beast on your shoulders and must have a large team and QA department even just to maintain it.

The next step in architecture design was the Enterprise Service Bus age. Understanding that changes have to be incorporated into even the oldest and most legacy applications, we (Java developers) started breaking the huge apps into smaller ones. The biggest challenge was to integrate it all together, so the service bus seemed the best solution.
The change wasn’t that big for the operations teams, as they still had everything under their control and centralized, although it was a much more flexible approach. However, the same centralization that adds value created a raft of problems that the engineering team had to solve: most importantly, challenges with testing and the single point of failure (SPOF).
We’re now moving even further away from monolithic apps and towards the trending buzzword of Microservices.

Then there are a number of patterns you can use to organise the communication between your microservices, like the Aggregator or the Chain.
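As a rough illustration of the Aggregator idea, here is a sketch in which one service calls two downstream microservices and merges their replies; the URLs are placeholders, and it uses the JDK 11 HttpClient:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Aggregator: fan out to downstream microservices and combine the
    // results into one response for the client.
    public class OrderAggregator {

        private final HttpClient client = HttpClient.newHttpClient();

        public String orderSummary(String orderId) throws Exception {
            String order = fetch("http://orders.internal/orders/" + orderId);
            String customer = fetch("http://customers.internal/for-order/" + orderId);
            // A production version would issue these calls in parallel
            // (e.g. with sendAsync) and handle per-call failures.
            return "{\"order\":" + order + ",\"customer\":" + customer + "}";
        }

        private String fetch(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
    }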

Palladium: Predictive Analytics, Machine Learning framework

Palladium provides the means to easily set up predictive analytics services as web services. It is a pluggable framework for developing real-world machine learning solutions. It provides generic implementations for things commonly needed in machine learning, such as dataset loading, model training with parameter search, a web service, and persistence capabilities, allowing you to concentrate on the core task of developing an accurate machine learning model. Having a well-tested core framework that is used for a number of different services can reduce development and maintenance costs, since the different services are based on the same code base and identical processes. Palladium has a web service overhead of only a few milliseconds, making it possible to set up services with low response times...

blog.xebialabs.com: Before You Go Over the Container Cliff with Docker, Mesos etc: Points to Consider

I’m personally really excited about the potential of microservices and containers, and typically recommend pretty emphatically that our users should research them. But I also add that doing research is absolutely not the same thing as deciding up front to go for full-scale adoption.
Given the incredibly rapid pace of change in this area, it’s essential to develop a clear understanding of the capabilities of the technology in your environment before making any decisions: production is not usually a good arena for R&D.
Based on what we have learned from our users and partners that have been undertaking such research, our own experiences (we use containers quite a lot internally), and lessons from companies such as eBay and Google, here are six important criteria to bear in mind when deciding whether to move from research to adoption...