Taking Docker to Production with Confidence | Voxxed:
"Many organizations developing software today use Docker in one way or another. If you go to any software development or DevOps conference and ask a big crowd of people “Who uses Docker?”, most people in the room will raise their hands. But if you now ask the crowd, “Who uses Docker in production?”, most hands will fall immediately. Why is it that such a popular technology, one that has enjoyed meteoric growth, is so widely used in the early phases of the development pipeline but rarely used in production?"
Be warned that this is mostly just a collection of links to articles and demos by smarter people than I. Areas of interest include Java, C++, Scala, Go, Rust, Python, Networking, Cloud, Containers, Machine Learning, the Web, Visualization, Linux, System Performance, Software Architecture, Microservices, Functional Programming....
Showing posts with label devops. Show all posts
Tuesday, 24 May 2016
Tuesday, 12 April 2016
Dan Luu: Notes on Google's Site Reliability Engineering book
Sunday, 27 March 2016
Saturday, 19 March 2016
Git & Scrum: Workflows that work - O'Reilly Media
I love working with teams of people to hash out a plan of action—the more sticky notes and whiteboards the better. Throughout the process, there may be a bit of arguing, and some compromises made, but eventually you get to a point where people can agree on a basic process. Everyone gets back to their desks, clear about the direction they need to go in and suddenly, one by one, people start asking, "But how do I start?" The more cues you can give your team to get working, the more they can focus on the hard bits. Version control should never be the hard part.
By the end of this chapter, you will be able to create step-by-step documentation covering:
- Basic workflow
- Integration branches
- Release schedules
- Post-launch hotfixes
This chapter is essentially a set of abstracted case studies on how I have effectively used Git while working in teams. You will notice my strong preference for Agile methodologies, in particular Scrum, in this chapter.
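The workflows listed above can be sketched as concrete Git commands. This is a minimal, hedged example of the integration-branch and post-launch-hotfix patterns, not the book's own recipe; the branch and tag names (develop, feature/login, hotfix/1.0.1) are hypothetical:

```shell
# Sketch: release tags, an integration branch, and a post-launch hotfix.
git init -q demo
cd demo
git config user.name "Demo" && git config user.email "demo@example.com"

# Release line: tag what shipped.
git commit -q --allow-empty -m "initial release"
git tag v1.0.0

# Integration branch: feature work merges here, not straight into the release line.
git checkout -q -b develop
git checkout -q -b feature/login
git commit -q --allow-empty -m "add login form"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login

# Post-launch hotfix: branch from the release tag, fix, tag, merge back.
git checkout -q -b hotfix/1.0.1 v1.0.0
git commit -q --allow-empty -m "fix: escape user input"
git tag v1.0.1
git checkout -q develop
git merge -q -m "merge hotfix/1.0.1" hotfix/1.0.1
git tag
```

The point of writing it down this explicitly is exactly the one the chapter makes: the more cues the team has, the less version control is the hard part.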
Wednesday, 24 February 2016
All Things Ansible: Automation Doesn’t Have to Be an “Either-or-Choice” | Voxxed
Voxxed: For the uninitiated, how would you summarise Ansible?
Barr: Ansible is a generic IT automation tool that’s simple enough for anyone in IT to use, but extremely powerful at the same time. It allows teams to do more with less, and increase productivity by quickly automating the routine and mundane tasks that take up so much time. In short, it’s IT automation for everyone. It’s really as simple as that.
Ansible is simple, agentless and powerful. You won’t find an easier way to automate. Anyone on your team can use Ansible without extensive training. Plus, with Ansible Tower, enterprises can control how and by whom Ansible automations are run in their environments, and retain the delegation and security visibility that are important for audits.
How does Ansible complement Red Hat’s current/developing range of offerings?
Because Ansible is the common language of IT organizations, Ansible’s capabilities have wide applicability across Red Hat as a whole. Integrations with existing Red Hat offerings such as OpenShift, CloudForms and Satellite give customers a broader ability to automate their existing IT environments and ease the transition to a DevOps-enabled organisation. Additionally, we anticipate that Ansible will become increasingly common as an installer for other Red Hat products, much as it’s being used for OpenShift v3 today.
Are there any disadvantages to having immutable server architecture and design?
Like many things in IT, “it depends.” Thankfully, Ansible is perfectly applicable in both immutable and standard environments. It can be used to build and deploy immutable images, and, of course, to build, deploy, and manage traditional enterprise IT environments.
How does Ansible compare to similar offerings such as Puppet and Chef? How would you compare use case scenarios?
Puppet and Chef are great configuration managers. Ansible is an automation engine, which encompasses provisioning, application deployment, and workflow orchestration, as well as configuration management. On that note, many Ansible users use it to automate the deployment and management of configurations that are defined in tools like Puppet or Chef — in short, it doesn’t have to be an either-or choice.
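To make the comparison concrete, here is a minimal sketch of an Ansible playbook of the kind the interview describes: agentless automation expressed as YAML tasks run over SSH. The host group, package and service names (web, nginx) are hypothetical examples, not anything from the interview:

```yaml
# site.yml - hypothetical minimal playbook: install and start a web server
- hosts: web
  become: yes
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

You would run this with `ansible-playbook -i inventory site.yml`. Because tasks describe desired state rather than steps, re-running the playbook is idempotent — which is what lets the same tool cover both configuration management and deployment orchestration.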
Monday, 11 January 2016
Microservices links
The Power, Patterns, and Pains of Microservices
A full-throated advocate of winning knows that the one constant in business is change. The winners in today's ecosystem learned this early and quickly.
One such example is Amazon. They realized early on that they were spending entirely too much time specifying and clarifying servers and infrastructure with operations instead of deploying working software. They collapsed the divide and created what we now know as Amazon Web Services (AWS). AWS provides a set of well-known primitives — a cloud — that any developer can use to deploy software faster. Indeed, the crux of the DevOps movement is about breaking down the invisible wall between what we knew as developers and operations to remove the cost of this back-and-forth.
Netflix came to the same realization. While their developers were using TDD and agile methodologies, work spent far too long in queue, flowing from isolated workstations—product management, UX, developers, QA, various admins, etc.—until finally it was deployed into production. While each workstation may have processed its work efficiently, the clock time associated with all the queueing meant that it could sometimes take weeks (or, gulp, more!) to get software into production.
In 2009, Netflix moved to what they described as a cloud-native architecture. They decomposed their applications and teams in terms of features; small (small enough to be fed with two pizza-boxes!) collocated teams of product managers, UX, developers, administrators, etc., tasked with delivering one feature or one independently useful product. Because each team delivered a set of free-standing services and applications, individual teams could iterate and deliver as their use cases and business drivers required, independently of each other. What were in-process method invocations became independently deployed network services.
Microservices, done correctly, hack Conway's law and refactor organizations to optimize for the continuous and safe delivery of small, independently useful software to customers. Independently deployed software can be more readily scaled at runtime. Independently deployed software formalizes service boundaries and domain models; domain models are forced to be internally consistent, something Dr. Eric Evans refers to as a bounded context in his epic tome, Domain-Driven Design.
Independent deployability implies agility but also implies complexity; as soon as network hops are involved you have a distributed systems problem!
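A minimal sketch of that distributed-systems cost: once an in-process call becomes a network hop, timeouts, retries, and fallbacks are your problem. In this hedged example, call_service is a hypothetical stand-in for a flaky remote endpoint that fails twice and then succeeds:

```shell
# Retry-with-fallback around a simulated flaky network call.
attempts=0
call_service() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # pretend endpoint: fails twice, succeeds on try 3
}

result="fallback-from-cache"
for try in 1 2 3 4 5; do
  if call_service; then
    result="ok"
    break
  fi
done
echo "$result"
```

In-process method invocations never needed this loop; independently deployed network services always do.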
Wednesday, 23 September 2015
Software Security links
Does DevOps hurt or help security?
While security tests should always be an integral part of the DevOps workflow, that isn’t a reality for many organizations. They’ve always struggled to properly integrate security, and those challenges certainly persist through transitions to DevOps. But Storms says that DevOps provides an opportunity to more tightly couple security into the workflow. “One of the best ways to bring DevOps and security together is to utilize the tools and the processes that DevOps really excels at and apply them to security,” he says — “things like automation, orchestration, and instrumentation. Let's use those tools to build these closed-loop security systems where everything's automated and everything's predictable. That’s a way we actually can fulfill the security requirements in an automated fashion with fewer resources.”
One success story that Storms cites is a healthcare company in the Northeast. “It has had serious compliance and security requirements so it performs continuous deployment. The company has extensively automated its security and compliance tests and the auditors are happy,” he says.
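The "closed-loop" idea from the quote above can be sketched as a pipeline gate: the same pipeline that deploys the code runs the security and compliance checks, and a failing check blocks the deploy. scan_dependencies and check_compliance are hypothetical stand-ins for real scanners and audit tooling, not named products:

```shell
# Pipeline security gate: deploy only if automated checks pass.
scan_dependencies() { return 0; }   # pretend scan: 0 = no known vulnerabilities
check_compliance()  { return 0; }   # pretend audit: 0 = policy satisfied

if scan_dependencies && check_compliance; then
  deploy_status="deploy: security gates passed"
else
  deploy_status="blocked: fix findings before deploying"
fi
echo "$deploy_status"
```

Because the gate runs on every deploy, the audit trail is generated continuously rather than assembled once a year — which is why the auditors in the story are happy.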
Wednesday, 10 June 2015
James Ward: Java EE: Comparing Application Deployment: 2005 vs. 2015
Java Doesn’t Suck – You’re Just Using it Wrong
Refactoring to Microservices
2005 = Multi-App Containers / App Servers / Monolithic Apps
2015 = Microservices / Docker Containers / Containerless Apps

Back in 2005 many of us worked on projects that resulted in a WAR file – a zip file containing a Java web application and its library dependencies. That web application would be deployed alongside other web applications into a single app server, sometimes called a “container” because it contained and ran one or more applications. The app server provided a bunch of common services to the web apps like an HTTP server, a service directory, and shared libraries. Unfortunately deploying multiple apps in a single container created high friction for scaling, deployment, and resource usage. App servers were supposed to isolate an app from its underlying system dependencies in order to avoid “it works on my machine” problems, but things often didn’t work that smoothly due to differing system dependencies and configuration that lived outside of the app server / container.

In 2015 apps are being deployed as self-contained units, meaning the app includes everything it needs to run on top of a standard set of system dependencies. The granularity of the self-contained unit differs depending on the deployment paradigm. In the Java / JVM world a “containerless” app is a zip file that includes everything the app needs on top of the JVM. Most modern JVM frameworks have switched to this containerless approach, including Play Framework, Dropwizard, and Spring Boot. A few years ago I wrote in more detail about how app servers are fading away in the move from monolithic middleware to microservices and cloud services.

For a more complete and portable self-contained unit, system-level container technologies like Docker and LXC bundle the app with its system dependencies. Instead of deploying a bunch of apps into a single container, a single app is added to a Docker image and deployed on one or more servers.
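The single-app-per-image idea can be sketched as a Dockerfile for a containerless JVM app. This is a hedged illustration, not anything from the original post; the base image and jar path (target/app.jar) are hypothetical:

```dockerfile
# Hypothetical image for a self-contained ("containerless") JVM app:
# the jar bundles the framework and HTTP server, the image bundles the JVM.
FROM eclipse-temurin:8-jre
COPY target/app.jar /opt/app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Note the contrast with 2005: there is no app server to deploy into — the image is the deployable unit, and running two apps means running two images.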
On Heroku a “Slug” file is similar to a Docker image.

Microservices play a role in this new landscape because deployment across microservices is independent, whereas with traditional app servers individual app deployment often involved restarting the whole server. This was one reason for the snail’s pace of deployment in enterprises – deployments were incredibly risky and had to be coordinated months in advance across numerous teams. Hot deployment was a promise that was never realized for production apps. Microservices enable individual teams to deploy at will and as often as they want. Microservices require the ability to quickly provision, deploy, and scale services which may have only a single responsibility. These requirements fit well with the infrastructure provided by containerless apps running on Docker(ish) Containers.

2005 = Manual Deployment
2015 = Continuous Delivery / Continuous Deployment

The app servers of 2005 that ran multiple monolithic apps, combined with manual load balancer configurations, made application upgrades risky and painful, so deployments were usually done sparingly in designated maintenance windows. Back then it was pretty much unheard of to have a deployment pipeline that fully automated delivery from an SCM to production.

Today Continuous Delivery and Continuous Deployment enable developers to get code to staging and production sometimes as often as tens or even hundreds of times a day. Scalable deployment pipelines range from the simple “git push heroku master” to a more risk-averse pipeline that includes pull requests, Continuous Integration, staging auto-deployment, manual promotion to production, and possibly Canary Releases & Feature Flags. These pipelines enable organizations to move fast and distribute risk across many small releases.

In order for Continuous Delivery to work well there are a few ancillary requirements:
- Release rollbacks must be instant and easy, because sometimes things are going to break and getting back to a working state must be quick and painless.
- Patch releases must be able to make it from SCM to production (through a continuous delivery pipeline) in minutes.
- Load balancers must be able to handle automatic switching between releases.
- Database schema changes should be decoupled from app releases, otherwise releases and rollbacks can be blocked.
- App-tier servers should be stateless, with state living in external data stores; otherwise state will be frequently lost and/or inconsistent.
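One way to satisfy the instant-rollback requirement above is to keep every release on disk and switch a single `current` symlink atomically. This is a hedged sketch with hypothetical directory names, not the post's own mechanism; platforms like Heroku achieve the same effect by re-pointing the router at a previous slug:

```shell
# Keep each release in its own directory; "current" is an atomic pointer.
mkdir -p releases/v1 releases/v2
echo "app v1" > releases/v1/VERSION
echo "app v2" > releases/v2/VERSION

ln -sfn releases/v1 current   # release v1
ln -sfn releases/v2 current   # release v2
ln -sfn releases/v1 current   # rollback: one symlink swap, no rebuild
cat current/VERSION
```

Because the old release is still on disk, rollback is a pointer change measured in milliseconds — which is what makes deploying tens of times a day tolerable.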
2005 = Persistent Servers / “Pray it never goes down”
2015 = Immutable Infrastructure / Ephemeral Servers

When a server crashed in 2005, stuff usually broke. Some used session replication and server affinity, but sessions were lost and bringing up new instances usually took quite a bit of manual work. Often changes were made to production systems via SSH, making it difficult to accurately reproduce a production environment. Logging was usually done to local disk, making it hard to see what was going on across servers and load balancers.

Servers in 2015 are disposable, immutable, and ephemeral, forcing us to plan for them to go down. Tools like Netflix’s Chaos Monkey randomly shut down servers to make sure we are preparing for crashes. Load balancers and management backplanes work together to start and stop new instances in an instant, enabling rapid scaling both up and down. Because servers are immutable we can no longer fix production issues by SSHing into a server, but environments are now easily reproducible. Logging services route STDOUT to an external service, enabling us to see the log stream in real time, across the whole system.

2005 = Ops Team
2015 = DevOps

In 2005 there was a team that would take your WAR file (or other deployable artifact) and be responsible for deploying it, managing it, and monitoring it. This was nice because developers didn’t have to wear pagers, but ultimately the Ops team often couldn’t do much if there was a production issue at 3am. The biggest downside of this was that Ops became all about risk mitigation, causing a tremendous slowdown in software delivery.

Modern technical organizations of all sizes are ditching the Ops velocity killer and making developers responsible for the stuff they put into production. Services like New Relic, VictorOps, and Slack help developers stay on top of their new operational responsibilities. The DevOps culture also directly incentivizes devs not to deploy things that will end up waking them or a team member up at 3am. A core indicator of a DevOps culture is whether a new team member can get code to production on their first day. Doing that one thing right means doing so many other things right, like:
- 3 Step Dev Setup: Provision the system, Checkout the code, and Run the App
- SCM / Team Review (e.g. GitHub Flow)
- Continuous Integration & Continuous Deployment / Delivery
- Monitoring and Notifications
DevOps can sound very scary to traditional enterprise developers like myself. But from experience I can attest that wearing a pager (metaphorically) and assuming the direct risk of my deployments has made me a much better developer. The quality of my code and my feelings of fulfillment have increased with my new level of ownership over what is in production.

Learn More

I’ve just touched the surface of many of the deployment changes over the past 10 years, but hopefully you now have a better understanding of some of the terminology you might be hearing at conferences and on blogs. For more details on these and related topics, check out The Twelve-Factor App and my blog Java Doesn’t Suck – You’re Just Using it Wrong. Let me know what you think!
Monday, 18 May 2015
Continuous Deployment, Integration links
The level of testing performed in CI can vary, but the key fundamental is that multiple integrations from different developers happen throughout the day. The biggest advantage of this approach is that any errors are identified early in the cycle, typically soon after the commit. Finding bugs closer to the commit makes them much easier to fix. Martin Fowler explains this well:

Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.
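That fundamental reduces to a simple loop: every integration runs the suite, so a red build is pinned to the commit that just landed. In this hedged sketch, run_tests is a hypothetical stand-in for a real suite and commit "c2" is the one that breaks the build:

```shell
# CI loop over a stream of integrations; stop at the first red build.
run_tests() {
  [ "$1" != "c2" ]   # pretend suite: fails only for commit c2
}

for commit in c1 c2 c3; do
  if run_tests "$commit"; then
    echo "$commit: green"
  else
    echo "$commit: red - fix now, while the change is fresh"
    break
  fi
done
```

The value is in the narrowing: with one integration per day you bisect a day's work to find the bug; with many per day, the failing commit is usually the last one.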