Thursday, 25 February 2016

Git Commands and Best Practices Cheat Sheet | zeroturnaround.com

Git Commands and Best Practices Cheat Sheet | zeroturnaround.com:




Python: Productionizing A Flask Application

Productionizing A Flask Application:

When I released bull as an open source project, it was in quite a state. Everything was in a single file, there was inline HTML (ew), and both tests and documentation were non-existent. Over the past week, I've spent some time "productionizing" bull, and recounting the steps I took will likely be helpful to others looking to deploy a Flask app to production. In this article, you'll learn how to organize a Flask application, add testing and documentation, and even how to enable authentication for "admin-only" content.

bull looks like a pile of...

The first git push of bull was a crazy mess, but it worked, and that's all I was concerned with at the time. I knew I would clean everything up "later", so I wasn't worried about the quality at that time. Besides, anyone capable of using bull in that state was certainly capable of cleaning it up a bit on their own, if they so desired.
To make it more accessible, however, it needed an overhaul. By focusing on a few key areas, I was able to make bull a solid, production-ready application. Those areas included:
  1. Project layout
  2. An "admin" work flow with restricted pages
  3. Automated testing
  4. Automated documentation generation
I'll discuss each of these sections in detail, as I'm convinced that, if you get these areas right, you're 90% of the way to having a production-ready application.
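The project-layout point can be sketched in a few lines of Flask. This is a minimal illustration of the application-factory pattern with a blueprint, not bull's actual module names or structure:

```python
# A minimal application-factory sketch; module and route names are
# illustrative, not taken from bull itself.
from flask import Flask, Blueprint

# In a real project this blueprint would live in its own module
# (e.g. myapp/views.py), with HTML moved out into template files.
bp = Blueprint("main", __name__)

@bp.route("/")
def index():
    return "Hello from a blueprint!"

def create_app(config=None):
    """Build the app from configuration instead of module-level globals,
    so tests can create isolated instances."""
    app = Flask(__name__)
    app.config.update(config or {})
    app.register_blueprint(bp)
    return app
```

The factory also makes the testing goal easier: each test can call `create_app()` with its own configuration instead of sharing one global application object.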




An Ansible Tutorial - Servers for Hackers

An Ansible Tutorial - Servers for Hackers:


Big Changes in Goldman’s Software Emerge From Small Containers - The CIO Report - WSJ

Big Changes in Goldman’s Software Emerge From Small Containers - The CIO Report - WSJ:

“I have lived through all sorts of tech transitions, and this is faster than anything we have seen,” Docker CEO Ben Golub said. Docker usage increased five times during the past year, and Docker images have been downloaded more than two billion times. Docker said it is the second-largest open-source infrastructure project.
“Think of docker as two things: a format for packaging apps and an engine to run those apps,” Forrester Research analyst Dave Bartoletti said in an email. “Docker is quickly becoming the de facto standard for packaging. You can RUN dockerized apps on lots of different engines. And then when you run them, you can pick from a wide range of orchestration tools, management tools, monitoring tools, etc.”
There was concern that Docker, the company, had too much power, as both the custodian of the open-source software and a venture that sold products and services tied to it, according to Mr. Duet. In response, Docker, the company, helped form the Open Container Initiative to govern the open-source software.
Goldman uses the Docker containerization service, as well as container orchestration tools from Docker and Alphabet Inc.’s Google, which makes an orchestration and cluster management tool called Kubernetes, comparable to Docker Swarm. “We do both, Docker and Kubernetes, for things like running and starting,” Mr. Duet said. “Kubernetes is arguably a better scheduler…if you are going to run 1,000 containers on 1,500 computers…It is designed much more for that. Docker’s own product is great if you want to run five containers on three machines. We have both problems.”
A Docker spokesman said that “even when people use Kubernetes, they are still using the Docker container service. Some, such as Goldman, continue to use Kubernetes for larger deployments and Docker Swarm for smaller, as Docker Swarm was only made generally available in Fall of 2015 and is still being evaluated for large-scale use.”
For Goldman, the shift is part of a larger evolution, as the bank takes on more of the characteristics of a tech company. Says Mr. Duet: “We look more like Google and Amazon in many ways.”




Wednesday, 24 February 2016

Why the Linux Mint hack is an indicator of a larger problem - TechRepublic

Why the Linux Mint hack is an indicator of a larger problem - TechRepublic:

While these attacks are regrettable, and part of an infrastructure problem rather than a problem with the distribution itself, it increasingly appears that the Linux Mint team, led by project leader Clement Lefebvre, is spread too thin when it comes to security.

The architectural design of Linux Mint inherits a great deal from its upstream sources Debian and Ubuntu (which is itself based upon Debian). Unfortunately, it lacks any sort of security advisories—Linux Mint evangelists insist that referring to the Ubuntu or Debian advisories is sufficient. Not every package in Linux Mint is available in Ubuntu or Debian, and this argument is further complicated by the fact that updates that work perfectly in Ubuntu or Debian are blacklisted by the Linux Mint team due to compatibility issues.

Linux Mint has the somewhat peculiar design decision of not updating the kernel through the graphical update manager. Users must run apt-get dist-upgrade in a terminal to receive kernel updates, whereas users of Ubuntu receive the same updates automatically. This leaves users vulnerable to potential root exploits and hardware issues. Additionally, there is an issue with shifting release cadences: with version 17, the underlying base moved from standard releases to Long-Term Support (LTS) releases of Ubuntu. Consequently, the packages incorporated are older, on average, than in previous releases, and if blacklisted are both old and insecure.


Lightbend's Lagom Will Run Java-Based Microservices at Scale - The New Stack

Lightbend's Lagom Will Run Java-Based Microservices at Scale - The New Stack:

Many microservice frameworks available today require developers to manually run scripts to start their services, or to bolt on automated infrastructure. Lagom's creators recognized that developers need to keep the tools they are used to, such as hot code reloads, without having to install a new toolset locally in order to test an application. Lagom allows developers to manage hundreds of services from a command line. From there, users can perform testing, update services via hot code fixes, and more.

Powering the Pieces

Lagom relies on a number of additional technologies, including Lightbend’s own Play Framework, Akka, Akka Streams, Akka Clustering, and ConductR for resilience and auto-scaling. By utilizing Netty, REST and WebSockets for communication with applications or devices that require access to its services, Lagom assures high performance. Finally, Lagom utilizes Cassandra as its persistence store.


Lightbend Reactive Platform Fast Data Architecture
For those looking to get a head start working with Lagom, be aware that the production release isn’t available until early March 2016. That being said, companies can check out Lightbend’s Reactive Platform to decompose any of their existing monolith applications into microservices, or to create new microservices.
“What we observed was that while many of these companies had unlimited engineering resources and talent, the far greater pool of Java developers needed a more opinionated framework — specifically for Java — that enabled the construction of microservices built to run and scale on the JVM,” said Hayes. This is why Lightbend opted to first create a Java API for Lagom, followed by a Scala API.
With the full release of Lagom, the Java community may have a powerful new tool in its arsenal for creating, managing, and scaling microservices to meet the rigorous demands of today’s applications.


Diagnosing Common Database Performance Hotspots in our Java Code

Diagnosing Common Database Performance Hotspots in our Java Code:

For this article I focus on the database as I am sure all of your apps are suffering from one of these access patterns! You can use pretty much any profiling, tracing or APM tool available in the market, but I am using the free Dynatrace Personal License. Java also comes with great tools such as Java Mission Control. Many frameworks that access data – such as Hibernate or Spring – also offer diagnostics options typically through logging output.
Using these tracing tools doesn’t require any code changes as they all leverage JVMTI (JVM Tooling Interface) to capture code-level information and even to trace calls across remoting tiers. This is very useful in distributed, (micro)service-oriented applications; just modify the startup command-line options for your JVM to get the tools loaded. Some tool vendors provide IDE integration where you can simply say “run with XYZ Profiling turned on”. I have a short YouTube tutorial demonstrating how to trace an app launched from Eclipse!

Identify Database Performance Hotspots

When it turns out that the database is the main contributor to the overall response time of requests to your application, be careful about blaming the database and finger pointing at the DBAs! There might be several reasons that would cause the database to be that busy:
  • Inefficient use of the database: wrong query design, poor application logic, incorrect configuration of data access framework
  • Poor design and data structure in the database: table relations, slow stored views, missing or wrong indexes, outdated table statistics
  • Inappropriate database configuration: memory, disk, tablespaces, connection pools
In this article I mainly want to focus on what you can do from the application side to minimize the time spent in the database:

Diagnose Bad Database Access Patterns

When diagnosing applications I have several database access patterns I always check for. I look at individual requests and put them into the following DB Problem Pattern categories:
  • Excessive SQLs: Executing a large number (>500) of different SQL statements
  • N+1 Query Problem: Executing the same SQL statement multiple times (>20)
  • Slow Single SQL Issue: Executing a single SQL that contributes > 80% of response time
  • Data-Driven Issue: Same request executes different SQL depending on input parameters
  • Database Heavy: Database Contribution Time is > 60% of overall response time
  • Unprepared Statements: Executing the same SQL without preparing the statement
  • Pool Exhaustion: Impacted by High Connection Acquisition Time (getConnection time > executeStatement)
  • Inefficient Pool Access: Excessive access to connection pool (calling getConnection > 50% of executeStatement count)
  • Overloaded Database Server: Database server is simply overloaded with too many requests from different apps
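The N+1 pattern from the list above is easy to reproduce. Here is a minimal sqlite3 sketch with an invented two-table schema (not from the article), showing the per-row query loop next to the single-join fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

def books_n_plus_one():
    """N+1 pattern: one query for the author list, then one query per author."""
    titles = []
    for (author_id,) in conn.execute("SELECT id FROM author"):
        rows = conn.execute(
            "SELECT title FROM book WHERE author_id = ?", (author_id,))
        titles.extend(t for (t,) in rows)
    return titles

def books_single_query():
    """Fix: a single join fetches everything in one round trip."""
    rows = conn.execute(
        "SELECT b.title FROM author a JOIN book b ON b.author_id = a.id")
    return [t for (t,) in rows]
```

With two authors the loop issues three statements where the join issues one; with thousands of rows the difference dominates response time, which is exactly what the tracing tools above surface.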


​Microservices 101: The good, the bad and the ugly | ZDNet

​Microservices 101: The good, the bad and the ugly | ZDNet:

"You don't want to break it down into too small a microservice. Some people are even talking about nano services, which is going a little too far. Don't go too far. Understand how you're going to measure success. That's critical in general but it is even more so for microservices," Little said.
Even where software is not working, avoid reimplementing everything from scratch because there may be elements that could be retained.
"If you've got something that doesn't work, you should still look to see if there's some of it that you could carve off and keep - particularly if you've had it deployed for 20 or 30 years and lots of people have used it, and particularly if it's implemented in COBOL, then it's battle-tested," Little said.
"It may not be all working for you today because the scale of your requests on Christmas day, for instance, are orders of magnitude more than they were 30 years ago. But that doesn't mean there aren't fundamental bits in that COBOL code that you could take and use again. You should, because if you're going to put bugs into a new system, you want them to be new bugs. You don't want to reimplement old bugs that you've fixed."


All Things Ansible: Automation Doesn’t Have to Be an “Either-or-Choice” | Voxxed

All Things Ansible: Automation Doesn’t Have to Be an “Either-or-Choice” | Voxxed:

Voxxed: For the uninitiated, how would you summarise Ansible?
Barr: Ansible is a generic IT automation tool that’s simple enough for anyone in IT to use, but extremely powerful at the same time. It allows teams to do more with less, and increase productivity by quickly automating the routine and mundane tasks that take up so much time. In short, it’s IT automation for everyone. It’s really as simple as that.
Ansible is simple, agentless and powerful. You won’t find an easier way to automate. Anyone on your team can use Ansible without extensive training. Plus, with Ansible Tower enterprises can control how and by whom Ansible automations are run in their environments, and retain the delegation and security visibility that are important for audits.
How does Ansible complement Red Hat’s current/developing range of offerings?
Because Ansible is the common language of IT organizations, there’s wide applicability of Ansible’s capabilities to Red Hat as a whole. Integrations with existing Red Hat offerings such as OpenShift, CloudForms and Satellite provide customers with a broader ability to automate their existing IT environments and ease the transition to a DevOps-enabled organisation. Additionally, we anticipate that Ansible will become increasingly common as an installer for other Red Hat products, much as it’s being used for OpenShift v3 today.
Are there any disadvantages to having immutable server architecture and design?
Like many things in IT, “it depends.” Thankfully, Ansible is perfectly applicable in both immutable and standard environments. It can be used to build and deploy immutable images, and, of course, to build, deploy, and manage traditional enterprise IT environments.
How does Ansible compare to similar offerings such as Puppet and Chef? How would you compare use case scenarios?
Puppet and Chef are great configuration managers. Ansible is an automation engine, which encompasses provisioning, application deployment, and workflow orchestration, as well as configuration management. On that note, many Ansible users automate the deployment and management of configurations that are defined in tools like Puppet or Chef; in short, it doesn’t have to be an either-or choice.


Microservices Ending up as a Distributed Monolith

Microservices Ending up as a Distributed Monolith:


At the recent Microservices Practitioner Summit, Facebook engineer Ben Christensen talked about an increasingly common anti-pattern: coupling distributed systems through binary dependencies.
Christensen describes shared libraries as those that are required to run services, in other words those which are collectively referred to as "the platform". Examples include Spring, Guava and libraries commonly used for routing and logging. In the end a system can depend on hundreds of libraries that are all needed to run the system. If a service cannot interact within a system unless all these libraries are available, Christensen calls this a distributed monolith. Essentially all you have done is spread a monolith over the network, paying all the costs of a distributed system while losing many of the benefits of the microservices architecture. Among the benefits lost is the polyglot characteristic, meaning you lose the possibility of services adopting the best technologies for their specific problems, as well as organizational and technical decoupling, which allows a team to evolve technically without first having to convince a central authority.
The Don’t Repeat Yourself (DRY) acronym is well-known to most, if not all, developers. With business logic in shared code the ability to deploy changes in isolation is reduced since it affects all services using that code. Christensen emphasizes that shared code is perfectly fine within a service boundary but when it leaks outside that’s a potential form of coupling. He refers to Sam Newman and his book Building Microservices where Newman states:
The evils of too much coupling between services are far worse than the problems caused by code duplication
The alternative for Christensen is contracts and protocols: services should hide all their implementation details, exposing only data contracts and network protocols. Without any dependency on the service implementation, a consumer can use any technology and language and evolve at its own pace, noting that this is how the Internet works. He notes, though, that there are legitimate needs for standardization in areas like logging, distributed tracing and routing, but this should be enabled using independent libraries that a consumer can choose whether or not to use.
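The contracts-over-shared-code idea can be sketched in a few lines. The consumer below depends only on a hypothetical JSON wire contract (the "order" fields are invented for illustration), never on the provider's implementation classes:

```python
import json

# Hypothetical wire contract for an "order" service. The consumer only
# knows this schema; it never imports the provider's code.
ORDER_CONTRACT = {"id": int, "sku": str, "quantity": int}

def parse_order(payload: str) -> dict:
    """Validate an incoming JSON document against the contract and
    raise ValueError on any field that is missing or mistyped."""
    order = json.loads(payload)
    for field, expected in ORDER_CONTRACT.items():
        if not isinstance(order.get(field), expected):
            raise ValueError(f"contract violation on field {field!r}")
    return order
```

Because the coupling point is the schema rather than a shared library, the provider can rewrite its internals, or even change language, without forcing consumers to upgrade anything.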

Monday, 22 February 2016

SAAS: The Twelve-Factor App

The Twelve-Factor App:

In the modern era, software is commonly delivered as a service: called web apps, or software-as-a-service. The twelve-factor app is a methodology for building software-as-a-service apps that:
  • Use declarative formats for setup automation, to minimize time and cost for new developers joining the project;
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments;
  • Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration;
  • Minimize divergence between development and production, enabling continuous deployment for maximum agility;
  • And can scale up without significant changes to tooling, architecture, or development practices.
The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc).
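One of the twelve factors, Config, says deploy-specific settings belong in the environment rather than in code. A small sketch of that idea follows; the variable names and defaults are illustrative, not prescribed by the methodology:

```python
import os

def load_config(env=os.environ):
    """Read deploy-specific settings from environment variables so the
    same codebase runs unchanged in development, staging and production.
    Variable names here are illustrative examples."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "debug": env.get("DEBUG", "false").lower() == "true",
        "port": int(env.get("PORT", "5000")),
    }
```

Passing the environment mapping in as a parameter also keeps the function trivially testable without mutating the real process environment.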



Sunday, 21 February 2016

REST and Web Services links


How to design a REST API



http://blog.octo.com/wp-content/uploads/2014/12/OCTO-Refcard_API_Design_EN_3.0.pdf

Best Practices for Designing a Pragmatic RESTful API

 Good article on #SOA and #REST alignment http://t.co/OJutT8AiKS

Web Application Description Language


REST API Documentation Using JAXRS-ANALYZER


SWAGGER The World's Most Popular Framework for APIs

The Swagger framework addresses server, client, documentation, and sandbox needs for RESTful APIs.
As a specification, it is language-agnostic. It is also extensible into new technologies and protocols beyond HTTP.
With Swagger's declarative resource specification, clients can understand and consume services without knowledge of server implementation or access to the server code.
The Swagger UI framework allows both developers and non-developers to interact with the API in a sandbox UI that gives clear insight into how the API responds to parameters and options. Swagger may utilize both JSON and XML.
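Because a Swagger description is just declarative data, a minimal Swagger 2.0 document can be sketched as a plain Python dict and serialized to JSON; the /ping endpoint below is invented for illustration and not part of any real API:

```python
import json

# A minimal Swagger 2.0 document as plain data. The endpoint and
# descriptions are made-up examples.
spec = {
    "swagger": "2.0",
    "info": {"title": "Demo API", "version": "1.0.0"},
    "paths": {
        "/ping": {
            "get": {
                "produces": ["application/json"],
                "responses": {
                    "200": {"description": "Service is alive"}
                },
            }
        }
    },
}

# Serialize to the JSON form that tools like Swagger UI consume.
print(json.dumps(spec, indent=2))
```

Tooling such as Swagger UI reads exactly this kind of document to render the interactive sandbox described above, with no access to the server's source code.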