Google Testing Blog: Flaky Tests at Google and How We Mitigate Them: "At Google, we run a very large corpus of tests continuously to validate our code submissions. Everyone from developers to project managers rely on the results of these tests to make decisions about whether the system is ready for deployment or whether code changes are OK to submit. Productivity for developers at Google relies on the ability of the tests to find real problems with the code being changed or developed in a timely and reliable fashion.
Tests are run before submission (pre-submit testing) which gates submission and verifies that changes are acceptable, and again after submission (post-submit testing) to decide whether the project is ready to be released. In both cases, all of the tests for a particular project must report a passing result before submitting code or releasing a project.
Unfortunately, across our entire corpus of tests, we see a continual rate of about 1.5% of all test runs reporting a "flaky" result. We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code. There are many root causes why tests return flaky results, including concurrency, relying on non-deterministic or undefined behaviors, flaky third party code, infrastructure problems, etc. We have invested a lot of effort in removing flakiness from tests, but overall the insertion rate is about the same as the fix rate, meaning we are stuck with a certain rate of tests that provide value, but occasionally produce a flaky result. Almost 16% of our tests have some level of flakiness associated with them! This is a staggering number; it means that more than 1 in 7 of the tests written by our world-class engineers occasionally fail in a way not caused by changes to the code or tests."
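As a concrete, hypothetical illustration of two root causes the post names, concurrency and timing, here is a minimal JUnit 4 sketch; the class, the shared counter and the 50 ms deadline are invented for illustration, not taken from Google's corpus. Both tests pass on a fast, idle machine and can fail under load, so the same code produces both results:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class FlakyExamples {

    private int counter = 0;

    @Test
    public void flakyBecauseOfConcurrency() throws InterruptedException {
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) counter++; };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        // counter++ is not atomic, so increments can be lost: the result is
        // sometimes 200000 and sometimes less, with identical code each run.
        assertTrue(counter == 200_000);
    }

    @Test
    public void flakyBecauseOfTiming() throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(10);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // sleep() only guarantees a minimum; on a loaded CI machine the
        // thread may wake much later and cross the arbitrary 50 ms deadline.
        assertTrue(elapsedMs < 50);
    }
}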
Be warned that this is mostly just a collection of links to articles and demos by smarter people than I. Areas of interest include Java, C++, Scala, Go, Rust, Python, Networking, Cloud, Containers, Machine Learning, the Web, Visualization, Linux, System Performance, Software Architecture, Microservices, Functional Programming....
Saturday, 11 June 2016
Tuesday, 3 November 2015
Software Test links (Including Agile Testing)
Amazon: Agile Testing: A Practical Guide for Testers and Agile Teams
Brian Marick’s testing quadrant
Monday, 18 May 2015
Continuous Deployment, Integration links
The level of testing performed in CI can vary widely, but the key fundamental is that multiple integrations from different developers are done throughout the day. The biggest advantage of this approach is that any errors are identified early in the cycle, typically soon after the commit. Finding bugs closer to the commit makes them much easier to fix. This is explained well by Martin Fowler: "Continuous Integration doesn't get rid of bugs, but it does make them dramatically easier to find and remove."
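For a feel of what those fundamentals amount to, here is a deliberately tiny, hypothetical sketch of a CI loop in Java: watch for a new commit, run the test suite, report red or green. The repository path and the mvn test command are assumptions; real CI servers (Jenkins, GitLab CI, etc.) add queuing, isolated build agents and notifications:

import java.nio.file.Path;

public class TinyCiLoop {

    public static void main(String[] args) throws Exception {
        Path repoDir = Path.of(args.length > 0 ? args[0] : ".");
        String lastBuilt = "";
        while (true) {
            run(repoDir, "git", "pull", "--ff-only");         // fetch new commits
            String head = run(repoDir, "git", "rev-parse", "HEAD").trim();
            if (!head.equals(lastBuilt)) {
                System.out.println("New commit " + head + ", running tests...");
                int exit = new ProcessBuilder("mvn", "test")  // non-zero exit = red build
                        .directory(repoDir.toFile())
                        .inheritIO()
                        .start()
                        .waitFor();
                System.out.println(exit == 0 ? "PASS" : "FAIL");
                lastBuilt = head;
            }
            Thread.sleep(60_000);                             // poll once a minute
        }
    }

    private static String run(Path dir, String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).directory(dir.toFile()).start();
        String out = new String(p.getInputStream().readAllBytes());
        p.waitFor();
        return out;
    }
}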
Friday, 17 April 2015
General Software Development process links
Microsoft Research: Exploding Software-Engineering Myths
The logical assumption would be that more code coverage results in higher-quality code. But what Nagappan and his colleagues saw was that, contrary to what is taught in academia, higher code coverage was not the best measure of post-release failures in the field. Code coverage is not indicative of usage.
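A hypothetical example of "coverage is not indicative of usage": the JUnit test below executes every line of average(), so line coverage reports 100%, yet the integer-overflow bug still ships because no covered input exercises it:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CoverageIsNotUsage {

    static int average(int a, int b) {
        return (a + b) / 2;   // overflows when a + b exceeds Integer.MAX_VALUE
    }

    @Test
    public void fullLineCoverageButStillBuggy() {
        // This single assertion covers every line of average()...
        assertEquals(2, average(1, 3));
        // ...but average(Integer.MAX_VALUE, Integer.MAX_VALUE) returns -1,
        // the kind of post-release failure coverage alone cannot predict.
    }
}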
What the research team found was that the TDD teams produced code that was 60 to 90 percent better in terms of defect density than non-TDD teams. They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer.
The team observed a definite negative correlation: more assertions and code verifications means fewer bugs. Looking behind the straight statistical evidence, they also found a contextual variable: experience. Software engineers who were able to make productive use of assertions in their code base tended to be well-trained and experienced, a factor that contributed to the end results. These factors built an empirical body of knowledge that proved the utility of assertions.
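For clarity, the assertions the study measured are executable checks of internal invariants inside production code, not test assertions. A minimal invented sketch in Java (enabled at runtime with java -ea):

public class BoundedBuffer {

    private final int[] items;
    private int size = 0;

    BoundedBuffer(int capacity) {
        assert capacity > 0 : "capacity must be positive";
        items = new int[capacity];
    }

    void put(int value) {
        assert size < items.length : "put() on a full buffer";
        items[size++] = value;
        assert size >= 0 && size <= items.length;  // invariant checked on every mutation
    }

    int take() {
        assert size > 0 : "take() on an empty buffer";
        int value = items[--size];
        assert size >= 0 && size <= items.length;
        return value;
    }
}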
Thursday, 16 April 2015
zeroturnaround.com: Architecting Large Enterprise Java Projects with Markus Eisele
http://zeroturnaround.com/rebellabs/architecting-large-enterprise-java-projects-by-markus-eisele/
Developers built a lot of applications like that some time ago, and some still do today! These applications are still working and need maintenance, so we encounter them from time to time and call them legacy. They tend to have a release cycle of once or twice a year, depend on a proprietary application server environment and, most importantly, have a single database schema for all their data. Naturally, you cannot move very fast with such a beast on your shoulders, and you need a large team and a QA department just to maintain it.
The next step in architecture design was the Enterprise Service Bus age. Understanding that changes had to be incorporated into even the oldest, most legacy applications, we (Java developers) started breaking the huge apps into smaller ones. The biggest challenge was integrating them all together, so a service bus seemed the best solution.
The change wasn’t that big for the operations teams, as they still had everything centralized and under their control, although it was a much more flexible approach. However, the same centralization that adds value creates a raft of problems that the engineering team has to solve: most importantly, challenges with testing and a single point of failure (SPOF).
We’re now moving even further away from monolithic apps and towards the trending buzzword of Microservices.
Then there are a number of patterns you can use to organise the communication between your microservices, like the Aggregator or the Chain.
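To make the Aggregator concrete, here is a minimal sketch assuming two invented downstream services (catalog-svc and review-svc) and Java 11's built-in HttpClient; a real implementation would add timeouts, fallbacks and proper JSON handling:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ProductPageAggregator {

    private final HttpClient client = HttpClient.newHttpClient();

    // Fan out to two microservices, then combine the replies into one response.
    CompletableFuture<String> productPage(String productId) {
        CompletableFuture<String> details = fetch("http://catalog-svc/products/" + productId);
        CompletableFuture<String> reviews = fetch("http://review-svc/reviews/" + productId);
        return details.thenCombine(reviews,
                (d, r) -> "{\"details\":" + d + ",\"reviews\":" + r + "}");
    }

    private CompletableFuture<String> fetch(String url) {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                     .thenApply(HttpResponse::body);
    }
}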