Be warned that this is mostly just a collection of links to articles and demos by smarter people than I. Areas of interest include Java, C++, Scala, Go, Rust, Python, Networking, Cloud, Containers, Machine Learning, the Web, Visualization, Linux, System Performance, Software Architecture, Microservices, Functional Programming....
Wednesday, 23 September 2015
Software Security links
Does DevOps hurt or help security?
While security processes and tests should always be an integral part of the DevOps workflow, that isn’t a reality for many organizations. They’ve always struggled to properly integrate security, and those challenges certainly persist through transitions to DevOps. But Storms says that DevOps provides an opportunity to more tightly couple security into the workflow. “One of the best ways to bring DevOps and security together is to utilize the tools and the processes that DevOps really excels at and apply them to security,” he says — “things like automation, orchestration, and instrumentation. Let's use those tools to build these closed-loop security systems where everything's automated and everything's predictable. That’s a way we actually can fulfill the security requirements in an automated fashion with fewer resources.”
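To make the “closed-loop” idea a bit more concrete, here is a minimal sketch (mine, not from the article) of the kind of automated security gate a CI/CD pipeline could run on every commit. The specific scanners invoked (pip-audit, bandit) are just illustrative examples; any tool that exits non-zero on findings would slot in the same way.

```python
#!/usr/bin/env python3
"""Illustrative sketch of an automated security gate for a CI pipeline.

The tool names below are examples only; substitute whatever scanners
your organization actually uses.
"""

import subprocess
import sys

# Each check is a command that exits non-zero when it finds a problem.
CHECKS = [
    ["pip-audit"],             # audit dependencies for known CVEs
    ["bandit", "-r", "src/"],  # static analysis of our own code
]

def main() -> int:
    failed = []
    for cmd in CHECKS:
        print(f"running security check: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed.append(cmd[0])
    if failed:
        # Fail the pipeline: nothing deploys until the findings are fixed.
        print(f"security gate FAILED: {', '.join(failed)}", file=sys.stderr)
        return 1
    print("security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the gate runs automatically on every change, security becomes a repeatable, predictable pipeline stage rather than a manual review step.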
One success story that Storms cites is a healthcare company in the Northeast. “It has serious compliance and security requirements, and yet it performs continuous deployment. The company has extensively automated its security and compliance tests, and the auditors are happy,” he says.
Friday, 18 September 2015
Hadoop, Spark, Storm (and ecosystem) links
.@mpron SQL is roaring back, but it’s important to understand the CAP theorem and when availability is preferable over ACID #DZBigData
— Dean Wampler (@deanwampler) September 22, 2014
What MapReduce can't do
We discuss here a large class of big data problems where MapReduce can't be used - not in a straightforward way at least - and we propose a rather simple analytic, statistical solution.

MapReduce is a technique that splits big data sets into many smaller ones, processes each small data set separately (but simultaneously) on different servers or computers, then gathers and aggregates the results of all the sub-processes to produce the final answer. Such a distributed architecture allows you to process big data sets 1,000 times faster than traditional (non-distributed) designs, if you use 1,000 servers and split the main process into 1,000 sub-processes.

MapReduce works very well in contexts where variables or observations are processed one by one. For instance, you analyze 1 terabyte of text data, and you want to compute the frequencies of all keywords found in your data. You can divide the 1 terabyte into 1,000 data sets, each 1 gigabyte. Now you produce 1,000 keyword frequency tables (one for each subset) and aggregate them to produce a final table.

However, when you need to process variables or data sets jointly, that is 2 by 2 or 3 by 3, MapReduce offers no benefit over non-distributed architectures. One must come up with a more sophisticated solution.
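The keyword-frequency case is easy to sketch. Here is a minimal illustration in plain Python (my code, not Hadoop): each chunk is mapped to a partial count table independently and in parallel, and the tables are then merged in a reduce step.

```python
"""Minimal sketch of the map/reduce keyword-frequency example.

On a real cluster the chunks would live on different machines;
here they are just list slices processed by a local process pool.
"""

from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_chunk(lines):
    """Map phase: count keywords in one chunk, independently of the rest."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def merge(a, b):
    """Reduce phase: combine two partial frequency tables."""
    a.update(b)
    return a

if __name__ == "__main__":
    corpus = ["big data big compute", "map reduce map"] * 1000
    n_chunks = 4  # stand-in for the 1,000 servers in the example
    chunks = [corpus[i::n_chunks] for i in range(n_chunks)]
    with Pool(n_chunks) as pool:
        partials = pool.map(map_chunk, chunks)  # chunks run in parallel
    total = reduce(merge, partials, Counter())
    print(total.most_common(3))
```

The key property is that each chunk is processed without reference to any other chunk, which is exactly what breaks down in the pairwise case below.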
Here we are talking about self-joining 10,000 data sets with themselves, each data set being a time series of 5 or 10 observations. My point in the article is that the distributed architecture of MapReduce does not bring any smart processing to these computations: just brute force (better than a non-parallel approach, sure), but it is totally incapable of removing the very massive redundancy involved in these computations (as well as the storage redundancy, since the same data set must be stored on a bunch of servers to allow all the cross-correlations to be computed).
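A toy sketch (again mine, for illustration) of why the pairwise case is different: every one of the N·(N-1)/2 pairs needs both series at once, so the input cannot be partitioned into independent chunks the way the word-count example can.

```python
"""Sketch of the pairwise cross-correlation problem described above.

Correlating N series against each other needs all N*(N-1)/2 pairs,
so each series participates in N-1 computations; the work does not
decompose into independent per-record map tasks.
"""

from itertools import combinations
from statistics import correlation  # Pearson's r, Python 3.10+

series = {
    "a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "b": [2.0, 1.0, 4.0, 3.0, 6.0],
    "c": [5.0, 4.0, 3.0, 2.0, 1.0],
}

# Every pair must see BOTH series: with 10,000 series that is
# roughly 50 million pairs, and each series is re-read ~9,999 times.
for (name_x, x), (name_y, y) in combinations(series.items(), 2):
    print(name_x, name_y, round(correlation(x, y), 3))
```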
This underscores the importance of a major upcoming change to Hadoop - the introduction of YARN, which will open up Hadoop's distributed data and compute framework for use with non-MapReduce paradigms and frameworks. One such example is RushAnalytics, which, incidentally, also adds fine-grained parallelism within each node of a cluster.