Showing posts with label hardware.

Monday, 15 August 2016

NUMA Deep Dive Part 1: From UMA to NUMA - frankdenneman.nl

NUMA Deep Dive Part 1: From UMA to NUMA - frankdenneman.nl: "Non-uniform memory access (NUMA) is a shared memory architecture used in today's multiprocessing systems. Each CPU is assigned its own local memory and can access memory from other CPUs in the system. Local memory access provides low latency and high bandwidth, while accessing memory owned by another CPU incurs higher latency and lower bandwidth. Modern applications and operating systems such as ESXi support NUMA by default, yet to get the best performance, virtual machines should be configured with the NUMA architecture in mind. If designed incorrectly, inconsistent behavior or overall performance degradation occurs for that particular virtual machine or, in the worst case, for all VMs running on that ESXi host.

 This series aims to provide insight into the CPU architecture, the memory subsystem, and the ESXi CPU and memory scheduler, allowing you to create a high-performing platform that lays the foundation for higher-level services and increased consolidation ratios. Before we arrive at modern compute architectures, it's helpful to review the history of shared-memory multiprocessor architectures to understand why we are using NUMA systems today."
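
To see the local-versus-remote latency gap the article describes on a Linux host, here is a minimal sketch using libnuma (build with: gcc numa_demo.c -o numa_demo -lnuma). The node numbers, buffer size, and the memset-based timing are illustrative assumptions, not part of the article; on most distributions, numactl --hardware lists the nodes actually present.

    /* Minimal sketch: contrast local vs. remote memory access with libnuma.
     * Assumes a machine with at least two NUMA nodes; values are illustrative. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE (256 * 1024 * 1024)  /* 256 MiB working set */

    /* Time a write sweep over a buffer allocated on the given node. */
    static double sweep_ms(int node)
    {
        char *buf = numa_alloc_onnode(BUF_SIZE, node);
        if (!buf) return -1.0;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        memset(buf, 0xA5, BUF_SIZE);      /* touch every page */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        numa_free(buf, BUF_SIZE);
        return (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this machine\n");
            return 1;
        }

        /* Pin this thread to node 0 so node 0 is "local" and the
         * highest-numbered node is "remote". */
        numa_run_on_node(0);

        int remote = numa_max_node();
        printf("local  (node 0): %.1f ms\n", sweep_ms(0));
        printf("remote (node %d): %.1f ms\n", remote, sweep_ms(remote));
        return 0;
    }

On a two-socket host the remote sweep typically takes measurably longer, which is exactly the latency and bandwidth asymmetry the NUMA-aware ESXi scheduler works around.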




Friday, 18 March 2016

SSD reliability in the real world: Google's experience | ZDNet

SSD reliability in the real world: Google's experience | ZDNet:


  • Ignore Uncorrectable Bit Error Rate (UBER) specs; it is a meaningless number.
  • Good news: Raw Bit Error Rate (RBER) increases more slowly than expected from wearout and is not correlated with UBER or other failures (see the sketch after this list for how the two metrics differ).
  • High-end SLC drives are no more reliable than MLC drives.
  • Bad news: SSDs fail at a lower rate than disks, but their UBER is higher (see below for what this means).
  • SSD age, not usage, affects reliability.
  • Bad blocks in new SSDs are common, and drives with a large number of bad blocks are much more likely to lose hundreds of other blocks, most likely due to die or chip failure.
  • 30-80 percent of SSDs develop at least one bad block and 2-7 percent develop at least one bad chip in the first four years of deployment.
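
To make the distinction between the two error rates concrete, here is a minimal sketch; the counter values are made up for illustration and are not from Google's study. RBER counts raw flash bit flips before ECC, while UBER counts only the errors that survive ECC, both normalized per bit read.

    /* Illustrative only: RBER vs. UBER with made-up counters. */
    #include <stdio.h>

    int main(void)
    {
        double bits_read          = 8.0e15;  /* ~1 PB read over the drive's life */
        double raw_bit_errors     = 4.0e9;   /* flips seen by the flash, pre-ECC */
        double uncorrectable_errs = 2.0;     /* errors that escaped ECC */

        printf("RBER = %.2e errors/bit\n", raw_bit_errors / bits_read);
        printf("UBER = %.2e errors/bit\n", uncorrectable_errs / bits_read);
        /* A drive's RBER can rise with wear while its UBER stays flat (or
         * vice versa), which is why the study found the two uncorrelated. */
        return 0;
    }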