Human error is the chief cause of downtime, a new study finds. Imagine that.

There was an old joke: "To err is human, but to really foul up you need a computer." Now it seems the reverse is true. The reliability of data center equipment has vastly improved, but the people running data centers have not kept up, and that gap is a threat to uptime.

The Uptime Institute has surveyed thousands of IT professionals on outages throughout the year, and it said human error causes the vast majority of data center failures, between 70 and 75 percent. And some of them are severe.

It found that more than 30 percent of IT service and data center operators experienced downtime they called a "severe degradation of service" over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million.

In Uptime's April 2019 survey, 60 percent of respondents believed that their most recent significant downtime incident could have been prevented with better management, processes, or configuration. For outages that cost more than $1 million, that figure jumped to 74 percent.

However, the ultimate fault is not necessarily with the staff, Uptime argues, but with the management that has failed them.

"Perhaps there is simply a limit to what can be achieved in an industry that still relies heavily on people to perform many of the most basic and critical tasks and thus is subject to human error, which can never be completely eliminated," wrote Kevin Heslin, chief editor of the Uptime Institute Journal, in a blog post.

"However, a quick survey of the issues suggests that management failure — not human error — is the main reason that outages persist. By under-investing in training, failing to enforce policies, allowing procedures to grow outdated, and underestimating the importance of qualified staff, management sets the stage for a cascade of circumstances that leads to downtime," Heslin went on to say.

Uptime noted that the complexity of a company's infrastructure, especially its distributed nature, can increase the risk that simple errors will cascade into a service outage, and it said companies need to be aware that greater complexity brings greater risk.

On the staffing side, Uptime cautioned companies against expanding critical IT capacity faster than they can attract and deploy the people needed to manage that infrastructure, and it urged them to watch for staffing and skills shortages before those shortages start to impair mission-critical operations.