Are security and reliability fundamentally incompatible?
I have been meaning to write about the CrowdStrike incident, but it seemed important to avoid getting caught up in the chaotic blame game going around. So let's get this out of the way first: yes, CrowdStrike made a terrible technical mistake that they are ultimately responsible for; but no, they probably had no other way to solve the problems their products are meant to address. As someone who has made similar mistakes in the past, I can understand how they happen, and will continue to happen. There are no silver bullets, and any sufficiently complicated system will fail regularly, no matter how much testing, quality assurance, safe coding, and so on you throw at it.

The question I am interested in exploring here is whether security is fundamentally antagonistic to reliability. Will security solutions that are inherently intrusive inevitably degrade the ability of systems to perform their tasks uninterrupted? And if so, are there approaches that can reduce that impact to a tolerable minimum?
Read in full here:
This thread was posted by one of our members via one of our news source trackers.