I've had occasion to write some long email messages today that have some bits I thought should go here. The following is in response to some discussion on security policy documents.
I think formalized, overwrought security policies are the product of a fearful environment, one where, through action and inaction, what's important has been obscured by a lack of foresight and good planning. People write security policies in an effort to make a stand against that lack of foresight. But because the policies rarely get actively read and are time consuming to enforce, they rarely succeed in countering the shortcomings elsewhere.
As with many of my attitudes, I think that successful security comes from removing doubt and removing choice. Being able to remove doubt and choice comes from having foresight, then identifying goals and sticking to them (wheels within wheels).
Computers are tools which, when they have well defined tasks, are easy to maintain. Easy maintenance leads to better security. A tool which is poorly described leads to stopgaps, confusion, too many choices, and plenty of doubt: all holes for error.
People are not tools, primarily because that's gross, but also because it is diversity of choice that makes people human. An effective human is one who is able to make good, informed choices from among several, with little doubt. People make good choices by being informed, and information flows through communication and experimentation.
Abuse of tools comes about when the tools provide handles for experimentation to people who have not been given enough information (through implicit or explicit means) to make good decisions about appropriate use of those tools.
Security, then, comes down to three things: providing focused, well-defined services; controlling exposure of handles; and keeping users informed.
Later in the mail message I noted that my attitude doesn't really apply in situations where the data being protected actually matters. I think those situations are rarer than they may initially seem.