Last night I was reading the book 'Volatility Trading'; in the second chapter it enumerates the different available ways of measuring volatility. The very first one is simply standard deviation, no real surprise there, but this got me thinking. Whenever people talk of volatility-adjusted stops they almost always link them to a multiple of ATR (as do I). Would it not make more sense to measure volatility via standard deviation (not the weird MT4 deviation indicator, but a proper calculation of the standard deviation of bar sizes)? One could then set stops at, say, mean + 2 standard deviations. That way the measure has an intuitive meaning (if bar sizes were roughly normal, about 97.7% of them would not exceed this level; even with no distributional assumption, Chebyshev guarantees at least 75% fall within two standard deviations of the mean), as opposed to ATR * some random number I have pulled out of my ass.
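For what it's worth, here's a minimal sketch of what I mean, in Python with made-up price data (the high/low numbers below are purely illustrative). It compares the mean + 2 SD stop distance against a plain average of bar ranges (a simplified ATR that ignores gaps, i.e. no true-range adjustment):

```python
# Volatility stop from the standard deviation of bar ranges.
# Bar data below is made-up; in practice you'd feed in a real high/low series.
import statistics

highs = [1.1050, 1.1072, 1.1038, 1.1091, 1.1065, 1.1080, 1.1044, 1.1099]
lows  = [1.1010, 1.1031, 1.1002, 1.1049, 1.1027, 1.1042, 1.1008, 1.1056]

# Bar size = high minus low for each bar.
ranges = [h - l for h, l in zip(highs, lows)]

mean_range = statistics.mean(ranges)
sd_range = statistics.stdev(ranges)  # sample standard deviation

# Proposed stop distance: mean + 2 standard deviations of recent bar sizes.
stop_distance = mean_range + 2 * sd_range

# For comparison: a simplified ATR-like figure, just the plain average
# of bar ranges (real ATR uses true range, which accounts for gaps).
atr_like = mean_range

print(f"mean bar range: {mean_range:.5f}")
print(f"stdev of range: {sd_range:.5f}")
print(f"stop distance:  {stop_distance:.5f}")
print(f"ATR-like avg:   {atr_like:.5f}")
```

You'd then place the stop `stop_distance` away from entry, and the 2 multiplier actually corresponds to a percentile of recent bar sizes rather than being an arbitrary ATR multiple.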
Am I missing something?
The breaking of a wave cannot explain the whole sea.