I thought I'd just poke my nose in after reading about concerns regarding accuracy when dividing. You should not be using floating point maths anywhere, because you will end up with accumulated rounding errors. It's like using a micrometer with a coarse scale: accuracy cannot be better than 0.5 of the smallest division, as I was taught, and we had to carry an error of ± one measuring unit through any maths done with such readings. It is far better to reduce everything to the smallest discrete unit you can measure. So if you measure divisions in arc seconds rather than the much coarser degree, a lot of problems go away. Other problems will appear, though, as you still need to present the angle to the user. I chose to do this in degrees, minutes and seconds.
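To illustrate what I mean, here is a minimal sketch (Python, my own naming) of keeping the angle as a whole number of arc seconds internally and only converting to degrees, minutes and seconds when showing it to the user:

```python
def to_dms(total_arcsec: int) -> str:
    """Format a whole number of arc seconds as D deg M' S"."""
    degrees, rem = divmod(total_arcsec, 3600)   # 3600 arc seconds per degree
    minutes, seconds = divmod(rem, 60)          # 60 arc seconds per minute
    return f"{degrees}\u00b0 {minutes}' {seconds}\""

# one seventh of a full circle (1,296,000 arc seconds), rounded down to a whole second
print(to_dms(1_296_000 // 7))   # 51 deg 25' 42"
```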
So by working in smaller units you may only be one step out per division, and if you want to go one step further to improve accuracy, you can add or subtract a step here and there across the divisions so that the error has cancelled out by the time you get round the circle.
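A sketch of that idea, again in Python and with names of my own choosing, working purely in whole arc seconds: each division is aimed at the nearest whole step to its ideal position, so individual moves differ by at most one step and the last division lands exactly back on the full circle.

```python
def step_schedule(steps_per_rev: int, divisions: int) -> list[int]:
    """Whole-step move for each division, with the rounding error spread around the circle."""
    schedule = []
    position = 0
    for i in range(1, divisions + 1):
        # nearest whole step to the ideal position i * steps_per_rev / divisions,
        # computed in pure integer maths (no floating point anywhere)
        target = (2 * i * steps_per_rev + divisions) // (2 * divisions)
        schedule.append(target - position)
        position = target
    return schedule

moves = step_schedule(steps_per_rev=1_296_000, divisions=7)   # a full circle in arc seconds
print(moves)        # moves differ by one step at most
print(sum(moves))   # 1296000 exactly, so no error is left at the end
```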
Finally, let me just say that if you are old enough to remember programming on an 8-bit PC, using floating point maths was fraught with danger due to errors accumulated from converting to and from the decimal numbers you type and the binary (usually shown as hexadecimal) representation used internally in the machine architecture. 16-bit was not as bad, but the issue is still there. Most accounting programs will use binary coded decimal as the numerical type to ensure accurate results. Regrettably, few programming languages support such a data type.
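A small illustration of the kind of decimal-to-binary rounding I mean, and how a decimal type sidesteps it where one is available (Python's decimal module is one example of such support):

```python
from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 in binary floating point
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly, using decimal arithmetic
```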