We all know that the metric system is far easier to use for just about any application (I can't think of a single one where it isn't).
I have no problem using the metric system, and routinely use metric dimensions, fasteners, bearings, etc. in many of my designs. That said, I can think of three places where the metric system is not as intuitive or easy to use:
1) Metric threading is easy to calculate in the sense under discussion, i.e., what size tap drill is needed for a given thread. (Actually, I find no difficulty in calculating imperial threads, but I digress.) However, from what I understand, metric threading is much harder to implement on a typical imperial lathe - not because of the particular dimensions involved, but because of how the system is designed: it specifies the distance between threads (the pitch) rather than the number of threads per unit length. As I understand it, threads-per-inch makes it relatively easy to set up the gearbox and use the thread dial on an inch-leadscrew machine; a pitch in millimeters, not so much - there is a sketch of the gear arithmetic after this list. (I confess to knowing only enough to be dangerous on this topic, so happy to be corrected ...)
2) A topic about which I know quite a bit: implementation in computer hardware. Because of the binary representation used in the vast majority of computers, both historical and present, multiplying and dividing by 2 is much, much easier than by 10. Likewise, it is far easier to store binary fractions (i.e., n/2, n/4, n/8, n/16, etc.) than it is to store decimal fractions (i.e., 0.1, 0.2, 0.3, etc.). Where 1/2 and 1/4 and 1/8 can be stored exactly, 0.1, 0.2, and 0.3 are all going to be stored as approximations - very good approximations when you can use double-precision floating point, but if you are limited to single precision, say on a microcontroller, you may see errors creep into your computations (demonstrated below, after this list).
3) Related to the above: one might argue that in many use cases, humans also find it easier to divide by 2 than by 10. Think about spacing out the screws that will fasten one piece of wood to another: it is trivial to set screws at successive halvings by eye, with surprising accuracy. Identify the midpoint and place a screw; then identify the midpoint of each resulting half and place another screw, and so on (sketched in code below). But if one were asked to divide the span into ten equal parts by eye, it would be very difficult to come up with something that was not noticeably "off."
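To make point 1) concrete without asserting how any particular machine is geared: to cut a pitch of P mm on a lathe with an L-TPI leadscrew, the spindle-to-leadscrew ratio must absorb the 25.4 mm-per-inch conversion. Since 25.4 = 127/5 and 127 is prime, that factor never cancels, which is (as I understand it) why metric transposing gear sets are built around a 127-tooth gear. A minimal sketch, assuming a hypothetical 8 TPI leadscrew:

```python
from fractions import Fraction

def gear_ratio(pitch_mm, leadscrew_tpi):
    """Exact spindle-to-leadscrew ratio needed to cut a metric pitch
    on a lathe with an inch leadscrew: pitch_mm * tpi / 25.4,
    kept as a fraction so nothing is silently rounded."""
    return Fraction(str(pitch_mm)) * leadscrew_tpi * Fraction(5, 127)

for pitch in (0.5, 1.0, 1.5, 2.0):
    print(pitch, "mm ->", gear_ratio(pitch, 8))
# 0.5 mm -> 20/127
# 1.0 mm -> 40/127
# 1.5 mm -> 60/127
# 2.0 mm -> 80/127
```

Every common metric pitch lands on a denominator of 127, whereas an inch thread like 13 TPI needs only the ratio 8/13.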
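Point 2) is easy to demonstrate. Python floats are double precision, and the standard struct module lets us round-trip a value through IEEE-754 single precision to see what a microcontroller-class float would hold:

```python
import struct

def as_float32(x):
    """Round-trip a Python float (double precision) through single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A binary fraction is stored exactly at either precision ...
print(as_float32(0.125))          # 0.125
# ... but 0.1 has no finite binary expansion, so it gets rounded:
print(f"{as_float32(0.1):.10f}")  # 0.1000000015
print(f"{0.1:.20f}")              # 0.10000000000000000555 (approximate even as a double)
```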
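And a toy version of point 3): the divide-by-eye procedure is just recursive bisection, where each round only ever asks for a midpoint. A sketch (the 800 mm board is made up):

```python
def halving_positions(length, rounds):
    """Screw positions found by repeated bisection: the midpoint first,
    then the midpoint of each resulting half, and so on."""
    placed = []
    for r in range(1, rounds + 1):
        step = length / 2 ** r
        for i in range(1, 2 ** r):
            pos = i * step
            if pos not in placed:  # skip points already set in an earlier round
                placed.append(pos)
    return placed

print(halving_positions(800, 3))
# [400.0, 200.0, 600.0, 100.0, 300.0, 500.0, 700.0]
```

There is no analogous trick that gets you tenths by eye; each of the nine marks has to be judged against the whole length.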
How did the Romans multiply Roman Numerals?
At one level, the same way that you and I do. When you multiply 6 x 7, does the answer depend in any way on the notation used? Or simply on the answer we memorized back in 3rd grade? And if you hadn't memorized it, how would you work it out? Wouldn't you add up 7 six times and see what the result is? 6, 7, 42, and (7+7+7+7+7+7) are the same values regardless of the notation used, and the math works the same way.
To be sure, if we are only multiplying by 10, then our decimal notation becomes very easy to use. But if you have to multiply 482 x 397, I don't think the notation is helping or hindering all that much. Keep in mind that Archimedes and Pythagoras and other brilliant mathematicians used the same sort of notation system. (Ancient Greek used letters for its numeric notation, just like Latin - or rather the other way around, since Latin almost certainly adopted the practice from Greek, which in turn adopted its alphabet from Phoenician writing.) And don't think for a moment that ancient money lenders had any trouble calculating the amount of interest owed!
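To underline the point that values are independent of the marks used to write them, here is a small sketch: a reader for (subtractive) Roman notation, with the multiplication done both directly and by the repeated addition described above:

```python
ROMAN = {'M': 1000, 'D': 500, 'C': 100, 'L': 50, 'X': 10, 'V': 5, 'I': 1}

def from_roman(s):
    """Subtractive notation: a smaller digit written before a larger one is negated."""
    vals = [ROMAN[c] for c in s]
    return sum(-v if i + 1 < len(vals) and v < vals[i + 1] else v
               for i, v in enumerate(vals))

print(from_roman('VI') * from_roman('VII'))                     # 42
print(sum(from_roman('VII') for _ in range(from_roman('VI'))))  # 42, by repeated addition
print(from_roman('CDLXXXII') * from_roman('CCCXCVII'))          # 191354, i.e. 482 x 397
```

The product is 42 (XLII) either way; only the bookkeeping differs.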
But there is one issue that was a major hindrance for the Phoenicians, Greeks, and Romans: while they certainly knew what "nothing" meant, they did not have a way to notate a zero value. This - the ability to write 0 in a positional system - was the huge advance of the Hindu-Arabic numerals, developed in India and carried to Europe by Arab mathematicians, which is why today we call them "Arabic numerals."