A is the hundredths digit in the decimal 0.1A and B is the thousandths digit in the decimal 0.02B
08 Sep 2021, 18:12
Since A and B are non-zero digits, each of them can be any of the values {1, 2, ..., 9}.
Essentially, I want to minimize the denominator and maximize the numerator here.
Removing the decimals will make this clearer.
Take \(\frac{A}{B}\).
If I want to maximize this fraction, I would want to make B as small as possible and A as large as possible.
Notice how when I increase the denominator, the fraction gets smaller:
\(\frac{10}{1} = 10\)
\(\frac{10}{2} = 5\)
\(\frac{10}{3} = 3.3333...\)
...
And when I decrease the denominator, the fraction gets bigger:
\(\frac{10}{10} = 1\)
\(\frac{10}{9} = 1.111...\)
\(\frac{10}{8} = 1.25\)
...
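Purely to visualize this pattern, here's a quick Python one-liner (a throwaway sketch, not part of the solution itself):

```python
# 10/b shrinks as the denominator b grows from 1 to 10
print([round(10 / b, 3) for b in range(1, 11)])
# [10.0, 5.0, 3.333, 2.5, 2.0, 1.667, 1.429, 1.25, 1.111, 1.0]
```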
So when we look at:
\(\frac{0.1A}{0.02B}\)
Let's make A as big as possible (A = 9) and B as small as possible (B = 1).
So we get (I'd recommend a calculator here):
\(\frac{0.19}{0.021} < 10\)
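(If you'd rather avoid the calculator: multiplying top and bottom by 1000 gives \(\frac{190}{21}\), and since \(21 \times 9 = 189\), this ratio is only a bit more than 9.)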
Therefore, the answer is B.
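For anyone who wants to double-check by brute force, here's a short Python sketch (my own encoding: the decimal 0.1A is written as the exact fraction (10 + A)/100, and 0.02B as (20 + B)/1000):

```python
from fractions import Fraction

# Try every pair of non-zero digits A, B and take the largest ratio
best = max(
    Fraction(10 + a, 100) / Fraction(20 + b, 1000)
    for a in range(1, 10)
    for b in range(1, 10)
)
print(best, float(best))  # 190/21 ≈ 9.048, so the maximum is below 10
```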
______
There is a more algebraic approach; however, I believe simple intuition is good enough here.