rightAddMultipliedByDoubling
Applies the multiplication-by-doubling algorithm (a.k.a. exponentiation by squaring) to add arg multiplied by the Int multiplier to the right of base. Negative multipliers are handled via negationOp.
For example, here are the resulting expressions for several values of multiplier:
- If multiplier == 0, the result is base.
- If multiplier == 1, the result is additionOp(base, arg).
- If multiplier == 2, the result is additionOp(base, additionOp(arg, arg)).
- If multiplier == 3, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).
- If multiplier == 4, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- If multiplier == -1, the result is additionOp(base, negationOp(arg)).
- If multiplier == -2, the result is additionOp(base, additionOp(negationOp(arg), negationOp(arg))).
- If multiplier == -3, the result is additionOp(additionOp(base, negationOp(arg)), additionOp(negationOp(arg), negationOp(arg))).
- If multiplier == -4, the result is additionOp(base, additionOp(additionOp(negationOp(arg), negationOp(arg)), additionOp(negationOp(arg), negationOp(arg)))).
- And so on.
Note that sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls to additionOp instead of three.
So one can say that additionOp is used \(O(\log |\mathrm{multiplier}|)\) times.
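The negationOp-based scheme above can be sketched as follows. The function and parameter names mirror this documentation, but the actual signature (receiver type, parameter order) is an assumption, not the library's real API:

```kotlin
// Hypothetical sketch of the doubling scheme that uses negationOp for negative
// multipliers. (Ignores the Int.MIN_VALUE edge case for brevity.)
fun <T> rightAddMultipliedByDoubling(
    base: T,
    arg: T,
    multiplier: Int,
    additionOp: (T, T) -> T,
    negationOp: (T) -> T
): T {
    // For a negative multiplier, negate arg once and proceed with the absolute value.
    var m = multiplier
    var a = arg
    if (m < 0) { a = negationOp(a); m = -m }
    var acc = base
    // Walk the bits of m from least to most significant: whenever the current bit
    // is set, fold the current "power" of arg into the accumulator, then double
    // arg (a := a + a) for the next bit.
    while (m > 0) {
        if (m and 1 == 1) acc = additionOp(acc, a)
        if (m > 1) a = additionOp(a, a)  // skip the final unused doubling
        m = m shr 1
    }
    return acc
}
```

With plain Int addition this computes base + arg * multiplier, e.g. rightAddMultipliedByDoubling(10, 3, 5, Int::plus) { -it } yields 25.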
Applies the multiplication-by-doubling algorithm (a.k.a. exponentiation by squaring) to add arg multiplied by the Int multiplier to the right of base. Negative multipliers are handled via rightSubtractionOp.
For example, here are the resulting expressions for several values of multiplier:
- If multiplier == 0, the result is base.
- If multiplier == 1, the result is additionOp(base, arg).
- If multiplier == 2, the result is additionOp(base, additionOp(arg, arg)).
- If multiplier == 3, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).
- If multiplier == 4, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- If multiplier == -1, the result is rightSubtractionOp(base, arg).
- If multiplier == -2, the result is rightSubtractionOp(base, additionOp(arg, arg)).
- If multiplier == -3, the result is rightSubtractionOp(rightSubtractionOp(base, arg), additionOp(arg, arg)).
- If multiplier == -4, the result is rightSubtractionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- And so on.
Note that sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls to additionOp instead of three.
So one can say that additionOp and rightSubtractionOp are together used \(O(\log |\mathrm{multiplier}|)\) times.
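The rightSubtractionOp-based variant differs from the negationOp-based one only in how set bits are folded into the accumulator for negative multipliers. A sketch, again under the assumption that the names and signature mirror this documentation:

```kotlin
// Hypothetical sketch of the rightSubtractionOp-based doubling scheme.
// (Ignores the Int.MIN_VALUE edge case for brevity.)
fun <T> rightAddMultipliedByDoubling(
    base: T,
    arg: T,
    multiplier: Int,
    additionOp: (T, T) -> T,
    rightSubtractionOp: (T, T) -> T
): T {
    // Fold into the accumulator with additionOp for a positive multiplier and
    // with rightSubtractionOp for a negative one; doublings of arg always use
    // additionOp, so arg itself is never negated.
    val foldOp = if (multiplier >= 0) additionOp else rightSubtractionOp
    var m = if (multiplier >= 0) multiplier else -multiplier
    var a = arg
    var acc = base
    while (m > 0) {
        if (m and 1 == 1) acc = foldOp(acc, a)
        if (m > 1) a = additionOp(a, a)
        m = m shr 1
    }
    return acc
}
```

For multiplier == -3 this produces exactly the documented shape rightSubtractionOp(rightSubtractionOp(base, arg), additionOp(arg, arg)).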
Applies the multiplication-by-doubling algorithm (a.k.a. exponentiation by squaring) to add arg multiplied by the UInt multiplier to the right of base.
For example, here are the resulting expressions for several values of multiplier:
- If multiplier == 0u, the result is base.
- If multiplier == 1u, the result is additionOp(base, arg).
- If multiplier == 2u, the result is additionOp(base, additionOp(arg, arg)).
- If multiplier == 3u, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).
- If multiplier == 4u, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- And so on.
Note that sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls to additionOp instead of three.
So one can say that additionOp is used \(O(\log \mathrm{multiplier})\) times.
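Since the unsigned overload needs nothing but additionOp, the doubling trick applies to any associative operation, not just numeric addition. A sketch (names and signature assumed from this documentation), demonstrated with string concatenation:

```kotlin
// Hypothetical sketch of the unsigned-multiplier variant: only additionOp is needed.
fun <T> rightAddMultipliedByDoubling(
    base: T,
    arg: T,
    multiplier: UInt,
    additionOp: (T, T) -> T
): T {
    var m = multiplier
    var a = arg
    var acc = base
    while (m > 0u) {
        if (m and 1u == 1u) acc = additionOp(acc, a)
        if (m > 1u) a = additionOp(a, a)
        m = m shr 1
    }
    return acc
}

// Doubling only requires associativity, so string concatenation works too:
// "x" with "ab" appended 3 times.
fun demo(): String = rightAddMultipliedByDoubling("x", "ab", 3u, String::plus)
```

Here demo() returns "xababab", built with a logarithmic number of concatenations.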
Applies the multiplication-by-doubling algorithm (a.k.a. exponentiation by squaring) to add arg multiplied by the Long multiplier to the right of base. Negative multipliers are handled via negationOp.
For example, here are the resulting expressions for several values of multiplier:
- If multiplier == 0L, the result is base.
- If multiplier == 1L, the result is additionOp(base, arg).
- If multiplier == 2L, the result is additionOp(base, additionOp(arg, arg)).
- If multiplier == 3L, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).
- If multiplier == 4L, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- If multiplier == -1L, the result is additionOp(base, negationOp(arg)).
- If multiplier == -2L, the result is additionOp(base, additionOp(negationOp(arg), negationOp(arg))).
- If multiplier == -3L, the result is additionOp(additionOp(base, negationOp(arg)), additionOp(negationOp(arg), negationOp(arg))).
- If multiplier == -4L, the result is additionOp(base, additionOp(additionOp(negationOp(arg), negationOp(arg)), additionOp(negationOp(arg), negationOp(arg)))).
- And so on.
Note that sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls to additionOp instead of three.
So one can say that additionOp is used \(O(\log |\mathrm{multiplier}|)\) times.
Applies the multiplication-by-doubling algorithm (a.k.a. exponentiation by squaring) to add arg multiplied by the Long multiplier to the right of base. Negative multipliers are handled via rightSubtractionOp.
For example, here are the resulting expressions for several values of multiplier:
- If multiplier == 0L, the result is base.
- If multiplier == 1L, the result is additionOp(base, arg).
- If multiplier == 2L, the result is additionOp(base, additionOp(arg, arg)).
- If multiplier == 3L, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).
- If multiplier == 4L, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- If multiplier == -1L, the result is rightSubtractionOp(base, arg).
- If multiplier == -2L, the result is rightSubtractionOp(base, additionOp(arg, arg)).
- If multiplier == -3L, the result is rightSubtractionOp(rightSubtractionOp(base, arg), additionOp(arg, arg)).
- If multiplier == -4L, the result is rightSubtractionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- And so on.
Note that sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls to additionOp instead of three.
So one can say that additionOp and rightSubtractionOp are together used \(O(\log |\mathrm{multiplier}|)\) times.
Applies the multiplication-by-doubling algorithm (a.k.a. exponentiation by squaring) to add arg multiplied by the ULong multiplier to the right of base.
For example, here are the resulting expressions for several values of multiplier:
- If multiplier == 0uL, the result is base.
- If multiplier == 1uL, the result is additionOp(base, arg).
- If multiplier == 2uL, the result is additionOp(base, additionOp(arg, arg)).
- If multiplier == 3uL, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).
- If multiplier == 4uL, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).
- And so on.
Note that sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls to additionOp instead of three.
So one can say that additionOp is used \(O(\log \mathrm{multiplier})\) times.