rightAddMultipliedByDoubling

inline fun <Number> rightAddMultipliedByDoubling(base: Number, arg: Number, multiplier: Int, additionOp: (Number, Number) -> Number, negationOp: (Number) -> Number): Number

Applies the multiplication-by-doubling algorithm (the additive counterpart of exponentiation by squaring) to add the argument arg multiplied by the integer multiplier to the right of base.

For example, here are the resulting expressions for the following values of multiplier:

  • If multiplier == 0, the result is base.

  • If multiplier == 1, the result is additionOp(base, arg).

  • If multiplier == 2, the result is additionOp(base, additionOp(arg, arg)).

  • If multiplier == 3, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == 4, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • If multiplier == -1, the result is additionOp(base, negationOp(arg)).

  • If multiplier == -2, the result is additionOp(base, additionOp(negationOp(arg), negationOp(arg))).

  • If multiplier == -3, the result is additionOp(additionOp(base, negationOp(arg)), additionOp(negationOp(arg), negationOp(arg))).

  • If multiplier == -4, the result is additionOp(base, additionOp(additionOp(negationOp(arg), negationOp(arg)), additionOp(negationOp(arg), negationOp(arg)))).

  • And so on...

However, repeated sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls of additionOp instead of three.

So additionOp is called \(O(\log |\mathrm{multiplier}|)\) times.
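
A quick sanity check of this overload with plain Int arithmetic might look as follows (a hypothetical usage sketch; it assumes rightAddMultipliedByDoubling is already imported from its defining package):

    fun main() {
        // base + multiplier * arg with multiplier == -4, so the expected result is 10 - 12 = -2.
        val result = rightAddMultipliedByDoubling(
            base = 10,
            arg = 3,
            multiplier = -4,
            additionOp = { a, b -> a + b },
            negationOp = { -it },
        )
        println(result) // -2
    }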


inline fun <Number> rightAddMultipliedByDoubling(base: Number, arg: Number, multiplier: Int, additionOp: (Number, Number) -> Number, rightSubtractionOp: (Number, Number) -> Number): Number

Applies the multiplication-by-doubling algorithm (the additive counterpart of exponentiation by squaring) to add the argument arg multiplied by the integer multiplier to the right of base.

For example, here are the resulting expressions for the following values of multiplier:

  • If multiplier == 0, the result is base.

  • If multiplier == 1, the result is additionOp(base, arg).

  • If multiplier == 2, the result is additionOp(base, additionOp(arg, arg)).

  • If multiplier == 3, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == 4, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • If multiplier == -1, the result is rightSubtractionOp(base, arg).

  • If multiplier == -2, the result is rightSubtractionOp(base, additionOp(arg, arg)).

  • If multiplier == -3, the result is rightSubtractionOp(rightSubtractionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == -4, the result is rightSubtractionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • And so on...

However, repeated sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls of additionOp instead of three.

So additionOp and rightSubtractionOp are together called \(O(\log |\mathrm{multiplier}|)\) times.
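
To illustrate the call-count bound, the operations can be wrapped in counting lambdas (a hypothetical sketch under the same import assumption):

    fun main() {
        var additions = 0
        var subtractions = 0
        val result = rightAddMultipliedByDoubling(
            base = 0,
            arg = 1,
            multiplier = -1_000_000,
            additionOp = { a: Int, b: Int -> additions++; a + b },
            rightSubtractionOp = { a: Int, b: Int -> subtractions++; a - b },
        )
        println(result) // -1000000
        println("additions=$additions subtractions=$subtractions")
        // Both counters together stay on the order of log2(1_000_000) ≈ 20,
        // not on the order of a million.
    }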


inline fun <Number> rightAddMultipliedByDoubling(base: Number, arg: Number, multiplier: UInt, additionOp: (Number, Number) -> Number): Number

Applies the multiplication-by-doubling algorithm (the additive counterpart of exponentiation by squaring) to add the argument arg multiplied by the integer multiplier to the right of base.

For example, here are the resulting expressions for the following values of multiplier:

  • If multiplier == 0u, the result is base.

  • If multiplier == 1u, the result is additionOp(base, arg).

  • If multiplier == 2u, the result is additionOp(base, additionOp(arg, arg)).

  • If multiplier == 3u, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == 4u, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • And so on...

However, repeated sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls of additionOp instead of three.

So additionOp is called \(O(\log(\mathrm{multiplier}))\) times.
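
For intuition, a minimal stand-alone sketch of the doubling loop behind this unsigned overload could look like the function below; this is an illustration of the technique, not a claim about the library's actual implementation:

    fun <T> rightAddMultipliedByDoublingSketch(
        base: T,
        arg: T,
        multiplier: UInt,
        additionOp: (T, T) -> T,
    ): T {
        var result = base
        var summand = arg          // arg, then 2*arg, 4*arg, ... by repeated doubling
        var remaining = multiplier
        while (remaining != 0u) {
            // Add the current power-of-two multiple if the corresponding bit is set.
            if ((remaining and 1u) == 1u) result = additionOp(result, summand)
            remaining = remaining shr 1
            // Double the summand only while more bits remain to be processed.
            if (remaining != 0u) summand = additionOp(summand, summand)
        }
        return result
    }

For multiplier == 3u this produces additionOp(additionOp(base, arg), additionOp(arg, arg)), matching the expansion listed above.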


inline fun <Number> rightAddMultipliedByDoubling(base: Number, arg: Number, multiplier: Long, additionOp: (Number, Number) -> Number, negationOp: (Number) -> Number): Number

Applies the multiplication-by-doubling algorithm (the additive counterpart of exponentiation by squaring) to add the argument arg multiplied by the integer multiplier to the right of base.

For example, here are the resulting expressions for the following values of multiplier:

  • If multiplier == 0L, the result is base.

  • If multiplier == 1L, the result is additionOp(base, arg).

  • If multiplier == 2L, the result is additionOp(base, additionOp(arg, arg)).

  • If multiplier == 3L, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == 4L, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • If multiplier == -1L, the result is additionOp(base, negationOp(arg)).

  • If multiplier == -2L, the result is additionOp(base, additionOp(negationOp(arg), negationOp(arg))).

  • If multiplier == -3L, the result is additionOp(additionOp(base, negationOp(arg)), additionOp(negationOp(arg), negationOp(arg))).

  • If multiplier == -4L, the result is additionOp(base, additionOp(additionOp(negationOp(arg), negationOp(arg)), additionOp(negationOp(arg), negationOp(arg)))).

  • And so on...

However, repeated sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls of additionOp instead of three.

So additionOp is called \(O(\log |\mathrm{multiplier}|)\) times.
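
Because the type parameter is unconstrained, the overload is not limited to numeric types; for example, it can scale a 2D integer vector represented as a Pair (a hypothetical usage sketch, again assuming the function is in scope):

    fun main() {
        val v = 2L to -5L
        val scaled = rightAddMultipliedByDoubling(
            base = 0L to 0L,
            arg = v,
            multiplier = -3L,
            additionOp = { a, b -> (a.first + b.first) to (a.second + b.second) },
            negationOp = { (-it.first) to (-it.second) },
        )
        println(scaled) // (-6, 15), i.e. the zero vector plus (-3) * v componentwise
    }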


inline fun <Number> rightAddMultipliedByDoubling(base: Number, arg: Number, multiplier: Long, additionOp: (Number, Number) -> Number, rightSubtractionOp: (Number, Number) -> Number): Number

Applies the multiplication-by-doubling algorithm (the additive counterpart of exponentiation by squaring) to add the argument arg multiplied by the integer multiplier to the right of base.

For example, here are the resulting expressions for the following values of multiplier:

  • If multiplier == 0L, the result is base.

  • If multiplier == 1L, the result is additionOp(base, arg).

  • If multiplier == 2L, the result is additionOp(base, additionOp(arg, arg)).

  • If multiplier == 3L, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == 4L, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • If multiplier == -1L, the result is rightSubtractionOp(base, arg).

  • If multiplier == -2L, the result is rightSubtractionOp(base, additionOp(arg, arg)).

  • If multiplier == -3L, the result is rightSubtractionOp(rightSubtractionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == -4L, the result is rightSubtractionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • And so on...

However, repeated sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls of additionOp instead of three.

So additionOp and rightSubtractionOp are together called \(O(\log |\mathrm{multiplier}|)\) times.
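
Method references of existing types can be passed directly when their shapes match; for instance, java.math.BigInteger's add and subtract fit the expected function types (a hypothetical usage sketch):

    import java.math.BigInteger

    fun main() {
        val result = rightAddMultipliedByDoubling(
            base = BigInteger.valueOf(100),
            arg = BigInteger.valueOf(7),
            multiplier = -3L,
            additionOp = BigInteger::add,
            rightSubtractionOp = BigInteger::subtract,
        )
        println(result) // 79, i.e. 100 - 3 * 7
    }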


inline fun <Number> rightAddMultipliedByDoubling(base: Number, arg: Number, multiplier: ULong, additionOp: (Number, Number) -> Number): Number

Applies the multiplication-by-doubling algorithm (the additive counterpart of exponentiation by squaring) to add the argument arg multiplied by the integer multiplier to the right of base.

For example, here are the resulting expressions for the following values of multiplier:

  • If multiplier == 0uL, the result is base.

  • If multiplier == 1uL, the result is additionOp(base, arg).

  • If multiplier == 2uL, the result is additionOp(base, additionOp(arg, arg)).

  • If multiplier == 3uL, the result is additionOp(additionOp(base, arg), additionOp(arg, arg)).

  • If multiplier == 4uL, the result is additionOp(base, additionOp(additionOp(arg, arg), additionOp(arg, arg))).

  • And so on...

However, repeated sub-expressions such as additionOp(arg, arg) are not computed several times. Instead of additionOp(additionOp(arg, arg), additionOp(arg, arg)), the actual computation is equivalent to additionOp(arg, arg).let { additionOp(it, it) }, which uses two calls of additionOp instead of three.

So additionOp is called \(O(\log(\mathrm{multiplier}))\) times.
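
Since every step of the algorithm goes through additionOp, any associative operation can be supplied; folding a modular reduction into additionOp, for example, keeps intermediate values bounded even for a very large ULong multiplier (a hypothetical usage sketch):

    fun main() {
        val m = 1_000_000_007L
        val result = rightAddMultipliedByDoubling(
            base = 0L,
            arg = 123_456_789L,
            multiplier = 10_000_000_000_000uL,
            additionOp = { a, b -> (a + b) % m },
        )
        // result == (123_456_789 * 10_000_000_000_000) mod 1_000_000_007,
        // computed with O(log(multiplier)) calls of additionOp instead of 10^13 of them.
        println(result)
    }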