Division Vs Multiplication

I saw a claim in a post about optimization that multiplication is faster than division, so I tried it out - but my test seems to show multiplication being slower. Can someone please clarify?

[lua]
local n
local v
local w = 0
local t

for i = 1, 10 do
    n = system.getTimer()
    for j = 1, 100000 do
        t = display.contentWidth * 0.5 -- Multiplication
    end
    v = system.getTimer() - n
    w = w + v -- Add to the running total (averaged below)
end

print("MULTIPLY: " .. w * 0.1)

w = 0

for i = 1, 10 do
    n = system.getTimer()
    for j = 1, 100000 do
        t = display.contentWidth / 2 -- Division
    end
    v = system.getTimer() - n -- Collect the time taken
    w = w + v
end

print("DIVIDE: " .. w * 0.1)

-- Result:
--     MULTIPLY: 51.5527
--     DIVIDE: 50.361
[/lua]

C

Hmm interesting. I just tried it and got:

MULTIPLY: 27.2492

DIVIDE: 26.6561

But then I swapped your code around so divide ran first and got:

DIVIDE: 29.1813

MULTIPLY: 28.0045

So multiply does seem faster if you compare those differences, but only slightly - and whichever one runs first seems to come out worse, so this is a pretty biased test for whatever you try first :smiley:

***EDIT: OK, I tried it again and this time I got the below - very close this time.

DIVIDE: 27.5194

MULTIPLY: 27.4582

There’s been a debate going on for years that you can see if you Google “programming multiplication division faster.”

http://stackoverflow.com/questions/226465/should-i-use-multiplication-or-division

http://stackoverflow.com/questions/4125033/floating-point-division-vs-floating-point-multiplication

Why does division take so much longer than multiplication? If you remember back to grade school, you may recall that multiplication can essentially be performed with many simultaneous additions. Division requires iterative subtraction that cannot be performed simultaneously so it takes longer. In fact, some FPUs speed up division by performing a reciprocal approximation and multiplying by that. It isn’t quite as accurate but is somewhat faster.
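That reciprocal trick can also be applied by hand in Lua. A minimal sketch (the divisor here is just an example value I picked):

```lua
-- Division by a constant can be rewritten as multiplication by its
-- precomputed reciprocal - the same idea some FPUs use internally.
local divisor = 3
local inv = 1 / divisor            -- one division, done once up front

local function scaleDiv( x ) return x / divisor end  -- divide every call
local function scaleMul( x ) return x * inv end      -- multiply every call

-- Both give (nearly) the same answer; the multiply form avoids a
-- divide in the hot path at the cost of a tiny rounding difference.
print( scaleDiv( 10 ), scaleMul( 10 ) )
```

Note the small caveat from above: the reciprocal form can differ in the last bits of precision, which is why it's a speed-for-accuracy trade.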

Perhaps at this point in the game, it doesn’t matter since it’s so close to the CPU/FPU.  But, since we’re dealing with mobile devices which are still relatively more sensitive than their desktop counterparts, I decided not to take any chances.  I’ve been programming for over 30 years, so if I can convert my division equation into a multiplication one and still be able to read it, I’ll do it.

Historically, multiply instructions at the machine level have been faster than divide instructions.  But in the grand scheme of things, it really doesn't matter unless you're in a tight loop.

After all, when you’re centering 5 things in a storyboard scene that gets called once every 5 minutes or so, are those few microseconds going to even matter?

But if you have 200 enemies on the screen all firing bullets at angles and you want a high frame rate, those microseconds start adding up.  It's all about perspective. 
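In that kind of hot path, the usual move is to hoist the division out of the loop entirely. A hypothetical enterFrame-style sketch (the `bullets` table and its `vx`/`vy` fields are made up for illustration):

```lua
-- Hypothetical sketch: precompute reciprocals once, multiply per frame.
local halfW = display.contentWidth * 0.5  -- instead of "/ 2" every frame
local invStep = 1 / 60                    -- instead of dividing per bullet

local bullets = {}  -- assumed populated elsewhere with {x, y, vx, vy}

local function onEnterFrame( event )
    for i = 1, #bullets do
        local b = bullets[i]
        b.x = b.x + b.vx * invStep        -- multiply, not divide
        b.y = b.y + b.vy * invStep
    end
end

Runtime:addEventListener( "enterFrame", onEnterFrame )
```

The per-bullet work is now all multiplies; the only division happens once at load time.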

Ah… I see.

So division isn’t as fast, but the effect is negligible unless you’re doing it by the hundreds. Multiplication is the better choice for hot enterFrame calculations.

Thanks!
