bloom filter slows down a lot on iPad mini 2 but not iPhone?

Greetings,

I have a snapshot object that I’m applying a filter.bloom effect to.

I realize my snapshot object is rather large: screenW/5 wide by screenH*1.2 tall.

(screenW is a variable for display.contentWidth, and screenH for display.contentHeight.)
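
Roughly, the setup looks like this (simplified, with placeholder names; the real code has more going on):

```lua
-- Simplified version of my setup (names are placeholders).
local screenW = display.contentWidth
local screenH = display.contentHeight

-- The snapshot is about a fifth of the screen wide and 1.2x the screen tall.
local snapshot = display.newSnapshot( screenW / 5, screenH * 1.2 )
snapshot.x = display.contentCenterX
snapshot.y = display.contentCenterY

-- ...objects get inserted into snapshot.group here...

-- Apply the bloom filter to the snapshot's fill.
snapshot.fill.effect = "filter.bloom"
```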

I tested on my iPhone 6 and it ran great, no slowdown. But on my iPad Mini 2 it slows down to something like 5 fps.

I didn’t realize there’d be such a huge difference between an iPhone 6 and an iPad Mini 2. Both run the same iOS version, 9.1.

Is it normal for that effect to perform so differently on those two devices? I could change the height of the snapshot to something like screenH/2 if that would make a big difference… I’d just rather not, because I’d have to change a few other parts of my code.

Given the way its properties are accessed, e.g. “object.fill.effect.blur.horizontal.sigma”, it’s a multi-pass effect, probably piggybacking on top of Gaussian blur. The latter is rather notorious for the slowdown you’re observing, as a forum search should show, so if true you’re inheriting all that baggage.

From what I gather this is a consequence of the way older devices read textures: so long as you stick to the pixel you’re about to draw, all is well, but once you need to drag in neighbors, which is inevitable when incorporating them into the blur, you break the assumptions that make things fast. The better the blur, the more you need to sample, and the latencies pile up and / or get bottlenecked.

In the case of newer hardware, they found a better algorithm, were able to budget in more transistors, or something along those lines.
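
If that is what’s biting you, one thing to try before shrinking the snapshot is dialing the blur sub-effect down, since a smaller kernel means fewer texture reads per fragment. Rough illustration only; the values are arbitrary, and I’m assuming the usual blurSize / sigma properties on the blur sub-effect:

```lua
-- Assumes the bloom is already applied, e.g. snapshot.fill.effect = "filter.bloom".
-- Smaller blurSize / sigma = fewer neighboring texels sampled per pixel drawn,
-- which is exactly the cost the older GPU struggles with.
local effect = snapshot.fill.effect

effect.blur.horizontal.blurSize = 8
effect.blur.horizontal.sigma = 8
effect.blur.vertical.blurSize = 8
effect.blur.vertical.sigma = 8
```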

Thanks @StarCrunch

I was thinking maybe I could re-render the object into a snapshot once the effect had been applied…?

I have the filter.bloom effect in a function and just pass it the object. The only thing is, the effect is only on temporarily while the object is being touched, so I’m wondering about the best way to switch back and forth… I’ll figure it out :slight_smile:
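
For what it’s worth, the toggle is roughly like this right now (simplified; applyBloom / removeBloom / onTouch are just stand-ins for my real function names):

```lua
local function applyBloom( object )
    object.fill.effect = "filter.bloom"
end

local function removeBloom( object )
    object.fill.effect = nil  -- clearing the effect goes back to the plain fill
end

-- The effect is only on while the snapshot is being touched.
local function onTouch( event )
    if event.phase == "began" then
        applyBloom( event.target )
    elseif event.phase == "ended" or event.phase == "cancelled" then
        removeBloom( event.target )
    end
    return true
end

snapshot:addEventListener( "touch", onTouch )
```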

Sounds reasonable. The drop shadow sample does something similar, but with captures.
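
Something like this, untested and just to show the shape of the idea: pay for the filter once, capture the result as a plain image, then show that instead of shading the effect every frame.

```lua
-- Untested sketch: capture the bloomed object once, show the capture,
-- and turn the live (expensive) effect back off.
local function bakeBloom( object )
    object.fill.effect = "filter.bloom"
    local baked = display.capture( object )  -- static image of the filtered result
    object.fill.effect = nil

    baked.x, baked.y = object.x, object.y
    object.isVisible = false  -- show the baked copy instead of the live object

    return baked
end
```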

If you can get away with it (it sounds like the snapshot is fairly static?), you might try staggering the snapshot population, feeding in a fraction of it each frame (say 1/8th the first frame, another 1/8th the next, and so on over 8 frames), on the idea that fewer pixels need to be shaded at once.
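
Untested sketch of what I mean, assuming the snapshot’s contents start out in a plain table (called pieces here) rather than all being inserted into snapshot.group up front:

```lua
local framesToSpread = 8
local perFrame = math.ceil( #pieces / framesToSpread )
local index = 1

local function feedSnapshot()
    for _ = 1, perFrame do
        if index > #pieces then
            Runtime:removeEventListener( "enterFrame", feedSnapshot )
            break
        end

        snapshot.group:insert( pieces[index] )
        index = index + 1
    end

    snapshot:invalidate()  -- re-render with only what has been fed in so far
end

Runtime:addEventListener( "enterFrame", feedSnapshot )
```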

Kind of a pain to branch the code, though.  :stuck_out_tongue:
