Just theorizing something here: would using a “phase=ended” collision take more processor effort than “began”? Let’s assume I want to set “boundaries” in my world, roughly along all 4 edges of the screen. In theory there are two approaches:
- Set up 4 invisible, rectangular walls, each of which senses when an object collides with it. On collision, the wall’s listener uses “event.other” to dictate what happens to that object (removes it, for example). This uses the “began” phase.
- Set up 1 large rectangle slightly inset within the bounds of the screen. All objects technically “collide” with this rectangle when they’re introduced into the scene. When an object moves outside the bounds of this rectangle, however, the “ended” phase could be used to remove it (imagine a marble rolling off the edge of a table).
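For reference, here’s roughly how I picture the two approaches in Corona code. This is a minimal, untested sketch: the `makeWall` helper, the wall positions, and the `tableZone` name are mine, and the coordinates assume the center-anchored `display.newRect( x, y, w, h )` signature.

```lua
local physics = require( "physics" )
physics.start()

-- Approach 1: four invisible sensor walls, each with its own listener.
local function onWallCollision( self, event )
    if event.phase == "began" then
        -- Defer removal: bodies shouldn't be modified mid-collision.
        timer.performWithDelay( 1, function()
            display.remove( event.other )
        end )
    end
end

local function makeWall( x, y, w, h )
    local wall = display.newRect( x, y, w, h )
    wall.isVisible = false
    physics.addBody( wall, "static", { isSensor = true } )
    wall.collision = onWallCollision
    wall:addEventListener( "collision" )
    return wall
end

local W, H = display.contentWidth, display.contentHeight
makeWall( W/2, -10, W, 20 )     -- top
makeWall( W/2, H+10, W, 20 )    -- bottom
makeWall( -10, H/2, 20, H )     -- left
makeWall( W+10, H/2, 20, H )    -- right

-- Approach 2: one inset sensor "table"; leaving it fires "ended".
local tableZone = display.newRect( W/2, H/2, W - 40, H - 40 )
tableZone.isVisible = false
physics.addBody( tableZone, "static", { isSensor = true } )
tableZone.collision = function( self, event )
    if event.phase == "ended" then
        timer.performWithDelay( 1, function()
            display.remove( event.other )
        end )
    end
end
tableZone:addEventListener( "collision" )
```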
Currently I’m using the 1st approach and it works nicely, but it requires 4 separate objects, each sensing collisions. Could I use the 2nd approach instead, or would that ramp up the processor effort? Would all of the objects “on the table” be constantly watching for an “ended” phase trigger? Or, from a technical standpoint, is it exactly the SAME processor effort as those objects watching for a “began” collision with 4 different invisible walls?
My guess is that it’s the same, because no matter what’s happening in the game, every physical object with collision properties is constantly (every game cycle) being checked for collision against everything else, phase notwithstanding. “Began” and “ended” are simply used for more specific control over collisions, correct?
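One small point in support of that guess: as far as I can tell, the phase is just a field on the same collision event the engine dispatches either way. A sketch of a runtime (global) listener that sees both phases through one event stream (the handler name is mine):

```lua
-- One global listener receives every contact change; "began" and
-- "ended" are values of event.phase on the same event type, so the
-- engine's contact-detection work happens regardless of which phase
-- your code chooses to act on.
local function onGlobalCollision( event )
    print( event.phase, event.object1, event.object2 )
end
Runtime:addEventListener( "collision", onGlobalCollision )
```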
Theories or discussion on this?
Brent Sorrentino
Ignis Design