This means, though, that I would have to resolve the event using the collision API (in addition to the touch API), correct?
That might make it a bit more complicated to coordinate with _moved_ touch events… For example, say I wanted the following hypothetical interactivity, which would ideally be performed within a single handler tied to a local _touch_ event listener (see the sketch after this list):
- touch the star (within the custom shape only) - _began_ --> star pops into the foreground
- move finger - _moved_ --> star follows the finger
- lift finger - _ended_ --> star snaps into the background at its final position
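For concreteness, here is a minimal sketch of that single-handler version, assuming `star` is a display object with the custom-shape physics body already attached; `toFront()`/`toBack()` stand in for the real foreground/background transitions, and note that the built-in _began_ here fires for a touch anywhere on the sprite's bounds, not within the custom shape only, which is exactly the gap I am asking about:

```lua
-- Minimal sketch: all three phases handled in one touch listener.
-- `star` is assumed to be a display object created elsewhere.
local function onStarTouch( event )
    local star = event.target
    if event.phase == "began" then
        display.getCurrentStage():setFocus( star )  -- route subsequent touch events to the star
        star.isFocused = true
        star:toFront()                              -- star pops into the foreground
    elseif star.isFocused then
        if event.phase == "moved" then
            star.x, star.y = event.x, event.y       -- star follows the finger
        elseif event.phase == "ended" or event.phase == "cancelled" then
            star:toBack()                           -- star snaps into the background in place
            star.isFocused = false
            display.getCurrentStage():setFocus( nil )
        end
    end
    return true
end

star:addEventListener( "touch", onStarTouch )
```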
In contrast, if I understand correctly, the proposed collision-object (_sensor_) approach would work as follows (second sketch below)?
- I touch somewhere on the star sprite
- Within my _touch_ handler, I create/move a small physics sensor to event.x, event.y
- If the sensor does not collide with the star, nothing happens. If it does collide with the star --> a collision event is dispatched outside the touch handler.
- Inside the _collision_ event - _began_ --> pop the star into the foreground, setFocus of the _touch_ event to the star
- Back inside the _touch_ event - _moved_ --> if the touch focus is set to the star, then the star follows the finger (since the star moved, does the collision event end?)
- Inside the _touch_ event - _ended_ --> snap the star in place, set the touch focus back to nil
- Assuming there's a _collision_ _ended_ phase in there --> it has no effect
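In code, roughly (the `probe` sensor, the `isFocused` flag, and `star` carried over from the first sketch are my own placeholders, and I am genuinely unsure whether a teleported sensor reliably fires _began_/_ended_ collisions):

```lua
-- Rough sketch of the sensor-based flow described above.
local physics = require( "physics" )
physics.start()

-- Tiny invisible dynamic sensor that gets teleported to the touch point.
local probe = display.newCircle( 0, 0, 1 )
probe.isVisible = false
physics.addBody( probe, "dynamic", { isSensor = true, radius = 1 } )
probe.gravityScale = 0  -- keep the probe from falling between touches

local function onGlobalTouch( event )
    if event.phase == "began" then
        probe.x, probe.y = event.x, event.y   -- may (or may not) trigger a collision with the star
    elseif event.phase == "moved" and star.isFocused then
        star.x, star.y = event.x, event.y     -- star follows the finger
    elseif ( event.phase == "ended" or event.phase == "cancelled" ) and star.isFocused then
        star:toBack()                         -- snap the star in place, in the background
        star.isFocused = false
    end
    return true
end

local function onStarCollision( event )
    if event.phase == "began" and event.other == probe then
        star:toFront()                        -- pop the star into the foreground
        star.isFocused = true                 -- stands in for the setFocus bookkeeping
    end
    -- a collision "ended" phase is simply ignored in this scheme
end

Runtime:addEventListener( "touch", onGlobalTouch )
star:addEventListener( "collision", onStarCollision )
```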
Would that solution be roughly correct?
Most importantly, is there a way I can resolve my touch handling within my touch handler exclusively, rather than also using a collision handler? Is there some type of functionality where I can just do something simple like starBody:containsPoint(event.x, event.y) within my touch handler? That way I wouldn't need to create a near-useless collision object and coordinate both touch and collision events. (If no such API exists, would a hand-rolled point-in-polygon test, like the sketch below, be a reasonable substitute?)
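If nothing like `containsPoint` exists, I imagine a plain-Lua point-in-polygon test over the same vertex list passed to physics.addBody would let the touch handler do its own hit test. This is the standard ray-casting test; `starShapeVertices` is a hypothetical flat { x1, y1, x2, y2, … } table in the star's local coordinates:

```lua
-- Pure-Lua fallback: ray-casting point-in-polygon test, no physics involvement.
-- `vertices` is a flat { x1, y1, x2, y2, ... } list describing the custom shape.
local function containsPoint( vertices, px, py )
    local inside = false
    local n = #vertices
    local jx, jy = vertices[ n - 1 ], vertices[ n ]  -- previous vertex, starting at the last
    for i = 1, n, 2 do
        local ix, iy = vertices[ i ], vertices[ i + 1 ]
        -- toggle on each polygon edge that a horizontal ray from (px, py) crosses
        if ( ( iy > py ) ~= ( jy > py ) ) and
           ( px < ( jx - ix ) * ( py - iy ) / ( jy - iy ) + ix ) then
            inside = not inside
        end
        jx, jy = ix, iy
    end
    return inside
end

-- Usage inside the touch handler's "began" phase:
-- local lx, ly = star:contentToLocal( event.x, event.y )
-- if containsPoint( starShapeVertices, lx, ly ) then --[[ pop star to foreground ]] end
```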
Thank you so much. Your advice would be greatly appreciated!