Detect how many touches are currently active

Hi,

I would like to get the number of fingers touching the screen at any given moment.
Is there an API call I could make to get that? The number alone would be enough, but the more detail the better.

I know I can “record” touches and then check them, but I’m looking for something different.
Currently I’m trying to keep track of the touches myself, but from time to time, when running on a device, I can see this method failing. I would like to have a “backup” plan: to check every few seconds whether the number of touches I’m tracking is correct.

Thanks,
Krystian

There is no API that returns the number of touches. You can keep track of them yourself by creating a Runtime touch listener that sees every touch. In order for it to work, your object listeners need to return “false” so the touches fall through to the Runtime listener.

If you are losing track of the touches, you may be losing focus on the object. You need to setFocus on the object so the touch focus stays on it. Look at the DragMe sample code for an example.
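
For example, a minimal version of that kind of tracker might look something like this (a sketch only - the table and listener names are arbitrary):

[lua]-- sketch: count the active touches in a table keyed by event.id
system.activate( 'multitouch' )

local activeTouches = {} -- one entry per finger currently on the screen
local touchCount = 0

local function trackTouches( event )
	if (event.phase == "began") then
		if (activeTouches[event.id] == nil) then
			touchCount = touchCount + 1
		end
		activeTouches[event.id] = { x = event.x, y = event.y }
	elseif (event.phase == "moved" and activeTouches[event.id]) then
		-- remember the last known position of each touch
		activeTouches[event.id].x, activeTouches[event.id].y = event.x, event.y
	elseif (event.phase == "ended" or event.phase == "cancelled") then
		if (activeTouches[event.id]) then
			activeTouches[event.id] = nil
			touchCount = touchCount - 1
		end
	end
	return false -- let the event pass on to other listeners
end
Runtime:addEventListener( "touch", trackTouches )[/lua]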

Hi Tom

I can’t use setFocus since it is broken with multitouch.
Your idea would only work if the Runtime touch listener were triggered first, and in it I would return false. I can’t return false in my own listeners, as this would break the whole logic of my game.

Thanks
Krystian

I’ve read that other devs think that setFocus is broken in multitouch, and I disagree. I’ve never had problems with it, but I have had all the common problems of thinking about it correctly.

There is no way to simply ask the API how many touches there currently are, and there is no way to get a touch map (to coin a phrase) of the current touch positions on the screen (though I think that would be nice).

However, neither of the above is required to achieve those results…

If you want to track all points at all times you can use a Runtime touch listener and nothing else - the downside is that you need to work out which objects your user is touching and have your code respond appropriately.

Alternatively, if you want to track all points at all times and get the benefit of the Corona API, have it do the work of tracking the began event, then use setFocus to lock all other events for that particular touch point to a single display object and respond there. To track its touches as an item in a global collection, simply add the object to a global collection and manage the collection yourself. This is really easy and I’ve posted code on how to do this previously. I think I put it in the Code Exchange as part of one of my recent submissions, but if I can’t find it I’ll submit it again.
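
The pattern is roughly this (a sketch only; touchPoints and makeTrackable are my names, not part of the Corona API):

[lua]-- sketch: lock each new touch to its own object and keep a global collection
local touchPoints = {} -- keyed by event.id

local function makeTrackable( obj )
	function obj:touch( event )
		local stage = display.getCurrentStage()
		if (event.phase == "began") then
			stage:setFocus( self, event.id ) -- all further events for this touch go to self
			touchPoints[event.id] = self
		elseif (event.phase == "ended" or event.phase == "cancelled") then
			stage:setFocus( self, nil ) -- release the focus
			touchPoints[event.id] = nil
		end
		return true
	end
	obj:addEventListener( "touch", obj )
end[/lua]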

Either way, I’m absolutely confident that all touch scenarios are possible with Corona and that the actual problem is simply a matter of thinking about the issue from the right direction. It is a tough one, and if Apple somehow made their API better I’m sure Corona would have benefitted.

horacebury: I understand that there is no way to ask iOS for the current number of touches - ok, there’s nothing we can do about that.

However, the way setFocus works with multitouch is [in my opinion] incorrect.
I’ve got a background and I want to detect multiple touches on it. However, I also want to detect all the movement of the touches, even though they may go over other elements [which have touch listeners as well].
So the easiest solution would be to setFocus to the background object. Unfortunately, once setFocus on the object is called, no other touch will be detected on that object. How is this correct?

To work around this “limitation” [in my, and many others’, opinion] I have modified all of the listeners in the scene to set a flag during the “began” phase and check it before doing anything in the moved and ended/cancelled phases. This seemed fine for quite some time, but yesterday I had my game tested by a friend, and he hit a scenario I was worried about:
put 3 fingers on the background object, move them around, take the 3 fingers off the screen - and the ended/cancelled phase was not called on the background object. I have no clue how it happened. I’m 100% sure it’s just a simple bug in my code and that something on screen with a touch listener was missing the flag handling I wrote about earlier.
That’s why I’m looking for something that would tell me how many touches I’ve got on the screen. If this number differed from the number my background touch listener is tracking, I would just reset it. That’s a much better solution than forcing the player to kill the game.

Anyway… could you describe all the common problems with thinking correctly about setFocus?
Maybe I’m just missing something obvious about it, but I’ve tried many times and I was not able to use setFocus with multitouch when my game logic depended on multiple fingers touching a single object.

The way setFocus works is to direct touch events to the object you set it on before any others. Returning true from your touch listener tells the API that your function has handled the touch and not to pass it down to any other objects. Is that what you want to do? If it is not, then you should not be using the combination of multitouch, setFocus and listener return values the way that you are. Of course, without a code sample I don’t know which way you are using it, and from your description it’s hard to know what you intend.

Think about how many objects’ touch listeners you actually want to be informed about each touch event. Then think about what you actually want to have happen in your game. Then build some simple code to try it out.

Having looked at this more I think that there are three issues:

  1. Most devs will expect that setFocus will cause all touch events, for a single ID, to get directed to the target object.
  2. Most devs will expect that returning false from a touch listener will cause the event to fall through to the objects below, whether setFocus was called or not.
  3. Most devs will expect that using setFocus on an object will allow it to receive touch events from more than one user’s touch.

This combination of beliefs is incorrect because:

  1. This belief is actually correct! When setFocus is called, all subsequent touch events get directed to the target object - regardless of where the object is in the display hierarchy (remember, it could start out on top but be moved out of view or even hidden completely.) This is one half of belief #1.
  2. All touch events will fall through if false is returned, unless setFocus is called to set a target. This is because setting a target (or ‘focus object’) means that, as far as the physical touch is concerned, that is the only object in the world (true love!) All touch events go to that object and nowhere else, even when false is returned.
  3. When an object has been set as the focus of a touch, that object can receive only the specified touch ID and no others. This is the other half of the situation in belief #1.

The above (specifically #2) may appear to be wrong, but consider this…

  1. When a target object has not been set, all touch events will be fired through all display objects. This also means that if you are moving an object along under the user’s touch, but they move very quickly, you will find that the object is suddenly not under their touch and so the events are fired against a different object, causing the user to lose the ability to move the object.
  2. When a target object has been set with focus, all touch events will be fired directly to that display object, regardless of where it is - visible or not, under the user’s touch point or not. This means that you can track a user’s touch reliably by only a single touch - others are blocked from it. The trade off is that for this logic to work you have to forgo being able to pass the individual events to other objects.

That is, essentially, the definition of setting focus - it’s the same principle as setting keyboard focus in a windowed environment: all key presses get directed to one input box and go no further.

To illustrate this, here is some code. With the setFocus lines commented out (as below), all touch events are passed down through the display object hierarchy, allowing the touch points (circles) to receive the events first. The ‘back’ rectangle receives each event last. Moving quickly loses contact with a circle.

If you uncomment those lines you’ll see that beginning a touch on the back rectangle still allows the circles to receive events (because no began event has set focus), so it behaves just as if the lines were still commented out. However, touching the circles first causes the focus to be set to those display objects, and then even returning false from their listeners cannot cause the event to drop through to the ‘back’ rectangle.

I suspect that what some devs would like is one of the following options:

  1. The ability to indicate that touch events should be directed to the focus object first and allowed to fall through to the rest of the display hierarchy when false is returned.
  2. The ability to register multiple target focus objects for a single touch ID, behaving in a non-focussed way for that group.
  3. The ability to register multiple touch IDs for a single target focus object.

There are problems and solutions to all of these approaches. Having personally run up against most of these requirements in the past, I believe that they can all be solved in Lua by an application’s code and do not require solving by the API. Perhaps some extra layer of support would be good for the API to provide, but I don’t know what that solution could be without disappointing most devs.

What I do to work through these problems is to think about using an intermediary focus object/s…

  1. Multiple touches on a single object: Perhaps the intermediary is an invisible object per user touch set with focus. Each invisible object can then fire functions on other objects for specific reasons.
  2. Single touches on a stack of objects: Perhaps the intermediary needs to keep focus but be able to pass it on. It could also call specific functions on other objects for various reasons.
  3. Global/Whole world touches: Perhaps intermediary objects can be used to track individual touches but all call the same function which keeps track of all of them.

The method described in each of the 3 scenarios above is the same, and I’ve used it in almost all the multitouch code that I’ve submitted to the Code Exchange. You can take a look: I submitted a piece called “Multi-Touch Pinch-Rotate-Zoom” which uses separate tracking objects and applies the combination of their motions to a single image which, in this case, is scaled, translated and rotated as seen in the Photos app on iPad.

Here’s that code…

[lua]-- multitouchtest

system.activate( 'multitouch' )

local back = display.newRect( 100, 100, display.contentWidth-200, display.contentHeight-200 )
back:setFillColor( 0, 255, 0 )

function back:touch( event )
	print( system.getTimer(), 'back', event.phase )
	return false
end
back:addEventListener( "touch", back )

function createTouchPoint()
	local index = display.getCurrentStage().numChildren
	local point = display.newCircle( display.getCurrentStage().numChildren*100, 200, 50 )
	point:setFillColor( math.random(150,255), math.random(150,255), math.random(150,255) )

	function point:touch( event )
		print( system.getTimer(), 'point '..index, event.phase )
		if (event.phase == "began") then
			-- display.getCurrentStage():setFocus( point, event.id )
		elseif (event.phase == "ended" or event.phase == "cancelled") then
			-- display.getCurrentStage():setFocus( point, nil )
		end
		point.x, point.y = event.x, event.y
		return false
	end
	point:addEventListener( "touch", point )

	return point
end

local one, two, three = createTouchPoint(), createTouchPoint(), createTouchPoint()[/lua]

Hi horacebury,

thanks for spending some time with it.

The problem I [and others] get is the one from your point #3.
And although you advocate it as proper behavior, I still think this shows a lack of understanding of touch scenarios.

If setFocus worked as I expect it to [allowing a listener to receive events from multiple touches and locking some of them only to that object], then locking a listener to a single touch would still be easily handled by a single line of code and one flag. Using it the way I want to [locking multiple touches to a single object/listener] is a big pile of sh… hacks with countless error scenarios. And most of all, if someone wants to use a Runtime touch listener, they have to lose the ability to return true from their listeners, which again means a lot of conditionals in other listeners.
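
For reference, the single-touch lock I mean is just something like this [a sketch; obj is any display object]:

[lua]-- sketch: locking a listener to a single touch with one flag
local obj = display.newRect( 0, 0, 100, 100 )

function obj:touch( event )
	if (event.phase == "began" and not self.hasFocus) then
		self.hasFocus = true -- the one flag: ignore any further began events
		display.getCurrentStage():setFocus( self, event.id )
	elseif (event.phase == "ended" or event.phase == "cancelled") then
		self.hasFocus = false
		display.getCurrentStage():setFocus( self, nil )
	end
	return true
end
obj:addEventListener( "touch", obj )[/lua]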

So I agree that it works as you have described, but I disagree that this is proper behavior.

Ok, am I correct in thinking that you would like to both:

A) Lock multiple touches to a single object and have their touch events directed to that object regardless of whether they are over the focus object or not, just like the current setFocus does now with a single object / touch, but with multiple touches. AND…
B) Be able to receive multiple touches on the Runtime listener and return true from it, to indicate that the multiple touches have been handled.

Do you want to do both of those? Do you want to do both of those at the same time?

Hi horacebury,

No, I don’t expect the Runtime listener to work when setFocus is used on a different listener.
I don’t even see the point of using a Runtime listener [in my case, that is].

So you want to receive multiple touches on a single object?

If that’s not right, can you explicitly define what you want to do?

To clarify:

I want to setFocus a few touches onto a single object.


i also happen to be one of those developers who think setFocus is broken. while i think that horacebury has done a great job of outlining the current behavior, i’d agree with Krystian that the broken-ness surrounds point #3 of his list. yes, the current system works as engineered, however allowing only a single touch on an object stretches the truth of saying that the environment is multi-touch aware.

i think this point is evident in the various work-arounds i see to make a single object able to receive multiple touches. some of the indicators are:

  • creating multiple objects to act as touch proxies, shuttling touch events back to a master (why more objects on the screen? how many more will you need?) [this is over-engineered]
  • the way the code goes against the normal, proper methods (eg, not returning true/false when you normally would, etc) [this is bad coding style]

in short, everything should be transparent to the developer and none of those are.

i was recently tasked by someone to write a library which could do true multi-touch gestures using multiple fingers, like you’d find on an iPad in the Photos app
– pinch to scale
– rotate an object
– move an object

as we all can agree, these features are impossible to do with the current functionality of setFocus without some hacking to get them to work. as horacebury mentioned, he has worked to create that same functionality, but from looking at his code and examples here it requires a bit of magic to make it happen.

with our modern touch devices, i don’t think that the listed features are asking anything out of the ordinary – the dawn of Minority Report is upon us and UI innovations are just getting started. however, needing to change programming styles or add hacks to make them happen shows me that something isn’t right with the architecture. the thing is, it would only take minor changes to the current API to make it work differently, and it would work for all of our scenarios – that’s certainly a win-win.

so, in order to complete my project, i wrote my own touch manager to get around the situation. it works in part by tapping into the Runtime, but all of its workings are hidden from the developer. furthermore, it has a similar API to the regular Corona environment, so you don’t have to change from a proper coding style to make it work. aside from the different method calls there are no other changes needed in the code to make it work.
and, because it’s pure lua, you could inspect it to get the list of current touches being tracked and build a touch-map if so desired. the example already does a good job of showing a touch map.

if you need true multi-touch without wanting to think about it, then it’s definitely worth a look … while we’re waiting for a proper fix.

http://developer.anscamobile.com/code/dmc-lib-touch-manager
ps, i will say that since my TouchManager release a mere three weeks ago, Google Analytics already shows it as the most popular item in my Corona Library (second only to my Ghost/Monsters OO re-write). i don’t want to read too much into it, but i think it certainly indicates that there are other people interested as well.

Hmm, well, while I agree with many points and disagree with many points, I’m not going to trawl through each one - too long.

What I will say is that I think the Minority Report interface is cool looking (as everyone does) but that it would be horrible to use. We already have better. The problem is that we are coders and want that sort of ease-of-use in our code from an API but that the API has to provide the lowest level of access and allow us to create our own interface gestures.

Standard UI elements provide gestures which are simple and sensible, but we’re working with very non-standard elements which do not necessarily fit any set of rules save those we create ourselves.

My answer to the previous comment by @krystian6 is the code below, but your comment, @dmccuskey, got me thinking about the setFocus function itself.

Here’s the code answer I mention.

[lua]-- multitouchtest

-- enable multi-touch
system.activate('multitouch')

-- reference for easier use
stage = display.getCurrentStage()

-- the image to handle multiple touches - http://www.coronalabs.com/images/slides/community_1010x400_v2.jpg
local image = display.newImage( "community.jpg" )
image.class = "image"
image.x, image.y = display.contentCenterX, display.contentCenterY

-- manipulate the image based on touch point information
function image:manipulate()
	local sx, x, sy, y, c = 0, 0, 0, 0, 0

	-- sum the start and current positions of the touch points
	for i=1, stage.numChildren do
		local point = stage[i]
		if (point.class == "track") then
			c = c + 1
			sx = sx + point.start.x
			x = x + point.x
			sy = sy + point.start.y
			y = y + point.y
		end
	end

	-- apply the manipulation
	-- not going to put code here because it will appear messy.
end

-- function to create multiple touch tracking points
-- this function handles the began event, the inner function handles the other events (moved, ended, cancelled)
function createPoint( startEvent )
	-- only the began phase should create a tracking point
	if (startEvent.phase ~= "began") then return false end

	-- create a new tracking point, name it and set focus onto it
	local point = display.newCircle( startEvent.x, startEvent.y, 50 )
	point.class = "track"
	point.alpha = .4
	stage:setFocus( point, startEvent.id )

	-- record the start position of this touch point (copy the values rather than keeping the event table)
	point.start = { x = startEvent.x, y = startEvent.y }

	-- create a function to handle the other event phases
	function point:touch( event )
		if (event.phase == "moved") then
			-- follow the touch, then manipulate the image
			point.x, point.y = event.x, event.y
			image:manipulate()
		else -- this will be the ended or cancelled phase
			stage:setFocus( point, nil ) -- release the focus before removing the object
			point:removeEventListener( "touch", point )
			point:removeSelf()
		end

		-- indicate that the point handled the event
		return true
	end
	point:addEventListener( "touch", point )

	-- the return true for the began event
	return true
end

-- use runtime to capture initial touches
Runtime:addEventListener( "touch", createPoint )[/lua]

I was going to implement something cool in the image function, but that would just make the code messy and appear more complex than it is. Pinch-zoom-rotate is a complex function and it would be great to have that sort of gesture built in. I believe that level of gesture recognition needs a lot of wrapping and, in our environment, will not and should not be provided by Corona but by Apple, as the hardware interface vendor.

In short: There are so many ways to interact with the screen and we, as developers, want all of them and the ability to create new ones. There is a natural conflict there. It’s the same with creating Widgets. Either you define your own interaction model or you limit yourself to what is provided.

Attempting not to ramble: What if we define setFocus ourselves?..

[I am a big believer in not simply whinging about new functionality but demonstrating that it can and should be implemented a certain way. ie: Show Corona Labs how to do it a good way and they may well implement something better.]

If we define a setFocus which then does what we’ve talked about already - pretending to handle multiple touches for a single object - we could give the impression of increased control.

The problem I find here is that we would also need to redefine the function parameters. The current parameters, in a multitouch environment, allow for a nil argument to remove the current single touch ID. Attaching more touches could use the same function signature we already have, but removing individual touches by ID is more complex. What should the parameters be? I would suggest:

[lua]-- adds touches upon each other to a single display object
-- passing a nil eventId removes all touches from the display object
function setFocus( dispObj, eventId )

-- passing an eventId removes an individual touch from the object
-- passing a nil eventId is the same as calling setFocus( dispObj, nil )
function unsetFocus( dispObj, eventId )[/lua]

So now the problem is how to limit the number of touches which can be received by the display object in question. The original setFocus blocks subsequent user touches on the object - do we want the same blocking after, say, a maximum of 5 touches? I would say no, and that our touch event handler should simply pass the extra touch event down by returning false.
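
Here is a rough sketch of how those two functions might be backed in pure Lua (the focusMap table and the Runtime routing are my own invention, not part of the Corona API, and the routing only sees events that other listeners let fall through by returning false):

[lua]-- sketch: a multi-focus map managed in pure Lua
local focusMap = {} -- touch id -> display object

function setFocus( dispObj, eventId )
	if (eventId == nil) then
		-- remove every touch currently locked to this object
		for id, obj in pairs( focusMap ) do
			if (obj == dispObj) then focusMap[id] = nil end
		end
	else
		focusMap[eventId] = dispObj
	end
end

function unsetFocus( dispObj, eventId )
	if (eventId == nil) then
		setFocus( dispObj, nil )
	else
		focusMap[eventId] = nil
	end
end

-- route each touch event to its focused object, if any
local function routeTouch( event )
	local target = focusMap[event.id]
	if (target and target.touch) then
		return target:touch( event ) -- returning false lets the event fall through
	end
	return false
end
Runtime:addEventListener( "touch", routeTouch )[/lua]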

A remaining question would be this: would we also require the current setFocus to allow ‘return false’ to pass touch events down, instead of blocking (as it currently does) and behaving as if true had been returned? This, to me, is a bug - intended or not - but the only actual bug.

Btw, I’d just like to point out that (while I thought this was wrong) @Tom’s advice is very good: The Runtime listener can be used to receive all touch events if setFocus is used with return false…

The code below shows that either of the circles can have setFocus used to stop touch events from passing to other display objects, but that the Runtime listener will still receive the event if you return false.

If you return false and don’t use setFocus, all display objects under the touch point will receive the event.

[lua]-- enable multi-touch so several simultaneous touches can be tested
system.activate('multitouch')

stage = display.getCurrentStage()

function createCircle( size )
	local image = display.newCircle( display.contentCenterX, display.contentCenterY, size )
	image:setFillColor( math.random(0,1)*255, math.random(0,1)*255, math.random(0,1)*255 )
	image:setStrokeColor( 255, 255, 255 )
	image.strokeWidth = 10

	function image:touch( event )
		if (event.phase == "began") then
			stage:setFocus( image, event.id )
			print( 'image '..size, event.id, event.phase, 'return true' )
			return true
		elseif (event.phase == "moved") then
			print( 'image '..size, event.id, event.phase, 'return false' )
		elseif (event.phase == "ended" or event.phase == "cancelled") then
			print( 'image '..size, event.id, event.phase, 'return false' )
			stage:setFocus( image, nil )
		end
		return false
	end
	image:addEventListener( "touch", image )
end

createCircle( 200 )
createCircle( 100 )

function runtouch( event )
	print( 'Runtime', event.id, event.phase, 'return false' )
	return false
end
Runtime:addEventListener( "touch", runtouch )[/lua]