Multiple Simultaneous Button Press/Release Bug

Using the ui.lua code to make buttons, or just working with plain display objects, and using either a combined listener or separate listeners:

Once you press one button down, you cannot get events for pressing a second button down.

Losing this capability is a show-stopper for two-player games, which require pressing and holding buttons down simultaneously.

Scott [import]uid: 5659 topic_id: 1128 reply_id: 301128[/import]

Case #229. This worked in beta 3.
[import]uid: 54 topic_id: 1128 reply_id: 2899[/import]

Wow. So I know you guys want to encourage folks to use new versions of the beta, but given that you are expecting people to produce apps with your betas, I would suggest you not shut off previous versions… at the least you should keep the current version minus one alive.

This type of bug right here is a prime example of why you would want to actually do this. In my case I can’t do any more development on this particular project because you’ve shut off beta 3. I’m at the point where any further development depends on being able to use the controls as two people would.

Please reconsider this move. It’s not only extremely frustrating, but costly to developers.

Scott
[import]uid: 5659 topic_id: 1128 reply_id: 2902[/import]

Interesting. This was assigned bug #229, and I’ve seen bugs #230 and #232 assigned a few days ago. This indicates that this is a known bug (at least known to Ansca). It would sure be nice for developers to have access to the bug list so we don’t spend hours chasing something that is already known.

There were a number of postings over the weekend on this problem and I was wondering why there was silence from the Ansca tech team. Now I understand why.

Tom [import]uid: 6119 topic_id: 1128 reply_id: 2914[/import]

While we address this issue, we have made a special build of Beta 3 available on the server.

To create device builds using Beta 3 capabilities rather than Beta 4, use the following custom build parameter in “build.settings”:

settings = {
    build = {
        custom = "4832a99408fe61baaaae747262977616"
    }
}

This will work even with the Beta 4 SDK. I’ve tested this myself, and it re-enables multiple button presses in the cases where it used to work in Beta 3. [import]uid: 3007 topic_id: 1128 reply_id: 2916[/import]

Scott, to be honest there were two problems that led to this mini-storm. First, we failed to extend the beta 2 timeout, so beta 3 actually timed out well before we expected it to. Second, the multi-button touch use case isn’t actually documented and isn’t in our test cases. So we missed that one completely. We’re sorry about this. Obviously, we’ll address both of these problems.

Hopefully the custom build will get you back on track. [import]uid: 54 topic_id: 1128 reply_id: 2917[/import]

It’s all a learning process and things get missed. :)

I’m just glad you guys picked up the ball here. I got re-tasked tonight, but I’ll find out later on or tomorrow if the b3 extension works.

I’m really hoping “multi-touch” can be extended so we can track each finger via its initial touch. This is definitely a needed thing for interactive games like air hockey, where you want to be able to put your finger on your little doohickey and move it around the area (while your opponent does the same thing).

I can see two ways of doing this. One is to allow an object to get tethered to its initial press, so all events with respect to that initial press go to the proper listener (kind of like when you set focus for an object, only you’d need to be able to set focus on multiple objects). The second is similar to how multi-touch works now, only there would need to be additional “began” events for each finger. With this method, we simply manage all the events ourselves on one big background area.

I’d prefer the first way with multiple objects being able to claim focus for their respective press events.

At any rate, glad to hear the buttons will work again soon! :) I’d prefer to be able to track multiple presses (and moves), but I can get by with left/right buttons for now!

Scott [import]uid: 5659 topic_id: 1128 reply_id: 2930[/import]

It now works with your custom line in build.settings.

Thanks, I was able to get some work done. Tell me this is fixed for the next beta! :)

Scott [import]uid: 5659 topic_id: 1128 reply_id: 2979[/import]

Glad this unblocked you for now!

This is actually not a simple fix, because the Beta 4 behavior is correct: an object with focus has all touches assigned to it until it loses focus. Without this, it would be hard to track multitouch gestures in many real-world cases. This touch-capturing also matches general iPhoneOS behavior, where you’ll notice that you can start dragging a slider (say, the brightness slider in Settings) and then keep dragging it even if your finger has moved off it.
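
In code, the standard focus pattern looks roughly like this (a minimal sketch only; the handler, flag, and myButton names are just illustrative, but display.getCurrentStage():setFocus() is the actual call):

local function onButtonTouch( event )
    local button = event.target

    if event.phase == "began" then
        -- claim this touch: all further touch events are delivered
        -- to this button, even if the finger moves off of it
        display.getCurrentStage():setFocus( button )
        button.isFocused = true
    elseif button.isFocused then
        if event.phase == "ended" or event.phase == "cancelled" then
            -- release focus so other objects can receive touches again
            display.getCurrentStage():setFocus( nil )
            button.isFocused = false
        end
    end

    return true
end

myButton:addEventListener( "touch", onButtonTouch )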

However, the multi-button-simultaneous case requires different behavior, and the concept of “focus” isn’t really applicable there.

In short, Beta 3 had a bug – focus was broken when multitouch was activated. The custom build restores the bug temporarily. But we can’t just revert to this behavior forever, because I’m pretty sure it will break other stuff involving gestures.

What we need (I think) is a way to toggle focused and non-focused touch handling, depending on what you’re trying to achieve. I was hoping for a really quick fix by turning off “setFocus()” in the ui.lua button library, but that didn’t help; the issue seems to run deeper than that. [import]uid: 3007 topic_id: 1128 reply_id: 2981[/import]

Just so you know, with the beta 3 behavior I am *not* using ui.lua for my buttons. So I am not setting focus; my “buttons” are just display images.

It seems to me (having likely a naive outside view) the best option is “finger tracking”… by that I mean when I put a finger down, that becomes an area which is tracked. It would be a layer in between the raw touch events and the events dispatched to the app. This would allow the events dispatched to the app to carry a touch index, allowing the app not only to associate a button with a touch index, but also to track “fingers” if it wants to.

A setup like that seems to solve the issues on the app end and ties them up in a nice neat bow.

It essentially allows one to get events in two different fashions:

  1. Via object. If I have a listener on a specific object then I get events pertaining to that object. If the press begins on that object then all events for that finger go to that object… even if the finger moves off of it. So you have the following events: “pressed”, “released”, “left”, “entered”, “moved” and “cancelled”. “moved” only fires if “movementTracking” has been enabled for that object.

  2. Via runtime. This is for gestures and general “interface wide” events. These events also go to any objects that have requested them.

Both models require per-finger tracking, as the idea of setFocus kind of goes by the wayside since each finger has its own focus (assuming it “pressed” on an object that wants it). In fact, that can be how focus per finger is determined… if the object receiving the event returns true, focus goes to that button (still allowing other fingers to put their focus onto a different button).

So focus changes from app-wide to per-finger.
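
To sketch what I mean with today’s pieces (a rough sketch only; the names are made up, and I’m assuming each touch event carries a stable per-finger id like the event.id the Beta 3 build appears to expose):

-- track every active finger in a table keyed by its touch id
local activeTouches = {}

local function onScreenTouch( event )
    if event.phase == "began" then
        activeTouches[event.id] = { x = event.x, y = event.y }
    elseif event.phase == "moved" then
        local touch = activeTouches[event.id]
        if touch then
            touch.x, touch.y = event.x, event.y
        end
    elseif event.phase == "ended" or event.phase == "cancelled" then
        activeTouches[event.id] = nil
    end
    return true
end

Runtime:addEventListener( "touch", onScreenTouch )

Each listener could then look up “its” finger in that table instead of relying on a single app-wide focus.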

Just my thoughts there.

Scott [import]uid: 5659 topic_id: 1128 reply_id: 2987[/import]

Looking at the way touch(multitouch) works in beta 4/5, and at the patch that enables beta 3 mode, I was wondering if that isn’t the simple solution. My guess is you don’t need both modes at the same time (or can code around it), so my suggestion is to leave touch(multitouch) as it is now and add touch(multisingle) (or something like that) to enable the Beta 3 multitouch mode. That way the developer can choose the way he/she wants to handle multitouch.
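
Something like this (to be clear, “multisingle” is just my invented placeholder name; only “multitouch” exists today):

-- current behavior (real API): focus-based multitouch
system.activate( "multitouch" )

-- suggested alternative (hypothetical, not a real API):
-- independent per-finger events, Beta 3 style
--system.activate( "multisingle" )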

I wrote a simple test program that displayed the event.id and event.phase. With the Beta 3 patch I saw a unique ID for each finger touch (I tried 1 to 3), along with began/moved/ended events for each finger touch/movement.

Tom [import]uid: 6119 topic_id: 1128 reply_id: 3127[/import]

I would much rather have per finger tracking. That enables a whole plethora of interaction.

Right now when I play with the multitouch example, it ends up that when I put down my first finger and then my second, I can see it tracks parts of it, but when I lift my first finger the entire multitouch event ends, even though my 2nd finger is still down.

It just seems quite the kludge to do it in this manner. If we had per-finger (or per-press, if you like) tracking, there would be no need for different events for anything, only the addition of an index/counter so we can identify a “began” event with a specific finger/press.

It actually simplifies a lot of things.

There are some places for nice high-level interface methods. Mouse/finger/touch handling is not one of them: there are so many different ways to use those events that higher-level interfaces just get in the way most of the time.

My 2 pennies, of course. :)

Scott [import]uid: 5659 topic_id: 1128 reply_id: 3137[/import]

Hey Scott,

Are you talking about the current Multitouch or the Beta 3 enabled Multitouch?

I played with the Beta 3 version and it looks like it’s doing what you want. I see a “began” touch event for all fingers, and if the first finger goes away the other fingers still generate “moved” and “ended” events. Each finger has its own event.id that doesn’t change as long as the finger is still touching the screen.

For the current Multitouch (beta 4/5) the events end when the first finger goes away. I think this is the right response if you are looking for pinch/zoom gestures.

Here is a modified version of MikeHart’s test code. You need to add the build.settings patch to see the Beta 3 Multitouch mode.

  
-- activate multitouch
system.activate( "multitouch" )

local tPrevious = 0
local touchCnt = 0 -- total number of touch events seen

txtfps = display.newText( "fps:99", 240, 20, "Verdana-Bold", 16 )
txtfps:setTextColor( 255, 255, 255 )

txtTouch = display.newText( "cTouches: 99", 40, 20, "Verdana-Bold", 16 )
txtTouch:setTextColor( 255, 255, 255 )
txtTouch:setReferencePoint( display.TopLeftReferencePoint )
--txtTouch.text = " ctouches: "..0

txtTouchp = display.newText( " ptouches: 99", 40, 60, "Verdana-Bold", 16 )
txtTouchp:setTextColor( 255, 255, 255 )
txtTouchp.text = " ptouches: "..0

txtTouchc = display.newText( "touch cnt: 0", 40, 100, "Verdana-Bold", 16 )
txtTouchc:setTextColor( 255, 255, 255 )

txtTouchType = display.newText( "previousTouch Type: nil", 45, 400, "Verdana-Bold", 16 )
txtTouchType:setTextColor( 255, 255, 255 )

txtTouchPhase = display.newText( "touch Phase: begin", 45, 440, "Verdana-Bold", 16 )
txtTouchPhase:setTextColor( 255, 255, 255 )

txtTouchID = display.newText( "touch ID: 999999", 50, 360, "Verdana-Bold", 16 )
txtTouchID:setTextColor( 255, 255, 255 )

local finish = function( target )
    target.parent:remove( target )
end

-- Runtime touch listener (not an object listener)
function onTouch( event )
    -- when multitouch is active, the event contains an array of all
    -- touch events that have changed since the last touch event;
    -- these should all share the same phase
    local touches = event.touches

    -- ...and an array of all touches that are still touching the
    -- screen from previous events
    local previousTouches = event.previousTouches

    touchCnt = touchCnt + 1

    print( "txtTouch.x = " .. txtTouch.x .. ", txtTouch.y = " .. txtTouch.y ) -- **debug
    txtTouch.text = "ctouches: " .. tostring( touches ~= nil and #touches )
    print( "txtTouch.x = " .. txtTouch.x .. ", txtTouch.y = " .. txtTouch.y ) -- **debug
    txtTouchp.text = "ptouches: " .. tostring( previousTouches ~= nil and #previousTouches )

    txtTouchc.text = "touch cnt: " .. touchCnt
    txtTouchType.text = "previousTouches Type: " .. type( event.previousTouches )
    txtTouchPhase.text = "touch Phase: " .. event.phase
    txtTouchID.text = "touch ID: " .. tostring( event.id )
end

local onFrame = function( event )
    -- note: despite the label, this shows milliseconds per frame, not fps
    local tDelta = event.time - tPrevious
    tPrevious = event.time
    txtfps.text = "fps: " .. tDelta
    collectgarbage( "collect" )
end

-- register the listeners
Runtime:addEventListener( "touch", onTouch )
Runtime:addEventListener( "enterFrame", onFrame )

Some of the comments may not be correct for the Beta 3 Multitouch mode.

Tom [import]uid: 6119 topic_id: 1128 reply_id: 3139[/import]

Hey Scott,

I re-read your last post and I guess I missed your point with my last response.

In my view the Beta 3 Multitouch does give you multiple finger tracking. The current Multitouch does seem suited for the way I would use it in my applications: touching and pinch/zooming objects. I think the way it’s implemented makes that pretty simple to do, because everything is triggered off the first finger touch, which removes a lot of the work for the programmer (see the Multitouch sample code in the SDK).

For the times when I do want to track individual finger touches, the Beta 3 Multitouch looks like it would be best.

I was suggesting the two Multitouch modes thinking it would be easy to implement in the existing SDK because the code already exists to implement both. In addition, programmers can test out both methods now to verify that it provides the needed flexibility.

Maybe an all-in-one solution can be found, but my fear is that it will take some time, and it seems that enabling the Beta 3 patch removes the new features/fixes in Beta 5 (can Ansca confirm that?). Also, the patch doesn’t allow the Multitouch mode to be changed within the program if a different mode is needed for a particular screen.

Just my 2 cents too.

Tom [import]uid: 6119 topic_id: 1128 reply_id: 3140[/import]

Ahhh, I’ll take a look again at what you have (or just test it in my app) with the beta 3 functionality.

If they can enable both kinds and it does what you indicate, then that would really be pretty dang awesome, as it would let us do things like an air hockey game where you need to be able to quickly move your blocker around your end of the “ice”. This involves handling all the events yourself, and you can’t do it by setting focus on both blockers (since only one at a time can have it)… so you set the focus on your arena and manage the events yourself.
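
Roughly the kind of thing I mean (all names here are made up, and it assumes the Beta 3-style stable per-finger event.id):

-- hypothetical air-hockey sketch: each finger "grabs" the blocker it
-- began nearest to; arena, blocker1 and blocker2 are assumed to be
-- existing display objects
local grabbedBy = {}  -- event.id -> blocker

local function nearestBlocker( x, y )
    local d1 = ( x - blocker1.x )^2 + ( y - blocker1.y )^2
    local d2 = ( x - blocker2.x )^2 + ( y - blocker2.y )^2
    if d1 < d2 then return blocker1 else return blocker2 end
end

local function onArenaTouch( event )
    if event.phase == "began" then
        grabbedBy[event.id] = nearestBlocker( event.x, event.y )
    end

    local blocker = grabbedBy[event.id]
    if blocker then
        if event.phase == "moved" then
            -- each player drags their own blocker independently
            blocker.x, blocker.y = event.x, event.y
        elseif event.phase == "ended" or event.phase == "cancelled" then
            grabbedBy[event.id] = nil
        end
    end
    return true
end

arena:addEventListener( "touch", onArenaTouch )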

If that really does work, I hope they’ll give us access to it in a later release.

Scott [import]uid: 5659 topic_id: 1128 reply_id: 3145[/import]

If you run my posted app it shows the event phase and ID for the finger touch/movements. You can create a version without the build.settings file for Beta 4/5 testing and with the file for Beta 3 testing and see the differences in how it handles the second+ fingers.

Tom [import]uid: 6119 topic_id: 1128 reply_id: 3146[/import]

Correct, the Beta 3 build setting is all-or-nothing: it literally builds all code under Beta 3, with nothing from Betas 4 or 5. [import]uid: 3007 topic_id: 1128 reply_id: 3149[/import]

@evank can you give us a status update on where this is going?

I understand wanting to support pinch/zoom in a high-level interface, but there is a definite need for finger-level tracking as well. I keep referring to a fictitious “air hockey” game, but that is just one aspect of all of this.

Is the current multitouch bugged, and will it revert to the way it worked in beta 3? Or are you going to add a new activation option for handling events in the beta 3 fashion? *Or* are you going to rework the touch system altogether to provide, by default, several layers we can interface with, allowing us to capture events at the level we need them? The latter would be similar to most systems I’ve worked with.

E.g. if we “listen” on the Runtime, we get raw touch events including a touch id that tracks the specific finger that touched. If we “listen” at the object level, we get events like we do now, but also this new touch id. Enabling multitouch changes the behavior at the object level, but not at the Runtime level. This would allow us to get the best of both worlds. Returning true at the Runtime level would stop propagation of the event, like you already have set up for objects.
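
Sketching the shape of that (hypothetical behavior, not current API; addEventListener is real, but the per-finger guarantees are the proposal):

-- Runtime level: raw per-finger events, untouched by multitouch mode
Runtime:addEventListener( "touch", function( event )
    -- event.id would identify the specific finger in every phase
    print( event.phase, event.id, event.x, event.y )
    -- returning true here would stop propagation to objects
    return false
end )

-- object level: the events we get now, plus the same per-finger id
myButton:addEventListener( "touch", function( event )
    if event.phase == "began" then
        myButton.touchId = event.id  -- tie this button to this finger
    end
    return true
end )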

Sincerely hope you’re going to give this, or a system like it, some thought. Touches are the lifeblood of interactive apps.

Scott [import]uid: 5659 topic_id: 1128 reply_id: 3152[/import]

>>Is the current multitouch bugged, and will it revert to the way it worked in beta 3?

Scott,

Eric said before that the beta 3 version was actually bugged. He said it is now working like it should. [import]uid: 5712 topic_id: 1128 reply_id: 3160[/import]

He (they) also added the custom support for beta3 because of this particular problem.

That’s why my post was asking more than just which is the correct behavior.

The idea of not being able to make an app where two people can have their fingers on the screen at the same time is just… well, it’s silly with the iPad in the picture.

Since the custom tag has continued to be valid with beta 5 available, my hope is that there’s additional work being done to support such an integral part of any app. That was the crux of what I was asking.

Scott [import]uid: 5659 topic_id: 1128 reply_id: 3164[/import]