General Messaging/Listener question

In the old days, writing for Windows and OS/2, we had a message queue that we tapped into by including the message “name” in a switch statement.  There were thousands of messages that we never saw unless we specifically handled them.  We could post or send messages (placing them with different priorities) onto the message queue for our own communication, or to communicate with the various devices and services within the environment.

With Corona/Lua, we create an event listener that taps into the app’s message queue.  We intercept the message in the listener and, via the return value (true/false), either let it continue down the chain or end it there.
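A minimal sketch of that pattern (the event name and propagation rule follow the Corona docs; the handler body itself is just illustrative):

```lua
-- Illustrative sketch of the Corona listener pattern.
local function onTouch( event )
    print( "touch phase: " .. event.phase )
    -- Returning true stops the event from propagating further
    -- down the chain; returning false (or nil) lets it continue.
    return true
end

Runtime:addEventListener( "touch", onTouch )
```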

Runtime:addEventListener(“enterFrame”, <listener>) appears to follow the first scenario - meaning the messages are always there, timed by the fps value set in config.lua.  Does this hold true for the other listeners, or does adding the listener trigger its particular events to post?  I’m specifically looking at hardware-related events.

If the messages are always there and we just need to tap them, the coding is rather straightforward.  With GPS, you would just listen for the message and ignore/throw away any events outside your time or distance threshold.  If addEventListener actually starts that particular message stream, there would be reason to believe that removeEventListener would stop it - and other things associated with it (such as GPS signal acquisition) would be impacted.
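The “always there, just filter” approach would look something like this (a sketch; the 5-second interval is an assumed value, and event.time is falling back to os.time() defensively):

```lua
-- Sketch: listen continuously and discard location events that arrive
-- sooner than our own time threshold (an assumed 5 seconds here).
local lastEventTime = 0
local minInterval = 5  -- seconds; illustrative value

local function onLocation( event )
    if event.errorCode then
        print( "Location error: " .. tostring( event.errorMessage ) )
        return
    end
    local now = event.time or os.time()
    if now - lastEventTime < minInterval then
        return  -- outside our threshold; throw the event away
    end
    lastEventTime = now
    print( event.latitude, event.longitude, event.speed )
end

Runtime:addEventListener( "location", onLocation )
```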

Could one of the Corona engineers chime in here on how this works?  I believe it would have a big impact on how developers approach the use of these listeners.

Thanks

Dave

I’m not sure what the practical difference is between the messages “always being there” and their only being there if you are listening for them.  If a tree falls in the forest and there’s no one there, does it make a sound?

One case in which this does make a difference is GPS, which can use power when active (even though it’s not much power these days); we only request location updates from the operating system while there is a Lua listener to receive the messages.  For other hardware events the implementation is operating-system dependent and is transparent to the Lua code.

And, yes, if Runtime:addEventListener starts an operating-system process, the Runtime:removeEventListener of the last listener will stop it again.

Thanks for responding, Perry.  GPS is the specific issue - and more specifically, the actual acquisition of the satellite signal, rather than using cell or WiFi.  In a native app, we can set the minimum time and distance that must be met before a message is sent.  Corona SDK only has system.setLocationThreshold, which works to a limited extent.  I’ve been using it to try to turn distance traveled at a given speed into an effective “time” delay between messages.  It is not really that accurate.

The alternative is to call an addEventListener/removeEventListener combination on a performWithDelay of x seconds (removing after receiving the event).  The big question is: if Runtime:addEventListener starts an OS process and Runtime:removeEventListener stops it, do you ever get a consistent signal acquisition, or does each cycle start the acquisition process over again?  Or does the acquisition process continue after removeEventListener is called, in which case the timer on/off approach is the best method?
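The timer-based alternative described above might be sketched like this (the sampling rate value is assumed; the handler removes itself and re-attaches after the delay):

```lua
-- Sketch of the add/remove-on-a-timer approach: attach the listener,
-- remove it after the first fix, re-attach after the sampling delay.
local samplingRate = 5  -- seconds; illustrative value

local onLocation  -- forward declaration so the handler can remove itself

onLocation = function( event )
    if not event.errorCode then
        print( event.latitude, event.longitude, event.speed )
    end
    -- Got our sample; stop listening until the next interval.
    Runtime:removeEventListener( "location", onLocation )
    timer.performWithDelay( samplingRate * 1000, function()
        Runtime:addEventListener( "location", onLocation )
    end )
end

Runtime:addEventListener( "location", onLocation )
```

Whether this pattern restarts satellite acquisition on every cycle is exactly the open question in this thread.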

Thanks again

Dave

Which platform are you trying to optimize for?  And what are you trying to optimize?  Is the operating system telling you that your app is using a lot of power?  Also be aware that location accuracy varies wildly between devices and it’s easy to get too focused on the behavior of the device in front of you.

What do you mean by “the actual acquisition of the Satellite signal”?  Unless your app happens to be the only one on the device with access to the GPS (which seems very unlikely), the hardware should already be initialized.  If you’re trying to get fine grained location information this might be more of a concern but, again, I’d encourage you to determine that you have a problem before trying to eliminate it.

Everyone “knows” that GPS “kills your battery” but this is wisdom largely gleaned years ago and devices and operating systems are much more efficient about this now.

Running on both iOS and Android.  In the app, the user can designate a sampling rate in seconds (defaulting to 5).  I’m currently using system.setLocationThreshold, set from a calculation of the speed returned in the event * the sampling rate * a multiplier that gets me “approximately” close to the desired sampling rate.  The speed value appears acceptably accurate.  Theoretically, multiplying the returned speed (meters per second) by the sampling rate and passing that value to setLocationThreshold() should give me the spacing I’m looking for.  In reality, even multiplying that value by 10 generates an unbelievable number of markers (where there should be just 1) when the trip is played back.

Another thing I’ve found: on straight cellular, or with a phone set up as a hotspot and a WiFi-only tablet attached to it, the initial speed is 0.  After a device-dependent time - which I assumed (from documentation and Google searches) was the acquiring of the signal - it can take 15 to 45 seconds to start getting events with a speed value and decent location accuracy.
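For reference, the calculation described above works out to something like this (the multiplier is the empirical fudge factor mentioned; its value here is an assumption):

```lua
-- Sketch of the speed-based threshold calculation described above.
local samplingRate = 5    -- user-selected sampling rate, in seconds
local multiplier   = 1.0  -- empirical fudge factor; assumed value

local function updateThreshold( event )
    local speed = event.speed or 0  -- meters per second
    if speed > 0 then
        -- Distance expected to be covered in one sampling interval,
        -- scaled by the fudge factor; used as the distance threshold.
        local threshold = speed * samplingRate * multiplier
        system.setLocationThreshold( threshold )  -- meters
    end
end
```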

I’m currently throwing away any event with a speed of 0, and I find that I also get duplicate events when the speed is 0.

Currently, I initially set the Accuracy to 5 meters.

If you are saying that using the add/removeEventListener method on a timer set to the sampling rate is the way to go and I’ll still get the speed (which I need as part of the tracking), then bingo - I’ll go that way.  I put in a feature request for a system.setLocationTime() type function to match up with system.setLocationThreshold().  Those two values combined can be used in the Android LocationManager to adjust the event frequency, which is what I assume you guys eventually map to.

In any case, I appreciate the help, and again, if you think the add/remove concept is the way to go, I’ll take your advice.

Dave

There is a “startup cost” to Location Services.  Because GPS takes time to acquire a fix (more if you’re the first app to access the GPS), the system will always find a location via WiFi/cell first.  You can see this in mapping apps: the blue accuracy circle around your starting point is large at first, and after a few seconds it shrinks to the size of your dot.  Turning Location Services on and off frequently is a pretty expensive operation.
