Hi dev,
I'm missing a reply to my post about getting "timed out" messages after approx. 30 seconds.
MySQL shouldn't do that, so can I hope it's in your API, and that it is configurable?
I regularly pull 500 top-N results from a table with 3 million+ rows in well under 1 second using MySQL and a simple network.request(). This is basic stuff for a MySQL instance.
Yes, I know, that's peanuts for any SQL server.
What happens is that I send a rather large payload to the SQL server in the cloud (a DigitalOcean droplet), and sometimes I get a timeout returned from the server via the Coronium API.
I wonder how this is timed by the API: from the moment the call is made and the payload upload starts, or from the moment the payload has been delivered? I'm on a very slow internet connection, so my upload takes some time.
Once the payload has been delivered, the server-side SQL logic and returning the result don't take long.
I've since programmed around this issue (I hope), but there is still a thought at the back of my head: could this happen on a very congested connection?
Hi,
Sorry for the delay, I have been down sick. The API is simply using the Corona network.request method to push the payload, so there is nothing special going on in that regard.
Does the entire payload actually reach the server even though you get a timeout message? There is nothing off the top of my head that would cause a timeout in the Coronium API, but I can take a look again.
-dev
Hi dev,
No worries, hope you feel better.
"Does the entire payload actually reach the server" is the question I've not been able to answer.
I think for now, since this is nothing to do with your API, we should let it rest, and if it shows up again, maybe dig some more.
I re-wrote my code to handle this, so it's working now without timeouts.
Get well!
Btw, I noticed you updated your docs page; it's easier to navigate now.
Just FYR,
This is a dangerous design: if not handled well, it could cause your app to hang forever, waiting for a response that will never arrive.
Thanks for the input SGS.
I didn't avoid timeouts by extending anything; I just pull less data, so it is much less likely to happen.
There is still code in place to deal with timeout replies.
Is the timeout error output in the Corona console, or is it something in the Coronium Core server log?
-dev
It's being returned from the server-side API, where I have this code after the SQL select query:
if not result then
    core.log(err) -- record the MySQL error in the server log
    return core.error(err) -- return the error to the client
end
On my device I pop up a native alert showing this error message.
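Roughly like this, as a sketch (the API name getRecords, the field names, and the require path are placeholders here, not my exact code, and may differ by plugin version):

```lua
-- Client side (Corona): call a server-side Coronium API method and
-- surface any error (including "Timed out") in a native alert.
local core = require("plugin.coronium-core")

core.api.getRecords({limit = 500}, function(evt)
  if evt.error then
    -- Network errors and server-returned errors arrive here the same way.
    native.showAlert("Server Error", tostring(evt.error), {"OK"})
  else
    -- evt.result holds the query result on success.
    print("rows received:", #evt.result)
  end
end)
```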
I don't know if the server logs are kept on file any longer than what is displayed, or where that file might be stored, but currently the logs I get on screen only go back 3 days.
Hi,
Interestingly enough, Corona's network.request has a default network timeout of 30 seconds. That is why I am curious whether the error is coming from the Corona client or not; if it is, I can add a timeout parameter to the client plugin.
I wasn't able to determine from this thread what the exact error message was; can you post it again? The Corona network errors are piped through the same error handler as the server-returned errors, so I would need to see the exact message to tell where it might be coming from.
-dev
Hi,
So I ran some tests. The 45,000 records seem to make it to the database fine, but the timeout is coming from the Corona client. I added a parameter to adjust the timeout, and if I increase it to 60 seconds, I no longer get the timeout error.
I will push the updated plugin, but depending on various factors you might never know what a "good" timeout is, so breaking up the payloads would still probably be a good idea. You could, of course, just set a really high timeout value and hope for the best.
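For reference, here is a minimal sketch of what the client is doing under the hood: Corona's network.request params table already accepts a timeout key (default 30 seconds), so the plugin parameter just passes that through. The URL and payload here are placeholders:

```lua
local json = require("json")

-- Stand-in for the large dataset being uploaded.
local bigTable = {}

local function listener(event)
  if event.isError then
    print("Network error:", event.response) -- "timed out" shows up here
  else
    print("Response:", event.response)
  end
end

local params = {
  headers = {["Content-Type"] = "application/json"},
  body = json.encode({records = bigTable}),
  timeout = 60, -- seconds; raise for big uploads on slow connections
}

network.request("https://example.com/api", "POST", listener, params)
```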
-dev
Hi dev,
I just tried to provoke the response by severely limiting my already limited upload speed while pushing up one of the largest records. It took a lot longer than 30 seconds, and yet I got no error.
However, your idea about 60 seconds is a good one.
Will you document how I can change it, unless it becomes the default?
Thanks for the support!
I hope many more than me get on board with using Coronium Core.
That's interesting. Are you able to recreate the error you were receiving earlier? I'm unable to provoke the error when adjusting the timeout on my end.
And yes, it will be adjustable and documented, of course. The documentation is more for me than anyone, really.
-dev
Hi,
In regards to pulling large data, a number of factors could be at play. One could be the proper indexing of the database fields.
There is a setting for the MySQL timeout, and I am adding the ability to adjust that on the server-side module, but that won’t be available until the 2.1.0 release, which I am hoping to deploy by the end of the week.
-dev
I’ll try provoking more tomorrow but I am 90% sure the message was “client timed out”.
I use your documentation all the time.
Looking forward to the updates!
Now I am off to bed; it's 1:00 AM here and I need my day job to pay the bills.
Cheers!
Anaqim
Hi again, I managed to provoke the timeout, and it works like this:
The client sends a large batch of data to my own API on the server (core.api…).
The client-side callback, after some time (probably 30 sec), receives an event error that says "Timed out".
Looks like your idea that this may be caused by network.request may be right.
anaqim
A question: do you know if there is a way to implement compression on these transfers?
It would make a big impact on simple strings and arrays, I'd think.
There is also the possibility of zipping the payload before uploading, but that's a lot of hassle, and I suspect unzipping it doesn't work server-side.
Just some thoughts
Something like this perhaps…
It doesn't appear this was ever implemented, so that is probably not a possibility at this time.
Also, pushing such a large payload is something of an edge case for Coronium Core in general, so that is part of the issue.
Using some other type of compression may be an option, but I would need to research it.
-dev