Coronium Feedback

Hi,

Similar to the tips post, perhaps there is a need for a thread where we can post requests, error messages, feedback, etc., instead of each dev making a new post each time.

Develephant can then also perhaps post info about implementation, fixes, and replies in general, and we can all track the status of issues?

Just delete this post if you don't agree.

Anaqim

There seems to be an API limit on the size a table can have before an error is triggered.

I've seen this before when sending in a body using SOAP, so perhaps it is a related issue.

While it is possible to design a workaround, it would be great if the Coronium API could automatically split and merge the payload instead.

EDIT 1 - I am uncertain about this, so I will check further and edit this post once I have something conclusive.

EDIT 2 - Yes, there is a limit, and it is very small, much smaller than the SOAP body allowed.

To clarify, I am sending in a table with about 20 columns.

When using a SOAP body I was able to send in around 1000 rows per call.

With Coronium the limit sits somewhere between 20 and 25 rows.

That is a major bump in the road for me, so an automatic API split and merge would be a godsend.
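In the meantime I am working around it by chunking the rows client-side before sending. A rough sketch of the idea; sendBatch here is just a placeholder name for whatever call actually pushes the rows to the server:

-- split a large array of rows into smaller batches before sending,
-- staying under the observed 20-25 row limit
local BATCH_SIZE = 20

local function sendInBatches(rows, sendBatch)
  local batch = {}
  for i = 1, #rows do
    batch[#batch + 1] = rows[i]
    if #batch == BATCH_SIZE or i == #rows then
      sendBatch(batch)  -- placeholder: whatever call sends one chunk of rows
      batch = {}
    end
  end
end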

A max string size issue perhaps?

EDIT 3 - I tried it with a table with only a single column and got the same error, so maybe string length is not the issue after all?

EDIT 4 - I tried encoding/decoding the tables as JSON in a feeble attempt to see if it matters, but it does not.

This is the log entry:

2017/09/23 08:12:48 [warn] 1596#0: *231 a client request body is buffered to a temporary file /usr/local/coronium/nginx/client_body_temp/0000000015, client: neutralized IP, server: , request: "POST / HTTP/1.1", host: "neutralized IP:10001"

2017/09/23 08:12:48 [error] 1596#0: *231 lua entry thread aborted: runtime error: [string "coronium.input"]:0: Expected value but found invalid token at character 1

stack traceback:

coroutine 0:

        [C]: in function ''

        [string "coronium.input"]: in function 'request'

        content_by_lua(coronium.locations.conf:111):2: in function <content_by_lua(coronium.locations.conf:111):1>, client: neutralized IP, server: , request: "POST / HTTP/1.1", host: "neutralized IP:10001"

Hi,

I'll take a look; it could be the client buffer setting in nginx.
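If you want to poke at it before an update lands, the relevant nginx directives look roughly like this; the values are only illustrative, and where the config file lives depends on your Coronium install:

# sizes are examples only; tune them for the server's memory
client_body_buffer_size  1m;   # bodies larger than this are buffered to a temp file
client_max_body_size     8m;   # hard cap on the request body size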

Thanks for reporting.

-dev

Hi,

I really would prefer not to have to split up my tables and make separate parallel API calls.

I've halted coding until I hear from you on this issue.

Thx!

Hi,

There were a couple of issues regarding the temp file that I've fixed up. I'll be pushing an update a little later today after I add a few other pending changes.

Thanks again for reporting the issue.

-dev

There should be no need to do that after the update.

-dev

Thanks!  :slight_smile:

Hi,

An update is now available here. It should solve your issue. I will also explain a bit later on how to tweak the settings for servers with more memory. Currently I have it set for a baseline installation.

Thanks again for reporting.

-dev

Updated and tested, and it works like a charm.

I appreciate how you also keep the documentation up to date.

Now I can push on with my current project.

Thx again!  :smiley:

Hi,

Glad to hear it worked. Let me know if anything else comes up.

-dev

Hi again,

I think I might have found some strange behaviour I cannot explain.

On both PC and Android, this code works fine:

local function coreLogin()
  local function callback(event)
    if event.error then
      report("coreLogin "..event.error)
    else
      if #event.result == 0 then
        coreAddUser()
      else
        flag.coreLoggedIn = true
      end
    end
  end

  local data = {
    db = "tr",
    tbl = "users",
    columns = {"hash"},
    where = "hash='"..par.hash.."'",
    distinct = true
  }

  core.mysql.select(data, callback)
end

Yet the following one works on PC but not on Android; that is, it runs, but I never get any data returned and the array remains empty. I tried adding a long delay just to check if it's a timing issue, but it's not.

local function callback(event)
  if event.error then
    report("coreGet "..event.error)
  else
    artbase = table.copy(event.result)
    for i = 1, #artbase do
      print(artbase[i].name)
    end
    filterArt()
  end
end

core.mysql.select({
  db = "tra",
  tbl = "art",
  columns = {"artid","sortid","name","cvr","stat","able"},
  where = "hash='"..par.hash.."'",
  orderby = { name = "ASC" }
}, callback)

I am not able to see any difference between these two, other than that the second one is supposed to return a table with 24 rows and 6 columns, which should be trivial to handle.

I've never had any such "timing" issues across devices, so I suspect Coronium, it being the new kid on the block, so to speak.

anaqim

EDIT - I know I'm using table.copy, but even referencing event.result directly does not work on Android. Besides, after I copy, I don't touch the source data until everything is refreshed. And it works well on the PC.

Hi dev,

I found the error, or rather the fault; it turned out to be a consequence of something else on Android.

I was using HMAC-MD4 for a prerequisite parameter, and it seems Android didn't like it (or support it).

Changing to a different algorithm fixed the issue.
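For anyone else who hits this, the change was just swapping the digest in Corona's crypto library; something like the following, where the value and key are placeholders and SHA-256 is simply the algorithm I switched to:

local crypto = require("crypto")

-- placeholders: the real parameter and key come from my own code
local someValue = "value being signed"
local secretKey = "my secret key"

-- old: HMAC-MD4, which the Android build would not accept
-- local hash = crypto.hmac(crypto.md4, someValue, secretKey)

-- new: a more widely supported digest
local hash = crypto.hmac(crypto.sha256, someValue, secretKey)
print(hash)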

Cheers!

Hi,

Sorry about the delay, but I’m glad to hear it got worked out.

-dev

Hi dev,

When doing some larger SQL operations, the API is returning a "Timed out" error message.

There is nothing wrong happening on the server; it completes the operation.

I didn't time it, but it feels like 10-15 seconds before it happens.

Is there a parameter I can set to give the server more time before triggering this?

Anaqim

EDIT - Perhaps it's closer to 30 seconds before the timeout happens.
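For what it's worth, 30 seconds matches the default timeout of plain Corona's network.request, which can be raised through the params table; whether the Coronium client exposes the same knob is really what I'm asking, so treat this as a guess on my end:

-- plain Corona: the default network timeout is 30 seconds and can be raised
-- through the params table of network.request
local function listener(event)
  if event.isError then
    print("request failed")
  else
    print("response: " .. tostring(event.response))
  end
end

network.request("https://example.com/test", "GET", listener, { timeout = 120 })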

One more thing,

Would this provider work with Coronium?

https://www.upcloud.com/

Anaqim

Hi again,

I'm using the SQL side on DigitalOcean and am experiencing network transfer speeds that vary quite a lot and come nowhere near maxing out my 20 Mbit connection. A simple select from an indexed table with 45,000 rows, requesting only a column with a 22-digit ID, takes around 45 seconds to transfer.

I've tried upscaling the server as well as rebooting, but the results are inconclusive and speeds remain low.

My question is: is it possible that some process in your API could affect the speed of large transfers?

Anaqim

Hi Dev,

Where are you at?  :slight_smile:

Hi,

Are the requests being initiated from a mobile client? Coronium is built with mobile in mind, so pulling 45000 rows is not a standard use case.

There are a number of possibilities as to why; one may be the rate limiting. Do you see any errors in the log about that?

-dev

I don't know offhand, but I can take a look as soon as I get a chance. As long as they support Ubuntu 16, then most likely yes, but there are a few specific install commands in the installers that are custom to both DigitalOcean and Amazon.

-dev

Hi mate,

Regarding the 45,000-row pull, it was done on a Win32 build.

I've been in contact with DigitalOcean, who suggested that it might be related to my SQL query.

I tried upgrading the server with more RAM, but it didn't make a difference.

I've come to the conclusion that, whatever the reason, the amount of data is simply too much, so I've adapted my code to pull much smaller amounts on demand, with basically instant delivery, so the practical issue is solved.
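The on-demand approach is nothing fancy: each call just narrows the where clause so only a small slice comes back. A rough sketch, reusing the table from my earlier snippet purely for illustration:

-- fetch a small slice of rows on demand instead of the whole table;
-- the db/tbl/columns/hash filter are reused from my earlier example
local function fetchSlice(firstId, lastId, onDone)
  local function callback(event)
    if event.error then
      report("fetchSlice " .. event.error)
    else
      onDone(event.result)
    end
  end

  core.mysql.select({
    db = "tra",
    tbl = "art",
    columns = {"artid", "name"},
    where = "hash='" .. par.hash .. "' AND artid BETWEEN " .. firstId .. " AND " .. lastId,
    orderby = { artid = "ASC" }
  }, callback)
end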

My question was whether you believe something in your API could slow this down; I'm not saying it is.

I'm trying to bridge my lack of knowledge and skill with some logical thinking  :smiley:

Regarding the provider, no rush; I managed to do some speed tests and DO delivers, so I'm satisfied so far.

Anaqim