Interruption possible when writing to a file?

Hello,

is it possible that the app’s execution gets interrupted/terminated while writing to a file, as in the following example:

-- Data (string) to write
local saveData = "My app state data"

-- Path for the file to write
local path = system.pathForFile( "myfile.txt", system.DocumentsDirectory )

-- Open the file handle
local file, errorString = io.open( path, "w" )

--------[[INTERRUPTION/TERMINATION BETWEEN HERE]]---------

if not file then
    -- Error occurred; output the cause
    print( "File error: " .. errorString )
else
    -- Write data to file
    file:write( saveData )

    ------------------[[AND HERE]]-----------------

    -- Close the file handle
    io.close( file )
end

file = nil

Because I noticed that the file gets erased just by the io.open( path, "w" ) call itself.

So if the app gets terminated before writing the content and closing the file, the data would be lost.

I know the probability is quite low, but is it technically possible? And if so, is there a way to avoid this problem?

Best regards!
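For reference, one common way to avoid exactly this problem (a sketch, not code from this thread): write the new data to a temporary file first, then rename it over the real one. This relies on os.rename replacing the target in a single step, which holds on the POSIX filesystems mobile devices use; the ".tmp" suffix is just an illustrative choice.

-- Write the new state to a temporary file first
local saveData = "My app state data"
local finalPath = system.pathForFile( "myfile.txt", system.DocumentsDirectory )
local tempPath  = finalPath .. ".tmp"

local file, errorString = io.open( tempPath, "w" )
if not file then
    print( "File error: " .. errorString )
else
    file:write( saveData )
    io.close( file )
    -- Only now replace the real file; if the app dies before this line,
    -- the old "myfile.txt" is still intact
    local ok, renameError = os.rename( tempPath, finalPath )
    if not ok then
        print( "Rename error: " .. renameError )
    end
end

With this pattern, an interruption at any point leaves either the complete old file or the complete new file, never a truncated one.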

I ran into this issue with an SQLite database, so I always made a backup copy of the file before doing anything with it. If the data became corrupted, I could then restore from the backup copy.
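A minimal sketch of that backup idea (my illustration, not the poster's actual code; copyFile and the file names are made up):

local function copyFile( srcPath, dstPath )
    -- Read the whole source file and write it out again
    local src = io.open( srcPath, "rb" )
    if not src then return false end
    local data = src:read( "*a" )
    src:close()
    local dst = io.open( dstPath, "wb" )
    if not dst then return false end
    dst:write( data )
    dst:close()
    return true
end

local dbPath = system.pathForFile( "data.db", system.DocumentsDirectory )
copyFile( dbPath, dbPath .. ".bak" )  -- back up before the risky work
-- ...if corruption is detected later, copy "data.db.bak" back over "data.db"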

I ran into this issue with an SQLite database

You can use transactions.
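To illustrate what that looks like (a sketch with made-up table and column names, using the sqlite3 module that Corona bundles): SQLite applies everything between BEGIN and COMMIT atomically, so after a crash the database contains either the whole batch or none of it.

local sqlite3 = require( "sqlite3" )

local path = system.pathForFile( "data.db", system.DocumentsDirectory )
local db = sqlite3.open( path )

db:exec( "CREATE TABLE IF NOT EXISTS scores ( value INTEGER );" )

db:exec( "BEGIN TRANSACTION;" )
for i = 1, 1000 do
    db:exec( string.format( "INSERT INTO scores ( value ) VALUES ( %d );", i ) )
end
db:exec( "COMMIT;" )  -- the batch becomes durable as a single unit

db:close()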

Well yes, I’m inserting 250,000+ records into 28 tables in under a minute, so transactions are kind of a given. Data corruption can still happen.

@bjoern, if that’s your code and you don’t have any intervening code, only microseconds pass before the next block of code executes. While it’s technically possible the device could crash at that exact moment, the probability is microscopically low.

Rob

I’m inserting 250,000+ records into 28 tables in under a minute

250,000 recs * 28 tables = 7,000,000 records; at 4 bytes (one double word) each, that’s 28,000,000 bytes / 60 sec ≈ 466,666 bytes/sec, i.e. less than 500 KB of data per second. This is nothing even for the oldest SD cards; streaming MP3 sometimes needs to be much faster.

But these records MUST be cached somehow and PHYSICALLY written to the file in LARGE BLOCKS of data (roughly as in the sketch below). If you seek-and-write the data file once per record, you will get crashes. That is why your data must be organized/sorted properly.
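One way to read that advice (my sketch, with illustrative names): accumulate records in memory and flush them with a single write call, instead of seeking and writing once per record.

local buffer = {}

local function addRecord( line )
    buffer[#buffer + 1] = line
end

local function flush( path )
    -- Append the whole batch as one large block
    local file = io.open( path, "a" )
    if not file then return end
    file:write( table.concat( buffer, "\n" ), "\n" )
    io.close( file )
    buffer = {}
end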

But we could benchmark this to see how it really works. The main question, though, is still the one the author asked: how to open the file for writing without erasing the existing data if something goes wrong before the write completes.
