Decomposing video into images and video:seek()

Hi,

I would like to decompose a video filmed with a phone into a series of images that I can use afterwards.

I thought I could jump to some moments in the video (at regular time slots) with video:seek(), pause the video and capture the image each time.

So I’m trying to work with video:seek() but it seems like the video starts at the beginning each time instead of going to the requested time… Here is my code:

local video = native.newVideo( 960, 540, 1920, 1080 )

local function videoListener( event )
    print( "Event phase: " .. event.phase )
    if event.errorCode then
        native.showAlert( "Error!", event.errorMessage, { "OK" } )
    end
end

video:load( myData.nameOfVideo, media.RemoteSource )
video:addEventListener( "video", videoListener )

-- jump to 5 seconds
video:seek( 5 )
video:play()

So, my first question is: how can I jump to a specific moment of the video (in this case 5 seconds)?

And my second question is: would you know any other ways to achieve my initial goal (described above)?

  1. https://docs.coronalabs.com/api/type/Video/seek.html

  2. No way that I know of to grab a frame from the video.

@All, does anyone know whether it's still possible to fill a rectangle with a video? It was possible at one point; I vaguely recall a tech demo of this.

If so, there might be a way to do this still.

The problem with your video not jumping to the specified time might be related to this explanation on the object.totalTime page:

https://docs.coronalabs.com/api/type/Video/totalTime.html

This should be called after the video is ready to play, otherwise an inaccurate value might be returned. You can detect this via a video event where event.phase is equal to “ready”.

Maybe your :seek() call is being canceled because the specified time is greater than the object.totalTime value of the not-yet-loaded video.
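If that's the cause, a minimal sketch of deferring the seek until the "ready" phase might look like this (the file name and source are placeholders; the video object is assumed to come from native.newVideo() as in your code):

```lua
-- Sketch: only seek once the video reports it is ready.
local function videoListener( event )
    if event.phase == "ready" then
        -- totalTime should now be reliable, so an in-range seek should stick
        print( "Duration: " .. video.totalTime )
        video:seek( 5 )
        video:play()
    end
end

video:addEventListener( "video", videoListener )
video:load( "myVideo.mp4", system.DocumentsDirectory )
```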

https://docs.coronalabs.com/api/type/Video/seek.html

timeInSeconds (required)

Number. Jumps to specified time in currently loaded video. Ensure that this is not greater than the object.totalTime property.
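Assuming the out-of-range time is the problem, one way to guard against it is a small wrapper that clamps the requested time before seeking (safeSeek is a made-up name, not a Corona API; it's only meaningful after the "ready" event, when totalTime is accurate):

```lua
-- Hypothetical helper: clamp the seek target to the video's duration.
local function safeSeek( videoObject, timeInSeconds )
    local duration = videoObject.totalTime or 0
    if timeInSeconds > duration then
        timeInSeconds = duration
    elseif timeInSeconds < 0 then
        timeInSeconds = 0
    end
    videoObject:seek( timeInSeconds )
end
```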

By the way, I spotted a contradiction in the docs: both APIs seem to support only iOS and Android, even though the native.newVideo() gotchas point to media.playVideo() as the alternative for other platforms. Has anyone tried this?

https://docs.coronalabs.com/api/library/native/newVideo.html#gotchas

This feature is only supported on Android and iOS.

If you require video support on other platforms, then you must use the media.playVideo() function instead.

https://docs.coronalabs.com/api/library/media/playVideo.html#gotchas

This API is not supported on Windows, Windows Phone, or macOS.

In case it’s useful to anybody, here is the source for the VLC stuff I mentioned in this thread. To allow video objects to be plastered onto display objects, they use external bitmaps: the current frame is just a big blob of bytes, which you can grab and use for other purposes if you like. I only implemented a handful of methods (mostly just some weekend experiments), but VLC certainly has seek functionality.

I doubt I’ll return to this library any time soon. If anybody wants to adopt it, you’re more than welcome. (Caveat: the license is LGPL, so not marketplace-friendly, last I checked. I was thinking of just doing Theora later, since I have some old code I could dig up.)

The projects available now are for Windows and Mac. I didn’t have any luck building VLCKit, so I never got anywhere on the other platforms.

Thanks for your replies.

I changed my code to:

if event.phase == "ready" then
    print( video.totalTime )
    video:play()
    video:seek( 8 )
end

And now it’s working. My earlier video:seek() value was probably greater than the duration reported before the video was ready.
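For what it's worth, the same pattern extends to my original goal of jumping through the video at regular time slots once it is ready. A rough sketch, where the step size, timer interval, and repeat count are all arbitrary and the video object is assumed to exist already:

```lua
-- Sketch: step through the video in fixed increments after "ready".
local step = 2          -- seconds between jumps (arbitrary)
local position = 0

local function jumpToNext()
    position = position + step
    if position <= video.totalTime then
        video:seek( position )
        video:pause()
    end
end

local function videoListener( event )
    if event.phase == "ready" then
        video:pause()
        -- advance once per second of real time, ten times (arbitrary)
        timer.performWithDelay( 1000, jumpToNext, 10 )
    end
end
```

Capturing an image at each stop is still the missing piece, as discussed below in the thread.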

So there’s no direct way to convert a video into an image sequence?

And is it normal that when I call the camera from my app, I can’t use options like slow motion?

Thanks in advance

I also read that we can’t take a screenshot of a video displayed in the app… Is that possible now? Or is there another way to proceed?

You cannot capture it in Corona, as video runs natively and Corona can only capture OpenGL content.

If anyone could achieve it, it would be @starcrunch.

At one point we did a demo of video as a fill, on iOS only. We never got it working on Android, and we don’t have plans to revisit it. I believe it should still work on iOS, but again, it’s not really supported.

Other video tools like native.newVideo(), as @sgs mentioned, are not part of our OpenGL system, and our display.save()/display.capture() functions only capture the OpenGL canvas.

Rob

Can we take a screenshot of the video if we add native code to the mix? Is that possible?

A while back somebody asked me about doing screen captures in my impack plugin, so I took a swing at it. Using the OpenGL “read pixels” call directly got me pretty far in Windows and Mac (with minor issues like extra padding and being upside-down), but gave me back nothing on mobile. On Android it looks like you need to use Bitmap (which seems to be Java only, with no NDK counterpart… stopped me cold!) and on iOS something with UIImage.

This gave me new respect for display.capture() and friends. :) Presumably those must fire after all display objects are done drawing, but before everything else, so to handle that “everything else” you would basically need to emulate the capture APIs using the stuff mentioned above, with all the grisly per-platform details these entail.

Once you’ve gotten that squared away, you have another problem: you have no way to schedule your capture when “everything else” is happening.

It might be possible to discover the native objects and then subscribe to their events, such as rendering. If so, then you’re in luck. But I don’t know enough to say how feasible or difficult this would be.

I’m also not sure if the top-level objects render into the same video memory as your Corona program; I haven’t checked. If they do, the results could be captured at the start of the next frame. The trouble then is that Corona clears the screen and again you don’t have a way to jump in before this happens.

If everything is together in memory, a “capture before frame is cleared” API could actually be a decent compromise, and probably not too much trouble to implement.

Anyhow, I brought up my libVLC stuff above because it’s trivial to grab the video contents if you’re the one serving them in the first place! :)
