iOS – How do you create a new AVAsset video that consists of only frames from given `CMTimeRange`s of another video?

Apple’s Identifying Trajectories in Video sample code includes the following delegate callback:

func cameraViewController(_ controller: CameraViewController, didReceiveBuffer buffer: CMSampleBuffer, orientation: CGImagePropertyOrientation) {
    let visionHandler = VNImageRequestHandler(cmSampleBuffer: buffer, orientation: orientation, options: [:])
    
    if gameManager.stateMachine.currentState is GameManager.TrackThrowsState {
        DispatchQueue.main.async {
            // Get the frame of the rendered view
            let normalizedFrame = CGRect(x: 0, y: 0, width: 1, height: 1)
            self.jointSegmentView.frame = controller.viewRectForVisionRect(normalizedFrame)
            self.trajectoryView.frame = controller.viewRectForVisionRect(normalizedFrame)
        }
        // Perform the trajectory request in a separate dispatch queue.
        trajectoryQueue.async {
            do {
                try visionHandler.perform([self.detectTrajectoryRequest])
                if let results = self.detectTrajectoryRequest.results {
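                    // Each VNTrajectoryObservation in `results` also carries a
                    // timeRange (CMTimeRange) indicating when it was observed.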
                    DispatchQueue.main.async {
                        self.processTrajectoryObservations(controller, results)
                    }
                }
            } catch {
                AppError.display(error, inViewController: self)
            }
        }
    } 
}

But instead of drawing to the user interface whenever detectTrajectoryRequest.results exists (https://developer.apple.com/documentation/vision/vndetecttrajectoriesrequest/3675672-results), I am interested in using the CMTimeRange provided by each result to construct a new video. In effect, this would filter the original video down to only the frames that contain trajectories. How can I do this, perhaps by writing only specific time intervals from an AVFoundation video to a new AVFoundation video?
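
Roughly, I imagine something along these lines (the function names, error codes, and output URL handling below are placeholders I made up), where each kept CMTimeRange is copied from the source track into an AVMutableComposition and the composition is then exported with AVAssetExportSession:

import AVFoundation

// Placeholder sketch: copy only the given time ranges of a source asset into a
// new composition, appending them back to back.
func makeTrimmedComposition(from source: AVAsset,
                            keeping timeRanges: [CMTimeRange]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let sourceVideoTrack = source.tracks(withMediaType: .video).first,
          let compositionVideoTrack = composition.addMutableTrack(
              withMediaType: .video,
              preferredTrackID: kCMPersistentTrackID_Invalid) else {
        throw NSError(domain: "Trimming", code: -1)
    }
    // Preserve the source orientation.
    compositionVideoTrack.preferredTransform = sourceVideoTrack.preferredTransform

    var cursor = CMTime.zero
    for range in timeRanges {
        // Insert this slice of the source track at the current end of the composition.
        try compositionVideoTrack.insertTimeRange(range, of: sourceVideoTrack, at: cursor)
        cursor = CMTimeAdd(cursor, range.duration)
    }
    return composition
}

// Placeholder sketch: write the composition out to a new movie file.
func export(_ composition: AVComposition, to outputURL: URL,
            completion: @escaping (Error?) -> Void) {
    guard let session = AVAssetExportSession(asset: composition,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        completion(NSError(domain: "Trimming", code: -2))
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mov
    session.exportAsynchronously {
        completion(session.error)
    }
}

Is that the right idea, or is there a better approach? I assume the observations’ time ranges would also need to be sorted and merged where they overlap before being inserted.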

William
