GoPro Hero 12 VR180 Setup Guide with DaVinci Resolve Studio

Posted 2024-11-18 08:20:50
Author: David Pacheco


Disclaimer:
This method uses third-party KartaVR plugins. Assuming these plugins can be installed in the free version of DaVinci Resolve (untested on my end), this method should work in Resolve Free up to 19.0, with a limited export resolution since the free version does not support exporting beyond 4K. However, with the recent release of DaVinci Resolve 19.1, Blackmagic has phased out support for KartaVR plugins in the free version. While the tool still works in the paid version (DaVinci Resolve Studio), Andrew Hazelden, the developer of KartaVR, warns that even this may start to change in upcoming releases.
Please also note that I am not an expert, just an enthusiast. There may be other, more efficient ways of doing this that give better results, but I am writing the guide I wish I had when I started, since information was hard to come by and a lot of trial and error was involved. Hopefully others will read this and provide feedback to improve the process even further.
Required Equipment:
Main Equipment
For mounting of the stereo mic (out of scope of this guide)
(Note: Amazon will not let you buy two Max Lens Mod 2.0s in one order; you will have to purchase them separately, at least a few days apart.)
Hardware Setup:
My initial setup consists of a Zoom H2N stereo recorder (11) mounted on an extension rod (10), connected to a Dual Camera Mount (9) mounted on a Camera Tripod (6). Note that syncing audio from the stereo recorder with the GoPro footage can be done in DaVinci, but is out of scope of this guide.
  • Place the two GoPros (1), fitted with the Max Lens Mod 2.0 (2), in the 3D Printed Mount (3). Add the Basic Buckle Clip (5) to the bottom. Ensure both GoPros have SD cards (7) installed.
  • Attach the Tripod Mount Adaptor (4) from the kit to the Dual Camera Mount (9), or directly onto the Camera Tripod (6) if you are not using the stereo mount.
  • Connect the GoPro assembly to the Tripod Mount Adaptor and screw it in.
Recording Setup:
  • Download the GoPro Quik app to your phone and set up both GoPros.
  • Turn on the GoPros. Ensure that you are in Hyperview mode and that Max Lens 2.0 mode is OFF. Note that with Max Lens 2.0 mode on, the GoPro automatically applies distortion correction to the videos; I found these distortion-corrected videos difficult to correct further in Resolve to get a good output for the headset. Tools that correct fisheye distortion to equirectangular are more widely available, so I determined it was easier to work directly with the fisheye imagery, hence keeping Max Lens 2.0 mode off. Ensure the resolution is set to the highest setting (5.3K). You will know the settings are correct if you see an egg-shaped fisheye view.
  • After the GoPros have turned on, go to the Quik app, tap the three dots to the right of either camera and press Sync Timecode. This brings up a moving QR code. Show it to BOTH GoPro cameras until you hear a beep and see a notification that the timecode has been updated on each one.
NOTE: You MUST do this every single time after powering up your GoPros before shooting a video. If you don't, the timecodes won't be in sync and you will have to sync them manually by eye or by audio, which is a painstaking process.
  • Start recording on both GoPros. You can use a remote to start both. However, later in this process we will have to label each video Left or Right, and it can be hard to determine which is which afterwards (since the stereo disparity can be low). I recommend starting the Left camera first, clapping or putting your hand in front of it, and then starting the Right camera. Later, when labeling the files, you will know that the video with the clap/hand is Left and can rename it accordingly.
  • Send the videos to your computer and save them as Left.mp4 and Right.mp4. You can use the GoPro Quik app or a USB cable to do this.
Syncing Videos Based on Timecode:
Note: When you sync the timecode in the previous steps, the GoPro videos are embedded with metadata showing the start timecode for the video in the format HH:MM:SS:FF (where FF is the frame number), as well as the duration of the video. A quick way to check this metadata is sketched below.
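If you want to confirm that a clip actually carries the embedded timecode before going further, here is a minimal sketch. It assumes ffprobe (part of FFmpeg, which the Python method below also relies on) is installed and on your PATH, and that the clip is named Left.mp4. If nothing prints, the timecode may live in the format tags instead of the stream tags; the full script below checks both places.

    # Minimal sketch: print the embedded start timecode of a GoPro clip.
    # Assumes ffprobe is installed and on your PATH; the filename is an example.
    import subprocess

    cmd = [
        'ffprobe', '-v', 'error',
        '-select_streams', 'v:0',
        '-show_entries', 'stream_tags=timecode',
        '-of', 'default=noprint_wrappers=1:nokey=1',
        'Left.mp4',
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout.strip())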
Disclaimer: DaVinci Resolve has a Sync by Timecode feature which you can use to trim the Left and Right videos so that they start and end at the same time. I figured out a faster way to do the same thing using Python and FFmpeg, which I will outline below. However, if you wish to use DaVinci, here is a non-comprehensive rundown of how to get started:
  • In DaVinci Resolve, drag the Left and Right clips into the Media Pool, drag them into a new timeline, and then double-click on either one. The video will pop up on the top left of the screen, with the timecode shown on the top right. Using the slider at the bottom of the video, you can scrub to see the timecode at any point in the video.
  • DaVinci has a Sync by Timecode feature. Use this, then trim the two clips so they start and end at the same time. There are guides on how to do this, such as this one: https://www.youtube.com/watch?v=znK0In2WkUU
  • You can confirm that they have the correct start and end timecodes by clicking on each clip and using the slider to check the timecode at its start and end. If they match, you are golden.
Python Method
With my Python method, I simply put the Left.mp4 and Right.mp4 files in the same folder as my Python script. Running the script automatically outputs two new clips, Left_clipped.mp4 and Right_clipped.mp4, which will have matching start and end timecodes and be in sync. If they are not, make sure you remembered to show the QR code to each GoPro on startup before recording so that the timecodes synced properly. Again, and I can't stress this enough, this has to be done EVERY time the GoPros start up.

Here is my Python code:
import subprocess
import json
import os
import sys


def get_ffprobe_data(filename):
    """
    Uses ffprobe to extract metadata from the video file.

    :param filename: Path to the video file.
    :return: Dictionary containing all metadata.
    """
    cmd = [
        'ffprobe',
        '-v', 'error',
        '-print_format', 'json',
        '-show_format',
        '-show_streams',
        filename
    ]

    try:
        result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=True)
        data = json.loads(result.stdout)
        return data
    except subprocess.CalledProcessError as e:
        print(f"Error running ffprobe on {filename}: {e.stderr}")
        sys.exit(1)
    except json.JSONDecodeError:
        print(f"Error parsing ffprobe output for {filename}.")
        sys.exit(1)


def extract_timecode_from_streams(data):
    """
    Extracts timecode from the video stream by prioritizing the video codec type.

    :param data: ffprobe data dictionary.
    :return: Timecode string or None.
    """
    streams = data.get('streams', [])
    for stream in streams:
        if stream.get('codec_type') == 'video':
            tags = stream.get('tags', {})
            tc = tags.get('timecode', None)
            if tc:
                return tc
    return None


def extract_timecode(data):
    """
    Extracts timecode from ffprobe data by searching the video stream first,
    then other streams if necessary.

    :param data: ffprobe data dictionary.
    :return: Timecode string or None.
    """
    # Attempt to extract from video stream
    tc = extract_timecode_from_streams(data)
    if tc:
        return tc

    # Fallback: Search all streams
    for stream in data.get('streams', []):
        tags = stream.get('tags', {})
        for key, value in tags.items():
            if 'timecode' in key.lower():
                return value

    # Fallback: Search format tags
    format_tags = data.get('format', {}).get('tags', {})
    for key, value in format_tags.items():
        if 'timecode' in key.lower():
            return value

    return None


def get_frame_rate(data):
    """
    Retrieves the frame rate from the video stream.

    :param data: ffprobe data dictionary.
    :return: Frame rate as a float.
    """
    streams = data.get('streams', [])
    for stream in streams:
        if stream.get('codec_type') == 'video':
            r_frame_rate = stream.get('r_frame_rate', '30/1')
            try:
                num, den = map(int, r_frame_rate.split('/'))
                return num / den
            except:
                print(f"Invalid r_frame_rate format: {r_frame_rate}. Defaulting to 30 fps.")
                return 30.0
    print("No video stream found to determine frame rate. Defaulting to 30 fps.")
    return 30.0


def timecode_to_frames(tc, fps):
    """
    Converts a timecode string to total frames.

    :param tc: Timecode string in format "HH:MM:SS;FF" or "HH:MM:SS.FF"
    :param fps: Frames per second as a float.
    :return: Total number of frames as integer.
    """
    try:
        if ';' in tc:
            time_part, frame_part = tc.strip().split(';')
        elif '.' in tc:
            time_part, frame_part = tc.strip().split('.')
        else:
            print(f"Unsupported timecode format: {tc}")
            sys.exit(1)

        h, m, s = map(int, time_part.split(':'))
        f = int(frame_part)
        total_seconds = h * 3600 + m * 60 + s
        total_frames = int(total_seconds * fps) + f
        return total_frames
    except ValueError:
        print(f"Invalid timecode format: {tc}")
        sys.exit(1)


def frames_to_timecode(total_frames, fps):
    """
    Converts total frames back to a timecode string.

    :param total_frames: Total number of frames as integer.
    :param fps: Frames per second as a float.
    :return: Timecode string in format "HH:MM:SS;FF"
    """
    h = int(total_frames // (fps * 3600))
    remaining = total_frames - (h * fps * 3600)
    m = int(remaining // (fps * 60))
    remaining = remaining - (m * fps * 60)
    s = int(remaining // fps)
    f = int(remaining - (s * fps))

    # Handle cases where frame count exceeds fps
    if f >= int(fps):
        f = 0
        s += 1
        if s >= 60:
            s = 0
            m += 1
            if m >= 60:
                m = 0
                h += 1

    return f"{h:02}:{m:02}:{s:02};{f:02}"


def seconds_to_timecode(seconds, fps):
    """
    Converts total seconds to a timecode string in format "HH:MM:SS;FF"

    :param seconds: Total seconds as a float.
    :param fps: Frames per second as a float.
    :return: Timecode string in format "HH:MM:SS;FF"
    """
    h = int(seconds // 3600)
    seconds %= 3600
    m = int(seconds // 60)
    seconds %= 60
    s = int(seconds)
    f = int((seconds - s) * fps)

    # Handle cases where frame count exceeds fps
    if f >= int(fps):
        f = 0
        s += 1
        if s >= 60:
            s = 0
            m += 1
            if m >= 60:
                m = 0
                h += 1

    return f"{h:02}:{m:02}:{s:02};{f:02}"


def get_timecode_and_duration(filename):
    """
    Retrieves the timecode, duration, and frame rate from the video file.

    :param filename: Path to the video file.
    :return: Tuple containing timecode string, duration in seconds, and frame rate.
    """
    data = get_ffprobe_data(filename)
    tc = extract_timecode(data)
    if tc is None:
        print(f"No timecode found for {filename}.")
        # Optionally, print all available tags for debugging
        print("Available metadata:")
        print(json.dumps(data, indent=4))
        sys.exit(1)

    # Get duration
    duration = data.get('format', {}).get('duration', None)
    if duration is None:
        print(f"No duration found for {filename}.")
        sys.exit(1)

    try:
        duration = float(duration)
    except ValueError:
        print(f"Invalid duration value for {filename}: {duration}")
        sys.exit(1)

    # Get frame rate
    fps = get_frame_rate(data)

    return tc, duration, fps


def clip_video(input_file, output_file, start_time, duration, new_timecode):
    """
    Clips a video using FFmpeg and resets the timecode while preserving original codecs.

    :param input_file: Path to the input video file.
    :param output_file: Path to the output (clipped) video file.
    :param start_time: Start time in seconds.
    :param duration: Duration in seconds.
    :param new_timecode: New timecode to set (format "HH:MM:SS;FF").
    """
    cmd = [
        'ffmpeg',
        '-ss', f"{start_time:.6f}",
        '-t', f"{duration:.6f}",
        '-i', input_file,
        '-c', 'copy',                             # Copy codecs
        '-metadata', f'timecode={new_timecode}',  # Set new timecode
        output_file
    ]

    try:
        print(f"Clipping {input_file} -> {output_file}")
        subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)
        print(f"Successfully clipped {output_file}\n")
    except subprocess.CalledProcessError as e:
        print(f"Error clipping {input_file}: {e.stderr}")
        sys.exit(1)


def main():
    # Define video filenames with correct casing
    videos = ['Left.mp4', 'Right.mp4']

    # Verify that both files exist
    for video in videos:
        if not os.path.isfile(video):
            print(f"File not found: {video}")
            sys.exit(1)

    # Store video metadata
    video_data = {}

    for video in videos:
        tc, duration, fps = get_timecode_and_duration(video)
        start_frames = timecode_to_frames(tc, fps)
        duration_frames = int(duration * fps)
        end_frames = start_frames + duration_frames - 1  # Subtract 1 to get the last frame
        end_tc = frames_to_timecode(end_frames, fps)

        # Extract video name without extension for printing
        video_name = os.path.splitext(video)[0]
        print(f"{video_name} Start: {tc}")
        print(f"{video_name} Duration: {duration:.6f} seconds")
        print(f"{video_name} End: {end_tc}\n")

        video_data[video] = {
            'start_tc': tc,
            'duration': duration,
            'fps': fps,
            'end_frames': end_frames
        }

    # Determine overlapping timecode interval
    video1, video2 = videos
    data1 = video_data[video1]
    data2 = video_data[video2]

    start1_sec = timecode_to_frames(data1['start_tc'], data1['fps']) / data1['fps']
    end1_sec = data1['end_frames'] / data1['fps']

    start2_sec = timecode_to_frames(data2['start_tc'], data2['fps']) / data2['fps']
    end2_sec = data2['end_frames'] / data2['fps']

    # Determine the overlapping interval
    clip_start_sec = max(start1_sec, start2_sec)
    clip_end_sec = min(end1_sec, end2_sec)

    # Calculate clip duration
    clip_duration_sec = clip_end_sec - clip_start_sec

    if clip_duration_sec <= 0:
        print("No overlapping timecode interval found between the two videos.")
        sys.exit(1)

    # Print overlapping interval details
    overlapping_start_tc = seconds_to_timecode(clip_start_sec, data1['fps'])
    overlapping_end_tc = seconds_to_timecode(clip_end_sec, data1['fps'])

    print(f"Overlapping Interval:")
    print(f"Start: {overlapping_start_tc}")
    print(f"End: {overlapping_end_tc}")
    print(f"Duration: {clip_duration_sec:.6f} seconds\n")

    # Calculate relative start times for each video
    relative_start1 = clip_start_sec - start1_sec
    relative_start2 = clip_start_sec - start2_sec

    # Ensure relative starts are not negative
    relative_start1 = max(relative_start1, 0.0)
    relative_start2 = max(relative_start2, 0.0)

    # Define output filenames
    output_videos = {
        video1: f"{os.path.splitext(video1)[0]}_clipped.mp4",
        video2: f"{os.path.splitext(video2)[0]}_clipped.mp4"
    }

    # Clip the videos
    clip_video(
        input_file=video1,
        output_file=output_videos[video1],
        start_time=relative_start1,
        duration=clip_duration_sec,
        new_timecode=overlapping_start_tc
    )

    clip_video(
        input_file=video2,
        output_file=output_videos[video2],
        start_time=relative_start2,
        duration=clip_duration_sec,
        new_timecode=overlapping_start_tc
    )

    # Print new start and end timecodes for the clipped videos
    print("New Clipped Video Timecodes:")
    print(f"{output_videos[video1]} Start: {overlapping_start_tc}")
    print(f"{output_videos[video1]} End: {overlapping_end_tc}\n")

    print(f"{output_videos[video2]} Start: {overlapping_start_tc}")
    print(f"{output_videos[video2]} End: {overlapping_end_tc}\n")

    print("Clipping completed successfully.")


if __name__ == "__main__":
    main()
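As a quick sanity check after running the script, you can re-read the embedded metadata of the clipped files and confirm that both report the same start timecode. A small sketch, assuming the script above was saved as sync_clips.py in the same folder (the filename is my own choice, not something from the original post):

    # Hedged sanity check: reuse the helper from the script above to confirm
    # both clipped files now report the same embedded start timecode.
    # Assumes the script above was saved as sync_clips.py in this folder.
    from sync_clips import get_timecode_and_duration

    for name in ("Left_clipped.mp4", "Right_clipped.mp4"):
        tc, duration, fps = get_timecode_and_duration(name)
        print(f"{name}: start={tc}, duration={duration:.3f}s, fps={fps:.3f}")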
DaVinci Resolve Workflow:
  • Create a new DaVinci Resolve Project
  • Ensure you have installed KartaVR plugins for Resolve. Use this guide: https://www.youtube.com/watch?v=o9YF1zw2OJM
  • Change the default Timeline Resolution to Custom - 5976 x 5312. To do this, click on the gear icon on the bottom right and go to Master Settings.
  • Import Left_clipped.mp4 and Right_clipped.mp4 into the Media Pool (if they were clipped outside DaVinci with the Python script). If they were clipped within DaVinci, you will have to work out how to drag the clipped versions back into the Media Pool.
  • Right-click Left_clipped.mp4 and select “Create New Timeline Using Selected Clips”. Name the timeline. If you changed the default Timeline Resolution as described above, check “Use Project Settings”. Otherwise, uncheck it, go to Format and change the resolution to Custom - 5976 x 5312.
  • Go to the Fusion tab. Left_clipped will appear here as MediaIn1 connected to MediaOut1. Double-click the yellow line to disconnect them. Then drag Right_clipped.mp4 into the timeline; it will appear as MediaIn2.
  • Press Shift+Spacebar to bring up the node searcher. Search for the Transform (xf) node and add it, then copy it since Left and Right will each use one. Rename them TransformLeft and TransformRight and connect TransformLeft to MediaIn1 and TransformRight to MediaIn2.
  • Double-click each Transform node, uncheck “Use Size and Aspect” and change X Size to 0.6. We do this to compress the X axis so that when we apply the kvrViewer distortion in the next step, it doesn't crop the edges (which it does if you leave it at the original resolution). Do this for both TransformLeft and TransformRight.
  • Press Shift+Spacebar to bring up the node searcher. Search for the kvrViewer node and add it, then copy it since Left and Right will each use one. Rename them both and connect the Left one to TransformLeft and the Right one to TransformRight.
  • Double-click each kvrViewer node. Change Image Projection to Fisheye and change the Diagonal Field of View to 260. Uncheck Auto Resolution and set Width to 2988 and Height to 5312, the resolution of each image. Do this for both kvrViewer nodes.
NOTE: Even though the diagonal field of view of each GoPro with the Max Lens 2.0 mod in Hyperview is 177 degrees, when I entered 177 degrees here instead of 260 it did not look correct in the headset. These settings were determined after lots of trial and error and looked the most correct to me when wearing my headset (objects were where they should be, at the correct size). I am not an expert in this area; perhaps there is a better way to do it, but I am just sharing what worked for me.
  • Press Shift+Spacebar to bring up the node searcher. Search for the Transform (xf) node and add it, then copy it since Left and Right will each use one for a Y transform. Rename them both and connect the Left one to kvrViewerLeft and the Right one to kvrViewerRight.
  • Double-click each Transform node, uncheck “Use Size and Aspect” and change Y Size to 0.95. This shortens the image, which adds black bars to the top and bottom. While this decreases the vertical FOV a bit, it also makes objects look closer to their actual size in real life. Do this for both Left and Right.
  • Press Shift+Spacebar to bring up the node searcher. Search for the kvrCreateStereo node and add it. Connect TransformYLeft to the left-hand input and TransformYRight to the bottom input. Connect the output to MediaOut1.
  • Double-click the kvrCreateStereo node. Select Dual In for Input Mode and Horiz for Output Mode. Ensure Image1 is TransformYLeft and Image2 is TransformYRight, otherwise the eyes will be the wrong way around. If they are wrong, simply type the correct node names into the correct fields.
  • Double-click MediaOut1. Your timeline should look like this:
  • To export, go to the Deliver tab. You may be able to export the video as an MP4 directly, in which case you can skip the Handbrake steps below. However, in my case this would crash DaVinci. I had to set the Format to QuickTime, Codec to DNxHR and Type to DNxHR HQ. Add to Render Queue and then Render to export your video.
NOTE: Only follow the remaining steps if you were not able to export an MP4 directly from DaVinci.
  • With my settings, DaVinci outputs a multi-gigabyte MOV file which is not easily playable, so we must convert it to an MP4. To do this, download Handbrake: https://handbrake.fr/ (an FFmpeg alternative is also sketched after these steps).
  • Open Handbrake and drag your MOV video in. In the Summary tab, ensure Format is set to MP4.
  • In the Dimensions tab, ensure Resolution is set to 5976 x 5312 and Cropping is set to None. Otherwise, Handbrake will automatically crop the black bars from the top and bottom, undoing the transforms we applied to make objects look correct inside the headset.
  • After these steps are complete, press the green Start Encode arrow. This will create an MP4 in a location of your choice.
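If you prefer the command line over Handbrake, the same MOV-to-MP4 conversion can be scripted with FFmpeg. This is my own aside, not part of the original workflow: a hedged sketch, assuming your FFmpeg build includes libx265 (the 5976 x 5312 frame is far beyond 4K, so HEVC is the sensible choice) and that the CRF/preset values will need tuning for your own quality and file-size targets. The export filename is hypothetical.

    # Hedged alternative to Handbrake: convert Resolve's DNxHR .mov to an .mp4
    # with no cropping or scaling, preserving the full 5976x5312 frame and the
    # black bars added by the Y transform. Filename and quality settings are
    # assumptions; adjust to taste.
    import subprocess

    cmd = [
        'ffmpeg',
        '-i', 'vr180_export.mov',   # hypothetical name of the Resolve export
        '-c:v', 'libx265',          # HEVC; the frame is far larger than 4K
        '-crf', '20',               # quality target (lower = better quality, larger file)
        '-preset', 'medium',
        '-pix_fmt', 'yuv420p',      # widely compatible pixel format
        '-c:a', 'aac',              # MP4 cannot carry the MOV's PCM audio as-is
        'vr180_export.mp4',
    ]
    subprocess.run(cmd, check=True)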
Alternative DaVinci Resolve Fusion Timeline:
The previous Fusion timeline creates a VR180 video with some small distortions and artifacts. The main limitation is that objects closer to the camera appear warped. There is an alternative Fusion timeline that can remove this warping so that objects closer to the camera look correct; however, this comes with the trade-off of a lower field of view, which I'd estimate at 140-150 degrees. Here are the settings if you're interested:
Viewing your VR Video:
  • I use Skybox VR Video Player to easily view my videos. You can download Skybox on your Quest and on your desktop and set up a shared AirScreen folder between the two. From there, simply drag your MP4 into the shared folder, put on your headset, open Skybox, and browse to your synced folder (AirScreen - Directory). When starting the video, in Stereo Mode on the right, ensure you are set to:
Normal
3D SBS
VR180
  • Enjoy your VR video!
Note: This may seem like an involved process, but after the initial setup, going from filming to VR output can be quite quick: you can reuse the same Fusion timeline and just drag your new videos in, so you don't need to go through the node creation process every single time.
