On the AutoTracks tab, in the Refinement section, there are three refinement controls to help improve your solve. If the Error and per-frame error controls on the CameraTracker tab show relatively high values, try refining the solve using the inlier tracks. The inliers are defined by the curve thresholds and can refine the focal length, camera position, or camera rotation (or any combination of these). You can manually edit the camera solve first on the Output tab, then select the refinement controls you want to apply.
In scenes tracked with AE's 3D camera tracker, the ground plane of your scene often ends up as an oblique plane in 3D space. Orient World fixes this: with a single click you can turn a layer into the ground plane of your scene and orient everything else accordingly. Or, instead of setting a ground plane, you can align your scene to walls. To tweak the result further, you can also manually move, rotate, and scale your coordinate system in 3D space.
Unlike the camera tracker built into After Effects, The Foundry's Camera Tracker creates a null object that all other layers are parented to. To make Orient World work with this tracker, you need to delete the parent null before applying Orient World.
Above we can see the Live Link client with an open connection to an instance of Maya running our plugin (top left section). That instance is sending three subjects to the Editor: two Camera subjects (one named "EditorActiveCamera" and another named "camera1"), as well as a subject containing Transform data called "TestCube" (bottom left section).
In addition to receiving data through a Message Bus Source, Live Link supports Hand Tracking sources for devices like Magic Leap, as well as the ability to create Virtual Subjects that combine multiple Subjects into one "Virtual Subject". For example, you could take the lower body from Character A and the upper body from Character B, then combine them into a new Subject. Or you could take the camera tracking data from one source, combine in just the translation from another tracked object, and drive the rest manually.
Enable Camera Sync: Enables syncing of the Unreal Editor camera with an external editor. Internally, this looks at Live Link for a subject called EditorActiveCamera. Both our internally developed Maya and MotionBuilder plugins support this.
For the Live Link Controller, in the Details panel, you can use the Subject Representation property and select from your connected Subjects. Based on the Subject, a role will automatically be assigned (you can change it if needed). The Component to Control is what will actually be driven through Live Link.

In the example below, we have a Cine Camera Actor with a Live Link Controller component that allows us to move the camera and change the Focal Length from Maya. We also use the Live Link Skeletal Animation component on a Skeletal Mesh and stream in animation data. To achieve this, the Live Link Pose node has been added to our Animation Blueprint and our Subject has been selected.
You can also use Blueprint function calls to access Live Link data. Below, the Evaluate Live Link Frame function attempts to get a Live Link Frame from a specified subject using a given role (in the case below, the Subject "camera1" is evaluated with the Camera role).
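For reference, here is a minimal C++ sketch of the same lookup done in code instead of Blueprint. It assumes UE 4.23 or later with the LiveLinkInterface module added to your Build.cs dependencies; the helper name GetCameraFocalLength is hypothetical, and the subject name "camera1" matches the example above.

```cpp
#include "Features/IModularFeatures.h"
#include "ILiveLinkClient.h"
#include "Roles/LiveLinkCameraRole.h"
#include "Roles/LiveLinkCameraTypes.h"

// Hypothetical helper: evaluate the "camera1" subject against the Camera
// role and read its focal length, mirroring Evaluate Live Link Frame.
bool GetCameraFocalLength(float& OutFocalLength)
{
    IModularFeatures& ModularFeatures = IModularFeatures::Get();
    if (!ModularFeatures.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return false; // No Live Link client registered (is the plugin enabled?)
    }

    ILiveLinkClient& Client =
        ModularFeatures.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // Fails if the subject is missing or its role is incompatible with Camera.
    FLiveLinkSubjectFrameData FrameData;
    if (!Client.EvaluateFrame_AnyThread(FLiveLinkSubjectName(FName(TEXT("camera1"))),
                                        ULiveLinkCameraRole::StaticClass(),
                                        FrameData))
    {
        return false;
    }

    const FLiveLinkCameraFrameData* CameraData =
        FrameData.FrameData.Cast<FLiveLinkCameraFrameData>();
    if (!CameraData)
    {
        return false;
    }

    OutFocalLength = CameraData->FocalLength;
    return true;
}
```

If the evaluation succeeds, the rest of the frame (transform, aperture, focus distance, and so on) is available from the same FLiveLinkCameraFrameData structure.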
If you are using the Maya Live Link Plugin prior to Unreal Engine version 4.23, a camera driven through Blueprint may not have the correct transform. You can fix this by adding an Add Relative Rotation node set to 0, 180, 90 as indicated below.
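The same workaround can be sketched in C++ under stated assumptions: the function name is hypothetical, and the mapping of the node's 0, 180, 90 values onto FRotator's (Pitch, Yaw, Roll) argument order is an assumption you should verify against your own rig.

```cpp
#include "CineCameraComponent.h"

// Hypothetical fix for pre-4.23 Maya Live Link camera transforms: apply
// the same rotation offset the Blueprint node uses, after the streamed
// transform has been set on the component.
void ApplyMayaLiveLinkRotationFix(UCineCameraComponent* Camera)
{
    if (Camera)
    {
        // Assumed mapping of the Blueprint values X=0, Y=180, Z=90 to
        // FRotator(Pitch=180, Yaw=90, Roll=0); swap axes if your camera
        // still points the wrong way.
        Camera->AddRelativeRotation(FRotator(180.f, 90.f, 0.f));
    }
}
```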
The advanced optional camera feature for the Endurance integrated sensors allows you to continuously monitor your process visually, while the LED sighting option lets you see the spot size on the target and confirm you have a clean line of sight to it. The match function takes the guesswork out of setting the emissivity.
The original innovations of Ninja V continue to revolutionize the world of production. This lightweight, compact device has become an essential tool for filmmakers and video content creators everywhere. It offers ground-breaking advantages for HDR monitoring and RAW recording. Close partnerships with major camera manufacturers ensure that Ninja V is able to enhance an ever-widening range of digital cinema, mirrorless, and DSLR cameras. Ninja V will transform the way you work and provide you with new opportunities to realize your creative vision.
The versatile nature of Ninja V means it can be paired with nearly any camera, on any type or size of production. The professional monitoring tools make framing, exposure, and focusing a breeze. With Ninja V you can work confidently to perfect every shot.
Ninja V has been designed so that it can be paired equally well with smaller cameras as part of a minimal setup and with full-size camera rigs on large-scale cinema productions. The aluminum chassis and polycarbonate backplate have been crafted for maximum durability. The device weighs just 360g (0.79lbs), which means it can be used comfortably with a handheld camera. The device includes anti-rotational 3/8-16 mount points on the top and bottom. It ships with a 1/4-20 adapter, so that Ninja V can easily be adapted and mounted on industry-standard equipment from a wide range of manufacturers.
Ninja V includes a comprehensive range of monitoring tools including a waveform, focus peaking, false color, zoom controls, custom LUTs, and frame guides. Each offers an opportunity to perfect the composition and exposure for every shot, and they can be overlaid in any combination, unlike many other monitors that allow only one monitoring tool at a time. AtomOS software is easy to use and provides a platform for Atomos to easily update Ninja V, introduce new features, and add support for new cameras as they are released.
The AtomRemote app for iOS and macOS offers an array of external controls for ATOMOS CONNECT for Ninja V. The app enables you to perform a range of configuration tasks and operations up to 15 meters away from the device via Bluetooth LE. Input options include the ability to define camera connections, select Gamma/EOTF, and adjust Gamut settings. For monitoring, AtomRemote can be used to control playback, choose monitoring modes, apply custom 3D LUTs, or view image analysis tools including exposure and focus. Output controls include options for 4K to HD, LUT preview, and HDR output.
There is an optional SSDmini adaptor that allows CFAST II cards to be used, enabling you to recycle older media cards or align with your camera media. Alongside SSDmini, we also qualify a range of 2.5-inch SSDs that can be used with the MasterCaddy III, which is required when using the ATOMOS CONNECT.
With Ninja V the only limitation to how much you can record is the size of the SSD. A core principle of Atomos devices is to provide much more flexibility in terms of recording codec, resolution, and frame rate than is normally available with internal recording. SSD media also provides more GB per dollar than camera media cards. This not only provides extended recording times for long-form productions or event coverage, but also gives you the added security of always having a backup of your camera recording.
Atomos Cloud Studio (ACS) is a collection of online video production services that represent a radical innovation for all video creators, streamers, and filmmakers. When paired with ATOMOS CONNECT, ACS allows Ninja V to livestream to popular platforms like Facebook Live, Twitch, YouTube, and custom RTMP/S destinations. It also offers full support for Adobe Camera to Cloud (C2C), powered by Frame.io, allowing anyone with a compatible camera or device to capture full-resolution footage, simultaneously share proxy files, and collaborate in real time.
Adobe C2C is being used by production teams every day to share footage from the shoot with remote team members. C2C is the fastest, easiest, and most secure method to share media and collaborate in real time. It creates a direct path from production to the post-production teams, allowing media to be transferred from C2C certified devices, wherever you are, over standard network connections to the cloud for viewing, approval, and editing. Clips can be reviewed on any device, and editors can start cutting high-quality proxy files (with matching timecode and file names) before anyone calls it a wrap. The ATOMOS CONNECT accessory for Ninja V opens the C2C workflow to a significantly wider range of digital cinema, mirrorless, and DSLR cameras, allowing more filmmakers than ever before to engage in cloud-based workflows and experience the future of production.

* ATOMOS CONNECT required.