In extremely low-light video recording, how does the iPhone's LiDAR scanner specifically improve autofocus performance?
In extremely low-light video recording, traditional autofocus systems struggle because they depend on the scene's visible light. Contrast-detection autofocus searches for the lens position that maximizes image contrast, and phase-detection autofocus compares light arriving from opposite sides of the lens; without enough visible light, neither method has the information it needs to determine focus quickly and accurately, which leads to slow focus 'hunting' or outright failure to acquire focus.

The iPhone's LiDAR (Light Detection and Ranging) scanner improves autofocus in these conditions by measuring the distance to objects in the scene directly, independent of ambient visible light. The scanner emits invisible infrared laser pulses that travel to objects, reflect off them, and return to its sensor. The system measures each pulse's 'time of flight', the duration the light takes to reach an object and come back, and, because the speed of light is known, converts that round-trip time into a precise distance for each point the laser hits. The result is a detailed 3D depth map of the scene.

This depth map gives the camera immediate, accurate distance information for every object in its field of view. Instead of inferring focus from visual data, the autofocus system receives the subject's distance as a direct numerical measurement and can drive the lens elements straight to the correct focus position. This largely or entirely eliminates focus hunting, making initial focus acquisition dramatically faster and more accurate.

Because this depth information is continuous and independent of visible light, it also enables smoother, more reliable continuous autofocus and subject tracking during video recording: the system always knows the exact distance to a moving subject, even when there is virtually no visible light.
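The time-of-flight arithmetic described above reduces to one formula: the pulse travels out and back, so the one-way distance is d = c * t / 2. Here is a minimal Swift sketch of that calculation; the names are illustrative and this is not Apple's API (on iOS, depth data is exposed through frameworks such as ARKit and AVFoundation rather than raw pulse timings):

```swift
import Foundation

/// Speed of light in a vacuum, in meters per second.
let speedOfLight = 299_792_458.0

/// Distance to a surface from the measured round-trip time of a laser pulse.
/// The pulse travels out and back, so the one-way distance is c * t / 2.
func distance(fromRoundTripTime t: Double) -> Double {
    return speedOfLight * t / 2.0
}

// Example: a pulse returning after 20 nanoseconds indicates a surface
// roughly 3 meters away.
let roundTrip = 20e-9                          // 20 ns round trip
print(distance(fromRoundTripTime: roundTrip))  // ≈ 2.998 m
```

Note the scale involved: at these distances the round-trip times are tens of nanoseconds, which is why a dedicated time-of-flight sensor is needed rather than the ordinary image sensor.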
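Once a depth map exists, choosing a focus distance no longer requires hunting: the camera can read the subject's distance directly and move the lens once. The sketch below illustrates that idea under stated assumptions; the flat 2D array of distances and the median-based subject read-out are hypothetical simplifications, since Apple's actual focus pipeline is not public:

```swift
/// Pick a focus distance (in meters) for a subject by taking the median
/// depth inside the subject's bounding region of the depth map. The median
/// is robust to stray depth outliers at the subject's edges.
func focusDistance(in depthMap: [[Double]],
                   rows: Range<Int>,
                   cols: Range<Int>) -> Double {
    var samples: [Double] = []
    for r in rows {
        for c in cols {
            samples.append(depthMap[r][c])
        }
    }
    samples.sort()
    return samples[samples.count / 2]
}

// Example: a subject about 3 m away in the center of a tiny 4x4 depth map,
// against a background 5 m away. No contrast search is needed.
let depthMap: [[Double]] = [
    [5.0, 5.0, 5.0, 5.0],
    [5.0, 3.0, 3.1, 5.0],
    [5.0, 2.9, 3.0, 5.0],
    [5.0, 5.0, 5.0, 5.0],
]
print(focusDistance(in: depthMap, rows: 1..<3, cols: 1..<3))  // 3.0
```

Because this read-out works per frame and does not depend on image contrast, repeating it as the subject moves is what makes continuous tracking autofocus possible in near-total darkness.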