100 SLAM-related technical interview questions

There are five sections - General, LiDAR SLAM, Visual SLAM, System design, and Coding interview questions (Live / implementation). Each section has 10-25 questions, and there may be a number of follow-up questions or questions with different approaches. The questions range from beginner to expert level, though not in any specific order.

General

  1. What are the different ways to represent rotations in 3D space?
    • Discuss the differences between the SO(3) matrix, Quaternion, Axis-angle, and Euler angle representations.
    • What problems does gimbal lock pose in the expression of 3D rotations?
    • What mathematical constraints are applicable to SO(3) matrices?
  2. Describe the structure of the SE(3) matrix.
    • What is the significance of the bottom row ([0,0,0,1]) in the SE(3) matrix?
  3. What sensors are suitable for SLAM (Simultaneous Localization and Mapping)?
    • Compare tightly-coupled fusion and loosely-coupled fusion in this context.
  4. Why is non-linear optimization used in SLAM?
    • Where do we encounter non-linearity in Visual-SLAM?
    • Where is non-linearity found in LiDAR SLAM?
  5. What optimization methods are applicable for non-linear optimization in SLAM?
    • Compare gradient descent, Newton-Raphson, Gauss-Newton, and Levenberg-Marquardt methods (a minimal Gauss-Newton sketch appears after this list).
    • What is the trust-region method?
  6. What is loop closure and how is it achieved in SLAM?
  7. Define and differentiate the motion model and observation model in SLAM.
  8. What is RANSAC?
  9. Explain the concept of a robust kernel (or M-estimator).
  10. Discuss the Kalman filter and particle filter.
    • Highlight the differences between the Kalman filter (KF) and the Extended Kalman filter (EKF).
  11. Contrast filter-based SLAM with graph-based SLAM.
  12. Define the information matrix and covariance matrix in the context of SLAM.
  13. What is the Schur complement?
  14. Compare LU, Cholesky, QR, SVD, and Eigenvalue decomposition.
    • Which methods are commonly used in SLAM and why?
  15. Why is least squares optimization favored?
    • Explain how Maximum-a-posteriori (MAP) and Maximum Likelihood Estimation (MLE) are applied in SLAM.
  16. What representations are used to describe a map or structure in SLAM?
    • Which map representation would you choose for path planning and why?
    • Distinguish between sparse mapping and dense mapping.
  17. Explain the concepts of Lie groups and Lie algebra.
    • What are the Exp/Log maps? (A minimal SO(3) Exp/Log sketch appears after this list.)
  18. How can multiple maps be merged into a single map in SLAM?
  19. What is Inverse Depth Parameterization?
  20. Describe pose graph optimization in SLAM.
  21. Define drift in SLAM.
    • What is scale drift?
  22. How can computational costs be reduced in SLAM?
    • What is keyframe-based optimization?
    • Why is a Look-Up Table (LUT) considered an effective strategy?
  23. What is relocalization in SLAM?
    • How does relocalization differ from loop closure detection?
  24. What does marginalization entail in the context of SLAM?
  25. Explain the concept of IMU pre-integration in SLAM.
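
For questions 4, 5, and 15 above, the following is a minimal, illustrative Gauss-Newton sketch on a toy 1D curve-fitting problem. The exponential model, the synthetic data, and all variable names are assumptions made purely for illustration and are not tied to any SLAM library, but the normal-equation update is the same kind of step used in bundle adjustment and scan-matching back-ends.

```python
import numpy as np

# Toy non-linear least-squares problem: fit y = exp(a*x + b) to noisy samples.
# Residual: r_i(a, b) = y_i - exp(a*x_i + b).
# Gauss-Newton solves the normal equations (J^T J) delta = -J^T r each step;
# Levenberg-Marquardt would instead solve (J^T J + lam*diag(J^T J)) delta = -J^T r
# and adapt the damping factor lam (a trust-region flavour).

def gauss_newton(x, y, params, iters=10):
    for _ in range(iters):
        pred = np.exp(params[0] * x + params[1])
        r = y - pred                                   # residual vector
        J = np.column_stack((-x * pred, -pred))        # Jacobian dr/d(a, b)
        delta = np.linalg.solve(J.T @ J, -(J.T @ r))   # normal equations
        params = params + delta
    return params

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.exp(0.8 * x + 0.2) + 0.01 * rng.standard_normal(x.size)
print(gauss_newton(x, y, np.array([0.0, 0.0])))        # approaches [0.8, 0.2]
```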
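
For question 17, here is a minimal NumPy sketch of the SO(3) Exp/Log maps via Rodrigues' formula. In production code one would normally rely on a library such as Sophus, GTSAM, or scipy.spatial.transform.Rotation; treat this purely as an illustration.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (the 'hat' operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exp map: rotation vector w (axis-angle) -> rotation matrix, via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:                      # small-angle: first-order approximation
        return np.eye(3) + hat(w)
    W = hat(w / theta)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

def so3_log(R):
    """Log map: rotation matrix -> rotation vector (angle * unit axis)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-8:
        return np.zeros(3)
    # Axis from the skew-symmetric part of R (valid away from theta == pi).
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

w = np.array([0.1, -0.2, 0.3])
R = so3_exp(w)
print(np.allclose(so3_log(R), w))   # True: Log(Exp(w)) recovers w
```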


LiDAR SLAM

  1. Explain how the Iterative Closest Point (ICP) algorithm functions. (A minimal point-to-point alignment sketch appears after this list.)
    • Which derivative work of the ICP algorithm do you prefer and why?
  2. Discuss the Point-to-Point and Point-to-Plane metrics in the context of the ICP algorithm.
  3. If an ICP algorithm fails to register two sets of point clouds, what could be the possible reasons?
  4. Explain the concept of a K-D tree.
    • How is the K-D tree utilized in processing LiDAR point clouds?
  5. Describe the Octree data structure.
  6. In which scenarios would you use a K-D tree over an Octree for LiDAR point cloud processing, and vice versa?
  7. What is point cloud downsampling and why is it used?
    • Describe the voxelization process.
    • What are the consequences of excessive downsampling?
  8. How is ground segmentation performed in point clouds?
    • What is the mathematical formulation of a 3D plane?
  9. What is a passthrough filter?
  10. What preprocessing techniques are available for removing outliers from LiDAR point clouds?
    • How does the Statistical Outlier Removal (SOR) filter work?
  11. Why is initial alignment crucial in ICP?
  12. Besides x, y, and z coordinates, what additional information can be embedded in a point cloud?
  13. What advantages are gained by integrating LiDAR with an IMU?
  14. How is loop detection performed using LiDAR point clouds?
  15. If a loop is detected, how should loop closure optimization be carried out?
    • How does loop closure in LiDAR SLAM differ from the bundle-adjustment technique in Visual SLAM?
  16. Why does z-drift often occur in LiDAR SLAM optimization using the ground plane?
  17. What is LiDAR de-skewing?
  18. What challenges arise in LiDAR SLAM when there are moving objects in the vicinity?
  19. What is the Multi-path problem in LiDAR?
  20. In what types of environments does LiDAR typically underperform?
  21. What are the different types of LiDAR?
  22. What are various methods for combining data from a camera and LiDAR?
  23. Contrast a point cloud, mesh, and surfel.
  24. What is a Fast Point Feature Histogram (FPFH) descriptor?
  25. What methods are available for detecting changes in a point cloud?
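
For questions 1-3 and 11 above, the following is a minimal sketch of the closed-form point-to-point alignment step (the Kabsch/Umeyama solution via SVD) that sits inside each ICP iteration. Correspondence search, outlier rejection, and the outer iteration loop are deliberately omitted; the function name and the synthetic data are illustrative assumptions.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form rigid transform (R, t) minimizing sum ||R*src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of already-associated 3D points.
    This is the inner step of point-to-point ICP; a full ICP loop would
    re-associate points (e.g. with a K-D tree) and repeat until convergence.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Sanity check on synthetic, noise-free data.
rng = np.random.default_rng(1)
src = rng.standard_normal((100, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                                   # force a proper rotation
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = best_fit_transform(src, dst)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```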


Visual SLAM

  1. Explain the process of image projection.
    • What are intrinsic and extrinsic matrices?
    • Which formula is used to estimate depth from a single-view image?
  2. What does camera calibration entail and what information is gained from it?
    • Provide the formulas for the K matrix and the Distortion coefficient.
  3. Describe the characteristics of Monocular, Stereo, and RGB-D SLAM, along with their respective advantages and disadvantages.
    • How is the depth map generated in RGB-D?
    • How is the depth map generated in Stereo?
    • Explain the concept of stereo disparity.
    • Is there any way to restore scale in monocular VSLAM?
  4. Explain bundle adjustment.
    • What are the differences between local and global bundle adjustments?
  5. What are the Essential and Fundamental matrices?
    • Write down the formulas for the Essential and Fundamental matrices.
    • How many degrees of freedom do the Essential and Fundamental matrices have?
    • What is the 5/7/8-point algorithm?
  6. What is the Homography matrix?
  7. Describe the camera models you are familiar with.
  8. Explain the process of local feature matching.
    • What are the differences between a keypoint and a descriptor?
    • How does a feature in deep learning differ from a feature in SLAM?
    • What strategies are effective for accurate feature matching?
  9. Explain how local feature tracking is performed.
    • What can serve as a motion model?
    • What methods can be used for optical flow?
    • Describe template tracking.
    • How does optical flow differ from direct tracking?
  10. Explain the features and differences between PTAM, ORB-SLAM, and SVO.
  11. What are the differences between Visual Odometry, Visual-SLAM, and Structure-from-Motion (SfM)?
  12. Why isn’t SIFT used in real-time VSLAM?
    • What are some alternatives to SIFT?
    • What are the benefits of using deep learning-based local feature detection?
  13. What is reprojection error?
    • What is photometric error?
  14. What is the Perspective-n-Point (PnP) problem?
    • How do you determine the camera’s pose when there is a 2D-3D correspondence?
  15. What are the differences between Feature-based VSLAM and Direct VSLAM?
  16. What methods are effective in reducing blur in an image?
  17. What is a co-visibility graph?
  18. How is loop closure detection performed?
    • Describe Bag-of-Visual-Words and VLAD.
    • How is a Bag-of-Visual-Words created?
    • Explain TF-IDF.
  19. What distinguishes a floating-point descriptor from a binary descriptor?
    • How can the distance between feature descriptors be calculated?
  20. What defines a good local feature?
    • What is meant by invariance?
  21. How is image patch similarity determined?
    • Compare SSD, SAD, and NCC.
  22. Explain Direct Linear Transform (DLT).
  23. Describe the Image Pyramid.
  24. Outline the methods for line/edge extraction.
  25. Explain Triangulation. (A minimal DLT-based triangulation sketch appears after this list.)
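
For questions 1, 13, 22, and 25 of this section, here is a minimal sketch of two-view triangulation with the Direct Linear Transform, plus a reprojection-error check. The camera intrinsics, the baseline, and all names are made-up values for illustration, not a reference implementation.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views using the Direct Linear Transform.

    P1, P2: 3x4 projection matrices (K [R | t]).
    x1, x2: corresponding pixel coordinates (u, v) in each image.
    Builds A X = 0 from the projection constraints and solves via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # de-homogenize

def reprojection_error(P, X, x):
    """Pixel distance between the observed point x and the projection of X."""
    proj = P @ np.append(X, 1.0)
    proj = proj[:2] / proj[2]
    return np.linalg.norm(proj - x)

# Synthetic two-view setup (all numbers are made up for illustration).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # first camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # 1 m baseline along x
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
print(X_est, reprojection_error(P1, X_est, x1))                      # ~X_true, ~0 px
```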


System design

  1. You have a mobile robot system with four cameras mounted on the front, back, and sides. Design a system that uses them for indoor mapping, localization, and path planning.
  2. Design a kiosk robot that can be used in an amusement park.
  3. Design a mapping/positioning system for parking in a garage (common in the US and Europe).
  4. Design a mapping/localization system for a parking garage.
  5. Design an augmented reality device for use on a moving Ferris wheel.
  6. Design an augmented reality device for a crowded subway station.
  7. Design a robust positioning system for an unmanned forklift to be used in a logistics facility. There are multiple forklifts moving around, and people are present.
  8. Design a SLAM system for a mobile robot to be used in a factory. There is constant lighting, but the floor is coated and highly reflective, and the factory equipment is made of metal and has few visual features and is highly reflective.
  9. You have 10 TB of real-world data. What development pipeline would you create to make SLAM work in as many environments as possible?
  10. Design a crowdsourced, automated HD-Map creation system for autonomous driving.


Coding interview questions (Live / implementation)

  1. Implement an image stitcher to create a panorama image from multiple consecutive images.
  2. Implement LiDAR SLAM using G-ICP based odometry. Loop closure should be implemented.
  3. Implement the FAST keypoint detector.
  4. Implement an algorithm to find the camera pose, given 2D-3D correspondence data.
  5. Implement the PROSAC framework.
  6. Implement the ICP algorithm.
  7. Implement a brute-force matcher given two sets of feature descriptors. (A minimal Hamming-distance sketch appears after this list.)
  8. Implement a Vector / Matrix container. Basic operators should work (e.g. matrix-matrix addition, matrix-matrix multiplication, matrix-vector multiplication).
  9. Implement the A* algorithm.
  10. Implement a fast matrix multiplication algorithm.
  11. (Live) Two Sum problem
  12. (Live) Maximum subarray sum problem
  13. (Live) Product of array except self problem
  14. (Live) Subarray sum equals k problem
  15. (Live) Longest common subsequence problem
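
As an example of the kind of answer expected for question 7, here is a minimal brute-force matcher for binary descriptors using Hamming distance and a Lowe-style ratio test. The descriptor size (32 bytes, as in ORB), the ratio threshold, and the toy data are assumptions for illustration only.

```python
import numpy as np

def brute_force_match(desc1, desc2, ratio=0.8):
    """Brute-force match binary descriptors (uint8 rows) by Hamming distance.

    Returns (i, j) index pairs that pass the ratio test, i.e. the best match
    is sufficiently better than the second-best one.
    """
    matches = []
    for i, d in enumerate(desc1):
        # Hamming distance to every descriptor in desc2 via XOR + bit counting.
        dists = np.unpackbits(np.bitwise_xor(desc2, d), axis=1).sum(axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy example: 32-byte descriptors, purely illustrative.
rng = np.random.default_rng(2)
desc2 = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)
desc1 = desc2[:10].copy()
desc1[:, 0] ^= 1                           # flip one bit so matches are not exact
print(brute_force_match(desc1, desc2))     # expect [(0, 0), (1, 1), ..., (9, 9)]
```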