
Comments (4)

emanther commented on July 17, 2024

Additionally, if both the LIDAR and the ego vehicle / ego body frames have z-axes parallel to the normal of the current local ground plane, then it is not clear why the extrinsics between them are time-varying (in the calibrated_sensor data for the LIDAR). The offset between them should be invariant to time, shouldn't it?

I suspect the diagram is not precise when it shows the IMU frame as parallel to the ground plane normal, since the data format says the ego_pose is in global coordinates (i.e. the normal of the local ground plane is not involved in its computation), which contradicts the diagram.

from nuscenes-devkit.

oscar-nutonomy commented on July 17, 2024

Dear @emanther : thanks for your questions, and apologies for the delay.

  1. For the global coordinate system, x and y point along the ground and z points straight up, in the direction of gravity.
  2. The ego-vehicle coordinate system is, as you say, the local ground plane that contains the points where the four wheels touch the ground. It is translated and rotated with respect to the global coordinate system as defined in the ego_pose table. I have included some code below that calculates the angle between the global z-axis and the local z-axis. Hopefully that will help you get started on building what you need.
  3. You are right: the calibrated_sensor does not change over time. This is also reflected in the schema, and in my code-snippet below.
"""
This code is to sort out a reply to
https://github.com/nutonomy/nuscenes-devkit/issues/122
"""

import numpy as np
import matplotlib.pyplot as plt

from pyquaternion import Quaternion

from nuscenes import NuScenes

GLOBAL_X_AXIS = np.array([1, 0, 0])
GLOBAL_Z_AXIS = np.array([0, 0, 1])


def unit_vector(vector):
    """ Returns the unit vector of the vector.  """
    return vector / np.linalg.norm(vector)


def angle_between(v1, v2):
    """ Returns the angle in degrees between vectors 'v1' and 'v2' """
    v1_u = unit_vector(v1)
    v2_u = unit_vector(v2)
    return 180/np.pi * np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0))


def main():
    nusc = NuScenes(version='v1.0-mini')

    scene_number = 6

    # Grab the first sample of the scene.
    sample = nusc.get('sample', nusc.scene[scene_number]['first_sample_token'])

    # Cycle through all samples and store the ego pose translation and rotation.
    # Note: process the current sample before advancing, so the first sample is included.
    poses = []
    while True:
        lidar = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
        pose = nusc.get('ego_pose', lidar['ego_pose_token'])
        poses.append((pose['translation'], Quaternion(pose['rotation'])))
        if sample['next'] == '':
            break
        sample = nusc.get('sample', sample['next'])

    # Print values for first pose.
    (x, y, _), q = poses[0]
    print('First location (x, y): {:.1f}, {:.1f}'.format(x, y))
    print('Orientation at first location: {:.2f} degrees'.format(q.degrees))

    # Plot all poses on the global x, y plane along with orientation.
    # This doesn't really answer the question; it's just a sanity check.

    for (x, y, z), q in poses:
        plt.plot(x, y, 'r.')

        # The x axis of the ego frame is pointing towards the front of the vehicle.
        ego_x_axis = q.rotate(GLOBAL_X_AXIS)
        plt.arrow(x, y, ego_x_axis[0], ego_x_axis[1])

    plt.axis('equal')
    plt.xlabel('x-global')
    plt.ylabel('y-global')

    # Calculate deviations from the global vertical axis and plot.
    # This demonstrates that the z-axes of the ego poses are not
    # all pointing exactly upwards.

    vertical_offsets = []
    for _, q in poses:

        # The z axis of the ego frame indicates deviation from global vertical axis (direction of gravity)
        ego_z_axis = q.rotate(GLOBAL_Z_AXIS)
        vertical_offsets.append(angle_between(ego_z_axis, GLOBAL_Z_AXIS))

    plt.savefig('xyposes.png')
    plt.figure()
    plt.hist(vertical_offsets)
    plt.xlabel('Angle (degrees)')
    plt.ylabel('Count')
    plt.title('Offset from global vertical axis.')
    plt.savefig('vertical_offsets.png')

    # Finally, show that all calibrated sensor tokens for a particular scene are indeed the same.
    sample = nusc.get('sample', nusc.scene[scene_number]['first_sample_token'])

    cali_sensor_tokens = []
    while True:
        lidar = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
        cali_sensor = nusc.get('calibrated_sensor', lidar['calibrated_sensor_token'])
        cali_sensor_tokens.append(cali_sensor['token'])
        if sample['next'] == '':
            break
        sample = nusc.get('sample', sample['next'])

    # Assert that they all point to the same calibrated sensor record
    assert len(set(cali_sensor_tokens)) == 1


if __name__ == "__main__":
    main()


emanther commented on July 17, 2024

Dear @oscar-nutonomy, thanks for your detailed reply! I now understand that the ego_pose z-axis / IMU z-axis are not quite parallel to gravity, and that the calibrated sensor tokens are persistent within all samples of a scene. I'm assuming almost all of the non-parallel "error" is due to the slope of the plane the vehicle wheels rest on.

The time-varying aspect of the calibrated sensors I noticed must have been cross-scene (probably cross-vehicle). I checked all scenes in the trainval set; each has a unique calibrated_sensor_token.
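The slope explanation can be sanity-checked without the dataset: rotating the global z-axis by a known pitch angle and running the result through the same angle computation as the snippet above recovers exactly that angle. A minimal sketch using only numpy (the hand-built rotation matrix here stands in for the ego_pose quaternion):

```python
import numpy as np

GLOBAL_Z_AXIS = np.array([0.0, 0.0, 1.0])


def angle_between(v1, v2):
    """Angle in degrees between vectors v1 and v2."""
    v1_u = v1 / np.linalg.norm(v1)
    v2_u = v2 / np.linalg.norm(v2)
    return 180 / np.pi * np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0))


def pitch_rotation(degrees):
    """Rotation about the y-axis by the given angle, i.e. a pure pitch."""
    t = np.radians(degrees)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])


# A vehicle parked on a 3-degree incline: its z-axis deviates from the
# global vertical by exactly the slope angle.
slope_deg = 3.0
ego_z_axis = pitch_rotation(slope_deg) @ GLOBAL_Z_AXIS
print(round(angle_between(ego_z_axis, GLOBAL_Z_AXIS), 6))  # prints 3.0
```

So a histogram of vertical offsets peaking at a few degrees is consistent with ordinary road grades.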


oscar-nutonomy commented on July 17, 2024

Yes: the "errors" (offsets) are precisely because of the ground sloping.
And yes: for simplicity we record one calibrated sensor record per scene, even though some may be similar across scenes (if they are from the same log).
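The per-scene bookkeeping can be illustrated at the schema level without the dataset. A sketch using mock records: the `log_token` field is what the nuScenes schema uses to tie scenes to a drive, while the `calibrated_sensor_token` values here are hypothetical stand-ins for the per-scene calibration records (in the real schema the token is reached via sample_data, not stored on the scene):

```python
from collections import defaultdict

# Hypothetical scene records: each scene carries a log_token, and each
# gets its own calibrated-sensor token even when two scenes come from
# the same log (i.e. the same physical vehicle and sensor rig).
scenes = [
    {'name': 'scene-0001', 'log_token': 'log-a', 'calibrated_sensor_token': 'cs-1'},
    {'name': 'scene-0002', 'log_token': 'log-a', 'calibrated_sensor_token': 'cs-2'},
    {'name': 'scene-0003', 'log_token': 'log-b', 'calibrated_sensor_token': 'cs-3'},
]

# Group calibration tokens by log: within a log they are distinct records,
# but their translation/rotation payloads are expected to be near-identical.
by_log = defaultdict(list)
for scene in scenes:
    by_log[scene['log_token']].append(scene['calibrated_sensor_token'])

print(dict(by_log))  # {'log-a': ['cs-1', 'cs-2'], 'log-b': ['cs-3']}
```

Comparing the translation/rotation payloads within each group is the natural follow-up check.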

