Evaluation

Data Preparation

The first step of evaluation is to prepare your own results. Run your method on each image of each scene to generate a GraspGroup, then call the save_npy method of GraspGroup to dump the results.

To generate a GraspGroup and save it, you can pass a 2D numpy array directly to the GraspGroup constructor:

# each row has 17 entries: score, width, height, depth,
# 3x3 rotation matrix flattened to 9 values, translation (3 values), object_id
gg = GraspGroup(np.array([
    [score_1, width_1, height_1, depth_1, rotation_matrix_1(9), translation_1(3), object_id_1],
    [score_2, width_2, height_2, depth_2, rotation_matrix_2(9), translation_2(3), object_id_2],
    ...,
    [score_N, width_N, height_N, depth_N, rotation_matrix_N(9), translation_N(3), object_id_N]
]))
gg.save_npy(save_path)

where N is the number of grasp poses your algorithm predicts for the image. For object_id, you can simply use 0. For the meaning of the other entries, refer to the Grasp Label Format documentation (API Loaded Labels).
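For a concrete (if artificial) illustration, here is a minimal sketch that builds a GraspGroup of two placeholder grasps with identity rotations and saves it; every numeric value below is a dummy for illustration, not a real prediction:

import numpy as np
from graspnetAPI import GraspGroup

# two placeholder grasps; each row has 17 entries
grasp_array = np.zeros((2, 17), dtype=np.float32)
grasp_array[:, 0] = [0.9, 0.7]                # scores
grasp_array[:, 1] = 0.08                      # widths (meters)
grasp_array[:, 2:4] = 0.02                    # heights and depths (meters)
grasp_array[:, 4:13] = np.eye(3).reshape(-1)  # identity rotation matrices
grasp_array[:, 13:16] = [[0.0, 0.0, 0.4],     # translations (meters)
                         [0.1, 0.0, 0.4]]
grasp_array[:, 16] = 0                        # object_id, 0 is fine

gg = GraspGroup(grasp_array)
gg.save_npy('0000.npy')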

The file structure of the dump folder should be as follows:

|-- dump_folder
    |-- scene_0100
    |   |-- kinect
    |   |   |
    |   |   --- 0000.npy to 0255.npy
    |   |
    |   --- realsense
    |       |
    |       --- 0000.npy to 0255.npy
    |
    |-- scene_0101
    |
    ...
    |
    --- scene_0189

You can choose to generate dump files for only one camera; doing so will not cause an error.
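Putting the pieces together, here is a sketch of the dump loop over the test split; run_network is a hypothetical placeholder for your own inference code that returns a GraspGroup:

import os

dump_folder = '/path/to/dump_folder'
camera = 'kinect'

for sceneId in range(100, 190):                   # test scenes 100 to 189
    scene_dir = os.path.join(dump_folder, 'scene_%04d' % sceneId, camera)
    os.makedirs(scene_dir, exist_ok=True)
    for annId in range(256):                      # 256 images per scene
        gg = run_network(sceneId, camera, annId)  # hypothetical: your model here
        gg.save_npy(os.path.join(scene_dir, '%04d.npy' % annId))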

Evaluation API

First, create GraspNetEval instances, one per camera.

# GraspNetAPI example for evaluating grasps for a scene.
# Change graspnet_root and dump_folder to your own paths.
import numpy as np
from graspnetAPI import GraspNetEval

####################################################################
graspnet_root = '/home/gmh/graspnet' # ROOT PATH FOR GRASPNET
dump_folder = '/home/gmh/git/rgbd_graspnet/dump_affordance_iounan/' # ROOT PATH FOR DUMP
####################################################################

sceneId = 121
camera = 'kinect'

# one evaluator per camera
ge_k = GraspNetEval(root = graspnet_root, camera = 'kinect', split = 'test')
ge_r = GraspNetEval(root = graspnet_root, camera = 'realsense', split = 'test')

Evaluate A Single Scene

# eval a single scene
print('Evaluating scene:{}, camera:{}'.format(sceneId, camera))
acc = ge_k.eval_scene(scene_id = sceneId, dump_folder = dump_folder)
np_acc = np.array(acc)
print('mean accuracy:{}'.format(np.mean(np_acc)))
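eval_scene evaluates all 256 annotated images of the scene, so the mean over the returned accuracy array, as computed above, gives the scene-level mean accuracy for this scene and camera.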

Evaluate All Scenes

# eval all test scenes for kinect (this takes a long time; runs 24 worker processes)
print('Evaluating kinect')
res, ap = ge_k.eval_all(dump_folder, proc = 24)

Evaluate ‘Seen’ Split

# eval 'seen' split for realsense
print('Evaluating realsense')
res, ap = ge_r.eval_seen(dump_folder, proc = 24)
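The other two test splits follow the same pattern through the analogous eval_similar and eval_novel methods of GraspNetEval:

# eval 'similar' and 'novel' splits for realsense
res_similar, ap_similar = ge_r.eval_similar(dump_folder, proc = 24)
res_novel, ap_novel = ge_r.eval_novel(dump_folder, proc = 24)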