
ROS monocular camera_calibration error

I am trying to calibrate an Orbbec Astra Pro with the ROS camera_calibration package on a Jetson TK1 running Ubuntu 14.04 with ROS Indigo installed. I installed the ROS camera calibration binaries for ROS Indigo. The calibration data is shown in the terminal after pressing "calibrate", but when I afterwards press the "commit" button I get the following error:

```
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
sys.excepthook is missing
lost sys.stderr
Exception in thread Thread-2 (most likely raised during interpreter shutdown):
Exception in thread Thread-7 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
  File "/opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/camera_calibrator.py", line 74, in run
Exception in thread Thread-5 (most likely raised during interpreter shutdown): 'NoneType' object is not callable
Traceback (most recent call last):
```

How can I solve this problem? Thank you!

How to implement calibration algorithm for camera?

How do I implement a new calibration algorithm for the Kinect camera of my robot? Should I create a new node for this? I'm new to ROS, so I would appreciate assistance. :)
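To make the question concrete, here is the kind of node skeleton I have in mind — a minimal sketch that assumes the Kinect RGB topic is `/camera/rgb/image_raw` and an 8x6 inner-corner checkerboard, and that delegates the actual math to OpenCV (which is where a custom algorithm would be plugged in). Is a standalone node like this the right approach?

```python
#!/usr/bin/env python
# Minimal sketch of a custom calibration node.
# Assumptions: RGB topic /camera/rgb/image_raw, 8x6 inner-corner board, 25 mm squares.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class SimpleCalibrator(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.pattern = (8, 6)          # inner corners per row/column
        self.square = 0.025            # square edge length in metres
        self.obj_points = []           # 3D board points, one entry per accepted view
        self.img_points = []           # detected 2D corners, one entry per accepted view
        self.shape = None
        rospy.Subscriber("/camera/rgb/image_raw", Image, self.callback)

    def callback(self, msg):
        gray = self.bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
        self.shape = gray.shape[::-1]  # (width, height) as OpenCV expects
        found, corners = cv2.findChessboardCorners(gray, self.pattern)
        if found:
            objp = np.zeros((self.pattern[0] * self.pattern[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:self.pattern[0], 0:self.pattern[1]].T.reshape(-1, 2) * self.square
            self.obj_points.append(objp)
            self.img_points.append(corners)

    def calibrate(self):
        # Call this once enough views are collected; replace with a custom algorithm if desired.
        rms, K, D, _, _ = cv2.calibrateCamera(self.obj_points, self.img_points, self.shape, None, None)
        rospy.loginfo("RMS error %.3f\nK=%s\nD=%s", rms, K, D)

if __name__ == "__main__":
    rospy.init_node("simple_calibrator")
    node = SimpleCalibrator()
    rospy.spin()
```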

depth_camera_info and rgb_camera_info always same

I would like to simply get the distance from the **/camera/depth/image_rect** topic. I use an Asus Xtion Pro Live camera and the openni ROS Indigo package. My problem is that the distance to any object is off by about 5 cm, and this error varies with the distance to the object, so I suspect an intrinsic calibration error. That is why I tried to calibrate my camera with this tutorial: http://wiki.ros.org/openni_launch/Tutorials/IntrinsicCalibration

Now I would like to calibrate both the depth and the RGB camera. Should they have the same calibration? In other words, should the topics /camera/rgb/camera_info and /camera/depth/camera_info be the same? Because even when I start my camera like this: I get the exact same values when echoing the /camera/rgb/camera_info and the /camera/depth/camera_info topics. Moreover, I get an additional error:

```
[ INFO] [1528801309.809967779]: Starting color stream.
[ INFO] [1528801309.875827378]: Starting depth stream.
[ INFO] [1528801310.236155035]: camera calibration URL: /home/mlampre/.ros/camera_info/depth_PS1080_PrimeSense.yaml
[ERROR] [1528801310.236205499]: Invalid camera calibration URL: /home/mlampre/.ros/camera_info/depth_PS1080_PrimeSense.yaml
```

**Edit 1:** I corrected the URL names. Now I do not get any error anymore, but even though the two files are different, when I echo the topics /camera/rgb/camera_info and /camera/depth/camera_info I still get the same message.
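**Edit 2:** For completeness, this is roughly how I read the distance from **/camera/depth/image_rect** — a minimal sketch that assumes the depth image is encoded as 32-bit floats in metres (the usual openni_launch encoding for that topic); if your driver publishes 16UC1 millimetres the conversion differs:

```python
# Minimal sketch: print the depth at the image centre.
# Assumes /camera/depth/image_rect is encoded as 32FC1 (metres).
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    h, w = depth.shape[:2]
    rospy.loginfo("distance at image centre: %.3f m", depth[h // 2, w // 2])

rospy.init_node("depth_probe")
rospy.Subscriber("/camera/depth/image_rect", Image, depth_callback)
rospy.spin()
```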

About the camera calibration parameters

Hi, I have been looking at the camera calibration parameters generated during calibration and at the explanation of the CameraInfo parameters in the ROS wiki. I can't figure out what the K' matrix is or how the projection matrix is obtained. I thought it would be given by P = K*R, but the math doesn't add up. Thanks!
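**Edit:** If I understand the pipeline correctly (an assumption worth verifying against the camera_calibration source), K' is the "new" camera matrix that OpenCV computes for the undistorted image, and for a monocular camera P is simply [K' | 0] rather than K*R. A small sketch of that computation, with K and D as placeholder values:

```python
# Hedged sketch of how the K' inside P is typically produced.
# K and D are placeholders for your own calibration output.
import numpy as np
import cv2

K = np.array([[695.52, 0.0, 308.98],
              [0.0, 689.34, 229.44],
              [0.0, 0.0, 1.0]])
D = np.array([-1.0577, 0.8621, -0.0100, 0.0067, 0.0])
size = (640, 480)

# alpha = 0 crops away all invalid pixels after undistortion;
# alpha = 1 keeps the full field of view (with black borders).
K_new, roi = cv2.getOptimalNewCameraMatrix(K, D, size, 0.0)

# For a monocular camera, CameraInfo.P is K' with a zero fourth column appended.
P = np.hstack((K_new, np.zeros((3, 1))))
print(P)
```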

Using camera calibration results in OpenCV

I've calibrated the Kinect intrinsics (both depth and RGB) using the camera_calibration module and afterwards saved a bunch of RGB images (the calibration file was detected successfully; I subscribed to kinect1/rgb/image_color). Do I need to load the distortion coefficients and camera matrix into the OpenCV application that will process the images, or was all the required image processing already done by applying the calibration file, so that I should feed in a vector of zeros as the distortion coefficients and a 3x3 zero matrix with a 1 at (2,2) as the camera matrix? I'm specifically interested in using solvePnP or solvePnPRansac to estimate object pose in OpenCV (I'm also not sure which one to use, so feel free to comment on that as well). My launch file to publish images:
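Separately from the launch file, here is the kind of usage I have in mind — a minimal sketch that assumes the saved images come from the raw `kinect1/rgb/image_color` stream (and therefore still contain lens distortion, so both K and D from the calibration file are passed to solvePnP). The file name and the correspondences are placeholders:

```python
# Hedged sketch: pose estimation with solvePnP using the saved calibration file.
import numpy as np
import cv2
import yaml

with open("rgb_calibration.yaml") as f:   # path to the saved calibration (assumed name)
    calib = yaml.safe_load(f)

K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)
D = np.array(calib["distortion_coefficients"]["data"])

# Example correspondences: four corners of a 0.10 m square marker
# (replace with your own 3D-2D detections).
object_points = np.array([[0, 0, 0], [0.10, 0, 0],
                          [0.10, 0.10, 0], [0, 0.10, 0]], dtype=np.float64)
image_points = np.array([[320, 240], [400, 240],
                         [400, 320], [320, 320]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, D)
# If your matches can contain outliers, cv2.solvePnPRansac takes the same
# inputs and additionally returns an inlier mask.
print(ok, rvec.ravel(), tvec.ravel())
```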

How to use camera calibration in uvc_camera node?

I have done camera calibration according to [this tutorial](http://wiki.ros.org/camera_calibration). I have my camera.yaml:

```
image_width: 640
image_height: 480
camera_name: camera
camera_matrix:
  rows: 3
  cols: 3
  data: [695.522341, 0.000000, 308.978108, 0.000000, 689.343000, 229.440273, 0.000000, 0.000000, 1.000000]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-1.057717, 0.862148, -0.010012, 0.006670, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000]
projection_matrix:
  rows: 3
  cols: 4
  data: [492.770355, 0.000000, 310.383965, 0.000000, 0.000000, 577.829346, 219.297603, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000]
```

I want to use this calibration file with the uvc_camera node. After launching it, I echo the camera_info topic and get the following:

```
header:
  seq: 973
  stamp:
    secs: 1529935088
    nsecs: 499953536
  frame_id: camera
height: 480
width: 640
distortion_model: plumb_bob
D: [-1.057717, 0.862148, -0.010012, 0.00667, 0.0]
K: [695.522341, 0.0, 308.978108, 0.0, 689.343, 229.440273, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [492.770355, 0.0, 310.383965, 0.0, 0.0, 577.829346, 219.297603, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
---
```

The parameters above are correct. But when I run `rosrun rqt_image_view rqt_image_view` and subscribe to the topic `/image_raw`, I see no changes. My questions are: is the calibration actually being applied? If it is, why can't I see any changes? If it is not, what is the right way to use the calibration file?
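As far as I understand, the calibration only fills in the camera_info message; `/image_raw` itself is never modified, so a rectified image has to be produced separately (for example by image_proc). A minimal sketch of what that undistortion looks like when done directly with OpenCV and the values above (the file name is a placeholder for a frame grabbed from `/image_raw`):

```python
# Minimal sketch: undistort one frame using the camera.yaml values above.
# This roughly reproduces what image_proc's image_rect output does for a
# monocular camera (R is identity here).
import numpy as np
import cv2

K = np.array([[695.522341, 0.0, 308.978108],
              [0.0, 689.343000, 229.440273],
              [0.0, 0.0, 1.0]])
D = np.array([-1.057717, 0.862148, -0.010012, 0.006670, 0.0])
P = np.array([[492.770355, 0.0, 310.383965],
              [0.0, 577.829346, 219.297603],
              [0.0, 0.0, 1.0]])            # left 3x3 block of the projection matrix

raw = cv2.imread("frame.png")              # assumed filename of a saved /image_raw frame
rect = cv2.undistort(raw, K, D, None, P)   # P is used as the new camera matrix
cv2.imwrite("frame_rect.png", rect)
```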

Remove a single ROS package

I'm still trying to solve [this](https://answers.ros.org/question/296087/camera_calibration-opens-gui-window-but-freezes/) problem in another question. I decided to uninstall camera_calibration and install it again, see if that helps. However if I try to remove the package with apt-get it also wants to remove the whole ROS! what's the correct way of removing a single package? see output below: sudo apt-get purge ros-kinetic-camera-calibration Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libaprutil1-dev libboost-all-dev libboost-date-time-dev libboost-filesystem-dev libboost-iostreams-dev libboost-log-dev libboost-regex-dev libboost-system-dev libboost-thread-dev libboost-wave-dev libegl1-mesa-dev libgazebo7-dev libgdk-pixbuf2.0-dev libgles2-mesa-dev libgtk2.0-dev libharfbuzz-dev libharfbuzz-gobject0 libinput-dev liblog4cxx-dev liblog4cxx10-dev libmirclient-dev libmircommon-dev libmircookie-dev libmircookie2 libmircore-dev libogre-1.9-dev libpango1.0-dev libpcl-dev libpyside2-dev libqt5opengl5-dev libqt5webkit5-dev libqt5x11extras5-dev libqt5xmlpatterns5-dev libqt5xmlpatterns5-private-dev libsdformat4-dev liburdfdom-dev libvtk6-qt-dev libwayland-bin libwayland-dev qtbase5-dev qtbase5-private-dev qtdeclarative5-dev qtdeclarative5-private-dev qtmultimedia5-dev qtscript5-dev qtscript5-private-dev qttools5-dev qttools5-private-dev ros-kinetic-actionlib ros-kinetic-actionlib-msgs ros-kinetic-actionlib-tutorials ros-kinetic-bond ros-kinetic-bond-core ros-kinetic-bondcpp ros-kinetic-bondpy ros-kinetic-camera-calibration-parsers ros-kinetic-camera-info-manager ros-kinetic-class-loader ros-kinetic-collada-parser ros-kinetic-collada-urdf ros-kinetic-common-msgs ros-kinetic-common-tutorials ros-kinetic-compressed-depth-image-transport ros-kinetic-compressed-image-transport ros-kinetic-control-msgs ros-kinetic-cpp-common ros-kinetic-cv-bridge ros-kinetic-depth-image-proc ros-kinetic-desktop ros-kinetic-diagnostic-aggregator ros-kinetic-diagnostic-analysis ros-kinetic-diagnostic-common-diagnostics ros-kinetic-diagnostic-msgs ros-kinetic-diagnostic-updater ros-kinetic-diagnostics ros-kinetic-dynamic-reconfigure ros-kinetic-eigen-conversions ros-kinetic-executive-smach ros-kinetic-filters ros-kinetic-gazebo-dev ros-kinetic-gazebo-msgs ros-kinetic-gazebo-plugins ros-kinetic-gazebo-ros ros-kinetic-gazebo-ros-pkgs ros-kinetic-geometric-shapes ros-kinetic-geometry ros-kinetic-geometry-msgs ros-kinetic-geometry-tutorials ros-kinetic-image-common ros-kinetic-image-geometry ros-kinetic-image-proc ros-kinetic-image-publisher ros-kinetic-image-rotate ros-kinetic-image-transport ros-kinetic-image-transport-plugins ros-kinetic-image-view ros-kinetic-interactive-marker-tutorials ros-kinetic-interactive-markers ros-kinetic-joint-state-publisher ros-kinetic-kdl-conversions ros-kinetic-kdl-parser ros-kinetic-laser-assembler ros-kinetic-laser-filters ros-kinetic-laser-geometry ros-kinetic-laser-pipeline ros-kinetic-librviz-tutorial ros-kinetic-map-msgs ros-kinetic-message-filters ros-kinetic-message-runtime ros-kinetic-mk ros-kinetic-nav-msgs ros-kinetic-nodelet ros-kinetic-nodelet-core ros-kinetic-nodelet-topic-tools ros-kinetic-nodelet-tutorial-math ros-kinetic-opencv3 ros-kinetic-pcl-conversions ros-kinetic-pcl-msgs ros-kinetic-pcl-ros ros-kinetic-perception-pcl ros-kinetic-pluginlib ros-kinetic-pluginlib-tutorials ros-kinetic-polled-camera ros-kinetic-python-qt-binding ros-kinetic-qt-dotgraph 
ros-kinetic-qt-gui ros-kinetic-qt-gui-cpp ros-kinetic-qt-gui-py-common ros-kinetic-random-numbers ros-kinetic-resource-retriever ros-kinetic-robot ros-kinetic-robot-model ros-kinetic-robot-state-publisher ros-kinetic-ros ros-kinetic-ros-base ros-kinetic-ros-comm ros-kinetic-ros-core ros-kinetic-ros-tutorials ros-kinetic-rosbag ros-kinetic-rosbag-storage ros-kinetic-rosbuild ros-kinetic-rosconsole ros-kinetic-rosconsole-bridge ros-kinetic-roscpp ros-kinetic-roscpp-core ros-kinetic-roscpp-serialization ros-kinetic-roscpp-traits ros-kinetic-roscpp-tutorials ros-kinetic-rosgraph-msgs ros-kinetic-roslaunch ros-kinetic-roslib ros-kinetic-roslisp ros-kinetic-rosmsg ros-kinetic-rosnode ros-kinetic-rosout ros-kinetic-rospack ros-kinetic-rospy ros-kinetic-rospy-tutorials ros-kinetic-rosservice ros-kinetic-rostest ros-kinetic-rostime ros-kinetic-rostopic ros-kinetic-rosunit ros-kinetic-roswtf ros-kinetic-rqt-action ros-kinetic-rqt-bag ros-kinetic-rqt-bag-plugins ros-kinetic-rqt-common-plugins ros-kinetic-rqt-console ros-kinetic-rqt-dep ros-kinetic-rqt-graph ros-kinetic-rqt-gui ros-kinetic-rqt-gui-cpp ros-kinetic-rqt-gui-py ros-kinetic-rqt-image-view ros-kinetic-rqt-launch ros-kinetic-rqt-logger-level ros-kinetic-rqt-moveit ros-kinetic-rqt-msg ros-kinetic-rqt-nav-view ros-kinetic-rqt-plot ros-kinetic-rqt-pose-view ros-kinetic-rqt-publisher ros-kinetic-rqt-py-common ros-kinetic-rqt-py-console ros-kinetic-rqt-reconfigure ros-kinetic-rqt-robot-dashboard ros-kinetic-rqt-robot-monitor ros-kinetic-rqt-robot-plugins ros-kinetic-rqt-robot-steering ros-kinetic-rqt-runtime-monitor ros-kinetic-rqt-rviz ros-kinetic-rqt-service-caller ros-kinetic-rqt-shell ros-kinetic-rqt-srv ros-kinetic-rqt-tf-tree ros-kinetic-rqt-top ros-kinetic-rqt-topic ros-kinetic-rqt-web ros-kinetic-rviz ros-kinetic-rviz-plugin-tutorials ros-kinetic-rviz-python-tutorial ros-kinetic-self-test ros-kinetic-sensor-msgs ros-kinetic-shape-msgs ros-kinetic-simulators ros-kinetic-smach-msgs ros-kinetic-smach-ros ros-kinetic-stage ros-kinetic-stage-ros ros-kinetic-std-msgs ros-kinetic-std-srvs ros-kinetic-stereo-image-proc ros-kinetic-stereo-msgs ros-kinetic-tf ros-kinetic-tf-conversions ros-kinetic-tf2 ros-kinetic-tf2-eigen ros-kinetic-tf2-geometry-msgs ros-kinetic-tf2-kdl ros-kinetic-tf2-msgs ros-kinetic-tf2-py ros-kinetic-tf2-ros ros-kinetic-theora-image-transport ros-kinetic-topic-tools ros-kinetic-trajectory-msgs ros-kinetic-turtle-actionlib ros-kinetic-turtle-tf ros-kinetic-turtle-tf2 ros-kinetic-turtlesim ros-kinetic-urdf ros-kinetic-urdf-tutorial ros-kinetic-vision-opencv ros-kinetic-visualization-marker-tutorials ros-kinetic-visualization-msgs ros-kinetic-visualization-tutorials ros-kinetic-viz ros-kinetic-xacro ros-kinetic-xmlrpcpp Use 'sudo apt autoremove' to remove them. The following packages will be REMOVED ros-kinetic-camera-calibration* ros-kinetic-desktop-full* ros-kinetic-image-pipeline* ros-kinetic-perception* 0 to upgrade, 0 to newly install, 4 to remove and 11 not to upgrade. After this operation, 238 kB disk space will be freed. Do you want to continue? [Y/n] n Abort.

camera_calibration opens gui window but freezes

I'm trying to run the camera_calibration node with this command:

```
rosrun camera_calibration cameracalibrator.py --size 8x7 --square 0.023 --approximate=0.1 --no-service-check image:=/camera/color/image_raw
```

A window opens but then freezes indefinitely. I tried to debug cameracalibrator.py and saw that the freeze occurs at the `cv2.namedWindow("display", cv2.WINDOW_NORMAL)` call in the run() method of DisplayThread, which is created in the OpenCVCalibrationNode `__init__()`. I have tried to move the cv2.namedWindow call somewhere else, but it also freezes. In fact, after trying a few times, the freeze turns into a segmentation fault when executing the cv2.namedWindow statement. Any idea what the problem could be? I'm running Ubuntu 16.04, ROS Kinetic, and OpenCV 3.3.
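To narrow things down, here is a minimal standalone sketch (no ROS involved) that exercises the same call; if this also hangs or segfaults, the problem is presumably in the OpenCV/GUI-toolkit installation rather than in camera_calibration itself:

```python
# Standalone HighGUI smoke test: if this hangs or crashes, the issue is in
# the OpenCV build / GUI toolkit, not in the camera_calibration node.
import numpy as np
import cv2

cv2.namedWindow("display", cv2.WINDOW_NORMAL)
cv2.imshow("display", np.zeros((240, 320, 3), dtype=np.uint8))
cv2.waitKey(1000)              # keep the window up for one second
cv2.destroyAllWindows()
print("cv2.namedWindow works; OpenCV version:", cv2.__version__)
```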

Kinect camera calibration is taking hours. What to do?

How can I put together a faster calibration method for the Kinect camera? It currently takes 3 hours to complete a calibration, and I want to reduce this time. May I request assistance from someone?

Why is image_raw better than image_rect_color?

Hi, I am using the Kinetic version. I performed monocular camera calibration of a Logitech C930e using [this ROS tutorial](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration). Afterwards, I ran rectification as described in [the image_proc documentation](http://wiki.ros.org/image_proc). When I compare `image_raw` and `image_rect_color`, it is `image_raw` that shows the desired/rectified-looking image (see the comparison image linked below), while the actual raw-looking image comes out of `image_rect_color`. As far as I know, image_rect_color should be the rectified, desired result. All in all, I am confused about this issue and need an explanation. What am I doing wrong? Comparison image: https://drive.google.com/open?id=1eSQnJIIkmC9UVi04YxrQ6lpEiptPpPSV

Stereo Calibration from images file

Hi, I have a set of images of a checkerboard pattern from a camera whose calibration info I would like to find out. I have a ROS driver for my Point Grey Chameleon3 cameras and can publish the following topics (rostopic list):

```
/camera/left/camera_info
/camera/left/image_raw
/camera/left/image_raw/compressed
/camera/left/image_raw/compressed/parameter_descriptions
/camera/left/image_raw/compressed/parameter_updates
/camera/left/image_raw/compressedDepth
/camera/left/image_raw/compressedDepth/parameter_descriptions
/camera/left/image_raw/compressedDepth/parameter_updates
/camera/left/image_raw/theora
/camera/left/image_raw/theora/parameter_descriptions
/camera/left/image_raw/theora/parameter_updates
/camera/parameter_descriptions
/camera/parameter_updates
/camera/right/camera_info
/camera/right/image_raw
/camera/right/image_raw/compressed
/camera/right/image_raw/compressed/parameter_descriptions
/camera/right/image_raw/compressed/parameter_updates
/camera/right/image_raw/compressedDepth
/camera/right/image_raw/compressedDepth/parameter_descriptions
/camera/right/image_raw/compressedDepth/parameter_updates
/camera/right/image_raw/theora
/camera/right/image_raw/theora/parameter_descriptions
/camera/right/image_raw/theora/parameter_updates
/diagnostics
```

Unfortunately, I cannot find instructions on how to feed images to the camera calibration tool instead of the camera itself. Is there any way to do this using the ROS package? So far my understanding is that I should first rectify the images coming from my camera, so I need to rectify the images that I saved and want to use later for the calibration. Is that right? I found this launch file that I need to modify, and then run this image_proc launch file. Is that the correct sequence, and if so, what comes after that? Alternatively, are there any other tools I could use? Any help? Thanks
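**Edit:** In case it is relevant, I understand the calibration could also be run offline with plain OpenCV on the saved image files — a minimal sketch below (assuming files named left_*.png / right_*.png and an 8x6 inner-corner board with 0.108 m squares, as in the ROS tutorial) — but I would still prefer a ROS-native workflow if one exists:

```python
# Offline stereo calibration from saved image pairs (minimal sketch).
# Assumed naming: left_00.png / right_00.png, ...; 8x6 inner corners, 0.108 m squares.
import glob
import cv2
import numpy as np

pattern = (8, 6)
square = 0.108
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, cl = cv2.findChessboardCorners(left, pattern)
    ok_r, cr = cv2.findChessboardCorners(right, pattern)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = left.shape[::-1]
# Calibrate each camera, then estimate the stereo extrinsics (R, T between cameras).
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
rms, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("stereo RMS reprojection error:", rms)
```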

How to use camera calibration results from ROS stereo calibration to check the accuracy of the camera at long distances.

Hi, I'm using two Point Grey Chameleon3 mono cameras set up as master/slave and synchronized so they can work as a stereo camera. I have installed a ROS driver and am able to publish the camera topics. I'm using this ROS driver: https://github.com/KumarRobotics/flea3, and I can launch it with:

```
roslaunch flea3 stereo_node.launch left:=18081067 right:=17496474 camera:=stereo_camera
```

Then I followed the ROS stereo calibration tutorial [http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration](http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration) to calibrate the cameras. I collected 70 samples for the calibration and the epipolar error was around 0.2. These are the calibration results:

```
Left:
('D = ', [-0.20826996106865595, 0.18366155924086058, -0.0034661778577718466, 0.00307931151347718, 0.0])
('K = ', [432.82205088588205, 0.0, 272.95231180581044, 0.0, 435.6996693192078, 174.95641222266673, 0.0, 0.0, 1.0])
('R = ', [0.9746296173669449, 0.02700939091034212, -0.22218821245470002, -0.026808041682390916, 0.9996329035169494, 0.003922640364378138, 0.22221259607033478, 0.0021333093834195356, 0.974995903139473])
('P = ', [482.0586696457318, 0.0, 402.53031158447266, 0.0, 0.0, 482.0586696457318, 178.41748809814453, 0.0, 0.0, 0.0, 1.0, 0.0])

Right:
('D = ', [-0.20871659658963718, 0.13988041114304747, -0.0024096479983088267, 0.0031211255518143266, 0.0])
('K = ', [428.59279077571426, 0.0, 275.84270706306677, 0.0, 430.39539990687126, 189.6284029604295, 0.0, 0.0, 1.0])
('R = ', [0.9744460995294874, 0.030491070431326987, -0.22254234144476925, -0.03069272631969819, 0.9995256063000713, 0.002553213962635353, 0.2225146189867814, 0.004342461793354892, 0.9749196825188937])
('P = ', [482.0586696457318, 0.0, 402.53031158447266, -71.0404082822227, 0.0, 482.0586696457318, 178.41748809814453, 0.0, 0.0, 0.0, 1.0, 0.0])

('self.T ', [-0.14360295357921507, -0.004493432498569846, 0.03279579808809728])
('self.R ', [0.999992392164315, -0.003887570979931969, 0.0003200083856870595, 0.0038871258997246238, 0.9999914930203251, 0.0013799055115972527, -0.0003253701440041504, -0.0013786511006187285, 0.9999989967272028])
None

# oST version 5.0 parameters

[image]

width
512

height
384

[narrow_stereo/left]

camera matrix
432.822051 0.000000 272.952312
0.000000 435.699669 174.956412
0.000000 0.000000 1.000000

distortion
-0.208270 0.183662 -0.003466 0.003079 0.000000

rectification
0.974630 0.027009 -0.222188
-0.026808 0.999633 0.003923
0.222213 0.002133 0.974996

projection
482.058670 0.000000 402.530312 0.000000
0.000000 482.058670 178.417488 0.000000
0.000000 0.000000 1.000000 0.000000

# oST version 5.0 parameters

[image]

width
512

height
384

[narrow_stereo/right]

camera matrix
428.592791 0.000000 275.842707
0.000000 430.395400 189.628403
0.000000 0.000000 1.000000

distortion
-0.208717 0.139880 -0.002410 0.003121 0.000000

rectification
0.974446 0.030491 -0.222542
-0.030693 0.999526 0.002553
0.222515 0.004342 0.974920

projection
482.058670 0.000000 402.530312 -71.040408
0.000000 482.058670 178.417488 0.000000
0.000000 0.000000 1.000000 0.000000
```

Then I took some images of the same chessboard used for the calibration at different distances, such as 4 m, 8 m, 10 m, 20 m, 30 m and 40 m. I measured the distance from the camera to the chessboard very accurately with a laser range finder. My question is: knowing the distance to the object, how do I use these calibration results to see how accurate the camera is, to get the disparity, and to work out how small an object can be detected at a certain distance? For example, can I use the camera matrix, or some formula involving the camera baseline, to get the disparity? Any help? Thanks
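**Edit:** To make the question more concrete, this is the kind of check I have in mind, based on the standard rectified-stereo relation d = fx * B / Z; the baseline follows from the right projection matrix (P_right[0,3] = -fx * B, so B ≈ 71.04 / 482.06 ≈ 0.147 m with the values above). Whether this is the correct way to assess accuracy at long range is exactly what I am asking:

```python
# Sketch: expected disparity and depth resolution for the calibration above,
# using the rectified-stereo model d = fx * B / Z.
fx = 482.0586696457318        # focal length from the projection matrices (pixels)
Tx = -71.0404082822227        # P_right[0, 3] = -fx * baseline
baseline = -Tx / fx           # ~0.147 m

for Z in [4.0, 8.0, 10.0, 20.0, 30.0, 40.0]:
    d = fx * baseline / Z                 # expected disparity in pixels
    dZ = Z ** 2 / (fx * baseline)         # depth change per pixel of disparity error
    print("Z = %4.1f m -> disparity = %6.2f px, ~%.2f m per pixel of disparity error" % (Z, d, dZ))
```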

How to publish/stream roi of rectified images by using image_proc?

I have calibrated my camera using the cameracalibrator.py node. When I use the image_proc node to compare the raw and rectified images, I don't get what I want. Although the image is rectified, the border of the image gets distorted toward the opposite direction, as in the image below: ![left is raw, right is image_rect_color](/upfiles/15361753278679593.png) As a result, the rectified image is still useless for me. So this time I calibrated my camera using OpenCV, so that I can get the region of interest (ROI) in the image after the undistortion operation; the rectified image then becomes perfect for me. However, I need ROS to do this while streaming the rectified image via image_proc. Is there any way to publish rectified images without the unwanted border?
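For reference, this is the OpenCV behaviour I would like image_proc to reproduce — a minimal sketch where K and D are placeholder values standing in for my own calibration:

```python
# Undistort and crop to the valid region of interest returned by OpenCV.
# K and D below are placeholders for the values from my calibration file.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

img = cv2.imread("frame.png")                 # one raw camera frame (assumed filename)
h, w = img.shape[:2]
K_new, roi = cv2.getOptimalNewCameraMatrix(K, D, (w, h), 1.0)  # alpha=1 keeps all pixels
und = cv2.undistort(img, K, D, None, K_new)
x, y, rw, rh = roi
cropped = und[y:y + rh, x:x + rw]             # drop the curved black border
cv2.imwrite("frame_rect_roi.png", cropped)
```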

camera calibration with different chess board size

Hi, I want to perform monocular camera calibration using a chessboard that has 6x6 squares, where each square has a side length of 0.054 m. The number of squares and the square size are different from the example shown in [http://wiki.ros.org/camera_calibration](http://wiki.ros.org/camera_calibration). My question is: can I get a good calibration with this modified chessboard, which differs from the given example? Thank you in advance.
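**Edit:** My current understanding (worth double-checking against the camera_calibration documentation) is that `--size` counts the *inner* corners, so a board of 6x6 squares would be passed as 5x5, and `--square` is the square edge length in metres; the image and camera names below are placeholders for my own topics:

```
rosrun camera_calibration cameracalibrator.py --size 5x5 --square 0.054 image:=/camera/image_raw camera:=/camera
```

What makes me unsure is that a square 5x5 corner grid has a 90-degree rotational ambiguity, which is presumably why the tutorial uses an asymmetric pattern such as 8x6.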

Unable to launch calibration window when doing monocular camera calibration

I'm running Kinetic on Ubuntu 16.04, and I attempted to follow the camera calibration tutorial, but I could not reliably get the calibration window to launch, despite receiving no errors, after I executed:

```
rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.015 image:=/camera/image_raw camera:=/usb_cam
```

But if I add the --approximate option with a value, it runs normally. I am confused, because that option is meant for stereo cameras.

ROS Projector Camera calibration package

I am trying to calibrate a projector-camera stereo pair. Is there a projector-to-camera calibration package in ROS? I am looking for something similar to what is available for monocular or stereo camera calibration in ROS. (http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration)

How to configure CameraInfoManager set_camera_info service name?

How do I configure CameraInfoManager to publish the correct set_camera_info services? I am working on a very simple node that takes stereo images produced by a stereo USB camera, where the images come in as a single double-wide image consisting of the left/right images concatenated. The node simply splits the incoming image down the middle and republishes it on left and right image topics. I am trying to add the appropriate CameraInfoManager machinery to load/save calibration data. The current code is in the branch 'cam_info' at https://github.com/dbc/side_x_side_stereo

I create two pointers to CameraInfoManager instances:

```
// Camera info managers.
camera_info_manager::CameraInfoManager *left_cinfo_;
camera_info_manager::CameraInfoManager *right_cinfo_;
```

Allocate and init:

```
// Allocate and initialize camera info managers.
left_cinfo_ = new camera_info_manager::CameraInfoManager(nh, "left_camera");
right_cinfo_ = new camera_info_manager::CameraInfoManager(nh, "right_camera");
left_cinfo_->loadCameraInfo("");
right_cinfo_->loadCameraInfo("");
```

But I don't see set_camera_info services associated with 'left_camera' or 'right_camera' being advertised:

```
dave@ai1:~/catkin_ws$ rosservice list
/cameracalibrator/get_loggers
/cameracalibrator/set_logger_level
/image_raw/compressed/set_parameters
/image_raw/compressedDepth/set_parameters
/image_raw/theora/set_parameters
/libuvc_camera/get_loggers
/libuvc_camera/set_logger_level
/libuvc_camera/set_parameters
/rosout/get_loggers
/rosout/set_logger_level
/rqt_gui_cpp_node_5296/get_loggers
/rqt_gui_cpp_node_5296/set_logger_level
/rqt_gui_py_node_5296/get_loggers
/rqt_gui_py_node_5296/set_logger_level
/set_camera_info
/side_by_side_stereo_splitter_node/get_loggers
/side_by_side_stereo_splitter_node/set_camera_info
/side_by_side_stereo_splitter_node/set_logger_level
/stereo/left/image_raw/compressed/set_parameters
/stereo/left/image_raw/compressedDepth/set_parameters
/stereo/left/image_raw/theora/set_parameters
/stereo/right/image_raw/compressed/set_parameters
/stereo/right/image_raw/compressedDepth/set_parameters
/stereo/right/image_raw/theora/set_parameters
```

There is /side_by_side_stereo_splitter_node/set_camera_info, but I am confused as to what is creating that. So I am clearly not setting up the CameraInfoManagers correctly to get the services that camera_calibration expects. It is kind of whiney:

```
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/camera_calibration/camera_calibrator.py", line 247, in on_mouse
    if self.do_upload():
  File "/opt/ros/melodic/lib/python2.7/dist-packages/camera_calibration/camera_calibrator.py", line 207, in do_upload
    response = self.set_left_camera_info_service(info[0])
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 439, in __call__
    return self.call(*args, **kwds)
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 499, in call
    service_uri = self._get_service_uri(request)
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 467, in _get_service_uri
    raise ServiceException("service [%s] unavailable"%self.resolved_name)
rospy.service.ServiceException: service [/left_camera/set_camera_info] unavailable
```

What do I need to do to get CameraInfoManager to advertise sensible services?

Read the camera's calibration parameters from the file and publish them on the `/CAM_NAME/camera_info` topic

I have calibrated my camera according to this [tutorial](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration), and the obtained calibration parameters are stored in `yaml` and `ini` files. Is there any way to read these files and then publish the camera parameters on the specified topic `/CAM_NAME/camera_info`?
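Something along these lines is what I have in mind — a minimal sketch that assumes the YAML produced by the calibration tutorial (with camera_matrix, distortion_coefficients, rectification_matrix and projection_matrix blocks); the file path, topic name and publishing rate are placeholders:

```python
#!/usr/bin/env python
# Minimal sketch: read a camera_calibration YAML file and publish it as
# sensor_msgs/CameraInfo. File path and topic name are placeholders.
import yaml
import rospy
from sensor_msgs.msg import CameraInfo

def load_camera_info(path):
    with open(path) as f:
        calib = yaml.safe_load(f)
    msg = CameraInfo()
    msg.width = calib["image_width"]
    msg.height = calib["image_height"]
    msg.distortion_model = calib["distortion_model"]
    msg.D = calib["distortion_coefficients"]["data"]
    msg.K = calib["camera_matrix"]["data"]
    msg.R = calib["rectification_matrix"]["data"]
    msg.P = calib["projection_matrix"]["data"]
    return msg

if __name__ == "__main__":
    rospy.init_node("camera_info_publisher")
    info = load_camera_info("camera.yaml")
    pub = rospy.Publisher("/CAM_NAME/camera_info", CameraInfo, queue_size=1)
    rate = rospy.Rate(30)              # match your camera frame rate
    while not rospy.is_shutdown():
        info.header.stamp = rospy.Time.now()
        pub.publish(info)
        rate.sleep()
```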

Reading Stereo Camera Calibration using rosparam

Hi all! I have a YAML file which contains calibration matrices for a stereo pair of cameras. I just can't figure out how to get the calibration data into a CameraInfoManager instance. The YAML file looks like this:

```
# Transform that takes a point in lidar frame and puts it in the left DAVIS camera frame.
T_cam0_lidar:
- [ 0.01140786, -0.99985524, 0.01262402, 0.03651404]
- [ 0.04312291, -0.01212116, -0.99899624, -0.08637514]
- [ 0.99900464, 0.0119408 , 0.04297839, -0.12882002]
- [ 0. , 0. , 0. , 1. ]
cam0:
  # Transform that takes a point in the left DAVIS IMU frame to the left DAVIS camera frame.
  T_cam_imu:
  - [ 0.9998771896957381, -0.015128404695721132, -0.004091075349566317, -0.002760732198752687]
  - [ 0.015081690576531692, 0.9998233340761795, -0.011217987615716542, -0.001652095229536009]
  - [ 0.004260062852482415, 0.011154909598696812, 0.99992871, -0.017829494499329843]
  - [0.0, 0.0, 0.0, 1.0]
  projection_matrix:
  - [199.6530123165822, 0.0, 177.43276376280926, 0.0]
  - [0.0, 199.6530123165822, 126.81215684365904, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  rectification_matrix:
  - [0.999877311526236, 0.015019439766575743, -0.004447282784398257]
  - [-0.014996983873604017, 0.9998748347535599, 0.005040367172759556]
  - [0.004522429630305261, -0.004973052949604937, 0.9999774079320989]
  camera_model: pinhole
  distortion_coeffs: [-0.048031442223833355, 0.011330957517194437, -0.055378166304281135, 0.021500973881459395]
  distortion_model: equidistant
  intrinsics: [226.38018519795807, 226.15002947047415, 173.6470807871759, 133.73271487507847]
  resolution: [346, 260]
  rostopic: /davis/left/image_raw
cam1:
  # Transform that takes a point in the left DAVIS IMU frame to the right DAVIS camera frame.
  T_cam_imu:
  - [0.9999459669775138, -0.002658051013083418, -0.010049770654903653, -0.10052351879548456]
  - [0.002628365498611729, 0.9999921475216119, -0.0029659045702786734, -0.002149609013368439]
  - [0.01005757526494452, 0.0029393298430320275, 0.9999451012529955, -0.01780039434280798]
  - [0.0, 0.0, 0.0, 1.0]
  # Transform that takes a point in the left DAVIS camera frame to the right DAVIS camera frame.
  T_cn_cnm1:
  - [0.9999285439274112, 0.011088072985503046, -0.004467849222081981, -0.09988137641750752]
  - [-0.011042817783611191, 0.9998887260774646, 0.01002953830336461, -0.0003927067773089277]
  - [0.004578560319692358, -0.009979483987103495, 0.9999397215256256, 1.8880107752680777e-06]
  - [0.0, 0.0, 0.0, 1.0]
  projection_matrix:
  - [199.6530123165822, 0.0, 177.43276376280926, -19.941771812941038]
  - [0.0, 199.6530123165822, 126.81215684365904, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  rectification_matrix:
  - [0.9999922706537476, 0.003931701344419404, -1.890238450965101e-05]
  - [-0.003931746704476347, 0.9999797362744968, -0.005006836150689904]
  - [-7.83382948021244e-07, 0.0050068717705076754, 0.9999874655386736]
  camera_model: pinhole
  distortion_coeffs: [-0.04846669832871334, 0.010092844338123635, -0.04293073765014637, 0.005194706897326005]
  distortion_model: equidistant
  intrinsics: [226.0181418548734, 225.7869434267677, 174.5433576736815, 124.21627572590607]
  resolution: [346, 260]
  rostopic: /davis/right/image_raw
```

I am able to access the data through a NodeHandle by adding a rosparam tag to my launch file and then doing:

```
std::vector<double> test;
nh.getParam("event_stereo/stereo_params/cam0/projection_matrix", test);
```

This works fine, so I am able to access what is in the YAML through the NodeHandle.
What I'd like to do now is actually load the matrices into an image_geometry::PinholeCameraModel so I can rectify incoming images, and to load the camera extrinsics in order to do some stereo camera operations. I've tried all sorts of things, each more inelegant than the last, but there must be an easy way... Many thanks in advance!
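**Edit:** To show roughly what I mean, here is the idea sketched in Python (porting it to C++ should be mechanical). The field mapping from the YAML to sensor_msgs/CameraInfo is the part I am fairly sure about; whether image_geometry accepts the `equidistant` distortion model in my distro is an assumption I have not verified:

```python
# Sketch (in Python for brevity): build CameraInfo messages from the YAML above
# and hand them to image_geometry. The file path is a placeholder.
import yaml
import numpy as np
from sensor_msgs.msg import CameraInfo
from image_geometry import PinholeCameraModel, StereoCameraModel

def camera_info_from_yaml(cam):
    fx, fy, cx, cy = cam["intrinsics"]
    info = CameraInfo()
    info.width, info.height = cam["resolution"]
    info.distortion_model = cam["distortion_model"]
    info.D = list(cam["distortion_coeffs"])
    info.K = [fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0]
    info.R = list(np.array(cam["rectification_matrix"]).flatten())
    info.P = list(np.array(cam["projection_matrix"]).flatten())
    return info

with open("stereo_params.yaml") as f:      # placeholder path to the YAML above
    params = yaml.safe_load(f)

left_info = camera_info_from_yaml(params["cam0"])
right_info = camera_info_from_yaml(params["cam1"])

left_model = PinholeCameraModel()
left_model.fromCameraInfo(left_info)       # per-camera model for rectification

stereo_model = StereoCameraModel()
stereo_model.fromCameraInfo(left_info, right_info)  # stereo model for disparity/3D work
```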

Asus Xtion / MS Kinect Extrinsics Calibration

Hi, I am following [this tutorial](http://wiki.ros.org/openni_launch/Tutorials/ExtrinsicCalibration) to calibrate the RGB and depth cameras of my ASUS Xtion PRO Live on Ubuntu 18.04 running ROS Melodic. However, I cannot install the [camera_pose_calibration](https://github.com/ros-perception/camera_pose) package. As I understand it, the package referred to in the tutorial was only supported up to Groovy. Are there any alternatives for achieving this calibration? Any help would be highly appreciated. Thanks. P.S. It has only been a week since I got my hands dirty with ROS.