Hi everybody
I am using a Logitech camera and these are my calibration results:
camera matrix
512.674949 0.000000 315.382678
0.000000 510.413978 205.677427
0.000000 0.000000 1.000000
projection
511.625946 0.000000 313.445886 0.000000
0.000000 511.368134 204.690908 0.000000
0.000000 0.000000 1.000000 0.000000
distortion
0.047498 -0.123536 -0.000940 -0.002017 0.0000
board size: 15 by 11, square size 70 mm (0.07 m)
Could anybody tell me whether these results are acceptable for these parameters?
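As a rough sanity check (my own sketch, not an official criterion — and 640x480 is an assumed capture resolution): the focal lengths above imply a plausible field of view for a webcam, and the principal point should sit near the image center.

```python
import math

# Sanity-check the calibrated camera matrix by converting the focal lengths
# to field of view. fx, fy, cx, cy are taken from the results above;
# 640x480 is an assumed capture resolution.
fx, fy = 512.674949, 510.413978
cx, cy = 315.382678, 205.677427
width, height = 640, 480

hfov = math.degrees(2 * math.atan(0.5 * width / fx))
vfov = math.degrees(2 * math.atan(0.5 * height / fy))

print(round(hfov), round(vfov))                    # about 64 and 50 degrees
print(abs(cx - width / 2), abs(cy - height / 2))   # offset from center, px
```

A horizontal FOV around 60–70 degrees and a principal point within a few tens of pixels of the center are typical for this class of webcam, so the numbers are not unreasonable at first glance; the better quality measure is the reprojection error the calibrator reports.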
Best
Taher
↧
camera matrix differs from given example in ROS
↧
How to use gscam with ar_pose?
Hi,
I'm new to ROS and I'm trying to use ar_pose live tracking with the webcam on my laptop (using gscam), but I'm not receiving any image in rviz. This is the error I receive in the terminal:
[ERROR] [1403618585.907565185]: No camera_parameters.txt file found. Use default file if no other is available.
ERROR: cannot launch node of type [ar_pose/ar_single]: can't locate node [ar_single] in package [ar_pose]
Processing...
[rviz-1] process has died [pid 10906, exit code -11, cmd /opt/ros/hydro/lib/rviz/rviz -d /home/gautham/hydro_workspace/sandbox/src/ar_tools/ar_pose/launch/live_single.rviz __name:=rviz __log:=/home/gautham/.ros/log/8edb2bfe-fb8f-11e3-8219-b13485d129c0/rviz-1.log].
This is my launch file
And in the rviz file:
Image Topic: camera/gscam/image_raw
Any help is appreciated. Thanks in advance :)
↧
camera_calibration.py not found
Hi,
I have an issue with the camera_calibration package inside image_pipeline: the camera_calibration.py file can't be found.
When running:
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/image_raw camera:=/camera
I get: `[rospack] Error: package 'cameracalibrator.py' not found`
I followed the tutorial on here:
http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration
I am using ROS Indigo armhf
↧
Reuse camera calibration parameters?
I am using a stereo camera setup which I need to recalibrate every time I want to use it. Is there any way to save the best camera calibration parameters I got and just reuse them?
It would probably only work under the assumption that the cameras do not change in orientation with respect to each other, which is fairly trivial to ensure.
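For what it's worth, drivers built on camera_info_manager already do this: the calibrator's Commit step sends the result to the driver's set_camera_info service, which saves it (typically under ~/.ros/camera_info/) and reloads it on startup. The saved file is plain YAML along these lines (the numbers below are placeholders, and each camera of the stereo pair gets its own file):

```yaml
image_width: 640
image_height: 480
camera_name: left_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [500.0, 0.0, 320.0, 0.0, 500.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [500.0, 0.0, 320.0, 0.0, 0.0, 500.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
```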
↧
How can I use a non-native resolution calibration with ar_pose?
Hello all, I'm trying to use ar_pose with a fisheye lens on a GigE Vision camera (specifically, a Vimba). Since the lens doesn't expose the whole sensor, I am left with a choice: either use cameracalibrator.py with very poor x coverage, or set the region of interest in the camera to only send back the square of pixels that actually contains information and calibrate on that. I tried the first way with mediocre results, so I'm trying the second way now, but I'm hitting a problem: when I hit Store to store the calibration parameters in the camera, it errors out because the resolution of the calibration doesn't match the resolution of the sensor.
What I want to do is use the camera with the reduced region of interest, but change the node that ar_pose gets the camera_info topic from (currently it comes from the driver, which reads the info from the camera; I want to replace it with something that reads it from disk). I don't think that's too hard to do; I'm just a ROS newb.
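For the read-it-from-disk part, the message to reproduce is sensor_msgs/CameraInfo. A minimal sketch of assembling its fields from stored values (my own illustration with placeholder numbers; in a real node you would fill a sensor_msgs.msg.CameraInfo and publish it with rospy alongside the image topic — the plain dict here only stands in for that message):

```python
# Sketch: turn a calibration stored on disk into camera_info-style fields.
# In a real node you would import rospy and sensor_msgs.msg.CameraInfo and
# publish at the same rate as the image topic; this dict keeps the sketch
# self-contained.

def make_camera_info(width, height, K, D):
    """Assemble the fields a sensor_msgs/CameraInfo message expects.

    K is the 3x3 camera matrix as a flat row-major list of 9 numbers,
    D the plumb_bob distortion coefficients (k1, k2, t1, t2, k3).
    """
    fx, _, cx, _, fy, cy, *_ = K
    # P is K with a zero fourth column for a monocular camera.
    P = [fx, 0.0, cx, 0.0, 0.0, fy, cy, 0.0, 0.0, 0.0, 1.0, 0.0]
    return {
        "width": width, "height": height,
        "distortion_model": "plumb_bob",
        "K": K, "D": D,
        "R": [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0],
        "P": P,
    }

# Placeholder values standing in for whatever your ROI calibration produced.
info = make_camera_info(480, 480,
                        K=[400.0, 0.0, 240.0, 0.0, 400.0, 240.0, 0.0, 0.0, 1.0],
                        D=[0.05, -0.12, 0.0, 0.0, 0.0])
print(info["P"][0], info["P"][2])   # 400.0 240.0
```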
↧
Zoom camera calibration in ROS?
How would you tackle the task of zoom camera calibration using standard ROS tools? Both the calibration process itself, and `camera_info` publication afterwards? I haven't found anything in `camera_calibration` or `camera_info_manager`.
If I understand it correctly, you have to estimate the dependency of the `K` matrix on the current zoom level using a function. The dependency of `K` on focal length is IMO linear (as long as we ignore skew and the other odd terms).
There are several quantities that can be reported by the camera:
- "zoom ratio": then I'd just multiply the `f_x` and `f_y` terms of the `K` matrix with this ratio (assuming the camera was calibrated with ratio 1)
- focal length: I'd simply put the focal length in the matrix
- field of view: I can estimate the sensor width from the calibration, and then use it to get the focal length (from `fov = 2*atan(0.5*width/f)`, so `f = 0.5*width/tan(0.5*fov)`).
This of course ignores radial (and other) distortion, which may also be influenced by the focal length (at least I think so).
Then I'd publish the updated camera matrix to the appropriate `camera_info` topic.
What do you think about this approach? Has there already been something doing a similar task?
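The zoom-ratio variant above can be sketched as follows (my own illustration; it assumes the camera was calibrated at zoom ratio 1 and that the principal point does not move when zooming, which is often only approximately true):

```python
# Scale the focal terms of a calibrated camera matrix by the reported
# zoom ratio; the principal point and the rest of K are left untouched.

def zoomed_camera_matrix(K, zoom_ratio):
    """K is a 3x3 row-major nested list; returns a new matrix with fx, fy scaled."""
    Kz = [row[:] for row in K]
    Kz[0][0] *= zoom_ratio   # f_x
    Kz[1][1] *= zoom_ratio   # f_y
    return Kz

K = [[500.0, 0.0, 320.0],
     [0.0, 500.0, 240.0],
     [0.0, 0.0, 1.0]]
print(zoomed_camera_matrix(K, 2.0)[0][0])   # 1000.0
```

For the field-of-view variant you would first recover `f = 0.5*width/tan(0.5*fov)` and write that into the matrix instead of scaling.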
↧
Unable to locate package ros-fuerte-camera-pose
I am using Ubuntu 14.04 and trying to calibrate the camera extrinsics, but I am getting this error. Any idea how to solve it?
↧
camera_calibration "ImportError" for stereo setup
Hi all;
I am using camera_calibration node (http://wiki.ros.org/camera_calibration) to calibrate a stereo system based on a VRMagic D3 camera. To access the images of the smartcam I am using vrmagic_ros_bridge_server node.
I am running ROS INDIGO in an Ubuntu 64 bit machine.
After remapping images to a common workspace, I have the following topics:
/img_pub1
/img_pub1_camera_info
/img_pub2
/img_pub2_camera_info
/object_image/left/camera_info
/object_image/left/image_raw
/object_image/left/image_raw/compressed
/object_image/left/image_raw/compressed/parameter_descriptions
/object_image/left/image_raw/compressed/parameter_updates
/object_image/left/image_raw/compressedDepth
/object_image/left/image_raw/compressedDepth/parameter_descriptions
/object_image/left/image_raw/compressedDepth/parameter_updates
/object_image/left/image_raw/theora
/object_image/left/image_raw/theora/parameter_descriptions
/object_image/left/image_raw/theora/parameter_updates
/object_image/right/camera_info
/object_image/right/image_raw
/object_image/right/image_raw/compressed
/object_image/right/image_raw/compressed/parameter_descriptions
/object_image/right/image_raw/compressed/parameter_updates
/object_image/right/image_raw/compressedDepth
/object_image/right/image_raw/compressedDepth/parameter_descriptions
/object_image/right/image_raw/compressedDepth/parameter_updates
/object_image/right/image_raw/theora
/object_image/right/image_raw/theora/parameter_descriptions
/object_image/right/image_raw/theora/parameter_updates
/rosout
/rosout_agg
Following the StereoCalibration tutorial step by step, I call the stereo calibration procedure:
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.024 right:=/object_image/right/image_raw left:=/object_image/left/image_raw right_camera:=/object_image/right left_camera:=/object_image/left
And I obtain the following error. It seems like some kind of dependency problem, but I have installed everything (I think) correctly:
Traceback (most recent call last):
File "/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py", line 50, in
from camera_calibration.calibrator import MonoCalibrator, StereoCalibrator, ChessboardInfo, Patterns
ImportError: No module named calibrator
I would appreciate any clue to solve this; I need to obtain the camera_info of both cameras asap!
Thank you all in advance,
Alberto
↧
camera calibration service not available (specific driver for VRMagic)
Hello all;
I am using "camera calibration" node to calibrate a stereo system based on a VRMagic D3 stereo camera.
The driver I am using to obtain the images from the camera is "vrmagic_ros_bridge_server".
My launch file is:
I have remapped the images to a common workspace, and these are the topics I have:
/img_pub1
/img_pub1_camera_info
/img_pub2
/img_pub2_camera_info
/rosout
/rosout_agg
/stereo/camera_info
/stereo/left
/stereo/left/compressed
/stereo/left/compressed/parameter_descriptions
/stereo/left/compressed/parameter_updates
/stereo/left/compressedDepth
/stereo/left/compressedDepth/parameter_descriptions
/stereo/left/compressedDepth/parameter_updates
/stereo/left/theora
/stereo/left/theora/parameter_descriptions
/stereo/left/theora/parameter_updates
/stereo/right
/stereo/right/compressed
/stereo/right/compressed/parameter_descriptions
/stereo/right/compressed/parameter_updates
/stereo/right/compressedDepth
/stereo/right/compressedDepth/parameter_descriptions
/stereo/right/compressedDepth/parameter_updates
/stereo/right/theora
/stereo/right/theora/parameter_descriptions
/stereo/right/theora/parameter_updates
The services that I have available are:
/publish_image_and_info_to_common_ws_node_1/get_loggers
/publish_image_and_info_to_common_ws_node_1/set_logger_level
/publish_image_and_info_to_common_ws_node_2/get_loggers
/publish_image_and_info_to_common_ws_node_2/set_logger_level
/rosout/get_loggers
/rosout/set_logger_level
/stereo/left/compressed/set_parameters
/stereo/left/compressedDepth/set_parameters
/stereo/left/theora/set_parameters
/stereo/right/compressed/set_parameters
/stereo/right/compressedDepth/set_parameters
/stereo/right/theora/set_parameters
/vrmagic_ros_bridge_server_node/get_loggers
/vrmagic_ros_bridge_server_node/set_logger_level
Unfortunately, when I use the "camera_calibration" node, I have the following error:
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.024 right:=/stereo/right left:=/stereo/left right_camera:=/right left_camera:=/left
('Waiting for service', '/left/set_camera_info', '...')
Service not found
('Waiting for service', '/right/set_camera_info', '...')
Service not found
Which is obvious, because my VRMagic camera driver is not providing those services. If I use the `--no-service-check` flag, the terminal gets blocked and I obtain no results.
I know this issue has appeared several times on the forums, and all the answers were about changing or modifying the camera driver, but what if the driver we are using does NOT provide those services in any way?
Is it not possible to calibrate cameras without using standard camera drivers?
I would appreciate any kind of help a lot...
Thank you,
Alberto
↧
Depth sensor + LWIR registration to generate coloured pointclouds
Hi,
I am trying to generate a colored point cloud using a depth camera and a thermal camera. I have 2 questions.
1. I am unclear on how to do extrinsic calibration to find the tf of the thermal camera w.r.t. the depth camera. I want to do something like http://wiki.ros.org/openni_launch/Tutorials/ExtrinsicCalibrationExternal but this package isn't available for Indigo. Can I use [camera_pose_calibration](http://wiki.ros.org/camera_pose_calibration), which is supported on Indigo? Or should I use [industrial_extrinsic_cal](http://wiki.ros.org/industrial_extrinsic_cal)?
2. To create a colored point cloud, I looked at depth_image_proc/point_cloud_xyzrgb, which requires a rectified color image, but the thermal camera images are grayscale and I have an image_rect instead of image_rect_color. Can I remap the image as image_rect_color, or should I convert my grayscale image to a color image using `cv::applyColorMap()`?
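As an illustration of the second option (a NumPy sketch rather than C++; in a node you would do this inside the image callback and republish): replicating the single channel three times already produces a valid rgb8 image that point_cloud_xyzrgb can consume, while `cv::applyColorMap()` would additionally map intensities to a false-color palette.

```python
import numpy as np

# Convert a mono8 thermal image (H, W) into an rgb8 image (H, W, 3) by
# replicating the gray channel; a false-color LUT could be applied instead.
def gray_to_rgb(gray):
    assert gray.ndim == 2 and gray.dtype == np.uint8
    return np.repeat(gray[:, :, None], 3, axis=2)

thermal = np.arange(12, dtype=np.uint8).reshape(3, 4)  # toy 3x4 frame
rgb = gray_to_rgb(thermal)
print(rgb.shape)   # (3, 4, 3)
```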
System details:
ROS version :Indigo
Ubuntu 14.04 running on odroid
Any help regarding this is appreciated
Thanks
Akshay
↧
usb_cam package and head_camera.yaml missing
Hello,
I am new to ROS. I have been trying to integrate a USB camera into a robotics project at our university's lab.
I will write the steps I did and the errors that I got.
1. I went into my ~/catkin_ws/src folder and ran
git clone https://github.com/bosch-ros-pkg/usb_cam
2. Then I went into my ~/catkin_ws folder and ran
source devel/setup.bash
3. Then I ran
roscore
4. Then I ran
roslaunch usb_cam usb_cam-test.launch
5. This was the output
... logging to /home/submetu/.ros/log/1fd73732-55ad-11e6-a65d-d8fc93edb7dc/roslaunch-submetu-9724.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://submetu:40095/
SUMMARY
========
PARAMETERS
* /image_view/autosize: True
* /rosdistro: kinetic
* /rosversion: 1.12.2
* /usb_cam/camera_frame_id: usb_cam
* /usb_cam/image_height: 480
* /usb_cam/image_width: 640
* /usb_cam/io_method: mmap
* /usb_cam/pixel_format: yuyv
* /usb_cam/video_device: /dev/video0
NODES
/
image_view (image_view/image_view)
usb_cam (usb_cam/usb_cam_node)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
process[usb_cam-1]: started with pid [9742]
ERROR: cannot launch node of type [image_view/image_view]: can't locate node [image_view] in package [image_view]
[ INFO] [1469813059.142822397]: using default calibration URL
[ INFO] [1469813059.142904972]: camera calibration URL: file:///home/submetu/.ros/camera_info/head_camera.yaml
[ INFO] [1469813059.142970068]: Unable to open camera calibration file [/home/submetu/.ros/camera_info/head_camera.yaml]
[ WARN] [1469813059.142995991]: Camera calibration file /home/submetu/.ros/camera_info/head_camera.yaml not found.
[ INFO] [1469813059.143027047]: Starting 'head_camera' (/dev/video0) at 640x480 via mmap (yuyv) at 30 FPS
[ERROR] [1469813064.141746901]: VIDIOC_S_FMT error 5, Input/output error
terminate called after throwing an instance of 'boost::exception_detail::clone_impl>'
what(): boost: mutex lock failed in pthread_mutex_lock: Invalid argument
[usb_cam-1] process has died [pid 9742, exit code -6, cmd /home/submetu/catkin_ws/devel/lib/usb_cam/usb_cam_node __name:=usb_cam __log:=/home/submetu/.ros/log/1fd73732-55ad-11e6-a65d-d8fc93edb7dc/usb_cam-1.log].
log file: /home/submetu/.ros/log/1fd73732-55ad-11e6-a65d-d8fc93edb7dc/usb_cam-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
I tried the tutorial at http://wiki.ros.org/camera_calibration, but I am not getting any camera output because the above error comes every time. So I can't calibrate, i.e. I can't get the head_camera.yaml file.
I looked at many tutorials but I could not find anything helpful to me.
Please help :(
Thank you in advance
(I'm using a Lenovo ThinkPad E440 with Ubuntu. My camera is a Logitech camera; it is connected, since Cheese shows video output.)
↧
Are camera calibration files resolution agnostic?
Are the calibration parameters, generated by cameracalibrator.py, portable across arbitrary camera resolutions?
I'm attempting to calibrate a Raspberry Pi camera following the [monocular camera calibration tutorial](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration). The process seemed to complete flawlessly.
However, I ran the calibration routine with the image resolution at 640x480. Normally, I run the driver at 320x240 resolution to limit CPU load. Can I use the same calibration parameters for 320x240, or do I need to re-run the calibration routine for that resolution and use a separate calibration file?
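The intrinsics are resolution dependent, but for the common case where the lower resolution is a pure downscale of the full frame (no crop), the camera matrix can be rescaled rather than recalibrated. A sketch of that rescaling (my own illustration with placeholder numbers; binning or cropping modes would also need the ROI handled):

```python
# Rescale a camera matrix calibrated at (w0, h0) to a new resolution (w1, h1),
# assuming the new mode is a uniform downscale covering the same field of view.
def rescale_camera_matrix(K, w0, h0, w1, h1):
    sx, sy = w1 / w0, h1 / h0
    fx, _, cx = K[0]
    _, fy, cy = K[1]
    return [[fx * sx, 0.0, cx * sx],
            [0.0, fy * sy, cy * sy],
            [0.0, 0.0, 1.0]]

K640 = [[500.0, 0.0, 320.0],
        [0.0, 500.0, 240.0],
        [0.0, 0.0, 1.0]]
K320 = rescale_camera_matrix(K640, 640, 480, 320, 240)
print(K320[0])   # [250.0, 0.0, 160.0]
```

The plumb_bob distortion coefficients operate on normalized image coordinates, so they carry over unchanged under a uniform rescale.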
↧
Asus Xtion not supported in Hydro using openni2?
I'm having trouble with the Asus Xtion and openni2 in Hydro.
I'm getting this error:
`[ERROR] [140934478.275581907]: Unsupported color video mode - Resolution: 640x480 30Hz Format:RGB888`
basically what this guy is seeing: http://answers.ros.org/question/191498/
I will be trying to calibrate, as suggested in that thread, once I am able to get a hold of a camera calibration grid.
Has a solution been found anywhere? Other than upgrading to Indigo I mean.
Thank you.
J
EDIT:
I've attempted to calibrate following these instructions:
http://wiki.ros.org/camera_calibration
I set the --no-service-check option to `default=False` as suggested here: http://answers.ros.org/question/223530/intel-realsense-r200-camera-calibration/
I did this because I was getting this message: `('Waiting for service', '//set_camera_info', '...') Service not found` when running `rosrun camera_calibration cameracalibrator.py --size 5x8 --square 0.03 image:=/camera/image camera:=/` (I'm currently using a 5x8 30mm calibration grid).
Notes:
No window showing a video feed/ capture pops up at any point.
`rosmake camera_calibration` ran without error
This is on Ubuntu 12.04 ROS Hydro using an Asus Xtion
Still receiving the above Unsupported color video mode error
EDIT2:
I've upgraded to Ubuntu 14.04 and Indigo. I receive this error:
`[ERROR] [1455146167.444460553]: Unsupported color video mode - Resolution: 640x480@30Hz Format: RGB888`
Here's the output from `lsusb -v`:
Bus 001 Device 004: ID 1d27:0600 ASUS
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x1d27 ASUS
idProduct 0x0600
bcdDevice 0.01
iManufacturer 2 PrimeSense
iProduct 1 PrimeSense Device
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 69
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0x80
(Bus Powered)
MaxPower 500mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 3
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 0
bInterfaceProtocol 0
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 1
Transfer Type Isochronous
Synch Type None
Usage Type Data
wMaxPacketSize 0x0b70 2x 880 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x82 EP 2 IN
bmAttributes 1
Transfer Type Isochronous
Synch Type None
Usage Type Data
wMaxPacketSize 0x135c 3x 860 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x83 EP 3 IN
bmAttributes 1
Transfer Type Isochronous
Synch Type None
Usage Type Data
wMaxPacketSize 0x1040 3x 64 bytes
bInterval 1
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 1
bNumEndpoints 3
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 0
bInterfaceProtocol 0
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x82 EP 2 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x83 EP 3 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 1
Device Qualifier (for other device speed):
bLength 10
bDescriptorType 6
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
bNumConfigurations 1
Device Status: 0x0000
(Bus Powered)
So, it seems to be detecting the Asus Xtion
EDIT3:
Ok. This is interesting. I'm not sure that the problem is actually with the Asus now...
I was playing around with the turtlebot simulator in rviz and Gazebo. When trying to view the "video" from the *simulated* Asus Xtion on the turtlebot using this command `roslaunch turtlebot_bringup 3dsensor.launch`... I got this error:
process[depthimage_to_laserscan-15]: started with pid [27430]
[ INFO] [1455209239.662860866]: Initializing nodelet with 4 worker threads.
[ INFO] [1455209240.960930597, 537.700000000]: Device "1d27/0600@1/4" found.
Warning: USB events thread - failed to set priority. This might cause loss of data...
[ERROR] [1455209241.185416940, 537.930000000]: Unsupported color video mode - Resolution: 640x480@30Hz Format: RGB888
[ INFO] [1455209241.781195783, 538.520000000]: Starting depth stream.
[ INFO] [1455209242.204071388, 538.950000000]: using default calibration URL
[ INFO] [1455209242.204147528, 538.950000000]: camera calibration URL: file:///home/developer-admin/.ros/camera_info/rgb_PS1080_PrimeSense.yaml
[ INFO] [1455209242.204234412, 538.950000000]: Unable to open camera calibration file [/home/developer-admin/.ros/camera_info/rgb_PS1080_PrimeSense.yaml]
[ WARN] [1455209242.204268646, 538.950000000]: Camera calibration file /home/developer-admin/.ros/camera_info/rgb_PS1080_PrimeSense.yaml not found.
[depthimage_to_laserscan-15] process has finished cleanly
log file: /home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/depthimage_to_laserscan-15*.log
[FATAL] [1455209248.308692029, 545.050000000]: Service call failed!
[FATAL] [1455209248.308739353, 545.050000000]: Service call failed!
[camera/camera_nodelet_manager-1] process has died [pid 27323, exit code -11, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/camera-camera_nodelet_manager-1.log].
log file: /home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/camera-camera_nodelet_manager-1*.log
[camera/depth_registered_metric-12] process has died [pid 27401, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth_registered/image_raw image:=depth_registered/image __name:=depth_registered_metric __log:=/home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/camera-depth_registered_metric-12.log].
log file: /home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/camera-depth_registered_metric-12*.log
[camera/disparity_depth-13] process has died [pid 27412, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth/image_rect_raw right:=projector left/disparity:=depth/disparity __name:=disparity_depth __log:=/home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/camera-disparity_depth-13.log].
log file: /home/developer-admin/.ros/log/2792ea80-d04b-11e5-a19a-d850e6c4e2f6/camera-disparity_depth-13*.log
I am however able to see the PointCloud data and DepthCloud data in rviz on the simulated device.
EDIT 4:
Well, this is disappointing. It looks like my specific model of Asus Xtion doesn't play nice with ROS.
I've tried a different Asus Xtion (the Pro Live... I think it's the Q6670 Second Edition) and it works out of the box. The only difference I can tell is that this one is specifically marked on the box as "RGB and Depth Sensor" and has three camera-looking things on the front (the RGB and depth sensors), as opposed to the two that the Asus Xtion Pro has.
↧
Impact of the square size on camera calibration
Hi all,
I am in the 3rd year of my PhD and I finally really need cameras for some tasks.
First of all I need to calibrate my camera, i.e. find the intrinsic parameters. I don't like an "OK, it is working, let's move on" approach, and therefore I have a question.
I am using the following tutorial to calibrate my camera:
http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration
And I was wondering what happens when I calibrate a camera and I give the wrong size of the square, after `--square` in the command:
$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/image_raw camera:=/camera
I really would like some details about it, with formulas and so on, because I am not able to understand. I did 3 calibrations and generated 3 calibration files. To me they look almost the same, but only 2 of them were done with the right square size. For this reason a question came to my mind: why do I need to specify that number?
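One way to see the role of the square size (my own sketch, not from the tutorial): a planar calibration cannot tell a big board far away from a small board close up, so scaling the object points by some factor is exactly absorbed by scaling the estimated translation by the same factor. The intrinsics `K` are unaffected; what ends up wrong are the board distances, and anything metric derived from them.

```python
import numpy as np

# A wrong square size rescales the calibration target; the identical image
# observations are then explained by a translation scaled by the same factor,
# leaving the camera matrix K untouched (scale ambiguity of a planar target).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # toy pose: board facing the camera
t = np.array([0.1, -0.05, 1.0])    # 1 m in front

def project(K, R, t, X):
    x = K @ (R @ X.T + t[:, None])
    return (x[:2] / x[2]).T

# 8x6 inner corners: true square size 0.108 m vs. a wrongly doubled one.
grid = np.array([[i, j, 0.0] for j in range(6) for i in range(8)])
X_true = grid * 0.108
X_wrong = grid * 0.216

pix_true = project(K, R, t, X_true)
pix_wrong = project(K, R, 2.0 * t, X_wrong)   # same K, doubled translation
print(np.allclose(pix_true, pix_wrong))       # True
```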
Thanks in advance for all your explanations!
↧
stereo_image_proc depth values opposite of what they should be
Hi,
I have used openCV to calibrate my stereo camera, rectify it and then also calculated the projection matrices. I am feeding them in using the `camera_info_manager`.
I'm using `stereo_image_proc` to generate a pointcloud. There are no errors in the node. While the cloud looks fine in all other aspects, the depth value (z value) is completely reversed i.e. closer objects appear to be far away in the point cloud and far away objects appear to be nearby. What could be the reason for this kind of behavior?
Thank you.
edit: I believe the x and y values are also opposite of what they should be based on the frame defined in `stereo_image_proc`
↧
Improved checkerboard detector for camera calibration
Hello!
I was wondering if there is a better checkerboard detector to substitute the one that is being used in the camera_calibrator.py node from the ROS image_pipeline.
This is because the calibration is absolutely impossible to carry out at distances larger than 5 meters. The node just does not detect the checkerboard anymore, even though I am using a checkerboard with a 108 mm square size, which is quite big.
With Matlab the checkerboard is detected for much larger distances, so I am sure that having a better algorithm to detect the corners of the checkerboard would help.
What would be the best strategy to detect the checkerboard also for large distances? Is this being developed already by someone?
Thanks in advance for your help!
--------------------------- EDIT ---------------------------------
I went through the calibration procedure in the image_pipeline
https://github.com/ros-perception/image_pipeline/blob/indigo/camera_calibration/src/camera_calibration/calibrator.py
There is a hard-coded threshold in the is_good_sample method that might be rejecting too many good checkerboard detections. That is one thing that might be improved or made configurable.
Another problem is the downsample-and-detect approach, in which the images are scaled down to VGA resolution and the detections are then upsampled to the original size. I think that really makes it impossible to get good detections for distant objects, and it should be avoided for high-quality camera calibration.
Has anyone tried to get rid of the downsample-and-detect method? Is the checkerboard detected better without it? Does it become too slow?
↧
camera calibration quality
Hi, all!
I am using [camera calibration node](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration) for monocular camera. Is there any criterion that says how well the camera was calibrated?
And the second question is: should I pass the rectified image to the algorithm I run after calibration, or would that be redundant?
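The usual criterion is the reprojection error that cameracalibrator.py reports; a few tenths of a pixel is typically considered good. A sketch of the metric itself (my own illustration with made-up corner coordinates):

```python
import math

# RMS reprojection error: project the calibration points with the estimated
# model and measure how far they land from the detected corners, in pixels.
def reprojection_rms(detected, reprojected):
    assert len(detected) == len(reprojected)
    sq = [(u - ur) ** 2 + (v - vr) ** 2
          for (u, v), (ur, vr) in zip(detected, reprojected)]
    return math.sqrt(sum(sq) / len(sq))

detected    = [(100.0, 100.0), (200.0, 100.0), (150.0, 180.0)]
reprojected = [(100.3, 100.4), (199.6, 100.0), (150.0, 180.3)]
print(round(reprojection_rms(detected, reprojected), 3))   # 0.408
```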
↧
cameracalibrator.py GUI opens but is not drawn
I'm trying to calibrate a single camera using cameracalibrator, as explained in http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration
Despite being able to see the camera using `rosrun image_view image_view image:=/camera/image_raw`, I cannot start cameracalibrator.
After issuing `rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.05 images:=/camera/image_raw camera:=/camera`, I get the following output:
('Waiting for service', '/camera/set_camera_info', '...')
OK
init done
The window appears, but its contents are not drawn: there is no video feed and no slider controls when I use that command. How do I solve that? This is using Kinetic on Ubuntu 16.04.
If it helps:
$ rostopic list
/camera/camera_info
/camera/image_raw
/camera/image_raw/compressed
/camera/image_raw/compressed/parameter_descriptions
/camera/image_raw/compressed/parameter_updates
/camera/image_raw/compressedDepth
/camera/image_raw/compressedDepth/parameter_descriptions
/camera/image_raw/compressedDepth/parameter_updates
/camera/image_raw/theora
/camera/image_raw/theora/parameter_descriptions
/camera/image_raw/theora/parameter_updates
/camera/mycam/parameter_descriptions
/camera/mycam/parameter_updates
/rosout
/rosout_agg
↧
image_proc vs image_raw
Hi, all!
I am quite confused playing with the image_view node and my mono camera. If I pass image_rect it shows a very distorted image, whereas image_raw looks much better than image_rect.
If I calibrated the camera, I will need to use the image_rect topic messages, right? Not image_raw?
Does it mean that my camera is not well enough calibrated?
When I run cameracheck.py it shows that my reprojection RMS error varies from 3 to 18 pixels. Is that bad?
↧
Parameters for camera_pose_calibration
I'm trying to use the camera_pose_calibration package, which seems easy to use. Thank you! Can you help me understand some of the parameters:
- PatternParameters pattern_width: is this the number of dots in a row?
- PatternParameters pattern_height: the number of dots in a column?
- PatternParameters pattern_distance: the distance between circle centers, in meters?
- neighbor_distance: the readme says this is the distance between pixels. In meters, I guess? I think this should be a small number with a RealSense camera, like 0.1 mm.
Finally, is this [OpenCV "asymmetric circle grid"](https://nerian.com/support/resources/patterns/) the type of pattern to use?
↧