Using RealSense with OpenCV
The class introduced here makes the RealSense features needed for VR or AR easy to use. Specifically, it provides:
- capture of the RGB and depth camera images
- the depth map as seen from the RGB camera (and vice versa)
- XYZ vertex coordinates in the coordinate system whose origin is the depth camera (= the RealSense world coordinate system)
- the camera intrinsics and extrinsics (the pose of the RGB camera as seen from the depth camera)
- enabling/disabling auto exposure and auto white balance

Because the RGB camera and the depth camera are spatially offset from each other, the extrinsic parameters are essential for AR. Auto exposure can also get in the way of color-based processing, so it can be switched off as well. All data is returned as OpenCV cv::Mat or std::vector&lt;cv::Point3f&gt;, and the camera intrinsics are stored in a form OpenCV can use directly.
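For example, with the calibration data in hand, a point in the depth (world) coordinate system can be projected into the RGB image using standard OpenCV routines. The following is a minimal sketch (rscv is the wrapper instance from the sample below), assuming that transform maps RGB-camera coordinates into the world system, so its inverse goes from world to RGB camera; as the TODO in the source notes, the direction and the row/column order of the rotation should be verified on an actual device. The 500 mm test point is made up for illustration.

```cpp
// Sketch (assumption: transform = RGB camera -> world, so invert it).
cv::Mat T;
cv::invert(rscv.transform, T);                  // world -> RGB camera
cv::Mat R = T(cv::Rect(0, 0, 3, 3)).clone();    // rotation block
cv::Mat tvec = T(cv::Rect(3, 0, 1, 3)).clone(); // translation column
cv::Mat rvec;
cv::Rodrigues(R, rvec);                         // rotation matrix -> rotation vector

std::vector<cv::Point3f> world = { cv::Point3f(0.f, 0.f, 500.f) }; // hypothetical point 500 mm ahead
std::vector<cv::Point2f> pixel;
cv::projectPoints(world, rvec, tvec, rscv.cameraMatrix, rscv.distCoeffs, pixel);
```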
Sample
It's used like this:
```cpp
#include "RealSenseCVWrapper.h"

int main(void)
{
    RealSenseCVWrapper rscv(640, 480);
    rscv.getCalibrationStatus();
    std::cout << "calibration data: "
        << "\n camera matrix = \n" << rscv.cameraMatrix
        << "\n camera distortion params = \n" << rscv.distCoeffs
        << "\n transform matrix of RGB cam = \n" << rscv.transform << std::endl;
    while (1) {
        cv::Mat src;
        // acquire a new RealSense frame
        rscv.queryFrames();
        //rscv.getColorBuffer(src);
        // get the color image mapped onto the depth image
        rscv.getMappedColorBuffer(src);
        // estimate gradients of the vertex coordinates and render them shaded
        rscv.getXYZBuffer();
        cv::Mat XYZImage = cv::Mat::zeros(rscv.bufferSize, CV_32FC1); // borders stay zero
        for (int i = 1; i < rscv.bufferSize.height - 1; i++) {
            for (int j = 1; j < rscv.bufferSize.width - 1; j++) {
                // gradient vectors toward the four diagonal neighbors
                float intensity = 0.0f;
                cv::Point3f v0, v1, v2, v3, v4;
                v0 = rscv.xyzBuffer[i * rscv.bufferSize.width + j];
                if (v0.z != 0) {
                    v1 = rscv.xyzBuffer[(i - 1) * rscv.bufferSize.width + (j - 1)] - v0;
                    v2 = rscv.xyzBuffer[(i - 1) * rscv.bufferSize.width + (j + 1)] - v0;
                    v3 = rscv.xyzBuffer[(i + 1) * rscv.bufferSize.width + (j + 1)] - v0;
                    v4 = rscv.xyzBuffer[(i + 1) * rscv.bufferSize.width + (j - 1)] - v0;
                    // compute four normals from adjacent gradient pairs and average them
                    // v1 v2
                    //   v0
                    // v4 v3
                    // normals point toward the viewer
                    cv::Point3f n0, n1, n2, n3, n4;
                    n1 = v2.cross(v1);
                    n2 = v3.cross(v2);
                    n3 = v4.cross(v3);
                    n4 = v1.cross(v4);
                    n0 = (n1 + n2 + n3 + n4) / cv::norm(n1 + n2 + n3 + n4);
                    cv::Point3f light(3.f, 3.f, 0.f);
                    intensity = n0.ddot(light / cv::norm(light));
                }
                else {
                    intensity = 0.f;
                }
                XYZImage.at<float>(i, j) = intensity;
            }
        }
        // release the RealSense frame
        rscv.releaseFrames();
        cv::imshow("ts", src);
        cv::imshow("xyz", XYZImage);
        int c = cv::waitKey(10);
        if (c == 27) break;
        if (c == 'a') {
            rscv.useAutoAdjust(false);
            rscv.setExposure(-8);
            rscv.setWhiteBalance(6000);
        }
        if (c == 'A') {
            rscv.useAutoAdjust(true);
        }
    }
    return 0;
}
```
RealSenseCVWrapper
The class must be initialized with arguments. My C++ knowledge is limited, so this may not be the best design, but it works for now, so let's leave it at that. Also, every call to RealSenseCVWrapper::getXxxxBuffer() must be bracketed by queryFrames() and releaseFrames(), once per frame. The destructor takes care of cleanup, so nothing special is needed at shutdown.
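In skeleton form, the per-frame pattern looks like this (a minimal sketch; error handling and the actual processing are omitted):

```cpp
RealSenseCVWrapper rscv(640, 480);
while (true) {
    if (!rscv.queryFrames()) continue; // acquire the next frame
    cv::Mat color;
    rscv.getColorBuffer(color);        // any getXxxxBuffer() calls go in between
    rscv.releaseFrames();              // release the frame when done
    // ... process `color` here ...
}
```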
The results look like the images below. The color image as seen from the depth camera is available with a single call. To visualize the XYZ coordinate values, normals are estimated and the surface is shaded with a simple diffuse reflection model.
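For reference, the shading in the sample is plain Lambertian diffuse: with the four per-pixel normal candidates $\mathbf{n}_1,\dots,\mathbf{n}_4$ obtained from the cross products of adjacent gradient vectors, the intensity is

$$ I = \mathbf{n}\cdot\mathbf{l}, \qquad \mathbf{n} = \frac{\mathbf{n}_1+\mathbf{n}_2+\mathbf{n}_3+\mathbf{n}_4}{\lVert\,\mathbf{n}_1+\mathbf{n}_2+\mathbf{n}_3+\mathbf{n}_4\,\rVert}, $$

where $\mathbf{l}$ is the unit light direction. Negative values are not clamped, so surfaces facing away from the light simply render dark.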
Source code
Here is the source code for the class. Use it at your own risk.
RealSenseCVWrapper.h
```cpp
#pragma once
#include <opencv2\opencv.hpp>
#include <RealSense\SenseManager.h>
#include <RealSense\SampleReader.h>
#pragma comment(lib, "libpxc.lib")

class RealSenseCVWrapper
{
public:
    // constructor
    // @param
    //   width : width of picture size
    //   height: height of picture size
    RealSenseCVWrapper();
    RealSenseCVWrapper(int width, int height);
    ~RealSenseCVWrapper();

    void safeRelease();

    // query RealSense frames
    // you should execute queryFrames() before getting buffers
    // and releaseFrames() afterwards, every frame
    // @return
    //   true = frame is successfully queried
    bool queryFrames();
    void releaseFrames();

    // get camera buffers
    void getColorBuffer();
    void getDepthBuffer();
    void getMappedDepthBuffer();
    void getMappedColorBuffer();
    void getXYZBuffer();
    void getColorBuffer(cv::Mat &color);
    void getDepthBuffer(cv::Mat &depth);
    void getMappedDepthBuffer(cv::Mat &mappedDepth);
    void getMappedColorBuffer(cv::Mat &mappedColor);
    void getXYZBuffer(std::vector<cv::Point3f> &xyz);

    // adjust camera settings
    // @param
    //   use_aa: flag to use auto exposure and auto white balance
    void useAutoAdjust(bool use_aa);

    // set RGB camera exposure
    // @param
    //   value: exposure time = 2 ^ value [s]
    //   for 30 fps the exposure must stay under one frame time:
    //   33 ms ~ 2 ^ -5 [s], i.e. value <= -5
    void setExposure(int32_t value);

    // set RGB camera white balance
    // @param
    //   value: white point = value [K]
    //   (ex: D65 standard light source = about 6500 [K])
    void setWhiteBalance(int32_t value);

    // get calibration status
    // - camera matrix
    // - distortion parameters
    // - transform matrix (RGB camera origin to depth camera origin)
    void getCalibrationStatus();

    // OpenCV image buffers
    cv::Size bufferSize;                // buffer size
    cv::Mat colorBuffer;                // RGB camera buffer (BGR, 8UC3)
    cv::Mat depthBuffer;                // depth camera buffer (Gray, 32FC1)
    cv::Mat colorBufferMapped;          // RGB camera buffer mapped to depth camera (BGR, 8UC3)
    cv::Mat depthBufferMapped;          // depth camera buffer mapped to RGB camera (Gray, 32FC1)
    std::vector<cv::Point3f> xyzBuffer; // XYZ point cloud buffer from depth camera (XYZ)

    // RGB camera calibration data written in the OpenCV camera model
    cv::Mat cameraMatrix;               // includes fx, fy, cx, cy
    cv::Mat distCoeffs;                 // k1, k2, p1, p2, k3
    cv::Mat transform;                  // 4x4 coordinate transform from RGB camera origin to the world (= depth) origin

protected:
    Intel::RealSense::SenseManager *rsm;
    Intel::RealSense::Sample *sample;
};
```
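As a quick sanity check on the exposure units: the sample's rscv.setExposure(-8) requests 2^-8 s ≈ 3.9 ms, comfortably inside the 33 ms frame budget at 30 fps, while -5 (2^-5 s ≈ 31 ms) is the longest exposure that still fits.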
RealSenseCVWrapper.cpp
(Updated 1/23) Fixed an issue in getColorBuffer()
```cpp
#include "RealSenseCVWrapper.h"

using namespace Intel::RealSense;
using namespace cv;
using namespace std;

RealSenseCVWrapper::RealSenseCVWrapper()
{
}

RealSenseCVWrapper::RealSenseCVWrapper(int w, int h)
{
    rsm = SenseManager::CreateInstance();
    if (!rsm) {
        cout << "Unable to create SenseManager" << endl;
    }
    // RealSense settings
    rsm->EnableStream(Capture::STREAM_TYPE_COLOR, w, h);
    rsm->EnableStream(Capture::STREAM_TYPE_DEPTH, w, h);
    if (rsm->Init() < Status::STATUS_NO_ERROR) {
        cout << "Unable to initialize SenseManager" << endl;
    }
    bufferSize = Size(w, h);
}

RealSenseCVWrapper::~RealSenseCVWrapper()
{
    safeRelease();
}

void RealSenseCVWrapper::safeRelease()
{
    if (rsm) {
        //rsm->Release();
        rsm->Close();
    }
}

bool RealSenseCVWrapper::queryFrames()
{
    if (rsm->AcquireFrame(true) < Status::STATUS_NO_ERROR) {
        return false;
    }
    sample = rsm->QuerySample();
    return true;
}

void RealSenseCVWrapper::releaseFrames()
{
    rsm->ReleaseFrame();
}

void RealSenseCVWrapper::getColorBuffer()
{
    // acquire access to image data
    Image *img_c = sample->color;
    Image::ImageData data_c;
    img_c->AcquireAccess(Image::ACCESS_READ_WRITE, Image::PIXEL_FORMAT_BGR, &data_c);
    // create OpenCV Mat from Image::ImageInfo
    Image::ImageInfo cinfo = img_c->QueryInfo();
    colorBuffer = Mat(cinfo.height, cinfo.width, CV_8UC3);
    // copy data
    colorBuffer.data = data_c.planes[0];
    colorBuffer = colorBuffer.clone();
    // release access
    img_c->ReleaseAccess(&data_c);
    //img_c->Release();
}

void RealSenseCVWrapper::getDepthBuffer()
{
    Image *img_d = sample->depth;
    Image::ImageData data_d;
    img_d->AcquireAccess(Image::ACCESS_READ_WRITE, Image::PIXEL_FORMAT_DEPTH_F32, &data_d);
    Image::ImageInfo dinfo = img_d->QueryInfo();
    depthBuffer = Mat(dinfo.height, dinfo.width, CV_32FC1);
    depthBuffer.data = data_d.planes[0];
    depthBuffer = depthBuffer.clone();
    img_d->ReleaseAccess(&data_d);
    //img_d->Release();
}

void RealSenseCVWrapper::getMappedDepthBuffer()
{
    // create projection stream to acquire the mapped depth image
    Projection *projection = rsm->QueryCaptureManager()->QueryDevice()->CreateProjection();
    Image *depth_mapped = projection->CreateDepthImageMappedToColor(sample->depth, sample->color);
    // acquire access to depth data
    Image::ImageData ddata_mapped;
    depth_mapped->AcquireAccess(Image::ACCESS_READ, Image::PIXEL_FORMAT_DEPTH_F32, &ddata_mapped);
    // copy to cv::Mat
    Image::ImageInfo dinfo = depth_mapped->QueryInfo();
    depthBufferMapped = Mat(dinfo.height, dinfo.width, CV_32FC1);
    depthBufferMapped.data = ddata_mapped.planes[0];
    depthBufferMapped = depthBufferMapped.clone();
    // release access
    depth_mapped->ReleaseAccess(&ddata_mapped);
    depth_mapped->Release();
    projection->Release();
}

void RealSenseCVWrapper::getMappedColorBuffer()
{
    // create projection stream to acquire the mapped color image
    Projection *projection = rsm->QueryCaptureManager()->QueryDevice()->CreateProjection();
    Image *color_mapped = projection->CreateColorImageMappedToDepth(sample->depth, sample->color);
    // acquire access to color data
    Image::ImageData cdata_mapped;
    color_mapped->AcquireAccess(Image::ACCESS_READ, Image::PIXEL_FORMAT_BGR, &cdata_mapped);
    // copy to cv::Mat
    Image::ImageInfo cinfo = color_mapped->QueryInfo();
    colorBufferMapped = Mat(cinfo.height, cinfo.width, CV_8UC3);
    colorBufferMapped.data = cdata_mapped.planes[0];
    colorBufferMapped = colorBufferMapped.clone();
    // release access
    color_mapped->ReleaseAccess(&cdata_mapped);
    color_mapped->Release();
    projection->Release();
}

void RealSenseCVWrapper::getXYZBuffer()
{
    Projection *projection = rsm->QueryCaptureManager()->QueryDevice()->CreateProjection();
    std::vector<Point3DF32> vertices;
    vertices.resize(bufferSize.width * bufferSize.height);
    projection->QueryVertices(sample->depth, &vertices[0]);
    xyzBuffer.clear();
    for (int i = 0; i < bufferSize.width * bufferSize.height; i++) {
        Point3f p;
        p.x = vertices[i].x;
        p.y = vertices[i].y;
        p.z = vertices[i].z;
        xyzBuffer.push_back(p);
    }
    projection->Release();
}

void RealSenseCVWrapper::getColorBuffer(cv::Mat & color)
{
    getColorBuffer();
    color = colorBuffer.clone();
}

void RealSenseCVWrapper::getDepthBuffer(cv::Mat & depth)
{
    getDepthBuffer();
    depth = depthBuffer.clone();
}

void RealSenseCVWrapper::getMappedDepthBuffer(cv::Mat & mappedDepth)
{
    getMappedDepthBuffer();
    mappedDepth = depthBufferMapped.clone();
}

void RealSenseCVWrapper::getMappedColorBuffer(cv::Mat & mappedColor)
{
    getMappedColorBuffer();
    mappedColor = colorBufferMapped.clone();
}

void RealSenseCVWrapper::getXYZBuffer(std::vector<cv::Point3f> & xyz)
{
    getXYZBuffer();
    xyz = xyzBuffer;
}

void RealSenseCVWrapper::useAutoAdjust(bool use_aa)
{
    Device *device = rsm->QueryCaptureManager()->QueryDevice();
    // get current values if you use manual mode
    device->SetColorAutoExposure(use_aa);
    device->SetColorAutoWhiteBalance(use_aa);
}

void RealSenseCVWrapper::setExposure(int32_t value)
{
    rsm->QueryCaptureManager()->QueryDevice()->SetColorExposure(value);
}

void RealSenseCVWrapper::setWhiteBalance(int32_t value)
{
    rsm->QueryCaptureManager()->QueryDevice()->SetColorWhiteBalance(value);
}

void RealSenseCVWrapper::getCalibrationStatus()
{
    // acquire calibration data of the RGB camera stream
    Projection *projection = rsm->QueryCaptureManager()->QueryDevice()->CreateProjection();
    Calibration::StreamCalibration calib;
    Calibration::StreamTransform trans;
    projection->QueryCalibration()
        ->QueryStreamProjectionParameters(StreamType::STREAM_TYPE_COLOR, &calib, &trans);
    // copy RGB camera calibration data to OpenCV
    cameraMatrix = Mat::eye(3, 3, CV_32FC1);
    cameraMatrix.at<float>(0, 0) = calib.focalLength.x;
    cameraMatrix.at<float>(1, 1) = calib.focalLength.y;
    cameraMatrix.at<float>(0, 2) = calib.principalPoint.x;
    cameraMatrix.at<float>(1, 2) = calib.principalPoint.y;
    distCoeffs = Mat::zeros(5, 1, CV_32FC1);
    distCoeffs.at<float>(0) = calib.radialDistortion[0];
    distCoeffs.at<float>(1) = calib.radialDistortion[1];
    distCoeffs.at<float>(2) = calib.tangentialDistortion[0];
    distCoeffs.at<float>(3) = calib.tangentialDistortion[1];
    distCoeffs.at<float>(4) = calib.radialDistortion[2];
    transform = Mat::eye(4, 4, CV_32FC1);
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            transform.at<float>(i, j) = trans.rotation[i][j]; // TODO: verify the row/column order of rotation
        }
        transform.at<float>(i, 3) = trans.translation[i];
    }
    projection->Release();
}
```
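Since cameraMatrix and distCoeffs follow the OpenCV camera model, they can be passed straight to OpenCV's calibration routines. As a minimal sketch (assuming the same 640x480 setup as above), the RGB image can be undistorted with cv::undistort:

```cpp
// Sketch: undistort one color frame using the parameters
// filled in by getCalibrationStatus().
RealSenseCVWrapper rscv(640, 480);
rscv.getCalibrationStatus();
rscv.queryFrames();
cv::Mat color, rectified;
rscv.getColorBuffer(color);
rscv.releaseFrames();
cv::undistort(color, rectified, rscv.cameraMatrix, rscv.distCoeffs);
cv::imshow("undistorted", rectified);
cv::waitKey(0);
```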