• SDK 2.0 Changes

## Software Support Model - Intel® RealSense™ SDK 2.0 Overview

With the release of the D400 series of Intel® RealSense™ devices, the Intel RealSense Group is introducing a number of changes to librealsense support. Software support for Intel® RealSense™ technology will be split into two coexisting release lineups: librealsense 1.x and librealsense 2+. librealsense 1.x will continue to provide support for the existing RealSense™ devices: F200, R200 and ZR300. librealsense 2+ will support the next generation of RealSense™ devices, starting with the RS300 and RS400.

## API Changes

librealsense2 brings the following improvements and new capabilities (which are incompatible with older RealSense devices):

### Streaming API

librealsense2 provides a more flexible interface for frame acquisition. Instead of a single wait_for_frames loop, the API is based on callbacks and queues:

```cpp
// Configure a queue of size one and start streaming frames into it
rs2::frame_queue queue(1);
dev.start(queue);
// This call will block the current thread until new frames become available
auto frame = queue.wait_for_frame();
auto pixels = frame.get_data(); // pointer to frame data
```

The same API can be used in a slightly different way for low-latency applications:

```cpp
// Configure a direct callback for new frames:
dev.start([](rs2::frame frame)
{
    auto pixels = frame.get_data(); // pointer to frame data
});
// The application will be notified as soon as new frames become available.
```

Note: this approach bypasses the buffering and synchronization that wait_for_frames used to perform. Users who do need to synchronize between different streams can take advantage of the rs2::syncer class:

```cpp
rs2::syncer sync;
dev.start(sync);
// The following call, using the frame timestamp, will block the
// current thread until the next coherent set of frames becomes available
auto frames = sync.wait_for_frames();
for (auto&& frame : frames)
{
    auto pixels = frame.get_data(); // pointer to frame data
}
```

This version of wait_for_frames is thread-safe by design. You can safely pass an rs2::frame object to a background thread. This is done without copying the frame data and without extra dynamic allocations.

### Multi-Streaming Model

librealsense2 eliminates the limitations imposed by previous versions with regard to multi-streaming: multiple applications can use librealsense2 simultaneously, as long as no two users try to stream from the same camera endpoint. In practice, this means that you can:

* Stream multiple cameras within a single process (see the sketch below)
* Stream from camera A in one process and from camera B in another process
* Stream depth from camera A in one process while streaming fisheye / motion from the same camera in another process
* Stream from camera A in one process while issuing controls to camera A from another process
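To illustrate the first point, here is a minimal sketch of streaming every connected camera from a single process. The rs2::context enumeration API and the get_frame_number accessor are assumptions not shown elsewhere in this document; the callback-based dev.start follows the snippets above, and error handling is omitted.

```cpp
#include <librealsense2/rs.hpp> // include path assumed; adjust to your installation
#include <iostream>
#include <vector>

int main()
{
    rs2::context ctx;
    std::vector<rs2::device> devices;

    // Enumerate all connected RealSense devices
    for (auto&& dev : ctx.query_devices())
        devices.push_back(dev);

    // Start an independent callback on each camera; distinct endpoints
    // do not interfere with one another
    for (auto&& dev : devices)
    {
        dev.start([](rs2::frame f)
        {
            std::cout << "Got frame #" << f.get_frame_number() << std::endl;
        });
    }
    // ... application logic ...
}
```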
The following streams of the RS400 act as independent endpoints: Depth, Fisheye, Motion-tracking, Color.

Each endpoint can be exclusively locked using the open/close methods:

```cpp
// Configure depth to run at the first permitted configuration
dev.open(dev.get_stream_profiles().front());
// From this point on, device streaming is exclusively locked.
dev.close(); // Release device ownership
```

Alternatively, users can use the rs2::util::config helper class to configure multiple endpoints at once:

```cpp
rs2::util::config config;
// Declare your preferences
config.enable_all(rs2::preset::best_quality);
// The config object resolves them into concrete capabilities for the supplied camera
auto stream = config.open(dev);
stream.start([](rs2::frame) {});
```

### New Functionality

* librealsense2 will be shipped with a built-in Python wrapper for easier integration.
* New troubleshooting tools are now part of the package, including a tool for hardware log collection.
* librealsense2 is capable of handling device disconnects and the discovery of new devices at runtime.
* Playback & record functionality is available out of the box.

## Transition to CMake

librealsense2 does not provide hand-written Visual Studio, QT-Creator and XCode project files; instead, you can build librealsense with the IDE of your choice using portable CMake scripts.

## Intel® RealSense™ RS400 and the Linux Kernel

The Intel® RealSense™ RS400 series (starting with kernel 4.4.0.59) does not require any kernel patches for streaming. Advanced camera features may still require kernel patches. Currently, getting hardware timestamps depends on a patch that has not yet been up-streamed. Without the patch applied you can still use the camera, but you will receive a system-time timestamp instead of an optical timestamp.
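As a hedged illustration of the last point, an application can detect which clock its timestamps come from by querying the frame's timestamp domain. The rs2_timestamp_domain enum and get_frame_timestamp_domain accessor below are assumed from later librealsense2 releases and may differ in early snapshots; dev is a device as in the snippets above.

```cpp
dev.start([](rs2::frame f)
{
    // Without the kernel patch, frames carry host system time
    // rather than the optical (hardware) timestamp
    if (f.get_frame_timestamp_domain() == RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME)
        std::cout << "Warning: using system-time timestamps" << std::endl;
});
```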
• D400 Series Advanced Mode

## D400 Advanced Mode

### Overview

RS400 is the newest in the series of stereo-based RealSense devices. It is provided in a number of distinct models, all sharing the same depth-from-stereo ASIC. The models differ in optics, global vs. rolling shutter, and projector capabilities. In addition, different use cases may introduce different lighting conditions, aim at different ranges, and so on. As a result, the depth-from-stereo algorithm in the hardware has to be able to adjust to vastly different stereo input images. To achieve this goal, the RS400 ASIC provides an extended set of controls aimed at advanced users, allowing them to fine-tune the depth generation process (and some others, such as color correction).

Advanced mode hardware APIs are designed to be safe, in the sense that the user can't brick the device. You always have the option to exit advanced mode and fall back to the default mode of operation. However, while tinkering with the advanced controls, depth quality and stream frame-rate are not guaranteed. You are making changes at your own risk.

### Long-term vs. Short-term

In the short term, librealsense provides a set of advanced APIs you can use to access advanced mode. Since this includes as many as 100 new parameters, we also provide a way to serialize and load these parameters using a JSON file structure. We encourage the community to explore the range of possibilities and share interesting presets for different use cases. The limitation of this approach is that the existing protocol is cumbersome and not efficient: we can't provide a clear estimate of how long it will take for any set of controls to take effect, and the controls are not standard in the OS in any way.

In the long term, Intel algorithm developers will come up with the best set of recommended presets, as well as the most popular presets from our customers, and these will be hard-coded into the camera firmware. This will provide a fast, reliable way to optimize the device for a specific use case.

### Building Advanced Mode APIs

To build advanced mode APIs in addition to librealsense, you need to configure CMake with the following additional parameter: -DBUILD_RS400_EXTRAS=true. For example:

```
cmake .. -DBUILD_EXAMPLES=true -DBUILD_RS400_EXTRAS=true
```

You will need one of the following compilers (or newer) for JSON support (based on github.com/nlohmann/json):

* GCC 4.9
* Clang 3.4
* Microsoft Visual C++ 2015 / Build Tools 14.0.25123.0

Once the library is built, navigate to the output folder (./build/rs400/examples from the librealsense folder on Linux) and run rs400-advanced-mode-sample. This application allows you to adjust various controls live and load existing presets (you can drag & drop a JSON file into the application).
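To make the preset idea concrete, here is a minimal sketch of serializing a depth-control parameter to a JSON file and restoring it, using the nlohmann/json library the build depends on. The field path and file layout are illustrative only, not the official preset schema; STDepthControlGroup comes from the advanced mode header discussed below.

```cpp
#include <fstream>
#include <string>
#include "json.hpp" // single-header library from github.com/nlohmann/json
#include <rs400_advanced_mode/rs400_advanced_mode.h> // for STDepthControlGroup

using json = nlohmann::json;

// Save a (hypothetical) subset of depth-control parameters to disk
void save_preset(const STDepthControlGroup& dc, const std::string& path)
{
    json j;
    j["depth-control"]["deepSeaMedianThreshold"] = dc.deepSeaMedianThreshold;
    std::ofstream(path) << j.dump(4); // pretty-print with 4-space indent
}

// Load them back, keeping current values for anything missing from the file
void load_preset(STDepthControlGroup& dc, const std::string& path)
{
    json j;
    std::ifstream(path) >> j;
    dc.deepSeaMedianThreshold =
        j["depth-control"].value("deepSeaMedianThreshold", dc.deepSeaMedianThreshold);
}
```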
### Programming Interface

At the moment, RS400 advanced mode produces a single library, rs400-advanced-mode (.so / .dll), dependent on librealsense. This library provides C wrappers for the various controls:

```cpp
#include <rs400_advanced_mode/rs400_advanced_mode.h>

// Create one of the low-level control groups:
STDepthControlGroup depth_control;

printf("Reading deepSeaMedianThreshold from Depth Control...\n");
// Query current values from the device:
int result = get_depth_control(dev, &depth_control);
if (result)
{
    printf("Advanced mode get failed!\n");
    return EXIT_FAILURE;
}
printf("deepSeaMedianThreshold = %d\n", depth_control.deepSeaMedianThreshold);

printf("Writing Depth Control back to the device...\n");
// Write new values to the device:
result = set_depth_control(dev, &depth_control);
if (result)
{
    printf("Advanced mode set failed!\n");
    return EXIT_FAILURE;
}
```

(see ./rs400/examples/c-sample.c)

In addition, you can use advanced mode functionality in your C++ application without linking against any additional dependencies, by just including rs400_advanced_mode/rs400_advanced_mode.hpp (under /rs400/include):

```cpp
#include <rs400_advanced_mode/rs400_advanced_mode.hpp>

// Define a lambda to tie the advanced mode to an existing realsense device (dev)
auto send_receive = [&dev](const std::vector<uint8_t>& input)
{
    return dev->debug().send_and_receive_raw_data(input);
};

// Create advanced mode abstraction on top of that
rs400::advanced_mode advanced(send_receive);

// Create one of the low-level control groups
rs400::STDepthControlGroup depth_control;
advanced.get(&depth_control); // Query current values
std::cout << "deepSeaMedianThreshold: " << depth_control.deepSeaMedianThreshold << std::endl;
```

It is recommended to set advanced mode controls when not streaming.
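Putting the pieces together, a typical tuning flow is get-modify-set, performed while the device is not streaming. This sketch reuses only the get_depth_control / set_depth_control wrappers shown above; the threshold value is arbitrary, chosen for illustration.

```cpp
// A minimal get-modify-set round trip (device assumed not streaming)
STDepthControlGroup depth_control;
if (get_depth_control(dev, &depth_control))  // read current values
    return EXIT_FAILURE;
depth_control.deepSeaMedianThreshold = 500;  // arbitrary example value
if (set_depth_control(dev, &depth_control))  // write modified values back
    return EXIT_FAILURE;
```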
• Frame Lifecycle

## Frame Management

librealsense2 provides a flexible model for frame management and synchronization. This document covers frame memory management, passing frames between threads, and synchronization.

### API Overview

The core C++ abstractions when dealing with frames are the rs2::frame class and the rs2::device::start method. All other management and synchronization primitives can be derived from these two APIs.

```cpp
/**
 * Start passing frames into a user-provided callback
 * \param[in] callback  Stream callback, can be any callable object accepting rs2::frame
 */
template<class T>
void start(T callback) const;
```

Once you call start, the library will start dispatching new frames from the selected device into the callback you provided. The callback is invoked from the same thread handling the low-level IO, ensuring minimal latency. Any object implementing void operator()(rs2::frame) can be used as a callback. In particular, you can pass an anonymous function (a lambda with capture) as the frame callback:

```cpp
dev.start([](rs2::frame f)
{
    std::cout << "This line will be printed every frame!" << std::endl;
});
```

As a side note, rs2::device::stop will block until all pending callbacks return. This way, within the callback scope you can be sure the device object is available.

### Frame Memory Management

rs2::frame is a smart reference to the underlying frame: as long as you hold ownership of the rs2::frame, the underlying memory is exclusively yours and will not be modified or freed.

* If no processing was necessary on the frame, rs2::frame::get_data will provide a direct pointer to the buffer provided by the underlying driver stack. No extra memory copies are performed in this case.
* If some processing was required (for example, when you configure RS2_FORMAT_RGB8, librealsense will likely convert from the YUY format internally), librealsense will store the processing output in an internal buffer, and rs2::frame::get_data will point to it.

You can extend the lifetime of the rs2::frame object by moving it out of the callback into some global, thread-safe data structure (see below). Moving an rs2::frame does not involve a mem-copy of its content. Except for some initial stabilization period, librealsense ensures no heap allocations are made when using frame callbacks. (This also applies to rs2::frame_queue, but not to the rs2::syncer primitive.)

If you are not releasing rs2::frame objects within 1000 / fps milliseconds, you will likely encounter frame drops. These events are visible in the log if you decrease the severity to DEBUG level.

### Frames and Threads

Callbacks are invoked from an internal thread to minimize latency. If you have a lot of processing to do, or simply want to handle the frame in your main event loop, librealsense provides the rs2::frame_queue primitive to move frames from one thread to another in a thread-safe fashion:

```cpp
rs2::frame_queue q;
dev.start([&q](rs2::frame f)
{
    q.enqueue(std::move(f)); // enqueue any new frames into q
});
while (true)
{
    rs2::frame f = q.wait_for_frame(); // wait until a new frame is available and dequeue it
    // handle frames in the main event loop
}
```

Since rs2::frame_queue implements operator(), you can also pass the queue directly to start:

```cpp
rs2::frame_queue q;
dev.start(q);
```

You could also have a separate queue for each stream type:

```cpp
rs2::frame_queue depth_q;
dev.start(RS2_STREAM_DEPTH, depth_q);
rs2::frame_queue ir_q;
dev.start(RS2_STREAM_INFRARED, ir_q);
```

This is particularly handy if you want to set up a different processing pipeline for each stream type.
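As an illustration of extending frame lifetime beyond the callback (mentioned above), here is a minimal sketch that keeps the most recent frames alive in a mutex-protected deque. The container choice and the window size of 10 are arbitrary choices for the example; dev is a device as in the snippets above.

```cpp
#include <deque>
#include <mutex>

std::mutex frames_mutex;
std::deque<rs2::frame> last_frames; // owning references keep the buffers alive

dev.start([&](rs2::frame f)
{
    std::lock_guard<std::mutex> lock(frames_mutex);
    last_frames.push_back(std::move(f)); // move, no mem-copy of pixel data
    if (last_frames.size() > 10)         // keep only the 10 most recent frames
        last_frames.pop_front();         // dropping the reference releases the buffer
});
```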
### Frame-Drops vs. Latency

There are two common types of applications of the streaming API:

* Those that need the most relevant data as soon as possible (low latency)
* Those that want all the data, but don't mind waiting for it (few frame drops)

librealsense provides some degree of control over this trade-off through the RS2_OPTION_FRAMES_QUEUE_SIZE option. If you increase this number, your application will consume more memory and some frames might wait in line longer, but frame drops become less likely. On the flip side, if you decrease this number you will get frames faster, but if a new frame arrives while you are busy, it will get dropped.

### Frame Syncer

Often the input to an image-processing application is not simply a frame, but rather a coherent set of frames, preferably taken at the same time. librealsense provides the rs2::syncer primitive to help with this problem:

```cpp
auto sync = dev.create_syncer(); // synchronization algorithm can be device specific
dev.start(sync);
while (true)
{
    auto frameset = sync.wait_for_frames(); // wait for a coherent set of frames
    for (auto&& frame : frameset)
    {
        // handle frame
    }
}
```

In general, there is no guarantee on the quality of the temporal synchronization. If hardware timestamps are available, librealsense will take advantage of them. If the device supports hardware sync, librealsense will try to take advantage of it if it is enabled, but will not implicitly enable it. You can also use a single rs2::syncer to synchronize between devices, assuming doing so makes sense.
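As a sketch of the multi-device case just mentioned: because the syncer is a callable frame consumer like any other, the same instance can be handed to several devices. The dev_a / dev_b names are hypothetical, and whether the resulting sets are meaningful depends on hardware sync and timestamp quality, as noted above.

```cpp
// Feed two devices into one syncer (a sketch; assumes comparable timestamps)
auto sync = dev_a.create_syncer();
dev_a.start(sync);
dev_b.start(sync); // the same syncer now receives frames from both cameras

while (true)
{
    // Coherent sets may now mix frames from both devices
    for (auto&& frame : sync.wait_for_frames())
    {
        // handle frame
    }
}
```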
• Frame Metadata Explained

## Frame Metadata

Frame metadata is a set of parameters (or attributes) that provide a snapshot of the sensor configuration and/or system state present during the frame's generation. The attributes are recalculated and updated on a per-frame basis. librealsense2 supports querying a set of predefined attributes as part of the frame data available to the end user. This explanatory document covers:

* The design and implementation of frame metadata in librealsense2
* Metadata support by Intel® RealSense™ devices

### Software Design and Implementation

The library's approach to metadata is that any piece of data that provides unique frame-related info is a potential attribute. Therefore, librealsense2 does not limit itself to hardware-originated data, but establishes an infrastructure to capture and encapsulate software-originated attributes as well. The low-level design and implementation follow two guidelines:

* All the attributes shall be prepared, packed and distributed as an integral part of the rs2_frame object.
* A metadata handler class is the basic execution unit responsible for parsing and providing a single attribute.

### Metadata Registration

When a new device is recognized by librealsense2, the internal resources for its management, including UVC endpoint classes, are allocated. The UVC endpoints perform registration for the specific metadata attributes. In the process, a dedicated metadata parser class is assigned to handle the actual job of de-serializing and validating an attribute. Note that each metadata attribute requires an explicit parser, and querying an unregistered attribute will result in an appropriate exception raised by the library. With the metadata handlers in place, the inbound frames for the specific endpoint can be queried for the requested attributes. The metadata registration flow is summarized in the following diagram*.

*Note: the diagrams present the high-level design; the actual implementation may differ.

### Metadata Acquisition

When a new frame is received by the host, the librealsense2 backend is responsible for processing and attaching the metadata attributes. For hardware-generated attributes, the backend checks whether the metadata payload is valid. If it is, the metadata payload is stored along with the pixel data in the rs2_frame object. Since querying the attributes is optional and in many cases will not be requested, the payload is stored as raw data, and the library waits for the appropriate API calls to do the actual attribute parsing. The backend is also responsible for preparing and attaching the software-generated attributes to the frame data. Contrary to hardware-originated data, those attributes are generated for all frames. Metadata attribute propagation from the library to the user code is described in the following diagram.

### Metadata Query API

librealsense2 introduces two functions into its public API to query metadata attributes:

```cpp
/**
 * determine device metadata
 * \param[in] frame           frame handle returned from a callback
 * \param[in] frame_metadata  the metadata attribute to check for support
 * \return true if the device has this metadata
 */
int rs2_supports_frame_metadata(const rs2_frame* frame, rs2_frame_metadata frame_metadata, rs2_error** error);
```

This function verifies that the hardware- and software-applied preconditions for metadata parsing are met:

* the attribute was registered for retrieval
* the metadata payload is valid*
* the specific attribute is included in the payload data blocks*

*Note: applicable to hardware-originated metadata payloads.
```cpp
/**
 * retrieve metadata from frame handle
 * \param[in] frame           frame handle returned from a callback
 * \param[in] frame_metadata  the rs2_frame_metadata whose latest frame we are interested in
 * \return the metadata value
 */
rs2_metadata_t rs2_get_frame_metadata(const rs2_frame* frame, rs2_frame_metadata frame_metadata, rs2_error** error);
```

This function invokes the metadata parser to read the actual value from the metadata payload. In case the attribute's origin is other than the payload, such as Auto-Exposure for the Fisheye stream, its value is calculated internally by librealsense2.

### Librealsense2 API for Metadata

The envisaged mode of querying a metadata attribute is check-then-query:

```cpp
rs2_metadata_t val = 0;
...
if (rs2_supports_frame_metadata(.., attribute, ...))
    val = rs2_get_frame_metadata(.., attribute, ...);
```

Calling rs2_get_frame_metadata without testing for attribute support may result in librealsense2 generating an exception. This is a design decision, consistent with the error-handling model employed by librealsense2.

### Employing Metadata Attributes in Demos

The samples that demonstrate querying and retrieval of metadata are rs-save-to-disk and rs-config-ui.

rs-save-to-disk saves the metadata attributes available for each stream into a comma-separated text file. For instance, for the Depth stream a possible output is:

| Metadata Attribute | Value         |
|--------------------|---------------|
| FRAME_COUNTER      | 41            |
| FRAME_TIMESTAMP    | 179708225     |
| SENSOR_TIMESTAMP   | 179671458     |
| ACTUAL_EXPOSURE    | 6951          |
| GAIN_LEVEL         | 16            |
| AUTO_EXPOSURE      | 1             |
| TIME_OF_ARRIVAL    | 1523871918775 |
| BACKEND_TIMESTAMP  | 1523871918741 |
| ACTUAL_FPS         | 30            |

rs-config-ui includes a checkbox in the upper-right corner of each stream's canvas; clicking it brings up an overlay window with the available metadata attributes.

### Metadata Support for Intel® RealSense™ Devices

In order for librealsense2 to get access to device-generated attributes, the following system preconditions shall be met:

* The OS shall support metadata provision; for Linux, the specific kernel patches shall be applied.
* The device firmware shall declare and implement the metadata attributes payload.

#### OS Support

Linux OS: the standard Linux UVC driver does not provide metadata support. The librealsense2 package includes a metadata patch for the uvcvideo kernel object. The patches are intended to be deployed with Ubuntu 14, 16.01/02 LTS with kernel versions 4.4, 4.8 and 4.10, and are applied as part of the Linux backend installation. See the Linux installation guide for more details. The patches were also successfully ported to a Yocto-based distribution and made available for the Yocto Reference Kit based on Poky 2.1.2 with kernel version 4.4.26.

Windows OS: metadata extraction is supported by Microsoft starting with Windows 10. Check the Windows installation guide for details.

#### Metadata Attributes in RS400 Devices

The device firmware implements a custom metadata payload compatible with the Microsoft Extensions for UVC spec, and emits metadata attributes in the frame payload header. The custom payload comprises several data chunks ("C" structs) with multiple attributes each. The chunks are categorized as calibration/configuration/capture/status. During streaming, the chunks are arranged into ordered sets. For instance, given attribute chunks named a, b, c, d, e, f, g, the possible sets could be {a,b,c,d}, {a,f,g,d}, {d,e,f,g}. The design requires, though, that the essential frame attributes, such as Frame Number and Timestamp, shall be included in all configured sets, to allow consistent tracking of the arriving frames.
During the regular course of operation, the firmware's internal state machine decides which attribute set to generate for the current frame. For instance, the first frames will be dispatched with configuration/calibration payloads, while the rest will carry the capture attributes. From the user's perspective, this means that a properly registered metadata attribute may still not be available for some frames, as metadata generation is governed by the firmware design.
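Because an attribute can legitimately be absent from any given frame, the check-then-query pattern shown above should be applied per frame. Below is a minimal sketch using only the two C functions declared in this document; the include path and the RS2_FRAME_METADATA_FRAME_COUNTER enum value are assumptions, and error handling is reduced to a null check.

```cpp
#include <librealsense2/rs.h> // include path assumed; adjust to your installation
#include <stdio.h>

// Query the frame counter of a single frame, tolerating frames whose
// metadata set does not include it ('frame' is assumed to be a valid
// rs2_frame* obtained from a callback)
void print_frame_counter(const rs2_frame* frame)
{
    rs2_error* e = NULL;
    // Check-then-query: support may vary from frame to frame
    if (rs2_supports_frame_metadata(frame, RS2_FRAME_METADATA_FRAME_COUNTER, &e) && !e)
    {
        rs2_metadata_t val = rs2_get_frame_metadata(frame, RS2_FRAME_METADATA_FRAME_COUNTER, &e);
        if (!e)
            printf("FRAME_COUNTER = %lld\n", (long long)val);
    }
}
```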
• D435i Introduction

    D435i
• Imitating the Google Test Unit-Testing Framework

Google unit-testing framework