Next, let's look at how the Android HAL interacts with the camera device. Each hardware vendor's camera HAL typically contains a v4l2_camera_hal.cpp file, which exposes the HAL's external interface to the frameworks; in it, HAL_MODULE_INFO_SYM marks a camera_module_t structure. The camera provider service locates that camera_module_t through HAL_MODULE_INFO_SYM, and through it drives the camera HAL and, in turn, the camera device.
The next post, "Who Calls the v4l2_camera_HAL Camera Driver", will walk through how the Android frameworks load the camera HAL module, and how the system provides the camera service to users in a client/server model.
This post focuses only on the camera HAL framework itself. Because I need a virtual camera to provide camera functionality to Android users, and will later rework part of the camera HAL source, the analysis below uses the sample camera HAL that ships with Android, under @hardware/libhardware/modules/camera/3_4/. The entry point is in v4l2_camera_hal.cpp:
static int open_dev(const hw_module_t* module,
const char* name,
hw_device_t** device) {
return gCameraHAL.openDevice(module, name, device);
}
} // namespace v4l2_camera_hal
static hw_module_methods_t v4l2_module_methods = {
.open = v4l2_camera_hal::open_dev};
camera_module_t HAL_MODULE_INFO_SYM __attribute__((visibility("default"))) = {
.common =
{
.tag = HARDWARE_MODULE_TAG,
.module_api_version = CAMERA_MODULE_API_VERSION_2_4,
.hal_api_version = HARDWARE_HAL_API_VERSION,
.id = CAMERA_HARDWARE_MODULE_ID,
.name = "V4L2 Camera HAL v3",
.author = "The Android Open Source Project",
.methods = &v4l2_module_methods,
.dso = nullptr,
.reserved = {0},
},
.get_number_of_cameras = v4l2_camera_hal::get_number_of_cameras,
.get_camera_info = v4l2_camera_hal::get_camera_info,
.set_callbacks = v4l2_camera_hal::set_callbacks,
.get_vendor_tag_ops = v4l2_camera_hal::get_vendor_tag_ops,
.open_legacy = v4l2_camera_hal::open_legacy,
.set_torch_mode = v4l2_camera_hal::set_torch_mode,
.init = nullptr,
.reserved = {nullptr, nullptr, nullptr, nullptr, nullptr}};
gCameraHAL is a static global variable defined in this same file. Its definition and the V4L2CameraHAL constructor look like this:
namespace v4l2_camera_hal {
// Default global camera hal.
static V4L2CameraHAL gCameraHAL;
V4L2CameraHAL::V4L2CameraHAL() : mCameras(), mCallbacks(NULL) {
  HAL_LOG_ENTER();
  // Adds all available V4L2 devices.
  // List /dev nodes.
  DIR* dir = opendir("/dev");
  if (dir == NULL) {
    HAL_LOGE("Failed to open /dev");
    return;
  }
  // Find /dev/video* nodes.
  dirent* ent;
  std::vector<std::string> nodes;
  while ((ent = readdir(dir))) {
    std::string desired = "video";
    size_t len = desired.size();
    if (strncmp(desired.c_str(), ent->d_name, len) == 0) {
      if (strlen(ent->d_name) > len && isdigit(ent->d_name[len])) {
        // ent is a numbered video node.
        nodes.push_back(std::string("/dev/") + ent->d_name);
        HAL_LOGV("Found video node %s.", nodes.back().c_str());
      }
    }
  }
  // Test each for V4L2 support and uniqueness.
  std::unordered_set<std::string> buses;
  std::string bus;
  v4l2_capability cap;
  int fd;
  int id = 0;
  for (const auto& node : nodes) {
    // Open the node.
    fd = TEMP_FAILURE_RETRY(open(node.c_str(), O_RDWR));
    if (fd < 0) {
      HAL_LOGE("failed to open %s (%s).", node.c_str(), strerror(errno));
      continue;
    }
    // Read V4L2 capabilities.
    if (TEMP_FAILURE_RETRY(ioctl(fd, VIDIOC_QUERYCAP, &cap)) != 0) {
      HAL_LOGE(
          "VIDIOC_QUERYCAP on %s fail: %s.", node.c_str(), strerror(errno));
    } else if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
      HAL_LOGE("%s is not a V4L2 video capture device.", node.c_str());
    } else {
      // If the node is unique, add a camera for it.
      bus = reinterpret_cast<char*>(cap.bus_info);
      if (buses.insert(bus).second) {
        HAL_LOGV("Found unique bus at %s.", node.c_str());
        std::unique_ptr<V4L2Camera> cam(V4L2Camera::NewV4L2Camera(id++, node));
        if (cam) {
          mCameras.push_back(std::move(cam));
        } else {
          HAL_LOGE("Failed to initialize camera at %s.", node.c_str());
        }
      }
    }
    TEMP_FAILURE_RETRY(close(fd));
  }
}
}  // namespace v4l2_camera_hal
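The filename filter in this constructor reduces to: the entry starts with "video" and the next character is a digit. As a standalone sketch (the helper name is ours, not from the HAL):

```cpp
#include <cctype>
#include <cstring>

// Returns true for /dev entries like "video0" or "video12"; false for
// "video", "videoX", or unrelated names. Mirrors the strncmp/isdigit
// check in V4L2CameraHAL's constructor.
bool IsNumberedVideoNode(const char* d_name) {
  static const char kPrefix[] = "video";
  const size_t len = sizeof(kPrefix) - 1;  // 5; excludes the trailing NUL.
  if (strncmp(kPrefix, d_name, len) != 0) return false;
  return strlen(d_name) > len &&
         isdigit(static_cast<unsigned char>(d_name[len]));
}
```

Note that, like the original, it only inspects the first character after the prefix, so a hypothetical "video1a" would also pass.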
When the camera.v4l2.so library is loaded, the V4L2CameraHAL constructor runs. It probes the /dev directory for video nodes that support V4L2 video capture and stores the results in the mCameras vector. The objects held there are instances of v4l2_camera_hal::V4L2Camera, which derives from default_camera_hal::Camera; the mCameras methods invoked later come from these two classes. During construction, V4L2Camera::NewV4L2Camera(id++, node) is called to build each camera object. Its declaration is:
@hardware/libhardware/modules/camera/3_4/v4l2_camera.h
namespace v4l2_camera_hal {
// V4L2Camera is a specific V4L2-supported camera device. The Camera object
// contains all logic common between all cameras (e.g. front and back cameras),
// while a specific camera device (e.g. V4L2Camera) holds all specific
// metadata and logic about that device.
class V4L2Camera : public default_camera_hal::Camera {
public:
// Use this method to create V4L2Camera objects. Functionally equivalent
// to "new V4L2Camera", except that it may return nullptr in case of failure.
static V4L2Camera* NewV4L2Camera(int id, const std::string path);
~V4L2Camera();
private:
// Constructor private to allow failing on bad input.
// Use NewV4L2Camera instead.
V4L2Camera(int id,
std::shared_ptr<V4L2Wrapper> v4l2_wrapper,
std::unique_ptr<Metadata> metadata);
int enqueueRequest(
std::shared_ptr<default_camera_hal::CaptureRequest> request) override;
// Async request processing helpers.
// Dequeue a request from the waiting queue.
// Blocks until a request is available.
std::shared_ptr<default_camera_hal::CaptureRequest> dequeueRequest();
std::unique_ptr<Metadata> metadata_;
std::mutex request_queue_lock_;
std::queue<std::shared_ptr<default_camera_hal::CaptureRequest>>
request_queue_;
std::mutex in_flight_lock_;
// Maps buffer index : request.
std::map<uint32_t, std::shared_ptr<default_camera_hal::CaptureRequest>>
in_flight_;
// Threads require holding an Android strong pointer.
android::sp<android::Thread> buffer_enqueuer_;
android::sp<android::Thread> buffer_dequeuer_;
std::condition_variable requests_available_;
std::condition_variable buffers_in_flight_;
int32_t max_input_streams_;
std::array<int, 3> max_output_streams_; // {raw, non-stalling, stalling}.
};
}  // namespace v4l2_camera_hal
@hardware/libhardware/modules/camera/3_4/camera.h
namespace default_camera_hal {
// Camera represents a physical camera on a device.
// This is constructed when the HAL module is loaded, one per physical camera.
// TODO(b/29185945): Support hotplugging.
// It is opened by the framework, and must be closed before it can be opened
// again.
// This is an abstract class, containing all logic and data shared between all
// camera devices (front, back, etc) and common to the ISP.
class Camera {
public:
// id is used to distinguish cameras. 0 <= id < NUM_CAMERAS.
// module is a handle to the HAL module, used when the device is opened.
Camera(int id);
virtual ~Camera();
// Common Camera Device Operations (see <hardware/camera_common.h>)
int openDevice(const hw_module_t *module, hw_device_t **device);
int getInfo(struct camera_info *info);
int close();
// Camera v3 Device Operations (see <hardware/camera3.h>)
int initialize(const camera3_callback_ops_t *callback_ops);
int configureStreams(camera3_stream_configuration_t *stream_list);
const camera_metadata_t *constructDefaultRequestSettings(int type);
int processCaptureRequest(camera3_capture_request_t *temp_request);
void dump(int fd);
int flush();
protected:
... //> Pure-virtual method declarations omitted.
// Callback for when the device has filled in the requested data.
// Fills in the result struct, validates the data, sends appropriate
// notifications, and returns the result to the framework.
void completeRequest(
std::shared_ptr<CaptureRequest> request, int err);
// Prettyprint template names
const char* templateToString(int type);
private:
// Camera device handle returned to framework for use
camera3_device_t mDevice;
// Get static info from the device and store it in mStaticInfo.
int loadStaticInfo();
// Confirm that a stream configuration is valid.
int validateStreamConfiguration(
const camera3_stream_configuration_t* stream_config);
// Verify settings are valid for reprocessing an input buffer
bool isValidReprocessSettings(const camera_metadata_t *settings);
// Pre-process an output buffer
int preprocessCaptureBuffer(camera3_stream_buffer_t *buffer);
// Send a shutter notify message with start of exposure time
void notifyShutter(uint32_t frame_number, uint64_t timestamp);
// Send an error message and return the errored out result.
void completeRequestWithError(std::shared_ptr<CaptureRequest> request);
// Send a capture result for a request.
void sendResult(std::shared_ptr<CaptureRequest> request);
// Is type a valid template type (and valid index into mTemplates)
bool isValidTemplateType(int type);
// Identifier used by framework to distinguish cameras
const int mId;
// CameraMetadata containing static characteristics
std::unique_ptr<StaticProperties> mStaticInfo;
// Flag indicating if settings have been set since
// the last configure_streams() call.
bool mSettingsSet;
// Busy flag indicates camera is in use
bool mBusy;
// Camera device operations handle shared by all devices
const static camera3_device_ops_t sOps;
// Methods used to call back into the framework
const camera3_callback_ops_t *mCallbackOps;
// Lock protecting the Camera object for modifications
android::Mutex mDeviceLock;
// Lock protecting only static camera characteristics, which may
// be accessed without the camera device open
android::Mutex mStaticInfoLock;
android::Mutex mFlushLock;
// Standard camera settings templates
std::unique_ptr<const android::CameraMetadata> mTemplates[CAMERA3_TEMPLATE_COUNT];
// Track in flight requests.
std::unique_ptr<RequestTracker> mInFlightTracker;
};
} // namespace default_camera_hal
These two headers cover essentially all of the member variables and methods of the V4L2 camera object. The static factory method NewV4L2Camera() returns a V4L2Camera pointer, which is stored in the mCameras container. Let's briefly walk through the construction code:
@hardware/libhardware/modules/camera/3_4/v4l2_camera.cpp
V4L2Camera* V4L2Camera::NewV4L2Camera(int id, const std::string path) {
HAL_LOG_ENTER();
std::shared_ptr<V4L2Wrapper> v4l2_wrapper(V4L2Wrapper::NewV4L2Wrapper(path));
if (!v4l2_wrapper) {
HAL_LOGE("Failed to initialize V4L2 wrapper.");
return nullptr;
}
std::unique_ptr<Metadata> metadata;
int res = GetV4L2Metadata(v4l2_wrapper, &metadata);
if (res) {
HAL_LOGE("Failed to initialize V4L2 metadata: %d", res);
return nullptr;
}
return new V4L2Camera(id, std::move(v4l2_wrapper), std::move(metadata));
}
V4L2Camera::V4L2Camera(int id,
std::shared_ptr<V4L2Wrapper> v4l2_wrapper,
std::unique_ptr<Metadata> metadata)
: default_camera_hal::Camera(id),
device_(std::move(v4l2_wrapper)),
metadata_(std::move(metadata)),
max_input_streams_(0),
max_output_streams_({{0, 0, 0}}),
buffer_enqueuer_(new FunctionThread(
std::bind(&V4L2Camera::enqueueRequestBuffers, this))),
buffer_dequeuer_(new FunctionThread(
std::bind(&V4L2Camera::dequeueRequestBuffers, this))) {
HAL_LOG_ENTER();
}
As shown, a V4L2Wrapper object is created from the device node path and used to obtain the camera's default configuration and capabilities (the metadata). The camera object is then constructed from the id, the v4l2_wrapper, and the metadata, and stored in the mCameras vector.
Opening a camera device goes through the .methods entry of HAL_MODULE_INFO_SYM. The call chain is as follows:
//> 1. Entry registered in HAL_MODULE_INFO_SYM .methods
static int open_dev(const hw_module_t* module,
const char* name,
hw_device_t** device) {
return gCameraHAL.openDevice(module, name, device);
}
//> 2
int V4L2CameraHAL::openDevice(const hw_module_t* module,
const char* name,
hw_device_t** device) {
HAL_LOG_ENTER();
if (module != &HAL_MODULE_INFO_SYM.common) {
HAL_LOGE(
"Invalid module %p expected %p", module, &HAL_MODULE_INFO_SYM.common);
return -EINVAL;
}
int id;
if (!android::base::ParseInt(name, &id, 0, getNumberOfCameras() - 1)) {
return -EINVAL;
}
// TODO(b/29185945): Hotplugging: return -EINVAL if unplugged.
return mCameras[id]->openDevice(module, device);
}
//> 3. The actual implementation
int Camera::openDevice(const hw_module_t *module, hw_device_t **device)
{
ALOGI("%s:%d: Opening camera device", __func__, mId);
ATRACE_CALL();
android::Mutex::Autolock al(mDeviceLock);
if (mBusy) {
ALOGE("%s:%d: Error! Camera device already opened", __func__, mId);
return -EBUSY;
}
int connectResult = connect();
if (connectResult != 0) {
return connectResult;
}
mBusy = true;
mDevice.common.module = const_cast<hw_module_t*>(module);
*device = &mDevice.common;
return 0;
}
Once the camera is connected, the device counts as opened. We won't dig into the connection internals for now; the calling code is:
int V4L2Camera::connect() {
HAL_LOG_ENTER();
if (connection_) {
HAL_LOGE("Already connected. Please disconnect and try again.");
return -EIO;
}
connection_.reset(new V4L2Wrapper::Connection(device_));
if (connection_->status()) {
HAL_LOGE("Failed to connect to device.");
return connection_->status();
}
// TODO(b/29185945): confirm this is a supported device.
// This is checked by the HAL, but the device at |device_|'s path may
// not be the same one that was there when the HAL was loaded.
// (Alternatively, better hotplugging support may make this unecessary
// by disabling cameras that get disconnected and checking newly connected
// cameras, so connect() is never called on an unsupported camera)
// TODO(b/29158098): Inform service of any flashes that are no longer
// available because this camera is in use.
return 0;
}
The connection_ member used here is declared as std::unique_ptr<V4L2Wrapper::Connection> connection_. The V4L2Wrapper class wraps the V4L2 camera interface; constructing V4L2Wrapper::Connection(device_) ends up calling V4L2Wrapper::Connect():
int V4L2Wrapper::Connect() {
HAL_LOG_ENTER();
std::lock_guard<std::mutex> lock(connection_lock_);
if (connected()) {
HAL_LOGV("Camera device %s is already connected.", device_path_.c_str());
++connection_count_;
return 0;
}
// Open in nonblocking mode (DQBUF may return EAGAIN).
int fd = TEMP_FAILURE_RETRY(open(device_path_.c_str(), O_RDWR | O_NONBLOCK));
if (fd < 0) {
HAL_LOGE("failed to open %s (%s)", device_path_.c_str(), strerror(errno));
return -ENODEV;
}
device_fd_.reset(fd);
++connection_count_;
// Check if this connection has the extended control query capability.
v4l2_query_ext_ctrl query;
query.id = V4L2_CTRL_FLAG_NEXT_CTRL | V4L2_CTRL_FLAG_NEXT_COMPOUND;
extended_query_supported_ = (IoctlLocked(VIDIOC_QUERY_EXT_CTRL, &query) == 0);
// TODO(b/29185945): confirm this is a supported device.
// This is checked by the HAL, but the device at device_path_ may
// not be the same one that was there when the HAL was loaded.
// (Alternatively, better hotplugging support may make this unecessary
// by disabling cameras that get disconnected and checking newly connected
// cameras, so Connect() is never called on an unsupported camera)
return 0;
}
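The Connection object created in V4L2Camera::connect() is a small RAII helper: its constructor bumps the wrapper's connection_count_ via Connect() above, and its destructor calls the matching Disconnect() (not quoted here), keeping the reference count balanced. A minimal sketch of that pattern, with all class names made up for illustration:

```cpp
#include <mutex>

// A device stub that reference-counts connections, like V4L2Wrapper's
// connection_count_: only the first Connect() would actually open the
// device, and only the last Disconnect() would close it.
class RefCountedDevice {
 public:
  int Connect() {
    std::lock_guard<std::mutex> lock(mutex_);
    ++count_;
    return 0;  // A real wrapper would return -ENODEV on open failure.
  }
  void Disconnect() {
    std::lock_guard<std::mutex> lock(mutex_);
    if (count_ > 0) --count_;
  }
  int count() {
    std::lock_guard<std::mutex> lock(mutex_);
    return count_;
  }
 private:
  std::mutex mutex_;
  int count_ = 0;
};

// Scoped connection: connected while in scope, like
// V4L2Wrapper::Connection held by V4L2Camera::connection_.
class ScopedConnection {
 public:
  explicit ScopedConnection(RefCountedDevice* dev)
      : dev_(dev), status_(dev->Connect()) {}
  ~ScopedConnection() {
    if (status_ == 0) dev_->Disconnect();
  }
  int status() const { return status_; }
 private:
  RefCountedDevice* dev_;
  int status_;
};
```

This is why V4L2Camera::connect() only needs connection_.reset(...) to connect and can simply destroy the object to disconnect.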
Once open(device_path_.c_str(), O_RDWR | O_NONBLOCK) succeeds, opening the node in nonblocking mode, the camera counts as connected.
Taking a photo or recording video essentially means delivering the camera's output buffers to the frameworks layer, which invokes the user-space callbacks when notified. We'll take that as a working assumption here and verify it in "Who Calls the v4l2_camera_HAL Camera Driver". On the HAL side, the code involved is:
static int process_capture_request(const camera3_device_t *dev,
camera3_capture_request_t *request)
{
return camdev_to_camera(dev)->processCaptureRequest(request);
}
//> CaptureRequest
int Camera::processCaptureRequest(camera3_capture_request_t *temp_request)
{
int res;
// TODO(b/32917568): A capture request submitted or ongoing during a flush
// should be returned with an error; for now they are mutually exclusive.
android::Mutex::Autolock al(mFlushLock);
ATRACE_CALL();
if (temp_request == NULL) {
ALOGE("%s:%d: NULL request received", __func__, mId);
return -EINVAL;
}
// Make a persistent copy of request, since otherwise it won't live
// past the end of this method.
std::shared_ptr<CaptureRequest> request = std::make_shared<CaptureRequest>(temp_request);
ALOGV("%s:%d: frame: %d", __func__, mId, request->frame_number);
if (!mInFlightTracker->CanAddRequest(*request)) {
// Streams are full or frame number is not unique.
ALOGE("%s:%d: Can not add request.", __func__, mId);
return -EINVAL;
}
// Null/Empty indicates use last settings
if (request->settings.isEmpty() && !mSettingsSet) {
ALOGE("%s:%d: NULL settings without previous set Frame:%d",
__func__, mId, request->frame_number);
return -EINVAL;
}
if (request->input_buffer != NULL) {
ALOGV("%s:%d: Reprocessing input buffer %p", __func__, mId,
request->input_buffer.get());
} else {
ALOGV("%s:%d: Capturing new frame.", __func__, mId);
}
if (!isValidRequestSettings(request->settings)) {
ALOGE("%s:%d: Invalid request settings.", __func__, mId);
return -EINVAL;
}
// Pre-process output buffers.
if (request->output_buffers.size() <= 0) {
ALOGE("%s:%d: Invalid number of output buffers: %d", __func__, mId,
request->output_buffers.size());
return -EINVAL;
}
for (auto& output_buffer : request->output_buffers) {
res = preprocessCaptureBuffer(&output_buffer); //> Waits on the output_buffer's sync fence.
if (res)
return -ENODEV;
}
// Add the request to tracking.
if (!mInFlightTracker->Add(request)) {
ALOGE("%s:%d: Failed to track request for frame %d.",
__func__, mId, request->frame_number);
return -ENODEV;
}
// Valid settings have been provided (mSettingsSet is a misnomer;
// all that matters is that a previous request with valid settings
// has been passed to the device, not that they've been set).
mSettingsSet = true;
// Send the request off to the device for completion.
enqueueRequest(request);
// Request is now in flight. The device will call completeRequest
// asynchronously when it is done filling buffers and metadata.
return 0;
}
Once the output buffers pass pre-processing, the request is pushed onto a queue:
int V4L2Camera::enqueueRequest(
std::shared_ptr<default_camera_hal::CaptureRequest> request) {
HAL_LOG_ENTER();
// Assume request validated before calling this function.
// (For now, always exactly 1 output buffer, no inputs).
{
std::lock_guard<std::mutex> guard(request_queue_lock_);
request_queue_.push(request);
requests_available_.notify_one();
}
return 0;
}
After enqueueRequest() pushes the request onto the queue, it signals the requests_available_ condition variable. The thread waiting on that condition variable looks like this:
bool V4L2Camera::enqueueRequestBuffers() {
// Get a request from the queue (blocks this thread until one is available).
std::shared_ptr<default_camera_hal::CaptureRequest> request =
dequeueRequest();
// Assume request validated before being added to the queue
// (For now, always exactly 1 output buffer, no inputs).
// Setting and getting settings are best effort here,
// since there's no way to know through V4L2 exactly what
// settings are used for a buffer unless we were to enqueue them
// one at a time, which would be too slow.
// Set the requested settings
int res = metadata_->SetRequestSettings(request->settings);
if (res) {
HAL_LOGE("Failed to set settings.");
completeRequest(request, res);
return true;
}
// Replace the requested settings with a snapshot of
// the used settings/state immediately before enqueue.
res = metadata_->FillResultMetadata(&request->settings);
if (res) {
// Note: since request is a shared pointer, this may happen if another
// thread has already decided to complete the request (e.g. via flushing),
// since that locks the metadata (in that case, this failing is fine,
// and completeRequest will simply do nothing).
HAL_LOGE("Failed to fill result metadata.");
completeRequest(request, res);
return true;
}
// Actually enqueue the buffer for capture.
{
std::lock_guard<std::mutex> guard(in_flight_lock_);
uint32_t index;
res = device_->EnqueueBuffer(&request->output_buffers[0], &index);
if (res) {
HAL_LOGE("Device failed to enqueue buffer.");
completeRequest(request, res);
return true;
}
// Make sure the stream is on (no effect if already on).
res = device_->StreamOn();
if (res) {
HAL_LOGE("Device failed to turn on stream.");
// Don't really want to send an error for only the request here,
// since this is a full device error.
// TODO: Should trigger full flush.
return true;
}
// Note: the request should be dequeued/flushed from the device
// before removal from in_flight_.
in_flight_.emplace(index, request);
buffers_in_flight_.notify_one();
}
return true;
}
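The dequeueRequest() helper this thread starts with is not quoted here, but the request_queue_lock_ / requests_available_ pair it relies on is the classic blocking-queue pattern: the producer notifies one waiter, the consumer sleeps until an element arrives. A generic sketch, with names of our choosing:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal blocking queue, mirroring the request_queue_ /
// requests_available_ pattern in V4L2Camera: Push() notifies one
// waiter, Pop() blocks until an element is available.
template <typename T>
class BlockingQueue {
 public:
  void Push(T value) {
    {
      std::lock_guard<std::mutex> guard(lock_);
      queue_.push(std::move(value));
    }
    available_.notify_one();
  }
  T Pop() {
    std::unique_lock<std::mutex> lock(lock_);
    while (queue_.empty()) {
      available_.wait(lock);  // Releases lock_ while sleeping.
    }
    T value = std::move(queue_.front());
    queue_.pop();
    return value;
  }
 private:
  std::mutex lock_;
  std::condition_variable available_;
  std::queue<T> queue_;
};
```

The while loop around wait() guards against spurious wakeups, which is why dequeueRequest() can safely promise to "block until a request is available".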
This thread calls device_->EnqueueBuffer(&request->output_buffers[0], &index) to hand the buffer to the camera:
int V4L2Wrapper::EnqueueBuffer(const camera3_stream_buffer_t* camera_buffer,
uint32_t* enqueued_index) {
if (!format_) {
HAL_LOGE("Stream format must be set before enqueuing buffers.");
return -ENODEV;
}
// Find a free buffer index. Could use some sort of persistent hinting
// here to improve expected efficiency, but buffers_.size() is expected
// to be low enough (<10 experimentally) that it's not worth it.
int index = -1;
{
std::lock_guard<std::mutex> guard(buffer_queue_lock_);
for (int i = 0; i < buffers_.size(); ++i) {
if (!buffers_[i]) {
index = i;
break;
}
}
}
if (index < 0) {
// Note: The HAL should be tracking the number of buffers in flight
// for each stream, and should never overflow the device.
HAL_LOGE("Cannot enqueue buffer: stream is already full.");
return -ENODEV;
}
// Set up a v4l2 buffer struct.
v4l2_buffer device_buffer;
memset(&device_buffer, 0, sizeof(device_buffer));
device_buffer.type = format_->type();
device_buffer.index = index;
// Use QUERYBUF to ensure our buffer/device is in good shape,
// and fill out remaining fields.
if (IoctlLocked(VIDIOC_QUERYBUF, &device_buffer) < 0) {
HAL_LOGE("QUERYBUF fails: %s", strerror(errno));
return -ENODEV;
}
// Lock the buffer for writing (fills in the user pointer field).
int res =
gralloc_->lock(camera_buffer, format_->bytes_per_line(), &device_buffer);
if (res) {
HAL_LOGE("Gralloc failed to lock buffer.");
return res;
}
if (IoctlLocked(VIDIOC_QBUF, &device_buffer) < 0) {
HAL_LOGE("QBUF fails: %s", strerror(errno));
gralloc_->unlock(&device_buffer);
return -ENODEV;
}
// Mark the buffer as in flight.
std::lock_guard<std::mutex> guard(buffer_queue_lock_);
buffers_[index] = true;
if (enqueued_index) {
*enqueued_index = index;
}
return 0;
}
Once the buffer is queued, the code signals buffers_in_flight_.notify_one(), waking the V4L2Camera::dequeueRequestBuffers() thread that waits to read buffers back.
bool V4L2Camera::dequeueRequestBuffers() {
// Dequeue a buffer.
uint32_t result_index;
int res = device_->DequeueBuffer(&result_index);
if (res) {
if (res == -EAGAIN) {
// EAGAIN just means nothing to dequeue right now.
// Wait until something is available before looping again.
std::unique_lock<std::mutex> lock(in_flight_lock_);
while (in_flight_.empty()) {
buffers_in_flight_.wait(lock);
}
} else {
HAL_LOGW("Device failed to dequeue buffer: %d", res);
}
return true;
}
// Find the associated request and complete it.
std::lock_guard<std::mutex> guard(in_flight_lock_);
auto index_request = in_flight_.find(result_index);
if (index_request != in_flight_.end()) {
completeRequest(index_request->second, 0);
in_flight_.erase(index_request);
} else {
HAL_LOGW(
"Dequeued non in-flight buffer index %d. "
"This buffer may have been flushed from the HAL but not the device.",
result_index);
}
return true;
}
This thread calls device_->DequeueBuffer(&result_index) to read a filled buffer back from the camera:
int V4L2Wrapper::DequeueBuffer(uint32_t* dequeued_index) {
if (!format_) {
HAL_LOGV(
"Format not set, so stream can't be on, "
"so no buffers available for dequeueing");
return -EAGAIN;
}
v4l2_buffer buffer;
memset(&buffer, 0, sizeof(buffer));
buffer.type = format_->type();
buffer.memory = V4L2_MEMORY_USERPTR;
int res = IoctlLocked(VIDIOC_DQBUF, &buffer);
if (res) {
if (errno == EAGAIN) {
// Expected failure.
return -EAGAIN;
} else {
// Unexpected failure.
HAL_LOGE("DQBUF fails: %s", strerror(errno));
return -ENODEV;
}
}
// Mark the buffer as no longer in flight.
{
std::lock_guard<std::mutex> guard(buffer_queue_lock_);
buffers_[buffer.index] = false;
}
// Now that we're done painting the buffer, we can unlock it.
res = gralloc_->unlock(&buffer);
if (res) {
HAL_LOGE("Gralloc failed to unlock buffer after dequeueing.");
return res;
}
if (dequeued_index) {
*dequeued_index = buffer.index;
}
return 0;
}
To summarize:
(1). When a user application takes a photo, processCaptureRequest() handles the request, queues it, and signals the requests_available_ condition variable. The enqueueRequestBuffers() thread wakes up and enqueues the request's output buffer to the camera device.
(2). After the buffer is enqueued and the stream turned on, buffers_in_flight_ is signaled. The dequeueRequestBuffers() thread dequeues the filled buffer, calls completeRequest(index_request->second, 0), and ultimately invokes the callback mCallbackOps->process_capture_result(mCallbackOps, &result), passing the buffer contents back to the user-space callback. That is the data flow for taking a photo.
(3). When the user application submits CaptureRequests continuously, the same flow becomes video recording: in Android user space, photo versus video is a matter of mode. A single request is a photo; a continuous stream of requests is a recording.
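The in_flight_ bookkeeping that links steps (1) and (2) above — emplace on enqueue, find-and-erase on dequeue — can be condensed into one sketch (types simplified; a string stands in for CaptureRequest):

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <string>

// Condensed model of the in_flight_ map shared by the two threads:
// enqueueRequestBuffers() records buffer index -> request, and
// dequeueRequestBuffers() looks the request up by the dequeued index.
using Request = std::shared_ptr<std::string>;  // Stand-in for CaptureRequest.

class InFlightTable {
 public:
  // Called after a successful VIDIOC_QBUF with the chosen buffer index.
  void MarkInFlight(uint32_t index, Request request) {
    in_flight_.emplace(index, std::move(request));
  }
  // Called after VIDIOC_DQBUF returns an index. Returns the matching
  // request, or nullptr if the buffer was flushed from the HAL but not
  // the device (the warning case in dequeueRequestBuffers()).
  Request Complete(uint32_t index) {
    auto it = in_flight_.find(index);
    if (it == in_flight_.end()) return nullptr;
    Request request = it->second;
    in_flight_.erase(it);
    return request;
  }
 private:
  std::map<uint32_t, Request> in_flight_;
};
```

In the real HAL this table is protected by in_flight_lock_, since the enqueue and dequeue threads touch it concurrently.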
The completeRequest() call chain is as follows:
--> void Camera::completeRequest(std::shared_ptr<CaptureRequest> request, int err)
----> sendResult(request);
------> void Camera::sendResult(std::shared_ptr<CaptureRequest> request)
--------> mCallbackOps->process_capture_result(mCallbackOps, &result);
Streaming is started inside enqueueRequestBuffers() via device_->StreamOn(), and stopped with device_->StreamOff() in V4L2Camera::disconnect().
To wrap up, a brief summary of the V4L2CameraHAL driver:
(1). v4l2_camera_hal.cpp is the entry point of the camera HAL. Its main job is to act as a wrapper: it fills in the camera_module_t HAL_MODULE_INFO_SYM structure and opens the camera via gCameraHAL.openDevice(module, name, device); everything else is implemented in v4l2_camera.cpp.
(2). v4l2_camera.cpp is the main implementation of the camera HAL. It inherits the common camera attributes and methods from camera.cpp, and brings in two helper classes: V4L2Wrapper, which manages the camera interface, and CaptureRequest, which manages photo and video interaction with the user.
(3). The data flow triggered by a user-space capture request is, for now, the deduction laid out above; we will verify it, and correct any gaps, when we analyze the media.camera daemon. Judging from how the v4l2_camera driver's low-level interfaces are used, this deduction should be essentially correct.