How to use Kinect 2.0 on Tegra Ventana

Back camera driver implementation on the Android Tegra platform
        This article covers the implementation of a back-camera driver on the Android Tegra platform. Before getting into it, a short introduction to the platform: Tegra is a system-on-a-chip (SoC) design that integrates an ARM processor with an NVIDIA GeForce GPU and other on-chip functions, aimed at small devices. Compared with Intel's PC-rooted x86 architecture, the ARM-based Tegra is better seen as an evolution of mobile-phone processors. It cannot run x86 PC operating systems such as Windows XP, but the lightweight ARM operating systems long used on phones are a better match for its high-speed, low-power requirements.
        Camera architecture
        Physical architecture
        This covers the lens, the sensor (image sensor), and the ISP (image signal processor)
        Android camera architecture
        APP - Framework, Framework - HAL, Overlay, HAL - driver
        How the camera works
        Overview of the principle
        Camera chip schematic
        Implementing the camera driver
        Three prerequisites for understanding a camera driver
        Driver implementation steps
        Configuring the GPIO pins and the chip power-on sequence
        Configuring I2C
        Implementing the camera HAL
        The camera HAL implementation flow
        Diagram of the camera implementation files
        Interfaces implemented by the HAL
        The main camera HAL libraries
        Adding a camera on the Android Tegra platform
        Define an identifier (GUID)
        Configure the camera hardware connection parameters
        Associate the camera device addresses with the GUID
        Add the files to Android.mk
        Add the camera type enumerated by the HAL
        Implement the camera device configuration and functionality
        Debugging during camera driver bring-up
2 Camera physical architecture
        Generally speaking, a camera consists of a lens and a sensor IC; some sensor ICs integrate a DSP and some do not, in which case an external DSP is needed. In more detail, a camera device is built from the following parts:
        Lens: a camera lens usually consists of several optical elements, either plastic (P) or glass (G); common stacks are 1P, 2P, 1G1P, 1G2P, 2G2P, 4G, and so on.
        Sensor: the image sensor is a semiconductor chip, of which there are two kinds, CCD and CMOS. The sensor converts the light arriving through the lens into an electrical signal, and its internal ADC then converts that into a digital signal. Because each pixel of the sensor senses only R, G, or B light, at this point each pixel stores a single color value, known as RAW data. Restoring each pixel's RAW data to the three primary colors is the job of the ISP.
        ISP: the image signal processor does the digital image processing, converting the raw data captured by the sensor into a format the display can use.
        On the hardware side the camera system splits into a host controller and the camera device; functionally it offers preview, takePicture, and recording.
        IPU - the Image Processing Unit, which controls the camera and the display.
        Image capture - the image data captured by the camera is brought in through the IPU's CSI interface.
        DMA to memory - the IPU maps the captured data into a region of memory via DMA.
        Queueing - for efficient transfer, buffers are taken from memory, placed on one queue, and handed over to another queue.
        Video output - video data is dequeued, the IPU drives a dedicated region of display memory, and the video is finally shown on screen.
3 The camera architecture in Android and how it works
[Thanks to 终结者 for contributing this article]
        This article introduces the camera architecture in Android and how it works.
The camera architecture in Android
        Android's camera stack consists, from top to bottom, of the application layer, the framework layer, the hardware abstraction layer, and the Linux driver layer. The framework layer, the HAL, and the Linux driver layer are briefly introduced below.
APP - Framework
        The application layer and the Java framework layer communicate mainly through the Binder mechanism.
        At system initialization a CameraService daemon is started to provide the camera functionality interface to upper-layer applications.
Framework - HAL
        The framework layer and the hardware abstraction layer exchange data through callback functions.
        The Overlay layer, made up of SurfaceFlinger and the Overlay HAL, implements video output; only the camera framework or the video framework can call it, never the upper layers directly.
HAL - driver
        The abstraction layer lives in user space and exchanges data with kernel space through system calls such as open(), read(), and ioctl().
How the camera works
        Incoming light passes through the lens and the color filter onto the sensor surface; the sensor converts the light arriving through the lens into an electrical signal and, via its internal ADC, into a digital signal. If the sensor has no integrated DSP, the data is sent to the baseband over DVP, still in RAW DATA format. If a DSP is integrated, the RAW DATA goes through AWB, color matrix, lens shading, gamma, sharpness, AE, and de-noise processing and comes out as YUV or RGB data. Finally the CPU sends it to the framebuffer for display, and we see the scene the camera captures.
camera chip schematic:
        The schematic of the 5M back-camera chip mt9d111 is shown below:
        From the schematic, the key point is that pins 19 and 21 connect to CAM_I2C_SDA and CAM_I2C_SCL, so the sensor can be configured over I2C. When debugging the camera you can also use the schematic to decide where to probe with an oscilloscope and verify that the code behaves correctly.
        Note also that before developing the driver it is best to check with a multimeter that every camera pin is correctly connected to the chip; otherwise you will see no image even when the code is fine.
        The mt9d111 is a CMOS image sensor chip; it senses the external scene and converts it into a digital signal output.
        We must feed the sensor a clock on XVCLK1; RESET is the reset line, and PWDN must stay low while the sensor is working. HREF is the line reference signal, PCLK the pixel clock, and VSYNC the frame sync signal. Once the sensor is clocked and has been reset it starts working, streaming digital image data synchronized by HREF, PCLK, and VSYNC. The mt9d111 outputs images in YUV format, a compressed image data family containing many concrete layouts; our sensor uses YCbCr (8 bits, 4:2:2, interpolated color). The format configured later in the driver must match this exactly.
4 Implementing the camera driver
        This article walks through the camera driver implementation with code. Understanding a camera driver requires three prerequisites:
How a camera basically works
How platform_device and platform_driver work
The Linux kernel I2C driver architecture
        Writing the driver mainly involves configuring the GPIOs, I2C, MIPI, power rails, and clocks; writing the driver code; bringing up the camera sensor driver; and implementing switching between the front and back cameras. Following the chip manual, implement the basic features - preview, capture, recording, and effects (scene, effect, ev, iso, wb, contrast, and so on). There are also advanced features - stabilization, autofocus, flash, firmware upgrade, 720p, WDR, panorama, etc. - but this article covers only the basics.
Configuring the GPIO pins and the chip power-on sequence
        The power-on sequence from the datasheet is shown below:
Power-on sequence 1
Power-on sequence 2
        The corresponding code:
static struct camera_gpios yuv5_sensor_gpio_keys[] = {
    [0] = CAMERA_GPIO("cam_power_en", CAMERA_POWER_GPIO, 1, 0, GPIO_FREE),
    [1] = CAMERA_GPIO("yuv5_sensor_pwdn", YUV5_PWR_DN_GPIO, 0, 0, GPIO_FREE),
    [2] = CAMERA_GPIO("yuv5_sensor_rst_lo", YUV5_RST_L_GPIO, 1, 0, GPIO_FREE),
};

static int yuv5_sensor_power_on(void)
{
    int ret;
    int i;

    pr_err("%s: sandow pmu\n", __func__);
    for (i = 0; i < ARRAY_SIZE(yuv5_sensor_gpio_keys); i++) {
        tegra_gpio_enable(yuv5_sensor_gpio_keys[i].gpio);
        ret = gpio_request(yuv5_sensor_gpio_keys[i].gpio,
                yuv5_sensor_gpio_keys[i].name);
        if (ret < 0) {
            pr_err("%s: gpio_request failed for gpio #%d\n",
                    __func__, i);
            goto fail;
        }
        gpio_direction_output(yuv5_sensor_gpio_keys[i].gpio,
                yuv5_sensor_gpio_keys[i].enabled);
        gpio_export(yuv5_sensor_gpio_keys[i].gpio, false);
    }
    return 0;

fail:
    while (i--)
        gpio_free(yuv5_sensor_gpio_keys[i].gpio);
    return ret;
}

static int yuv5_sensor_power_off(void)
{
    int i;
    int gpio_pw_dn, gpio_camera_power;

    pr_err("%s: sandow pmu\n", __func__);
    gpio_direction_output(YUV5_PWR_DN_GPIO, 1);
    gpio_direction_output(CAMERA_POWER_GPIO, 0);
    gpio_pw_dn = gpio_get_value(YUV5_PWR_DN_GPIO);
    gpio_camera_power = gpio_get_value(CAMERA_POWER_GPIO);
    printk("%s: sandow pmu gpio_pw_dn: %d, camera_power: %d", __func__,
            gpio_pw_dn, gpio_camera_power);
    i = ARRAY_SIZE(yuv5_sensor_gpio_keys);
    while (i--)
        gpio_free(yuv5_sensor_gpio_keys[i].gpio);
    return 0;
}

struct yuv5_sensor_platform_data yuv5_sensor_data = {
    .power_on = yuv5_sensor_power_on,
    .power_off = yuv5_sensor_power_off,
};
        Two points to note about I2C communication itself:
        1. The 9th bit on SDA is the ACK bit; when it is low, the slave device has responded;
        2. The slave address: the spec gives an 8-bit address whose lowest bit selects Write (0) or Read (1), but the actual I2C chip address is 7 bits. The board-level information in the Linux kernel's struct i2c_board_info must be filled in with the 7-bit I2C address.
The code is in board-ventana-sensors.c:
{
    I2C_BOARD_INFO(SENSOR_5M_NAME, 0x3D),
    .platform_data = &yuv5_sensor_data,
},
......
#define SENSOR_5M_NAME "mt9d111"
......

static struct i2c_driver sensor_i2c_driver = {
    .driver = {
        .name = SENSOR_5M_NAME,
        .owner = THIS_MODULE,
    },
    .probe = sensor_probe,
    .remove = sensor_remove,
    .id_table = sensor_id,
};

static struct miscdevice sensor_device = {
    .minor = MISC_DYNAMIC_MINOR,
    .name = SENSOR_5M_NAME,
    .fops = &sensor_fileops,
};
        The camera driver code for this example lives under /kernel/driver/media/video/tegra:
        ltc3216.c (for the LTC3216 LED driver), yuv5_sensor.c, yuv_sensor.c, and tegra_camera.c. The most fundamental of these is tegra_camera.c, which registers a platform_driver; the corresponding platform_device is described in the board code. Controllers on an SoC like this usually hang off platform_bus so that device and driver can be matched at system initialization. The driver's probe function mainly acquires resources and registers a misc device.
Platform data:
struct yuv5_sensor_platform_data yuv5_sensor_data = {
    .power_on = yuv5_sensor_power_on,
    .power_off = yuv5_sensor_power_off,
};
        The probe flow is shown in the figure below:
5 The camera HAL implementation flow
        Next we look at the camera HAL implementation flow. To realize a concrete camera, the HAL needs a hardware-specific camera library (implemented, for example, on top of the video4linux driver and a JPEG encoder, or directly on each chip vendor's proprietary library - the latter in this example; Qualcomm, for instance, provides libcamera.so and libqcamera.so). The library implements the interfaces defined by CameraHardwareInterface, calls into the vendor libraries, and drives the corresponding driver to operate the camera hardware. It is in turn called by the camera service library libcameraservice.so.
Diagram of the camera implementation files
        The files involved are shown in the figure below:
        HAL code:
vendor/nvidia/tegra/hal/libnvomxcamera:
custcamerasettingsdefinition.cpp
nvomxcameracallbacks.h
nvomxcamera.cpp
nvomxcamera.h
nvomxcamerasettingsdefinition.h
nvomxcamerasettingsparser.cpp
nvomxcamerasettingsparser.h
Interfaces implemented by the HAL
        Specific to the Tegra 2 platform: nvomxcamera.cpp, built into libcamera.so, implements the CameraHardwareInterface interface; openCameraHardware() is implemented in this library.
extern "C" sp<CameraHardwareInterface> HAL_openCameraHardware(int cameraId)
{
    LOGVV("HAL_openCameraHardware ++\n");
    return NvOmxCamera::createInstance(cameraId);
}
The main camera HAL libraries
        Under vendor/nvidia/tegra/core/drivers/openmax/il: libnvomx.so is the OMX core library. libnvodm_imager.so is the ODM imager HAL library; by default NVIDIA ships only a binary, which a full build copies into the system directory and thus into system.img. libnvodm_query.so is the ODM query library, where GPIO, power, I2C, and other hardware-related configuration is done.
        As for the front/back camera question on a pad, my view is that the upper layer tells the lower layer which camera to use; OMX then rebuilds the OMX graph each time and, when it finally enables the port, uses the other camera hardware, while the upper-layer handling stays essentially the same.
6 Adding a camera
        With the analysis done, let's try adding such a camera, concretely under the directory vendor/nvidia/tegra/odm/ventana/. The main steps for adding a camera and its driver are:
Define an identifier (GUID)
        vendor/nvidia/tegra/odm/ventana/odm_kit/query/include/nvodm_query_discovery_imager.h
        Define an identifier, for example:
#define SENSOR_YUV_GUID  NV_ODM_GUID('s', '_', 'S', 'M', 'P', 'Y', 'U', 'V')
#define SENSOR_YUV5_GUID NV_ODM_GUID('s', '_', 'S', 'M', 'P', 'Y', 'U', '5')
Configure the camera hardware connection parameters
        odm_kit/query/subboards/nvodm_query_discovery_pm275_addresses.h
        Configure the camera's hardware connections, for example:
#define QQ1234_PINS (NVODM_CAMERA_DEVICE_IS_DEFAULT)
static const NvOdmIoAddress s_ffaImagerQQ1234Addresses[] =
{
    /* I2C config */
    /* Reset GPIO config */
    /* powerdown GPIO config */
    /* Camera VDD config */
    /* VCSI config */
    /* Video input config */
    /* external clock (CSUS) config */
};

#define OV5650_PINS (NVODM_CAMERA_SERIAL_CSI_D1A | \
                     NVODM_CAMERA_DEVICE_IS_DEFAULT)
static const NvOdmIoAddress s_ffaImagerOV5650Addresses[] =
{
    { NvOdmIoModule_I2c, 0x0, 0x6C },
    { NvOdmIoModule_VideoInput, 0x00, OV5650_PINS },
    { NvOdmIoModule_ExternalClock, 2, 0 }
};
        In this article:
static const NvOdmIoAddress s_ffaImagerSensorYUVAddresses[] =
{
    { NvOdmIoModule_ExternalClock, 2, 0 }
};

static const NvOdmIoAddress s_ffaImagerSensorYUV5Addresses[] =
{
    { NvOdmIoModule_ExternalClock, 2, 0 }
};
Associate the camera device addresses with the GUID
        odm_kit/query/subboards/nvodm_query_discovery_pm275_peripherals.h
        Associate the camera device's address table with its GUID:
#if CONFIG_I_LOVE_XX
{
    SENSOR_YUV5_GUID,
    s_ffaImagerSensorYUV5Addresses,
    NV_ARRAY_SIZE(s_ffaImagerSensorYUV5Addresses),
    NvOdmPeripheralClass_Imager
},

{
    SENSOR_YUV_GUID,
    s_ffaImagerSensorYUVAddresses,
    NV_ARRAY_SIZE(s_ffaImagerSensorYUVAddresses),
    NvOdmPeripheralClass_Imager
},
Add the files to Android.mk
        vendor/nvidia/tegra/odm/template/odm_kit/adaptations/imager/Android.mk
LOCAL_SRC_FILES += sensor_yuv.c
LOCAL_SRC_FILES += sensor_yuv5.c
Add the camera type enumerated by the HAL
        vendor/nvidia/tegra/odm/template/odm_kit/adaptations/imager/imager_hal.c
        Add the camera types the HAL will enumerate:
#include "sensor_yuv.h"
#include "sensor_yuv5.h"

DeviceHalTable g_SensorHalTable[] =
{
    ....
    {SENSOR_YUV_GUID,  SensorYuv_GetHal},
    {SENSOR_YUV5_GUID, SensorYuv5_GetHal},
    ....
};
Implement the camera device configuration and functionality
        vendor/nvidia/tegra/odm/template/odm_kit/adaptations/imager/sensor_yuv.c
        vendor/nvidia/tegra/odm/template/odm_kit/adaptations/imager/sensor_yuv5.c
        NvBool SensorYuv_GetHal(NvOdmImagerHandle hImager);
        These are the files that implement the camera device's configuration and functionality. Work such as hardware calibration mostly comes down to modifying sensor_yuv.c (the front camera) and sensor_yuv5.c (the back camera).
NvBool SensorYuv_GetHal(NvOdmImagerHandle hImager)
{
    if (!hImager || !hImager->pSensor)
        return NV_FALSE;

    hImager->pSensor->pfnOpen = SensorYuv_Open;
    hImager->pSensor->pfnClose = SensorYuv_Close;
    hImager->pSensor->pfnGetCapabilities = SensorYuv_GetCapabilities;
    hImager->pSensor->pfnListModes = SensorYuv_ListModes;
    hImager->pSensor->pfnSetMode = SensorYuv_SetMode;
    hImager->pSensor->pfnSetPowerLevel = SensorYuv_SetPowerLevel;
    hImager->pSensor->pfnGetPowerLevel = SensorYuv_GetPowerLevel;
    hImager->pSensor->pfnSetParameter = SensorYuv_SetParameter;
    hImager->pSensor->pfnGetParameter = SensorYuv_GetParameter;
    return NV_TRUE;
}
7 Debugging during camera driver bring-up
        The code has been analyzed and the camera added - but was it added correctly? Are there bugs in the code? We still have to debug the camera, so to finish, here is the camera driver debugging process.
First, check the camera's circuit connections against the schematic;
Measure the camera's power pins with a multimeter to verify the supply is good and decide whether the program needs to control the power;
Check the camera's spec document for whether the PWDN and RESET pins toggle correctly and whether the program needs to drive them;
Find the device's I2C address in the camera's datasheet and check that the configured I2C address is correct;
Check that I2C communication works, i.e. that reads and writes succeed; scope the SCL and SDA waveforms - both sit high when idle, and during a transfer SCL carries the I2C clock and SDA the I2C data;
Scope the camera's MCLK pin; when MCLK is good, PCLK should normally show a waveform too;
Check that the camera's initialization register list is configured correctly.
Use dynamic debug to pull the log output of specific modules and analyze whether the driver works OK;
Exercise the interfaces the driver exposes to analyze and debug.
        That concludes the analysis of the camera driver and its whole implementation process on Android.
Using Android binary package with Eclipse & OpenCV v2.4.0 documentation
Using Android binary package with Eclipse
This tutorial was tested using Ubuntu 10.04 and Windows 7 SP1 operating systems. Nevertheless, it should also work on any other OS supported by the Android SDK (including Mac OS X). If you encounter errors after following the steps described here, feel free to contact us via the android-opencv discussion group and we will try to help you.
Quick environment setup for Android development
If you are making a clean environment installation then you can try Tegra Android Development Pack (TADP) released by NVIDIA:
It covers the whole environment setup automatically, and you can go to the next step right after the automatic setup.
If you are a beginner in Android development, then we recommend you start with TADP.
NVIDIA's Tegra Android Development Pack includes some special features for Tegra, but it is not just for Tegra devices.
You need at least 1.6 GB of free disk space for the installation.
TADP will download Android SDK platforms and Android NDK from Google’s server, so you need an Internet connection for the installation.
TADP may ask you to flash your development kit at the end of the installation process. Just skip this step if you do not have one.
(UNIX) TADP will ask you for root in the middle of the installation, so you need to be a member of the sudo group.
Manual environment setup for Android Development
You need the following tools to be installed:
and download installer for your OS.
Here is a detailed JDK installation guide for Ubuntu and Mac OS:
(only JDK sections are applicable for OpenCV)
OpenJDK is not usable for Android development because Android SDK supports only Sun JDK.
If you use Ubuntu, after installation of Sun JDK you should run the following command to set Sun java environment:
sudo update-java-alternatives --set java-6-sun
Android SDK
Get the latest Android SDK from
Here is Google’s install guide for SDK
If you choose the SDK packed into a Windows installer, then you should have a 32-bit JRE installed. It is not needed for Android development, but the installer is an x86 application and requires a 32-bit Java runtime.
If you are running x64 version of Ubuntu Linux, then you need ia32 shared libraries for use on amd64 and ia64 systems to be installed. You can install them with the following command:
sudo apt-get install ia32-libs
For Red Hat based systems the following command might be helpful:
sudo yum install libXtst.i386
Android SDK components
You need the following SDK components to be installed:
Android SDK Tools, revision 12 or newer
Older revisions should also work, but they are not recommended.
SDK Platform Android 2.2, API 8, revision 2 (also known as android-8)
This is the minimal platform supported by the OpenCV Java API, and it is set as the default for the OpenCV distribution. It is possible to use a newer platform with the OpenCV package, but that requires editing the OpenCV project settings.
for help with installing/updating SDK components.
Eclipse IDE
document for a list of Eclipse versions that are compatible with the Android SDK.
For OpenCV 2.4.0 we recommend Eclipse 3.6 (Helios) or later versions. They work well for OpenCV under both Windows and Linux.
If you have no Eclipse installed, you can download it from this location:
ADT plugin for Eclipse
This instruction is copied from
. Please, visit that page if you have any troubles with ADT plugin installation.
Assuming that you have Eclipse IDE installed, as described above, follow these steps to download and install the ADT plugin:
Start Eclipse, then select Help -> Install New Software...
Click Add (in the top-right corner).
In the Add Repository dialog that appears, enter “ADT Plugin” for the Name and the following URL for the Location:
If you have trouble acquiring the plugin, try using “http” in the Location URL, instead of “https” (https is preferred for security reasons).
In the Available Software dialog, select the checkbox next to Developer Tools and click Next.
In the next window, you’ll see a list of the tools to be downloaded. Click Next.
Read and accept the license agreements, then click Finish.
If you get a security warning saying that the authenticity or validity of the software can’t be established, click OK.
When the installation completes, restart Eclipse.
Get the OpenCV package for Android development
and download the latest available version. Currently it is
Create new folder for Android+OpenCV development.
Better to use a path without spaces in it. Otherwise you will probably have problems with ndk-build.
Unpack the OpenCV package into that dir.
You can unpack it using any popular archiver (for example with ):
On Unix you can also use the following command:
tar -jxvf ~/Downloads/OpenCV-2.4.0-android-bin.tar.bz2
For this tutorial I have unpacked OpenCV to the C:\Work\android-opencv\ directory.
Open OpenCV library and samples in Eclipse
Start the Eclipse and choose your workspace location.
I recommend starting to familiarize yourself with OpenCV for Android from a new clean workspace. So I have chosen my OpenCV package directory for the new workspace:
Configure your ADT plugin
ADT plugin settings are workspace-dependent, so you have to repeat this step each time you create a new workspace.
Once you have created a new workspace, you have to point the ADT plugin to the Android SDK directory. This setting is stored in the workspace metadata; as a result, this step is required each time you create a new workspace for Android development. See
document for the original instructions from Google.
Select Window -> Preferences... to open the Preferences panel (Mac OS X: Eclipse -> Preferences):
Select Android from the left panel.
You may see a dialog asking whether you want to send usage statistics to Google. If so, make your choice and click Proceed. You cannot continue with this procedure until you click Proceed.
For the SDK Location in the main panel, click Browse... and locate your Android SDK directory.
Click the Apply button at the bottom-right corner of the main panel:
Click OK to close the preferences dialog.
Import OpenCV and samples into workspace.
OpenCV library is packed as a ready-for-use . You can simply reference it in your projects.
Each sample included into OpenCV-2.4.0-android-bin.tar.bz2 is a regular Android project that already references OpenCV library.
Follow next steps to import OpenCV and samples into workspace:
Right click on the Package Explorer window and choose Import... option from the context menu:
In the main panel select General ? Existing Projects into Workspace and press Next button:
For the Select root directory in the main panel locate your OpenCV package folder. (If you have created workspace in the package directory, then just click Browse... button and instantly close directory choosing dialog with OK button!) Eclipse should automatically locate OpenCV library and samples:
Click Finish button to complete the import operation.
After clicking the Finish button, Eclipse will load all selected projects into the workspace. And... will indicate numerous errors:
However, all these errors are only false alarms!
To help Eclipse understand that there are no errors, select the OpenCV library in Package Explorer (left mouse click) and press F5 on your keyboard. Then select any sample (except the first samples in Tutorial Base and Tutorial Advanced) and press F5 again.
After this manipulation Eclipse will rebuild your workspace and error icons will disappear one after another:
Once Eclipse completes the build, you will have a clean workspace without any build errors:
If you are importing only OpenCV library without samples then instead of second refresh command (F5) you might need to make Android Tools ? Fix Project Properties from project context menu.
Running OpenCV Samples
At this point you should be able to build and run all samples except two from the Advanced tutorial (these samples require the Android NDK to build working applications; see the next tutorial
to learn how to compile them).
Also note that only Tutorial 1 Basic - 0. Android Camera and Tutorial 1 Basic - 1. Add OpenCV are able to run on the Emulator from the Android SDK. The other samples use the OpenCV Native Camera, which does not work with the emulator.
The latest Android SDK tools, revision 12, can run ARMv7 OS images, but Google does not provide such images with the SDK.
Well, running samples from Eclipse is very simple:
Connect your device with adb tool from Android SDK or create Emulator with camera support.
document for help with Android Emulator.
for help with real devices (not emulators).
Select the project you want to start in Package Explorer and just press Ctrl + F11, or select Run -> Run from the main menu, or click the Run button on the toolbar.
Android Emulator can take several minutes to start. So, please, be patient.
On the first run Eclipse will ask you how to run your application:
Select the Android Application option and click OK button. Eclipse will install and run the sample.
Here is Tutorial 1 Basic - 1. Add OpenCV sample detecting edges using Canny algorithm from OpenCV:
How to use OpenCV library project in your application
If you already have an Android application, you can add a reference to OpenCV and import all its functionality.
First of all you need to have both projects (your app and OpenCV) in a single workspace.
So, open workspace with your application and import the OpenCV project into your workspace as stated above.
Add a reference to OpenCV project.
Right-click on your app in Package Explorer, go to Properties -> Android -> Library -> Add,
and choose the OpenCV library project.
What's next?
tutorial to learn how to add native OpenCV code to your Android project.
Help and Feedback
You did not find what you were looking for?
Ask a question in the .
If you think something is missing or wrong in the documentation,
please file a .