I want to benchmark Paddle-Lite + PaddleOCR on a Jetson TX2, but CUDA does not seem to get enabled during compilation. The build log is as follows:
./lite/tools/build.sh -DLITE_WITH_CUDA=ON -DLITE_WITH_CV=ON -DLITE_BUILD_EXTRA=ON
+ readonly 'CMAKE_COMMON_OPTIONS=-DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_LITE=ON -DLITE_WITH_CUDA=OFF -DLITE_WITH_X86=OFF -DLITE_WITH_ARM=ON -DLITE_WITH_LIGHT_WEIGHT_FRAMEWORK=ON'
+ CMAKE_COMMON_OPTIONS='-DWITH_GPU=OFF -DWITH_MKL=OFF -DWITH_LITE=ON -DLITE_WITH_CUDA=OFF -DLITE_WITH_X86=OFF -DLITE_WITH_ARM=ON -DLITE_WITH_LIGHT_WEIGHT_FRAMEWORK=ON'
+ readonly NUM_PROC=4
+ NUM_PROC=4
+ BUILD_EXTRA=OFF
+ BUILD_TRAIN=OFF
+ BUILD_JAVA=ON
+ BUILD_PYTHON=OFF
++ pwd
+ BUILD_DIR=/home/enfu/SDData/howe/paddleLite/paddle-lite
+ OPTMODEL_DIR=
+ BUILD_TAILOR=OFF
+ BUILD_CV=OFF
+ WITH_LOG=ON
+ WITH_EXCEPTION=OFF
+ WITH_PROFILE=OFF
+ BUILD_NPU=OFF
++ pwd
+ NPU_DDK_ROOT=/home/enfu/SDData/howe/paddleLite/paddle-lite/ai_ddk_lib/
+ BUILD_XPU=OFF
+ BUILD_XTCL=OFF
++ pwd
+ XPU_SDK_ROOT=/home/enfu/SDData/howe/paddleLite/paddle-lite/xpu_sdk_lib/
+ BUILD_APU=OFF
++ pwd
+ APU_DDK_ROOT=/home/enfu/SDData/howe/paddleLite/paddle-lite/apu_sdk_lib/
+ BUILD_RKNPU=OFF
++ pwd
+ RKNPU_DDK_ROOT=/home/enfu/SDData/howe/paddleLite/paddle-lite/rknpu/
+ WITH_HUAWEI_ASCEND_NPU=OFF
+ HUAWEI_ASCEND_NPU_DDK_ROOT=/usr/local/Ascend/ascend-toolkit/latest/x86_64-linux_gcc4.8.5
+ PYTHON_EXECUTABLE_OPTION=
+ IOS_DEPLOYMENT_TARGET=9.0
+ readonly THIRDPARTY_TAR=https://paddle-inference-dist.bj.bcebos.com/PaddleLite/third-party-05b862.tar.gz
+ THIRDPARTY_TAR=https://paddle-inference-dist.bj.bcebos.com/PaddleLite/third-party-05b862.tar.gz
+ readonly workspace=/home/enfu/SDData/howe/paddleLite/paddle-lite
+ workspace=/home/enfu/SDData/howe/paddleLite/paddle-lite
++ uname -s
+ os_name=Linux
+ '[' Linux == Darwin ']'
+ main -DLITE_WITH_CUDA=ON -DLITE_WITH_CV=ON -DLITE_BUILD_EXTRA=ON
+ '[' -z -DLITE_WITH_CUDA=ON ']'
+ for i in "$@"
+ case $i in
+ print_usage
+ set +x
USAGE:
----------------------------------------
compile tiny publish so lib:
for android:
./build.sh --arm_os=<os> --arm_abi=<abi> --arm_lang=<lang> --android_stl=<stl> tiny_publish
for ios:
./build.sh --arm_os=<os> --arm_abi=<abi> ios
compile full publish so lib (ios not support):
./build.sh --arm_os=<os> --arm_abi=<abi> --arm_lang=<lang> --android_stl=<stl> full_publish
compile all arm tests (ios not support):
./build.sh --arm_os=<os> --arm_abi=<abi> --arm_lang=<lang> test
optional argument:
--with_log: (OFF|ON); controls whether to print log information, default is ON
--with_exception: (OFF|ON); controls whether to throw the exception when error occurs, default is OFF
--build_extra: (OFF|ON); controls whether to publish extra operators and kernels for (sequence-related model such as OCR or NLP)
--build_train: (OFF|ON); controls whether to publish training operators and kernels, build_train is only for full_publish library now
--build_python: (OFF|ON); controls whether to publish python api lib (ANDROID and IOS is not supported)
--build_java: (OFF|ON); controls whether to publish java api lib (Only ANDROID is supported)
--build_dir: directory for building
--ios_deployment_target: (default: 9.0); Set the minimum compatible system version for ios deployment.
argument choices:
--arm_os: android|ios|ios64
--arm_abi: armv8|armv7
--arm_lang: only support gcc now, clang will be supported in future.(for android)
--android_stl: c++_static|c++_shared (for android)
tasks:
tiny_publish: a small library for deployment.
full_publish: a full library for debug and test.
test: produce all the unittests.
----------------------------------------
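Looking at the trace above, I notice the line `readonly CMAKE_COMMON_OPTIONS='... -DLITE_WITH_CUDA=OFF ...'`: the script declares the variable readonly with CUDA hardwired to OFF, so I suspect that nothing passed on the command line can change it afterwards. A minimal sketch of that shell behavior (the option values here are illustrative, not the real script):

```shell
# Mirror what build.sh does: declare the cmake options readonly, CUDA off.
readonly CMAKE_COMMON_OPTIONS="-DWITH_GPU=OFF -DLITE_WITH_CUDA=OFF"

override_failed=no
# Attempt the override in a subshell so the readonly error cannot kill the script;
# the subshell exits non-zero because readonly variables cannot be reassigned.
if ! ( CMAKE_COMMON_OPTIONS="-DLITE_WITH_CUDA=ON" ) 2>/dev/null; then
    override_failed=yes
fi

echo "override_failed=$override_failed"
echo "options still: $CMAKE_COMMON_OPTIONS"
```

If that reading is right, it would explain why the `-D` flags I passed are simply ignored and the script falls through to printing its usage text.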
In short, -DLITE_WITH_CUDA is always OFF. Following a thread on the GitHub repo, I also tried the following build command:
./lite/tools/build.sh cuda
with exactly the same result. What is the correct way to build with CUDA enabled?