weixin_39575758 · 2021-01-10 04:55

Changing camera source from onboard camera to USB camera

Hi,

Would it be possible to change the camera source from the onboard camera to a USB camera?

Kind Regards, JP

This question comes from the open-source project: dusty-nv/jetson-inference


29 replies

  • weixin_39842029 · 4 months ago

    Sorry for not explaining correctly: I am trying to feed my action camera's RTSP stream into detectnet, and the stream cannot be modified. I did not manage to get the pipeline working with rtspsrc; I was getting weird auth errors even though there is no auth on the camera, so I went for an alternative. Using v4l2loopback, I feed the RTSP stream into /dev/video0 using:

    ffmpeg -i rtsp://192.168.42.1/live -pix_fmt yuv420p -f v4l2 /dev/video0

    I can verify this works with:

    gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-raw, width=848, height=480' ! xvimagesink

    But now I am not sure how to do the rest. When I compile detectnet-camera.cpp with DEFAULT_CAMERA set to 0, I get errors, probably due to the fact that my stream is in yuv420p. How can I convert it to the right format for the rest of the pipeline?

    Here are the errors I am getting while running detectnet:

    [gstreamer] gstreamer decoder onEOS
    [gstreamer] gstreamer v4l2src0 ERROR Failed to allocate a buffer
    [gstreamer] gstreamer Debugging info: gstv4l2src.c(884): gst_v4l2src_create (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0
    [gstreamer] gstreamer v4l2src0 ERROR Internal data flow error.
    [gstreamer] gstreamer Debugging info: gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: streaming task paused, reason error (-5)

    detectnet-camera:  failed to capture frame
    detectnet-camera:  failed to convert from NV12 to RGBA
    detectNet::Detect( 0x(nil), 848, 480 ) -> invalid parameters

    thanks

  • weixin_39842029 · 4 months ago

    I managed to make it work. First, I changed the default camera width and height to those of my camera. Second, I left DEFAULT_CAMERA at -1 and modified the onboard camera pipeline a bit to fit mine, as sketched below.
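
    For anyone attempting the same thing, here is a minimal sketch of what that pipeline change inside gstCamera.cpp could look like. It is an assumption based on the ffmpeg command above; the device path, the I420 caps, and the member names mirror the repo's style but are not the poster's exact code:

    // hypothetical replacement for the onboard-camera launch string in gstCamera.cpp,
    // reading the v4l2loopback device that ffmpeg feeds with the RTSP stream
    std::ostringstream ss;
    ss << "v4l2src device=/dev/video0 ! "
       << "video/x-raw, width=(int)" << mWidth << ", height=(int)" << mHeight << ", format=(string)I420 ! "
       << "appsink name=mysink";
    mLaunchStr = ss.str();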

  • weixin_39760206 · 4 months ago

    I have my RGB webcam working using the GStreamer v4l2src source plug-in. I needed to write a CUDA kernel to convert from YUV; my fork shows how to do this.

    https://github.com/abaco-systems/jetson-inference-gv This fork's primary goal is accepting RTP uncompressed video streams as the input video source, but it works well with my webcam (Logitech C920). Set VIDEO_SRC==VIDEO_GST_V4L_SRC in config.h to enable the webcam as the input.
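
    (Presumably that setting is a preprocessor define; the exact syntax in the fork's config.h is an assumption:)

    // in config.h of the abaco fork -- hypothetical syntax
    #define VIDEO_SRC VIDEO_GST_V4L_SRC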

  • weixin_39918928 · 4 months ago

    Maybe you should do this:

    1. Change DEFAULT_CAMERA from -1 to 1 in detectnet-camera.cpp.
    2. Run v4l2-ctl -d /dev/video1 -V in a terminal to see your USB camera's width/height.
    3. Open util/camera/gstCamera.h, find

           static const uint32_t DefaultWidth = 1280;
           static const uint32_t DefaultHeight = 720;

       and change the width and height to your width/height.
    4. Remake.

  • weixin_39835965 · 4 months ago

    Hi, I am using a Logitech Camera and it works brilliantly with imagenet-camera, but it doesn't work after I stop and start the process again. I have to replug the camera to make it work. Maybe it's not closing the camera stream properly. Please help!

  • weixin_39522927 · 4 months ago

    Dusty, I know the inference works out of the box with most cameras now, except the ZED. Where is the file that we change in jetson-inference? And what is the code snippet? And did the debayering get addressed? Also, can you offer guidance to integrate the new object detection on the ZED with existing models in jetson-inference? Thank you.

  • weixin_39684967 · 4 months ago

    Hello, any news about the ZED camera? Regards, Sergiu

  • weixin_39715348 · 4 months ago

    Hello everyone, how do I connect the ZED camera with jetson-inference? Could someone give me some guidance? Thanks a million.

  • weixin_39575758 · 4 months ago

    Alexey-Kamenev: Hi Alexey, thanks for providing the modification to Dustin's code. It seems that something is still missing, as I am getting the following errors:

    failed to capture frame
    failed to convert from NV12 to RGBA

  • weixin_39542477 · 4 months ago

    dusty-nv: Hi, Dustin. Can we use OpenCV to capture video frames from a USB camera instead of the onboard camera?

  • weixin_39612038 · 4 months ago

    Yes, you can use a USB cam.

  • weixin_39542477 · 4 months ago

    Did you try it? Details, please.

  • weixin_39542477 · 4 months ago

    Maybe it only works for some USB cameras. Thanks for Alexey's modification; I tested it on my Logitech C920 USB camera and it worked fine, but it did not work on other USB cameras. I wish Dustin (dusty-nv) would make them the same interface ASAP.

  • weixin_39552768 · 4 months ago

    Sorry for not replying sooner. If it works with some USB cameras but not others, make sure that your camera supports a YUV format (YUYV, aka YUY2, etc.). Run v4l2-ctl to see the supported formats, e.g. v4l2-ctl -d /dev/video2 -V. If it does not, the quick workaround is to use the videoconvert filter with appropriate caps. I've just tested the code with the 2 USB cameras that I have: a Microsoft LifeCam NX-3000, which supports the YUYV format, so my code worked fine, and a Mobius ActionCam, which supports MJPG by default. For the Mobius I had to use the videoconvert filter, which looks like this (when running via gst-launch):

    
    gst-launch-1.0 v4l2src device="/dev/video2" name=e ! 'video/x-raw, width=640, height=480' ! videoconvert ! 'video/x-raw, width=640, height=480, format=(string)YUY2' ! xvimagesink
    

    If gst-launch works fine with your camera, make the appropriate modifications to your code.

    As I already mentioned, the better way would be to implement the DNN prediction part as a GStreamer data probe; that way you have complete control over the pipeline and can build pretty complex pipelines without modifying the code.
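
    For readers unfamiliar with data probes, here is a minimal self-contained sketch of the idea (the element/pad names are placeholders and error handling is omitted; this is not Alexey's actual code):

    #include <gst/gst.h>

    // called for every buffer flowing through the probed pad; the DNN prediction
    // step would map the frame here, without any changes to the pipeline itself
    static GstPadProbeReturn on_buffer(GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
    {
        GstBuffer* buf = GST_PAD_PROBE_INFO_BUFFER(info);
        GstMapInfo map;

        if (gst_buffer_map(buf, &map, GST_MAP_READ))
        {
            // map.data / map.size expose the raw frame -- run inference on it here
            gst_buffer_unmap(buf, &map);
        }

        return GST_PAD_PROBE_OK;   // pass the buffer downstream unchanged
    }

    // attach the probe to the sink pad of some element in the pipeline:
    //   GstPad* pad = gst_element_get_static_pad(element, "sink");
    //   gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, on_buffer, NULL, NULL);
    //   gst_object_unref(pad);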

  • weixin_40007515 · 4 months ago

    For some reason the gst-launch-1.0 command from Alexey-Kamenev works in the terminal for me, but the program has problems creating the pipeline and gives me a syntax error.

    detectnet-camera  args (1):  0 [/home/ubuntu/workspace/jetson-inference/build/aarch64/bin/detectnet-camera]

    [gstreamer] initialized gstreamer, version 1.8.2.0
    [gstreamer] gstreamer decoder pipeline string:
    v4l2src device='/dev/video1' name=e ! 'video/x-raw, width=(int)640, height=(int)480' ! videoconvert ! 'video/x-raw, width=(int)640, height=(int)480, format=(string)YUY2' ! appsink name=mysink
    [gstreamer] gstreamer decoder failed to create pipeline
    [gstreamer] (syntax error)
    [gstreamer] failed to init gstCamera

  • weixin_39552768 · 4 months ago

    Try removing the quotes from the caps string (i.e. around video/x-raw...). The quotes are only needed for the shell to parse the command line properly.

  • weixin_40007515 · 4 months ago

    Okay, I got rid of the syntax error by changing device='/dev/video1' to device=\"/dev/video1\" and removing all of the ' symbols in the following string:

    ss << "v4l2src device=\"/dev/video1\" ! video/x-raw, width=(int)" << mWidth << ", height=(int)" << mHeight
       << " ! videoconvert ! video/x-raw, width=(int)" << mWidth << ", height=(int)" << mHeight
       << ", format=(string)YUY2 ! appsink name=mysink";

    But now I am getting this error:

    
    detectnet-camera:  failed to convert from NV12 to RGBA
    detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
    [cuda]   cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f), (float4*)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
    [cuda]      invalid device pointer (error 17) (hex 0x11)
    [cuda]      /home/ubuntu/workspace/jetson-inference/detectnet-camera/detectnet-camera.cpp:266
    [cuda]   cudaGetLastError()
    [cuda]      invalid argument (error 11) (hex 0x0B)
    [cuda]      /home/ubuntu/workspace/jetson-inference/cuda/cudaYUV-YUYV.cu:97
    [cuda]   cudaYUYVToRGBAf((uchar2*)input, (float4*)mRGBA, mWidth, mHeight)
    [cuda]      invalid argument (error 11) (hex 0x0B)
    [cuda]      /home/ubuntu/workspace/jetson-inference/camera/gstCamera.cpp:74
  • weixin_40007515 · 4 months ago

    Oh, never mind. I just tested this fork and webcam support works out of the box there: https://github.com/ross-abaco/jetson-inference

  • weixin_39575758 · 4 months ago

    I am still having issues getting this code to work with the ZED Stereo Camera. With the abaco fork I get the following error: failed to convert from NV12 to RGBAf

  • weixin_39911567 · 4 months ago

    The Stereolabs ZED is RAW Bayer format, so it would need debayering.

    I've merged support for v4l2src into gstCamera. It doesn't detect Bayer yet, though, although maybe the pipeline can convert to RGB as-is.
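
    For anyone who wants to experiment with that, a hypothetical launch string using GStreamer's bayer2rgb element (from gst-plugins-bad) might look like the following; whether the ZED exposes video/x-bayer caps this way is an assumption, not something tested here:

    // hypothetical gstCamera launch string for a Bayer-format camera
    std::ostringstream ss;
    ss << "v4l2src device=/dev/video0 ! "
       << "video/x-bayer, width=(int)" << mWidth << ", height=(int)" << mHeight << " ! "
       << "bayer2rgb ! videoconvert ! video/x-raw, format=(string)RGBA ! "
       << "appsink name=mysink";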

  • weixin_39911567 · 4 months ago

    OK, I've checked in support for USB cameras. In imagenet-camera or detectnet-camera, change the DEFAULT_CAMERA define at the top from -1 (onboard camera) to the /dev/video* V4L2 index (>=0).
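
    That is, at the top of detectnet-camera.cpp (using 0 here assumes your camera is /dev/video0):

    #define DEFAULT_CAMERA 0    // -1 = onboard camera, >=0 = /dev/video<N> (V4L2)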

  • weixin_39842029 · 4 months ago

    Alexey-Kamenev, did you end up streaming the video over UDP? I am trying to use the input from an RTSP stream using GStreamer, but I am unable to make it work. Could you provide more info on what you ended up doing?

    thanks

  • weixin_39552768 · 4 months ago

    Hi, what error are you getting? In my case I'm using ROS (and GStreamer), and UDP H.265 streaming works fine. Assuming that you are connected to your TX-1 through WiFi, try using the following. On the TX-1:

    
    gst-launch-1.0 v4l2src device="/dev/video0" ! 'video/x-raw, width=352, height=288' ! videoconvert ! omxh265enc ! 'video/x-h265, stream-format=byte-stream' ! h265parse ! rtph265pay config-interval=1 ! udpsink host=10.42.0.79 port=6000
    

    On the host:

    
    gst-launch-1.0 udpsrc port=6000 ! application/x-rtp, encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! avdec_h265 ! xvimagesink
    

    Now you should see the video streaming from the TX-1 on the host. You may need to change the width/height depending on what camera you have. I'm using a similar pipeline in my code (with USB cameras), except that I have more than one sink on the TX-1.

    Update: I hope I did not misunderstand your question. You should be able to consume video stream on the host using the pipelines above. Let me know if I got your question wrong though.

  • weixin_39911567 · 4 months ago

    Hi JP, yes, in theory it is possible using the v4l2Camera class. Currently it's not totally transparent to change between V4L2 and the onboard camera, but I am working to make them the same interface. For now, there may be a specific CUDA colorspace conversion function that needs to be called for your USB V4L2 camera.

    Can you run v4l-info or a similar utility to print out the settings of your USB webcam, which should indicate the colorspace?

  • weixin_39575758 · 4 months ago

    Hi Dustin,

    Executing v4l2-ctl provides the following:

    
    ubuntu-ubuntu:~$ v4l2-ctl -V
    Format Video Capture:
             Width/Height      : 4416/1242
             Pixel Format      : 'YUYV'
             Field             : None
             Bytes per Line    : 8832
             Size Image        : 10969344
             Colorspace        : sRGB
             Transfer Function : Default
             YCbCr Encoding    : Default
             Quantization      : Default
    

    Executing v4l2-dbg provides the following:

    
    ubuntu-ubuntu:~$ v4l2-dbg -D
    Driver info:
            Driver name   : uvcvideo
            Card type     : ZED
            Bus info      : usb-tegra-xhci-2
            Driver version: 3.10.96
            Capabilities  : 0x84000001
                    Video Capture
                    Streaming
                    Device Capabilities
    


  • weixin_39911567 · 4 months ago

    OK, cool, thanks. It looks like you're using a Stereolabs ZED in YUYV mode. I've been meaning to make sure it works with the ZED out of the box and will let you know when I check in a patch.

    In the meantime, the v4l2Camera class and the YUYV->RGBA CUDA kernel exist in the repo; they're just not invoked by the current imagenet-camera program, which uses the onboard camera.

  • weixin_39552768 · 4 months ago

    Not sure if this is still relevant, but I'll share my experience anyway: instead of using the v4l2Camera class, I modified gstCamera (because gstreamer is awesome :) ) to support v4l2src and various devices (e.g. /dev/video1). You would also need to change cudaYUYVToRGBA to support the float data type; a rough sketch of that is below. In my case, USB cameras work great after these changes.
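
    To illustrate the float-conversion part, here is a sketch of what a float YUYV->RGBA CUDA kernel could look like (the kernel name, launch configuration, and BT.601-style coefficients are assumptions, not Alexey's actual code):

    // each thread unpacks one YUYV macropixel (4 bytes = 2 pixels) into two float4 RGBA pixels
    __global__ void yuyvToRgbaF(const uchar4* src, float4* dst, int width, int height)
    {
        const int x = (blockIdx.x * blockDim.x + threadIdx.x) * 2;   // 2 pixels per macropixel
        const int y =  blockIdx.y * blockDim.y + threadIdx.y;

        if (x >= width || y >= height)
            return;

        const uchar4 yuyv = src[(y * width + x) / 2];   // packed as Y0 U Y1 V
        const float  u    = yuyv.y - 128.0f;
        const float  v    = yuyv.w - 128.0f;

        for (int i = 0; i < 2; i++)
        {
            const float luma = (i == 0) ? yuyv.x : yuyv.z;
            dst[y * width + x + i] = make_float4(
                fminf(fmaxf(luma + 1.402f * v, 0.0f), 255.0f),                // R
                fminf(fmaxf(luma - 0.344f * u - 0.714f * v, 0.0f), 255.0f),   // G
                fminf(fmaxf(luma + 1.772f * u, 0.0f), 255.0f),                // B
                255.0f);                                                      // A
        }
    }

    // launch with a 2D grid covering width/2 x height threads, e.g.:
    //   dim3 block(8, 8);
    //   dim3 grid((width/2 + 7) / 8, (height + 7) / 8);
    //   yuyvToRgbaF<<<grid, block>>>(src, dst, width, height);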

  • weixin_39542477 · 4 months ago

    Alexey-Kamenev: Hi, Alexey. Could you please explain the details of modifying gstCamera? I also want to use a USB camera. Thanks!

  • weixin_39552768 · 4 months ago

    I ended up doing it the GStreamer way; that is, I created a data probe, as I have various scenarios which require different gst pipelines (e.g. streaming video via UDP or writing to a file). I plan to upload the code to GitHub once it looks a bit prettier than it does now :)

    However, I played with Dustin's code first and here is the short summary of modifications that I made: https://gist.github.com/Alexey-Kamenev/41821acaecad66de6081a4f017a07aef

    Basically, I just changed the gstreamer source and added a float version, cudaYUYVToRGBAf. There is also a cudaRGBAToYUYVf kernel, which is not needed if you use this sample (as the image is just displayed on the screen) but can be useful in case you are modifying the image and putting it back into the gst pipeline. Note that my camera is YUYV; if your camera has a different format, you can use the videoconvert filter with the proper caps for quick prototyping. I tested it on my other USB camera, which supports only MPEG format, and it worked fine. When you make a change to the gst pipeline, test it first via the command line; that's the easiest/fastest way to verify that everything is working before digging into the code.

    Hope this helps, let me know if you have any questions.

