
esp32-camera's Introduction

ESP32 Camera Driver


General Information

This repository hosts a driver for image sensors, compatible with ESP32-series SoCs. Additionally, it provides a few tools which allow converting the captured frame data to the more common BMP and JPEG formats.

Supported SoCs

  • ESP32
  • ESP32-S2
  • ESP32-S3

Supported Sensors

| Model    | Max Resolution | Color Type | Output Formats                                                                   | Lens Size |
|----------|----------------|------------|----------------------------------------------------------------------------------|-----------|
| OV2640   | 1600 x 1200    | color      | YUV(422/420)/YCbCr422, RGB565/555, 8-bit compressed data, 8/10-bit raw RGB data    | 1/4"      |
| OV3660   | 2048 x 1536    | color      | raw RGB data, RGB565/555/444, CCIR656, YCbCr422, compression                       | 1/5"      |
| OV5640   | 2592 x 1944    | color      | RAW RGB, RGB565/555/444, CCIR656, YUV422/420, YCbCr422, compression                | 1/4"      |
| OV7670   | 640 x 480      | color      | raw Bayer RGB, processed Bayer RGB, YUV/YCbCr422, GRB422, RGB565/555               | 1/6"      |
| OV7725   | 640 x 480      | color      | raw RGB, GRB422, RGB565/555/444, YCbCr422                                          | 1/4"      |
| NT99141  | 1280 x 720     | color      | YCbCr422, RGB565/555/444, raw, CCIR656, JPEG compression                           | 1/4"      |
| GC032A   | 640 x 480      | color      | YUV/YCbCr422, RAW Bayer, RGB565                                                    | 1/10"     |
| GC0308   | 640 x 480      | color      | YUV/YCbCr422, RAW Bayer, RGB565, grayscale                                         | 1/6.5"    |
| GC2145   | 1600 x 1200    | color      | YUV/YCbCr422, RAW Bayer, RGB565                                                    | 1/5"      |
| BF3005   | 640 x 480      | color      | YUV/YCbCr422, RAW Bayer, RGB565                                                    | 1/4"      |
| BF20A6   | 640 x 480      | color      | YUV/YCbCr422, RAW Bayer, Y only                                                    | 1/10"     |
| SC101IOT | 1280 x 720     | color      | YUV/YCbCr422, raw RGB                                                              | 1/4.2"    |
| SC030IOT | 640 x 480      | color      | YUV/YCbCr422, RAW Bayer                                                            | 1/6.5"    |
| SC031GS  | 640 x 480      | monochrome | RAW MONO, grayscale                                                                | 1/6"      |

Important to Remember

  • Except when using CIF or a lower resolution with JPEG, the driver requires PSRAM to be installed and activated.
  • Using YUV or RGB puts a lot of strain on the chip because writing to PSRAM is not particularly fast. The result is that image data might be missing. This is particularly true if WiFi is enabled. If you need RGB data, it is recommended to capture JPEG and then convert it to RGB using fmt2rgb888 or fmt2bmp/frame2bmp (see the sketch after this list).
  • When one frame buffer is used, the driver waits for the current frame to finish (VSYNC) and starts I2S DMA. After the frame is acquired, I2S is stopped and the frame buffer is returned to the application. This approach gives more control over the system, but results in a longer time to get the frame.
  • When two or more frame buffers are used, I2S runs in continuous mode and each frame is pushed to a queue that the application can access. This approach puts more strain on the CPU/memory, but allows for double the frame rate. Please use it only with JPEG.
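A minimal sketch of that JPEG-to-RGB path (fmt2rgb888 comes from this component's img_converters.h; the buffer sizing, PSRAM allocation, and error handling here are illustrative assumptions, not an official example):

// Capture a JPEG frame and convert it to RGB888 (3 bytes per pixel).
#include "esp_camera.h"
#include "esp_heap_caps.h"
#include "img_converters.h"

static uint8_t *capture_as_rgb888(size_t *out_len)
{
    camera_fb_t *fb = esp_camera_fb_get();              // frame in PIXFORMAT_JPEG
    if (!fb) {
        *out_len = 0;
        return NULL;
    }
    size_t rgb_len = fb->width * fb->height * 3;        // RGB888 output size
    uint8_t *rgb = heap_caps_malloc(rgb_len, MALLOC_CAP_SPIRAM);
    bool ok = rgb && fmt2rgb888(fb->buf, fb->len, fb->format, rgb);
    esp_camera_fb_return(fb);                           // give the buffer back to the driver
    if (!ok) {
        free(rgb);
        *out_len = 0;
        return NULL;
    }
    *out_len = rgb_len;
    return rgb;                                         // caller frees with free()
}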

Installation Instructions

Using with ESP-IDF

  • Add a dependency on the espressif/esp32-camera component:
    idf.py add-dependency "espressif/esp32-camera"
    (or add it manually in the idf_component.yml of your project)
  • Enable PSRAM in menuconfig (also set the Flash and PSRAM frequencies to 80 MHz)
  • Include esp_camera.h in your code

These instructions also work for PlatformIO, if you are using framework=espidf.

Using with Arduino

Arduino IDE

If you are using the arduino-esp32 core in Arduino IDE, no installation is needed! You can use esp32-camera right away.

PlatformIO

The easy way: in the env section of platformio.ini, add the following:

[env]
lib_deps =
  esp32-camera

Now esp_camera.h is available to be included:

#include "esp_camera.h"

Enable PSRAM in menuconfig or set it directly in sdkconfig. Check the official documentation for more info.

CONFIG_ESP32_SPIRAM_SUPPORT=y

Examples

This component comes with a basic example illustrating how to get frames from the camera. You can try out the example using the following command:

idf.py create-project-from-example "espressif/esp32-camera:camera_example"

This command will download the example into the camera_example directory. It comes pre-configured with the correct settings in menuconfig.

Initialization

#include "esp_camera.h"

//WROVER-KIT PIN Map
#define CAM_PIN_PWDN    -1 //power down is not used
#define CAM_PIN_RESET   -1 //software reset will be performed
#define CAM_PIN_XCLK    21
#define CAM_PIN_SIOD    26
#define CAM_PIN_SIOC    27

#define CAM_PIN_D7      35
#define CAM_PIN_D6      34
#define CAM_PIN_D5      39
#define CAM_PIN_D4      36
#define CAM_PIN_D3      19
#define CAM_PIN_D2      18
#define CAM_PIN_D1       5
#define CAM_PIN_D0       4
#define CAM_PIN_VSYNC   25
#define CAM_PIN_HREF    23
#define CAM_PIN_PCLK    22

static camera_config_t camera_config = {
    .pin_pwdn  = CAM_PIN_PWDN,
    .pin_reset = CAM_PIN_RESET,
    .pin_xclk = CAM_PIN_XCLK,
    .pin_sccb_sda = CAM_PIN_SIOD,
    .pin_sccb_scl = CAM_PIN_SIOC,

    .pin_d7 = CAM_PIN_D7,
    .pin_d6 = CAM_PIN_D6,
    .pin_d5 = CAM_PIN_D5,
    .pin_d4 = CAM_PIN_D4,
    .pin_d3 = CAM_PIN_D3,
    .pin_d2 = CAM_PIN_D2,
    .pin_d1 = CAM_PIN_D1,
    .pin_d0 = CAM_PIN_D0,
    .pin_vsync = CAM_PIN_VSYNC,
    .pin_href = CAM_PIN_HREF,
    .pin_pclk = CAM_PIN_PCLK,

    .xclk_freq_hz = 20000000,//EXPERIMENTAL: Set to 16MHz on ESP32-S2 or ESP32-S3 to enable EDMA mode
    .ledc_timer = LEDC_TIMER_0,
    .ledc_channel = LEDC_CHANNEL_0,

    .pixel_format = PIXFORMAT_JPEG,//YUV422,GRAYSCALE,RGB565,JPEG
    .frame_size = FRAMESIZE_UXGA,//QQVGA-UXGA, For ESP32, do not use sizes above QVGA when not JPEG. The performance of the ESP32-S series has improved a lot, but JPEG mode always gives better frame rates.

    .jpeg_quality = 12, //0-63, for OV series camera sensors, lower number means higher quality
    .fb_count = 1, //When jpeg mode is used, if fb_count more than one, the driver will work in continuous mode.
    .grab_mode = CAMERA_GRAB_WHEN_EMPTY, //or CAMERA_GRAB_LATEST. Sets when buffers should be filled
};

esp_err_t camera_init(){
    //power up the camera if PWDN pin is defined
    if(CAM_PIN_PWDN != -1){
        pinMode(CAM_PIN_PWDN, OUTPUT);
        digitalWrite(CAM_PIN_PWDN, LOW);
    }

    //initialize the camera
    esp_err_t err = esp_camera_init(&camera_config);
    if (err != ESP_OK) {
        ESP_LOGE(TAG, "Camera Init Failed");
        return err;
    }

    return ESP_OK;
}

esp_err_t camera_capture(){
    //acquire a frame
    camera_fb_t * fb = esp_camera_fb_get();
    if (!fb) {
        ESP_LOGE(TAG, "Camera Capture Failed");
        return ESP_FAIL;
    }
    //replace this with your own function
    process_image(fb->width, fb->height, fb->format, fb->buf, fb->len);
  
    //return the frame buffer back to the driver for reuse
    esp_camera_fb_return(fb);
    return ESP_OK;
}
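A minimal sketch of how the two functions above might be wired together (the loop and the one-second delay are arbitrary choices, not part of the driver's API):

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

void app_main(void)
{
    if (camera_init() != ESP_OK) {
        return;                             // nothing to do without a camera
    }
    while (true) {
        camera_capture();                   // grab one frame and hand it to process_image()
        vTaskDelay(pdMS_TO_TICKS(1000));    // roughly one capture per second
    }
}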

JPEG HTTP Capture

#include "esp_camera.h"
#include "esp_http_server.h"
#include "esp_timer.h"

typedef struct {
        httpd_req_t *req;
        size_t len;
} jpg_chunking_t;

static size_t jpg_encode_stream(void * arg, size_t index, const void* data, size_t len){
    jpg_chunking_t *j = (jpg_chunking_t *)arg;
    if(!index){
        j->len = 0;
    }
    if(httpd_resp_send_chunk(j->req, (const char *)data, len) != ESP_OK){
        return 0;
    }
    j->len += len;
    return len;
}

esp_err_t jpg_httpd_handler(httpd_req_t *req){
    camera_fb_t * fb = NULL;
    esp_err_t res = ESP_OK;
    size_t fb_len = 0;
    int64_t fr_start = esp_timer_get_time();

    fb = esp_camera_fb_get();
    if (!fb) {
        ESP_LOGE(TAG, "Camera capture failed");
        httpd_resp_send_500(req);
        return ESP_FAIL;
    }
    res = httpd_resp_set_type(req, "image/jpeg");
    if(res == ESP_OK){
        res = httpd_resp_set_hdr(req, "Content-Disposition", "inline; filename=capture.jpg");
    }

    if(res == ESP_OK){
        if(fb->format == PIXFORMAT_JPEG){
            fb_len = fb->len;
            res = httpd_resp_send(req, (const char *)fb->buf, fb->len);
        } else {
            jpg_chunking_t jchunk = {req, 0};
            res = frame2jpg_cb(fb, 80, jpg_encode_stream, &jchunk)?ESP_OK:ESP_FAIL;
            httpd_resp_send_chunk(req, NULL, 0);
            fb_len = jchunk.len;
        }
    }
    esp_camera_fb_return(fb);
    int64_t fr_end = esp_timer_get_time();
    ESP_LOGI(TAG, "JPG: %uKB %ums", (uint32_t)(fb_len/1024), (uint32_t)((fr_end - fr_start)/1000));
    return res;
}
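To actually serve this handler, it has to be registered with an esp_http_server instance. A minimal sketch (the "/capture" URI and the helper name are arbitrary choices):

static httpd_handle_t start_capture_server(void)
{
    httpd_handle_t server = NULL;
    httpd_config_t config = HTTPD_DEFAULT_CONFIG();   // default port 80
    if (httpd_start(&server, &config) == ESP_OK) {
        httpd_uri_t capture_uri = {
            .uri      = "/capture",
            .method   = HTTP_GET,
            .handler  = jpg_httpd_handler,
            .user_ctx = NULL
        };
        httpd_register_uri_handler(server, &capture_uri);
    }
    return server;
}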

JPEG HTTP Stream

#include "esp_camera.h"
#include "esp_http_server.h"
#include "esp_timer.h"

#define PART_BOUNDARY "123456789000000000000987654321"
static const char* _STREAM_CONTENT_TYPE = "multipart/x-mixed-replace;boundary=" PART_BOUNDARY;
static const char* _STREAM_BOUNDARY = "\r\n--" PART_BOUNDARY "\r\n";
static const char* _STREAM_PART = "Content-Type: image/jpeg\r\nContent-Length: %u\r\n\r\n";

esp_err_t jpg_stream_httpd_handler(httpd_req_t *req){
    camera_fb_t * fb = NULL;
    esp_err_t res = ESP_OK;
    size_t _jpg_buf_len;
    uint8_t * _jpg_buf;
    char part_buf[64]; // scratch buffer for the multipart header line
    static int64_t last_frame = 0;
    if(!last_frame) {
        last_frame = esp_timer_get_time();
    }

    res = httpd_resp_set_type(req, _STREAM_CONTENT_TYPE);
    if(res != ESP_OK){
        return res;
    }

    while(true){
        fb = esp_camera_fb_get();
        if (!fb) {
            ESP_LOGE(TAG, "Camera capture failed");
            res = ESP_FAIL;
            break;
        }
        if(fb->format != PIXFORMAT_JPEG){
            bool jpeg_converted = frame2jpg(fb, 80, &_jpg_buf, &_jpg_buf_len);
            if(!jpeg_converted){
                ESP_LOGE(TAG, "JPEG compression failed");
                esp_camera_fb_return(fb);
                res = ESP_FAIL;
                break; // fb is already returned and _jpg_buf was never allocated
            }
        } else {
            _jpg_buf_len = fb->len;
            _jpg_buf = fb->buf;
        }

        if(res == ESP_OK){
            res = httpd_resp_send_chunk(req, _STREAM_BOUNDARY, strlen(_STREAM_BOUNDARY));
        }
        if(res == ESP_OK){
            size_t hlen = snprintf((char *)part_buf, 64, _STREAM_PART, _jpg_buf_len);

            res = httpd_resp_send_chunk(req, (const char *)part_buf, hlen);
        }
        if(res == ESP_OK){
            res = httpd_resp_send_chunk(req, (const char *)_jpg_buf, _jpg_buf_len);
        }
        if(fb->format != PIXFORMAT_JPEG){
            free(_jpg_buf);
        }
        esp_camera_fb_return(fb);
        if(res != ESP_OK){
            break;
        }
        int64_t fr_end = esp_timer_get_time();
        int64_t frame_time = fr_end - last_frame;
        last_frame = fr_end;
        frame_time /= 1000;
        ESP_LOGI(TAG, "MJPG: %uKB %ums (%.1ffps)",
            (uint32_t)(_jpg_buf_len/1024),
            (uint32_t)frame_time, 1000.0 / (uint32_t)frame_time);
    }

    last_frame = 0;
    return res;
}
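Registration works the same way as for the capture handler. Note that this handler loops until the client disconnects, which is why the Arduino example serves the stream from a second httpd instance on its own port (81 in the logs quoted later on this page). A minimal sketch, with the port, ctrl port, and URI as arbitrary choices:

static httpd_handle_t start_stream_server(void)
{
    httpd_handle_t server = NULL;
    httpd_config_t config = HTTPD_DEFAULT_CONFIG();
    config.server_port = 81;                 // keep the stream off the main web server
    config.ctrl_port   = 32769;              // must differ from the other instance's ctrl port
    if (httpd_start(&server, &config) == ESP_OK) {
        httpd_uri_t stream_uri = {
            .uri      = "/stream",
            .method   = HTTP_GET,
            .handler  = jpg_stream_httpd_handler,
            .user_ctx = NULL
        };
        httpd_register_uri_handler(server, &stream_uri);
    }
    return server;
}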

BMP HTTP Capture

#include "esp_camera.h"
#include "esp_http_server.h"
#include "esp_timer.h"

esp_err_t bmp_httpd_handler(httpd_req_t *req){
    camera_fb_t * fb = NULL;
    esp_err_t res = ESP_OK;
    int64_t fr_start = esp_timer_get_time();

    fb = esp_camera_fb_get();
    if (!fb) {
        ESP_LOGE(TAG, "Camera capture failed");
        httpd_resp_send_500(req);
        return ESP_FAIL;
    }

    uint8_t * buf = NULL;
    size_t buf_len = 0;
    bool converted = frame2bmp(fb, &buf, &buf_len);
    esp_camera_fb_return(fb);
    if(!converted){
        ESP_LOGE(TAG, "BMP conversion failed");
        httpd_resp_send_500(req);
        return ESP_FAIL;
    }

    res = httpd_resp_set_type(req, "image/x-windows-bmp")
       || httpd_resp_set_hdr(req, "Content-Disposition", "inline; filename=capture.bmp")
       || httpd_resp_send(req, (const char *)buf, buf_len);
    free(buf);
    int64_t fr_end = esp_timer_get_time();
    ESP_LOGI(TAG, "BMP: %uKB %ums", (uint32_t)(buf_len/1024), (uint32_t)((fr_end - fr_start)/1000));
    return res;
}


esp32-camera's Issues

JBIB in jpeg's header

I print the hexdump for every jpeg frame:

FF D8 FF E4 00 10 4A 42  49 42 00 01 01 01 00 00  |  ......JBIB......
00 00 00 00 FF DB 00 43  00 08 08 09 0B 09 08 08  |  .......C........

I'm using a low resolution (in this case CIF, but I obtain the same result at VGA and SVGA). This is on an M5Stack camera device.

Reconnect after loss of wifi

The camera will not reconnect after it gets disconnected from the router. I rebooted my router and the last message from the camera is shown below; it just sits there. It may also be helpful to add a hostname option, so users can set it in case they have more than one camera.
I think we need to add something like this:

if (WiFi.status() == 6) // WL_DISCONNECTED
{
    ESP.restart();      // ESP.reset() is ESP8266-only; use restart() on ESP32
}

or

if (WiFi.status() == 0) // 0 being the disconnected status number
{
    ESP.restart();
}

It just sits on [0] Disconnected!

Starting web server on port: '80'
Starting stream server on port: '9601'
Camera Ready! Use 'http://192.168.2.175' to connect , de stream zit op een andere poortkanaal 9601
stream Ready! Use 'http://192.168.2.175:9601/stream
image Ready! Use 'http://192.168.2.175/capture
websocketport:81 Use 'http://192.168.2.175/export
export.html
webSocketEvent(0, 2, ...)
[0] Connected from 192.168.2.11 url: /
{"RTC":"190331-00:48:58","ESPtimer":"17","Streamport":"9601","TLrunning":"0","SDmax":"0","SDused":"0","Flashled":"33","Led":"1","Timelapseinterval":"1","Timelapsecounter":"0","Lastfilename":"esp32-cam17"}webSocketEvent(0, 3, ...)
[0] get Text: Ba
Ba knop dedrukt
{"RTC":"190331-00:49:00","ESPtimer":"19","Streamport":"9601","TLrunning":"0","SDmax":"0","SDused":"0","Flashled":"33","Led":"1","Timelapseinterval":"1","Timelapsecounter":"0","Lastfilename":"esp32-cam17"}205
vithlengte letop 300bytes lengte voorzien
lijn493
pixformatjpg for httpd send
JPG: 201843B 572ms

webSocketEvent(0, 3, ...)
[0] get Text: Bb
Bb knop dedrukt
{"RTC":"190331-00:49:02","ESPtimer":"21","Streamport":"9601","TLrunning":"0","SDmax":"0","SDused":"0","Flashled":"33","Led":"1","Timelapseinterval":"1","Timelapsecounter":"0","Lastfilename":"esp32-cam17"}205
vithlengte letop 300bytes lengte voorzien
MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: MJPG: webSocketEvent(0, 1, ...)
[0] Disconnected!

Ai-Thinker CAM board and OV2640

I am testing the code with the Ai-Thinker CAM board using an OV2640 sensor.

The initial camera probe fails with 0x20001 error (ESP_ERR_CAMERA_NOT_DETECTED).

I have used the same hardware and PIN configuration with the @igrr example with no problems.

Have you tested the code using the OV2640 sensor?

This is the PIN config:

//Ai-Thinker CAM board PIN Map
#define CAM_PIN_PWDN    -1 //power down is not used
#define CAM_PIN_RESET   -1 //software reset will be performed
#define CAM_PIN_XCLK     0
#define CAM_PIN_SIOD    26
#define CAM_PIN_SIOC    27

#define CAM_PIN_D7      35
#define CAM_PIN_D6      34
#define CAM_PIN_D5      39
#define CAM_PIN_D4      36
#define CAM_PIN_D3      21
#define CAM_PIN_D2      19
#define CAM_PIN_D1      18
#define CAM_PIN_D0       5
#define CAM_PIN_VSYNC   25
#define CAM_PIN_HREF    23
#define CAM_PIN_PCLK    22

And the error:

E (397) camera: Enabling XCLK output                                            
E (397) camera: Initializing SSCB                                               
E (397) camera: Software resetting camera                                       
SCCB_Write [ff]=01 failed                                                       
SCCB_Write [12]=80 failed                                                       
E (407) camera: Searching for camera address                                    
E (427) camera: Camera probe failed with error 0x20001                          
E (427) wifi station: Camera Init Failed                                        
I (427) wifi station: ESP_WIFI_MODE_ST

Fps for jpeg - settings question

I have a project that needs at least a 25 fps VGA video stream for FPV purposes. I will push the data through two radios with more bandwidth than a normal WiFi connection. The compression can also be very lossy to lower the bandwidth.
Camera: OV2640

My questions are:
What frame rate can I get for VGA JPEG video? What settings or hardware mods are needed?
What is the latency for one VGA frame?

If there are other platforms better suited to this project, I'm open to suggestions. The ESP32 is just cheap and small enough even for micro quadcopters.

Less important:
Is it possible to access parts of the frame data before the frame is complete?

Using the window function of the ov2640

I would like to use the windowing functionality, but I see that it is currently commented out in the sensor code. Does this code work, or were there problems with the implementation? If there were problems, what needs to be worked out? Is this underway, and do you have an estimate of when it will be included?

Thanks! This new functionality is great, thanks for taking the next step on making the camera code usable.

from sensors/ov2640, line 136

//Functions are not needed currently
#if 0
//Set the sensor output window
int set_output_window(sensor_t *sensor, uint16_t x, uint16_t y, uint16_t width, uint16_t height)

Question about compression bound value

Good day

I would kindly like to ask how the values in the following were selected:
Are these strict values, or could there be JPEG images that require a larger frame buffer than these values allow?

int compression_ratio_bound = 1;
if (qp > 10) {
    compression_ratio_bound = 16;
} else if (qp > 5) {
    compression_ratio_bound = 10;
} else {
    compression_ratio_bound = 4;
}

Failed to initialize after reset

Hi,

I uploaded the code without any issue through FTDI programmer.
After reset I always get following error:

rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1100
load:0x40078000,len:10088
load:0x40080400,len:6380
entry 0x400806a4

SCCB_Write [ff]=01 failed

Any idea how I can fix this or what the cause is? The correct model is defined.

Thanks!

If Wire.begin before esp_camera_init system reboots

I'm trying to use the ESP32-CAM board with an I2C display in the Arduino IDE.

I added the minimal example:

Test.ino.txt

The following error is reported when I use Wire or Wire1 begin:

rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1100
load:0x40078000,len:9232
load:0x40080400,len:6400
entry 0x400806a8
camera init ...
[E][camera.c:1049] camera_probe(): Detected camera not supported.
[E][camera.c:1249] esp_camera_init(): Camera probe failed with error 0x20004
Guru Meditation Error: Core  1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC      : 0x400d13a5  PS      : 0x00060f30  A0      : 0x800d2ae2  A1      : 0x3ffb1f20  
A2      : 0x3ffc06a4  A3      : 0x00000001  A4      : 0x3f400f27  A5      : 0x80000020  
A6      : 0x00000000  A7      : 0x3ffba2c4  A8      : 0x800d13a5  A9      : 0x3ffb1f00  
A10     : 0x00000000  A11     : 0x11173ea6  A12     : 0x3ffc16f8  A13     : 0x11173ea6  
A14     : 0x3ffc06a4  A15     : 0x00000000  SAR     : 0x00000004  EXCCAUSE: 0x0000001c  
EXCVADDR: 0x0000003c  LBEG    : 0x400014fd  LEND    : 0x4000150d  LCOUNT  : 0xffffffff  

Backtrace: 0x400d13a5:0x3ffb1f20 0x400d2adf:0x3ffb1fb0 0x40089f55:0x3ffb1fd0

Rebooting...
ets Jun  8 2016 00:22:57

Any Ideas?

Support for power down modes

I am using an ESP32-CAM module, and currently the camera power consumption is relatively high when the camera is idle. I have made several (not very accurate) current measurements using my test project under several scenarios (all with WiFi up):

  1. Camera initialized, module idle: 90 mA
  2. Camera taking photo: 165 mA
  3. Camera initialized, photo taken, camera deinitialized: 55 mA
  4. Camera initialized, photo taken, power down pin set to 1: 35 mA
  5. Camera initialized, photo taken, written 0x10 to register 0x0A (Standby mode) 35 mA

As the intended use for the module is to be battery powered and the camera will be idle most of the time, I would like to be able to put it to sleep and wake it up as needed. From the measurements above, it seems I should be able to cut the idle current consumption to roughly a third, but unfortunately, when I try options 4 and 5, I am not able to make the camera work again until I power cycle it. I have tried disabling Standby mode for option 5, and reinitializing the camera for both options, without success.
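For reference, this is roughly what option 4 looks like at the GPIO level, mirroring the power-down handling in the driver code quoted further down this page (a minimal sketch with a helper name of my own; whether the sensor can be re-initialized afterwards is exactly the open question here):

#include "driver/gpio.h"

// On these modules PWDN is active high: 1 = sensor powered down, 0 = running.
static void camera_power_down(gpio_num_t pwdn_pin, bool down)
{
    gpio_config_t conf = {
        .pin_bit_mask = 1ULL << pwdn_pin,
        .mode = GPIO_MODE_OUTPUT,
    };
    gpio_config(&conf);
    gpio_set_level(pwdn_pin, down ? 1 : 0);
}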

Are power down modes supported? If not, could power down support be added?

Regards,

doragasu

I2S Camera in Master mode - question (IDFGH-1572)

Not an issue - but a question.

I have a ESP32 CAM which uses the OV2640.

All the github code samples limit the resolution because I believe most people are interested in FPS and not maximum resolution.

At a resolution of 1600x1200 RGB - that's 5.76 MB - more than the 4 MB psram can manage.

There is not a lot of technical information on the I2S component in LCD/Camera mode - so my question.

I want to be able to capture a line of data from the camera with ME controlling the PClk, Hsync, and Vsync.

Is there a way to set up DMA to acquire data from the I2S (camera master mode if such a mode exists) where the dma takes the data on pclk edge but ignores hsync, and vsync?

OR - is there another component on the ESP32 where I can capture 8 bits using DMA - where each bit is assigned to a specific GPIO pin?

My thoughts are to directly drive those two lines, along with pclk, and have the DMA capture the data as it's shifted out - That way I can control the time between lines and process them without trying to create a frame buffer to capture everything.

Any insight?

Thanks.

Joe

Further increasing frame rate?

The frame rate is already pretty good, don't get me wrong :D
I am wondering what's currently limiting the frame rate in higher resolution modes like VGA and above.
It seems like it's independent of the data rate? Would it be possible to implement a dynamic frame rate based on JPEG bandwidth? Can you point me in the right direction?

Any way to send the picture via Bluetooth?

What I was wondering is how it can capture an image and send it via Bluetooth,
for example capturing an image with it, sending it to an Android phone via Bluetooth, and saving it to the cloud through the phone's internet connection.

esp_camera_deinit() not working?

Is it possible to deinit the camera completely, or put the camera into a power saving mode?
When I init it again, I get some errors about GPIO and I2S.

If I keep the OV2640 or OV3660 running, the camera module draws about 100 mA.
At the same time, the ESP32 is busy and also draws about 100 mA.
Together they create a lot of heat.
Maybe I can try to put the ESP32 into light sleep mode, but I am not sure the camera can be brought back to life together with the ESP32.

I saw this for esp_camera_init():

  • This function detects and configures camera over I2C interface,
  • allocates framebuffer and DMA buffers,
  • initializes parallel I2S input, and sets up DMA descriptors.
  • Currently this function can only be called once and there is
  • no way to de-initialize this module.

I prefer to use the Arduino IDE, but it seems that esp_camera_deinit() is not implemented, because I got more errors when I tried to init it again.

Any suggestion?

Sorry for my poor English.
Thank you in advance.

esp_http_server.h: cannot find file

Hi all,

I am using ESP-IDF v3.1. I am trying to compile the example but it cannot find esp_http_server.h.
Also, is this example for the Arduino IDE or for the IDF?
Thank you in advance.

OV2640 camera: after JPEG vertical flip, the color is changed

Hello, I'm using an OV2640 and want to flip the image, so I call this code:

sensor_t * s = esp_camera_sensor_get();
s->set_vflip(s, true);

When I get the JPEG image, red changes to green; my red cup turned green :)
How do I deal with this?

esp_camera_fb_get hangs on semaphoreTake

Hello,

I am using the project with an OV2640 sensor. Occasionally I noticed that my camera task would hang because esp_camera_fb_get was blocked trying to take the frame_ready semaphore. I have a heavily loaded ESP32 and am using one frame buffer in JPEG mode.

frame_ready is set in camera_fb_done, which is called in dma_finish_frame, which is called in dma_filter_task (with buf_idx = SIZE_MAX). That in turn is called in i2s_stop, which (in JPEG mode) appears to only be called in vsync_isr.

By enabling some ets_printf statements and using a logic analyzer, I see that under heavy LwIP/TLS usage (on the other core), I sometimes get a bad header or the data_ready queue fills before filtering a line - setting the bad flag. This can happen at the beginning of the image in which case the dma_filtered_count remains at 0. Then since I am using 1 frame buffer, the conditional:

if(s_state->dma_filtered_count > 1 || s_state->config.fb_count > 1) { i2s_stop(&need_yield); }

at line 573 will never call i2s_stop. The bad flag is reset in dma_finish_frame which can only be called from i2s_stop so the original esp_camera_fb_get call never returns.

I'm afraid I don't understand the conditionals within vsync_isr but suspect a change is needed, at least in 1 frame buffer mode? When I naively removed the conditional so i2s_stop is always called, I no longer see a hang, even under artificially high CPU loads. I'm still not sure if the i2s_stop call should be under the dma_received_count check, since it would seem to me i2s_stop should be called anytime VSYNC goes low (signaling the end of an image).

Thank you for your help and your work on this library.

Rotation of image

Hello, could someone point me in the right direction on how to rotate the image by, let's say, 90 degrees? I can flip it horizontally or vertically, but the image from the camera is landscape when the module is mounted horizontally. I cannot find the right piece of code. I know there could be a problem with the resolution, but a square image would be acceptable in this situation.
Thanks

[non-issue] Can the ESP32-CAM play sound?

I looked at the datasheet of the ESP32-CAM. I don't see a DAC port. I want to play sound (WAV) on the ESP32-CAM. How can I do that? Can we have a DAC port on the ESP32-CAM? Thanks in advance.

[Not an issue] Saving camera input to SD card

Has anyone managed to save the camera output (either video or photo) to SD card, besides MJPEG streaming over HTTP? Or is the only way to save the camera output to use ffmpeg on another PC to capture the stream to disk?

streaming to browser

The Arduino example program prints at start:
...
Starting stream server on port: '81'
...
After starting the capture I tried to get the stream on port 81 in a browser. It waits for data, but I can't see anything. I tried Firefox, Chrome, and Edge. VLC gives up immediately.

Did I misunderstand something? Can I only get the stream in the web server window?

Thank you.

Error in to_bmp.c

Hi, I found errors in to_bmp.c, function fmt2rgb888, in YUV422 mode:

  • YUYV is 4 bytes (per 2 pixels), so you have to divide by 4, not 2 (pix_count = src_len / 4;).

  • It seems that U and V are swapped for some reason; the bytes in the array are in Y0/V/Y1/U order.
After these changes the colors are correct, at least in QQVGA mode.

My code:

} else if (format == PIXFORMAT_YUV422) {
    int i;
    uint8_t y0, y1, u, v;
    uint8_t r, g, b;
    int pix_count = src_len / 4; // YUYV = 4 bytes per 2 pixels
    for (i = 0; i < pix_count; i++) { // use pix_count instead of maxi
        y0 = *src_buf++;
        v  = *src_buf++; // v and u are swapped, unknown reason
        y1 = *src_buf++;
        u  = *src_buf++;

        yuv2rgb(y0, u, v, &r, &g, &b);
        *rgb_buf++ = b;
        *rgb_buf++ = g;
        *rgb_buf++ = r;

        yuv2rgb(y1, u, v, &r, &g, &b);
        *rgb_buf++ = b;
        *rgb_buf++ = g;
        *rgb_buf++ = r;
    }

Esp32 Camera is connected but page never shows

Hi,
Here is the result of Serial Monitor in Arduino:
rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1100
load:0x40078000,len:9232
load:0x40080400,len:6400
entry 0x400806a8

WiFi connected
Starting web server on port: '80'
Starting stream server on port: '81'
Camera Ready! Use 'http://192.168.0.111' to connect

When I enter the above IP, after about 30 seconds I get this:
The connection has timed out

The server at 192.168.0.111 is taking too long to respond.

The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

Does this code support VGA resolution?

As the code shows:

bool fmt2jpg(uint8_t *src, size_t src_len, uint16_t width, uint16_t height, pixformat_t format, uint8_t quality, uint8_t ** out, size_t * out_len)
{
    //todo: allocate proper buffer for holding JPEG data
    //this should be enough for CIF frame size
    int jpg_buf_len = 24*1024;

This indicates it only allocates a 24 KB buffer for picture encoding.
When I want to encode a 640x480 YUV picture, it says the buffer is too small.

I then changed this to 240*1024, but it did not work.

So my question is: does this project truly support VGA or larger picture encoding?

Streaming video

Hello
We are interested in the ability to transfer RTSP video using the ESP32. Is it possible to transfer video from the ESP32 to a server on the Internet? With what protocol can this be implemented? We did a very thorough search on the Internet and on your site, but we didn't find this information.
We want to develop a device that will be connected to a local WiFi network, with a camera and a relay connected to it. We need to manage this device from anywhere in the world with Internet access. Please tell me, is it possible to implement this using the ESP32?

OV2640 initializing issue

Hello,
If I set SCCB_HARDWARE_I2C to true, the SCCB_Probe() returns the address 0x3c although it should be 0x30 for the OV2640. Ultimately this results in a camera-not-supported error. If I disable SCCB_HARDWARE_I2C it fails to detect the camera. HOWEVER if I switch the pin_reset and pin_pwdn everything works.
Supposedly just because the reset is switched from 0->1 to 1->0 instead in the camera driver (lines 946-970):

    if(config->pin_pwdn >= 0) {
        ESP_LOGD(TAG, "Resetting camera by power down line");
        gpio_config_t conf = { 0 };
        conf.pin_bit_mask = 1LL << config->pin_pwdn;
        conf.mode = GPIO_MODE_OUTPUT;
        gpio_config(&conf);

        // carefull, logic is inverted compared to reset pin
        gpio_set_level(config->pin_pwdn, 1);
        vTaskDelay(10 / portTICK_PERIOD_MS);
        gpio_set_level(config->pin_pwdn, 0);
        vTaskDelay(10 / portTICK_PERIOD_MS);
    }

    if(config->pin_reset >= 0) {
        ESP_LOGD(TAG, "Resetting camera");
        gpio_config_t conf = { 0 };
        conf.pin_bit_mask = 1LL << config->pin_reset;
        conf.mode = GPIO_MODE_OUTPUT;
        gpio_config(&conf);

        gpio_set_level(config->pin_reset, 0);
        vTaskDelay(10 / portTICK_PERIOD_MS);
        gpio_set_level(config->pin_reset, 1);
        vTaskDelay(10 / portTICK_PERIOD_MS);

Is this an issue with the driver or with the ESP32-CAM board (A.I. Thinker, I guess) I am using?

Consider Arduino support?

I've got an ESP32 project that's already built in Arduino, to which I'm trying to add OV2640 camera support.

This is the simplest library I've found so far, but porting my existing project back to ESP-IDF is busywork I'd like to avoid.

Would you consider instructions / changes to allow use in Arduino?

Text overlay on image possible

Hi!
Thanks for this awesome code.
Currently I'm using it to serve a raw MJPEG stream which I can connect to via a browser. It works great.

Is there any way to overlay text on the video with this as they are doing with the facial recognition code?
I have been trying to figure it out but I'm stumped.
I'm using an Ai Thinker esp32-cam

Thanks a lot.

Read high res frames instead of video stream?

Is it possible to read single full resolution OV2640 frames and stream over network for further processing on PC (Python, OpenCV)? What maximum frame rate is possible for this mode?

esp_camera_fb_get() failed

Hi:
I try to use esp_camera_fb_get() to get the camera buffer, but it stops. Looking into the camera driver, it stops here: "xQueueReceive(s_state->fb_out, &fb, portMAX_DELAY);". I think the camera has started, because it prints "I (812) camera: i2s_run". The same camera works fine in another ESP32. The API function is below:
camera_fb_t* esp_camera_fb_get()
{
    if (s_state == NULL) {
        return NULL;
    }
    if(!I2S0.conf.rx_start) {
        if(s_state->config.fb_count > 1) {
            ESP_LOGD(TAG, "i2s_run");
        }
        i2s_run();
    }
    if(s_state->config.fb_count == 1) {
        xSemaphoreTake(s_state->frame_ready, portMAX_DELAY);
    }
    if(s_state->config.fb_count == 1) {
        return (camera_fb_t*)s_state->fb;
    }
    camera_fb_int_t * fb = NULL;
    if(s_state->fb_out) {
        printf("s_state->fb_out\r\n");
        xQueueReceive(s_state->fb_out, &fb, portMAX_DELAY);
        printf("s_state->fb_out complete\r\n");
    }
    return (camera_fb_t*)fb;
}

How to display the ESP32-CAM image on a 0.96 inch OLED (SSD1306 driver)

When I connect the ESP32-CAM to a 0.96 inch OLED (I2C, SSD1306, using the ThingPulse/esp8266-oled-ssd1306 library), I only get noise, no image. After thinking about it, I believe I need to convert the data obtained by the sensor to match the loading order of the OLED; using the sensor data directly cannot display the image on the OLED. It may also be failing for other reasons; I hope to get your help. Best wishes.

M5STACK ESP32CAM Example?

Hi,

Are you able to provide an example of how to use this library with the ESP32CAM M5STACK module?

I was able to get the module up and running with the demo at https://github.com/m5stack/esp32-cam-demo, however, this demo only enabled use of the OV2640 module at a max of 800x600 resolution as it didn't make use of the external PSRAM.

That's why I'm so interested in your demo, as a quick look through your source code showed that you have provided support for all resolutions up to the max UXGA resolution of the OV2640.

So is it possible for you to provide a simple example project or update the instructions on how to integrate with the ESP32CAM M5STACK module? I'm new to the ESP world / tooling and need a little hand holding at the start 😄.

I was able to get the https://github.com/m5stack/esp32-cam-demo project happening from just installing the dependencies and typing make menuconfig then make flash. I'd love for the instructions here to be as clear, even if we have to do a little extra work, but just a hint in the right direction!

I'm sure there are a lot of excited ESP32CAM M5STACK owners who have been waiting for a project like this, as the demo shipped with the module was somewhat disappointing with only a low resolution enabled.

Thanks for reading. This project looks very promising!

I2S Camera - Vsync question (IDFGH-1573)

In the ESP32 Hardware manual the following is stated:

When I2S is in the camera slave receiving mode, and when I2Sn_H_SYNC, I2Sn_V_SYNC and I2Sn_H_REF are held high, the master starts transmitting data, that is,

transmission_start = (I2Sn_H_SYNC == 1) && (I2Sn_V_SYNC == 1) && (I2Sn_H_ENABLE == 1)

Thus, during data transmission, these three signals should be kept at a high level. For example, if the I2Sn_V_SYNC signal of a camera is at low level during data transmission, it will be inverted when routed to the I2S module. ESP32 supports signal inversion through the GPIO matrix.

When I look at the various GitHub ESP32 camera sites, none of the code inverts the VSYNC input signal.

Am I missing something, as the OV2640 clearly has a high HSYNC and a low VSYNC during active pixel lines?

Please explain why the existing code does not invert Vsync.

Thanks.

Joe

Clarification of XCLK value and number of fb.

I would kindly like to know:
I have the camera set to vga and jpeg.
Which XCLK setting achieves double the frame rate?
Can double the frame rate only be achieved at 10 MHz XCLK, or also at 20 MHz?
Also, is there a maximum number of frame buffers that can be used, and is there an advantage to using more than 2 frame buffers?
Thank you in advance.
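For reference, the README notes earlier on this page tie the frame-rate doubling to using two or more frame buffers in JPEG mode rather than to a particular XCLK value. A minimal sketch of that configuration (the frame size here is an arbitrary choice):

// Based on the README notes: JPEG with two (or more) frame buffers runs the
// driver in continuous mode, which is what allows the higher frame rate.
camera_config_t cfg = camera_config;      // start from the pin map shown earlier
cfg.pixel_format = PIXFORMAT_JPEG;
cfg.frame_size   = FRAMESIZE_VGA;
cfg.fb_count     = 2;                     // continuous mode
cfg.grab_mode    = CAMERA_GRAB_LATEST;    // hand back the newest frame
esp_err_t err = esp_camera_init(&cfg);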

Hang at "Waiting for frame" (illegal pin map?)

I'm creating a custom board that includes both an ethernet PHY (LAN8720) and a OV2640 camera module. As a result most of the IO lines of the module are in use (one left). The OV2640 is connected using the following pin mapping:

CONFIG_D0=39
CONFIG_D1=36
CONFIG_D2=33
CONFIG_D3=34
CONFIG_D4=13
CONFIG_D5=12
CONFIG_D6=14
CONFIG_D7=15

CONFIG_XCLK=32
CONFIG_PCLK=35
CONFIG_VSYNC=16
CONFIG_HREF=17

CONFIG_SDA=4
CONFIG_SCL=2

CONFIG_RESET=5

Is this a valid pin mapping? As far as I can tell the I2S peripheral is on the GPIO matrix, so typically any GPIO can be used. Running the demo app does detect the camera, however when a frame is read the program hangs at "waiting for frame":

D (198788) camera: Waiting for positive edge on VSYNC
D (198828) camera: Got VSYNC
D (198828) camera: Waiting for frame

It seems the camera does its job (got VSYNC), but the frame is never transferred. Could it be that I'm using an illegal pin map; should PCLK be on an output-capable GPIO pin, for instance?

I just did some additional testing: i2s_run() is called, but the frame_ready semaphore is never released; the program blocks on xSemaphoreTake(s_state->frame_ready, portMAX_DELAY);. Any ideas where to look for the issue?

Support callbacks (instead of framebuffers) for bytes from the camera

Hi,

I recently wrote a little lib to serve RTSP video from your slick ESP32. However, someone asked about supporting the full resolution of the imager, which I actually think I could do if I refactor your camera code a bit. (My general thoughts are here: geeksville/Micro-RTSP#1 (comment) - any feedback/corrections would be appreciated.)

Rather than just fork your camera.c code and somewhat cruftily add the option of callbacks for parsing I2S data, I could do it carefully/cleanly and send in a pull request. Do y'all take PRs? If so, I'll send one; if not, no worries, I'll fork and use the modified driver in my project.
