How Does Arduino with Camera Module Work?

14 Apr 2024

 

Arduino Team

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have noticed its wealth of on-board sensors — digital microphone, accelerometer, gyroscope, magnetometer, light, proximity, temperature, humidity and color — but also that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We’ll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

  • Arduino Nano 33 BLE Sense with headers
  • OV7670 CMOS VGA Camera Module
  • 16x female to female jumper wires
  • A microUSB cable to connect to your Arduino

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It’s not hard, but you need to take care to connect the right cables at each end. Once everything is working, you can use tape to secure the wires so none come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for Arduino Create tools. Once you install and open your environment, the camera library is available in the library manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the Arduino_OV767X library
  • Press the Install button

Now, we will use the example sketch to test the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment (remove the //) from line 48 to display a test pattern
Camera.testPattern();
  • Compile and upload to your board (a condensed version of the sketch is shown below for reference)
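For orientation, the heart of that example boils down to something like the condensed sketch below. This is a paraphrase rather than the shipped CameraCaptureRawBytes code, so the buffer size, serial speed and exact call order may differ slightly from what you see in the IDE:

#include <Arduino_OV767X.h>

// QCIF RGB565 frame: 176 x 144 pixels, 2 bytes per pixel
unsigned short pixels[176 * 144];

void setup() {
  Serial.begin(115200);  // baud rate is nominal over the Nano's USB serial
  while (!Serial);

  if (!Camera.begin(QCIF, RGB565, 1)) {
    Serial.println("Failed to initialize camera!");
    while (1);
  }

  Camera.testPattern();  // comment this out later to stream live images instead
}

void loop() {
  Camera.readFrame(pixels);                        // grab one frame into RAM
  Serial.write((uint8_t*)pixels, sizeof(pixels));  // stream the raw bytes to the viewer
}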

Your Arduino is now outputting raw image binary over serial. To view this as an image, we’ve included a Processing application that displays the camera output.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds):

If all goes well, you should see the striped test pattern!

Next we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera to the Processing viewer:

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should now have a live image:

Considerations for TinyML

The full VGA (640×480) output from our little camera is far too big for current TinyML applications. uTensor runs handwriting recognition on MNIST using 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96 images, which is more than enough. Even state-of-the-art ‘Big ML’ applications often use only 320×320 images (see the TinyML book). Also consider that an uncompressed 8-bit grayscale VGA image occupies roughly 300KB, while the Nano 33 BLE Sense has 256KB of RAM. We have to do something to reduce the image size!
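To put numbers on that, the arithmetic works out as follows (one byte per pixel for 8-bit grayscale; the 96×96 and 28×28 figures are the model inputs mentioned above):

// Uncompressed 8-bit grayscale sizes (1 byte per pixel)
constexpr size_t vgaImage    = 640UL * 480;  // 307,200 bytes (~300KB), more than the 256KB of RAM
constexpr size_t qvgaImage   = 320UL * 240;  //  76,800 bytes (~75KB)
constexpr size_t personInput =  96UL *  96;  //   9,216 bytes, person detection input
constexpr size_t mnistInput  =  28UL *  28;  //     784 bytes, MNIST input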

Camera format options

The OV7670 module supports lower resolutions through configuration options, which modify the image data before it reaches the Arduino. The resolutions currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 288
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start as it reduces the amount of time it takes to send an image from the camera to the Arduino. It reduces the size of the image data array required in your Arduino sketch as well. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1)
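For example, switching to QVGA with RGB565 means the receiving buffer must hold 320 × 240 × 2 bytes. A minimal sketch of that adjustment is shown below (the buffer name is illustrative, not necessarily the one used in the example sketch):

#include <Arduino_OV767X.h>

// QVGA RGB565: 320 x 240 pixels at 2 bytes per pixel = 153,600 bytes
unsigned short pixels[320 * 240];

void setup() {
  Serial.begin(115200);
  while (!Serial);

  // resolution, color format, frames per second
  if (!Camera.begin(QVGA, RGB565, 1)) {
    Serial.println("Failed to initialize camera!");
    while (1);
  }
}

void loop() {
  Camera.readFrame(pixels);  // fills the 153,600-byte buffer with one frame
}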

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format which has 5 bits for red, 6 bits for green, and 5 bits for blue:

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert one pixel from RGB565 to 24-bit RGB.
    // 'high' and 'low' are the two bytes of the pixel, most significant byte first.
    uint16_t pixel = (high << 8) | low;

    // Expand each channel back to 8 bits (5 bits red, 6 bits green, 5 bits blue)
    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5)  & 0x3f) << 2;
    int blue  = ((pixel >> 0)  & 0x1f) << 3;

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

Image resampling is an interesting topic in itself. We found that this downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output.
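As a simpler illustration of the idea (this is our own sketch, not the Eloquent Arduino code), block averaging shrinks a grayscale image by averaging each block of source pixels down to one destination pixel:

// Simple block-averaging downsample from a grayscale source image to a smaller
// destination image. Dimensions must divide evenly (e.g. 160x120 -> 40x30).
void downsample(const uint8_t* src, int srcW, int srcH,
                uint8_t* dst, int dstW, int dstH) {
  const int blockW = srcW / dstW;
  const int blockH = srcH / dstH;

  for (int dy = 0; dy < dstH; dy++) {
    for (int dx = 0; dx < dstW; dx++) {
      uint32_t sum = 0;
      for (int by = 0; by < blockH; by++) {
        for (int bx = 0; bx < blockW; bx++) {
          sum += src[(dy * blockH + by) * srcW + (dx * blockW + bx)];
        }
      }
      dst[dy * dstW + dx] = sum / (blockW * blockH);  // average of the block
    }
  }
}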

Applications like the TensorFlow Lite Micro person detection example, which runs a CNN-based model on the Arduino for machine vision, may not need any further preprocessing of the image — other than averaging the RGB values of each pixel to produce 8-bit grayscale data.
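A per-pixel grayscale conversion, continuing from the RGB565 unpacking shown earlier, might look like this (a plain average of the channels; weighted luma formulas are another common choice):

// Convert one RGB565 pixel (two bytes, high byte first) to 8-bit grayscale
uint8_t rgb565ToGray(uint8_t high, uint8_t low) {
  uint16_t pixel = (high << 8) | low;

  int red   = ((pixel >> 11) & 0x1f) << 3;
  int green = ((pixel >> 5)  & 0x3f) << 2;
  int blue  = ((pixel >> 0)  & 0x1f) << 3;

  return (red + green + blue) / 3;  // plain average of the three channels
}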

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
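Putting those pieces together, a normalization pass over a grayscale buffer could look like the sketch below (the buffer layout and function name are illustrative):

// Stretch an 8-bit grayscale buffer to the full 0-255 range using min/max and map.
// 'image' is a flat width*height array of grayscale pixels.
void normalizeImage(uint8_t* image, int width, int height) {
  int lower = 255, upper = 0;

  // First pass: find the actual range of input pixel values
  for (int i = 0; i < width * height; i++) {
    lower = min(lower, (int)image[i]);
    upper = max(upper, (int)image[i]);
  }

  if (upper == lower) return;  // flat image, nothing to stretch

  // Second pass: remap every pixel to the 0-255 range
  for (int i = 0; i < width * height; i++) {
    image[i] = map(image[i], lower, upper, 0, 255);
  }
}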

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!

Cameras have long dominated the electronics industry, with applications such as visitor monitoring, surveillance, and attendance systems. The cameras we use today are smart, with features earlier cameras lacked: modern digital cameras not only capture images but also extract high-level descriptions of the scene and analyse what they see. They are used extensively in robotics, artificial intelligence, and machine learning, where the captured frames are processed and applied to tasks such as number plate detection, object detection, motion detection, and facial recognition.

In this tutorial we will interface the widely used OV7670 camera module with an Arduino UNO. The OV7670 can also be interfaced with an Arduino Mega using the same pin configuration, code, and steps. The module can be tricky to wire up because it has a large number of pins, and the wiring itself matters: the choice and length of the wires can significantly affect picture quality and introduce noise.

We have already built several camera projects with different kinds of microcontrollers and IoT devices.

 

The OV7670 camera works at 3.3V, so it is important not to drive it directly from an Arduino whose GPIO pins output 5V. The OV7670 is also sold as a FIFO camera module, but in this tutorial the pictures or frames will be grabbed without using a FIFO. This tutorial uses simple steps and simplified programming to interface the OV7670 with the Arduino UNO.

 

Components Required

  • Arduino UNO
  • OV7670 Camera Module
  • Resistors (10kΩ, 4.7kΩ)
  • Jumpers

Software Required:

  • Arduino IDE
  • Serial Port Reader (To analyze Output Image)

 

Things to Remember about Camera Module OV7670

The OV7670 camera module is available from different manufacturers with different pin configurations, including FIFO versions. The OV7670 provides full-frame, windowed 8-bit images in a wide range of formats. The image array is capable of operating at up to 30 frames per second (fps) in VGA. The OV7670 includes:

  • Image Sensor Array (about 656 x 488 pixels)
  • Timing Generator
  • Analog Signal Processor
  • A/D Converters
  • Test Pattern Generator
  • Digital Signal Processor (DSP)
  • Image Scaler
  • Digital Video Port
  • LED and Strobe Flash Control Output

 

The OV7670 image sensor is controlled over the Serial Camera Control Bus (SCCB), an I2C-compatible interface (SIOC, SIOD) with a maximum clock frequency of 400kHz.

 

 

The Camera comes with handshaking signals such as:

  • VSYNC: Vertical Sync Output – Low during frame
  • HREF:  Horizontal Reference – High during active pixels of row
  • PCLK: Pixel Clock Output – Free running clock. Data is valid on rising edge

In addition to this, it has several more signals such as

  • D0-D7: 8-bit YUV/RGB Video Component Digital Output
  • PWDN: Power Down Mode Selection – Normal Mode  and Power Down Mode
  • XCLK: System Clock Input
  • Reset: Reset Signal

 

The OV7670 is clocked from a 24MHz oscillator, which gives a pixel clock (PCLK) output of 24MHz. The FIFO version of the module provides 3Mb of video frame buffer memory. The test pattern generator can produce an 8-bar color bar pattern and a fade-to-gray color bar pattern. Now let’s start programming the Arduino UNO to test the OV7670 and grab frames using the serial port reader.

 

Circuit Diagram

 

Programming Arduino UNO

The program starts by including the libraries required for the OV7670. Since the OV7670 is configured over an I2C-compatible (SCCB) interface, the sketch includes the <util/twi.h> header. The libraries used in this project are built into the Arduino IDE, so we just have to include them to get the job done.

After this, the OV7670’s registers need to be configured. The program is divided into small functions for better understanding.

 

The setup() function contains all of the initial configuration required for image capture. The first function called is arduinoUnoInut() (named as in the original code), which initializes the Arduino UNO: it disables all global interrupts and configures the communication interfaces, including the PWM clock, interrupt pin selection, prescaler selection, and the parity and stop bits.

arduinoUnoInut();

 

After configuring the Arduino, the camera has to be configured. Initializing the camera amounts to changing its register values from their defaults to our custom settings. You also need to add delays appropriate to the microcontroller you are using: slower microcontrollers have less processing headroom, so they need more delay between captured frames.

void camInit(void) {
  writeReg(0x12, 0x80);                  // COM7: reset all registers to their defaults
  _delay_ms(100);
  wrSensorRegs8_8(ov7670_default_regs);  // load the default register table
  writeReg(REG_COM10, 32);               // PCLK does not toggle on HBLANK
}
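The writeReg() calls above drive the AVR’s TWI hardware directly through <util/twi.h>. Purely as an illustration of the same SCCB transaction, here is how an equivalent register write looks with the higher-level Wire library (the helper name and setup code are ours, not the tutorial’s; 0x21 is the OV7670’s standard 7-bit SCCB address):

#include <Wire.h>

const uint8_t OV7670_ADDR = 0x21;  // 7-bit SCCB/I2C address of the OV7670

// Write one 8-bit value to one 8-bit register over SCCB (I2C-compatible)
void sccbWrite(uint8_t reg, uint8_t value) {
  Wire.beginTransmission(OV7670_ADDR);
  Wire.write(reg);
  Wire.write(value);
  Wire.endTransmission();
}

void setup() {
  Wire.begin();
  Wire.setClock(100000);  // SCCB tops out at 400kHz; 100kHz is a safe default
  sccbWrite(0x12, 0x80);  // COM7: reset all registers to their defaults
  delay(100);
}

void loop() {}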

 

The camera is set to take a QVGA image, so the resolution needs to be selected. The function below configures the registers for QVGA output.

setResolution();

 

In this tutorial the images are taken in monochrome, so the registers are set to output a monochrome image. The function applies register values from a list predefined in the program.

setColor();

 

The function below writes a hex value to a register — here register 0x11, the camera’s internal clock prescaler. If you get scrambled images, try changing the second argument from 10 to 9, 11, or 12. Most of the time this value works fine, though, so there is no need to change it.

writeReg(0x11, 10);

 

This function captures an image at the selected resolution. In this project we are taking pictures at 320 x 240 pixels.

captureImg(320, 240);
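To give a feel for what a non-FIFO capture loop does internally, here is a heavily simplified sketch: it waits for a frame to start via VSYNC, then samples D0-D7 on each rising PCLK edge while HREF marks active pixels. The pin numbers and the pin-by-pin digitalRead() calls are illustrative only; the tutorial’s real code reads whole AVR port registers for speed:

// Illustrative pin assignments - not the tutorial's actual wiring
const int VSYNC_PIN = 2;
const int HREF_PIN  = 3;
const int PCLK_PIN  = 4;
const int DATA_PINS[8] = {5, 6, 7, 8, 9, 10, 11, 12};  // D0..D7

// Assemble one byte from the eight data pins (real code reads a whole port at once)
uint8_t readDataPins() {
  uint8_t value = 0;
  for (int bit = 0; bit < 8; bit++) {
    value |= digitalRead(DATA_PINS[bit]) << bit;
  }
  return value;
}

void captureFrame(uint16_t width, uint16_t height) {
  while (!digitalRead(VSYNC_PIN));    // wait for the vertical sync pulse...
  while (digitalRead(VSYNC_PIN));     // ...frame data begins when VSYNC drops low

  for (uint16_t y = 0; y < height; y++) {
    while (!digitalRead(HREF_PIN));   // wait for an active row
    for (uint16_t x = 0; x < width; x++) {
      while (digitalRead(PCLK_PIN));  // wait for PCLK low
      while (!digitalRead(PCLK_PIN)); // rising edge: data byte is valid
      Serial.write(readDataPins());   // read D0-D7 and stream the byte out
    }
    while (digitalRead(HREF_PIN));    // wait for the end of the row
  }
}

void setup() {
  Serial.begin(1000000);  // arbitrary fast baud rate for this illustration
  pinMode(VSYNC_PIN, INPUT);
  pinMode(HREF_PIN, INPUT);
  pinMode(PCLK_PIN, INPUT);
  for (int i = 0; i < 8; i++) pinMode(DATA_PINS[i], INPUT);
}

void loop() {
  captureFrame(320, 240);  // mirrors the captureImg(320, 240) call above
}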

 

Besides this, the code also contains the I2C routines, divided into several parts: start, read, write, and set-address functions, all of which are needed when pulling data from the camera over the I2C protocol.

You can find the complete code and a demonstration video at the end of this tutorial. Just upload the code, open the Serial Port Reader, and grab the frames.

 

How to Use Serial Port Reader for reading Images

Serial Port Reader is a simple GUI; download it from here. It captures the base64-encoded frame data and decodes it to form an image. Just follow these simple steps to use Serial Port Reader:

Step 1: Connect Your Arduino to any USB Port of your PC

 

Step 2: Click on “Check” to find your Arduino COM Port

 

Step 3: Finally, click the “Start” button to start reading serially.

 

Step 4: You can also save these pictures by clicking “Save Picture”.

 

Below are Sample Images Taken from the OV7670

 

Precautions when using OV7670

  • Use wires or jumpers that are as short as possible
  • Avoid loose contacts on any pins of the Arduino or OV7670
  • Be careful while connecting: the large number of wires makes a short circuit easy
  • If the UNO drives the camera’s pins with its 5V GPIO outputs, use a level shifter (or a resistor divider)
  • Power the OV7670 from 3.3V only, as a higher voltage can damage the module

 

This project was created to give an overview of using a camera module with an Arduino. Since the Arduino UNO has very little memory, the processing may not be as smooth as expected; you can use a different controller with more memory for heavier processing.

