Getting Started with Himax WE-I Plus EVB Endpoint AI Development Board

The Himax WE-I Plus is a small, simple, low-power SoC board that is nevertheless powerful and effective for your AI and ML applications, at a reasonable price.

Himax WE-I Plus EVB Endpoint AI Development Board Description

Low-power operation, a high-resolution camera, AI-based object recognition, compact size, and an I2C interface plus GPIO pins for universal connectivity are some of its key features. Product Page

Onboard Components:

    1. Himax HM0360 AoS™ VGA camera
    2. FTDI USB to SPI/I2C/UART bridge
    3. LDO power supply (3.3/2.8/1.8/1.2V)
    4. 3-axis accelerometer
    5. 1x reset button
    6. 2x microphones (L/R)
    7. 2x user LEDs (RED/GREEN)
    8. micro-USB connector

Machine Learning and AI

Machine Learning is the use and development of computer systems that can learn and adapt without following explicit instructions, using algorithms and statistical models to analyse and draw inferences from patterns in data. It has three main types: supervised learning, unsupervised learning, and reinforcement learning.

Machine Learning is a part of Artificial Intelligence (AI), which is a system’s ability to correctly interpret external data, learn from it, and use those learnings to achieve specific goals and tasks through flexible adaptation.

Coding Environment and Plugins description

We used the Edge Impulse software to communicate between the board and the PC, and to collect and store data from the board.

Edge Impulse is an online development platform for machine learning on embedded devices for sensors, audio, and computer vision. It enables developers to solve real problems using highly optimized ML deployable to a wide range of hardware, from MCUs to CPUs, speeding up development for applications such as predictive maintenance, asset tracking and monitoring, and human and animal sensing.

To set up Edge Impulse:


  • Create an account and log in; you will then be redirected to the projects page.
  • Create a new project.


  • The following page will appear.


Here, you can set up devices, create an impulse, do data acquisition, and create your own ML model. However, you need the latest versions of Python, Pip, Visual Studio, and Node.js installed on your PC to work with Edge Impulse.

To get the latest Python, head on to:

You can always check the current version you are running using CMD.
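As a quick sketch, the checks below print the installed versions; in Windows CMD the commands are the same, though `python3`/`pip3` are usually just `python`/`pip` there:

```shell
# Print the installed tool versions; a "command not found" message
# means that prerequisite still needs to be installed.
python3 --version
pip3 --version || echo "pip not installed yet"
node --version || echo "Node.js not installed yet"
```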


Install the latest Visual Studio:

Remember to include the Desktop development with C++ workload and the latest Windows 10 SDK during installation.


Download and install Node.js from:


Here, in the setup, keep the Automatically install the necessary tools option ticked.


Click Install and the setup will install Node.js on your PC; partway through, it will open a PowerShell window.



Wait for the setup to complete. For any troubleshooting go to:

Now once the setup is complete, launch the Node.js command prompt 


and run: npm config set msvs_version 2019 --global

Note: Put your current Visual Studio version here (2019 in this example).

After that run: npm install -g edge-impulse-cli


This command performs a series of installations to set up the environment for Edge Impulse.

Once the installation is complete, exit from the command prompt.

Firmware Download and Adding the Board

Download the firmware from:

Connect your Himax WE-I Plus board to your PC now.

Extract the downloaded firmware files and, on Windows, run the flash_windows.bat file.


The firmware will then be uploaded to your Himax board. Press the Reset button on the board whenever prompted.

Launch a new command prompt window and run the edge-impulse-daemon command.


This will ask you for your login credentials, the project to which you want to add your Himax Board as well as the name you want to give to the board. Keep this window open.

Switch to Edge Impulse on your Browser and under the devices tab, you can see the new board added with its stats.


Head to the data acquisition tab, record new data, and configure the required settings. Voila! You can now send and receive data to and from your Himax WE-I Plus using Edge Impulse.


Continue recording data to make a data set and proceed to make your ML and AI application.

Tera Term Software for Himax WE-I AI Development Board

To get a continuous serial feed of data from the board, the Tera Term software may be used.

It is a free, open-source terminal emulator that provides a VT (Virtual Terminal) to read data from the port, among many other functions.



First configure the Port, Serial settings, and baud rate; then the VT window will show the data from the board.
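A typical serial setup looks like the following; the 115200 baud rate is an assumption based on common firmware defaults, so adjust it to match your firmware's console:

```
Port:    the board's FTDI virtual COM port
Speed:   115200 baud (assumed default)
Data:    8 bit
Parity:  none
Stop:    1 bit
Flow:    none
```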



Examples using Arduino IDE

Arduino is an open-source hardware and software company that designs and manufactures single-board microcontrollers and microcontroller kits. The popular Arduino Integrated Development Environment (IDE) is simple to understand and use.

Your Himax WE-I Plus can run several examples through the Arduino IDE. The steps are as follows.

Arduino IDE_step2

  • Click OK
  • Now go to the Boards Manager from the Tools menu.

Arduino IDE_step4

  • Search for the WE-I keyword and install the WE-I Plus board package.

Arduino IDE_step4_Boards Manager

  • Now, once the installation is complete, you can see the WE-I board in the boards section of the Tools menu. Select it.

Arduino IDE_step5

  • Once the board is selected, in the same menu, you have to select the proper port, upload speed and the example you want to deploy.
    Arduino IDE_step6
  • The examples provided are:
    • hello_world
    • magic_wand
    • micro_speech
    • person_detection
  • Select the example you want and click the Upload button. Your program will start uploading to the board. Press the Reset button on the board whenever prompted.

Arduino IDE_step8

  • Once the upload is complete, you will see a Done uploading message.
  • Open the Arduino IDE serial monitor now.

Arduino IDE_step10

You can see the output from the board now.

TensorFlow Lite Examples using Make Tool for Himax WE-I Board

These TensorFlow Lite examples can also be deployed to the Himax WE-I Plus board using the native software and conventional procedure. TensorFlow is a free and open-source software library for machine learning, applicable to training and inference of deep neural networks.

To work with TensorFlow, a make tool is required; it can be downloaded from the ARC GNU Tool Chain by Synopsys. Extract the package to your application directory.

Choose the tool file based on the OS you are running. The latest prebuilt_elf32_le version is sufficient for the make operation.

However, you can also use the ARC MetaWare Development Toolkit by Synopsys for the make operation. You may need to order the toolkit from their website.

Following is the step-by-step guide to run the TensorFlow Lite Micro Person Detection INT8 example on your device.

Firstly, download and extract the GitHub repository from:

Open the Makefile here and edit the first two lines to suit the make tool you are using.


Extract the maketool folder (ARC GNU Tool Chain) to the directory of the current make file.

The file folder should now look like this:


Next step is to add the path for the maketool (ARC GNU Tool Chain). For that, open a terminal window in this directory and type export PATH=[location of your ARC_GNU_ROOT]/bin:$PATH

For example: export PATH=/home/jay/Desktop/Person_Detection_Example/arc_gnu_2020.09_prebuilt_elf32_le_linux_install/bin:$PATH
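After the export, a quick sketch can confirm the shell actually finds the toolchain; the `arc-elf32-gcc` binary name is an assumption based on the prebuilt elf32 package, so substitute your toolchain's actual prefix if it differs:

```shell
# Put the extracted toolchain's bin directory on the PATH
# (replace with your own extraction location).
export PATH="$HOME/Desktop/Person_Detection_Example/arc_gnu_2020.09_prebuilt_elf32_le_linux_install/bin:$PATH"

# Report where the compiler was found, or warn if it is not on the PATH.
if command -v arc-elf32-gcc >/dev/null 2>&1; then
    command -v arc-elf32-gcc
else
    echo "arc-elf32-gcc not found on PATH"
fi
```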


Also, install curl and make with the commands:

sudo apt update

sudo apt upgrade

sudo apt install curl

sudo apt install make


Enter the commands one after another. This process may take some time.
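As a quick sketch, the same prerequisites can be verified in one pass before continuing:

```shell
# Report any build prerequisite that is still missing after the
# installs above; prints nothing when everything is in place.
for tool in curl make; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```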


Now, run the make download command to download the necessary directories and settings.


New folders are created in the example directory:


Use the commands:

make person_detection_int8

make flash example=person_detection_int8


The Generate Image Done message at the end marks the completion of the make process for the TensorFlow Lite Micro Person Detection INT8 example, with the .img and .elf files generated.


The last step is to flash this .img file, for which you can use any flash tool.

Here, we used the himax-flash-tool as it was installed on the Windows OS. 



Output with a person:


Output without a person:


The output can also be observed via the onboard green LED: if a person is detected, the green LED turns ON.

Other examples can also be built and flashed following the same process.

SPI Tool

Now, to get the sensor image by SPI output do the following:

First, check the g++ version with the command g++ --version

If it is missing, install it with the commands:

sudo apt install g++

sudo apt update

sudo apt install build-essential



Second, download, extract, and install the FT4222 Linux driver from here:

Go to the extracted directory, run the install script with sudo ./ and then cd /etc/udev/rules.d/


Now, create a file with name 99-ftdi.rules containing the following data:

# FTDI's ft4222 USB-I2C Adapter
SUBSYSTEM=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="601c", GROUP="plugdev", MODE="0666"

I first created the file in my Desktop folder using touch 99-ftdi.rules


Opened it, entered the required contents and saved the file:


And then copied it to the destination using terminal command sudo cp ~/Desktop/99-ftdi.rules /etc/udev/rules.d/99-ftdi.rules


Third, download and extract the SPI_Tool inside the GitHub Repo:

Navigate to the SPI_Tool directory and open it in a terminal. Run the ./WEI_SPIrecvImg 30 command to generate .dat files in the same directory; the number at the end of the command sets how many files are captured.
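A small sketch that wraps the capture and confirms the frame count; it assumes `WEI_SPIrecvImg` sits in the current directory, as in the repo layout, and only runs it when present (the board must be attached):

```shell
# Run the capture tool if it is present, then report how many
# .dat frames exist in the current directory.
if [ -x ./WEI_SPIrecvImg ]; then
    ./WEI_SPIrecvImg 30
fi
echo "captured $(ls -1 ./*.dat 2>/dev/null | wc -l) frame(s)"
```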

The output obtained for the handwriting example:

SPI_Tool_output1 SPI_Tool_output2

SPI_Tool_output3 SPI_Tool_output4
