This Smart City application installs the Himax WE-I Plus EVB Endpoint AI Development Board on city traffic lights to detect, using Edge Impulse, whether a car has crossed the stop line.
The application also uses SparkFun’s Qwiic sensors to obtain CO2, TVOC, and humidity data simultaneously.
The Himax WE-I Plus board’s AI-based object recognition enables it to recognize a car within the frame of its onboard VGA camera using machine learning.
The environmental data collected this way can be useful for studying CO2 emissions from vehicles, the causes and impacts of global warming, and the contribution of metropolitan regions to air pollution. TVOC stands for Total Volatile Organic Compounds; VOCs are organic chemicals that may have long-term chronic health effects.
Description of Himax WE-I Plus Board
The Himax WE-I Plus board was chosen for its compact size, ultra-low-power operation, high-resolution HM0360 AoS™ VGA camera, and powerful AI-based object recognition capabilities. It is available at https://www.sparkfun.com/products/17256. The board’s features also include:
- A 3-axis accelerometer
- 2x microphones (L/R)
- 2x user LEDs (RED/GREEN)
- An I2C master
- 3x GPIOs expansion headers
Refer to this article to get started with the Himax WE-I Plus EVB Endpoint AI Development Board.
The SparkFun Qwiic Environmental Combo Breakout Board – CCS811/BME280, available at https://www.sparkfun.com/products/14348, helps determine:
- Total Volatile Organic Compounds (TVOC): 0 to 1,187 ppb
- eCO2: 400 to 8,192 parts per million (ppm)
- Temperature range: -40°C to 85°C
- Humidity range: 0–100% RH, ±3% accuracy from 20–80% RH
- Pressure range: 30,000 Pa to 110,000 Pa, relative accuracy of 12 Pa, absolute accuracy of 100 Pa
- Altitude range: 0 to 30,000 feet (9.2 km), relative accuracy of 3.3 feet (1 m) at sea level, 6.6 feet (2 m) at 30,000 feet
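For reference, the CCS811 reports eCO2 and TVOC in its ALG_RESULT_DATA register as two big-endian 16-bit values, and a BME280 pressure reading can be converted to altitude with the standard international barometric formula. The following Python sketch illustrates both; the helper names are our own, not part of any SDK.

```python
def decode_ccs811(data: bytes) -> tuple:
    """Decode the first 4 bytes of CCS811 ALG_RESULT_DATA:
    eCO2 (ppm, 400-8192) then TVOC (ppb, 0-1187), big-endian."""
    eco2 = (data[0] << 8) | data[1]
    tvoc = (data[2] << 8) | data[3]
    return eco2, tvoc

def pressure_to_altitude_m(p_pa: float, p0_pa: float = 101325.0) -> float:
    """International barometric formula: altitude in metres above the
    reference pressure p0_pa (sea-level standard by default)."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

eco2, tvoc = decode_ccs811(bytes([0x01, 0x90, 0x00, 0x19]))
print(eco2, tvoc)  # 400 25  (baseline eCO2 in ppm, TVOC in ppb)
print(round(pressure_to_altitude_m(101325.0), 1))  # 0.0 at sea level
```

The 400 ppm result above is the CCS811’s baseline (clean-air) eCO2 reading, matching the lower bound of the range listed above.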
Download and extract (or clone) the GitHub repo to your local storage. This will be your main work directory.
Make the Hardware connections between the Himax WE-I Plus and the SparkFun Environmental Combo Board as shown in the image.
| Wire color | Combo board pin | Himax WE-I Plus |
|---|---|---|
| Red | 3.3V | J3 Pin 1 |
| Yellow | SCL | J3 Pin 5 |
| Blue | SDA | J3 Pin 6 |
| Black | GND | J3 Pin 7 |
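Both sensors on the combo board sit on the single I2C bus wired above; on the SparkFun board the CCS811 defaults to 7-bit address 0x5B and the BME280 to 0x77. A small Python sketch (the helper is our own, for illustration only) to sanity-check an I2C bus scan:

```python
# Default 7-bit I2C addresses on the SparkFun Environmental Combo board
EXPECTED = {"CCS811": 0x5B, "BME280": 0x77}

def missing_devices(scan_result: set) -> list:
    """Return the names of expected sensors absent from an I2C bus scan."""
    return [name for name, addr in EXPECTED.items() if addr not in scan_result]

# Example: a scan that only found the BME280
print(missing_devices({0x77}))  # ['CCS811']
```

If a sensor shows up as missing, recheck the SCL/SDA wiring in the table above.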
The entire process of building the application is divided into two steps:
- Train model using Edge Impulse.
- Deploy it using Docker.
An already-trained model is also provided in the directory SmartCity-example/image_gen_linux/out.img. This out.img file can be flashed directly onto the Himax WE-I Plus board; to do that, skip ahead to Step 2.3. This shortcut requires neither training on Edge Impulse nor the Docker build.
We recommend following the complete procedure to achieve high accuracy in object detection under your own environmental conditions.
1. Train model using Edge Impulse
Edge Impulse is a development platform for embedded machine learning that helps you efficiently manage and build AI and ML projects.
Assuming you already have an Edge Impulse account (if not, create one at https://studio.edgeimpulse.com/signup), let’s begin.
Step 1.1: First upload the firmware
The firmware can be downloaded from https://cdn.edgeimpulse.com/firmware/himax-we-i.zip, and the official guide at https://docs.edgeimpulse.com/docs/himax-we-i-plus can also be consulted.
Step 1.2: Connect the board to Edge-Impulse
The Himax WE-I Plus board can be linked to your Edge Impulse project by running edge-impulse-daemon in the Node.js command prompt.
You may need to log in to Edge Impulse with your credentials and select the project you want to add the board to. You can always create new projects from the website.
As you can see, the board has now been added to the project in the Devices section.
Step 1.3: Data Acquisition using Edge-Impulse
Head to the Data acquisition section and make the appropriate settings; the real-time camera feed is displayed here.
Collect properly labeled data to train the model in the Training data and Test data tabs by clicking Start sampling.
Step 1.4: Impulse Design
Switch to Create impulse under the Impulse Design section. Here you need to enter the details of image data, add a processing block and also add a learning block. The following settings are recommended for this application.
Once you press Save Impulse, new tabs will appear in the Impulse design section of the left pane. Each of them must be configured in order to complete the model.
In the Parameters tab of the Image block, use the dropdown menu to select an image to generate features for, set the color depth to Grayscale, and click Save parameters.
Switch to the Generate features tab and click Generate features to create a 3D view of the features. From here you can also navigate to a corresponding data sample to delete, edit, or retake it. This completes the process for this tab.
Moving on to the Transfer learning tab, set the number of training cycles you want. Follow the settings shown in the following images, or the defaults will also work. Click Start training.
You can now see the Training output and Model output, which report the accuracy, loss, and scores on the validation and training sets. You can always retrain your model to achieve greater accuracy.
Keep in mind that the RAM usage and flash usage must fit the specifications of the Himax WE-I Plus board.
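The accuracy and loss figures reported in the training output are the standard classification metrics: the fraction of correctly classified samples, and the mean cross-entropy of the probabilities assigned to the true class. A rough Python illustration (function names are our own):

```python
import math

def accuracy(preds: list, labels: list) -> float:
    """Fraction of samples whose predicted class matches the label."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def cross_entropy(true_class_probs: list) -> float:
    """Mean negative log-probability assigned to the true class."""
    return -sum(math.log(p) for p in true_class_probs) / len(true_class_probs)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
print(round(cross_entropy([0.9, 0.8, 0.95]), 3))  # small when confident
```

Retraining pushes accuracy toward 1.0 and loss toward 0 on the training set; the validation scores indicate how well that generalizes.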
The following results are from the Live classification section, which helps evaluate the model on a live sample before deploying it to the board. You may skip this section based on your preferences.
When Car is placed:
When a car is not placed:
The model was found to work accurately. Now it’s time to deploy it to our Himax WE-I Plus Board.
Step 1.5: Deployment of Impulse
Head to the Deployment section in the left pane. For this application, select the C++ library and the optimizations as shown, then click Build.
The “Job completed” message in the build output marks the completion of the build process, and an archive as shown below should be downloaded. Extract it.
Copy all of these files to the main work directory, except CMakeLists.txt.
2. Deploy it using Docker
Step 2.1: Installation of Docker
Navigate to the directory where you have assembled all files (main work directory).
Open it in the Terminal.
Next, you need to install Docker; you can follow the official guide at https://docs.docker.com/engine/install/ubuntu/
Enter ‘Y’ whenever prompted during the installation.
Running the hello-world image verifies that Docker was installed correctly.
Step 2.2: Build using Docker
After Docker is fully installed, run the following commands one at a time:

```
sudo docker build -t himax-build-gnu -f Dockerfile.gnu .

mkdir -p build-gnu

sudo docker run --rm -it -v $PWD:/app himax-build-gnu /bin/bash -c "cd build-gnu && cmake -DCMAKE_TOOLCHAIN_FILE=toolchain.gnu.cmake .."

sudo docker run --rm -it -v $PWD:/app:delegated himax-build-gnu /bin/bash -c "cd build-gnu && make -j && sh ../make-image.sh GNU"
```
This process may take some time for the first run.
The Generate Image Done message marks the completion of the process.
Step 2.3: Flashing of Image
Now, if you navigate to the image_gen_linux folder of the work directory, you will find an out.img file.
This is the file that you need to flash to your Himax WE-I Plus board.
We then flashed this file from Windows using himax-flash-tool from the edge-impulse-cli.
The output, as seen in the TeraTerm software listening on the serial port:
Video showing a prototype model:
Here, the red light indicates no car detected and green indicates a car is detected.
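The LED behavior above can be sketched as a simple threshold on the classifier’s output. The label name and threshold below are illustrative assumptions, not taken from the actual firmware:

```python
def led_for_scores(scores: dict, threshold: float = 0.6) -> str:
    """Light the green LED when the 'car' class clears the confidence
    threshold, otherwise the red LED (illustrative logic only)."""
    return "green" if scores.get("car", 0.0) >= threshold else "red"

print(led_for_scores({"car": 0.92, "background": 0.08}))  # green
print(led_for_scores({"car": 0.15, "background": 0.85}))  # red
```

On the real board, the same decision would drive the two user LEDs (RED/GREEN) listed in the hardware features.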
Thus, we have created an automated traffic monitoring system that simultaneously collects environmental data for analysis.
Jayesh Rajam is a tech enthusiast who loves to design and test various circuits. Amazed by the development and miniaturization in the IoT field, he wishes to bring efficacy to the design and development of tech projects. He has hands-on experience with various simulation and design software. He also plays and learns Indian classical music and does some content creation in his free time.