Building a Serverless TensorFlow App with a React Frontend for Plant Species Identification

Introduction

In today’s fast-paced world, technology is increasingly being used to solve everyday problems. One fascinating application is using machine learning to identify plant species from images. This can be incredibly useful for gardeners, botanists, or anyone curious about the plants around them. In this blog post, we’ll walk through how to build a serverless TensorFlow application with a React frontend that identifies plant species from photos. This app will be deployed on AWS using Lambda for serverless processing, and it will leverage TensorFlow for the machine learning model.

Problem Statement

Plant identification can be a challenging task, especially for those without a background in botany. While there are books and online resources available, they often require prior knowledge of plant characteristics. Our goal is to create an easy-to-use application that allows users to upload a photo of a plant and receive information about the species directly on their device.

Solution Overview

The solution involves three main components:

  1. Machine Learning Model: We’ll use TensorFlow to create and train a model capable of classifying different plant species.
  2. Serverless Backend: The model will be hosted on AWS Lambda, which will handle the image processing and classification.
  3. React Frontend: The frontend will allow users to upload images, which will be sent to the serverless backend for processing. The results will then be displayed on the user interface.

Building the TensorFlow Model

To start, we need a dataset of plant images labeled by species. There are publicly available datasets such as the "PlantVillage Dataset" that you can use. Once you have the data, the next step is to train a TensorFlow model.

Example Code for Model Training

Here’s a simple example using TensorFlow and Keras to train a model:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load and preprocess data
train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)
train_generator = train_datagen.flow_from_directory(
    'path_to_dataset',
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical',
    subset='training')

validation_generator = train_datagen.flow_from_directory(
    'path_to_dataset',
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical',
    subset='validation')

# Build the model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dense(5, activation='softmax')  # Assuming 5 plant species
])

# Compile and train the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_generator, epochs=10, validation_data=validation_generator)

# Save the model
model.save('plant_species_model.h5')

Understanding the Output

The output of this process is a TensorFlow model saved in the file plant_species_model.h5. This file contains the trained model’s architecture, weights, and optimizer state, which can be used to make predictions on new images.
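
Before moving on, it can help to sanity-check the saved model locally. Below is a minimal sketch that loads plant_species_model.h5 and classifies a single image; the file name test_leaf.jpg and the species labels are placeholders for your own data.

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

# Load the model trained and saved above
model = tf.keras.models.load_model('plant_species_model.h5')

# 'test_leaf.jpg' is a placeholder path -- use any image from your dataset
img = image.load_img('test_leaf.jpg', target_size=(150, 150))
img_array = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # shape (1, 150, 150, 3)

# The model returns one probability per class; pick the most likely one
probabilities = model.predict(img_array)[0]
class_names = ['species_0', 'species_1', 'species_2', 'species_3', 'species_4']  # placeholder labels
print(class_names[int(np.argmax(probabilities))], float(np.max(probabilities)))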

When deploying the model to AWS Lambda, we need to convert this .h5 file into a format that TensorFlow.js can use. TensorFlow.js uses a JSON format to represent the model's architecture and binary files for the weights.

Converting the Model to TensorFlow.js Format

You can convert the .h5 model to the TensorFlow.js format using the tensorflowjs_converter command (installed with pip install tensorflowjs):

tensorflowjs_converter --input_format=keras plant_species_model.h5 /path_to_output_directory

This will generate a model.json file along with one or more binary weight shard files. Include these files in your Lambda function’s deployment package; in this walkthrough they live in a model/ directory next to the handler code.

Incorporating the Model into Your Lambda Function

Once the model is converted, include these files in the Lambda function’s deployment package. Here’s a Lambda function that loads the converted model and uses it to classify the uploaded image:

const tf = require("@tensorflow/tfjs-node");
const Jimp = require("jimp");

// Load the model once per container so warm invocations can reuse it
let modelPromise;
const getModel = () => {
    if (!modelPromise) {
        modelPromise = tf.loadLayersModel(`file://${__dirname}/model/model.json`);
    }
    return modelPromise;
};

exports.handler = async (event) => {
    const { imageBase64 } = JSON.parse(event.body);
    const buffer = Buffer.from(imageBase64, "base64");

    // Resize the image to the model's input size and re-encode it as JPEG,
    // since tf.node.decodeImage expects an encoded image buffer
    const image = await Jimp.read(buffer);
    const resizedBuffer = await image.resize(150, 150).getBufferAsync(Jimp.MIME_JPEG);

    // Decode into a normalized tensor of shape [1, 150, 150, 3]
    const imageTensor = tf.node
        .decodeImage(resizedBuffer, 3)
        .expandDims(0)
        .toFloat()
        .div(tf.scalar(255));

    // Make a prediction and extract the probability vector for the single input image
    const model = await getModel();
    const prediction = model.predict(imageTensor);
    const [probabilities] = await prediction.array();

    // Free the tensors allocated during this invocation
    tf.dispose([imageTensor, prediction]);

    return {
        statusCode: 200,
        body: JSON.stringify({ prediction: probabilities }),
    };
};

Packaging the Lambda Function

Your Lambda function’s deployment package should include:

  • The Node.js code (index.js, matching the index.handler setting in the CloudFormation template below)
  • The model.json file and weight binaries generated by tensorflowjs_converter, placed in a model/ subdirectory so the path used in the handler resolves
  • Any required Node.js modules (like @tensorflow/tfjs-node and jimp)

Zip these files together and upload the archive to the S3 bucket referenced by the CloudFormation template in the next section; the AWS CLI sketch below shows one way to do it.
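
Here’s a minimal packaging sketch, assuming the handler lives in index.js, the converted model sits in a model/ directory, and the bucket and key match the placeholders used in the CloudFormation template below:

# Install the runtime dependencies next to the handler code
npm install @tensorflow/tfjs-node jimp

# Bundle the handler, the converted model, and node_modules into a single archive
zip -r lambda-deployment-package.zip index.js model/ node_modules/

# Upload the archive to the S3 location referenced by the CloudFormation template
aws s3 cp lambda-deployment-package.zip s3://your-s3-bucket-name/path/to/your/lambda-deployment-package.zip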

CloudFormation Template for AWS Resources

To streamline the setup of your AWS resources, here’s a CloudFormation template that creates the necessary IAM role, Lambda function, API Gateway, and a deployment stage:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
    LambdaExecutionRole:
        Type: "AWS::IAM::Role"
        Properties:
            AssumeRolePolicyDocument:
                Version: "2012-10-17"
                Statement:
                    - Effect: "Allow"
                      Principal:
                          Service:
                              - "lambda.amazonaws.com"
                      Action:
                          - "sts:AssumeRole"
            Policies:
                - PolicyName: "LambdaExecutionPolicy"
                  PolicyDocument:
                      Version: "2012-10-17"
                      Statement:
                          - Effect: "Allow"
                            Action:
                                - "logs:*"
                                - "s3:*"
                            Resource: "*"

    MLApiLambdaFunction:
        Type: "AWS::Lambda::Function"
        Properties:
            FunctionName: "PlantSpeciesIdentifier"
            Handler: "index.handler"
            Role: !GetAtt LambdaExecutionRole.Arn
            Code:
                S3Bucket: "your-s3-bucket-name"
                S3Key: "path/to/your/lambda-deployment-package.zip"
            Runtime: "nodejs18.x"
            MemorySize: 1024 # @tensorflow/tfjs-node needs substantially more than the 128 MB default
            Timeout: 30

    ApiGatewayRestApi:
        Type: "AWS::ApiGateway::RestApi"
        Properties:
            Name: "PlantSpeciesIdentifierAPI"

    ApiGatewayResource:
        Type: "AWS::ApiGateway::Resource"
        Properties:
            ParentId: !GetAtt ApiGatewayRestApi.RootResourceId
            PathPart: "predict"
            RestApiId: !Ref ApiGatewayRestApi

    ApiGatewayMethod:
        Type: "AWS::ApiGateway::Method"
        Properties:
            AuthorizationType: "NONE"
            HttpMethod: "POST"
            ResourceId: !Ref ApiGatewayResource
            RestApiId: !Ref ApiGatewayRestApi
            Integration:
                IntegrationHttpMethod: "POST"
                Type: "AWS_PROXY"
                Uri:
                    Fn::Sub:
                        - "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaArn}/invocations"
                        - LambdaArn: !GetAtt MLApiLambdaFunction.Arn

    LambdaInvokePermission:
        Type: "AWS::Lambda::Permission"
        Properties:
            Action: "lambda:InvokeFunction"
            FunctionName: !GetAtt MLApiLambdaFunction.Arn
            Principal: "apigateway.amazonaws.com"

    ApiGatewayDeployment:
        Type: "AWS::ApiGateway::Deployment"
        DependsOn: ApiGatewayMethod
        Properties:
            RestApiId: !Ref ApiGatewayRestApi
            StageName: "prod"

Deploying the Stack

You can deploy this CloudFormation template using the AWS Management Console, AWS CLI, or any Infrastructure as Code (IaC) tool that supports AWS CloudFormation.
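
For example, with the AWS CLI (the template file name and stack name here are placeholders):

# Create or update the stack; CAPABILITY_IAM is required because the template creates an IAM role
aws cloudformation deploy \
    --template-file template.yaml \
    --stack-name plant-species-identifier \
    --capabilities CAPABILITY_IAM

# Once deployed, the invoke URL follows this pattern (the REST API ID appears in the API Gateway console):
# https://<rest-api-id>.execute-api.<region>.amazonaws.com/prod/predict

# Quick smoke test with a base64-encoded image (base64 -w0 is the GNU flag; use base64 -i on macOS)
curl -X POST "https://<rest-api-id>.execute-api.<region>.amazonaws.com/prod/predict" \
    -H "Content-Type: application/json" \
    -d "{\"imageBase64\": \"$(base64 -w0 test_leaf.jpg)\"}"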

Creating the React Frontend

The frontend will allow users to upload images and view the classification results. Here’s a simple React application to achieve this:

Example React Code

App.js

import React, { useState } from "react";
import "./App.css";

function App() {
    const [image, setImage] = useState(null);
    const [result, setResult] = useState(null);

    const handleImageUpload = (event) => {
        const file = event.target.files[0];
        const reader = new FileReader();
        reader.onloadend = () => {
            setImage(reader.result);
        };
        reader.readAsDataURL(file);
    };

    const handleSubmit = async () => {
        if (!image) return; // nothing selected yet
        const response = await fetch("https://your-api-gateway-url/predict", {
            method: "POST",
            headers: {
                "Content-Type": "application/json",
            },
            body: JSON.stringify({ imageBase64: image.split(",")[1] }),
        });
        const data = await response.json();
        setResult(data.prediction);
    };

    return (
        <div className="App">
            <h1>Plant Species Identifier</h1>
            <input type="file" onChange={handleImageUpload} />
            {image && <img src={image} alt="Upload preview" />}
            <button onClick={handleSubmit}>Identify Plant</button>
            {result && <div>Prediction: {JSON.stringify(result)}</div>}
        </div>
    );
}

export default App;

Conclusion

By building this serverless TensorFlow application, we’ve created a powerful tool that simplifies plant species identification. This application is not only useful for educational purposes but can also be extended to serve professionals in the field of botany and gardening. With serverless architecture, we ensure scalability and cost-effectiveness, while the React frontend provides a user-friendly interface. This project demonstrates how machine learning can be integrated into everyday applications to solve real-world problems.