Let’s Integrate the Google Cloud Vision API into React Native Expo Apps

Detect Objects in an Image Using the Google Cloud Vision API | JavaScript Code

Rohit Kumar Thakur
4 min read · Aug 3, 2023


Hello, developers! My name is Rohit and I am a software developer and YouTube content creator. A few days ago I integrated the Google Cloud Vision API into a React Native Expo app. This API is remarkably powerful, so let me show you how I did it.

Installations and Set-up

In this project, we will need three dependencies to get our job done.

  • axios: To make the HTTP request to the Cloud Vision API. You can install it using the following command:

npm i axios
  • Next, we need to install expo-image-picker. It provides access to the system’s UI for selecting images and videos from the phone’s library or taking a photo with the camera. Use the command below to install this package.
npx expo install expo-image-picker

After installing this package, make sure to add the expo-image-picker plugin to your app.json file.

"plugins": [
  [
    "expo-image-picker",
    {
      "photosPermission": "The app accesses your photos to let you share them with your friends."
    }
  ]
]
  • Lastly, install expo-file-system. It provides access to the file system stored locally on the device. Use the following command to install it:
npx expo install expo-file-system

That’s it! We are all set to write some code for our project.

The project output: the app lets you pick an image from the gallery and then displays the detected object labels.

You can write the code in the default App.js file, but I recommend making a separate component and importing it into App.js. It’s good practice.

One important note: if you get lost anywhere in this project, feel free to check the step-by-step YouTube video.

First, define two states using the useState hook. For those who don’t know much about React’s useState hook: in simple terms, it holds a piece of state, a variable that stores data in our app. One state, ‘imageUri,’ will keep track of the selected image’s URI, and the other, ‘labels,’ will store the detected object labels.

// At the top of the file, import the packages we installed earlier
import React, { useState } from 'react';
import axios from 'axios';
import * as ImagePicker from 'expo-image-picker';
import * as FileSystem from 'expo-file-system';

// Inside the component: state for the selected image and its labels
const [imageUri, setImageUri] = useState(null);
const [labels, setLabels] = useState([]);

We start by implementing two essential functions: ‘pickImage’ and ‘analyzeImage.’ The ‘pickImage’ function lets us choose an image from our phone’s gallery, while ‘analyzeImage’ will send that image to Google Cloud Vision API for analysis.

const pickImage = async () => {
  try {
    const result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 1,
    });

    if (!result.canceled) {
      setImageUri(result.assets[0].uri);
    }
    console.log(result);
  } catch (error) {
    console.error('Error picking image:', error);
  }
};
const analyzeImage = async () => {
  try {
    if (!imageUri) {
      alert('Please select an image first.');
      return;
    }

    // Replace 'YOUR_GOOGLE_CLOUD_VISION_API_KEY' with your actual API key
    const apiKey = '#################################';
    const apiUrl = `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`;

    // Read the image file from the local URI and convert it to base64
    const base64ImageData = await FileSystem.readAsStringAsync(imageUri, {
      encoding: FileSystem.EncodingType.Base64,
    });

    const requestData = {
      requests: [
        {
          image: {
            content: base64ImageData,
          },
          features: [{ type: 'LABEL_DETECTION', maxResults: 5 }],
        },
      ],
    };

    const apiResponse = await axios.post(apiUrl, requestData);
    setLabels(apiResponse.data.responses[0].labelAnnotations);
  } catch (error) {
    console.error('Error analyzing image:', error);
    alert('Error analyzing image. Please try again later.');
  }
};

Read the image file from the ‘imageUri’ and convert it to ‘base64,’ a text encoding that lets us embed binary data, like an image, directly inside a JSON request.
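If you’re curious what base64 actually looks like, here is a quick plain Node.js sketch (using the built-in Buffer, not the Expo FileSystem call from the app code) that encodes and decodes a few bytes:

```javascript
// Plain Node.js illustration of base64 encoding.
// (In the app, FileSystem.readAsStringAsync does this for the image file.)
const bytes = Buffer.from('hello'); // stand-in for raw image bytes
const base64 = bytes.toString('base64');
console.log(base64); // 'aGVsbG8='

// Decoding reverses it, so no information is lost.
const decoded = Buffer.from(base64, 'base64').toString();
console.log(decoded); // 'hello'
```

The encoded string is roughly a third longer than the raw bytes, which is the price of making binary data safe to ship inside a JSON body.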

Next, create a request data object that contains the image data and the type of analysis we want. Here, I’m using ‘LABEL_DETECTION’ to identify objects in the image, and I ask for a maximum of 5 results.
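For reference, the Vision API responds with a labelAnnotations array whose entries carry a description and a confidence score. Here is a small sketch of pulling readable names out of it; the sample response object below is hypothetical, just mimicking the shape the API returns:

```javascript
// Hypothetical sample shaped like a LABEL_DETECTION response.
const apiResponse = {
  data: {
    responses: [
      {
        labelAnnotations: [
          { description: 'Dog', score: 0.97 },
          { description: 'Mammal', score: 0.92 },
        ],
      },
    ],
  },
};

// Same path the component reads before calling setLabels.
const labelAnnotations = apiResponse.data.responses[0].labelAnnotations ?? [];
const names = labelAnnotations.map((label) => label.description);
console.log(names); // ['Dog', 'Mammal']
```

The `?? []` fallback guards against a response with no labels, which would otherwise leave the state set to undefined.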

Don’t forget to add your API KEY.

The rest is the UI part. You are free to add a custom UI as per your requirements.
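For the display side, you will probably want to turn each annotation into a short string before mapping it onto Text components. This helper is my own hypothetical sketch; the 0.5 score threshold and the percentage format are arbitrary choices, not part of the API:

```javascript
// Hypothetical helper: format Vision labelAnnotations for display.
// Labels below minScore are dropped; the threshold is an arbitrary choice.
function formatLabels(labelAnnotations, minScore = 0.5) {
  return labelAnnotations
    .filter((label) => label.score >= minScore)
    .map((label) => `${label.description} (${Math.round(label.score * 100)}%)`);
}

const sample = [
  { description: 'Cat', score: 0.98 },
  { description: 'Whiskers', score: 0.87 },
  { description: 'Carnivore', score: 0.42 }, // below threshold, dropped
];
console.log(formatLabels(sample)); // ['Cat (98%)', 'Whiskers (87%)']
```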

Final tip: This is an advanced-level project. So, if you feel lost midway or need some assistance, feel free to check the YouTube video where I explain the process step by step. From setting up the Expo project to the Google Vision API setup, I covered it all.

If you made it this far, make sure to clap. It helps the Medium algorithm promote my article to a wider audience. Thanks for reading. Happy coding!!
