Have you ever stood in front of your fridge, staring at its contents, hoping that a ready-made meal would suddenly appear before you? I have. Too many times. But instead of accepting my fate and ordering pizza, I decided to harness the power of machine learning.
In this post, we’ll build a simple—but fun—proof of concept. Here’s the idea: we’ll snap a photo of some ingredients, send the image to ChatGPT via its API, and hopefully ChatGPT will recognize what’s in the photo. Then it will shoot back three recipes that you can make with those ingredients, complete with step-by-step instructions.

Flutter is Google’s open-source UI toolkit, and Dart is the language powering it. Together, they let you create slick, cross-platform apps using a single codebase. The community around Flutter is absolutely fantastic—if you search for “Flutter” on YouTube, you’ll be greeted by tons of videos featuring the Flutter team themselves. They do such a fine job that these videos can work as an excellent intro not only to Flutter, but to programming in general.
Overall App Description
- Take a Photo: The user snaps a picture of some random ingredients lying around the kitchen—maybe carrots, tomatoes, onions, or something more exotic if you’re feeling adventurous.
- Send to ChatGPT: The image is packaged up and sent to the ChatGPT API. The plan is for ChatGPT to do its best to figure out what’s in that photo.
- Receive Recipes: ChatGPT responds with three recipe ideas based on those ingredients. Each recipe has a detailed, step-by-step cooking guide.
It’s also a neat demonstration of how you can leverage AI in Flutter apps. Granted, it’s not guaranteed that ChatGPT can perfectly identify every single ingredient. But even if it occasionally makes mistakes, at least you’ll have a memorable dinner.
Why Use Only ChatGPT’s API?
For this proof of concept, we’re relying on ChatGPT for the entire operation. This includes:
- Identifying the ingredients in the uploaded photo.
- Suggesting recipes and describing the cooking steps.
All the heavy lifting is done by ChatGPT. Our Flutter app serves as a sleek UI wrapper for the user’s interaction.
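To make that concrete, here is the kind of prompt we will attach to the photo. The exact wording is up to you; this is just one illustrative way to ask for both the recognition and the recipes in a single call:
// A sample prompt (illustrative, not the only way to phrase it) that asks
// ChatGPT to handle ingredient recognition and recipe generation together.
const String recipePrompt =
    'Identify the ingredients visible in this photo, then suggest 3 recipes '
    'that use them. For each recipe, include a name, an ingredient list, and '
    'step-by-step cooking instructions.';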
1. Set Up Your Flutter Project
First, create a new Flutter project:
flutter create recipe_finder
cd recipe_finder
Inside your pubspec.yaml, add dependencies such as image_picker (for taking photos) and http (for making network requests):
dependencies:
  flutter:
    sdk: flutter
  image_picker: ^1.1.2
  http: ^1.3.0
Then, run:
flutter pub get
2. Service: Sending Images to the Vision Endpoint
Create a file named chatgpt_vision_service.dart
in lib
. This service handles uploading the photo to OpenAI’s Vision endpoint.
import 'dart:convert';

import 'package:http/http.dart' as http;

const String openAiVisionEndpoint = 'https://api.openai.com/v1/chat/completions';

class ChatGptVisionService {
  final String _apiKey;

  ChatGptVisionService(this._apiKey);

  Future<String> identifyIngredientsAndRecipes(List<int> imageBytes) async {
    try {
      final uri = Uri.parse(openAiVisionEndpoint);

      // The chat completions endpoint expects a JSON body. The photo is
      // embedded as a base64-encoded data URL inside the user message,
      // right next to the text prompt.
      final body = jsonEncode({
        // Any vision-capable model works here; 'gpt-4o' is used as an example.
        'model': 'gpt-4o',
        'messages': [
          {
            'role': 'user',
            'content': [
              {
                'type': 'text',
                'text': 'Identify the ingredients in this photo and provide '
                    '3 recipes with step-by-step instructions.',
              },
              {
                'type': 'image_url',
                'image_url': {
                  'url': 'data:image/jpeg;base64,${base64Encode(imageBytes)}',
                },
              },
            ],
          },
        ],
      });

      final response = await http.post(
        uri,
        headers: {
          'Authorization': 'Bearer $_apiKey',
          'Content-Type': 'application/json',
        },
        body: body,
      );

      if (response.statusCode == 200) {
        // Pull the model's reply out of the first choice.
        final decoded = jsonDecode(response.body) as Map<String, dynamic>;
        return decoded['choices'][0]['message']['content'] as String;
      } else {
        return 'Error: ${response.statusCode} - ${response.body}';
      }
    } catch (e) {
      return 'Error sending image to Vision endpoint: $e';
    }
  }
}
Quick Notes:
- The request body is plain JSON: the photo travels as a base64-encoded data URL inside the user message, next to the text prompt.
- A vision-capable model (gpt-4o is used here as an example) handles both the ingredient recognition and the recipe writing in a single call.
- On success, the reply text is read from choices[0].message.content; any other status code is returned as an error string.
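For example, once you have the photo’s bytes in memory, calling the service looks like this (the key is the same placeholder used later in the UI code):
// Hypothetical usage, assuming `imageBytes` already holds the photo's bytes.
Future<void> demo(List<int> imageBytes) async {
  final service = ChatGptVisionService('YOUR_VISION_API_KEY');
  final result = await service.identifyIngredientsAndRecipes(imageBytes);
  print(result); // The ingredients plus three recipe suggestions, as plain text.
}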
3. Main UI: Take or Pick a Photo
Here is the core Flutter UI. It presents two buttons, “Take a Photo” and “Select from Gallery,” then sends the chosen file to identifyIngredientsAndRecipes and displays the result in the UI.
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'chatgpt_vision_service.dart';

void main() {
  runApp(const RecipeApp());
}

class RecipeApp extends StatelessWidget {
  const RecipeApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'AI Recipe Finder',
      theme: ThemeData(primarySwatch: Colors.blue),
      home: const RecipeFinderScreen(),
    );
  }
}

class RecipeFinderScreen extends StatefulWidget {
  const RecipeFinderScreen({Key? key}) : super(key: key);

  @override
  _RecipeFinderScreenState createState() => _RecipeFinderScreenState();
}

class _RecipeFinderScreenState extends State<RecipeFinderScreen> {
  final ChatGptVisionService _visionService =
      ChatGptVisionService('YOUR_VISION_API_KEY');
  String _resultText = 'No recipes yet.';

  /// Take a photo with the device camera
  Future<void> _takePhoto() async {
    final picker = ImagePicker();
    final XFile? image = await picker.pickImage(source: ImageSource.camera);
    if (image != null) {
      final bytes = await image.readAsBytes();
      final response = await _visionService.identifyIngredientsAndRecipes(bytes);
      setState(() {
        _resultText = response;
      });
    }
  }
  /// Select an existing photo from the gallery
  Future<void> _selectPhotoFromGallery() async {
    final picker = ImagePicker();
    final XFile? image = await picker.pickImage(source: ImageSource.gallery);
    if (image != null) {
      final bytes = await image.readAsBytes();
      final response = await _visionService.identifyIngredientsAndRecipes(bytes);
      setState(() {
        _resultText = response;
      });
    }
  }
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('AI Recipe Finder'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          children: [
            ElevatedButton(
              onPressed: _takePhoto,
              child: const Text('Take a Photo'),
            ),
            ElevatedButton(
              onPressed: _selectPhotoFromGallery,
              child: const Text('Select from Gallery'),
            ),
            const SizedBox(height: 24),
            Expanded(
              child: SingleChildScrollView(
                child: Text(_resultText),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
How It Works:
- Camera or Gallery: The user taps either “Take a Photo” or “Select from Gallery.”
- Converting to Bytes: We read the chosen image into a byte array with readAsBytes().
- Sending to Vision: The ChatGptVisionService sends the image bytes and prompt to the Vision endpoint in a single JSON request.
- Response: The endpoint hopefully returns the recognized ingredients and a list of recipes, which we display in _resultText.
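If you later want something richer than a single Text widget, one low-tech option is to split the reply into rough sections before rendering them. A small sketch, assuming the model separates recipes with blank lines (which is not guaranteed):
// Naive split of the plain-text reply into sections, assuming blank lines
// between recipes. Real responses vary, so treat this as a starting point.
List<String> splitIntoSections(String reply) {
  return reply
      .split(RegExp(r'\n\s*\n'))
      .map((section) => section.trim())
      .where((section) => section.isNotEmpty)
      .toList();
}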
4. Testing the App
Run it on a device or emulator that supports camera and gallery:
flutter run
Try snapping a photo of some fruits or vegetables, or pick an image from the gallery. The response you see will depend on the Vision API’s capabilities.
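One small tip while testing: instead of hard-coding 'YOUR_VISION_API_KEY' in the source, you can pass the key at launch with flutter run --dart-define=OPENAI_API_KEY=your_key and read it in Dart (OPENAI_API_KEY is just a name chosen for this example):
// Reads the value passed via --dart-define; defaults to an empty string.
const String openAiApiKey = String.fromEnvironment('OPENAI_API_KEY');

// Then construct the service with it instead of a hard-coded literal:
// final visionService = ChatGptVisionService(openAiApiKey);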
5. Next Steps and Possibilities
- SQLite Database: Store your recognized dishes locally so you can build a cooking history (a minimal model sketch follows this list).
- Calorie Calculations: Extend the response parsing to extract approximate calorie info.
- Improved Architecture: Incorporate patterns like Riverpod or BLoC for more maintainable code.
- User Feedback Loop: Let the user confirm or correct recognized ingredients to improve the prompts and results over time.
- Voice Output: Convert the step-by-step instructions to spoken text, making the app more kitchen-friendly.
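As a starting point for the cooking-history idea above, a plain model class is enough; how you persist it (SQLite, a file, shared preferences) is entirely up to you. A minimal sketch with illustrative field names:
// A minimal model for a recognized dish. Field names are illustrative.
class RecognizedDish {
  final String title;
  final List<String> ingredients;
  final DateTime capturedAt;

  const RecognizedDish({
    required this.title,
    required this.ingredients,
    required this.capturedAt,
  });

  // Flattens the dish into a map, e.g. for inserting into a local database.
  Map<String, Object?> toMap() => {
        'title': title,
        'ingredients': ingredients.join(','),
        'captured_at': capturedAt.toIso8601String(),
      };
}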
Happy coding, and good luck discovering new recipes.