Full-Stack Dart in 2026: Building AI APIs with Shelf and Gemini
Discover how to build a high-performance backend API with Dart Shelf, integrated with dartantic_ai and Gemini.
Posted on: 2026-03-15 by AI Assistant

Introduction
For years, Flutter developers had to switch contexts to JavaScript, TypeScript, or Python when writing server-side logic. In 2026, Full-Stack Dart has finally arrived. By building custom endpoints with the shelf package, you can run high-performance Dart backends on platforms like Google Cloud Run. This allows you to use the same models, DTOs, and utility functions across your entire stack. In this tutorial, you will learn how to write a server API in Dart that communicates directly with Gemini.
Prerequisites
- Dart SDK installed
- Basic understanding of building REST APIs
- A Google Cloud project with the Vertex AI API enabled (or a Gemini API key)
Dart on the Server
1. Writing a Dart Shelf Endpoint
We can use the shelf and shelf_router packages to quickly spin up a robust API server.
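Before diving into the server code, the project needs its dependencies declared. A minimal pubspec.yaml might look like the following; the package name and version constraints are illustrative, so check pub.dev for current releases:

```yaml
name: dart_ai_server
description: A Shelf backend that summarizes text via Gemini.
environment:
  sdk: ^3.0.0
dependencies:
  shelf: ^1.4.0
  shelf_router: ^1.1.0
  dartantic_ai: ^1.0.0 # illustrative constraint; check pub.dev
```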
// bin/server.dart
import 'dart:convert';
import 'dart:io';

import 'package:dartantic_ai/dartantic_ai.dart';
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_router/shelf_router.dart';

void main() async {
  final app = Router();

  app.post('/api/summarize', (Request request) async {
    final payload = await request.readAsString();
    final data = jsonDecode(payload) as Map<String, dynamic>;

    // Initialize the agent with a provider (e.g., Google Gemini).
    // Read the key from the environment instead of hardcoding it.
    final apiKey = Platform.environment['GEMINI_API_KEY'] ?? '';
    final agent = Agent.forProvider(
      GoogleProvider(apiKey: apiKey),
    );

    final result = await agent.send('Summarize this: ${data['text']}');

    return Response.ok(
      jsonEncode({'summary': result.output}),
      headers: {'content-type': 'application/json'},
    );
  });

  // Serve the app on the port Cloud Run injects (defaults to 8080 locally).
  final port = int.parse(Platform.environment['PORT'] ?? '8080');
  final server = await io.serve(app.call, InternetAddress.anyIPv4, port);
  print('Server listening on port ${server.port}');
}
2. Testing Your API
Once your server is running, you can test the summarization endpoint using a simple curl command in your terminal:
curl -X POST http://localhost:8080/api/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Full-stack Dart is becoming a reality with Shelf and agentic frameworks like dartantic_ai."}'
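On success, the endpoint responds with a JSON body shaped like the following (the summary text itself is illustrative, since model output varies):

```json
{"summary": "Dart can now power both the Flutter client and the Shelf backend."}
```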
3. Deploying and Sharing Code
Because your frontend and backend are in the same workspace, you can share data classes seamlessly. With dartantic_ai, you can even request structured, type-safe data from your agents using these shared classes.
// shared/lib/models/summary_request.dart
class SummaryRequest {
  final String text;
  final String targetAudience;

  SummaryRequest({required this.text, required this.targetAudience});

  factory SummaryRequest.fromJson(Map<String, dynamic> json) => SummaryRequest(
        text: json['text'] as String,
        targetAudience: json['targetAudience'] as String,
      );

  // Serialize back to JSON so the Flutter client can send this model too.
  Map<String, dynamic> toJson() => {
        'text': text,
        'targetAudience': targetAudience,
      };
}
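Here is a sketch of how that shared class could be used inside the Shelf handler. It assumes the shared code lives in a local package named `shared` (referenced as a path dependency in pubspec.yaml); the `buildPrompt` helper is hypothetical, shown only to illustrate the pattern:

```dart
// Sketch: parse the incoming JSON into the shared model instead of
// reading raw map keys, so client and server agree on the schema.
import 'dart:convert';

import 'package:shared/models/summary_request.dart';

String buildPrompt(String body) {
  final request = SummaryRequest.fromJson(
    jsonDecode(body) as Map<String, dynamic>,
  );
  // Tailor the prompt to the audience carried by the shared DTO.
  return 'Summarize this for ${request.targetAudience}: ${request.text}';
}
```

Because the Flutter client constructs the same `SummaryRequest` and calls `toJson()`, the two sides cannot drift apart silently.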
Deploying this server is as simple as creating a Dockerfile to build a minimal, compiled executable:
# Use the official Dart image for compilation
FROM dart:stable AS build
WORKDIR /app
# Resolve app dependencies
COPY pubspec.* ./
RUN dart pub get
# Copy app source code and AOT compile it
COPY . .
RUN dart pub get --offline
RUN dart compile exe bin/server.dart -o bin/server
# Build a minimal runtime environment
FROM scratch
COPY --from=build /runtime/ /
COPY --from=build /app/bin/server /app/bin/
# Expose port and run the server
EXPOSE 8080
CMD ["/app/bin/server"]
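You can verify the image locally before deploying; the image tag below is a placeholder:

```shell
# Build the image and run it locally, mapping the exposed port.
docker build -t dart-summarizer .
docker run -p 8080:8080 dart-summarizer
```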
You can then deploy this container to a service like Google Cloud Run, allowing it to scale automatically based on traffic.
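Deploying from source with the gcloud CLI lets Cloud Build produce the container for you; the service name and region here are placeholders for your own values:

```shell
# Build with Cloud Build and deploy the result to Cloud Run.
gcloud run deploy dart-summarizer \
  --source . \
  --region us-central1 \
  --allow-unauthenticated
```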
Putting It All Together
Dart’s fast compilation and low memory footprint make it an excellent choice for backend APIs. By using dartantic_ai, you get a unified, type-safe API for interacting with LLMs and building autonomous agents across your entire stack.
Conclusion & Next Steps
You’ve built your first Full-Stack Dart feature using shelf! Next, explore connecting your Dart API to a Cloud SQL instance or Firestore to build a complete RAG (Retrieval-Augmented Generation) pipeline.