# gemini_api.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import dotenv
from google import genai
'''
HOW TO USE THIS API FROM THE FRONTEND:
1. Create a .env file in the same directory as this Python file. In that .env file, add your personal Gemini API key:
       GEMINI_API_KEY=[YOUR API KEY]
2. Make sure you have all of the Python libraries imported above installed
3. Run this Python file on your computer with the command "uvicorn gemini_api:app --reload"
4. When you need to send a prompt to the AI, send an HTTP **POST** request to "http://localhost:8000/gemini_api/generate_prompt"
5. When sending a POST, add an HTTP header with key 'Content-Type' and value 'application/json', as the API expects JSON payloads
6. Each prompt sent to this API needs the ENTIRE recipe as well as the actual question, all in one string. This API cannot store recipe context,
   so you need to attach the recipe details every time (following REST API best practices). The context window of the AI model is more than big enough for this
7. For the body/content of the POST, format your prompt as JSON with the key "prompt":
   Example:
       {
           "prompt": "[USER-INPUTTED PROMPT HERE]"
       }
   Where the "user-inputted prompt" is the recipe in question followed by the actual user-inputted question, all stored in one long string
8. The API will automatically reply to the frontend with the response from the AI packed in JSON format as follows:
       {
           "answer": "[AI REPLY]"
       }
   Which you can extract and display back on the frontend.
'''
# Load environment variables (Gemini API key) from the .env file (each person needs to create their own and add their own Gemini API key)
dotenv.load_dotenv()
# Initialize FastAPI
# FastAPI is used here so that our API can be communicated with (from the frontend) like a web server.
# It lets us create specific "routes" (like localhost/api/generate) that can be accessed with HTTP
# requests, each of which runs specific code (like processing a Gemini prompt)
app = FastAPI()
# Configure CORS (cross origin resource sharing) so that only our NextJS frontend can access this API
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # Assuming that we're using port 3000 in NextJS
    allow_credentials=True,
    allow_methods=["*"],  # Allow all HTTP methods
    allow_headers=["*"],
)
# Initialize the Gemini client (the environment variable holding the API key will automatically be detected) and AI model
client = genai.Client()
GEMINI_MODEL = "gemini-3-flash-preview"
# Define the Pydantic BaseModel to establish expectations for incoming JSON payload
class PromptRequest(BaseModel):
    prompt: str = "This is a default prompt, if you receive this, reply with \"Sorry, your request couldn't be processed. Can you try sending it again?\""
# Define a route that the front end can use to request a prompt to be processed
@app.post('/gemini_api/generate_prompt')  # When a POST request is sent to the route '/gemini_api/generate_prompt'
async def generate_prompt(request_data: PromptRequest):  # Incoming data is validated against the PromptRequest model
    try:
        prompt = request_data.prompt
        response = client.models.generate_content(
            model=GEMINI_MODEL,
            contents=[prompt],
        )
        # Process the response and send it back to the frontend
        return {"answer": response.text}
    except Exception:
        return {"answer": "Sorry, your request couldn't be processed. Can you try sending it again?"}
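For reference, the request/response shape described in the docstring above can be sketched as a small Python client. This is a sketch, not part of the API itself: the helper names `build_payload` and `ask_gemini_api` are hypothetical, and it assumes the server is running locally via uvicorn as described in step 3.

```python
import json
import urllib.request

API_URL = "http://localhost:8000/gemini_api/generate_prompt"  # default uvicorn port

def build_payload(recipe: str, question: str) -> bytes:
    # The API expects the ENTIRE recipe plus the question in ONE string under the "prompt" key
    return json.dumps({"prompt": f"{recipe}\n\n{question}"}).encode("utf-8")

def ask_gemini_api(recipe: str, question: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=build_payload(recipe, question),
        headers={"Content-Type": "application/json"},  # the API needs things packed in JSON
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The API replies with {"answer": "[AI REPLY]"}
        return json.loads(resp.read())["answer"]
```

Using the standard-library `urllib` keeps the sketch dependency-free; in a NextJS frontend the equivalent would be a `fetch` POST with the same header and JSON body.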