Your professional network
Students, professionals, institutes, companies & organizations — jobs, connections, verified alumni.
Jobs & video interviews
Institutes & verified alumni
Hire & employer brand
Network & connect