Learn how we built LlamaTutor from scratch – an open source AI tutor with 90k users.
`<input>` and `<select>`, and control both using some new React state:
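A minimal sketch of those controlled fields might look like this (the state names `topic` and `ageGroup`, and the option labels, are illustrative, not from the original code):

```jsx
import { useState } from "react";

function TopicForm() {
  // Hypothetical state names; the real app's fields may differ.
  const [topic, setTopic] = useState("");
  const [ageGroup, setAgeGroup] = useState("Middle School");

  return (
    <form>
      <input
        value={topic}
        onChange={(e) => setTopic(e.target.value)}
        placeholder="Teach me about..."
      />
      <select value={ageGroup} onChange={(e) => setAgeGroup(e.target.value)}>
        <option>Elementary School</option>
        <option>Middle School</option>
        <option>High School</option>
        <option>College</option>
      </select>
    </form>
  );
}
```

Because both elements read their value from state and write back through `onChange`, React stays the single source of truth for the form.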
`/getSources` endpoint:
`/getSources`:
`app/api/getSources/route.js` file:
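As a rough sketch of what such a route handler could look like, assuming a Serper-style search API (the provider, the `SERPER_API_KEY` env var, and the response shape are all assumptions, not the original code):

```javascript
// app/api/getSources/route.js — a sketch, not the original implementation.
export async function POST(request) {
  const { question } = await request.json();

  const response = await fetch("https://google.serper.dev/search", {
    method: "POST",
    headers: {
      "X-API-KEY": process.env.SERPER_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ q: question }),
  });
  const json = await response.json();

  // Return a trimmed list of { name, url } sources for the frontend.
  const sources = (json.organic ?? []).slice(0, 6).map((result) => ({
    name: result.title,
    url: result.link,
  }));

  return Response.json(sources);
}
```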
`.env.local`:
`/api/getParsedSources`, passing along the sources in the request body:
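The frontend call can be sketched like this (the function name and payload shape are illustrative):

```javascript
// Ask our backend to fetch and parse the text behind each source URL.
async function getParsedSources(sources) {
  const response = await fetch("/api/getParsedSources", {
    method: "POST",
    body: JSON.stringify({ sources }),
  });
  return response.json();
}
```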
`app/api/getParsedSources/route.js` for our new route:
`getTextFromURL` function and outline our general approach:
`jsdom` and `@mozilla/readability` libraries:
`getTextFromURL`:
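Putting those pieces together, `getTextFromURL` might be sketched like this: fetch the page's HTML, hand it to `jsdom`, and let Readability extract the main article text (error handling omitted for brevity):

```javascript
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";

// A sketch of getTextFromURL: fetch raw HTML, build a DOM from it,
// then use Readability to strip navigation, ads, and other chrome.
async function getTextFromURL(url) {
  const response = await fetch(url);
  const html = await response.text();

  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();

  return article?.textContent ?? "";
}
```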
`Promise.all` to kick off our functions in parallel:
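The parallel fan-out looks something like this (`getTextFromURL` is stubbed here so the sketch is self-contained and runnable):

```javascript
// Stub standing in for the real jsdom/Readability-backed function,
// so this sketch runs without any network access.
async function getTextFromURL(url) {
  return `text from ${url}`;
}

async function parseSources(sources) {
  // Promise.all starts every fetch/parse at once rather than one at a time,
  // so total latency is roughly the slowest source, not the sum of all of them.
  return Promise.all(sources.map((source) => getTextFromURL(source.url)));
}
```

The results come back in the same order as the input array, regardless of which request finishes first.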
`/chat`!
`chat.completions.create` method expects, our API handler is mostly acting as a simple passthrough.
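That passthrough handler might be sketched as follows (the model name is an assumption, and the streaming helpers mirror the OpenAI-style SDK that Together's client follows):

```javascript
// app/api/chat/route.js — a sketch; mostly a passthrough to Together's API.
import Together from "together-ai";

const together = new Together({ apiKey: process.env.TOGETHER_API_KEY });

export async function POST(request) {
  const { messages } = await request.json();

  // stream: true returns tokens as they're generated instead of one final blob.
  const stream = await together.chat.completions.create({
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages,
    stream: true,
  });

  return new Response(stream.toReadableStream());
}
```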
We’re also using the `stream: true` option so our frontend will be able to show partial updates as soon as the LLM starts its response.
We’re ready to display our chatbot’s first message in our React app!
`ChatCompletionStream` helper from Together’s SDK to update our `messages` state as our API endpoint streams in text:
`role` to determine whether to append the streamed text to it, or push a new object with the assistant’s initial text.
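That append-or-push decision can be sketched as a pure function (the shape of the message objects is an assumption, modeled on the usual `{ role, content }` chat format):

```javascript
// Given the current messages and a newly streamed chunk of assistant text,
// either extend the assistant message we're building or start a new one.
function addChunkToMessages(messages, chunk) {
  const last = messages[messages.length - 1];

  if (last?.role === "assistant") {
    // Still streaming the same answer: append the chunk to the last message.
    return [
      ...messages.slice(0, -1),
      { ...last, content: last.content + chunk },
    ];
  }

  // First chunk of a new answer: push a fresh assistant message.
  return [...messages, { role: "assistant", content: chunk }];
}

let messages = [{ role: "user", content: "What is photosynthesis?" }];
messages = addChunkToMessages(messages, "Photosynthesis is");
messages = addChunkToMessages(messages, " how plants make food.");
// messages[1].content → "Photosynthesis is how plants make food."
```

Returning new arrays and objects (rather than mutating in place) is what lets React notice the state change and re-render.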
Now that our `messages` React state is ready, let’s update our UI to display it:
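The rendering itself can be as simple as mapping over the state (this is an illustrative sketch, not the original markup; class names are placeholders):

```jsx
// Render each message, styled by who sent it.
function MessageList({ messages }) {
  return (
    <div>
      {messages.map((message, i) => (
        <p key={i} className={message.role === "user" ? "user" : "assistant"}>
          {message.content}
        </p>
      ))}
    </div>
  );
}
```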
`chat` endpoint responds with the first chunk, we’ll see the answer text start streaming into our UI!
`handleMessage` that will look a lot like the end of our first `handleSubmit` function:
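A sketch of what `handleMessage` could look like, reusing the same streaming pattern (the import path and payload shape are assumptions based on Together's OpenAI-style SDK):

```javascript
import { ChatCompletionStream } from "together-ai/lib/ChatCompletionStream";

// Send the full conversation history to /api/chat, then stream the
// assistant's reply into React state as it arrives.
async function handleMessage(messages, setMessages) {
  const response = await fetch("/api/chat", {
    method: "POST",
    body: JSON.stringify({ messages }),
  });

  ChatCompletionStream.fromReadableStream(response.body).on("content", (delta) => {
    setMessages((prev) => {
      const last = prev[prev.length - 1];
      if (last?.role === "assistant") {
        // Same append-or-push logic as handleSubmit.
        return [...prev.slice(0, -1), { ...last, content: last.content + delta }];
      }
      return [...prev, { role: "assistant", content: delta }];
    });
  });
}
```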
`chat` endpoint, and reuse the same logic to update our app’s state as the latest response streams in.
The core features of our app are working great!