Application
In this lab we’ll be developing the basic user interface of a ChatGPT client. Here we’ll primarily be looking at token handling, state management and message passing between the frontend and the backend.
The UI we create in this lab also serves as the basic interface we’ll be building on for the other labs in this track.
To make use of the catch-up mechanic, clone the labs repository and fetch all tags. You’ll then be able to check out a tagged commit for each step in a track using the format `git checkout TRACK_PART_STEP`. Checking out a step takes you to the solved version of that step. Checking out the branch takes you to the finished version.
Create a project
For this project we’re going to be using Nuxt, a framework that CTA doesn’t support, so we’re going to have to set it up manually. It’s much simpler than you might think thanks to the `tauri init` command. If you feel confident you’re certainly allowed to use something else, but since this is just a lab I recommend you go with my choices so you can easily copy-paste my solutions.
Simply create a new Nuxt project and run `tauri init` in it. Try running it when you’re done with `tauri dev`.
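If you want a concrete starting point, one possible command sequence is the following, assuming pnpm and the `nuxi` scaffolder (swap in your package manager of choice):

```sh
# Scaffold a new Nuxt project and enter it.
pnpm dlx nuxi@latest init chatgpt-client
cd chatgpt-client

# Add the Tauri CLI, generate the src-tauri folder, then start the dev build.
pnpm add -D @tauri-apps/cli
pnpm tauri init
pnpm tauri dev
```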
Configure the frontend
When developing Tauri apps there are certain requirements on the frontend your project produces, such as it being statically generated. To make sure your Nuxt project is compatible with Tauri and follows best practices, follow the official guide.
Note that even though we could run the previous step without these configurations, it’s when you actually build the app that you may notice things breaking if they haven’t been set up correctly. This is true for most frameworks. Always try compiling your app in production mode early on and run it, rather than developing your app for several months only to realize it doesn’t work at a fundamental level (based on a true story).
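For reference, the guide ends up with a configuration roughly along these lines; treat this as a sketch and double-check the exact values (host handling, ports, env prefixes) against the official guide:

```ts
// nuxt.config.ts — a sketch of the Tauri-friendly settings; verify against the official guide.
export default defineNuxtConfig({
  // Tauri needs a statically generated frontend, so disable SSR.
  ssr: false,
  devServer: { host: process.env.TAURI_DEV_HOST || 'localhost' },
  vite: {
    // Keep Tauri's own terminal output readable and expose its env variables to the FE.
    clearScreen: false,
    envPrefix: ['VITE_', 'TAURI_'],
    server: { strictPort: true },
  },
});
```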
Add the Tauri JS API
While it’s possible to develop Tauri apps without adding this package, it’s heavily encouraged for JavaScript projects to use it. An alternative is to get the full API exposed at all times on the `window` variable using `"withGlobalTauri": true` in `tauri.conf.json`, but that causes a bit of bloat and isn’t as nice.
The most simplified explanation of the JS API is that it’s a set of premade Tauri commands you can use. You don’t need to have it, but there’s also little point in not making use of it.
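Adding the package is a one-liner:

```sh
pnpm add @tauri-apps/api
```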
Develop a basic frontend
This is where you can let your artistic side out and go nuts. If you don’t want to make the frontend yourself feel free to just use the solution below.
What you should have at the end of this step is a basic website capable of making use of the Tauri JS API. You should have some manner of HTML `<form>` for getting a `submit` event that you can use to send your message to the BE. Then you’ll need some manner of `<ul><li><p>` or other appropriate set of elements that you can reactively create as you receive events with messages from the BE. Additionally there should be some manner of `<input>` somewhere that you can enter your API token into to send to the BE.
I’ll be using Vuetify since it has a lot of components we can use and I’m quite familiar with it. If you don’t want to copy-paste your way to the solution, feel free to `git checkout chatgpt_1_4` now. I’ll be including a couple of extras, like selecting which model to use and markdown rendering, but don’t feel like you need to overdo this; a minimal example could technically be a single HTML file with 10-20 lines of code.
You’re done when you have the following features implemented:
- An `async saveToken` function with access to a `token: string` variable
- Some manner of `<input>` where users can enter the aforementioned token value
- An `async sendMessage` function with access to a `message: string` variable
- Some manner of `<input>` where users can enter the aforementioned message value
- Data reactivity for a `messages` array so that when new messages are pushed to it they show up in the FE, each message in the format `{from: string, text: string}`
Add dependencies.
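For the Vuetify-based solution I’ll be building, that means something along these lines (the exact packages are my assumption; depending on how you import the styles you may also need `sass`):

```sh
pnpm add vuetify @mdi/font
```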
Update your Nuxt config.
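For my setup that roughly means transpiling Vuetify and pulling in its styles plus the icon font, merged into the config from the earlier step:

```ts
// nuxt.config.ts — Vuetify-related additions (a sketch; keep the Tauri settings from before).
export default defineNuxtConfig({
  ssr: false,
  build: { transpile: ['vuetify'] },
  css: ['vuetify/styles', '@mdi/font/css/materialdesignicons.css'],
});
```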
Add a plugin for loading Vuetify.
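One common way to write that plugin looks roughly like this:

```ts
// plugins/vuetify.ts — registers Vuetify on the Vue app that Nuxt creates.
import { createVuetify } from 'vuetify';
import * as components from 'vuetify/components';
import * as directives from 'vuetify/directives';

export default defineNuxtPlugin((nuxtApp) => {
  const vuetify = createVuetify({ components, directives });
  nuxtApp.vueApp.use(vuetify);
});
```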
Now Vuetify and the material design icons are set up and available to use! You can find a list of all the components at your disposal here to develop your UI. Or you can simply keep reading and we’ll set up something basic together.
Modify `app.vue` to use `NuxtLayout`.
Create a default layout.
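As a sketch of what those two files could look like, wrapping every page in Vuetify’s `v-app`:

```vue
<!-- app.vue -->
<template>
  <NuxtLayout>
    <NuxtPage />
  </NuxtLayout>
</template>
```

```vue
<!-- layouts/default.vue -->
<template>
  <v-app>
    <v-main>
      <slot />
    </v-main>
  </v-app>
</template>
```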
Create a main page.
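And a minimal main page covering the feature checklist above; the BE wiring is stubbed out for now and the component choices are just my own:

```vue
<!-- pages/index.vue — a minimal sketch; the invoke/listen wiring comes in a later step. -->
<script setup lang="ts">
const token = ref('');
const message = ref('');
// Each entry follows the {from, text} format from the feature list.
const messages = ref<{ from: string; text: string }[]>([]);

async function saveToken() {
  // Placeholder: will send `token.value` to the BE's `connect` command.
}

async function sendMessage() {
  // Placeholder: will send `message.value` to the BE's `prompt` command.
  message.value = '';
}
</script>

<template>
  <v-container>
    <v-text-field v-model="token" label="API token" type="password" @change="saveToken" />
    <v-list>
      <v-list-item v-for="(msg, i) in messages" :key="i" :title="msg.from" :subtitle="msg.text" />
    </v-list>
    <v-form @submit.prevent="sendMessage">
      <v-text-field v-model="message" label="Message" />
      <v-btn type="submit">Send</v-btn>
    </v-form>
  </v-container>
</template>
```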
Set up logging
To get some nicer looking logging in the terminal, as well as to get the logs to propagate to the frontend, we’re gonna add the `tauri-plugin-log` crate to our project, which is essentially a wrapper around the more well-known `log` crate.
At a global level, attach the logging so you can see it in your devtools.
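A sketch of that attachment, assuming the `@tauri-apps/plugin-log` npm package and a client-only Nuxt plugin so it runs once for the whole app:

```ts
// plugins/logging.client.ts — forwards BE log records to the browser devtools console.
import { attachConsole } from '@tauri-apps/plugin-log';

export default defineNuxtPlugin(async () => {
  // Returns a detach function we could keep around if we ever wanted to stop forwarding.
  await attachConsole();
});
```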
Register the plugin in `lib.rs`.
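On the Rust side, after adding the crate to `src-tauri/Cargo.toml`, the registration is roughly a single builder call:

```rust
// src-tauri/src/lib.rs
pub fn run() {
    tauri::Builder::default()
        // Hooks the `log` macros up to the terminal and the webview console.
        .plugin(tauri_plugin_log::Builder::new().build())
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```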
Add the Tauri commands
To begin with we’re just going to add placeholders for the basic Tauri commands we want to have. We’re not going to go into the specifics of error handling here, so just use the standard library’s `std::result::Result` and don’t return the error to the frontend. For this lab I’ve chosen the following commands:
- `struct Message { role: String, content: String }`: Represents a single message to/from the API.
- `async fn get_messages(app: AppHandle) -> Result<Vec<Message>, ()>`: Retrieves the current list of messages.
- `async fn connect(app: AppHandle, token: String) -> Result<(), ()>`: Stores the token in the BE so that we can use it when we query the ChatGPT API.
- `async fn prompt(app: AppHandle, message: String) -> Result<(), ()>`: Sends a message to the ChatGPT API. This will also set up an emitter for `message` events carrying payloads from the API response, so we can listen to them in the FE.
Set up the commands and register them in the app. We also need to make sure the `Manager` trait is in scope so we extend the capabilities of the `AppHandle` struct.
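A sketch of what the placeholders and their registration could look like, assuming the `log`, `serde` and `tauri-plugin-log` crates are in `src-tauri/Cargo.toml`; the log lines are illustrative, and the unused `app` handles will warn until later steps make use of them:

```rust
// src-tauri/src/lib.rs — placeholder commands; the bodies get filled in during later steps.
use serde::{Deserialize, Serialize};
use tauri::{AppHandle, Manager}; // `Manager` extends `AppHandle` with state handling and more.

#[derive(Clone, Serialize, Deserialize)]
struct Message {
    role: String,
    content: String,
}

#[tauri::command]
async fn get_messages(app: AppHandle) -> Result<Vec<Message>, ()> {
    // Will read from the managed state once we have one.
    Ok(vec![])
}

#[tauri::command]
async fn connect(app: AppHandle, token: String) -> Result<(), ()> {
    log::info!("connect called");
    Ok(())
}

#[tauri::command]
async fn prompt(app: AppHandle, message: String) -> Result<(), ()> {
    log::info!("prompt called: {message}");
    Ok(())
}

#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    tauri::Builder::default()
        .plugin(tauri_plugin_log::Builder::new().build())
        .invoke_handler(tauri::generate_handler![get_messages, connect, prompt])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```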
Connect the FE with the BE
Now that we have a basic FE and BE going it’s time to connect them so we can see what’s going on.
- Make your token input send its value to the BE: `invoke('connect', { token: 'abc123' })`
- Make the messaging form submit its value to the BE: `invoke('prompt', { message: 'Hello!' })`
- Create an event listener that listens to message payloads from the BE: `const unlisten = await listen('message', handler)`
TODO: Develop this part
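Until that part is written out, here’s a rough sketch of what the wiring could look like in the main page’s script, assuming Tauri v2’s `@tauri-apps/api` import paths and a `{role, content}` event payload matching the `Message` struct (both assumptions), with `messages` being the reactive array from the earlier sketch:

```ts
// Inside <script setup lang="ts"> of pages/index.vue.
import { invoke } from '@tauri-apps/api/core';
import { listen } from '@tauri-apps/api/event';

async function saveToken(token: string) {
  await invoke('connect', { token });
}

async function sendMessage(message: string) {
  // Show our own message immediately; the BE only emits the API's replies.
  messages.value.push({ from: 'user', text: message });
  await invoke('prompt', { message });
}

// Push incoming replies into the reactive array; call `unlisten()` on unmount to clean up.
const unlisten = await listen<{ role: string; content: string }>('message', (event) => {
  messages.value.push({ from: event.payload.role, text: event.payload.content });
});
```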
Store data in a State
Tauri has a concept called “states”, something you’ll find in most frameworks out there. They’re a form of singleton value that is accessible throughout your entire app for the entirety of its runtime. Note that they don’t persist after the app exits; that can be implemented manually, but we won’t be going into how to do that here.
We’ll be creating a `ChatGPT` struct here that stores all our messages as well as the API token. In a real application you’d want to store the messages on the filesystem instead of in memory, but this will do just fine for the lab.
TODO: Develop this part
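In the meantime, here’s a sketch of what that could look like, assuming a `std::sync::Mutex` around the struct and registering it during setup:

```rust
// src-tauri/src/lib.rs — managed state sketch.
use std::sync::Mutex;
use tauri::Manager;

#[derive(Default)]
struct ChatGPT {
    token: Option<String>,
    messages: Vec<Message>,
}

// Register the state so every command can reach it through the AppHandle.
pub fn run() {
    tauri::Builder::default()
        .setup(|app| {
            app.manage(Mutex::new(ChatGPT::default()));
            Ok(())
        })
        // ...plugin and invoke_handler calls as before...
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}

// The commands can then read and write it, e.g.:
#[tauri::command]
async fn connect(app: tauri::AppHandle, token: String) -> Result<(), ()> {
    app.state::<Mutex<ChatGPT>>().lock().unwrap().token = Some(token);
    Ok(())
}
```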
Get yourself a ChatGPT API token
If you don’t have one already now is the time to get yourself a ChatGPT API token. Log onto your ChatGPT account and create one now. Make sure you set a spending limit on it just in case, and use the cheaper models during development so you don’t waste money.
TODO: Develop this part
Create a ChatGPT API client
Now that we have somewhere to store our data and a token for the API we’ll be setting up a client that we can use for sending our requests.
TODO: Develop this part
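In the meantime, a sketch of such a client, assuming the `reqwest` crate (with its `json` feature) and `serde_json` have been added to `src-tauri/Cargo.toml`; the model name is just an example:

```rust
use serde_json::json;

// Sends the whole conversation to the chat completions endpoint and returns the reply.
async fn chat_completion(token: &str, messages: &[Message]) -> Result<Message, reqwest::Error> {
    let response: serde_json::Value = reqwest::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(token)
        .json(&json!({ "model": "gpt-4o-mini", "messages": messages }))
        .send()
        .await?
        .json()
        .await?;

    // Pull the assistant reply out of the first choice; error handling stays sloppy on purpose.
    let content = response["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .to_string();

    Ok(Message { role: "assistant".into(), content })
}
```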
Prompt the API
Now that we have a client at our disposal it’s time to use it in the `prompt` command: send the stored conversation to the ChatGPT API and emit the response back to the FE as a `message` event.
TODO: Develop this part
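As a sketch of how the pieces could fit together in `prompt`, reusing the state and client from the previous steps (note the lock is released before any `.await`):

```rust
use tauri::Emitter; // In Tauri v2 the `emit` method comes from the Emitter trait.

#[tauri::command]
async fn prompt(app: tauri::AppHandle, message: String) -> Result<(), ()> {
    // Record the user's message and grab a snapshot of what we need from the state.
    let (token, history) = {
        let state = app.state::<Mutex<ChatGPT>>();
        let mut chat = state.lock().unwrap();
        chat.messages.push(Message { role: "user".into(), content: message });
        (chat.token.clone().ok_or(())?, chat.messages.clone())
    };

    // Query the API, store the reply and push it to the FE as a `message` event.
    let reply = chat_completion(&token, &history).await.map_err(|_| ())?;
    app.state::<Mutex<ChatGPT>>()
        .lock()
        .unwrap()
        .messages
        .push(reply.clone());
    app.emit("message", reply).map_err(|_| ())?;
    Ok(())
}
```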
BONUS: Mobile support
Thanks to the way we’ve developed this application it’s very easy to add mobile support to it. Simply run `pnpm tauri android init` and/or `pnpm tauri ios init`, depending on which target you want to start working on. These commands will set up new projects in `gen/android` and `gen/ios` as well as sanity check your dev environment setup. Note that choosing to `.gitignore` the `gen/` folder is a project-specific decision; in general, if you don’t have advanced requirements, it’s best to ignore it and rely on developing plugins instead of directly modifying the generated projects, though direct modification can in rare cases be necessary.
This lab doesn’t show off nearly all the capabilities we have at our disposal for making the perfect project. There are things both in the code itself and outside it that we could do to improve it. Figure out what some of those are!
- Native token prompt: We could receive the token from a native window instead of from the FE.
- Error handling: We could implement proper error handling instead of just ignoring them.
- Tracing: We are currently not implementing any form of tracing with our solution, so we have no idea which parts the user is actually using, which errors they get or how they caused them.
- Test suite: Our solution is currently completely untested. We could use TDD with `cargo test` and/or BDD with `cucumber-rs`.
- Documentation: We haven’t documented any parts of our project. We could both create a `docs/` folder with e.g. a Starlight-based site and add docstrings with examples to our code.
While a PWA (progressive web app) shares a lot of similarities with a Tauri app, there are features that Tauri provides that you wouldn’t have access to in a basic PWA. Let’s take a moment to appreciate what it is that Tauri has to offer!
- Added security: secrets and privileged operations live in the Rust BE, and the FE can only reach them through the commands and capabilities you explicitly expose.
- Native performance: the heavy lifting runs as compiled Rust in a native process instead of being limited to what the browser sandbox offers.
While we haven’t actively tried to make an insecure solution there are some potential risks involved with how we’ve developed this little lab. Which ones did you spot?
- Token handling: Currently we are relying on submitting the token to the BE via the FE. We could use Egui to pop open a native window for receiving the secret, eliminating the risk of a compromised FE harming the user.
- Malfunctioning JavaScript: We haven’t implemented any checks and balances to prevent JavaScript from calling our commands a million times per second.
- Malicious JavaScript: We haven’t touched upon CSP nor used the Isolation Pattern to mitigate risk. Additionally, we’ve exposed the token to the FE, meaning nothing prevents a compromised FE from intercepting the token and sending it off to some remote location.
- Environment variable exposure: We are using `envPrefix` in a way that’s known to risk leaking secrets. It’s a low-risk issue since the situation it happens in can only occur in poorly managed projects, but it’s still better to expose specific environment variables to the FE rather than wildcarding them.
The best defense is a strong offense. It’s a good practice to take some time to think about how you might go about attacking your own solutions. Don’t just consider potential attacks against your users, you are also very much a potential target to attack. Can you figure out some ways of defending against the attacks?
- Cross-site scripting:
- Malicious dependencies:
- Infect the webview:
- MITM (Man In The Middle):