Lumitra Voice Assistant — Documentation
This document introduces the Lumitra Voice Assistant—a zero-config (or low-config) way to add real-time AI voice interactions to your website. You'll learn how to integrate it quickly with minimal code, then explore advanced configuration options and custom implementations.
Documentation Sections
1. Introduction & Quick Start
Basic concepts and minimal implementation
2. Advanced Configuration
Data attributes and programmatic control
3. Environment-Specific Usage
Integration with common platforms and frameworks
4. Theming, Extensions & FAQs
Customization, common questions and support
Section 1: Introduction & Quick Start
This section introduces the Lumitra Voice Assistant—a zero-config (or low-config) way to add real-time AI voice interactions to your website. By the end of this first section, you'll see how to integrate it in under a minute using just a single <script> tag.
1.1 What Is the Lumitra Voice Assistant?
The Lumitra Voice Assistant is a client-side script that, when embedded on your site, enables a user to have a real-time voice conversation with an AI model (like GPT-4 or GPT-3.5). Think of it as a quick way to provide "Ask our AI" functionality to your visitors. No back-end integration is required on your end—just drop in the script, pass your appId or other parameters, and you can immediately facilitate AI-driven voice sessions.
1.2 Core Features (From a Developer's Perspective)
Real-Time Voice Conversations
Users click "Talk to AI," the browser requests mic permission, and they start talking with an AI model.
Customizable AI Behavior
Control the AI persona (e.g., "You are a sales rep"), the model type, creativity, and more through data-* attributes or JavaScript config.
Zero-Code or Low-Code
If you're happy with the default UI, you just need one <script> tag. If you want advanced logic or your own button, that's also easy.
Simple Embedding, No Extra Servers
The assistant communicates directly with Lumitra's infrastructure, so you don't have to set up or maintain a specialized server for these sessions.
1.3 Quick Start: One Script Tag
If you only need the simplest integration (and don't want to custom-code your own UI), drop this snippet onto a page in your site:
```html
<script
  src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
  data-app-id="YOUR_APP_ID"
  data-show-button="true"
></script>
```
Note: Replace YOUR_APP_ID with your real Lumitra application ID. As soon as the page loads, users will see a floating "Talk to AI" button in the bottom-right corner.
User Experience Flow:
1. They click "Talk to AI." The browser prompts them for microphone permission.
2. They ask questions or chat with the AI model. The default model and voice are used unless you override them with data attributes or advanced config (we'll see that in later sections).
3. They can close the assistant. The session ends, or it can be re-opened if they click the button again.
1.4 Minimal "Hello World" Example
Below is a complete, minimal HTML file showing how you might embed the assistant. You can literally copy-paste this into a .html file and open it in your browser (just note that mic permissions might require an HTTPS context in some browsers):
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>My Lumitra Voice Assistant</title>
  </head>
  <body>
    <h1>Welcome!</h1>
    <p>Click the button below to chat with our AI assistant.</p>

    <!-- Single script that does everything -->
    <script
      src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
      data-app-id="YOUR_APP_ID"
      data-show-button="true"
    ></script>
  </body>
</html>
```
Result: You should see a floating button labeled "Talk to AI." Clicking it triggers the microphone prompt, and once granted, you can start your conversation. That's literally all that's required for a basic setup.
1.5 What's Next?
Of course, there's more you can do: hide the default button, call .show() yourself, change the AI voice/model, or set custom instructions. In Section 2 (Advanced Config & Programmatic Usage), we'll dive into how to pass extra parameters, create your own "Talk to AI" button, or control the assistant in more detail.
If this single-script approach is all you need, you may not even need the rest of the doc. But if you want to go further—especially if you're using React or want to override the default UI—keep reading the upcoming sections.
Section 2: Advanced Configuration & Programmatic Usage
In Section 1, we saw how to embed the Lumitra Voice Assistant with a single <script> tag for a quick start. Now we'll explore how you can:
- Pass extra parameters (AI model, voice, instructions, etc.) via data-* attributes.
- Control the assistant programmatically (show/hide/toggle) if you don't want the default floating button.
- Use the NPM Loader to initialize everything in React or Next.js, skipping inline scripts entirely.
By the end of this section, you'll know how to customize the assistant's behavior, connect it to your own UI elements, or integrate it seamlessly into a modern JS framework.
2.1 Additional Data Attributes
You can pass various data-* attributes on the <script> tag to tailor how the assistant behaves—like which AI model to use, how creative it should be, or what voice style you want. Below is a quick reference for the most common attributes:
| Attribute | Type | Example | Description |
|---|---|---|---|
| data-app-id | String | data-app-id="myAppId" | Required. Your Lumitra application ID (identifies which AI project to use). |
| data-show-button | Boolean | data-show-button="true" | Whether to automatically display the floating "Talk to AI" button. Set to "false" if you want to call .show() yourself. |
| data-instructions | String | data-instructions="You are a helpful AI." | A "persona" or role for the AI. E.g., "You are a marketing assistant." |
| data-model | String | data-model="gpt-4" | Choose which GPT-based model to use (GPT-4, GPT-3.5, etc.). |
| data-voice | String | data-voice="shimmer" | Select the AI's voice style. Possible values depend on your plan (e.g. "alloy", "shimmer", "coral"). |
| data-temperature | Number | data-temperature="0.7" | AI "creativity" level, from 0.0 (strict) to 1.0 (very creative). |
| data-prompt-id | String | data-prompt-id="prompt_12345" | Reference a stored prompt in your Lumitra account. This merges with any additional instructions set here. |
| data-position | String | data-position="bottom-left" | Controls where the assistant widget is placed. Defaults to bottom-right, but you can choose "top-left," "top-right," etc. |
Full Example with All Attributes
```html
<script
  src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
  data-app-id="myAppId"
  data-show-button="true"
  data-instructions="You are a sales agent..."
  data-model="gpt-4"
  data-voice="alloy"
  data-temperature="0.8"
  data-prompt-id="prompt_example"
  data-position="top-left"
></script>
```
Tip: Mix and match the attributes you need. If an attribute is omitted, we use a default (e.g., model="gpt-3.5", temperature=0.7, etc.).
2.2 Programmatic Control (Show/Hide/Toggle)
If you don't want the default floating button, or you want to control the widget's visibility manually, set data-show-button="false" and call these methods from JavaScript:
```js
// Show the assistant
window.Lumitra.lumitraVoiceAssistant.show();

// Hide the assistant
window.Lumitra.lumitraVoiceAssistant.hide();

// Toggle visibility
window.Lumitra.lumitraVoiceAssistant.toggle();

// Check if it's currently visible
console.log(window.Lumitra.lumitraVoiceAssistant.isVisible); // true/false
```
Here's a quick HTML snippet using a custom button:
```html
<!-- Turn off auto-floating button -->
<script
  src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
  data-app-id="myAppId"
  data-show-button="false"
></script>

<!-- Your own button anywhere on the page -->
<button id="myAIButton">Talk to AI</button>

<script>
  document.getElementById('myAIButton').addEventListener('click', () => {
    // Manually show the assistant
    if (window.Lumitra && window.Lumitra.lumitraVoiceAssistant) {
      window.Lumitra.lumitraVoiceAssistant.show();
    }
  });
</script>
```
With this, you can integrate the assistant into your existing UI. For instance, you might open the assistant only after a user logs in, or toggle it from a navigation menu item.
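For instance, here's a minimal sketch of wiring the toggle to an existing navigation link. The nav-ai-toggle id is hypothetical; the toggle() method is the one documented above:

```js
// Assumes an existing nav link with id "nav-ai-toggle" (hypothetical id)
document.getElementById('nav-ai-toggle').addEventListener('click', (event) => {
  event.preventDefault();
  // Guard in case the assistant script hasn't finished loading yet
  if (window.Lumitra && window.Lumitra.lumitraVoiceAssistant) {
    window.Lumitra.lumitraVoiceAssistant.toggle();
  }
});
```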
2.3 NPM Loader (React / Next.js)
If you prefer not to rely on inline <script> tags, you can install our NPM package (@lumitra/lumitra-js) that dynamically loads the same hosted script under the hood. This is perfect for React or Next.js apps that want to initialize the assistant purely in code.
Installation
```sh
# Using npm
npm install @lumitra/lumitra-js

# or pnpm
pnpm add @lumitra/lumitra-js

# or yarn
yarn add @lumitra/lumitra-js
```
Basic Usage in React
```jsx
import React from 'react';
import { loadLumitra } from '@lumitra/lumitra-js';

function MyAssistantButton() {
  async function startAssistant() {
    // This injects the hosted script if it's not already present
    const Lumitra = await loadLumitra();

    const session = await Lumitra.initializeLumitraSession({
      appId: 'myAppId',
      instructions: 'You are a friendly FAQ bot.',
      model: 'gpt-4',
      temperature: 0.7,
      voice: 'shimmer',
      // Any other fields, like promptId or customFields
    });

    // Connect to start the voice session
    await session.connect();
    console.log('Assistant is connected:', session.isConnected);
  }

  return (
    <button onClick={startAssistant}>
      Start My AI Assistant
    </button>
  );
}

export default MyAssistantButton;
```
Once the button is clicked, the script loads (if not already loaded). Then initializeLumitraSession fetches your session details, returning a session object.
This approach does not create the default floating button UI for you. Instead, you have a session object. The internal voice assistant logic is running, but you must provide your own mechanism to show or hide a widget if you want that. Some devs simply rely on the session object, while others import a "lumitra-voice-assistant.js" React component. We'll explore that more in advanced usage (Section 3).
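If you do want a visible widget around that session object, one approach is to keep the session in React state and render your own panel. A rough sketch, using only the loadLumitra, initializeLumitraSession, connect, and disconnect calls shown in this section (the panel markup is entirely yours to design):

```jsx
import React, { useState } from 'react';
import { loadLumitra } from '@lumitra/lumitra-js';

// Rough sketch: keep the session in state and render your own panel around it.
function MyAssistantPanel() {
  const [session, setSession] = useState(null);

  async function openAssistant() {
    const Lumitra = await loadLumitra();
    const s = await Lumitra.initializeLumitraSession({ appId: 'myAppId' });
    await s.connect();
    setSession(s);
  }

  function closeAssistant() {
    if (session) session.disconnect();
    setSession(null);
  }

  return session ? (
    <div className="my-assistant-panel">
      <p>Voice session active...</p>
      <button onClick={closeAssistant}>End session</button>
    </div>
  ) : (
    <button onClick={openAssistant}>Talk to AI</button>
  );
}

export default MyAssistantPanel;
```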
Advanced Fields & Cleanup
You can pass all the same fields as data attributes—like model, voice, promptId—directly to initializeLumitraSession. If you want to fully disconnect:
```js
session.disconnect();
console.log('Assistant disconnected.');
```
This stops audio processing. If the user wants to talk again, they'll need to start a new session.
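In a React app, a common pattern is to disconnect when the component that owns the session unmounts. A minimal sketch, assuming you keep the session object in component state and relying on the isConnected and disconnect members shown above:

```jsx
import { useEffect } from 'react';

// Sketch: disconnect a held session when the owning component unmounts,
// so audio processing doesn't keep running in the background.
function useSessionCleanup(session) {
  useEffect(() => {
    return () => {
      if (session && session.isConnected) {
        session.disconnect();
      }
    };
  }, [session]);
}

export default useSessionCleanup;
```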
2.4 Next Steps
In this section, we covered:
- Adding advanced data-* attributes to the script tag (model, voice, instructions, etc.).
- Programmatic control of the default assistant UI, so you can decide when to show or hide it.
- NPM-based usage (React/Next.js) if you prefer not to rely on a script tag in your HTML.
In Section 3, we'll dig into environment-specific usage (like Webflow, WordPress, and plain HTML) plus troubleshooting tips for mic permissions or domain whitelisting. We'll also mention how you can handle custom front-end approaches (like building your own UI around the assistant).
Section 3: Environment-Specific Usage & Fully Custom Front-End
In Section 2, we explored advanced configuration via data attributes and programmatic calls—plus an NPM loader approach for React/Next.js. Now we'll look at how to embed the assistant in common environments (like Webflow or WordPress), then see how to build a fully custom front-end using the AI session details. We'll finish with key troubleshooting tips for mic permissions, domain whitelisting, and firewalls.
3.1 Environment-Specific Usage
The single script tag (plus optional data attributes) is typically enough for any site builder or CMS that allows you to inject custom HTML. Below are some pointers for a few common platforms:
A) Webflow
In Webflow, you can place the script inside a Site-wide Custom Code area (Project Settings → Custom Code) or in a regular Embed element on a specific page. For example:
```html
<!-- In a Webflow Embed block -->
<script
  src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
  data-app-id="myAppId"
  data-show-button="true"
></script>
```
If you want a custom button approach, just set data-show-button="false" and call .show() from your own embedded <script> block.
B) WordPress
For WordPress, you have several options:
- Add a Custom HTML block in the Block Editor (Gutenberg):
```html
<script
  src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
  data-app-id="myAppId"
  data-show-button="true"
></script>
```
- Use a plugin like Insert Headers and Footers, then paste the snippet into the Scripts in Footer area.
C) Plain HTML / Static Site
If you have a simple static site (e.g. an index.html file), just place the snippet in the <head> or near the bottom of <body> (with no special plugin or environment needed).
D) React (Create React App)
If you are using Create React App (not Next.js) and prefer the minimal script approach, you can place your script in public/index.html:
```html
<!-- in public/index.html -->
<script
  src="https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js"
  data-app-id="myAppId"
  data-show-button="true"
></script>
```
Or, as shown in Section 2, you can install our NPM loader (@lumitra/lumitra-js) for a purely code-driven approach.
3.2 Fully Custom Front-End (Advanced LiveKit Usage)
If you want to skip the default assistant UI entirely—maybe you have a multi-participant audio scenario or custom branding—Lumitra can still generate the session. You just need to call initializeLumitraSession() and get the serverUrl plus participantToken to feed into your own LiveKit setup.
A) Retrieving Server URL & Token
```js
// Suppose you have already loaded the lumitra script or used the NPM loader
const session = await window.Lumitra.initializeLumitraSession({
  appId: 'myAppId',
  instructions: 'Act as a financial advisor.',
  model: 'gpt-4',
  // Additional fields: voice, customFields, promptId, etc.
});

await session.connect();

console.log('Server URL:', session.serverUrl); // e.g. "wss://..."
console.log('Participant Token:', session.participantToken); // JWT for joining the LiveKit room
```
At this point, you have everything you need to manually create or join a LiveKit room (with or without the default UI).
B) Example: Using LiveKitRoom in React
```jsx
import React from 'react';
import { LiveKitRoom } from '@livekit/react';

function MyCustomAudioUI({ serverUrl, token }) {
  return (
    <LiveKitRoom
      serverUrl={serverUrl}
      token={token}
      audio={true}
      video={false}
      onConnected={() => console.log('Connected to LiveKit with a custom UI')}
      onDisconnected={() => console.log('Disconnected')}
    >
      {/* Build your own interface or audio controls here */}
    </LiveKitRoom>
  );
}

export default MyCustomAudioUI;
```
You could feed session.serverUrl and session.participantToken into this component, bypassing the out-of-the-box assistant interface. You still get the full AI conversation because the session is already associated with your appId and instructions. All that's left is styling and controlling the user experience how you see fit.
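Putting the two pieces together, here's a sketch of a parent component that initializes the session and hands its credentials down. It assumes the hosted script (or NPM loader) has already exposed window.Lumitra, and that MyCustomAudioUI is the component above:

```jsx
import React, { useEffect, useState } from 'react';
import MyCustomAudioUI from './MyCustomAudioUI';

function CustomAssistantPage() {
  const [creds, setCreds] = useState(null);

  useEffect(() => {
    async function init() {
      // Assumes the hosted script (or NPM loader) already exposed window.Lumitra
      const session = await window.Lumitra.initializeLumitraSession({
        appId: 'myAppId',
        instructions: 'Act as a financial advisor.',
      });
      await session.connect();
      setCreds({ serverUrl: session.serverUrl, token: session.participantToken });
    }
    init();
  }, []);

  if (!creds) return <p>Connecting...</p>;
  return <MyCustomAudioUI serverUrl={creds.serverUrl} token={creds.token} />;
}

export default CustomAssistantPage;
```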
C) Mic/Device Handling
If you rely entirely on your own UI, you might handle microphone permissions or device selection differently. That's fine. The key is that you have the token and the AI session.
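If you do want device selection in your own UI, the standard browser APIs are enough. A sketch using navigator.mediaDevices (nothing Lumitra-specific here):

```js
// List available microphones so users can pick one in your custom UI.
// Calling getUserMedia first is what makes device labels visible in most browsers.
async function listMicrophones() {
  await navigator.mediaDevices.getUserMedia({ audio: true }); // triggers the permission prompt
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter((device) => device.kind === 'audioinput');
}

listMicrophones().then((mics) => {
  mics.forEach((mic) => console.log(mic.deviceId, mic.label));
});
```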
3.3 Troubleshooting
Below are the most common issues that might arise during setup or runtime, along with possible solutions.
A) Mic Permission Denied
Browsers generally require HTTPS for mic usage. If the user clicks "Block," you'll see an error or the assistant might show an in-UI message. The best fix is to prompt them to allow mic access or check browser settings. In dev mode, you can run localhost over HTTPS or use a local tunnel like ngrok if needed.
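If you want to detect the permission state up front (for example, to show your own "please enable your mic" hint), you can try the standard Permissions API. A sketch; note that the 'microphone' permission name isn't supported by every browser:

```js
// Check mic permission state before starting a session.
async function micPermissionState() {
  try {
    const status = await navigator.permissions.query({ name: 'microphone' });
    return status.state; // 'granted', 'denied', or 'prompt'
  } catch (err) {
    return 'unknown'; // Permissions API unavailable or name unsupported
  }
}
```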
B) Domain Whitelisting / 403 Errors
Your domain must be authorized in your Lumitra account. If you see 403 or "invalid token" errors, ensure your appId is correct and that your domain is whitelisted. Contact your Lumitra account admin or support if you're unsure.
C) Firewalls / Corporate Networks
Real-time audio requires WebSockets on certain ports. Some corporate networks or older proxies may block them. If your users can't connect from a specific environment, instruct them to contact IT or test from a more open network.
D) Assistant Not Visible
If data-show-button="false" is set, the floating button won't appear. You must call window.Lumitra.lumitraVoiceAssistant.show() or use another approach to reveal the UI. Also watch out for other CSS elements with a higher z-index overlapping the assistant.
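A related symptom is an undefined-global error: if your code calls show() before the hosted script finishes loading, window.Lumitra won't exist yet. A simple polling sketch (the retry count and interval are arbitrary):

```js
// Poll until the hosted script has attached its global, then reveal the UI.
function showAssistantWhenReady(retries = 20) {
  if (window.Lumitra && window.Lumitra.lumitraVoiceAssistant) {
    window.Lumitra.lumitraVoiceAssistant.show();
  } else if (retries > 0) {
    setTimeout(() => showAssistantWhenReady(retries - 1), 250);
  }
}

showAssistantWhenReady();
```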
E) Duplicate or Conflicting Scripts
If you embed multiple <script> tags or call the NPM loader multiple times, you might see weird behavior or collisions. Keep it to one instance of the assistant unless you have a special scenario and know what you're doing.
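If you inject the script from code (for example, in an SPA where the embed logic can run on every route change), a sketch of a guard that prevents double-injection:

```js
// Guard against injecting the hosted script twice.
const SRC = 'https://voice.lumitra.io/v1.2/lumitra-voice-assistant.js';

if (!document.querySelector(`script[src="${SRC}"]`)) {
  const script = document.createElement('script');
  script.src = SRC;
  script.dataset.appId = 'myAppId';   // becomes data-app-id
  script.dataset.showButton = 'true'; // becomes data-show-button
  document.body.appendChild(script);
}
```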
3.4 Moving Forward
Now you know how to embed the assistant across various site builders (Webflow, WordPress, static HTML, React) and even build a fully custom front-end by grabbing the serverUrl and participantToken. You've also seen solutions for common issues like mic denial or domain whitelisting.
In Section 4, we'll wrap up with any final notes on theming, advanced extensibility, and next steps for contacting support or requesting new features.
Section 4: Theming, Extensions, FAQs, & Final Steps
In Section 3, you learned how to integrate the Voice Assistant into various environments (Webflow, WordPress, plain HTML) and even create a fully custom UI with LiveKit. Now we'll cover optional theming/styling tips, plus advanced ideas like session recording. We'll also address a quick FAQ before wrapping up with support resources.
4.1 Optional Theming or Styling
If you use the default assistant UI (floating button + built-in panel), you can position it with data-position. Beyond that, you might only have minimal styling overrides (like a small CSS tweak for the button). For heavy customization, see the "Fully Custom Front-End" approach from Section 3.2—where you build your own UI entirely.
Minor CSS Overrides
Some developers add a short CSS rule to shift or restyle the button container. For instance:
```css
/* Example: Shift the default bottom-right button up a bit more */
#lumitra-floating-button-container {
  bottom: 60px !important;
}

/* Example: Give the 'Talk to AI' button a different background color */
.lk-floating-button {
  background-color: #E53E3E !important; /* a red tone */
  color: white !important;
}
```
We don't guarantee full theming support, but these minimal overrides might help you brand the default button. Again, for deeper control, you'd build a custom approach or place the session data into your own UI.
4.2 Extending the Voice Assistant
Beyond toggling or data attributes, here are a few advanced ideas:
- Recording / Playback: If your plan includes server-based recording, you could pass something like recordingEnabled: true in initializeLumitraSession. Then retrieve recorded audio segments from your Lumitra account or logs. This can help you archive conversations or review them for compliance.
- Custom Fields: Provide customFields to pass user-specific data or contextual info (e.g., company size, product version). The AI can incorporate these in real time, producing more relevant answers.
- State Tracking / onStateChange: If you prefer more direct event handling, the session object might allow you to listen to changes (like "connected," "disconnected," etc.). This is especially useful if you're building your own advanced front-end (see the sketch after this list).
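To make the shape of these ideas concrete, here's a purely illustrative sketch. The recordingEnabled flag and onStateChange listener are hypothetical names (the list above only says such options might exist), so confirm against your plan and the current API before relying on them:

```js
import { loadLumitra } from '@lumitra/lumitra-js';

async function startExtendedSession() {
  const Lumitra = await loadLumitra();
  const session = await Lumitra.initializeLumitraSession({
    appId: 'myAppId',
    recordingEnabled: true, // hypothetical: server-side recording flag
    customFields: { companySize: '50-200', productVersion: '3.1' },
  });

  // Hypothetical state listener for building your own status indicator
  if (typeof session.onStateChange === 'function') {
    session.onStateChange((state) => {
      console.log('Assistant state:', state); // e.g. 'connected', 'disconnected'
    });
  }

  await session.connect();
  return session;
}
```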
If you plan to do something highly specialized (multiple concurrent sessions, dynamic prompt switching, etc.), consider the fully custom approach with token-based usage from Section 3.2.
4.3 Frequently Asked Questions
Q: How do I pass user-specific data to the AI?
A: If you're using data attributes, you might rely on a separate param, or set data-instructions dynamically. For the JS approach, pass customFields inside initializeLumitraSession():
```js
const session = await Lumitra.initializeLumitraSession({
  appId: "myAppId",
  instructions: "Tailor answers for a premium user named Lucy.",
  customFields: { userName: "Lucy", membershipLevel: "premium" },
});
```
The AI can see these fields behind the scenes. Additional context can lead to more relevant answers.
Q: Which voice or model should I pick?
A: It depends on your plan. You might have access to GPT-4 or only GPT-3.5. Similarly, voices like "shimmer," "alloy," or "coral" might require higher-tier plans. Check your Lumitra subscription or contact support if you're unsure.
Q: Does it work on mobile browsers?
A: Yes, iOS Safari and Android Chrome are fully supported, though the user still must allow microphone permission. If mic permission is blocked at the OS level, they can't speak with the AI.
Q: Can I brand the floating button or customize it more deeply?
A: Minor styling is possible with CSS overrides, but for full control, you'd build a custom front-end. That means you handle the UI, and Lumitra just provides the session logic + tokens.
4.4 Contact & Next Steps
That wraps up the main sections of the Lumitra Voice Assistant docs. Here are your final resources if you need more help or want to propose new ideas:
- Support Email / Portal: If you have an urgent issue or can't get a session working, email us at support@lumitra.io or use our help portal for real-time assistance.
- Feature Requests: If you'd like new voices, higher-level models, advanced logging, or more custom UI examples, let us know! We prioritize requests that align with our user community's needs.
- Server-Side API Docs: If you also want to manage sessions or store transcripts on your side, we have separate "Lumitra Server-Side API" docs that explain how to do deeper integration. Contact us if you can't find them.
Thanks again for checking out the Lumitra Voice Assistant. We hope you enjoy building interactive, voice-enabled AI experiences for your users!