🤖 AI-First Scripting with GoloScript
- GoloScript: AI scripting that feels like magic.
- Build AI-powered applications with zero boilerplate. GoloScript is designed for the AI era.
- Start building AI applications without the complexity.
Why GoloScript for AI?
GoloScript eliminates the complexity of AI integration. Stream LLM responses, build agents, and handle tool calls with simple, expressive code—no frameworks needed.
Stream LLM Responses in Minutes
# Stream responses from any LLM in a few lines of code
let agent = NewAgent("MyAssistant")
agent: streamCompletion(
  "Write a sci-fi story about AI",
  |chunk| {
    print(chunk: content()) # Stream directly to the console
    return true # Continue streaming
  }
)
That’s it. No connection pooling. No complex async handling. Just results.
Built-in AI Primitives
Native OpenAI Client
Connect to any OpenAI-compatible API instantly:
# Configuration
let baseURL = "http://localhost:12434/engines/llama.cpp/v1"
let apiKey = "I💙DockerModelRunner"
let model = "ai/qwen2.5:1.5B-F16"
let client = openAINewClient(
  baseURL,
  apiKey,
  model
)

let messages = list[
  DynamicObject(): role("system"): content("You are a helpful assistant"),
  DynamicObject(): role("user"): content("Explain quantum computing")
]
# Non-streaming completion
let response = openAIChatCompletion(client, messages)
println(response: content())
# Streaming completion
openAIChatCompletionStream(client, messages, |chunk| {
  print(chunk: content())
  return true
})
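Because the chunk callback returns a boolean to continue or stop, it can also accumulate the streamed text instead of printing it. A minimal sketch using the same openAIChatCompletionStream API (the join("") call on the list is an assumption about the standard library, not documented here):

let parts = list[]
openAIChatCompletionStream(client, messages, |chunk| {
  parts: add(chunk: content()) # Collect each delta as it arrives
  return true # Keep streaming until the response is complete
})
let fullText = parts: join("") # Assumes lists support join(separator)
println(fullText)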
JSON Templates for AI Requests
Build complex AI requests with readable templates:
let jsonTemplate = """
{
  "model": "{{.Model}}",
  "temperature": {{.Temperature}},
  "messages": [
    {"role": "system", "content": "{{.SystemInstruction}}"},
    {"role": "user", "content": "{{.UserContent}}"}
  ],
  "stream": true
}
"""

let data = DynamicObject()
  : Model("gpt-4")
  : Temperature(0.7)
  : SystemInstruction(escapeJSON(systemPrompt))
  : UserContent(userQuestion)

let requestBody = template(jsonTemplate, data)
Or skip templates entirely with native JSON encoding:
let request = DynamicObject()
  : model("gpt-4")
  : temperature(0.7)
  : messages([
      DynamicObject(): role("system"): content(systemPrompt),
      DynamicObject(): role("user"): content(userQuestion)
    ])
let json = toJSON(request) # Automatic escaping, perfect JSON
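For the reverse direction, the Key Features section below names a fromJSON() function. Its exact return type isn't documented here; a plausible round-trip sketch, assuming it yields an object with the same accessor style as DynamicObject:

# Hypothetical round-trip: parse a JSON response back into an object
let raw = """{"role": "assistant", "content": "Qubits exploit superposition..."}"""
let parsed = fromJSON(raw) # Assumed to return a DynamicObject-like value
println(parsed: content()) # Accessor style assumed to mirror the toJSON input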
Real-World AI Patterns
Streaming Chat Agent
struct ChatAgent = {
  name,
  client,
  systemMessage,
  options
}

function NewAgent = |name| -> ChatAgent()
  : name(name)
  : client(openAINewClient(baseURL, apiKey, model))
  : systemMessage(
      DynamicObject()
        : role("system")
        : content("You are a creative storyteller.")
    )
  : options(DynamicObject(): temperature(0.7): topP(0.9))
augment ChatAgent {
  function streamCompletion = |this, userMessage, onChunk| {
    let messages = list[
      this: systemMessage(),
      DynamicObject(): role("user"): content(userMessage)
    ]
    try {
      let stats = openAIChatCompletionStream(
        this: client(),
        messages,
        onChunk,
        this: options()
      )
      return Result(stats)
    } catch (e) {
      return Error("Error: " + e)
    }
  }
}
Usage:
let agent = NewAgent("StoryTeller")
let result = agent: streamCompletion(
  "Write a short sci-fi story",
  |chunk| {
    if chunk: error() != null {
      println("Error:", chunk: error())
      return false # Stop streaming
    }
    print(chunk: content())
    return true # Continue
  }
)

if result: isOk() {
  let stats = result: value()
  println("\nTokens:", stats: totalTokens())
}
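The introduction also mentions handling tool calls, which the examples above don't cover. The exact GoloScript tool-call API isn't shown in this document, so the following is a hypothetical sketch: it assumes the options object accepts a tools list and that chunks expose a toolCalls() accessor mirroring the OpenAI wire format.

# Hypothetical tool-call handling (accessor names are assumptions)
let tools = list[
  DynamicObject()
    : type("function")
    : function(DynamicObject()
        : name("get_weather")
        : description("Get the current weather for a city"))
]
let options = DynamicObject(): temperature(0.0): tools(tools)

openAIChatCompletionStream(client, messages, |chunk| {
  if chunk: toolCalls() != null { # Assumed accessor for tool-call deltas
    foreach call in chunk: toolCalls() {
      println("Tool requested:", call: name())
    }
  }
  return true
}, options)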
Key Features
- OpenAI Compatible: Works with OpenAI, Ollama, llama.cpp, and any compatible API
- Zero Dependencies: No npm packages, no pip installs—just GoloScript
- JSON First: Native toJSON() and fromJSON() for perfect AI requests
- Template Engine: String templates with automatic JSON escaping
- Error Handling: Functional error handling with Result/Error types
Perfect For
- Rapid prototyping - Test AI ideas in minutes, not hours
- Local LLM integration - Perfect for Ollama, llama.cpp, LM Studio
- Automation scripts - AI-powered CLI tools and workflows
- Learning AI - Simple syntax, powerful results
- Production microservices - Fast, lightweight, no runtime bloat