Tensoria
🤖 AI Agents & Autonomous Workflows · Production-grade, observable, cost-bounded
Book a call →

AI agents that work in production

We ship tool-using LLMs that handle multi-step workflows your team actually relies on. Instrumented from day one. Cost-bounded. With eval pipelines, not vibes.

What we build

Six categories of agents we have shipped to production. Each comes with observability, cost controls, and eval — not as options, but as baseline requirements.

Conversational Agents

Stateful assistants with memory, tool access, and context management across long sessions. Built on structured conversation graphs — not infinite message arrays that exhaust context windows and fail silently.

Autonomous Workflow Agents

Agents that drive multi-step business processes end to end: data gathering, decision branching, external API calls, and output generation — without a human in the loop for each step. Designed with explicit loop-break conditions and human escalation paths.

Research Agents

Multi-step information synthesis: web search, document retrieval, structured data extraction, cross-source reasoning. We pair these with RAG retrieval when the knowledge base is internal — agents decide when to retrieve and what to retrieve next.

Customer Support Agents

Tier-1 support automation with tool access to your CRM, ticketing system, and knowledge base. Graceful escalation to human agents when confidence is low or intent is outside scope — with full context handoff, not a dead end.

Internal Ops Agents

Agents that handle internal back-office workflows: report generation, data reconciliation, scheduling, compliance checks, Slack-triggered actions. They replace ad-hoc scripting and manual copy-paste chains with auditable, observable automation.

Integration Agents

Agents that bridge systems that were never designed to talk to each other: ERP to CRM, API to spreadsheet, legacy SOAP endpoints to modern REST surfaces. We use MCP (Model Context Protocol) where it reduces integration surface area — your existing teams publish tool endpoints, the agent discovers and uses them.

Engineering stack

The tools we actually use — chosen per project, not because they trend on Twitter. No framework loyalty, no hidden vendor dependency.

Orchestration Frameworks

LangGraph for stateful graph-based workflows with explicit state machines and human-in-the-loop nodes. CrewAI for role-based multi-agent delegation. AutoGen for conversational agent loops. Raw OpenAI or Anthropic SDK for latency-critical paths where framework overhead is unacceptable. We know when to use each — and when to use none.
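
As a point of reference, here is a minimal LangGraph sketch of that stateful, graph-based style with a human-in-the-loop interrupt. It assumes recent LangGraph APIs; the node names, state shape, and step limit are illustrative, not our production scaffold.

```python
# Sketch: a two-node LangGraph loop with an explicit state schema, a hard step
# limit, and a human-approval interrupt before the tool-executing node.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class AgentState(TypedDict):
    messages: list[dict]   # running conversation / tool transcript
    step_count: int        # incremented per loop, used for the hard limit

def plan(state: AgentState) -> AgentState:
    # Call the LLM to decide the next action (omitted); bump the step counter.
    return {**state, "step_count": state["step_count"] + 1}

def act(state: AgentState) -> AgentState:
    # Execute the chosen tool call (omitted) and append the observation.
    return state

def should_continue(state: AgentState) -> str:
    # Hard step limit: terminate rather than loop forever.
    return END if state["step_count"] >= 10 else "act"

graph = StateGraph(AgentState)
graph.add_node("plan", plan)
graph.add_node("act", act)
graph.set_entry_point("plan")
graph.add_conditional_edges("plan", should_continue, {"act": "act", END: END})
graph.add_edge("act", "plan")

# interrupt_before pauses the run before "act" so a human can approve the plan.
app = graph.compile(checkpointer=MemorySaver(), interrupt_before=["act"])
```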

Tool Calling

OpenAI function calling and Anthropic tool use as the core primitives. We design tool schemas that minimize hallucinated calls — tight argument typing, enum constraints, clear descriptions. Every tool is independently unit-tested before it enters the agent loop. Parallel tool calls where supported; sequential where ordering matters.
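
As an illustration, a tightly typed tool definition in the Anthropic tool-use format might look like this. The tool and its fields are hypothetical; the point is the narrow argument space and enum constraints.

```python
# Sketch: a tool schema that leaves the model little room to hallucinate arguments.
lookup_order = {
    "name": "lookup_order",
    "description": "Fetch a single order by its ID. Returns status and line items.",
    "input_schema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Exact order ID, e.g. 'ORD-10482'. Never guess.",
                "pattern": "^ORD-[0-9]{5}$",
            },
            "include": {
                "type": "string",
                "enum": ["summary", "line_items", "full"],
                "description": "How much detail to return.",
            },
        },
        "required": ["order_id", "include"],
        "additionalProperties": False,
    },
}

# Passed to the API roughly as:
# client.messages.create(model=..., tools=[lookup_order], messages=[...])
```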

State Management

Agent state lives in Redis (short-term, session-scoped) or Postgres (persistent, auditable). LangGraph checkpointers for mid-run interruption and resume. No in-memory state that evaporates on restart. State schema is versioned — upgrades don't corrupt running sessions.
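
A minimal sketch of the session-scoped side, assuming a local Redis instance; the key naming, TTL, and migration step are illustrative choices, not requirements.

```python
# Sketch: versioned, session-scoped agent state in Redis, with an explicit
# migration step so a deploy never corrupts in-flight sessions.
import json
import redis

STATE_VERSION = 2
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_state(session_id: str, state: dict) -> None:
    payload = {"version": STATE_VERSION, "state": state}
    r.set(f"agent:session:{session_id}", json.dumps(payload), ex=3600)

def load_state(session_id: str) -> dict:
    raw = r.get(f"agent:session:{session_id}")
    if raw is None:
        return {"messages": [], "step_count": 0}
    payload = json.loads(raw)
    if payload["version"] < STATE_VERSION:
        payload["state"] = migrate(payload["state"], payload["version"])
    return payload["state"]

def migrate(state: dict, from_version: int) -> dict:
    # v1 -> v2: add the step counter that newer loop-limit logic expects.
    if from_version == 1:
        state.setdefault("step_count", 0)
    return state
```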

MCP — Model Context Protocol

We expose internal systems as MCP servers: your database, internal APIs, file systems, SaaS tools. Any compliant agent runtime can then discover and call those endpoints without bespoke glue code per integration. Reduces integration surface area at scale and lets your platform team publish new capabilities independently of the agent layer.
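
A minimal sketch of what publishing a capability looks like, assuming the official MCP Python SDK's FastMCP helper; the server name, tool, and data source are hypothetical.

```python
# Sketch: exposing an internal lookup as an MCP tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-crm")

@mcp.tool()
def get_account(account_id: str) -> dict:
    """Return the CRM record for a single account ID."""
    # In practice this calls the internal CRM API; stubbed here.
    return {"account_id": account_id, "tier": "enterprise", "open_tickets": 3}

if __name__ == "__main__":
    # Any MCP-compliant agent runtime can now discover and call get_account.
    mcp.run()
```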

Observability

LangSmith for full trace capture: every LLM call, tool invocation, token count, and latency per step. Helicone for provider-level cost tracking and rate limiting. Custom dashboards for business-level metrics: task completion rate, escalation rate, average steps per run. Nothing ships to production without traces on by default.
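
For instance, wrapping a tool in LangSmith's tracing decorator is enough for every invocation, its inputs, outputs, and latency to appear in the trace tree. This assumes the langsmith package and an API key in the environment; the tool itself is hypothetical.

```python
# Sketch: per-tool tracing with LangSmith.
from langsmith import traceable

@traceable(name="lookup_order", run_type="tool")
def lookup_order(order_id: str) -> dict:
    # Real implementation would hit the order service; stubbed here.
    return {"order_id": order_id, "status": "shipped"}
```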

Evaluation Pipelines

Task-level eval sets built from real scenarios before POC begins. Unit tests per tool. Integration tests that replay full agent traces against expected outcomes. LLM-as-judge for open-ended output quality. Regression gates block deployment if task completion rate drops below threshold. Eval is not an afterthought.
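
A sketch of what the regression gate can look like in CI; the scenarios, the threshold, and the run_agent entry point are illustrative.

```python
# Sketch: a CI gate that fails the deploy when task completion rate drops
# below the agreed threshold.
import sys

COMPLETION_THRESHOLD = 0.90

# Each scenario pairs a real input with an explicit, checkable success criterion.
SCENARIOS = [
    {"input": "Refund order ORD-10482", "check": lambda out: out.get("action") == "refund_issued"},
    {"input": "What is your SLA?",      "check": lambda out: "24 hours" in out.get("answer", "")},
]

def run_agent(user_input: str) -> dict:
    # Hypothetical entry point: invoke the agent under test and return its output.
    return {"action": "refund_issued", "answer": "We respond within 24 hours."}

def main() -> None:
    passed = sum(1 for s in SCENARIOS if s["check"](run_agent(s["input"])))
    rate = passed / len(SCENARIOS)
    print(f"task completion rate: {rate:.2%} over {len(SCENARIOS)} scenarios")
    sys.exit(0 if rate >= COMPLETION_THRESHOLD else 1)

if __name__ == "__main__":
    main()
```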

Patterns we use — and when

Agent architecture is not one pattern. The choice of reasoning loop, control structure, and collaboration model has direct consequences for latency, cost, reliability, and debuggability. We pick deliberately.

Honest disclaimer: most projects don't need a full autonomous agent. If your workflow can be expressed as a deterministic DAG with 1-2 LLM steps embedded, that is almost always the better choice — lower cost, lower latency, easier to test, easier to operate. We will tell you this straight before we scope the engagement.

"An agent is justified when the decision about which steps to take — and in what order — requires reasoning that you cannot pre-encode. If you can write the flowchart, write the flowchart."

01. ReAct (Reason + Act)

The default loop: think, choose a tool, observe the result, think again. Best for single-agent tasks where step count is bounded and the tool set is small. Fails under very long horizons — switch to planning + execution.

02. Planning + Execution

Planner LLM decomposes the task into a structured sub-task list; executor agents run each sub-task. Separates high-level reasoning from tool use. Good for long-horizon tasks. Plan can be reviewed by a human before execution fires.

03. Multi-Agent Collaboration

Specialized sub-agents with scoped tool sets; an orchestrator routes tasks and merges outputs. Used when the tool surface is too large for a single context window or when parallelism is critical. Adds coordination overhead — only justified when specialization wins.

04. Self-Correction Loops

Agent outputs are validated by a critic step (rule-based or LLM-based). On failure, the agent retries with error context. Hard retry limits are mandatory. Used for code generation, structured data extraction, and any output that has a verifiable correctness criterion.

05. Hybrid Workflow + Agent

The most common production pattern. Deterministic orchestration layer handles routing, scheduling, and integration. Agent nodes handle only the steps that genuinely require reasoning. You get reliability from the workflow and flexibility from the agent — without paying the full non-determinism tax. A minimal sketch of this pattern follows the list.
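
Here is that hybrid sketch, with a ReAct-style loop (pattern 01) embedded as the single reasoning step. Every helper below is a toy stand-in for a real integration; the shapes and limits are illustrative.

```python
# Sketch of the hybrid pattern: deterministic routing, with exactly one
# agentic step that runs under a hard step budget.

def llm_choose_action(state: dict) -> dict:
    # Stand-in for an LLM call that picks the next action from the state.
    return {"type": "finish", "result": {"resolution": "refund approved"}}

def execute_tool(action: dict) -> dict:
    # Stand-in for a real tool invocation.
    return {"observation": "ok"}

def run_bounded_agent(goal: str, context: dict, max_steps: int) -> dict:
    """ReAct-style loop (pattern 01) embedded inside the hybrid pattern (05)."""
    state = {"goal": goal, "context": context, "observations": []}
    for _ in range(max_steps):
        action = llm_choose_action(state)
        if action["type"] == "finish":
            return action["result"]
        state["observations"].append(execute_tool(action))
    return {"escalate": True, "reason": "step budget exhausted"}

def handle_ticket(ticket: dict) -> dict:
    # Deterministic branching; the agent is invoked only where reasoning is needed.
    if ticket["category"] in ("billing", "refund"):
        return run_bounded_agent(
            goal="Resolve or escalate this billing ticket",
            context={"ticket": ticket},
            max_steps=6,
        )
    return {"resolution": f"standard reply for {ticket['category']}"}
```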

Failure modes we solve

We name these upfront. Vendors who don't are either not shipping to production or are not planning to maintain what they build.

Prompt injection via tool outputs

An adversarial string in a retrieved document or API response hijacks the agent's next action. Fix: output sanitization before injection into context, tool output schemas that strip free-text fields from untrusted sources, and sandboxed tool execution environments.
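
A sketch of the sanitization step; the field whitelist and delimiter convention are illustrative choices, and sandboxed execution sits outside this function.

```python
# Sketch: sanitizing untrusted tool output before it enters the context window.
import re

ALLOWED_FIELDS = {"title", "status", "amount", "created_at"}

def sanitize_tool_output(raw: dict) -> str:
    # 1. Keep only schema-approved fields; drop free text from untrusted sources.
    kept = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    # 2. Strip characters commonly used to fake system or assistant turns.
    cleaned = {k: re.sub(r"[<>{}`]", "", str(v)) for k, v in kept.items()}
    # 3. Wrap in an explicit data block so the model treats it as data, not instructions.
    body = "\n".join(f"{k}: {v}" for k, v in cleaned.items())
    return f'<tool_output untrusted="true">\n{body}\n</tool_output>'
```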

Infinite loops and runaway cost

Agent loops without a clear termination condition run until the token budget or timeout kills them — after you've spent $40 on a task worth $0.02. Fix: hard step limits per run, per-session spend ceilings enforced at the proxy layer, and loop-detection heuristics on repeated tool calls with identical arguments.
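
A sketch of the in-process guards, assuming a per-run guard object; the limits are illustrative, and the spend ceiling is additionally enforced at the proxy layer in production.

```python
# Sketch: hard step limit, per-run spend ceiling, and a loop-detection
# heuristic on repeated tool calls with identical arguments.
import hashlib
import json

class RunGuard:
    def __init__(self, max_steps: int = 15, max_spend_usd: float = 1.50):
        self.max_steps = max_steps
        self.max_spend_usd = max_spend_usd
        self.steps = 0
        self.spend = 0.0
        self.seen_calls: set[str] = set()

    def check_step(self) -> None:
        self.steps += 1
        if self.steps > self.max_steps:
            raise RuntimeError("hard step limit exceeded; escalating to human")

    def check_spend(self, call_cost_usd: float) -> None:
        self.spend += call_cost_usd
        if self.spend > self.max_spend_usd:
            raise RuntimeError("per-run spend ceiling exceeded")

    def check_repeat(self, tool_name: str, args: dict) -> None:
        # Same tool, identical arguments, seen before: almost certainly a loop.
        key = hashlib.sha256(
            f"{tool_name}:{json.dumps(args, sort_keys=True)}".encode()
        ).hexdigest()
        if key in self.seen_calls:
            raise RuntimeError(f"repeated identical call to {tool_name}; likely a loop")
        self.seen_calls.add(key)
```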

Hallucinated tool calls

The LLM calls a tool with plausible-sounding but invalid arguments — the tool fails, the agent hallucinates a successful result and continues. Fix: strict JSON schema validation on every tool call, error injection back into context with structured failure messages, and retry budgets that trigger human escalation on repeated tool failure.
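
A sketch of the validation boundary using the jsonschema package; the tool object and the retry budget are illustrative.

```python
# Sketch: validate every tool call against its schema before execution, and
# feed a structured failure message back into context on rejection.
from jsonschema import validate, ValidationError

MAX_TOOL_RETRIES = 2

def run_tool_call(tool, args: dict, retries_used: int = 0) -> dict:
    try:
        validate(instance=args, schema=tool.input_schema)
    except ValidationError as err:
        if retries_used >= MAX_TOOL_RETRIES:
            return {"status": "escalate", "reason": "repeated invalid tool arguments"}
        # Structured failure goes back into the agent's context, not a silent pass.
        return {
            "status": "invalid_arguments",
            "tool": tool.name,
            "error": err.message,
            "hint": "Fix the arguments to match the schema and call the tool again.",
        }
    return {"status": "ok", "result": tool.run(args)}
```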

Context window pollution

Long agent runs accumulate tool outputs, intermediate reasoning, and error messages until the context window is saturated — performance degrades and costs spike. Fix: structured summarization of completed sub-tasks, tool output compression, and explicit context pruning strategies per agent pattern.
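
A sketch of one pruning strategy; the token estimate and the collapse rule are stand-ins for a real summarization call.

```python
# Sketch: once a sub-task is finished, its raw tool output is replaced by a
# compact summary until the context fits the budget again.
def prune_context(messages: list[dict], budget_tokens: int = 8000) -> list[dict]:
    def rough_tokens(msgs: list[dict]) -> int:
        return sum(len(m["content"]) for m in msgs) // 4  # crude estimate

    pruned = list(messages)
    while rough_tokens(pruned) > budget_tokens:
        # Find the oldest completed sub-task block and collapse it.
        idx = next((i for i, m in enumerate(pruned) if m.get("subtask_done")), None)
        if idx is None:
            break
        pruned[idx] = {
            "role": "assistant",
            "content": f"[summary] {pruned[idx]['content'][:200]}",
            "subtask_done": False,  # already collapsed; skip it next pass
        }
    return pruned
```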

Evaluation difficulty

Agents produce open-ended outputs across multi-step traces — there's no obvious ground-truth label to compare against. Fix: scenario-based eval sets with explicit success criteria defined upfront, LLM-as-judge scoring calibrated against human ratings, and trace-level assertions on tool call correctness independent of final output quality.
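
A sketch of trace-level assertions, assuming traces are available as a list of recorded tool calls; the specific rules are examples, not a fixed checklist.

```python
# Sketch: assert tool-call correctness from the trace, independent of whether
# the final answer reads well.
def assert_trace_ok(trace: list[dict]) -> list[str]:
    failures = []
    tools_called = [t["tool"] for t in trace]
    # A refund must be preceded by an order lookup.
    if "issue_refund" in tools_called:
        before_refund = tools_called[: tools_called.index("issue_refund")]
        if "lookup_order" not in before_refund:
            failures.append("issued refund without looking up the order first")
    # No tool should ever be called with empty arguments.
    failures += [f"{t['tool']} called with empty args" for t in trace if not t["args"]]
    return failures
```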

Silent failures in production

The agent completes without error but produces a wrong result — no exception was raised, so no alert fires. Fix: output validation at every agent boundary, business-metric monitoring (not just error rate), and anomaly detection on task completion patterns that flags behavioral drift.
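
A sketch of boundary validation plus a crude drift check on the business metric; field names and thresholds are illustrative.

```python
# Sketch: refuse to report success when the output is structurally wrong, and
# alert on completion-rate drift even when the error rate is zero.
def validate_agent_output(result: dict) -> None:
    required = {"resolution", "confidence", "actions_taken"}
    missing = required - result.keys()
    if missing:
        raise ValueError(f"agent reported success but output is missing {missing}")
    if result["confidence"] < 0.5:
        raise ValueError("low-confidence result must escalate, not complete silently")

def completion_rate_drifted(last_24h: float, baseline: float, tolerance: float = 0.10) -> bool:
    # Business-metric monitoring: flag behavioral drift, not just exceptions.
    return abs(last_24h - baseline) > tolerance
```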

How we ship agent projects

Four phases. Eval gates between each. No milestone where we say "it works on my machine" and call it done.

1. Discovery

Map the target workflow end to end. Identify which steps require LLM reasoning vs. deterministic logic. Assess risk: what happens if the agent makes a wrong decision? Define human-in-the-loop boundaries before any code is written.

  • Workflow decomposition and risk assessment
  • Tool inventory and integration scoping
  • Agent vs. workflow decision documented
  • Eval scenario set defined upfront
2. Instrumented POC

We ship a working POC in 2-3 weeks. LangSmith traces on from the first run. Eval scenarios run automatically after every change. We iterate on tool schemas, prompt structure, and loop logic against measured task completion — not developer intuition.

  • Core agent loop + tools wired up
  • Full LangSmith trace from run 1
  • Automated eval on scenario set
  • Cost-per-task and step-count baselines
3. Production Rollout

Staged rollout: shadow mode first (agent runs but humans act), then partial traffic, then full. Spend ceilings enforced at the proxy. Business-metric monitoring in place before traffic opens. Rollback plan documented and tested.

  • Shadow mode + canary rollout
  • Spend and rate limits enforced
  • Business-metric dashboards live
  • Rollback procedure tested
4. Iterate

Production data reveals edge cases that scenarios miss. We review traces weekly in the first month, triage failure patterns, and ship targeted fixes — tool schema updates, prompt adjustments, new self-correction conditions. The agent gets better as it runs.

  • Weekly trace reviews in month 1
  • Failure taxonomy and targeted fixes
  • Eval set expanded from real failures
  • Engineering handoff and runbook

When NOT to use agents

If your problem is solvable with a state machine or a deterministic workflow, use that. Agents add latency, cost, and non-determinism. Those are real trade-offs, not footnotes.

You don't need an agent if: your steps are fixed and enumerable, the branching logic can be written as code, the tool calls always happen in the same order, or the main bottleneck is just automating something a human does by clicking through screens. Build a workflow. Add LLM calls where you need text understanding or generation. Done.
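
As a concrete contrast, here is a sketch of that workflow-with-embedded-LLM shape; classify_intent stands in for the single constrained LLM call, and the routing rules are illustrative.

```python
# Sketch: fixed steps, branching written as code, one LLM call only where
# text understanding is genuinely needed.
def process_inbound_email(email: dict) -> dict:
    intent = classify_intent(email["body"])          # the one LLM call
    if intent == "invoice_question":
        record = {"route": "billing", "priority": "normal"}
    elif intent == "cancellation":
        record = {"route": "retention", "priority": "high"}
    else:
        record = {"route": "general", "priority": "normal"}
    record["customer_id"] = email["from"]
    return record

def classify_intent(text: str) -> str:
    # Stand-in for a constrained LLM call that returns one label from a fixed set.
    return "invoice_question"
```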

You might need an agent if: the right next action genuinely depends on reasoning over intermediate results, the task structure changes based on context you can't pre-encode, or you need the system to handle a large open-ended input space without explicit rules for every case.

We'll tell you straight which bucket your use case falls into — before you spend six weeks building something that should have been a Temporal workflow. Start with an AI audit if you're unsure.

Related resources

Technical reading and adjacent services for teams evaluating agent architectures.

Technical FAQ

Which orchestration framework do you use: LangGraph, CrewAI, or AutoGen?

It depends on the control model you need. LangGraph is our default for stateful, graph-based workflows where you need explicit control over state transitions and human-in-the-loop steps. CrewAI works well for role-based multi-agent pipelines with structured task delegation. AutoGen fits conversational multi-agent loops. For latency-critical paths, we often drop frameworks entirely and wire tool-calling logic directly against the provider API. No dogma — we pick based on your use case, ops constraints, and what your team can maintain.

How do you prevent runaway loops and runaway cost?

We instrument every agent run with hard step limits, token budget guards, and timeout constraints from day one. LangSmith or Helicone traces every tool call and LLM invocation so you can inspect exactly what happened when a loop fires. Runaway cost events are almost always caused by missing loop-break conditions and no spend ceiling — both are solved at architecture design time, not as an afterthought. We also build human-in-the-loop checkpoints for high-stakes actions.

How do you evaluate agent quality?

Agent evaluation is harder than standard LLM evaluation and requires a multi-layer approach. We use task completion rate on curated scenario sets, tool-call accuracy (right tool, right arguments), step efficiency (did it take 3 steps when 1 would do), and final output quality. We build deterministic unit tests for individual tool integrations, integration tests that run full agent traces, and LLM-as-judge scoring for open-ended outputs. Eval is set up before the first POC sprint, not bolted on before launch.

What is MCP and why does it matter?

Model Context Protocol (MCP) is an open standard for exposing tools and data sources to LLMs in a standardized, composable way. Instead of writing bespoke tool-calling glue code for every integration, MCP servers expose capabilities that any compliant agent runtime can discover and use. For enterprise deployments with many internal systems, MCP dramatically reduces integration surface area and allows your existing teams to publish new tool endpoints without touching the agent orchestration layer.

When is a deterministic workflow the better choice than an agent?

Use a deterministic workflow when you can enumerate the steps, the branching logic is known, and latency or cost matter. Use an agent when the task requires reasoning about which steps to take, the tools to use, or the order of operations — and when that reasoning changes based on context you cannot pre-encode. In practice, most "we need an agent" requests turn out to need a workflow with 1-2 LLM calls embedded. We will tell you straight which one applies. If you are unsure, book a technical call and we will give you an honest assessment.

Ship AI agents that work in production

Book a technical call. Bring your workflow, your stack, and your constraints. We will give you a straight answer on whether agents are the right call — and if yes, what the architecture, timeline, and eval strategy should look like.