Christ is my all
2521 stories
·
3 followers

Muffin

1 Share


Read the whole story
rtreborb
15 hours ago
reply
San Antonio, TX
Share this story
Delete

AI Studio is going Open Source (and why the AI Control Plane must be extensible)

1 Share

Over the last year, something subtle but important has happened in the AI industry. The conversation has shifted from “Which model should we use?” to “How do we control what we’ve built?”

That shift matters, because the hard problem was never the model.

The hard problem is what happens when models become infrastructure, when they are embedded into workflows, connected to internal systems, calling APIs, invoking tools, moving data across boundaries, and increasingly operating with partial autonomy. At that point, the LLM is no longer a novelty. It becomes part of your value chain.

And value chains, if left ungoverned, become liabilities.

Today, I’m very pleased to announce that Tyk AI Studio is going open source.

Tyk is an Open Source company, we are strong advocates of open standards, and open formats. I am personally a big believer that in order to continue to foster innovation, open source must thrive (and yes, in the age of AI even open source projects are coming under strain). 

Taking AI Studio open source is a deliberate architectural decision about how AI infrastructure should evolve.

The AI value chain is fragmenting

If you’ve read my previous writing, you’ll know I tend to frame AI as a value chain rather than a product. That framing becomes more useful every month.

We have model vendors competing aggressively and evolving rapidly. We have orchestration layers and emerging standards such as MCP and A2A attempting to normalise interaction patterns. We have internal data sources being wired into RAG pipelines. We have agents that can trigger APIs and external tools. And we have compliance teams trying to keep up with all of this.

Each of these layers moves independently. Each introduces its own risk surface. And very few organisations have a unifying control plane across them.

What many enterprises currently have is not an AI architecture, but a growing collection of integrations that work… until they don’t.

AI Studio exists to address that structural gap. It acts as the AI Gateway layer of what we call the AI Control Stack: a policy-driven, observable, extensible layer that sits between your organisation and the evolving AI value chain.

Open source is structural, not ideological (ok maybe it’s a bit ideological)

The obvious question is: but why open source?

The answer is pretty mundane…

We cannot build a durable governance layer for a system that is changing this quickly if that governance layer itself is closed and fixed. New vendors appear, pricing models shift, standards evolve, internal use cases change, and regulation tightens. If your control plane can only adapt at the speed of a vendor roadmap, you are permanently behind.

Open source flips that dynamic: it enables a community to respond to change faster than any single team could. 

We’ve seen this play out in API infrastructure over the last decade. The organisations that retained control of their APIs were the ones that owned their control plane. AI will be no different.

Extensibility is the point

One of the first conclusions we reached while building AI Studio is that there is no “standard” AI architecture in the enterprise. Some organisations are embedding copilots into internal tooling. Others are exposing AI-powered features in customer-facing products. Many are experimenting with autonomous agents.

Any platform that assumes a fixed pattern of use will eventually constrain its users.

So instead of hardcoding assumptions, we built AI Studio around extensibility. The AI Gateway provides the core capabilities (routing, governance, observability, cost control), but the real power lies in the plugin ecosystem and developer environment that surrounds it.

If you need custom model selection logic, you can build it.

If you need proprietary guardrails, you can implement them.

If you need specialised pre- or post-processing, you can inject it.

If you want to integrate internal systems, MCP servers, or domain-specific tooling, you can extend the gateway rather than working around it.

If you need a new, completely custom UI – you can add it.

This is not about shipping features faster than competitors. It’s about ensuring that your AI control plane can evolve at the same pace as your AI usage.

This was fun…

On a more personal level, building AI Studio has been energising in a way that feels familiar. It reminds me of the early days of Tyk, when I was hacking away in my living room with a bottle of wine hiding under a blanket (yes, there is a photo… unfortunately).

The difference now is that we’re not building in isolation. We have an exceptional team, and we have customers deploying serious AI systems in production. Their feedback has shaped this platform in very concrete ways. The direction of AI Studio is informed by real deployment constraints, real governance requirements, and real operational complexity.

There’s something uniquely motivating about working in a space that is moving quickly while also having the benefit of experience and community behind you. It creates a kind of disciplined experimentation, which is rare.

Democratizing AI Means Owning the Control Plane

“Democratizing AI” is a phrase that gets used loosely. Often, it implies simplification — hiding complexity behind abstraction. That’s not what we mean.

AI is inherently complex because it is embedded in systems that are complex. Democratization, in this context, means giving enterprises the ability to participate in that complexity without surrendering control to it. It means enabling vendor independence. It means making policy enforceable. It means giving teams visibility into cost, risk, and behaviour, and it means allowing organisations to extend their systems safely rather than relying on opaque black boxes.

Open source is how we make that credible.

By open sourcing AI Studio, we are inviting architects, developers, and platform teams to help shape the control plane that will underpin the agentic era. The Community Edition provides the foundation. Enterprise capabilities build on top for organisations that require deeper operational and governance tooling.

But the core — the AI Gateway, the UI control plane, and its extensibility model — is open.

Because the AI value chain is not stabilising any time soon. If anything, it is accelerating. The only sustainable response is to build infrastructure that is transparent, adaptable, and extensible by design.

We’re excited to take that step.

And we’re even more excited to build it in the open.

— Martin

The post AI Studio is going Open Source (and why the AI Control Plane must be extensible) appeared first on Tyk API Management.

Read the whole story
rtreborb
16 hours ago
reply
San Antonio, TX
Share this story
Delete

Can Coding Agents Relicense Open Source Through a ‘Clean Room’ Implementation of Code?

1 Share

Simon Willison:

There are a lot of open questions about this, both ethically and legally. These appear to be coming to a head in the venerable chardet Python library. chardet was created by Mark Pilgrim back in 2006 and released under the LGPL. Mark retired from public internet life in 2011 and chardet’s maintenance was taken over by others, most notably Dan Blanchard who has been responsible for every release since 1.1 in July 2012.

Two days ago Dan released chardet 7.0.0 with the following note in the release notes:

Ground-up, MIT-licensed rewrite of chardet. Same package name, same public API — drop-in replacement for chardet 5.x/6.x. Just way faster and more accurate!

Yesterday Mark Pilgrim opened #327: No right to relicense this project.

A fascinating dispute, and the first public post from Pilgrim that I’ve seen in quite a while.

Link: simonwillison.net/2026/Mar/5/chardet/

Read the whole story
rtreborb
4 days ago
reply
San Antonio, TX
Share this story
Delete

Manipulating AI Summarization Features

1 Share

Microsoft is reporting:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

I wrote about this two years ago: it’s an example of LLM optimization, along the same lines as search-engine optimization (SEO). It’s going to be big business.

Read the whole story
rtreborb
7 days ago
reply
San Antonio, TX
Share this story
Delete

Draw.io MCP for Diagram Generation: Why It’s Worth Using

1 Share
I started using Draw.io MCP to generate diagrams from structured input and keep them tied to code and infrastructure. Instead of manually arranging every shape, I can now generate a solid first draft in minutes, make deliberate edits, and commit it to Git. That simple change turns diagrams into living assets rather than throwaway images ... Read more
Read the whole story
rtreborb
10 days ago
reply
San Antonio, TX
Share this story
Delete

Running GitHub Copilot SDK Inside GitHub Actions

1 Share

If you’ve been using GitHub Copilot, you already know how powerful it can be. Lets look at running the GitHub Copilot SDK inside GitHub Actions. Dropping it into a GitHub Actions workflow means it can work right inside your CI/CD pipeline. I will show how-to with a working example: a Pull Request Review Assistant that runs in GitHub Actions, uses the Copilot SDK, and applies a predefined…

Source

Read the whole story
rtreborb
10 days ago
reply
San Antonio, TX
Share this story
Delete
Next Page of Stories