# claude-skill-registry · ash-ai

AshAi extension guidelines for integrating AI capabilities with Ash Framework. Use when implementing vectorization/embeddings, exposing Ash actions as LLM tools, creating prompt-backed actions, or setting up MCP servers. Covers semantic search, LangChain integration, and structured outputs.
## Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/ash-ai" ~/.claude/skills/majiayu000-claude-skill-registry-ash-ai && rm -rf "$T"
```

Manifest: `skills/data/ash-ai/SKILL.md`

## Source content
# AshAi Guidelines

AshAi integrates AI capabilities with Ash resources: vectorization, LLM tools, prompt-backed actions, and MCP servers.
## Vectorization

Convert text to vector embeddings for semantic search:

```elixir
defmodule MyApp.Artist do
  use Ash.Resource, extensions: [AshAi]

  vectorize do
    full_text do
      text(fn record ->
        """
        Name: #{record.name}
        Biography: #{record.biography}
        """
      end)

      used_attributes [:name, :biography] # Only rebuild when these change
    end

    strategy :ash_oban # or :after_action, :manual
    embedding_model MyApp.OpenAiEmbeddingModel
  end
end
```
### Embedding Model

```elixir
defmodule MyApp.OpenAiEmbeddingModel do
  use AshAi.EmbeddingModel

  @impl true
  def dimensions(_opts), do: 3072

  @impl true
  def generate(texts, _opts) do
    response =
      Req.post!("https://api.openai.com/v1/embeddings",
        json: %{"input" => texts, "model" => "text-embedding-3-large"},
        headers: [{"Authorization", "Bearer #{System.fetch_env!("OPEN_AI_API_KEY")}"}]
      )

    case response.status do
      200 ->
        {:ok, Enum.map(response.body["data"], & &1["embedding"])}

      _status ->
        {:error, response.body}
    end
  end
end
```
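As a quick sanity check, the embedding model can be invoked directly in IEx. A minimal sketch, assuming `OPEN_AI_API_KEY` is set in the environment (the sample query string is illustrative):

```elixir
# One vector is returned per input string; each should have
# the length reported by dimensions/1 above.
{:ok, [vector]} = MyApp.OpenAiEmbeddingModel.generate(["jazz trumpet player"], [])
length(vector) # should equal 3072
```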
### Vectorization Strategies

| Strategy | Behavior |
|---|---|
| `:after_action` | Sync after create/update (slow, not for production) |
| `:ash_oban` | Async via Ash Oban (recommended) |
| `:manual` | You control when to update |
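With the `:manual` strategy, embeddings only rebuild when you trigger the generated update action yourself. A minimal sketch, assuming the default `:ash_ai_update_embeddings` action name (shown in the authorization bypass below) and skipping authorization for brevity:

```elixir
# Rebuild the embedding for a single record on demand.
artist
|> Ash.Changeset.for_update(:ash_ai_update_embeddings, %{})
|> Ash.update!(authorize?: false)
```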
### Semantic Search Action

```elixir
read :semantic_search do
  argument :query, :string, allow_nil?: false

  prepare before_action(fn query, _context ->
    case MyApp.OpenAiEmbeddingModel.generate([query.arguments.query], []) do
      {:ok, [search_vector]} ->
        query
        |> Ash.Query.filter(vector_cosine_distance(full_text_vector, ^search_vector) < 0.5)
        |> Ash.Query.sort(asc: vector_cosine_distance(full_text_vector, ^search_vector))

      {:error, error} ->
        {:error, error}
    end
  end)
end
```
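Calling the action looks like any other read. A sketch, assuming the action is defined on the `MyApp.Artist` resource from the vectorize example (the query text is illustrative):

```elixir
# The preparation embeds the query string and applies the
# distance filter and sort before the read runs.
MyApp.Artist
|> Ash.Query.for_read(:semantic_search, %{query: "influential bebop musicians"})
|> Ash.read!()
```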
### Authorization Bypass

```elixir
bypass action(:ash_ai_update_embeddings) do
  authorize_if AshAi.Checks.ActorIsAshAi
end
```
## AI Tools

Expose Ash actions as LLM tools:

```elixir
defmodule MyApp.Blog do
  use Ash.Domain, extensions: [AshAi]

  tools do
    tool :read_posts, MyApp.Blog.Post, :read do
      description "customize the tool description"
    end

    tool :create_post, MyApp.Blog.Post, :create

    tool :read_with_details, MyApp.Blog.Post, :read do
      load [:author, :internal_notes] # Include private attrs when loaded
    end
  end
end
```
### Tool Data Access

- Filtering/Sorting: only attributes with `public?: true`
- Arguments: only action arguments with `public?: true`
- Response: public attributes by default
- `load` option: can include private attributes in the response
## LangChain Integration

```elixir
chain =
  %{llm: LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}), verbose: true}
  |> LangChain.Chains.LLMChain.new!()
  |> AshAi.setup_ash_ai(otp_app: :my_app, tools: [:read_posts, :create_post])
```
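Once the tools are attached, the chain runs like any other LangChain chain. A sketch, assuming a recent Elixir `langchain` release (return shapes have varied across versions, and the user message is illustrative):

```elixir
# Let the LLM call the Ash-backed tools until it produces a final answer.
{:ok, chain} =
  chain
  |> LangChain.Chains.LLMChain.add_message(LangChain.Message.new_user!("List recent posts"))
  |> LangChain.Chains.LLMChain.run(mode: :while_needs_response)
```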
## Prompt-Backed Actions

Create actions where the LLM provides the implementation:

```elixir
action :analyze_sentiment, :atom do
  constraints one_of: [:positive, :negative]
  description "Analyzes sentiment of text"
  argument :text, :string, allow_nil?: false

  run prompt(
    LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}),
    tools: true # or [:specific, :tools] or false
  )
end
```
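Prompt-backed actions are invoked like any other generic action. A sketch, assuming the action above is defined on a hypothetical `MyApp.Post` resource:

```elixir
{:ok, sentiment} =
  MyApp.Post
  |> Ash.ActionInput.for_action(:analyze_sentiment, %{text: "I love this library"})
  |> Ash.run_action()

# sentiment is :positive or :negative, per the one_of constraint
```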
## Structured Outputs with Custom Types

```elixir
defmodule JobListing do
  use Ash.TypedStruct

  typed_struct do
    field :title, :string, allow_nil?: false
    field :company, :string, allow_nil?: false
    field :requirements, {:array, :string}
  end
end

action :parse_job, JobListing do
  argument :raw_content, :string, allow_nil?: false

  run prompt(
    fn _input, _context ->
      LangChain.ChatModels.ChatOpenAI.new!(%{
        model: "gpt-4o-mini",
        api_key: System.get_env("OPENAI_API_KEY")
      })
    end,
    prompt: "Parse this job listing: <%= @input.arguments.raw_content %>",
    tools: false
  )
end
```
For detailed prompt format options and adapters, see references/prompt-actions.md.
## MCP Server

### Development

```elixir
# In endpoint.ex
if code_reloading? do
  plug AshAi.Mcp.Dev,
    protocol_version_statement: "2024-11-05",
    otp_app: :your_app
end
```
### Production

```elixir
pipeline :mcp do
  plug AshAuthentication.Strategy.ApiKey.Plug,
    resource: YourApp.Accounts.User,
    required?: false
end

scope "/mcp" do
  pipe_through :mcp

  forward "/", AshAi.Mcp.Router,
    tools: [:read_posts, :create_post, :analyze_sentiment],
    protocol_version_statement: "2024-11-05",
    otp_app: :my_app
end
```
## Testing
- Mock embedding model responses for consistent results
- Test vector search with known embeddings
- Use deterministic test models for prompt-backed actions
- Verify tool access and permissions
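For the first two points, a deterministic stand-in embedding model keeps vector tests reproducible without network calls. A minimal sketch (the hash-seeded scheme and module name are illustrative, not part of AshAi):

```elixir
defmodule MyApp.Test.FakeEmbeddingModel do
  use AshAi.EmbeddingModel

  @impl true
  def dimensions(_opts), do: 3072

  @impl true
  def generate(texts, _opts) do
    # Seed the RNG from each text's hash so identical inputs always
    # produce identical pseudo-embeddings across test runs.
    {:ok,
     Enum.map(texts, fn text ->
       seed = :erlang.phash2(text)
       :rand.seed(:exsss, {seed, seed, seed})
       Enum.map(1..3072, fn _ -> :rand.uniform() end)
     end)}
  end
end
```

Swap this module in as the `embedding_model` in test configuration so semantic-search assertions can rely on known vectors.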