Awesome-omni-skill chatgpt-app-sdk

WHEN building ChatGPT apps using the OpenAI Apps SDK and MCP: create conversational, composable experiences with proper UX, UI, state management, and server patterns.

install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/chatgpt-app-sdk" ~/.claude/skills/diegosouzapw-awesome-omni-skill-chatgpt-app-sdk && rm -rf "$T"
manifest: skills/development/chatgpt-app-sdk/SKILL.md
source content

ChatGPT Apps SDK Best Practices

Build ChatGPT apps using the OpenAI Apps SDK, Model Context Protocol (MCP), and component-based UI patterns.

Quick Reference

| Topic | Guide |
| --- | --- |
| Display modes, visual design, accessibility | ui-guidelines.md |
| MCP architecture, tools, and server patterns | mcp-server.md |
| React patterns and the window.openai API | ui-components.md |
| React hooks (useOpenAiGlobal, useWidgetState) | react-integration.md |
| Three-tier state architecture and best practices | state-management.md |

Critical Setup Requirements

| Issue | Prevention |
| --- | --- |
| CORS blocking | Enable the https://chatgpt.com origin on endpoints |
| Widget 404s | Use the ui://widget/ prefix format for widget resources |
| Plain text display | Set the MIME type to text/html+skybridge for widgets |
| Tool not suggested | Use action-oriented descriptions in tool definitions |
| Missing widget data | Pass initial data via the _meta.initialData field |
| CSP script blocking | Reference external scripts from allowed CDN origins |
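Taken together, these rules describe a single widget resource. A minimal sketch of such a descriptor, where the URI prefix, MIME type, and _meta.initialData key come from the table above but the surrounding type and field names are illustrative rather than the SDK's exact shape:

```typescript
// Sketch of a widget resource satisfying the setup rules above.
// WidgetResource is an illustrative type, not the SDK's own.
interface WidgetResource {
  uri: string;                       // must use the ui://widget/ prefix
  mimeType: string;                  // must be text/html+skybridge
  text: string;                      // the widget's HTML shell
  _meta?: { initialData?: unknown }; // initial data handed to the widget
}

const boardWidget: WidgetResource = {
  uri: "ui://widget/task-board.html",
  mimeType: "text/html+skybridge",
  // Scripts must load from CDN origins allowed by the CSP.
  text: '<div id="root"></div><script type="module" src="https://cdn.example.com/task-board.js"></script>',
  _meta: { initialData: { tasks: [] } },
};
```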

Decision Trees

What display mode should I use?

Is this a multi-step workflow or deep exploration?
├── Yes → Fullscreen
└── No → Is this a parallel activity (game, live session)?
    ├── Yes → Picture-in-Picture (PiP)
    └── No → Inline
        ├── Single item with quick action → Inline Card
        └── 3-8 similar items → Inline Carousel
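The tree above can be sketched as a plain function. The mode names follow the skill; the input flags are illustrative assumptions:

```typescript
type DisplayMode = "fullscreen" | "pip" | "inline-card" | "inline-carousel" | "inline";

interface DisplayHints {
  multiStepWorkflow: boolean; // multi-step workflow or deep exploration
  parallelActivity: boolean;  // game, live session, etc.
  itemCount: number;          // number of similar items to show
  quickAction: boolean;       // single item with a quick action
}

// Mirrors the decision tree, top to bottom.
function chooseDisplayMode(h: DisplayHints): DisplayMode {
  if (h.multiStepWorkflow) return "fullscreen";
  if (h.parallelActivity) return "pip";
  if (h.itemCount === 1 && h.quickAction) return "inline-card";
  if (h.itemCount >= 3 && h.itemCount <= 8) return "inline-carousel";
  return "inline";
}
```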

Where should state live?

Is this data from your API/database?
├── Yes → MCP Server (Business Data)
│   Return in structuredContent from tool calls
└── No → Is it user preference/cross-session data?
    ├── Yes → Backend Storage (via OAuth)
    └── No → Widget State (UI-scoped)
        Use window.openai.widgetState / useWidgetState
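For the widget-state tier, a sketch of UI-scoped state through a window.openai-style bridge with widgetState / setWidgetState as referenced above; the bridge here is a local stub so the example is self-contained, and the state shape is illustrative:

```typescript
// UI-scoped state: selection, filters, view mode — nothing the model
// or your backend needs to know about.
type WidgetState = { selectedTaskId: string | null };

// Local stand-in for the window.openai bridge (an assumption for illustration).
const openai = {
  widgetState: { selectedTaskId: null } as WidgetState,
  setWidgetState(next: WidgetState) {
    this.widgetState = next; // in ChatGPT this persists the widget's UI state
  },
};

// A click handler in the widget would do something like:
openai.setWidgetState({ selectedTaskId: "task-42" });
```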

Should this be a separate tool?

Is this action:
- Atomic and standalone?
- Invokable by the model via natural language?
- Returning structured data?
├── Yes → Create public tool (model-accessible)
└── No → Is it only for widget interactions?
    ├── Yes → Use private tool ("openai/visibility": "private")
    └── No → Handle within existing tool logic
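A sketch of the public/private split. Only the "openai/visibility" key is taken from the text above; the descriptor shape and tool names are illustrative:

```typescript
// Illustrative tool descriptor shape (not the SDK's exact type).
interface ToolDef {
  name: string;
  description: string;
  _meta?: Record<string, string>;
}

// Public tool: atomic, invokable by the model via natural language.
const listTasks: ToolDef = {
  name: "list_tasks",
  description: "List the user's open tasks", // action-oriented description
};

// Private tool: only the widget calls it; hidden from the model.
const moveTask: ToolDef = {
  name: "move_task",
  description: "Move a task between board columns",
  _meta: { "openai/visibility": "private" },
};
```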

What should go in structuredContent vs _meta?

Does the model need this data to:
- Understand results?
- Generate follow-ups?
- Reason about next steps?
├── Yes → structuredContent (concise, model-readable)
└── No → _meta (large datasets, widget-only data)
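A sketch of a tool result split along this line: a concise, model-readable summary in structuredContent and the full dataset in _meta. The payload field names are illustrative:

```typescript
// A large result set the widget will render in full.
const rows = Array.from({ length: 500 }, (_, i) => ({ id: i, score: i * 2 }));

const toolResult = {
  content: [{ type: "text", text: `Found ${rows.length} matching rows.` }],
  // Concise slice the model can reason over and generate follow-ups from.
  structuredContent: { total: rows.length, top: rows.slice(0, 3) },
  // Bulk data for the widget only; the model never sees it.
  _meta: { rows },
};
```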

Should I use custom UI or just text?

Does this require:
- User input beyond text?
- Structured data visualization?
- Interactive selection/filtering?
├── Yes → Custom UI component
└── No → Return plain text/markdown in content

Official Documentation