install
source · Clone the upstream repo
git clone https://github.com/Intense-Visions/harness-engineering
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/Intense-Visions/harness-engineering "$T" && mkdir -p ~/.claude/skills && cp -r "$T/agents/skills/codex/graphql-pagination-patterns" ~/.claude/skills/intense-visions-harness-engineering-graphql-pagination-patterns-5a9706 && rm -rf "$T"
manifest: agents/skills/codex/graphql-pagination-patterns/SKILL.md
GraphQL Pagination Patterns
Implement cursor-based and offset pagination in GraphQL using the Relay connection specification
When to Use
- Returning lists of items that may grow unboundedly
- Building paginated feeds, search results, or admin tables
- Choosing between cursor-based and offset-based pagination
- Implementing infinite scroll or "load more" UI patterns
- Ensuring consistent pagination when items are added or removed
Instructions
- Use the Relay connection spec for cursor-based pagination. Even if you do not use Relay on the client, the `Connection`/`Edge`/`PageInfo` pattern is the industry standard for GraphQL pagination.

```graphql
type Query {
  users(first: Int, after: String, last: Int, before: String): UserConnection!
}

type UserConnection {
  edges: [UserEdge!]!
  pageInfo: PageInfo!
  totalCount: Int
}

type UserEdge {
  node: User!
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}
```
- Implement cursor encoding with opaque strings. Cursors should be opaque to clients — base64-encode the underlying value. Never expose raw database IDs or offsets as cursors.
```typescript
function encodeCursor(id: string): string {
  return Buffer.from(`cursor:${id}`).toString('base64');
}

function decodeCursor(cursor: string): string {
  const decoded = Buffer.from(cursor, 'base64').toString('utf-8');
  return decoded.replace('cursor:', '');
}
```
- Build the resolver to handle `first`/`after` (forward) and `last`/`before` (backward) pagination.

```typescript
const resolvers = {
  Query: {
    users: async (_parent, { first, after, last, before }, { db }) => {
      const limit = first ?? last ?? 20;
      const afterId = after ? decodeCursor(after) : null;
      const beforeId = before ? decodeCursor(before) : null;
      const users = await db.users.findPaginated({
        limit: limit + 1, // fetch one extra to determine hasNextPage
        afterId,
        beforeId,
        direction: last ? 'backward' : 'forward',
      });
      const hasMore = users.length > limit;
      const nodes = hasMore ? users.slice(0, limit) : users;
      if (last) nodes.reverse();
      return {
        edges: nodes.map((user) => ({
          node: user,
          cursor: encodeCursor(user.id),
        })),
        pageInfo: {
          hasNextPage: first ? hasMore : false,
          hasPreviousPage: last ? hasMore : false,
          startCursor: nodes[0] ? encodeCursor(nodes[0].id) : null,
          endCursor: nodes[nodes.length - 1]
            ? encodeCursor(nodes[nodes.length - 1].id)
            : null,
        },
      };
    },
  },
};
```
- Include `totalCount` when clients need it (e.g., for "showing 1-20 of 342"). Be aware this requires a separate `COUNT(*)` query, which can be expensive on large tables.
- For simple use cases, offset pagination is acceptable. Use it for admin dashboards, data tables, or any context where "jump to page N" is needed and data does not change frequently.
```graphql
type Query {
  users(offset: Int, limit: Int): UserList!
}

type UserList {
  items: [User!]!
  totalCount: Int!
  hasMore: Boolean!
}
```
- On the client, use `fetchMore` to load additional pages.

```typescript
const { data, fetchMore } = useQuery(GET_USERS, { variables: { first: 20 } });

const loadMore = () => {
  fetchMore({
    variables: { after: data.users.pageInfo.endCursor },
    updateQuery: (prev, { fetchMoreResult }) => ({
      users: {
        ...fetchMoreResult.users,
        edges: [...prev.users.edges, ...fetchMoreResult.users.edges],
      },
    }),
  });
};
```
- Set sensible defaults and maximums for `first`/`limit`. Default to 20, cap at 100. This prevents clients from requesting unbounded result sets.

```typescript
const limit = Math.min(first ?? 20, 100);
```
- Use the `@connection` directive (Apollo Client) to give paginated fields a stable cache key when the same field is queried with different pagination arguments.
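For example, a query using the directive might look like this sketch (the `key` value "usersFeed" is illustrative):

```graphql
query Users($after: String) {
  users(first: 20, after: $after) @connection(key: "usersFeed") {
    edges {
      node {
        id
        name
      }
      cursor
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```

With the stable key, all pages of this field share one cache entry regardless of the `after` argument's value.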
Details
Cursor vs. offset trade-offs:
- Cursor-based: Stable under concurrent inserts/deletes, efficient with indexed columns (e.g., `WHERE id > cursor`), no "page drift." Cannot jump to arbitrary pages.
- Offset-based: Simple to implement, supports "jump to page N." Degrades with large offsets (`OFFSET 10000` scans and discards rows), unstable when items are inserted/deleted between pages.
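The instability is easy to demonstrate. This small simulation (hypothetical data, newest-first ordering) shows a client reading page 2 by offset after a new row arrives, which makes an item repeat, while a cursor-based read stays stable:

```typescript
// Simulated table ordered newest-first (hypothetical data).
let rows = ["e", "d", "c", "b", "a"];

const pageSize = 2;
const page1 = rows.slice(0, pageSize); // ["e", "d"]

// A new row arrives before the client requests page 2.
rows = ["f", ...rows]; // ["f", "e", "d", "c", "b", "a"]

// Offset-based page 2 re-reads "d": it drifted down one slot.
const page2 = rows.slice(pageSize, pageSize * 2); // ["d", "c"] (duplicate!)

// Cursor-based page 2 ("everything after d") is unaffected by the insert.
const afterD = rows.indexOf("d") + 1;
const cursorPage2 = rows.slice(afterD, afterD + pageSize); // ["c", "b"]
```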
Cursor implementation strategies:
- ID-based: `WHERE id > :cursor ORDER BY id` — simple, efficient, works when ordering by primary key
- Timestamp-based: `WHERE created_at > :cursor ORDER BY created_at` — use a composite cursor (timestamp + id) for ties
- Composite: Encode multiple sort values into the cursor for multi-column sorting
Performance considerations:
- Fetch `limit + 1` to determine `hasNextPage` without a separate count query
- Use indexed columns for cursor comparison (`WHERE` clause must hit an index)
- Cache `totalCount` separately if it is expensive and does not need to be real-time
- For keyset pagination on composite sorts, build the `WHERE` clause dynamically
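Building that clause can be sketched as follows: the row-value comparison `(a, b) > (:a, :b)` expands into nested OR/AND terms, which also works on databases without row-value syntax (column names and the `:param` placeholder style are illustrative):

```typescript
// Expand (c1, c2, c3) > (:c1, :c2, :c3) into:
// c1 > :c1 OR (c1 = :c1 AND c2 > :c2) OR (c1 = :c1 AND c2 = :c2 AND c3 > :c3)
function keysetWhere(columns: string[]): string {
  const terms = columns.map((col, i) => {
    // All earlier sort columns must be equal before this one breaks the tie.
    const equals = columns.slice(0, i).map((c) => `${c} = :${c}`);
    return [...equals, `${col} > :${col}`].join(" AND ");
  });
  return terms.map((t) => `(${t})`).join(" OR ");
}

const clause = keysetWhere(["created_at", "id"]);
// (created_at > :created_at) OR (created_at = :created_at AND id > :id)
```

Remember that the generated clause only hits an index if a composite index covers the sort columns in the same order.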
Apollo Client cache integration: Apollo's `offsetLimitPagination()` and `relayStylePagination()` type policies handle merging paginated results in the cache automatically.
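A cache configuration using the Relay-style helper might look like this sketch (the `users` field name assumes the connection schema above):

```typescript
import { InMemoryCache } from "@apollo/client";
import { relayStylePagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Merge successive pages of the users connection into one list,
        // regardless of the first/after arguments used to fetch them.
        users: relayStylePagination(),
      },
    },
  },
});
```

With this in place, `fetchMore` calls no longer need a manual `updateQuery` merge function.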
Source
https://relay.dev/graphql/connections.htm
Process
- Read the instructions and examples in this document.
- Apply the patterns to your implementation, adapting to your specific context.
- Verify your implementation against the details and edge cases listed above.
Harness Integration
- Type: knowledge — this skill is a reference document, not a procedural workflow.
- No tools or state — consumed as context by other skills and agents.
- related_skills: graphql-schema-design, graphql-resolver-pattern, graphql-performance-patterns, api-pagination-cursor, api-pagination-offset, api-pagination-keyset
Success Criteria
- The patterns described in this document are applied correctly in the implementation.
- Edge cases and anti-patterns listed in this document are avoided.