Best Practice on Generative and Agentic AI usage metrics
Generative and agentic artificial intelligence systems (“AI”) have proliferated since publication of Release 5.1 of the COUNTER Code of Practice. This best practice guidance extends R5.1 to facilitate usage reporting of publisher content by AI systems. It is designed primarily for use by publishers with AI systems embedded on their platforms, but it can also be used by third-party providers of AI tools.
This guidance was published on 20 April 2026 and applies to Release 5.1 of the COUNTER Code of Practice (R5.1).
Relevant parts of the Code
This section of the guide links to the relevant sections of the Code of Practice.
Conventions
Per the Code of Practice, this best practice guidance uses the following convention:
The keywords MUST (or REQUIRED), MUST NOT, SHOULD (or RECOMMENDED), SHOULD NOT (or NOT RECOMMENDED), and OPTIONAL in this document are to be interpreted as described in RFC 2119.
Internet Robots and Crawlers
Section 7.8 of R5.1 will be updated to better reflect the distinction between human (regular) usage, malicious bots that MUST be excluded, and AI systems that SHOULD be included in COUNTER Reports in line with this best practice.
COUNTER Report Common Attributes and Elements
Section 3.3 outlines both Access_Methods and Metric_Types. We are introducing a new Access_Method and several new Metric_Types within this best practice.
Requirements
Data Processing
Report providers MUST process their raw usage data in compliance with the requirements of the Code of Practice. That is:
- The same data processing rules apply for the new AI metrics as for established Total_Item and Unique_Item metrics within the COUNTER Code of Practice, as defined in Section 7.3.
- The Access_Type applied to an Item for AI metrics MUST adhere to the principles outlined in Section 3.3.4.
- Where a report provider elects to offer Item Reports with Components, the Component Data_Type SHOULD be the same as the Item Data_Type for the new AI metrics.
Reports
The new elements outlined in this best practice MUST only be included in the COUNTER Reports (Platform Report, Database Report, Title Report, Item Report). They MUST NOT be included in any Standard Views of the COUNTER Reports.
The new elements outlined in this best practice MUST only be included in the COUNTER Reports if called for (C) using the Report_Filters for Access_Method and Metric_Type (see Section 3.3.7). If included, they MUST be listed in the Report_Filters header. This means:
- To request a Platform Report with only AI usage, the filter will need to show
access_method=Agent&metric_type=AI_Responses_Generated|Total_AI_Investigations|Unique_AI_Investigations|Total_AI_Requests|Unique_AI_Requests
- To request a Platform Report with Regular, TDM and AI usage, the filter will need to show
access_method=Agent|Regular|TDM&metric_type=AI_Responses_Generated|Total_AI_Investigations|Unique_AI_Investigations|Total_AI_Requests|Unique_AI_Requests|Searches_Platform|Total_Item_Investigations|Unique_Item_Investigations|Unique_Title_Investigations|Total_Item_Requests|Unique_Item_Requests|Unique_Title_Requests
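The filter strings above can be assembled programmatically. The following sketch builds the query string for a COUNTER_SUSHI-style Platform Report request; the base URL is a placeholder and the helper function is illustrative, not part of this best practice:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; real base URLs are listed in the COUNTER Registry.
BASE_URL = "https://sushi.example.com/counter/r51/reports/pr"

AI_METRIC_TYPES = [
    "AI_Responses_Generated",
    "Total_AI_Investigations",
    "Unique_AI_Investigations",
    "Total_AI_Requests",
    "Unique_AI_Requests",
]

def build_pr_filters(access_methods, metric_types):
    """Build the Report_Filters query string for a Platform Report request.

    Multiple values for one filter are pipe-separated, so "|" is kept unescaped.
    """
    return urlencode(
        {
            "access_method": "|".join(access_methods),
            "metric_type": "|".join(metric_types),
        },
        safe="|",
    )

# AI-only usage, as in the first example above:
query = build_pr_filters(["Agent"], AI_METRIC_TYPES)
print(f"{BASE_URL}?{query}")
```

A real request would also carry the usual requestor and customer identification parameters required by the COUNTER_SUSHI API; those are omitted here for brevity.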
Access Methods
In order to track content usage by AI systems, and to keep that usage separate from regular usage, this best practice introduces a new Access_Method.
Access_Method Agent is an OPTIONAL extension. Where report providers make Access_Method Agent available, it MUST only be included in COUNTER Reports (i.e. it MUST NOT appear in Standard Views of COUNTER Reports), and only when called for by report consumers. Where Access_Method Agent is included in COUNTER Reports, it MUST be reported against AI Metric_Types.
The table below is related to Table 3.v in the Code of Practice, with the addition of the new optional Access_Method Agent alongside Regular and TDM, listing the Host_Types and reports each Access_Method applies to.
Table 1: List of Access_Method Values (mapped to Table 3.v)
| Access_Method | Description | Host_Types | Reports |
|---|---|---|---|
| Regular | Refers to activities on a platform or content host that represent typical user behaviour. | All Host_Types | All COUNTER Reports and Standard Views of COUNTER Reports |
| TDM | Content and metadata accessed for the purpose of text and data mining, e.g. through a specific API used for TDM. Note that usage representing TDM activity is to be included in COUNTER Reports only. | All Host_Types | PR, DR, TR, IR |
| Agent [NEW] | Content and metadata accessed by an AI system. Access_Method Agent is an OPTIONAL extension for inclusion in COUNTER Reports only when called for. | All Host_Types | PR, DR, TR, IR |
Metric Types
Metric_Types, which represent the nature of activity being counted, can be grouped into the categories of Searches, Investigations, Requests, and Access Denied.
These new AI Metric_Types are all OPTIONAL extensions to the Code of Practice. Where report providers make these AI Metric_Types available, they MUST only be included in COUNTER Reports, and only when called for by report consumers. Where AI Metric_Types are included in COUNTER Reports, they MUST be reported against Access_Method Agent.
Search
Search metrics within R5.1 report on search results that are successfully delivered to end users. This best practice specifies an equivalent new metric for AI activity, AI_Responses_Generated.
Table 2: List of Metric_Types for AI Searches (addendum to Table 3.r)
| Metric_Type | Description | Host_Types | Reports |
|---|---|---|---|
| AI_Responses_Generated | A response delivered by an AI system in response to a user prompt. The response is likely to be text, but may include images or other multimedia. Each response MUST only be counted once regardless of the number of queries initiated by the AI system. Subsequent prompts within the same user session MUST be counted as a new AI_Responses_Generated. | All Host_Types | PR |
Where a platform automatically shows an AI summary without a user prompt, the report provider MUST NOT count an AI_Responses_Generated. This includes:
- Collection pages or tables of contents where AI is used to generate a summary of the Items within the collection.
- AI summaries of individual pieces of content (e.g. lay summaries) which are pre-configured and delivered on request.
- Users retrieving a saved response to a previous prompt.
Where a platform generates AI summaries of individual pieces of content on request (i.e. they are not pre-configured), the platform SHOULD count an AI_Responses_Generated.
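The counting rules for AI_Responses_Generated can be summarised as a decision function. A minimal sketch, assuming the platform can tell how each response was triggered; the event fields are illustrative, not defined by this best practice:

```python
from dataclasses import dataclass

@dataclass
class AIResponseEvent:
    # How the response came about; field names are illustrative.
    user_prompted: bool    # generated in reply to an explicit user prompt
    pre_configured: bool   # summary was generated ahead of time and stored
    saved_retrieval: bool  # user retrieved a saved response to a previous prompt

def counts_as_ai_response_generated(event: AIResponseEvent) -> bool:
    """Return True if the event should be counted as AI_Responses_Generated."""
    if not event.user_prompted:
        # Automatic AI summaries (collection pages, tables of contents,
        # auto-displayed summaries) MUST NOT be counted.
        return False
    if event.pre_configured or event.saved_retrieval:
        # Pre-configured summaries and retrievals of saved responses are not
        # new generations and MUST NOT be counted.
        return False
    # On-request generation of a new response counts.
    return True
```

Each subsequent prompt in the same session would produce a new event, and hence a new count, matching the rule in Table 2.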
Investigations and Requests
This best practice introduces new AI-equivalents for Investigations and Requests.
- Total_AI_Investigations and Unique_AI_Investigations: These paired metrics deliver information about content chunks used by an AI system as part of a user session. Only chunks selected by the AI system for final synthesis count as Investigations. Chunks that are simply assessed during initial processing MUST be excluded from the usage metrics to avoid counting system noise.
- Total_AI_Requests and Unique_AI_Requests: Just as traditional COUNTER Requests reflect usage of full text content, these paired metrics indicate where an AI is able to access the full text of an Item to generate a response to a user prompt. As with the AI_Investigation metrics, only chunks selected for final synthesis count as requests and chunks that are only assessed during initial processing MUST be excluded from the usage metrics.
- The usual rules for Data_Type, other attributes, and aggregation of usage apply. That is, usage for journal article chunks MUST be reported with Data_Type Article in IR and Data_Type Journal in TR, DR (if applicable), and PR. If Components are included in IR, the Total_AI metrics MUST be reported at the Component level.
AI Investigations and Requests are counted both in response to platform- or database-level user prompts, and when a user triggers a prompt for a single Item (e.g. “generate a lay summary for this journal article”). Per Section 7.5, where an Item appears in multiple databases, the AI Investigations and Requests MUST only be assigned to one database.
Table 3: List of Metric_Types for AI Requests and Investigations (addendum to Table 3.s)
| Metric_Type | Description | Host_Types | Reports |
|---|---|---|---|
| Total_AI_Investigations | Total number of times within a user session that a chunk from an Item or information related to an Item was included by an AI system in generating a response to a user prompt. | All Host_Types | PR, DR, TR, IR |
| Unique_AI_Investigations | Unique count of times within a user session that a chunk from an Item or information related to an Item was included by an AI system in generating a response to a user prompt. | All Host_Types | PR, DR, TR, IR |
| Total_AI_Requests | Total number of times within a user session that a chunk from an Item was requested (i.e. the full text or content was accessible to the AI system) in generating a response to a user prompt. | All Host_Types | PR, DR, TR, IR |
| Unique_AI_Requests | Unique count of times within a user session that a chunk from an Item was requested (i.e. the full text or content was accessible to the AI system) in generating a response to a user prompt. | All Host_Types | PR, DR, TR, IR |
Consider a typical AI system approach in which, for example, seven content chunks from a single journal article are used in generating a response to a user prompt. We would expect to see the following metrics:
- 7 Total_AI_Investigations
- 1 Unique_AI_Investigations
- 7 Total_AI_Requests
- 1 Unique_AI_Requests
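This counting generalises: Total metrics count every chunk selected for final synthesis, and Unique metrics count distinct Items per user session. A sketch under the assumption that chunk-level logs record the session, the parent Item, and whether the chunk was selected rather than merely assessed (the field names are illustrative, not part of this best practice):

```python
from collections import namedtuple

ChunkEvent = namedtuple(
    "ChunkEvent", "session_id item_id selected_for_synthesis full_text_accessible"
)

def ai_metrics(events):
    """Compute the four AI Investigation/Request metrics from chunk-level events.

    Chunks that were only assessed during initial processing
    (selected_for_synthesis is False) are excluded, per this best practice.
    """
    selected = [e for e in events if e.selected_for_synthesis]
    requested = [e for e in selected if e.full_text_accessible]
    return {
        "Total_AI_Investigations": len(selected),
        "Unique_AI_Investigations": len({(e.session_id, e.item_id) for e in selected}),
        "Total_AI_Requests": len(requested),
        "Unique_AI_Requests": len({(e.session_id, e.item_id) for e in requested}),
    }

# Seven chunks from one journal article, all selected, with full text accessible:
events = [ChunkEvent("s1", "article-1", True, True) for _ in range(7)]
print(ai_metrics(events))  # 7 / 1 / 7 / 1, as in the worked example above
```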
Access Denied
There are no denial metrics associated specifically with Access_Method Agent.
AI Activity Generating Traditional COUNTER Metrics
Report providers are using generative and agentic AI tools to enhance and supplement content items, for example by generating AI summaries of journal articles or book chapters. Interactions with these AI-generated materials MAY result in traditional COUNTER metrics.
For example, where a journal article offers the option to generate a new AI summary:
- Generating the summary counts as 1 AI_Responses_Generated with Access_Method Agent.
- A user clicking a link or button to view the summary MUST count as 1 Total_Item_Investigation with Access_Method Regular.
- Automatic display of the AI summary alongside the metadata or full text MUST NOT result in additional Total_Item_Investigations, any more than automatic display of the figures results in additional Investigations.
Per the Search section of this best practice guide, pre-configured or otherwise stored AI summaries of individual pieces of content MUST NOT be counted as AI_Responses_Generated. Where such pre-configured summaries are available, a user clicking a link or button to view the summary MUST count as 1 Total_Item_Investigation with Access_Method Regular.
Automatic display of a newly generated or pre-configured AI summary alongside the metadata or full text MUST NOT result in additional Total_Item_Investigations, any more than automatic display of the figures results in additional Investigations.
Non-COUNTER Metrics
Report providers MUST NOT include any non-COUNTER AI metrics in their COUNTER reports except as outlined in Section 11 of the Code of Practice. That is, the non-COUNTER metric must be identified via a namespace and must only be delivered if explicitly called for by the report consumer.
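As a sketch of what this looks like in practice, the following uses a hypothetical platform namespace `examplepub` and a hypothetical non-COUNTER metric name; the report structure is simplified and the exact namespace mechanism is defined in Section 11 of the Code of Practice:

```python
# Simplified Performance instances mixing a COUNTER metric with a namespaced
# non-COUNTER metric. "examplepub" and "AI_Tokens_Consumed" are hypothetical.
instances = [
    {"Metric_Type": "Total_AI_Requests", "Count": 7},
    {"Metric_Type": "examplepub:AI_Tokens_Consumed", "Count": 1250},
]

def is_counter_metric(metric_type: str) -> bool:
    """Namespaced (colon-prefixed) Metric_Types are non-COUNTER metrics."""
    return ":" not in metric_type

def filter_instances(instances, include_non_counter=False):
    """Deliver non-COUNTER instances only if explicitly called for."""
    return [
        i for i in instances
        if include_non_counter or is_counter_metric(i["Metric_Type"])
    ]
```

By default the non-COUNTER instance is withheld; it is only delivered when the report consumer explicitly asks for it.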
Transparency and verification
Technical Validity
Report providers delivering AI usage reporting, like all report providers, are RECOMMENDED to use the free COUNTER Validator regularly to make sure their reports remain technically valid.
Audit and Manual Assessment
As an optional extension to the Code of Practice, AI reporting is not subject to formal audits as described in Section 9 of the Code of Practice.
Registry Records
COUNTER will consider adding a flag to the COUNTER Registry to indicate whether AI metrics are available. If that is implemented, this best practice guidance will be updated to require report providers to add the information to the Registry.
Appendix: Glossary of Terms
- Agentic AI. A system, or agent, that is given a high-level goal and has the ability to autonomously plan, reason, and execute a sequence of actions in an environment to achieve that goal.
- AI system. Generic term for generative and agentic AI, which encompasses machine learning, natural language processing, foundation models, and other related technologies.
- Called for. Elements and attributes of COUNTER Reports that are delivered only when a report consumer requests their inclusion. An example from the main Code of Practice is the inclusion of Access_Method in a Platform Report.
- Chunk. Units of text, usually 100-300 words long, that allow AI systems to retrieve specific information efficiently. These may be created based on fixed-size rules, variable-size rules based on content characteristics, semantic rules, or a combination of the three.
- Content chunk. See Chunk.
- First Party. Report providers and publisher platforms offering scholarly content to end users and reporting usage directly to libraries. In the context of this best practice, a publisher such as Springer Nature or an aggregator such as EBSCO with AI systems embedded in their platforms.
- Generative AI. Artificial intelligence systems that are designed to create new content, such as text, images, audio, or code. In the context of COUNTER, Generative AI systems use scholarly content as sources for the generated materials.
- Item. Collective term for content that is reported at a high level of granularity, e.g. a full-text article (original or a review of other published work), an abstract or digest of a full-text article, a sectional HTML page, supplementary material associated with a full-text article (e.g. a supplementary data set), or non-textual resources such as an image, a video, audio, a dataset, a piece of code, or a chemical structure or reaction.
- Pre-configured. Text or image content generated by an AI system to supplement an Item. The generated content is saved and delivered on request to end users in the same way as other parts of the Item (e.g. figures).
- Third Party. Services making use of First Party content to deliver services to end users. Third Parties should report usage to First Parties, and may also report usage to libraries. In a COUNTER context, a service such as Consensus or Scite.
How this best practice was developed
Like all our best practices, this guidance started with a community consultation on a draft policy developed by a small working group. The draft was revised in line with the feedback, before being published in April 2026.