This topic explains how your data is managed when using our AI features.

Our AI features enable admins to create concise summaries, generate insightful analyses, and more. Organizations can then use this content at their discretion to enhance their board meeting preparation.

Like any Large Language Model (LLM) or AI feature, ours can make errors when interpreting input and producing output. While the Diligent team has worked to minimize errors, occasional mistakes are a fundamental limitation of AI technology. We encourage all users to review any AI-generated content and verify its accuracy, especially before making decisions based on it.

Encryption is integral to our process, ensuring that all original content, and the AI output derived from it, remains secure at every step. Each time the AI generates or rewrites content, it uses industry-standard protocols and unique encryption keys to protect data during transfer and processing.

  • Encryption in transit: During transfer, both the original content and the generated output are encrypted with execution-specific keys, ensuring secure transmission.

  • Data deletion from AWS: After successful transfer of the output, both the original book file and the generated content are deleted from AWS within 24 hours, ensuring no residual data remains in cloud storage.

  • Output storage: Encrypted outputs are securely transferred back to Diligent's colocation data center for storage, where they are kept permanently, ensuring they only need to be generated once.
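The lifecycle above can be sketched in a short Python example. Everything here is illustrative: the function names are hypothetical, and the cipher is a deliberately simplified SHA-256-based keystream, not production cryptography and not Diligent's actual implementation. The sketch only shows the core idea of an execution-specific key protecting content for a single run before cloud-side copies are discarded.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (illustration only)."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(
            hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        )
        counter += 1
    return b"".join(blocks)[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Symmetric XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

def run_ai_job(original: bytes) -> bytes:
    # 1. A fresh, execution-specific key and nonce protect this run only.
    key, nonce = secrets.token_bytes(32), secrets.token_bytes(16)

    # 2. Encrypt the original content for transfer to the cloud.
    in_transit = xor_cipher(key, nonce, original)

    # 3. Cloud side: decrypt, generate the output, re-encrypt it.
    #    (A placeholder stands in for the actual model call.)
    plaintext = xor_cipher(key, nonce, in_transit)
    output = b"summary of: " + plaintext
    encrypted_output = xor_cipher(key, nonce, output)

    # 4. Cloud-side copies are removed after transfer (simulated here by
    #    dropping the variables; in AWS this happens within 24 hours).
    del in_transit, plaintext

    # 5. The receiving side decrypts the output for permanent storage.
    return xor_cipher(key, nonce, encrypted_output)
```

A real deployment would use an authenticated cipher such as AES-GCM together with a key-management service; this sketch only mirrors the per-execution-key idea described above.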

We use the Amazon Bedrock service, which provides access to Anthropic's Claude family of LLMs, to generate AI content. These next-generation AI models provide faster and more intelligent outputs, efficiently handling complex reasoning and large volumes of text.

Learn more about the next generation of Claude models.