Responsible AI Statement

Our commitment to building responsible AI

Written by Chris Cottrell-Mason
Updated this week

Building Responsible AI

Here at Natter, Responsible AI has always been core to our mission. Our proprietary platform allows you to instantly gather impartial, anonymous, and inclusive insights for decisions. Combating bias and respecting the individual are therefore fundamental principles for us.

Anonymizing data is neither a compromise nor an aspiration, but a central product feature and a precondition for inclusivity: being seen to do the right thing is critical for users to feel confident and comfortable with our platform.

Our platform has been purpose-built to achieve these specific goals, and unlike many general-purpose AI models, our AI Engine does not retain end users' data or train on it. These principles rest on our three pillars - residency, redaction, and retention - which underpin and reinforce our commitment to Responsible AI and ensure our compliance with regulations such as the EU AI Act.

Low Risk AI

  • Natter’s use of AI is not only built responsibly; it is also categorically low-risk by its very nature. Natter’s system performs descriptive analysis of anonymized conversation data to surface themes and insights. It does not automate decisions, make predictions, or assign scores or classifications to individuals. Outputs are designed to aid human understanding, not replace human judgment. As a result, we believe Natter does not fall into the category of high-risk AI under emerging frameworks such as the EU AI Act, which further reduces the possibility of adverse impact or harm to individuals.

Our Key AI Principles:

Combating Bias:

  • Natter has a diverse, experienced, and global team including experts in privacy and machine learning techniques.

  • Although our proprietary AI Engine is not used for automated decision-making, it has nevertheless been built at every step to ensure the highest levels of accuracy in its insights: anonymized qualitative data is aggregated accurately and systematically into themes, categories, and summaries. In contrast, traditional manual methods of qualitative research introduce bias at the collection, analysis, and reporting stages - including confirmation bias, researcher bias, sampling bias, and selection bias - or introduce manual errors and omissions at the transcription step. Product features of the Natter platform, such as the interactive conversational format, our ability to support thousands of simultaneous conversations, and the customizable quantitative reporting dashboard, further enhance our ability to collect and report inclusive insights and avoid bias.

Respecting the Individual:

  • Transparency

    • End users are automatically made aware of Natter and the nature of the service through various touch points. This includes Natter’s Privacy & Cookies Policy, which defines “Transcription data” as a general category of personal data that we collect. To reinforce transparency, Natter also displays a link to “Our Approach to Data Privacy & Anonymity” during account registration and throughout any subsequent use of the platform.

  • Consent

    • Consent is requested and provided as part of the end user account creation process, as well as prior to each conversation and transcription that takes place on Natter. If consent is not given, transcription does not take place.

Underpinning Responsible AI at Natter are three key pillars:

Residency

  • Natter is UK-headquartered and compliant with both the UK GDPR and the EU GDPR. The service is securely hosted in AWS eu-west-2 (London, UK), which ensures high availability and resilience as well as physical security.

Redaction

  • Insights derived from transcription data are not attributed to individuals, and our technology automatically redacts personal data before analysis takes place. Our system can redact 50 types of PII (e.g. names, ages, even health conditions).

Retention

  • Once data is processed, the system does not retain or reuse it for training purposes. Natter excludes account names, IP addresses, and authentication data from processing and storage; this applies to our suppliers as well. Customers control the retention and deletion of the data that we do retain.

Our customers expect our platform to be secure, so Natter has incorporated security and privacy by design, alongside Responsible AI, through guardrails and checkpoints in our ISO 27001-certified, enterprise-grade development cycle. These include human-in-the-loop review of AI output and peer review of every software change. Natter also recognises that AI systems may, however rarely, experience hallucinations, and that Natter as a business may receive data access or transparency requests; we have procedures in place to respond to both.

Refer to these pages for more

  • Security & Privacy

  • Privacy & Cookies Policy

  • Service Providers & Data Transfer Policy

  • End User Licence Agreement

  • Our Approach to Data Privacy & Anonymity

We recognise that Responsible AI is a journey, not a destination, and we are constantly striving to improve our processes and policies. If you would like to contact the team, please write to [email protected].