We provide three types of accounts: Free, Business, and Pro.

Free account

1 User
1 Dialogue p/m
Question limit (3) per dialogue
All question types
Dialogue round per question
100 invitees
Access via dialogue link
Restriction on mail extension
Top 5 result
Bottom 5 result
Participants’ rating
Training videos
Unsubscribe
Delete data
GDPR: you agree to the Data Processing Agreement


Business account

All of the Free account, plus:

10 Users
3 Admin users
10 Dialogues p/m
<300 invitees
No question limit per dialogue
Challenges + questions
Pre-approved invitees
Routing
CI Query
Weighted Word Count
Attributes + Filtering
AI recommended suggestions
Project collaboration
Recommendations
QuestionDesignLab
Custom SenderName
Planning & reminders
Import mail addresses
Your DPA
Branding

Minimum 6 months
€950 per month ($1,025)
12 months or more: 15% discount
24 months or more: 25% discount
Contact us for more invitees for the Business account


Pro account

All of the Free and Business accounts, plus:

Unlimited users
Unlimited admin
From 300 invitees
Unlimited Dialogues
NetworkEmergence AI
Supervisor roles
Attribute-based reporting
Thought Leaders questions
Single Sign On
Private Library
SenseBuilder AI
Import contributions
SelfReflector
ActionNavigator
UserActivation
Text message access
Collect email addresses
Identify influencers

Add-on: Advanced Analytics (available for Pro accounts only)

Cross-Silo Collaboration AI
ReflectionAnalysis
FastFinder
BubbleCharts
Extended Excel Exports
NoveltySearch
DiversityIndex

Minimum 12 months
Starting at €1,950 per month ($2,150)
24 months or longer: 25% discount
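
As a quick reference, a minimal sketch (in Python) of the term-discount arithmetic for the Business and Pro accounts above. EUR list prices only; the USD figures are omitted here and actual quotes come from CircleLytics.

def discounted_monthly_eur(plan: str, term_months: int) -> float:
    # Illustrative only: 25% discount from 24 months (both plans), 15% from 12 months (Business).
    base = {"business": 950.0, "pro": 1950.0}[plan]
    if term_months >= 24:
        return base * 0.75
    if plan == "business" and term_months >= 12:
        return base * 0.85
    return base

print(discounted_monthly_eur("business", 12))   # 807.5 per month
print(discounted_monthly_eur("pro", 24))        # 1462.5 per month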


Specifications

Users

Individuals who are granted a personal login to their Free, Business, or Pro account. Their permissions are granted on a personal basis by an Admin user of their organization, or by default for the Free account.

Admin users

Admin users are users who have been granted permission to manage your user pool and to grant individual users permissions to edit, read, and transfer dialogues. This includes removing users, modifying permissions, assigning supervisor roles, sending messages to one or more users, and overseeing and exporting usage insights.

Dialogue

Users can start a project from the dashboard after logging in. The setup process involves navigating through tabs and filling in all required and optional fields and settings. Once completed, the project can be activated immediately or scheduled to go live on a set date.
Each question is structured step by step, incorporating a question-and-answer format, optional additional context, a video, and a unique second round for dialogue.

In this second round, participants (of the first round) and invitees receive a curated set of 15 contributions, intelligently structured and distributed by our AI. This ensures that even tens of thousands of contributions are effectively redistributed among the audience. Our AI processes this within minutes, and users are notified as soon as it’s complete, to start the second round.

Question limit

Users with a Free account have a limit of three questions per project. This limit does not apply to users with a Business or Pro account.

Question types

A question can consist of a quantitative scale (e.g., 1 to 5, 1 to 10, -3 to +3, -2 to +2), a closed format (e.g., yes/no, pro/against, agree/disagree), multiple choice (up to 15 answer options), or an open-ended response. All question types can include an open-ended section, a free-text response box, and a second round. Additionally, participants may have the option to revise their answers.

For multiple-choice questions, answer options can be up to 80 characters long or 30 characters if the second round is enabled. Multiple-choice questions can also be used to collect profile information and be converted into an Attribute (after project completion) for filtering. In this case, participants must be limited to selecting only one answer.
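
Purely to illustrate these multiple-choice constraints, a minimal validation sketch; the class and field names are hypothetical and not part of the platform.

from dataclasses import dataclass

@dataclass
class MultipleChoiceQuestion:
    options: list[str]
    second_round: bool = False           # dialogue (second) round enabled for this question
    convert_to_attribute: bool = False   # reuse the results as a filterable attribute afterwards
    single_select: bool = True           # participants may pick only one answer

    def validate(self) -> None:
        max_len = 30 if self.second_round else 80
        if len(self.options) > 15:
            raise ValueError("at most 15 answer options are allowed")
        if any(len(o) > max_len for o in self.options):
            raise ValueError(f"answer options are limited to {max_len} characters here")
        if self.convert_to_attribute and not self.single_select:
            raise ValueError("attribute conversion requires single-select answers")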

Dialogue round per question

For each question, users can choose to enable the unique second round (dialogue round). This allows participants to evaluate and rate different answers from the first round on a scale from -3 to +3. Additionally, participants can optionally select up to five key words, add a recommendation, and/or revise and resubmit their original answer.

They can view more than the initial set of 15 diverse contributions shown to them, with five additional contributions displayed at a time. Each participant receives a different set of 15 contributions, ensuring that all contributions are systematically distributed and presented multiple times through our AI.
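
The distribution itself is handled by our AI; purely to illustrate the balancing idea, a naive round-robin sketch in Python that hands each second-round participant a set of 15 contributions so that every contribution is shown multiple times.

from itertools import cycle

def assign_sets(contributions: list[str], n_participants: int, set_size: int = 15) -> list[list[str]]:
    pool = cycle(contributions)
    sets = []
    for _ in range(n_participants):
        seen: list[str] = []
        while len(seen) < set_size and len(seen) < len(contributions):
            item = next(pool)
            if item not in seen:
                seen.append(item)
        sets.append(seen)
    return sets

# Example: 40 contributions spread over 100 participants; each contribution
# appears in roughly 100 * 15 / 40, i.e. 37 to 38, of the sets.
sets = assign_sets([f"c{i}" for i in range(40)], n_participants=100)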

Invitees

An invitee is either a person who is invited by importing email addresses (via an .xlsx or .ods file) or a person who gains access to your dialogue through the QR code or (generic) dialogue link. An invitee may or may not become an active participant in one or both rounds of your dialogue, either partially or fully.

Access via dialogue link

This is your project’s unique link, valid for both rounds, which grants access to anyone who clicks on it or scans the QR code. The dialogue link may be restricted to (pre-approved) email invitees only, or to individuals with specific email domains. Additionally, the link may require users to opt in twice with their email address (double opt-in) and verify it before or after responding to the dialogue’s questions.

Restriction on mail extension

The dialogue link may be restricted to individuals with specific email domains.

Top 5 result

This is the result during and at the end of the second round, where contributions from the first round are rated with scores (ranging from -3 to +3) by participants in the second round. The Top 5 is determined by the contributions with the highest net score.

Bottom 5 result

This is the result during and at the end of the second round, where contributions from the first round are rated with scores (ranging from -3 to +3) by participants in the second round. The Bottom 5 is determined by the contributions with the lowest net score.
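
A minimal sketch of the Top 5 / Bottom 5 ranking described in the two entries above, assuming the net score of a contribution is simply the sum of the -3 to +3 ratings it received.

def net_score(ratings: list[int]) -> int:
    return sum(ratings)                      # e.g. [+3, +2, -1] gives 4

def top_and_bottom(ratings_by_contribution: dict[str, list[int]], n: int = 5):
    ranked = sorted(ratings_by_contribution,
                    key=lambda c: net_score(ratings_by_contribution[c]),
                    reverse=True)
    return ranked[:n], ranked[-n:][::-1]     # Top 5, and Bottom 5 with the lowest first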

Participants’ rating

By default, participants can rate the user’s dialogue at the end of each round and leave a comment alongside selecting an emoji that reflects their experience. These ratings and comments are available to the user in the dashboard. This allows for instant feedback on the dialogue while also providing valuable tips for future dialogues.

Training videos

The user can watch training videos in their account dashboard, each lasting approximately 60-120 seconds.

Unsubscribe

The user can allow invitees, or individuals who gain access to the dialogue, to unsubscribe from messages related to this specific dialogue and/or any messages from any (future) dialogue from this particular user.

Delete data

The user can delete their own dialogues and associated data within two clicks using a unique code. This data will be removed from backups within 11 days.

Data processing agreement

The user (or their organization) enters into a data processing agreement with us. For the Free account, this is the agreement from CircleLytics. For the Business and Pro accounts, it is typically the subscribing organization’s own agreement that CircleLytics will sign.

The above functionalities are included in the Free account at no cost. If you, as a user of this account, wish to upgrade to a Business or Pro account, your dialogues cannot be transferred.

Dialogues p/m

The Free and Business accounts enable users to run a limited number of projects (Dialogues) per month.

<100 invitees or <300 invitees

For each dialogue, up to 100 (Free) or up to 300 (Business) people can participate via an email invitation sent by the user or by accessing the generic dialogue link, which can also be scanned as a QR code.

No question limit per dialogue

The user has no limit on the number of questions per dialogue. However, we recommend limiting each dialogue to a maximum of five questions.

Challenges + questions

The user can select important challenges in the dashboard and view and favorite example questions for each challenge. These favorites will appear in the list in the QuestionDesignLab, which can be accessed when the user designs questions.

Routing

The user can choose to display a specific question (or questions) only based on the participant’s answers to a previous question. This conditional visibility is available when designing a question.

CI Query

This allows the user to quickly review all results, search for keywords (e.g., those from the Top 5) or high-value words (from WWC). The user can apply operators to refine the query by including, combining, or excluding words, as well as hiding individual contributions. Results are displayed in a ranked list from most to least valued by participants, as an export, and through three donut charts. These charts represent the query results in terms of the number of matching contributions in the first round, unique participants who scored one or more contributions in the second round, and the proportion of supportive (positive) scores in that second round. The query results can be saved for future retrieval.
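
The query language, operators, and charts are CircleLytics features; the sketch below only illustrates the underlying include/exclude and ranking idea.

def ci_query(contributions: dict[str, tuple[str, int]],
             include: list[str], exclude: tuple[str, ...] = ()) -> list[str]:
    """contributions maps an id to (text, net score); returns matching ids ranked by net score."""
    hits = []
    for cid, (text, score) in contributions.items():
        lower = text.lower()
        if all(w.lower() in lower for w in include) and not any(w.lower() in lower for w in exclude):
            hits.append((score, cid))
    return [cid for _, cid in sorted(hits, reverse=True)]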

Weighted Word Count (WWC)

WWC displays the raw (frequency) count of individual words, as well as two- and three-word combinations, along with their weighted counts. The weight of each word is determined by the scores given by participants to the contributions in which these words appear and whether the words were selected by participants. This makes the weighted word count more valuable and accurate than the raw frequency count. When the user clicks on a set of words, they are added to the CI Query, ready to be executed.
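
A simplified illustration of weighted counting; the real WWC additionally weights words that participants explicitly selected, which is omitted in this sketch.

from collections import Counter

def ngrams(words: list[str], n: int) -> list[tuple[str, ...]]:
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def weighted_word_count(contributions: list[tuple[str, int]]) -> tuple[Counter, Counter]:
    """contributions is a list of (text, net score); returns raw and score-weighted n-gram counts."""
    raw, weighted = Counter(), Counter()
    for text, score in contributions:
        words = text.lower().split()
        for n in (1, 2, 3):                  # single words plus two- and three-word combinations
            for gram in ngrams(words, n):
                raw[gram] += 1
                weighted[gram] += score
    return raw, weighted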

Attributes + Filtering

The user can import a file with up to five attributes, such as function, age category, region, department, etc. During and after the dialogue, the user can apply filters to analyze results based on multiple attributes or a specific attribute (and one or more of its values). This provides response and result insights based on the applied filters for that specific group of participants.
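
Conceptually, the filtering is equivalent to this minimal sketch (pandas assumed here purely for illustration; the platform does this in the dashboard).

import pandas as pd

def filter_results(results: pd.DataFrame, attribute: str, values: list[str]) -> pd.DataFrame:
    """Keep only the rows whose attribute (e.g. department) matches one of the selected values."""
    return results[results[attribute].isin(values)]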

AI recommended suggestions

At the end of the second round, participants may be given the option to submit topics for a new dialogue. These suggestions can be individually converted or transformed all at once into the most relevant open-ended questions by our AI.

Project collaboration

The user can transfer ownership—and with it, the editing rights—of a project (dialogue) to another user within the same account. The transfer happens instantly, enabling seamless collaboration among multiple users. Additionally, the user can grant one or more users in the same account co-viewer access to the dialogue, without editing rights.

Recommendations

The user can choose, on a per-question basis, to allow participants to add recommendations alongside the scores they give in the second round. This enables participants to provide additional context and value to their evaluations of others’ contributions. The user can also customize the guidance text that participants see, helping them use the text fields effectively for their recommendations. In the Results tab, recommendations for each contribution can be viewed by expanding the list.

QuestionDesignLab (QDL)

The QDL enables users to quickly find, refine, and select sample open-ended questions tailored to their specific challenge. This challenge may stem from the user’s own description or be inspired by our suggested descriptions. The QDL’s AI considers these inputs, as well as additional context such as the invitation text, the subject of the invitation, and the context of the question. The AI then suggests the three best matches, which the user can refresh to explore better options, refine using keywords to enhance the results, or select the proposed question. Users can also save questions to their favorites list or, if needed, schedule a meeting with a contact person.

Custom SenderName

The user can set a custom Sender Name to align the dialogue with the context and with any IT security requirements (for example, spam filters).

Planning & reminders

The user can plan each round, set reminders, customize reminder texts (or use our suggested templates), and even plan and personalize the final email to participants. This email can include the top 5 responses for each question with a second round and can be sent to all invitees or only to active participants.

Import mail addresses

Users can choose one or more Excel or .ods files to import email addresses of invitees. This can be done before the dialogue launch or during the first and second rounds. We recommend using our Excel template or ensuring the file meets the required format, e.g., emails in the first column (A), optionally first names in column B, and columns C to G for attributes. The first row may contain titles but no values. If users choose to import multiple files, the columns (and headers) from the last imported file will take precedence and override those from the previous file.
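
A minimal sketch of the expected layout, using pandas as an assumed reader (not the platform’s importer): column A holds emails, column B optional first names, columns C to G up to five attributes, and the first row holds headers.

import pandas as pd

def load_invitees(path: str) -> pd.DataFrame:
    df = pd.read_excel(path)                 # first row is read as headers; use engine="odf" for .ods
    names = ["email", "first_name", *[f"attribute_{i}" for i in range(1, 6)]]
    df.columns = names[: len(df.columns)]    # assumes at most the seven columns A to G
    if df["email"].isna().any():
        raise ValueError("every row needs an email address in column A")
    return df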

Branding

Users can apply custom branding for each dialogue. Branding components include the logo and its placement in the messages, favicon, and background colors of the messages. Existing brandings can be edited, or duplicated and then edited, in case the branding is owned by another user. The applied branding is instantly visible in the user’s dashboard—what you see is what the participant will see.


The above functionalities belong to the Free account (at the top) and the Business account, respectively. If you, as a user of the Business account, wish to upgrade to the Pro account, you can transfer your dialogues, as you will retain your account as both a user and an organization.

NetworkEmergence AI

This knowledge graph provides users with a dynamic, visual representation of how the second round unfolds—showing how participants view and rate others’ differing contributions. Users can analyze support (positive scores), net scores, and rejection (negative scores).

If attributes were applied, users can filter by a specific attribute and observe how the graph evolves for each value. This reveals how participants score contributions from other subgroups (representing different values) without, of course, knowing the identity of those participants.

Our AI analyzes and quantifies the extent to which the scores assigned to contributions differ from the perspectives of the participants giving those scores. Additionally, it measures how much participants favor contributions, broken down by attribute and value (for example, departments), thereby highlighting variations in openness to differing viewpoints, per department in this example.

Supervisor roles

The Admin user can assign one or more supervisors to oversee one or more users’ dialogues. Each user can view their assigned supervisor(s) in their Profile, accessible via the main dashboard, and contact them directly via email if needed.
A supervised user can only activate (launch), transfer (reassign editing rights to another user), or permanently delete a dialogue once the supervisor grants approval via a checkbox in their dashboard. By default, the supervisor has viewing access to all dialogues of the users they oversee.

MC to attribute conversion

A user can add one or more multiple-choice questions for participants to answer. If the user restricts participants to selecting only one answer, the results of that MC question can be converted into an attribute upon completing the dialogue and used for filtering purposes. The total number of attributes – whether imported from a file with invitees or generated from converted MC questions – is currently limited to a maximum of five.

Attribute-based reporting

The user can apply filters to attributes and then download a report in Excel. This report includes graphs, a top 5 list, and other insights. It can be further customized and saved as a PDF for easy sharing. This allows, for example, managers to generate reports specific to their own department.

Thought Leaders (coming soon)

In the dashboard, users can explore Thought Leaders to gain insights from domain experts and be inspired by their recommended questions. Users can instantly favorite these questions (by clicking the heart icon) to add them to their QuestionDesignLab’s list of favorites for future dialogues. Most Thought Leaders featured here also share their website or other links, allowing users to learn more about their masterclasses, research, books, or consultancy services.

Single Sign On

We provide Single Sign-On (SSO) support for organizations upon request, allowing users to access the platform securely without a separate login. Participants can be sent a direct dialogue link, eliminating the need to import their email addresses first or to have invitees opt in first.

Private Library

Organizations can request the addition of their own questions to their account. These questions will be integrated into the QuestionDesignLab, making them accessible to users within that account. This allows users to easily utilize predefined or preferred questions.

SenseBuilder AI

When a question gathers at least 300 contributions in the first round, SenseBuilder AI is activated in the Results tab. Using our proprietary AI, similar contributions are clustered, and the weight of each cluster is quantified, showing the user its value as a potential theme. Unlike the Top 5 results, which are based on individual contributions and their ranking in the full list, clustering groups comparable inputs together. Users can refine each cluster by removing or adding contributions through queries (CI Query), making it easier to interpret similar responses. SenseBuilder AI enables users to surface themes and topics in a seamless human-AI collaboration and make more sense faster.

Import contributions

Users can import Excel files containing contributions in column H (starting from the second row). These contributions may come from a generative AI prompt, a brainstorming session, or comment fields from a customer or employee survey. The import is only available during the first round for open-ended questions. While the first round is still open, other participants, such as invitees, can continue to contribute. Once the first round is complete, the second round begins, allowing participants and invitees to (re)view their assigned sets of contributions.
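
A minimal sketch of that layout (pandas assumed again): contributions sit in column H, and the first row is treated as the header row and skipped.

import pandas as pd

def load_contributions(path: str) -> list[str]:
    df = pd.read_excel(path, usecols="H")    # column H only; row 1 is skipped as the header
    return df.iloc[:, 0].dropna().astype(str).tolist()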

SelfReflector (coming soon)

The user is encouraged to answer their own questions, including the intent behind this dialogue and its follow-up. This is the first step: self-reflection. After the dialogue is complete, the user’s answers are compared to the most valuable contributions and their clusters. This process helps the user understand and reflect on their own answers and perspective in relation to what participants view as most and least important. Such reflection is crucial for making informed, high-quality decisions that effectively engage the participants.

ActionNavigator (coming soon)

This is a structured approach designed to help the user plan and share goals, set reminders, be notified, and track progress. Admin users can monitor users’ accomplishments.

UserActivation (coming soon)

This is a structured approach designed to help the user discover new videos, challenges, thought leaders, and questions that match their profile. It also includes dedicated guidance from a member of the CircleLytics team, who instructs the user on onboarding, continued engagement, and successful follow-ups to drive the organization’s impact.

Text message (SMS) access

The organization can request CircleLytics to enable the user to invite participants via text message for both rounds. This request may involve additional costs and a certain lead time.

Collect email addresses

The user may choose to collect email addresses or other (personal) information from participants. A custom message will be displayed to participants at the end of both rounds. Any email addresses or other submitted information are collected separately from participants’ responses to the dialogue’s questions to ensure their anonymity.

Identify influencers

The user can choose to invite participants who had the most impact for each question – if a second round is enabled. To do this, the user can compose a personal message. This message is sent after the dialogue has completed, to participants who either submitted a contribution that made it into the Top 5 or gave a +3 score to one of the contributions in the Top 5 for that question. If a participant chooses to click the link within seven days, their email address will be made available in the user’s dashboard.
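
A minimal sketch of that selection rule; the identifiers and data structures here are illustrative only.

def influencers(top5_authors: set[str],
                ratings: list[tuple[str, str, int]],
                top5_ids: set[str]) -> set[str]:
    """ratings is a list of (participant, contribution id, score); returns whom to invite."""
    boosters = {p for p, cid, score in ratings if cid in top5_ids and score == 3}
    return top5_authors | boosters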


Add-on: Advanced Analytics (extra, and only available for Pro accounts)

Cross-Silo Collaboration AI

This analysis, presented in a table and a network graph, highlights which subgroups contributed the most and least, as well as which subgroups rated others’ contributions most frequently. Subgroups may include departments, regions, or age categories – representing attributes and their respective values. We provide insights into both positive scores (support) and negative scores (rejection). This enables the user to see how subgroups collaborated and learned from one another beyond their own silo. Additionally, we display the average cross-silo collaboration effect for each subgroup and overall, instantly revealing what percentage of scores were given to contributions outside their own subgroup.
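
A minimal sketch of that cross-silo percentage (subgroup labels are illustrative; the full analysis covers more than this): the share of a subgroup’s second-round scores that went to contributions authored outside that subgroup.

from collections import defaultdict

def cross_silo_share(score_pairs: list[tuple[str, str]]) -> dict[str, float]:
    """score_pairs holds (rater's subgroup, contribution author's subgroup) for every score given."""
    total, outside = defaultdict(int), defaultdict(int)
    for rater_group, author_group in score_pairs:
        total[rater_group] += 1
        outside[rater_group] += author_group != rater_group
    return {group: outside[group] / total[group] for group in total}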

ReflectionAnalysis

This analysis shows the user how participants changed their stance between the two rounds on questions with a closed scale.

FastFinder

These are spider graphs (one per attribute) that instantly show the user how subgroups (the values of the attribute) performed on closed and quantitative questions. The accompanying Excel export provides deeper insights into response percentages and quantitative answers for various subgroups compared to overall totals. The user can quickly identify deviations between subgroups, filter results based on these insights and analyze their Top 5 and Bottom 5 results. This helps the user uncover the qualitative insights behind the quantitative scores of these subgroups.

BubbleCharts

These graphs compare a selected question with another question of choice in a single chart. How did participants who gave a certain answer to one question respond to the other question? This is only possible for questions with closed scales. The axes can be swapped for better readability or based on user preference.

The user can choose to display the graph based on quantitative answers from either the first or the second round. Switching between these views reveals the differences in participants’ responses between the two rounds. If an open text field was included with the question, the user can click on each bubble to view the most relevant associated contributions.
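
Conceptually, each chart is a cross-tabulation of two closed questions, with bubble size reflecting the count per cell; a minimal sketch with pandas (assumed for illustration, not the platform’s implementation).

import pandas as pd

def bubble_table(answers: pd.DataFrame, question_x: str, question_y: str) -> pd.DataFrame:
    """Counts how participants who gave each answer to question_x answered question_y."""
    return pd.crosstab(answers[question_x], answers[question_y])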

Extended Excel Exports

These exports provide the extensive raw data from both rounds for each question. If a privacy addendum has been signed, all attributes, their values, and multiple-choice question results are included as well. If an explicit contractual agreement is in place, this export may also include multiple-choice answers, which could potentially impact anonymity.

NoveltySearch

This overview highlights the contributions with the highest adjusted variance, meaning those with relatively many widely differing scores—for example, a high number of -3 and -2 on one side and +2 and +3 on the other. Meanwhile, the average and net score remain close to zero. The user can assess these contributions based on their level of controversy, the need for additional discussions, or the attention required during the communication of decisions and their implementation.
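
The actual adjusted variance is a CircleLytics metric; one plausible reading, purely as a sketch, ranks contributions by the spread of their ratings while requiring the mean to stay near zero.

from statistics import mean, pvariance

def most_polarizing(ratings_by_contribution: dict[str, list[int]],
                    mean_tolerance: float = 0.5) -> list[str]:
    spread = {c: pvariance(r) for c, r in ratings_by_contribution.items()
              if len(r) > 1 and abs(mean(r)) <= mean_tolerance}
    return sorted(spread, key=spread.get, reverse=True)   # widest spread of scores first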

DiversityIndex

This numerical value reflects our analysis of the diversity of all submitted contributions. A higher percentage indicates a greater level of diversity among the contents of the contributions.

Contact us to talk this through, or send a message to sign the same day.
