Most Recent Articles
Find answers to all your questions through the articles and insights offered by Voxco's experts and industry partners.

Market Research: The Basics
Systematic Sampling Explained: A Step-by-Step Guide for Researchers
What is Systematic Sampling?
Systematic sampling is a type of probability sampling method used in research to select individuals from a target population at regular intervals. Unlike non-probability sampling, where not every individual has an equal chance of being chosen, systematic sampling ensures that each member of the population has a known and equal probability of selection. The process involves choosing a random starting point and then selecting every kᵗʰ individual from a structured list, where k is the sampling interval determined by dividing the population size by the desired sample size. This method offers a simple, efficient way to create representative samples—especially when working with large populations and well-defined sampling frames.
How to Implement Systematic Sampling in Your Research
Systematic sampling can be implemented in just two main steps:
- Calculate the sampling interval
 Divide the total population size (N) by the desired sample size (n) to determine the sampling interval (i). If the result is a decimal, round it to the nearest whole number.
- Select a random starting point
 Choose a random starting point (r) between 1 and the sampling interval (i). From there, select every i-th element in the population list until the desired sample size is reached.
Before proceeding, it’s crucial to ensure that the sampling frame is not arranged in a cyclical or repetitive pattern. If it is, using a fixed interval may introduce bias.
Researchers often use survey platforms or social research tools with built-in sampling capabilities to streamline this process. For instance, Voxco’s survey platform offers advanced features that allow users to easily generate systematic samples through its panel management tools.
Example of Systematic Sampling
Let’s say a researcher wants to select a sample of 25 individuals from a population of 1,000:
- Population size (N) = 1,000
- Sample size (n) = 25
- Sampling interval (i) = N / n = 1,000 / 25 = 40
This means the researcher will select every 40th individual from the list.
Next, a random starting point (r) must be chosen between 1 and 40. Suppose the researcher picks 17. The sample will then include the 17th person, the 57th, the 97th, and so on, continuing in 40-unit intervals until 25 participants are selected.
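To make the arithmetic concrete, here is a minimal Python sketch of the same procedure (the `systematic_sample` helper is written here for illustration, not taken from any library); with `r=17` it reproduces the selections described above.

```python
import random

def systematic_sample(population, n, r=None):
    """Draw a systematic sample of size n from an ordered population list."""
    N = len(population)
    i = round(N / n)               # sampling interval, rounded if N/n is not whole
    if r is None:
        r = random.randint(1, i)   # random starting point between 1 and i
    # Select the r-th unit, then every i-th unit after it (1-based positions)
    return [population[(r - 1) + k * i] for k in range(n)]

population = list(range(1, 1001))  # a population of 1,000 numbered units
sample = systematic_sample(population, 25, r=17)
print(sample[:4])  # [17, 57, 97, 137] -- every 40th unit starting at 17
```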
Types of Systematic Sampling
There are three primary types of systematic sampling methods:
- Systematic Random Sampling
 The most common form, where a random start is followed by selection at fixed intervals.
- Linear Systematic Sampling
 In this method, the list is treated linearly. Once the end is reached, the sampling stops—even if the desired sample size isn’t met.
- Circular Systematic Sampling
 The population list is treated as a continuous loop. After reaching the end, the count continues from the beginning until the sample size is completed.
1. Systematic Random Sampling
This is the most common and straightforward type. Here's how it works:
- Calculate the sampling interval using the formula: i = N / n
- Choose a random starting point (r) between 1 and i
- From that point onward, select every i-th element until the desired sample size is reached
2. Linear Systematic Sampling
In this method, the population list is treated as a linear sequence. Once the end of the list is reached, sampling stops—even if the full sample size hasn’t been met. Steps include:
- Create a sequential list of the population
- Determine your desired sample size (n) and compute the skip interval: k = N / n
- Pick a random starting number (r) between 1 and k
- Add k repeatedly to r to select the remaining units
3. Circular Systematic Sampling
Here, the list is treated as circular, allowing the sampling to continue from the beginning if the end of the list is reached before the full sample is drawn:
- Calculate the interval: k = N / n
- Select a random starting point (r) between 1 and N
- Move forward in steps of k, looping back to the start of the list as needed, until n units are selected
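Here is a minimal Python sketch of the circular variant (again, the helper is written for illustration): the modulo operator is what loops the count back to the beginning of the list.

```python
import random

def circular_systematic_sample(population, n):
    """Circular systematic sample: selection wraps around the end of the list."""
    N = len(population)
    k = round(N / n)                   # sampling interval
    r = random.randint(1, N)           # starting point anywhere between 1 and N
    # Modulo arithmetic loops back to the start of the list as needed
    return [population[(r - 1 + j * k) % N] for j in range(n)]

population = list(range(1, 101))       # 100 numbered units
random.seed(1)                          # for a reproducible example
print(circular_systematic_sample(population, 8))
```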
When Should You Use Systematic Sampling?
Systematic sampling is especially useful in the following research scenarios:
- When the population list is already randomized: If the sampling frame is randomly ordered, systematic sampling provides a quick and unbiased way to select a representative sample.
- When the population is large and well-defined: It's ideal for large-scale surveys where listing and selecting every individual manually would be time-consuming. The method simplifies the process without compromising accuracy.
- When resources or time are limited: Systematic sampling requires less effort than simple random sampling while still maintaining the principles of probability sampling, making it efficient for researchers with tight deadlines or limited staff.
- When you're using a structured list (like customer databases or employee rosters): As long as the list isn’t organized in a cyclical pattern, systematic sampling is a great choice for drawing samples from such structured data.
- When consistent intervals are meaningful or necessary: If your research benefits from evenly spaced sampling (e.g., time-based studies or product quality checks), systematic sampling can provide consistency in selection.
Advantages of Systematic Sampling
- Simple to implement when a complete and ordered sampling frame is available
- Easy to understand and execute, even for researchers with limited statistical training
- Efficient and organized, especially compared to more complex sampling methods like stratified sampling
- Minimizes bias when the list is randomly ordered, ensuring a fair and representative sample
Disadvantages of Systematic Sampling
- Risk of systematic bias if the population list is ordered in a repeating or cyclical pattern, which may align with the sampling interval and distort results
- Potential for data manipulation, as researchers could intentionally choose intervals or starting points that skew results
- Lower randomness compared to methods like simple random sampling, which can increase the risk of selecting similar types of units repeatedly
Conclusion
Systematic sampling offers a practical, efficient, and widely used approach for drawing representative samples—particularly when dealing with large populations and organized sampling frames. While it comes with a few limitations, especially regarding potential bias in non-random lists, its simplicity and speed make it a valuable tool in both academic and commercial research. When paired with the right tools, like Voxco’s survey platform, systematic sampling can help streamline the research process and ensure reliable results.

The Latest Trends in Market Research
Customer Experience: A Strategic Lever for Nonprofit Organizations
When we think of customer experience (CX), online retail or subscription services come to mind far more readily than charities. Yet in a world where personalized relationships drive loyalty, nonprofits can no longer afford to overlook the role CX plays in donor engagement.
With more than 10 million nonprofit organizations competing for attention and funding, building lasting relationships with donors, volunteers, and beneficiaries is no longer a luxury. It is a necessity.
What Makes Nonprofit "Customers" Unique
Unlike traditional customers, a nonprofit's supporters are not looking for a material return on their donation or involvement. Their motivation rests on conviction: in your mission, in your ability to make a tangible impact, and in the idea that their contribution matters.
This kind of engagement is deeply personal. Donors want to see the human impact of their gift. Volunteers want to feel that their skills and time are valued. Even beneficiaries can become the most committed advocates for your cause. These people are not mere stakeholders; they are true partners.
That is why customer experience in the nonprofit sector cannot be treated as just another operational process. It must be relational, not transactional: grounded in transparency, recognition, empathy, and shared purpose.
When someone supports you, they are not investing only in results. They are placing their trust in your organization.
Understanding this dynamic allows nonprofits to:
- Recognize that donor fatigue is often tied to a sense of distance or a lack of recognition;
- Rethink engagement as an ongoing relationship, not just a logic of acquisition and retention;
- Offer experiences that speak to people's deeper motivations: legacy, community, advocacy, or direct impact.
By treating donors and volunteers as full stakeholders, not mere entries in a CRM, you build stronger bonds in an increasingly competitive environment.
Embedding a Donor-Centric Culture
For nonprofits, the mission is at the heart of everything. But it is the people who believe in that mission who make your work possible. Donors, volunteers, beneficiaries, and employees form an ecosystem. Adopting a donor-centric culture does not mean becoming a business; it means strengthening the bonds of trust that sustain your mission.
A donor-centric culture recognizes that every touchpoint (a thank-you email, a social media post, a donation form) is an opportunity to make supporters feel seen, valued, and connected to the cause.
It also means that all internal teams must share the same vision of donor engagement, so the experience stays consistent across every channel.
Here are a few concrete ways to make this culture part of everyday work:
- Internal alignment: Make the donor experience a shared priority across fundraising, programs, marketing, and leadership.
- Targeted training: Give frontline teams the right messaging, empathetic communication, and concrete examples of impact to share.
- Feedback loops: Gather feedback from donors, but also from employees and volunteers. What are they hearing in the field? What trends are emerging?
- Celebrate impact: Do not limit yourself to annual reports. Regularly share the results made possible by donations, through images, testimonials, or by putting individual donors in the spotlight.
- Treat every donor as a partner: Whether it is a one-time €10 gift or a major grant, every contribution is a step in a shared mission.
Nonprofits that put the donor experience at the heart of their culture do not just keep their support; they strengthen it. And in a world where attention is scarce, a strong emotional connection can make all the difference.
Why CX Matters More Than Ever
As competition for attention and funding intensifies, organizations that value the experience of their donors and volunteers are better equipped to build loyalty, establish trust, and grow their impact.
At Voxco, we support nonprofits around the world in conducting rigorous social research and improving the experience of their stakeholders. With more than 45 years of expertise in the public and nonprofit sectors, we offer a flexible platform for managing complex surveys, segmenting audiences, and gathering useful feedback at every stage.
Ready to turn every interaction into a lasting connection? Voxco's all-in-one platform helps nonprofits collect meaningful feedback, measure their impact, and strengthen the donor experience across every channel. Book a demo to see how we can help you build trust and advance your mission.
Text Analysis & AI
What is Text Mining?
Text mining, also known as text data mining or text analytics, refers to the process of deriving high-quality information from text. Leveraging techniques and tools from both artificial intelligence (AI) and natural language processing (NLP), text mining involves the discovery of patterns, trends, and insights in text data. Text mining is widely used in various fields, including marketing, business intelligence, healthcare, and finance, to make sense of large amounts of unstructured text and derive actionable insights.
How Text Mining Applications Benefit Your Company
Text mining can provide numerous benefits to a company across various departments and functions. Here are some of the key ways it can add value:
- Customer Insights and Sentiment Analysis
- Market Research and Competitive Analysis
- Improving Customer Service
- Enhancing Product Development
- Boosting Marketing Efforts
- Human Resources and Employee Insights
- Knowledge Management
- Operational Efficiency
By leveraging text mining, companies can unlock valuable insights from unstructured text data, leading to improved decision-making, enhanced customer experiences, and increased operational efficiency.
What Are the Main Steps in the Text Mining Process?
Text mining typically includes the following tasks:
- Information Retrieval: Extracting relevant information from large text collections, such as documents, emails, web pages, and social media posts.
- Natural Language Processing (NLP): Using computational techniques to analyze and understand human language. NLP includes tasks like tokenization, part-of-speech tagging, named entity recognition, and sentiment analysis.
- Text Categorization: Automatically classifying text into predefined categories or topics. This can be used for organizing documents, spam detection, and more.
- Text Clustering: Grouping similar documents or text segments together based on their content. This helps in identifying themes and patterns within large text datasets.
- Sentiment Analysis: Determining the sentiment expressed in a piece of text, such as positive, negative, or neutral. This is commonly used in social media monitoring and customer feedback analysis.
- Topic Modeling: Discovering abstract topics within a collection of documents. Techniques like Latent Dirichlet Allocation (LDA) are commonly used for this purpose (see the sketch after this list).
- Information Extraction: Extracting specific pieces of information, such as names, dates, and relationships, from unstructured text.
- Summarization: Creating concise summaries of large texts to highlight the most important points.
- Text Visualization: Using graphical representations to help understand and interpret text data, such as word clouds and topic maps.
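To make one of these tasks concrete, here is a hedged sketch of the topic-modeling step using scikit-learn's implementation of LDA. It assumes scikit-learn is installed, and the four-document corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # toy corpus; real inputs would be survey responses, emails, etc.
    "the battery life of this phone is great",
    "battery drains fast and the phone overheats",
    "delivery was quick and the packaging was nice",
    "slow delivery and damaged packaging",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words for each discovered topic
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:]]
    print(f"Topic {idx}: {top}")
```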
Text Mining Examples in Marketing
There are many use cases for text mining. If you work in marketing, for example, here are some of the most common ones you might consider.
- Learning about positive, negative, and neutral reactions from your audience: Sentiment analysis is an excellent tool for marketers as it allows you to quickly see what the reception is to the topic that you’re studying. When you have a good understanding of your audience’s reactions, you can tailor your marketing based on that information.
- Categorizing survey responses: Group survey responses into broad topics or get granular with it, depending on your needs. You can focus on the areas that are most important for a particular campaign. Recurring themes may require closer examination, so you can conduct more studies that focus specifically on those areas to get more information.
- Translating and scoring survey results: Are you working with more than one language in your survey responses? You don’t need to translate them as a separate step before they go into your text mining application. Simply choose software that supports the languages you see most often, and it can automate the process.
- Gauging interest in a new concept: Even when you do your best at developing a concept that should appeal to your audience, sometimes the latest project just falls flat. You can start to troubleshoot why that happened by using text mining and open-ended survey questions to see what your audience is thinking about the latest products, services, and company moves. By gauging interest in a new concept before you move forward with the project, you can handle development much more cost-effectively. This helps you avoid particularly high-profile failures; keep in mind, though, that a small study may end up with respondents who are more on board with the concept than a more representative sample of your audience would be.
- Understanding the customer experience: Do you know why your customers feel the way that they do about your customer experience? It’s not enough to know if they are happy or not. You need to know the why behind it if you want to excel at marketing. Text mining gives you the why so that you can continually improve the experience and the marketing tools that support it.
- Discovering your customer satisfaction ratings and the meaning behind them: Your audience gives you a lot of feedback on whether they’re happy or not; you just need a way to analyze it. Use text mining to look through customer service records to identify customers who may be open to purchasing again, those who are upset with the company and need attention, and others who may need a push to move away from being ambivalent in either direction.
- Tracking the success of new products and services: You want to know how well your new products and services are doing now, not weeks or months from now. Automating the analysis through a text mining tool means that you can get a near real-time understanding of how well a product launch is going.
- Finding new business opportunities: Open ended survey responses allow you to find replies that are outside of the norm. Sometimes your customers have adopted a product or service for a use case that never came up in research studies. Expanding horizontally or vertically may be possible based on this data, which can offer an excellent approach to building your business.
- Using customer service data for marketing strategies: Your customer service data is a marketing goldmine, but it’s often overlooked due to the logistical challenges of processing the information. Text mining eliminates these concerns and allows you to find out more about your customers, what they like, dislike, and how to keep them loyal and happy.
- Providing hard data for reports and presentations: If you need a way to make your case to upper management, having powerful visualizations in helpful reports and presentations is one way to make it happen. Text mining creates structure out of unstructured data, so you’re able to use it in this fashion. Customizable dashboards are another way to easily access the data in a form that’s user-friendly for most marketers. When you can easily work with the data, that makes it more accessible to power all types of marketing efforts.
- Improving the value of social media comments: People are more than happy to comment on social media posts, but harnessing that data is hard if you’re doing it manually and have a relatively active page. Text mining makes this process more efficient and allows you to leverage such a large and frequently updated data set. Consistently looking at your social media comments is also a good way to stay ahead of any public relations problems you may encounter. You can execute your crisis communications plan as soon as you start seeing negative comments pop up.
- Creating performance benchmarks for marketing campaigns: Get more benchmarking metrics for your marketing campaigns so you can study how customer sentiment changes over time, see how customers react to new campaigns, and isolate the characteristics that lead to a successful marketing effort.
- Powering Voice of the Customer programs: Voice of the Customer programs are greatly improved when you have a cost-effective and productive way of working with audience feedback.
Whether you’re using text mining for a one-off study or an ongoing series, your team will benefit from its implementation. It takes some time to fine-tune the results for your use cases, but once you get it dialed in, you’re going to wonder how you ever did without it.
Choose Ascribe For Your Text Analysis Needs
Ascribe has two advanced text analytics solutions to meet your business needs. CX Inspector is a text analysis solution that quickly unlocks actionable insights from large data sets with unstructured or open end responses and creates charts to visualize the results. Coder, another text analytics solution, is the leading verbatim coding platform designed to improve the efficiency of coding. Contact us for more information or request a demo with your data.
Text Analysis & AI
Natural Language Processing (NLP) and Machine Learning - How They Compare
Natural language is a phrase that encompasses human communication. The way that people talk and the way words are used in everyday life are part of natural language. Processing this type of natural language is a difficult task for computers, as there are so many factors that influence the way that people interact with their environment and each other. The rules are few and far between, and can vary significantly based on the language in question, as well as the dialect, the relationship of the people talking, and the context in which they are having the conversation.

Natural language processing (NLP) is a type of computational linguistics that uses machine learning to power computer-based understanding of how people communicate with each other. NLP leverages large data sets to create applications that understand the semantics, syntax, and context of a given conversation.

Natural language processing is an essential part of many types of technology, including voice assistants, chatbots, and sentiment analysis. NLP analytics empowers computers to understand human speech in spoken and/or written form without needing the person to structure their conversation in a specific way. They can talk or type naturally, and the NLP system interprets what they’re asking about from there.

Machine learning is a type of artificial intelligence that uses learning models to power its understanding of natural language. It is based on a learning framework that allows the machine to train itself on the data that’s been input. It can use many types of models to process the information and develop a better understanding of it, and it is able to interpret both standard and out-of-the-ordinary inquiries. Thanks to these continual improvements, it can handle edge cases without getting tripped up, unlike a strict rules-based system.

Natural language processing brings many benefits to an organization with processes that depend on natural language input and output. The biggest advantage of NLP technology is automating time-consuming processes, such as categorizing text documents, answering basic customer support questions, and gaining deeper insight into large text data sets.
Is Natural Language Processing Machine Learning?
It’s common for some confusion to arise over the relationship between natural language processing and machine learning. Machine learning can be used as a component in natural language processing technology. However, there are many types of NLP systems that perform more basic functions and do not rely on machine learning or artificial intelligence. For example, a natural language processing solution that simply extracts basic information may be able to rely on algorithms that don’t need to continually learn through AI.

For more complex applications of natural language processing, systems use machine learning models to improve their understanding of human speech. Machine learning models also make it possible to adjust to shifts in language over time. Natural language processing may use supervised machine learning, unsupervised machine learning, both, or neither alongside other technologies to fuel its applications.

Machine learning can pick up on patterns in speech, identify contextual clues, understand the sentiment behind a message, and learn other important information about the voice or text input. Sophisticated technology solutions that hold conversations with humans require machine learning to make this high level of understanding possible.
Machine Learning vs. Natural Language Processing (NLP)
You can think of machine learning and natural language processing in a Venn diagram that has many pieces in the overlapping section. Machine learning has many useful features that help with the development of natural language processing systems, and both of them fall under the broad label of artificial intelligence.

Organizations don’t need to choose one or the other for development that involves natural language input or output. Instead, these two work hand-in-hand to tackle the complex problem that human communication represents.
Supervised Machine Learning for Natural Language Processing and Text Analytics
Supervised machine learning means that the system is given examples of what it is supposed to be looking for, so it knows what it is supposed to be learning. In natural language processing applications and machine learning text analysis, data scientists will go through documents and tag the important parts for the machine.

It is important that the data fed into the system is clean and accurate, as this type of machine learning requires quality input or it is unable to produce the expected results. After a sufficient amount of training, data that has not been tagged at all is sent through the system. At that point, the machine learning technology will look at this text and analyze it based on what it learned from the examples.

This machine learning use case leverages statistical models to fuel its understanding. It becomes more accurate over time, and developers can expand the textual information it interprets as it learns. Supervised machine learning does have some challenges when it comes to understanding edge cases, as natural language processing in this context relies heavily on statistical models.

While the exact method that data scientists use to train the system varies from application to application, there are a few core categories that you’ll find in natural language processing and text analytics:
- Tokenization: The text gets distilled into individual words. These “tokens” allow the system to start by identifying the base words involved in the text before it continues processing the material.
- Categorization: You teach the machine about the important, overarching categories of content. The manipulation of this data allows for a deeper understanding of the context the text appears in.
- Classification: This identifies what class the text data belongs to.
- Part of Speech tagging: Remember diagramming sentences in English class? This is essentially the same process, just for a natural language processing system.
- Sentiment analysis: What is the tone of the text? This category looks at the emotions behind the words, and generally assigns it a value that falls under positive, negative, or neutral standing.
- Named entity recognition: In addition to providing the individual words, you also need to cover important entities. For some systems, this refers to names and proper nouns. In others, you’ll need to highlight other pieces of information, such as hashtags.
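Several of the categories above, including tokenization, part-of-speech tagging, and named entity recognition, can be demonstrated in a few lines with an off-the-shelf library. A minimal sketch using spaCy, assuming its small English model has been installed with `python -m spacy download en_core_web_sm` (the sentence is invented for illustration):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Voxco released a new survey platform in Montreal last year.")

# Tokenization and part-of-speech tagging
for token in doc:
    print(token.text, token.pos_)

# Named entity recognition
for ent in doc.ents:
    print(ent.text, ent.label_)
```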
Unsupervised Machine Learning for Natural Language Processing and Text Analytics
Unsupervised machine learning does not require data scientists to create tagged training data. It doesn’t require human supervision to learn about the data that is input into it. Since it’s not operating off of defined examples, it’s able to pick up on more out-of-the-box cases and patterns over time. Since it’s less labor-intensive than a supervised machine learning technique, it’s frequently used to analyze large data sets for broad pattern recognition and understanding of text.

There are several types of unsupervised machine learning models:
- Clustering: Text documents that are similar are clustered into sets. The system then looks at the hierarchy of this information and organizes it accordingly.
- Matrix factorization: This machine learning technique looks for latent factors in data matrices. These factors can be defined in many ways, and are based on similar characteristics.
- Latent Semantic Indexing: Latent Semantic Indexing frequently comes up in conversations about search engines and search engine optimization. It refers to the relationship between words and phrases so that it can group related text together. You can see an example of this technology in action whenever Google suggests search results that include contextually related words.
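As an illustrative sketch of two of these models using scikit-learn: k-means clustering over TF-IDF vectors, and truncated SVD, a standard way to implement the latent semantic indexing idea through matrix factorization. The four documents are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

docs = [
    "refund took weeks to arrive",
    "still waiting for my refund",
    "love the new mobile app design",
    "the app update looks fantastic",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Clustering: group similar documents together
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0, 0, 1, 1] -- refund complaints vs. app praise

# Latent semantic indexing via matrix factorization (SVD)
svd = TruncatedSVD(n_components=2, random_state=0)
concepts = svd.fit_transform(X)
print(concepts.round(2))  # each row: a document's position in "concept" space
```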
Deep Learning
Another phrase that comes up frequently in discussions about natural language processing and machine learning is deep learning. Deep learning is artificial intelligence technology based on simulating the way the human brain works through a large neural network. It’s used to expand on learning algorithms, deal with data sets that are ever-increasing in size, and work with more complex natural language use cases.

It gets its name by looking deeper into the data than standard machine learning techniques. Rather than getting a surface-level understanding of the information, it produces comprehensive and easily scalable results. Unlike standard machine learning, deep learning does not hit a wall in how much it can learn and scale over time. It starts off by learning simple concepts and then builds upon this learning to expand into more complicated ones. This continual building process makes it possible for the machine to develop the broad range of understanding that’s necessary for high-level natural language processing projects.

Deep learning also benefits natural language processing by improving both supervised and unsupervised machine learning models. For example, it has a functionality referred to as feature learning that is excellent for extracting information from large sets of raw data.
NLP Machine Learning Techniques
Text mining and natural language processing are related technologies that help companies understand more about the text they work with on a daily basis. The importance of text mining should not be underestimated.

The type of machine learning technique that a natural language processing system uses depends on the goals of the application, the resources available, and the type of text that’s being analyzed. Here are some of the most common techniques you’ll encounter.
Text Embeddings
This technique moves beyond looking at words as individual entities. It expands the natural language processing system’s understanding by looking at what surrounds the text where it’s embedded. This information provides valuable context clues about the situation in which the word is being used, whether its meaning is changed from the base dictionary definition, and what the user means when they are using it.

You’ll often find this technique used in deep learning natural language processing applications, or those addressing more complex use cases that require a better understanding of what’s being said. When this technique looks for contextually relevant words, it also automates the removal of text that doesn’t further understanding. For example, it doesn’t need to process articles such as “a” and “an.”

One example of the text embeddings technique in action is predictive text on cell phones. The system attempts to predict the next word in the sequence, which it can only do by identifying words and phrases that frequently appear around it.
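As a hedged illustration, word embeddings of this kind can be trained with gensim's Word2Vec. The toy corpus below is invented and far too small to produce useful vectors; it only shows how surrounding context turns each word into a vector, with similar-context words ending up as neighbors.

```python
from gensim.models import Word2Vec

sentences = [  # tiny toy corpus; real embeddings need millions of words
    ["the", "detergent", "bottle", "is", "slippery"],
    ["the", "detergent", "cap", "is", "slippery"],
    ["the", "delivery", "was", "fast"],
    ["the", "delivery", "was", "late"],
]

model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, seed=0)
print(model.wv["bottle"][:4])                   # first dimensions of the vector
print(model.wv.most_similar("bottle", topn=2))  # context-based neighbors
```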
Machine Translation
This technique allows NLP systems to automate the translation process from one language to another. It relies both on word-for-word translation and on models that can identify context to facilitate accurate translations between languages. Google Translate is one of the most well-known use cases of this technique, but there are many ways it’s used throughout the global marketplace.

Machine learning and deep learning can improve the results by allowing the system to build upon its base understanding over time. It might start out with a supervised machine learning model that inputs a dictionary of words and phrases to translate, and then grow that understanding through multiple data sources. This evolution over time allows it to pick up on speech and language nuances, such as slang.

Human language is complex, and producing accurate translations requires a powerful natural language processing system that can work with both the base translation and the contextual cues that lead to a deeper understanding of the message being communicated. It’s the difference between translation and interpretation.

In a global marketplace, having a powerful machine translation solution available means that organizations can address the needs of international markets in a way that scales seamlessly. While you still need human staff to go through the translations to correct errors and localize the information for the end user, it takes care of a substantial part of the heavy lifting.
Conversations
One of the most common contexts that natural language processing comes up in is conversational AI, such as chatbots. This technique is focused on allowing a machine to have a naturally flowing conversation with the users interacting with it. It moves away from a fully scripted experience by allowing the bot to create a more natural-sounding response that fits into the flow of the conversation.

Basic chatbots can provide users with information that’s based on key parts of the input message. They can identify relevant keywords within the text, look for phrases that indicate the type of assistance the user needs, and work with other semi-structured data. The user doesn’t need to change the way they typically type to get a relevant response. However, open-ended conversations are not possible on the basic end of things. A more advanced natural language processing system leveraging deep learning is needed for advanced use cases.

The training data used for understanding conversations often comes from the company’s communications between customer service and its customers. It provides broad exposure to the way people talk when interacting with the business, allowing the system to understand requests made in a wide range of conversational styles and dialects. While everyone reaching out to the company may share a common language, their verbiage, slang, and writing voice can be drastically different from person to person.
Sentiment Analysis
Knowing what is being communicated depends on more than simply understanding the words being said. It’s also important to consider the emotions behind the conversation. For example, if you use natural language processing as part of your customer support processes, it’s important to know whether the person is frustrated and experiencing negative emotions. Sentiment analysis is the technique that brings this data to natural language processing.

The signs that someone is upset can be incredibly subtle in text form, and detecting them requires a lot of data about how negative and positive emotions read in text. This technique is useful when you want to learn more about your customer base and how they feel about your company or products. You can use sentiment analysis tools to automate the process of going through customer feedback from surveys to get a big-picture view of their feelings.

This type of system can also help you sort responses into those that may need a direct response or follow-up, such as those that are overwhelmingly negative. It’s an opportunity for a business to right wrongs and turn detractors into advocates. On the flip side, you can also use this information to identify people who would make exceptional customer advocates, as well as those who could use a little push to end up on the positive side of the sentiment analysis.

The natural language processing system uses an understanding of the smaller elements of the text to get to the meaning behind it. It automates a process that would be incredibly painstaking to attempt manually.
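One concrete, freely available example of this technique is NLTK's VADER analyzer, which assigns exactly this kind of positive/negative/neutral score. A minimal sketch, assuming NLTK is installed; the feedback strings are invented for illustration.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the scoring lexicon
sia = SentimentIntensityAnalyzer()

for feedback in [
    "The support team resolved my issue quickly, thank you!",
    "I've been on hold for an hour and nobody can help me.",
]:
    scores = sia.polarity_scores(feedback)
    # compound ranges from -1 (most negative) to +1 (most positive)
    print(scores["compound"], feedback)
```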
Question Answering
Natural language processing is really good at automating the process of answering questions and finding relevant information by analyzing text from multiple sources. It creates a quality user experience by digging through the data to find the exact answer to what users are asking, without requiring them to sort through multiple documents on their own or find the answer buried in the text.

The key functions that NLP must perform in order to answer questions include understanding the question being asked, the context it’s being asked in, and the information that best addresses the inquiry. You’ll frequently see this technique used as part of customer service, information management, and chatbot products.

Deep learning is useful for this application, as it can distill the information into a contextually relevant answer based on a wide range of data. It determines whether the text is useful for answering the inquiry, and which parts are most important in this process. Once it goes through this sequence, the answer then needs to be assembled in natural language so the user can understand the information.
Text Summarization
Data sets have reached awe-inspiring sizes in the modern business world, to the point where it would be nearly impossible for human staff to manually go through the information to create summaries. Thankfully, natural language processing is capable of automating this process, allowing organizations to derive value from these big data sets.

There are a few requirements that text summarization needs to address with the use of natural language processing. The first is that it needs to recognize the parts of the text that are the most important to the users accessing it. The type of information most needed from a document would be drastically different for a doctor and an accountant. The information must also be accurate and presented in a form that is short and easy to understand. Some real-world examples of this technique in use include automated summaries of news stories, article digests that provide a useful excerpt as a preview, and the information given in system alerts.

The way this technique works is by scanning the document for word frequencies. Words that appear frequently are likely to be important to understanding the full text. The sentences that contain these words are pulled out as the ones most likely to produce a basic understanding of the document, and the excerpts are then sorted in a way that matches the flow of the original.

Text summarization can go a step further and move from an intelligent excerpt to an abstract that sounds natural. The latter requires more advanced natural language processing solutions that can create the summary and then develop the abstract in natural dialogue.
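The frequency-based procedure described above is simple enough to sketch in plain Python: count word frequencies, score each sentence by the frequent words it contains, and return the top sentences in their original order. This is a naive illustration (no stop-word handling), not a production summarizer.

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Naive extractive summary: keep sentences containing frequent words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    # Score each sentence by the total frequency of its words
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Re-sort the chosen sentences to match the flow of the original
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))

print(summarize("Long document text goes here. It has several sentences. "
                "Some sentences repeat important words. Important words "
                "matter. Others do not."))
```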
Attention Mechanism
Attention in the natural language processing context refers to the way visual attention works for people. When you look at a document, you are paying attention to different sections of the page rather than narrowing your focus to an individual word. You might skim over the text for a quick look at this information, and visual elements such as headings, ordered lists, and important phrases and keywords will jump out at you as the most important data.

Attention mechanism techniques build on the way people look through documents. They operate on a hierarchy of the most important parts of the text while placing less focus on anything that falls outside of that primary focus. It’s an excellent way of adding relevancy and context to natural language processing. You’ll find this technique used in machine translation and in creating automated captions for images.

Are you ready to see what natural language processing can do for your business? Contact us to learn more about our powerful sentiment analysis solutions that provide actionable, real-time information based on user feedback.
Text Analysis & AI
Customer Experience Analysis - How to Improve Customer Loyalty and Retention
The global marketplace puts businesses in a position where you need to compete with organizations from around the world. Standing out on price becomes a difficult or impossible task, so the customer experience has moved into a vital position of importance. Customer loyalty and retention are tied to the way your buyers feel about your brand throughout their interactions. Customer experience analysis tools provide vital insight into the ways that you can address problems and lead consumers to higher satisfaction levels. However, knowing which type of tool to use and the ways to collect the data for them are important to getting actionable information.
Problems With Only Relying on Surveys for Customer Satisfaction Metrics
One of the most common ways of collecting data about the customer experience is through surveys. You may be familiar with the Net Promoter Score system, which gauges customer loyalty on a 0-10 scale. The survey used for this method is based on a single question: “How likely are you to recommend our business to others?” Other surveys have a broader scope, but both types focus on closed-ended questions. If the consumer has additional feedback on topics that aren't covered by the questions, you lose the opportunity to collect that data. Using open-ended questions and taking an in-depth look at what customers say in their answers gives you a deeper understanding of your positive and negative areas. Sometimes this can be as simple as putting a text comment box at the end. In other cases, you could have fill-in responses for each question.
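For reference, the Net Promoter Score itself reduces to simple arithmetic: the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 through 6). A minimal sketch with invented ratings:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on the 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # ~14.3
```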
How to Get Better Customer Feedback
To get the most out of your customer experience analysis tools, you need to start by establishing a plan to get quality feedback. Here are three categories to consider:
Direct
This input is given to your company by the customer. First-party data gives you an excellent look at what the consumers are feeling when they engage with your brand. You get this data from a number of collection methods, including survey results, studies and customer support histories.
Indirect
The customer is talking about your company, but they aren't saying it directly to you. You run into this type of feedback on social media, with buyers sharing information in groups or on their social media profiles. If you use social listening tools for sales prospecting or marketing opportunities, you can repurpose those solutions to find more feedback sources. Reviews on people's websites, social media profiles, and dedicated review websites are also important.
Inferred
You can make an educated guess about customer experiences through the data that you have available. Analytics tools can give you insight into what your customers do when they're engaging with your brand.

Once you're collecting customer data from a variety of sources, you need a way to analyze it properly. A sentiment analysis tool looks through the customer information to tell you more about how customers feel about the experience and your brand. While you can try to do this part of the process manually, it requires an extensive amount of human resources to accomplish, as well as a lot of time.
Looking at Product-specific Customer Experience Analytics
One way to use this information to benefit customer loyalty and satisfaction is by analyzing it on a product-specific basis. When your company has many offerings, looking only at the overall feedback makes it difficult to know how the individual product experiences are doing. A sentiment analysis tool that can sort the feedback into groups for each product makes it possible to look at the positive and negative factors influencing each product's customer experience. Some of the things you may learn are whether customers want to see new features or models, whether they responded to promotions during the purchase process, and whether products should be shelved or completely reworked.
Improving the Customer Experience for Greater Loyalty
If you find that your company isn't getting a lot of highly engaged customer advocates, then you may be running into problems generating loyalty. To get people to care more about your business, you need to fully understand your typical customers. Buyer personas are an excellent tool to keep on hand for this purpose. Use data from highly loyal customers to create profiles that reflect those characteristics. Spend some time discovering the motivations and needs that drive them during the purchase decision. When you fully put yourself in the customer's shoes, you can begin to identify ways to make them more emotionally engaged in their brand support. One way that many companies drive more loyalty is by personalizing customer experiences. You give them content, recommendations and other resources that are tailored to their lifestyle and needs.
Addressing Weak Spots in Customer Retention
Many factors lead to poor customer retention. Buyers may feel like the products were misrepresented during marketing or sales, they could have a hard time getting through to customer support, or they aren't getting the value that they expected. In some cases, you have a product mismatch, where the buyer's use case doesn't match what the item can accomplish. A poor fit leads to a bad experience. Properly educating buyers on what they're getting and how to use it can lead to people who are willing to make another purchase from your company. You don't want to center your sales tactics on one-time purchases. Think of that first purchase as the beginning of a long-term relationship. You want to be helpful and support the customer so they succeed with your product lines. Sometimes that means directing them to a competitor if you can't meet their needs. This strategy might sound counterintuitive, but the customers remember that you went out of your way to help them, all the way up to sending them to another brand. They'll happily mention this good experience to their peers. If their needs change in the future, you could end up getting them back. Customer loyalty and retention are the keys to a growing business. Make sure that you're getting all the information you need out of your feedback to find strategies to build these numbers up.
Text Analysis & AI
Verbatim Coding for Open-Ended Market Research
Coding Open-Ended Questions
Verbatim coding is used in market research to classify open-end responses for quantitative analysis. Verbatims are often coded manually in software such as Excel; however, there are verbatim coding solutions and coding services available to streamline this process and easily categorize verbatim responses.
Survey research is an important branch of market research. Survey research poses questions to a constituency to gain insight into their thoughts and preferences through their responses. Researchers use surveys and their data for many purposes: customer satisfaction, employee satisfaction, purchasing propensity, drug efficacy, and many more.
In market research, you will encounter terms and concepts specific to the industry. We will share some of those with you here, and the MRA Marketing Research Glossary can help you understand any unknown terms you encounter later.
Seeking Answers by Asking Questions
Every company in the world has the same goal: they want to increase their sales and make a good profit. For most companies, this means they need to make their customers happier — both the customers they have and the customers they want to have.
Companies work toward this goal in many ways, but for our purposes, the most important way is to ask questions and plan action based on the responses and data gathered. By “ask questions,” we mean asking a customer or potential customer what they care about and taking action based on that feedback.
One way to go about this is to simply ask your customers (or potential customers) to answer open ended questions and gather the responses:
Q: What do you think about the new package for our laundry detergent?
A: It is too slippery to open when my hands are wet.
This technique is the basis of survey research. A company can conduct a questionnaire to get responses by asking open ended questions.
In other cases, there may be an implied question and response. For example, a company may have a help desk for their product. When a customer calls the help desk there is an implied question:
Q: What problem are you having with our offering?
The answers or responses to this implied question can be as valuable (or more!) as answers and responses to survey questions.
Thinking more broadly, the “customer” does not necessarily have to be the person who buys the company’s product or service. For example, if you are the manager of the Human Resources department, your “customers” are the employees of the company. Still, the goal is the same: based on the feedback or response from employees, you want to act to improve their satisfaction.
Open, Closed, and Other Specify
There are two basic types of questions used to gather responses in survey research: open and closed. We also call these open-end and closed-end questions.
A closed-end question is one where the set of possible responses is known in advance. These are typically presented to the survey respondent, who chooses among them. For example:
What is your gender?  ( ) Male  ( ) Female
Open-end questions ask for an “in your own words” response:
Tell us about your most recent visit to our coffee shop.
The response to this question will be whatever text the user types in her response.
We can also create a hybrid type of question that has a fixed set of possible responses, but lets the user make an answer or response that was not in the list:
How did you hear about us?  ( ) TV  ( ) Radio  ( ) A friend  ( ) Other (please specify): ______
We call these Other Specify questions (O/S for short). If the user types a response to an O/S question it is typically short, often one or two words.
Just as we apply the terms Open, Closed, and O/S to questions, we can apply these terms to the answers or responses. So, we can say Male is a closed response, and The barista was rude is an open response.
What is an Answer vs a Comment?
If you are conducting a survey, the meaning of the term answer is clear. It is the response given by the respondent to the question posed. But as we have said, we can also get “answers” to implied questions, such as what a customer tells the help desk. For this reason, we will use the more generic term comment to refer to any text or response that we want to examine for actionable insight.
In most cases, comments are electronic text, but they can also be images (handwriting) and voice recordings.
You need to be aware of some terminology that varies by industry. In the marketing research industry, a response to a question is called either a response or a verbatim. So, when reading data in survey research we can call these responses, verbatims, or comments interchangeably. They are responses to open-end questions. As we will see later, we don’t call the responses to an open-end question answers. We will find that these verbatims are effectively turned into answers by the process of verbatim coding.
Outside of survey research, the term verbatim is rarely used. Here the term comment is much more prevalent. In survey research the word verbatim is used as a noun, meaning the actual text given in response to a question.
Survey Data Collection
In the survey research world, verbatims are collected by fielding the survey. Fielding a survey means putting it in front of a set of respondents and asking them to read it and fill it out.
Surveys can be fielded in all sorts of ways. Here are some of the different categories of surveys marketing research companies might be using:
- Paper surveys
  - Mailed to respondents
  - Distributed in a retail store
  - Given to a customer in a service department
- In-person interviews
  - In kiosks in shopping malls
  - Political exit polling
  - Door-to-door polling
- Telephone interviews
  - Outbound calling to households
  - Quality review questions after making an airline reservation
  - Surveys by voice robot with either keypad or voice responses
- Mobile device surveys
  - Using an app that pays rewards for completed surveys
  - In-store surveys during the shopping experience
  - Asking shoppers to photograph their favorite items in a store
- Web surveys
  - Completed by respondents directed to the survey while visiting a site
  - Completed by customers directed to the survey on the sales receipt
There are many more categories of surveys. The number of ways the ingenious market research industry has come up with to field surveys is almost endless.
As you can see, the form of the data collected can vary considerably. It might be:
- Handwriting on paper
- Electronic text
- Voice recordings
- Electronic data like telephone keypad button presses
- Photographs or other images
- Video recordings
And so on. In the end, all surveys require:
- A willing respondent
- A way of capturing the responses
The second is easy. The first takes us to the topic of the survey sample, which we will consider soon.
Looping and Branch Logic
Data collection tools can be very sophisticated. Many data collection tools have logic built in to change the way that the survey is presented to the respondent based on the data or responses given.
Suppose, for example, you want to get the political opinions of Republican voters. The first question might ask the respondent for his political party affiliation. If he responds with an answer other than “Republican,” the survey ends. The survey has been terminated for the respondent, or the respondent is termed. This is a simple example of branch logic. A more sophisticated example would be to direct the respondent to question Q11 if she answers A, or to question Q32 if she answers B.
Another common bit of data collection logic is looping. Suppose we ask our respondents to evaluate five household cleaning products. We might have four questions we want to ask about each product, the same four for each product. We can set up a loop in our data collection tool that runs through the same four questions five times, once for each product.
There are many more logic features of data collection tools, such as randomization of the ordering of questions and responses to remove possible bias for the first question or answer presented.
The Survey Sample
A sample can be described simply as a set of willing respondents. There is a sizable industry around providing samples to survey researchers. These sample providers organize collections of willing respondents and provide access to these respondents to survey researchers for a fee.
A panel is a set of willing respondents selected by some criteria. We might have a panel of homeowners, a panel of airline travelers, or a panel of hematologists. Panelists almost always receive a reward for completing a survey. Often this is money, which may range from cents to hundreds of dollars; however, it can be another incentive, such as coupons or vouchers for consumer goods, credits for video purchases, or anything else that would attract the desired panelists. This reward is a major component of the cost per complete of a survey: the cost to get a completed survey.
Sample providers spend a lot of time maintaining their panels. The survey researcher wants assurance that the sample she purchases is truly representative of the market segment she is researching. Sample providers build their reputation on the quality of sample they provide. They use statistical tools, trial surveys, and other techniques to measure and document the sample quality.
Trackers and Waves
Many surveys are fielded only once; these are one-off surveys. Some surveys are fielded repeatedly. These are commonly used to examine the change in the attitude of the respondents over time. Researching the change in attitude over time is called longitudinal analysis. A survey that is fielded repeatedly is called a tracker. A tracker might be fielded monthly, quarterly, yearly, or at other intervals. The intervals are normally evenly spaced in time. Each fielding of a tracker is called a wave.
Verbatim Coding
In the survey research industry, responses to open-end questions are called verbatims. In a closed-end question the set of possible responses from the respondent is known in advance. With an open-end question, the respondent can say anything. For example, suppose a company that sells laundry detergent has designed a new bottle for their product. The company sends a sample to 5,000 households and conducts a survey after the consumers have tried the product. The survey will probably have some closed-end questions to get a profile of the consumer, but to get an honest assessment of what the consumer thinks of the new package the survey might have an open-end question:
What do you dislike about the new package?
So, what does the survey researcher do with the responses to this question? Well, she could just read each verbatim. While that could provide a general understanding of the consumers’ attitudes, it’s really not what the company that is testing the package wants. The researcher would like to provide more specific and actionable advice to the company. Things like:
22% of women over 60 thought the screw cap was too slippery.
8% of respondents said the bottle was too wide for their shelves.
This is where verbatim coding, or simply coding, comes in. Codes are answers, just like those for closed-end questions. The difference is that the codes are typically created after the survey is conducted and the responses are gathered. Coders are people trained in the art of verbatim coding, often working on a coding platform such as Ascribe Coder. Coders read the verbatims collected in the survey and invent a set of codes that capture the key points in the verbatims. The set of codes is called a codebook or code frame. For our question, the codebook might contain these codes:
- Screw cap too slippery
- Bottle too wide
- Not sufficiently child-proof
- Tends to drip after pouring
The coders read each verbatim and assign one or more codes to it. Once coding is complete, the researcher can easily read the coded responses and see what percentage of respondents thought the cap was too slippery. You can see that, armed with information from the closed-end responses, the researcher could then make the statement:
22% of women over 60 thought the screw cap was too slippery.
Now you can see why the responses to open-end questions are called verbatims, not answers. The answers are the codes, and the coding process turns verbatims into answers. Put another way, coding turns qualitative information into quantitative information.
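In data terms, coded responses are just a mapping from each respondent to the set of codes assigned, and the researcher's percentages fall out of simple counting. A hypothetical sketch (respondent IDs and code assignments are invented to mirror the example codebook above):

```python
from collections import Counter

# Hypothetical coded data: respondent ID -> codes assigned by coders
coded = {
    101: ["Screw cap too slippery"],
    102: ["Bottle too wide", "Tends to drip after pouring"],
    103: ["Screw cap too slippery", "Not sufficiently child-proof"],
    104: ["Bottle too wide"],
}

counts = Counter(code for codes in coded.values() for code in codes)
total = len(coded)
for code, count in counts.most_common():
    print(f"{count / total:.0%} of respondents: {code}")
```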
Codebooks, Codes, and Nets
Let’s look at a real codebook. The question posed to the respondent is:
In addition to the varieties already offered by this product, are there any other old-time Snapple favorites that you would want to see included as new varieties of this product?
And here is the codebook:
- VARIETY OF FLAVORS
  - like apple variety
  - like peach variety
  - like cherry variety
  - like peach tea variety (unspecified)
  - like peach iced tea variety
  - like raspberry tea variety
  - like lemon iced tea variety
  - other variety of flavors comments
- HEALTH/ NUTRITION
  - good for dieting/ weight management
  - natural/ not contain artificial ingredients
  - sugar free
  - other health/ nutrition comments
- MISCELLANEOUS
  - other miscellaneous comments
- NOTHING
- DON’T KNOW
Notice that the codebook is not a simple list. It is indented and organized into topics, called nets; the other items are codes. Nets are used to organize the codebook. Here the codebook has two major nets, one for respondents who say they like specific flavors and the other for respondents who mention health or nutrition.
In this example, there is only one level of nets, but nets can be nested in other nets. You can think of it like a document in outline form, where the nets are the headers of the various sections.
Nets cannot be used to code responses. They are not themselves answers or responses to questions and instead are used to organize the answers (codes).
Downstream Data Processing
Once the questions in a study are coded, they are ready for the downstream data processing department in the survey research company. This department may be called data processing, tabulation, or simply tab. In tab, the results of the survey are prepared for review by the market researcher and then by the end client.
The tab department uses software tools to analyze and organize the results of the study. These tools include statistical analysis, which can be very sophisticated. Normally, this software is not interested in the text of the code. For example, if a response is coded “like apple variety” the tab software is not interested in that text but wants a number like 002. From the tab software's point of view, the respondent said 002, not “like apple variety”. The text “like apple variety” is used by the tab software only when it is printing a report for a human to read. At that time, it will replace 002 with “like apple variety” to make the report human-readable.

Before the data are sent to the tab department, each code must be given a number. The codebook then looks like this:
- 001 VARIETY OF FLAVORS
  - 002 like apple variety
  - 003 like peach variety
  - 004 like cherry variety
  - 021 like peach tea variety (unspecified)
  - 022 like peach iced tea variety
  - 023 like raspberry tea variety
  - 024 like lemon iced tea variety
  - 025 other variety of flavors comments
- 026 HEALTH/ NUTRITION
  - 027 good for dieting/ weight management
  - 028 natural/ not contain artificial ingredients
  - 029 sugar free
  - 030 other health/ nutrition comments
- 031 MISCELLANEOUS
  - 032 other miscellaneous comments
- 998 NOTHING
- 999 DON’T KNOW
The tab department may impose some rules on how codes are numbered. In this example the code 999 always means “don’t know”.
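A small sketch of this idea: the tab software stores only code numbers, and the text labels are substituted only when producing a human-readable report. The numbers follow the example codebook above; the response rows are invented for illustration.

```python
codebook = {
    2: "like apple variety",
    3: "like peach variety",
    29: "sugar free",
    998: "NOTHING",
    999: "DON'T KNOW",
}

# What the tab department sees: respondents as lists of code numbers
responses = {201: [2, 29], 202: [3], 203: [999]}

# Labels are substituted only when printing a report for a human reader
for respondent, codes in responses.items():
    print(respondent, [codebook[c] for c in codes])
```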
Choose Ascribe For Your Verbatim Coding Software Needs
When it comes to verbatim coding for open-ended questions in market research surveys, Ascribe offers unparalleled solutions to meet your needs. Our sophisticated coding platform, Ascribe Coder, is designed to streamline the process of categorizing and analyzing open-ended responses, transforming qualitative data into quantitative results. Whether you are dealing with responses from customer satisfaction surveys, employee feedback, or product evaluations, Ascribe provides the tools necessary for efficient and accurate verbatim coding.
If you are short on time or need further assistance with your verbatim coding projects, Ascribe Services can complete the coding project for you. They also offer additional services like Verbatim Quality review, AI Coding with human review, and translation with human review. Many of the top market research firms and corporations trust Ascribe for their verbatim coding needs. Contact us to learn more about coding with Coder.