Canada Privacy Act-Compliance in Survey Hosting & Access

Survey Hosting & Data Access that’s Canada Privacy Act-Compliant

Market researchers who have followed the news about data privacy and protection over the past few years know that there are major concerns with any data that is hosted in the US, or accessed by US parties.

Recently, we’ve seen US survey software providers come under fire from provincial governments, even providers with operations in Canada. The major problem is that US employees of these firms often have access to their Canadian clients’ personal information even when it is stored on Canadian servers. This is in direct conflict with many Canadian privacy acts.

We’re proudly Canadian, and have been for over 25 years. So we understand Federal and Provincial Privacy Acts and how they relate to safeguarding our Canadian clients’ data, including secure access to personal respondent data. Voxco’s secure servers host our Canadian clients’ data here in Canada, and with Voxco your data will never be accessed by any party in the United States.

We’re currently providing survey tools and secure-access data hosting to Federal and Provincial governments, universities, and Canada’s largest research firms. Many of them have chosen us because Voxco offers industry-leading survey software with the added confidence of knowing that your data is securely stored in Canada and accessed only by Canadian parties.

Let us know if we can help you keep your Canadian survey data in Canada.

5 Reasons to support the 2016 Canadian Census

Why Marketers Should Support the 2016 Canadian Census

When Justin Trudeau’s newly formed majority government announced the reinstatement of the mandatory long-form census at its very first Liberal caucus meeting, researchers and businesses across Canada celebrated. The restoration of the mandatory long-form census was a huge step in the right direction for lovers of data across the nation that Voxco calls home.

Here’s a great article by Jan Kestle, Chair of CMA’s Customer Insights & Analytics Council, first published on the Canadian Marketing Association website:

Starting at the beginning of May, Canadians are being asked to complete the 2016 Census. And there is good news for businesses, consumers and citizens alike.

As you’ll recall, for the 2011 Census the mandatory long-form census was replaced by a voluntary survey. This move resulted in a non-response bias that meant a lot of small-area data for 2011 could not be released—including data on income, education, labour force activity and special populations like immigrants and aboriginal peoples. The loss of the long-form census was a significant blow to policy makers, researchers, data scientists and marketing professionals.

But today we can breathe a sigh of relief because the mandatory long-form census is back, and the complexity of the Canadian population will once again be documented in a comprehensive and statistically sound way. Support for the restoration of a “proper” census came from all corners of Canadian society and has reinforced the fact that the Census of Canada is a national treasure to be protected and supported.

In this era of Big Data, some people wonder why we need the census. There are many reasons, but here are my top five for marketers:

  1. Key metrics like market share and untapped potential use census-based data to benchmark an organization’s users against total population.
  2. Census-derived demographics are used to weight surveys to make them applicable to the general population—and more actionable.
  3. Segments derived from small-area census data are used by marketers to link disparate data sources together—connecting demographics to lifestyle and media preferences and thereby helping marketers get the right message to the right people using their preferred media.
  4. Census data for neighbourhoods can help harness unstructured data from digital and social media. This allows marketers to link internal offline data to online sources—getting closer to a single view of the customer.
  5. Using census-derived data to leverage existing surveys and administrative databases results in businesses asking consumers fewer questions. Businesses don’t have to collect data if it already exists in another form—and consumers benefit.
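Point 2 in the list above (weighting surveys with census-derived demographics) can be sketched in a few lines of Python. This is a simplified post-stratification example; the age groups and shares are invented for illustration and are not real census figures:

```python
# Post-stratification: weight survey respondents so the sample's
# demographic mix matches census population proportions.
# All numbers below are illustrative, not real census data.

def poststratification_weights(sample_counts, census_shares):
    """Return a per-stratum weight = population share / sample share."""
    total = sum(sample_counts.values())
    return {
        stratum: census_shares[stratum] / (count / total)
        for stratum, count in sample_counts.items()
    }

# A survey that over-sampled the 18-34 group relative to the census:
sample = {"18-34": 500, "35-54": 300, "55+": 200}     # 50% / 30% / 20%
census = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # population shares

weights = poststratification_weights(sample, census)
# 18-34 responses are down-weighted (0.30 / 0.50 = 0.6),
# 55+ responses are up-weighted (0.35 / 0.20 = 1.75).
print(weights)
```

Each response is then multiplied by its stratum’s weight before tabulation, which is what makes survey results applicable to the general population.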

For these and many other reasons, we urge Canadians to fill out the Census—whether they get the short or long form. The Census is essential to the fundamental operation of the country—from planning the provision of social services to managing economic growth.

We live in a time when busy consumers expect brands to know and connect with them. The resulting good data will also help us be better marketers.

Read the source article at CMA

This Survey Is Unwieldy And Intrusive. And Invaluable.

This Survey Is Unwieldy And Intrusive — And Invaluable To Understanding Americans’ Health

As the length of the NHIS grows, respondents’ attention spans shrink, which means declining response rates. Maggie Koerth-Baker at FiveThirtyEight has put together an excellent article on the impact that decreasing the length of the NHIS could have in the US:

“Every year since 1957, tens of thousands of Americans have opened their homes to government survey takers who poke and prod their way through a list of intimate and occasionally uncomfortable questions. That process is part of the National Health Interview Survey, the gold standard of health data in the United States.

Experts say it is unique in its sample size, the scope of its questions, and how long it has existed. The NHIS is crucial to our ability to track the prevalence of diseases and health-related behaviors, and to answer complex questions involving health, income and family demographics. And now it’s about to change. The National Center for Health Statistics, the branch of the Centers for Disease Control and Prevention that conducts the survey, is planning a big shift in how the NHIS works, one that some scientists fear will impair their ability to learn about things like how stepmothers can subtly disadvantage children — investigations that end up shaping everything from the way your money gets spent to the policies your legislators vote on.

At issue is the question of how long anybody can reasonably expect Americans to put up with sharing intimate details of their lives with taxpayer-funded bureaucrats.

The 35,000 households — more than 87,000 individuals — who will be interviewed for the NHIS this year will have to devote an average of 90 minutes of their time to the survey. They aren’t paid. There’s no prize at the end. “That’s a lot to ask of people,” said Stephen Blumberg, associate director for science at the National Center for Health Statistics.

The thing about a gold standard is that every researcher wants to be a part of it. Twenty years ago, the NHIS lasted about an hour. But over time, more government agencies and scientists wanted to add more and more questions to make it more and more useful. Meanwhile, participation has been dropping. In 1997, the survey had a 91.8 percent response rate. “The response rate today is about 70-73 percent,” Blumberg said.

So scientists and the government are caught in a tug of war over the survey. Pull too hard one way, and they risk losing access to valuable information. Yank it back the other, and Blumberg worries that the participation rate could fall below 60 percent.


But nobody knows whether making it shorter will actually increase the response rate. The NHIS is not the only survey that’s losing respondents over time. Across the board, fewer and fewer Americans are choosing to participate in surveys of all kinds. This trend has been documented by the National Research Council and the Pew Research Center. Scientists I spoke to repeatedly referred to this problem as “the elephant in the room.”

It’s not clear why this is, but the length of the survey is probably not the primary driver, said James Lepkowski, director of the University of Michigan’s program in survey methodology. “If length is an issue, what you should see is that people break off in the middle. That’s actually very rare.”

Read the full article at FiveThirtyEight

This article is a perfect illustration of a primary problem facing researchers today. Is shorter and simpler better? Not always.

Online Polling: More or less accurate than phone polls?

Surveys Say: Online Polls Are Rising. So Are Concerns About Their Results.

The New York Times recently shone a light on the polling industry and how two of the most important trend lines in the industry may be about to cross.

As online polling rises, and the use of telephone surveys declines due mostly to rising costs and diminishing response rates, there are concerns among public opinion researchers that web polling will overtake phone polling before online polling is truly ready to handle the responsibility.

Ready or not, online polling is here and political analysts are facing a deluge of data from new firms employing promising (but not always proven) methodologies. The wide margin of difference in poll results for Donald Trump is noticeable – he fares better in online polls than in traditional polls. But it’s still not clear which method is capturing the true public opinion.

The abundance of web-based polling reflects how much easier it has become over the past decade. But big challenges remain. Random sampling is at the heart of scientific polling, and there’s not yet any way to randomly reach people on the Internet the way auto-dialers allow for telephone polls. Internet pollsters obtain their samples through other, decidedly non-random, means. To compensate, they sometimes rely on extensive statistical modeling.

The online pollsters may get grouped together because they’re using the same channel, but they’re using a far greater diversity of methodology than phone pollsters. The statistical techniques that the Internet pollsters then use to adjust these data vary nearly as much. And because no one is yet sure of the ‘right’ methods, that variance in methodology is a risk.

There is one big reason why online polling (and other non-personal polling methods like IVR) numbers are so varied: Anonymity. Voters are likelier to acknowledge their support of a given candidate in an anonymous online survey than in an interview with a real person. Research suggests that the social acceptability of an opinion shapes a respondent’s willingness to divulge it.

This social acceptability bias may be making online surveys more accurate. But we don’t know. What is also possibly adding to the problem is that the online polls have biased, non-random and non-representative samples. The fact that online pollsters have sometimes relied on extensive modeling to adjust their results could be a sign of the limitations of online polling.

The current web-based approaches to polling are high-profile experiments in the next generation of opinion research, and the industry is watching closely.

Read the source article at The New York Times

Bad Survey Data and how to avoid it

The Illusion of Technology and Decision Making

When the technology of the internet and mass acceptance of desktop computers arrived in the late ’90s, things were supposed to change. And they have. But better access to customer and market information should have led to smarter management decisions across the board. Faster access to customer and market information was supposed to give companies the ability to make production or inventory adjustments on the fly.

But if either of those things were true, then in theory the 2008 crash shouldn’t have happened, and we wouldn’t be in a situation where U.S. consumers have slipping confidence and almost half feel that their country’s economy is still in a recession. Why didn’t our incredible access to technology and consumer data help us avoid these results? It’s not a simple problem with a simple answer, but one cause was highlighted recently by Vic Crain via the MRA blog. The cause? Bad Data.

Technology gives decision makers the nearly limitless access to data they crave. But if the collected data is wrong, the decisions based on it will be wrong too. So where does bad data come from?

  1. Lazy data entry
  2. Programming errors
  3. Managers and sales people who fudge or hide numbers to meet monthly or quarterly targets
  4. Flawed statistical analysis (for example, segmentation)
  5. Wrong customer data
  6. Bad customer satisfaction and market research data
  7. Changes in government data collection

Bad research data in particular comes from several common sources, including (a) asking the wrong questions, (b) response bias, unqualified respondents and fakery, and (c) biased data collectors who really don’t want to collect negative information.

Government adds to the problem. Public health inspectors who call ahead to make appointments for “surprise inspections” add to the problem. Elimination of data from the U.S. Census reduces the accuracy of information with which business works.

Blind acceptance of data as fact is a problem. Sophisticated statistical models of business operations or consumer demand built on bad data will be of questionable reliability.  “The Smell Test” — a subjective tool based on experience and judgment — remains one of the best tools for assessing the quality of data and of models based on the data.
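As a rough illustration of what an automated first pass at the “smell test” might look like, here is a minimal Python sketch that flags a few of the bad-data sources listed above (speeders, straight-liners, and out-of-range values). The field names and thresholds are hypothetical and would need tuning for any real questionnaire:

```python
# A minimal "smell test" for survey data: flag responses that are
# implausibly fast, straight-lined, or out of range. Field names and
# thresholds here are hypothetical examples, not a standard.

def smell_test(response, min_seconds=60, scale_range=(1, 5)):
    """Return a list of quality flags for one survey response."""
    flags = []
    if response["duration_seconds"] < min_seconds:
        flags.append("speeder")            # finished suspiciously fast
    ratings = response["ratings"]
    if len(set(ratings)) == 1 and len(ratings) > 3:
        flags.append("straight-liner")     # same answer to every item
    lo, hi = scale_range
    if any(r < lo or r > hi for r in ratings):
        flags.append("out-of-range")       # data entry / programming error
    return flags

suspect = {"duration_seconds": 25, "ratings": [3, 3, 3, 3, 3]}
print(smell_test(suspect))  # ['speeder', 'straight-liner']
```

Checks like these don’t replace experience and judgment, but they catch the mechanical problems before a model is ever built on the data.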

Read more in the source article at MRA

Will Market Research still be using passive data in 2 years?

Two-thirds of Market Research Clients Do Not Expect to Be Using Passive Data in Two Years

Although market research clients and suppliers are aware that passive data measurement is growing as a source of insight, about 2 out of 3 say they still won’t be using it two years from now.

30% of clients and 27% of suppliers identified “consumer-specific data collected passively” as the source of insights data that would be most important in two years – roughly even with the number of respondents who selected “custom surveys in any mode.”

But more interestingly, when asked about their passive measurement today, and how much they foresee doing in the future, almost 70% of the respondents said they’re doing zero today and expect they’ll still be doing none in two years. Roughly 25% say they’re doing no passive measurement today but expect to be doing ‘some’ in two years.

Budget limitations are, and are expected to remain, the leading reason why researchers don’t expect to increase passive data collection; but a variety of other concerns are worth noting: data integration and regulatory concerns were ranked nearly equally as rationales for not beginning passive data measurement.

The majority of researchers (~60%) will be doing most of their research using mobile devices in two years. The ability to answer specific questions ranks as more important to researchers looking to tell a data story.

Read the source article at Market Research

The Dissolution of a U.S.-EU Personal Information Safe Harbor

European Court of Justice Strikes Down U.S.-EU Safe Harbor

On Tuesday, October 6th, the European Court of Justice (ECJ) struck down the U.S.-EU Safe Harbor Agreement. Under the Safe Harbor Agreement, US companies were allowed to collect and/or transfer data from the EU by “self-certifying” via the U.S. Department of Commerce that they were compliant with 7 Privacy Principles, instead of adhering to the specific privacy laws of numerous EU countries. The agreement was designed to make it easier for US organizations to move the personal data of European subjects to the U.S.

But as of Tuesday, according to the ECJ, the Safe Harbor Agreement (initially approved in 2000) violates the EU’s Data Protection Directive, which forbids the transfer of personal data outside the EU to a country with inadequate privacy protections. The Directive also requires each EU member state to designate at least one Data Protection Authority (DPA) to monitor the application of the Directive within its territory. The Safe Harbor Agreement was becoming a shortcut for American companies to avoid scrutiny by individual DPAs.

The case in which the ECJ made its ruling was sparked by Edward Snowden releasing documents that showed US intelligence agencies mining personal information from the data of U.S. companies. Once the case was referred to the ECJ, it ruled that US companies removing data from the EU can no longer hide behind the Safe Harbor Agreement.

Market Researcher Compliance

Tuesday’s ruling invalidated the Safe Harbor effective immediately, but it’s understood that companies will need some time to assess their options and achieve compliance through other means, such as avoiding US-based data hosting or any required data access by American organizations.

U.S. companies — including survey, opinion and market researchers — are subject to a number of different DPAs across the EU, each of which will be individually responsible for determining whether the United States’ data protections are “adequate.” This makes removal, access or hosting of EU data in the US far more complicated and perilous. Theoretically, U.S. companies could now be subject to fines from which the Safe Harbor previously shielded them.

We are monitoring news about the story as it advances, but for the short-term, it’s important that all research firms that deal in personal information consider the implications of US access to international personal data.

Read the source article at MRA

Europe plans General Data Protection Regulation

In an effort to centralize data protection across its 28 Member States, the EU intends to mandate the General Data Protection Regulation (GDPR) as early as 2017. The GDPR is a proposed law that will apply to personally identifiable data collected from EU residents.

The GDPR will regulate what data can be collected, how it can be collected, and what can be done with it. It will also determine what organizations must do to protect personal data and set legal/financial penalties for organizations that fail to comply.

It’s expected that MR companies in the EU will be affected by the law if it passes, despite the feeling that the law targets organizations that are less responsible with data collection and handling. Financial effects on MR firms will likely include the need to hire new staff to monitor data protection and to implement regular processes for obtaining legal authorization to collect data.

We’ll keep an eye on this news and clearly advise our European clients as the story progresses.

Read the source article at

Are MR Industry best practices changing?

While Some Things Change, Others Will Stay The Same | GreenBook

Technologies and roles are changing rapidly in the Market Research industry. But what about the core best practices that define our industry and that we’ve all been maintaining?

The best practices of yesterday and today will still be the same core best practices of tomorrow, regardless of the technological and physical changes that happen around us. But that doesn’t mean that those best practices aren’t becoming more complicated and nuanced. Because they are.

Survey Length

As an industry, we’ve always respected our respondents’ time – we promise them a short survey and thank them for their time, and that will always be a part of the process. But the definition of ‘short’ is certainly changing, and changing fast. Asking a mobile user to complete a 20-minute survey on their phone is ridiculous. It’s still important that researchers respect the respondent’s time, but our definition of short needs to keep up with technology.

The perfect solution today is micro-surveys, with more frequency if necessary. Surveys that promise they’ll take <5 minutes to complete are getting much higher response rates from a more diverse demographic than longer ones are. But surprisingly, they are still not being adopted by as many members of the industry as you might think. Market researchers are still looking for the happy medium between the 5-minute micro-survey and the 20-minute behemoth. Smart researchers adapt the same surveys across multiple channels.

Insight Reporting

Market researchers have always needed to analyze raw data and extract the key insights that matter to the organization. That will never change – but there is a shift underway in the way we report those insights. Reporting used to be a much more raw presentation, full of bar and pie charts. Nowadays, there’s a big shift away from this raw form of reporting towards storytelling: taking the exact same data that was previously housed in fact-based, word-heavy reports and finding more visual methods for presenting the insights. As an industry we’re moving away from standard toplines and PowerPoints to more engaging, visual stories.

Data Privacy

With the wide adoption of big data, there’s also been far more awareness from the general public about personal data being collected and used. Depending on your surveys, this may not directly apply to your research practices, but it’s a factor in prospective respondents’ minds, and it certainly applies to the secondary data used to supplement survey results. And while this isn’t a change in best practices (anonymize respondents and protect their privacy at all costs), it is a far more difficult landscape to navigate.


Market research will always be the same: gather data to help make better organizational decisions. So at this fundamental core, best practices of yesterday and today will remain tomorrow. It’s just important to differentiate between the core best practices we should be hanging onto because they define our industry, and the stubborn habits that are attached to outdated technology and practices – because those could actually hold us back.

Read the source article by Zontziry Johnson at GreenBook Blog

Cellphones to blame for 2012 polling inaccuracies

Study tells pollsters: Call more cellphones

The culprit of the 2012 US election polling inaccuracies, according to two professors in the department of statistics at Oklahoma State University (OSU), was cellphones. Pollsters need to recommit to stricter methodologies or risk further poll inaccuracies in the future.

Prior to 2012, state election polling tended to be accurate, but in the 2012 presidential cycle public polls had Mitt Romney as the winner of many battleground states. A post-election analysis noted that public opinion pollsters who only called landlines performed poorly, generally showing results that skewed more Republican than Democratic. It turns out that cellphone-only households don’t poll the same as landline households, which leads to more bias in general.

In the 2012 election cycle, 40% of all US households were cellphone-only (CPO households), up from 8% in 2006. Those households tend to lean more towards Democratic support (let your imagination determine the reasons), so their underrepresentation greatly skews the results. Pollsters are partially ignoring a defined portion of the population who vote decidedly differently from the segment being polled. And if we keep following this increasingly invalid sampling methodology, polls will become even more skewed in the future – within just three years, CPO households are estimated to reach nearly 60%.

Of course, as political pollsters try harder to reach CPO households, FCC restrictions and the increased costs associated with stricter cellphone-calling rules are making it harder to follow standardized telephone survey methodology by equally weighting CPO and landline households. The OSU professors recommend that sampling methodology be adjusted “to reflect the new realities and account for a segment of the CPO voting population that tends to vote for Democratic candidates.”
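To see how much underrepresenting CPO households can skew a topline, here is a small Python sketch. The candidate-support figures are invented for illustration; only the 40% CPO share comes from the figures cited above:

```python
# Illustration of the sampling bias described above: if cellphone-only
# (CPO) households vote differently from landline households, then
# under-sampling them skews the topline result. The support numbers
# below are made up for the example.

def blended_support(cpo_support, landline_support, cpo_share):
    """Overall support when CPO households make up cpo_share of the sample."""
    return cpo_support * cpo_share + landline_support * (1 - cpo_share)

cpo_support, landline_support = 0.55, 0.45  # hypothetical candidate support

# True population mix: 40% CPO households (the 2012 figure).
true_topline = blended_support(cpo_support, landline_support, 0.40)

# A landline-heavy poll that reaches only 10% CPO households:
biased_topline = blended_support(cpo_support, landline_support, 0.10)

print(round(true_topline, 3), round(biased_topline, 3))  # 0.49 0.46
```

A three-point gap from sampling frame alone is larger than the margin of error in many battleground-state polls, which is the professors’ point about reweighting CPO households.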

The full regulations surrounding the FCC’s new ruling on TCPA changes have been released, and we are working to provide a summary soon. Keep an eye on the blog for what it means and how it will affect telephone surveying of CPO households.

Read the source article at The Magazine for People in Politics

Let us help you.

Discover how we can help YOUR organization meet its current survey needs.