
Methodology

How CareVoice collects, anonymizes, and publishes what nurses report. Updated whenever the instrument or pipeline changes.

Modular survey design

CareVoice is not one questionnaire. It is a menu of twelve opt-in microsurveys, each focused on a single working-condition dimension. A respondent picks one topic, answers it in 60 to 120 seconds, and leaves. She can return another day for another topic. Each submission is atomic — completing one topic does not require completing any other.

The twelve modules in current rotation: burnout (CBI-6), workplace abuse, family and personal-life impact, retaliation after reporting, institutional violence (TSO and ratios), intent to leave, new grad / first 2 years, agency and travel staffing, pregnancy and postpartum at work, harassment and discrimination, patient-safety near-misses, and open testimony.

Modular design is a deliberate trade-off against the broad omnibus surveys CFNU and SHCWEP run annually. The cost is that not every respondent answers every dimension. The benefit is a 60-second commitment per session, which lets the registry capture nurses who would not give twenty minutes — and gives every dimension its own k-anonymous denominator instead of forcing all questions onto every respondent.

Base-layer demographics

Three short questions are asked once per submission, before the topic-specific module begins: province (required), unit type (optional), years of practice in bands (optional). Province is required because every published cut is sliced by it; unit and experience are optional because we do not want a missed checkbox to gate the registry.

The base layer is stored separately from the module answers. This means any module can be cross-cut against any base-layer dimension at the analysis stage, while individual responses stay isolated from the larger registry.

We never ask for the name of an employer, a hospital, a CIUSSS or CISSS, a supervisor, or a patient. Only categories.

Burnout — Copenhagen Burnout Inventory (CBI-6)

Burnout is measured with the work-related subscale of the Copenhagen Burnout Inventory (Kristensen et al., 2005, doi:10.1080/02678370500297720). The instrument is in the public domain, validated in français québécois (Dion & Tessier 1994), shorter than ProQOL, and avoids the depersonalization assumption that makes Maslach awkward in nursing contexts.

Six items, five-point frequency scale (Always · Often · Sometimes · Seldom · Never/Almost never). Server-side scoring maps the responses to 100 / 75 / 50 / 25 / 0; the six per-item scores are averaged into an overall 0–100 figure. Severe burnout threshold: ≥ 50. Critical: ≥ 75.
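The scoring described above can be sketched as follows. The frequency labels and the ≥ 50 / ≥ 75 thresholds come from the text; the function names and label spellings are illustrative, not the actual implementation.

```python
# Map each CBI-6 frequency response to its numeric score.
# "never" stands in for the "Never/Almost never" label.
SCORE_MAP = {"always": 100, "often": 75, "sometimes": 50, "seldom": 25, "never": 0}

SEVERE_THRESHOLD = 50    # overall score >= 50: severe burnout
CRITICAL_THRESHOLD = 75  # overall score >= 75: critical

def score_cbi6(answers: list[str]) -> float:
    """Average the six mapped item scores into a 0-100 overall figure."""
    if len(answers) != 6:
        raise ValueError("CBI-6 expects exactly six answers")
    return sum(SCORE_MAP[a.lower()] for a in answers) / 6

def classify(score: float) -> str:
    """Bucket an overall score against the published thresholds."""
    if score >= CRITICAL_THRESHOLD:
        return "critical"
    if score >= SEVERE_THRESHOLD:
        return "severe"
    return "below threshold"
```

A respondent answering "Sometimes" to all six items scores 50 exactly and lands in the severe band, which is why the threshold is stated as an inclusive ≥.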

Individual scores are never displayed. Module aggregates are only published when the cell has n ≥ 30. A respondent who completes only the burnout module produces one row; that row never resolves into a published number until at least 29 other respondents have answered the same item under comparable demographics.

Workplace-abuse taxonomy

Four act-types are surfaced as separate tiles, not collapsed into one: verbal abuse, physical assault, sexual assault or unwanted sexual contact, and threats of violence. The split between physical and sexual is a deliberate methodological upgrade over CFNU 2017, where sexual assault is grouped under "physical" and disappears in the headline.

Source of abuse is captured as a separate multi-select: patient or resident, family or visitor, colleague (peer nurse or support staff), physician, supervisor or manager, senior administration. Physician is its own tile. Earlier instruments collapsed it into "colleague", and the low physician-perpetrator rates those instruments report are an artifact of the taxonomy, not of the workplace.

Frequency in the 30-day recall window is captured in bands (1, 2-3, 4-9, 10+). Bands let us publish intensity ("X% experienced abuse 4+ times in 30 days") without exposing exact counts that would compound with base-layer demographics into re-identifiable cells.
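The banding step can be sketched in a few lines. The band boundaries (1, 2-3, 4-9, 10+) come from the text; the function name is illustrative.

```python
def frequency_band(count: int) -> str:
    """Collapse an exact 30-day incident count into the published band,
    so exact counts never leave the intake layer."""
    if count <= 0:
        raise ValueError("bands apply only to respondents reporting at least one incident")
    if count == 1:
        return "1"
    if count <= 3:
        return "2-3"
    if count <= 9:
        return "4-9"
    return "10+"
```

Because only the band is stored downstream, a cross-tab can report "4+ times in 30 days" without a precise count ever being joinable against the base-layer demographics.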

Skip-logic and branching

Several modules carry conditional branches that route a respondent through different question sets depending on her earlier answers. The retaliation module is the most consequential: question 1 splits the sample into three populations — those who formally reported an incident in the last 12 months, those who considered reporting but didn't, and those who had nothing to report. Each branch has its own follow-up questions; the populations never share a denominator.

Skip-logic is encoded declaratively in the database, not in client code. Every rule has the form "if a previous answer matches, jump to a specific later question or end the module." Compound rules use AND/OR combinators. The form runner evaluates rules in order; first match wins.
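A minimal sketch of that evaluator, under assumed rule shapes: the field names ("when", "all_of", "any_of", "goto") are hypothetical, not the actual database schema, but the semantics match the text — rules are evaluated in order, compound conditions use AND/OR combinators, and the first match wins.

```python
END = "__end__"  # sentinel: matching rule ends the module

def matches(cond: dict, answers: dict) -> bool:
    """A condition is either a single {"question", "equals"} pair or a
    compound {"all_of": [...]} / {"any_of": [...]} combinator."""
    if "all_of" in cond:
        return all(matches(c, answers) for c in cond["all_of"])
    if "any_of" in cond:
        return any(matches(c, answers) for c in cond["any_of"])
    return answers.get(cond["question"]) == cond["equals"]

def next_question(rules: list[dict], answers: dict, default_next: str) -> str:
    """Evaluate rules in declaration order; the first matching rule wins.
    No match falls through to the next question in sequence."""
    for rule in rules:
        if matches(rule["when"], answers):
            return rule["goto"]
    return default_next
```

With rules of this shape, the retaliation module's question 1 would route "considered reporting but didn't" respondents to the chilling-effect branch while "nothing to report" respondents exit the module immediately.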

Branching is what differentiates this instrument from a flat Likert battery. The chilling-effect rate (the population that wanted to report but didn't, broken down by reason) is a number that does not exist in any Canadian population dataset. The branch is what makes it measurable.

K-anonymity and cell suppression

No published cell shows fewer than 5 responses. This is the floor; several modules raise it.

On the home dashboard and the per-province map, n ≥ 5 is the gate. On dimension cross-tabs that compose three or more variables (province × unit × incident type, for instance), n ≥ 10 is the gate. On the patient-safety module specifically, n ≥ 25 applies to any cross-cut that combines incident type with location — Crookes v Newton co-publisher liability is highest there, and we accept the lower statistical resolution as the trade-off.

The harassment-and-discrimination module suppresses identity-dimension × small-region cells below n = 10, because the combination "language harassment + Saguenay + ICU" can re-identify a single respondent on a small unit even when each dimension alone is k-anonymous.

Suppression rules are enforced at the analysis layer, not by post-hoc redaction. Cells below threshold are never queried into existence in the first place.
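The gating above can be sketched as a threshold lookup plus a query-time filter. The floors come from the rules in this section; the module names and the max() composition are illustrative simplifications (for instance, the patient-safety n ≥ 25 actually applies only to cross-cuts combining incident type with location, and the burnout n ≥ 30 from that module's section is folded in here as a module floor).

```python
# Hypothetical module-level floors that override the base gates when higher.
MODULE_FLOORS = {"burnout": 30, "patient-safety": 25}

def floor_for(module: str, dimensions: int) -> int:
    """Base gate: n >= 5. Cross-tabs composing three or more variables: n >= 10.
    Module-specific floors win when they are stricter."""
    base = 10 if dimensions >= 3 else 5
    return max(base, MODULE_FLOORS.get(module, 0))

def publishable(cells: dict, floor: int) -> dict:
    """Filter at the analysis layer: under-threshold cells are dropped before
    they are ever materialized, not redacted after the fact."""
    return {key: n for key, n in cells.items() if n >= floor}
```

The design choice is that suppression lives in the query path, so no report-generation code ever holds an under-threshold number that could leak through a log or a cached page.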

Encryption and moderation pipeline

Free-text narratives — when a module includes one — are encrypted at the application layer with AES-256-GCM before they touch the database. The encryption key is held in an environment variable, separate from the database credentials. A leak of the database without the key still does not expose any narrative text.
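A sketch of that application-layer step, using the widely deployed `cryptography` package — an assumption, since the text does not name the library. The pattern shown (random 96-bit nonce prepended to the ciphertext, key sourced from the environment) is a common AES-GCM idiom, not a description of the actual code.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In production the key would come from an environment variable, e.g.
# bytes.fromhex(os.environ["NARRATIVE_KEY"]) -- variable name hypothetical.

def encrypt_narrative(plaintext: str, key: bytes) -> bytes:
    """AES-256-GCM with a fresh random 96-bit nonce per narrative;
    the nonce is stored alongside (prepended to) the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), None)

def decrypt_narrative(blob: bytes, key: bytes) -> str:
    """Split the stored blob back into nonce and ciphertext; GCM's auth tag
    makes any tampering with the stored column fail loudly here."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], None).decode("utf-8")
```

Because the key never sits in the database, a dump of the encrypted column alone yields only opaque blobs, which is the property the paragraph above claims.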

The encrypted column is never published. A second column holds the human-redacted version that the moderator approves; that is the only narrative the public ever sees. Submissions whose narrative cannot be safely de-identified are marked rejected and the redacted column stays empty forever — the encrypted original is wiped on the standard 90-day rotation.

Moderation is performed through an authenticated admin queue that decrypts narratives one row at a time, in a privileged context, never from the browser. The moderator's username is recorded on every approve / reject / withdraw action for audit. A respondent who later wants her testimony withdrawn can email the contact address; withdrawal clears the published row and is logged with a timestamp.

Redaction rules

Every published narrative passes through the same seven rules. Each rule is enforced manually by the moderator before the testimony enters the public archive.

  • Replace all proper names of people with role: 'Dr. Smith' → 'a physician', 'Marie' → 'a colleague'.
  • Replace hospital, CIUSSS, CISSS, and clinic names with role plus region only: 'CHSLD Saint-Joseph in Laval' → 'a CHSLD in the Laurentides'.
  • Replace exact dates with relative windows: '14 mars 2024' → 'in March 2024'. Months and years are usually safe; specific days and dates are not.
  • Replace room numbers, ward identifiers, employee IDs, and license numbers with generic placeholders.
  • Strip professional titles that combined with the unit and province become identifying — for example, the only NP on a small rural unit.
  • Preserve the respondent's voice. If cuts compromise meaning, mark the row rejected rather than rewrite to neutrality.
  • If the testimony names a specific incident at a unit that has fewer than 10 reports in the registry, suppress publication entirely and mark rejected.
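The rules above are enforced by hand, but a pre-flagging pass can surface likely violations for the moderator before review begins. The sketch below is an assumption — the text describes manual enforcement only — and the patterns are deliberately noisy: a match is a prompt to look, never an automatic replacement, and an empty result does not mean the text is safe.

```python
import re

# Patterns that likely trip one of the redaction rules; hits are flagged
# for the moderator, never auto-rewritten.
FLAG_PATTERNS = {
    "honorific_name": re.compile(r"\b(?:Dr|Mr|Mrs|Ms|Mme|M)\.?\s+[A-Z][a-zà-ÿ]+"),
    "facility": re.compile(r"\b(?:CHSLD|CIUSSS|CISSS|CLSC|Hôpital)\b[^.,;]*", re.IGNORECASE),
    "exact_date": re.compile(
        r"\b\d{1,2}\s+(?:janvier|février|mars|avril|mai|juin|juillet|août"
        r"|septembre|octobre|novembre|décembre|January|February|March|April"
        r"|May|June|July|August|September|October|November|December)\s+\d{4}\b",
        re.IGNORECASE),
    "room_or_id": re.compile(r"\b(?:room|chambre|badge|permis)\s*#?\s*\d+\b", re.IGNORECASE),
}

def preflag(narrative: str) -> dict[str, list[str]]:
    """Return {rule_name: matched_strings} for moderator review."""
    hits = {}
    for rule, pattern in FLAG_PATTERNS.items():
        found = pattern.findall(narrative)
        if found:
            hits[rule] = found
    return hits
```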

Bilingual non-translation

Canadian English and Québec French are treated as first-class languages on this site. Neither is a translation of the other. Question copy is written in français québécois first and adapted to English-Canadian after, because Quebec-specific labour vocabulary (TSO, CHSLD, CIUSSS, CISSS, FIQ, OIIQ, Comité paritaire SST) does not survive a faithful translation from English.

A small number of options exist only in the FR-QC instrument. The clearest example is the Comité paritaire SST tile in the retaliation module's report-channel question — a Quebec-specific joint occupational health-and-safety committee with no exact Ontario equivalent. The schema permits these QC-only tiles via a flag on the option; the database trigger enforces that no other option may exist asymmetrically across the two languages.
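The invariant that trigger enforces can be sketched in application code. The data shape below (a `qc_only` flag, a set of language codes per option key) is hypothetical; only the rule itself — FR-QC-only options are permitted, any other asymmetry is rejected — comes from the text.

```python
def check_option_symmetry(options: dict) -> list[str]:
    """Each option maps a key to {"langs": set of language codes, "qc_only": bool}.
    qc_only options may exist in fr-qc alone; everything else must exist
    in both fr-qc and en-ca."""
    errors = []
    for key, opt in options.items():
        if opt.get("qc_only"):
            if opt["langs"] != {"fr-qc"}:
                errors.append(f"{key}: qc_only options must exist only in fr-qc")
        elif opt["langs"] != {"fr-qc", "en-ca"}:
            errors.append(f"{key}: must exist in both fr-qc and en-ca")
    return errors
```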

Aggregations that combine FR-QC-only tiles with EN-CA submissions are not published — the denominators differ.

Sampling and limitations

CareVoice is a self-selected convenience sample. It is not a probability survey. Respondents reach the registry through nurse networks, social media, and direct recommendation; the sampling frame is unknown.

Comparisons to CFNU 2017 (Enough is Enough), CFNU 2022 (Outside Looking In), StatCan SHCWEP 2021, RNAO 2022, OIIQ 2022-2023, and Havaei et al. (UBC, 2021-2023) are directional, not statistical. When a CareVoice rate diverges from one of those baselines, the divergence is read as a signal — to investigate, not to overturn — and never as a more accurate measurement.

The registry's structural advantages over annual academic surveys are: a 30-day rolling recall window (versus annual), province-level granularity (versus national-level reporting), and incident-level outcome data (versus general perception). Its structural disadvantage is the unknown sampling frame.

Versioning and changelog

Every change to a module — a new question, a renamed option, a tightened suppression threshold — produces a database migration with a timestamp. Migrations are append-only; no historical row in the registry is ever rewritten in place.

Quarterly reports cite the module versions active during the reporting window. A nurse who reports under module version 1 is never re-cast as having answered version 2 even if version 2 changes the wording.

When a methodological decision changes, this page is updated and the prior text is preserved in the project's documentation repository. The first version of this page was published in May 2026.

Editorial standards

CareVoice publishes two distinct kinds of content: the CareVoice ledger (first-party anonymous nurse reports, behind k-anonymity floors) and the research surface (third-party citations from CFNU, CIHI, OECD, peer-reviewed journals, government surveys). The two are kept on separate pages and never co-mingled in a single chart.

Every external statistic carries: source organisation, publication name, year, URL, and a `lastVerified` ISO date. Stats older than five years are flagged `archival`; stats older than two years are flagged `stale`. The freshness rule is enforced in code, not by an editor's judgement.

Numbers shown on this site are not paraphrased from media coverage. When a media outlet reported on a primary source, CareVoice cites the primary source directly. When only secondary coverage exists for a claim, the claim is not anchored on the research surface.

Corrections policy

If you spot an error — a wrong figure, a misattributed source, a stale URL, a mistranslation between EN and FR — write to contact@nursio.io with the URL and what's wrong. We aim to correct verifiable factual errors within five business days.

Material corrections (a stat replaced or retracted) are logged in the project repository changelog with the date, the previous value, the new value, and the reason. Page-level `dateModified` reflects the most recent verified correction across all cited stats.

Cosmetic edits (typos, spacing, link reflows) are not logged and do not change the page's `datePublished`. The instrument the data was collected with is never silently revised — see Versioning above.

Conflicts of interest

CareVoice receives no advertising revenue, no sponsorship from hospitals or healthcare employers, no funding from professional regulatory colleges, and no government funding tied to data access. The project is personally funded by its founder through Nursio (1001511837 ONTARIO INC.).

The founder is a registered nurse trained in Quebec. The founder's nursing licence and current employment are disclosed to the incorporating registry but are not made public on this site, in line with the same anonymity principle CareVoice extends to respondents.

If the funding model changes — for instance, if CareVoice ever accepts a foundation grant — the change will be disclosed on this page within seven days, and the Organization JSON-LD `funder` property will be updated to match.