Online Safety Campaigns

Public content designed to show up in high-risk searches and open space for better questions.

What This Means

We work, early and with care, to place thoughtful content where harmful patterns often begin.

Harmful loops don’t usually start with extreme content. They often begin with a question, a search, or a post that shows up at just the right time. What follows is repetition. The more someone sees, the deeper it can go.

We work to build content that appears early in that process. It’s designed to feel familiar, matching the tone and format of what someone is already engaging with, but it shifts the pace, opens space for reflection, and introduces different questions.

This is not public awareness at scale. It is quiet, targeted work that aims to interrupt harmful patterns before they gain momentum.

How We Do It

Targeted content designed for real searches, not assumptions.

We study aggregate search, viewing, and engagement trends where harmful patterns often form. Then we work to build content that appears in those same spaces.

Some of it ranks in search. Some of it runs as ads. Some of it blends into feeds. It is written to feel familiar, but includes just enough friction to invite a pause.

We use the same skills that businesses use to market products: audience research, SEO, ad targeting, and ongoing testing. But instead of selling something, we focus on slowing things down and creating space for reflection.

We test headlines, formats, and tone. If something is not working, we change it. This is adaptive work. The goal is not just to be seen, but to reduce harm and protect trust.

How We Measure Impact

We monitor overall reach, relevance, and signs that escalation is slowing.

We don’t track individuals. We track patterns.

We monitor what shows up in search results and how our content performs near high-risk terms. We look at which posts are shared, engaged with, or overlooked. We test tone, headlines, and formats to see what slows things down and what gets passed over.

On social platforms, we look at reach, saves, and signs of deeper engagement beyond likes. On websites, we track bounce rates and time on page to understand what holds attention. For scripts and real-world tools, we consult with professionals to learn how they land in actual conversations.

We’re not trying to go viral. We’re trying to meet people where they already are and offer a pause. A chance to reconsider, not react.

Support the Cause

Every dollar helps us place thoughtful content where it’s most needed.

Right now, every dollar supports the creation and placement of content that helps slow harmful patterns and open space for reflection online.

A contribution of $25 helps us reach around 2,500 people with thoughtful, well-placed content.

You can make a one-time contribution by clicking the button below!

Monthly Supporter

Ongoing contributions help us maintain long-term content, keep signals active across platforms, and continue refining what works.

$5+/m
What your contribution supports: Digital Kits Access

These kits offer practical tools to help you respond when someone may be caught in a harmful content loop.

$15/m
What your contribution supports: Fund a Signal Tier

This tier includes full access to all digital kits. You're not just using the tools; you're helping carry the signal forward.

$25/m
What your contribution supports: Help place the next signal


Common Questions

What to know before you support this work

This isn’t a traditional campaign. We don’t follow a standard fundraising model, and the work stays intentionally quiet. But it’s built with care, tested over time, and shaped by what we learn. Here are some of the questions people ask before contributing, including how funds are used and how we understand impact.

Do you track individual users?

No. We don’t track individuals. We study patterns in content, search behaviour, and engagement using public tools and aggregate data. Our goal is to understand how harmful loops form, not to monitor people.

Are contributions tax-deductible?

Not at this time. All contributions go directly toward content creation, testing, research, and ethical media placements. We’re building this with transparency and public interest in mind.

How are funds used?

Funds support ad placements, unbranded content sites, media production, research, and partnerships with professionals. We use lean, purpose-driven budgeting and keep our overhead low.

What kind of content do you create?

We create blog posts, short-form videos, comment thread replies, ad copy, and other media designed to show up in feeds and searches where harmful loops tend to start. It’s familiar in tone, but offers space to reflect.

How do you know whether it works?

We track public metrics like search rank, bounce rate, saves, time on page, and interaction rates, not personal data. If something doesn’t land, we change it. We also work with counsellors, educators, and digital professionals to gather qualitative feedback.

Why isn’t all of your content branded?

We don’t brand all of our content. Sometimes, keeping it unlabelled helps it land more gently and keeps the focus on the message, not the source.

Is this a substitute for professional help?

No. Our content is meant to support conversations and slow escalation, not to replace mental health services or legal interventions. If someone is in crisis, professional help is essential.

Can I contribute skills instead of money?

Yes. If you work in content, research, tech, or media buying, and want to help quietly, reach out here.