The traditional model of collecting user data is hurting us all. Let’s change it, with blockchain.

4 Ways to Reinvent the Facebook/Google Data Model in the Wake of Cambridge Analytica

The Cambridge Analytica scandal, as CNBC reported, marked an important tipping point in how many of us view tech giants like Facebook and Google.

To briefly recap what happened: Facebook allowed a third party (the British political consulting firm Cambridge Analytica) to access huge amounts of Facebook user data and use it for political ends. This data contained details about the identities of millions of Facebook users and was shared without their direct knowledge.

A massive controversy ensued and is still in full swing. Just this week, Mark Zuckerberg tried to address the European Parliament’s concerns. This and other recent events have confirmed what many of us have known for some time: Our data is not safe with the Facebooks and Googles of the world and is used without our knowledge.

Since the Cambridge Analytica scandal broke, widespread protests and furious debate have surrounded the issue of how our personal data ought to be handled. The ethics of the tech juggernauts are being questioned as never before.

As the case against Facebook unfolds and as that conversation dominates the spotlight, the flaws with the Facebook/Google model (where big companies control our data and do with it as they please) are becoming more apparent every day. It’s a system based on profiting from user data, and it pretty much benefits no one aside from the platforms themselves.

Let’s take a look at the problems with this model.

A seriously flawed system

The problems boil down to the fact that users of these companies’ services don’t own their data. It’s the property of the platforms themselves, and that means it just isn’t safe.

While Facebook repeatedly insists that it doesn’t directly sell users’ data to advertising companies, this is an incomplete answer. In reality, both Facebook and Google work closely with ad companies, using their users’ data to target ads at the people most likely to respond well.

Worse, the Cambridge Analytica scandal is proof that this data isn’t safe with these tech companies. It can (and does) easily fall into the hands of people we know very little about, who may then use it in ways we may not agree with.

But the issues run deeper than privacy.

In theory, the data model benefits users and advertisers alike: users see ads for things they like, and ad companies get higher response rates and ultimately more money.

Unfortunately, things aren’t quite that straightforward. Often, data is used inaccurately, leading ad companies to target the wrong users and lose money. A data aggregator might then say it didn’t sell the data — that it simply got leaked or was “harvested” by other companies (as Zuckerberg testified in…