The Flipside of “The Social Dilemma” – Analytics as the Hero



I watched the Netflix docudrama “The Social Dilemma” last week and I was struck by the underlying message that social media technology is inherently bad. That the foundational technologies – data management, predictive analytics and artificial intelligence (AI) – are the root of the evil.

While I agree that predictive analytics and AI can be abused and used unethically, are they always bad for humanity?

Analytics for good
The docudrama suggested that the data we provide social platforms is “poured out” to “systems with almost no human supervision” to “make better predictions about what we are going to do and who we are.” Social platforms use these predictive models to drive engagement, growth, and advertising – all powered by analytical algorithms. This is all true. Yet predictive analytics was painted in a very negative light, as though its only application is to manipulate humans in harmful ways. What the docudrama failed to provide was any meaningful mention of how predictive analytics can positively impact us. Even within social media platforms, AI and predictive analytics can be a force for good. Consider:

• Connecting with physically distant friends and family around the world
• Communicating news and information during times of emergency. Social media has been used during natural disasters and violent situations to bring people to safety.
• Linking donors to transplant recipients.
• Finding missing persons – whether through forced or voluntary disappearance.
• Understanding sentiment and emotion to indicate potential mental illness or future self-inflicted harm.

We see a lot of marketing and advertising on social media platforms – it’s the monetization model that fuels their business. But is the use of predictive analytics in all marketing and advertising related use cases bad? Predictive analytics and AI enable marketers to:

• Prevent customer churn, knowing that more churn results in increased costs. Delivering offers that help prevent churn saves customers the time and hassle of switching – and saves brands money.
• Improve customer segmentation, knowing that better segmentation of customers results in more appropriate and contextually relevant messages and offers delivered to end customers – as opposed to broad brush generic ones that don’t fit the bill.
• Predict product purchase propensity, delivering messages for the right products across the right channels to customers – at the right point in time.
• Model risk, saving organizations and customers money by marketing to keep the right customers while removing the costly ones.
• Analyze sentiment on digital, social, and voice channels to improve customer service and ultimately the customer experience via personalized marketing messages and offers.

As customer experience becomes the lone differentiator between brands, the use of data and analytics in the correct manner will drive both business benefit and customer experience improvement. I, like many others, want to see analytics used to continually improve the brand to customer interactions.

Where do we go from here?
The program wrapped by calling for reform – mainly around the advertising focused business model but also by providing restrictions and consequences for large technology companies around how they use the data they collect. I agree that this should occur, and I think we can take it even further.

I believe big technology companies will certainly see some reform placed on how predictive analytics and AI can be used. As social media platforms have more and more analytical algorithms coded into their user interfaces, regulations around which “coded behavior modifications” will be allowed should be put into place. To test a design update for its addictive value, put that update into production, and then continue to modify that update to increase user addiction is wrong. Ethics standards are needed – where big technology companies come together to limit these changes in order to safeguard their end users.

I also hope that business and advertising models will be altered to be more user friendly. The intent should not be to serve as many ads as possible – but to serve the right, contextually relevant ads to users. These advertisements should be beneficial to users, not always encouraging the purchase of a product, but advertising health and wellness alternatives to the social media “tether” that is created.

From a user perspective, we need education. Just as the automobile was designed for good, it can be a deadly weapon in the wrong hands. Social media is the same – particularly with younger users. Users should be educated on the business model, know how social media impacts the brain, understand how to deal with social content of all kinds, and realize how they can curb negative usage habits. This should happen before they are allowed to create a profile on any social platform. Responsibility lies with parents as well. Parents should discuss how social media can influence their children’s lives and how to be smart with engagement behaviors. And parents should consider whether children and teens need social media at all.

Technology can certainly improve lives when used properly. Digital disruption influencer Daniel Newman recently summed it up by saying “I love tech. It is awesome. I eat, drink and sleep tech. Understanding how it works may help return a level of decency between fellow humans and drive technology innovation to do more than build awesome apps that connect the world, but technology that drives greater levels of compassion and humanity.”

Jonathan Moran
Jonathan Moran is a global product marketing manager at SAS, with a focus on customer experience and marketing technologies. Jon has over 20 years of marketing and analytics industry experience including roles at Earnix and the Teradata Corporation in pre-sales, consulting and marketing.


  1. Great post. Blaming social media technology for all manner of evil is a bit like blaming the syringe for heroin addiction. If we got rid of syringes in an effort to reduce the use of narcotic drugs, with it would go all the life-saving medicine administered with a syringe. We clearly need boundaries, safeguards and privacy-protecting protocols. But, those are all processes and tactics managed by people, not by technology. As we enter the era of advanced technology, let us keep the focus on the decisions made by perpetrators with bad manners, not circumvent accountability by blaming the tools they employ.

  2. Having also watched “The Social Dilemma”, the potential evils of subtle (and overt) digital messaging, as well as the benefits you’ve outlined, are all there. It feels like much of the challenge lies in leveraging and applying gathered data for real, personalized customer value.

    In one form or another, the negatives of business and advertising communication have been with us for decades, if not for centuries, perhaps beginning with Johannes Gutenberg and his invention of the printing press in 1450. (Much) more recently, we can look to Vance Packard. In his book, The Hidden Persuaders, first published in 1957, Packard explored advertisers’ use of consumer motivational research and other psychological techniques, including depth psychology and subliminal tactics, to manipulate expectations and induce desire for products, particularly in the American postwar era.

    Over the past 60+ years, as technology has brought us to the current state, we have seen both the challenges and opportunities of social communication, and the data it produces. Daniel Newman certainly sees the potential for positive application. That said, many don’t take nearly enough compassion and humanity into their use of data. Need one say more than Cambridge Analytica?

  3. Hi Jonathan: I am not sanguine about companies such as Amazon, Google, Facebook, Microsoft, and others that own vast data repositories ever coming together to safeguard their end users. That would require, among other things, consensus on the meaning of safeguard. I don’t foresee this group ever doing that – at least not in my lifetime. And egalitarian is an adjective far down the list of any I’d use to describe these companies. The only way to protect consumers is for the government to do it. This is one instance where thoughtful regulation is a good thing.

    You’ve pointed out some positive uses of big data and predictive analytics. They are abundant. What continues to fuel the negative consequences cited in the Netflix documentary is the fact that – like many technologies – companies have adopted a Panglossian view of AI, Big Data, Data Science, predictive analytics and personalization, completely ignoring equally abundant nefarious uses. Companies fail their non-investor stakeholders by ignoring the possible outcomes. Boards are complicit, often looking the other way as stakeholders are exploited (Wells Fargo), and not establishing even minimal governance that could mitigate financial and physical harm to employees, customers, suppliers, and communities. Weird times. When it comes to customer-centricity, companies talk a good game, but revenue is still king.

    I agree that when it comes to having the populace at large hold a healthy skepticism of technology – and social media technology in particular – we need education. But who will provide it? Certainly not the companies harvesting and monetizing our data. In a parallel example, Altria has an ostensible campaign to transition adult smokers from tobacco. How’s that going? Marlboro cigarettes are a cash cow for the company, as is Skoal, a smokeless tobacco product known to cause oral cancer.

