Blog

  • Beyond Algorithms: Making Data Analytics Human-Centered, Ethical, and Accessible


    What happens when data-driven insights meet human-centered values? And in an age where AI shapes everything we see and believe, how do we ensure communication stays grounded in empathy, ethics, and real human stories?

I sat down with Dr. Janice M. Collins on the Spotlight on Research Podcast from Ohio University’s E.W. Scripps School of Journalism to tackle these questions and explore what it really means to have a “data analytics advantage” in today’s media landscape.


    Here’s the truth about analytics in communication: The advantage isn’t just having data. It’s knowing how to interpret it meaningfully and ethically. Data analytics moves us beyond gut feelings and into insight-driven communication. It helps us understand audiences in real time—what they feel, believe, and respond to. But there’s a catch we can’t ignore: behind every data point is a human story. A comment. A click. A sentiment. A person. If we lose sight of that, we risk treating audiences as abstractions—numbers on a dashboard instead of living, breathing communities with needs, fears, and hopes.

    This book didn’t start in a boardroom. It started in a classroom and my research lab. For years, I watched students light up with curiosity about analytics, then freeze when faced with spreadsheets and statistical jargon. I saw professionals in consulting struggle to bridge the gap between technical analysis and strategic communication. And I realized something important: the problem wasn’t the data. It was how we talk about it. So I set out to create a bridge: between the technical and the human, between theory and practice, between intimidation and insight.

    The book reflects my work across three worlds. Academia gave me the theory and frameworks. Consulting showed me the urgency and messiness of real-world application. Analytics provided the tools to connect them both. The result? A guide that’s rigorous enough for the classroom but practical enough to use on Monday morning.

    I built the book around what I call the DAV Framework—Discovery, Analysis, and Visualization. It mirrors how we naturally solve problems. Discovery is about starting with curiosity, asking the right questions, and finding reliable data. Analysis means exploring patterns and relationships to make sense of what you’ve found. Visualization transforms those insights into stories people can understand and act on. Think of it as a journey: from finding data in the wild to crafting a narrative that moves people to action. It’s not about being a “data person”—it’s about being a curious person who wants to understand the world more deeply.

    Let me guess: When you hear “data analytics,” you think of complex formulas, intimidating software, and people who speak in statistical significance. I get it. That’s exactly why I wrote this book differently. Instead of starting with theory, I start with stories—real-world examples from social media, marketing campaigns, and everyday communication. The focus is on interpretation over formulas, on building confidence and curiosity instead of fear. Complex ideas are broken into manageable steps. I show you easy-to-use tools that work right away. Math anxiety? Left at the door. You don’t need to be a “data person” to do this work. You just need to care about understanding people better.

    Here’s where things get interesting. AI has made analytics faster, more accessible, and frankly, a bit magical. It can summarize thousands of social media posts in seconds. It can visualize trends you’d never spot manually. It can predict patterns before they fully emerge. But here’s what AI can’t do: make ethical judgments, understand cultural context, replace human empathy, or decide what’s fair. The real power lies in collaboration—humans guiding AI with critical thinking, ethical reasoning, and contextual understanding. We’re not becoming more dependent on machines. We’re learning to be better collaborators with them. And that requires us to stay sharp, stay ethical, and stay human.

    In our conversation, Dr. Collins and I dug into why you need both numbers and stories. Quantitative data tells us what is happening. Qualitative data tells us why. Imagine you’re analyzing a campaign and the numbers show engagement dropping off a cliff. Alarming, right? But why is it dropping? Are people offended by the messaging? Did the algorithm change? Is there a competitor campaign pulling attention? Did your audience simply move on to a new platform? A chart can’t tell you that. But interviews, open-ended survey responses, and thematic analysis can. When you combine both approaches, you capture the full picture: emotion, meaning, and behavior. In communication work, that synthesis isn’t optional, it’s essential.

    Which brings us to the ethical question we can’t ignore: Who’s included in our datasets—and who’s left out? Too often, marginalized voices are missing from the data that shapes media narratives, policy decisions, and business strategies. Algorithms trained on incomplete data perpetuate bias. Privacy gets sacrificed for convenience. Representation becomes an afterthought. Ethical analytics means being intentional about inclusion—whose voices are we hearing and whose are we missing? It means protecting privacy rather than exploiting it. It means asking whether our analysis reinforces inequity or challenges it. Data should empower, not exclude. If we’re not asking these questions, we’re not doing analytics responsibly.

    Social media data is chaotic, emotional, fast-moving, and contradictory. And that’s exactly what makes it so valuable. My advice? Embrace the mess. Use frameworks that help you organize chaos into themes. Focus on patterns rather than noise. And always—always—interpret data within its cultural and social context. A hashtag can be empowering or satirical depending on who’s using it. An emoji can signal joy or sarcasm. A trending topic can mean completely different things in different communities. Good analysts listen before they label.

    Most people don’t care about your data. They care about what it means. A good data story isn’t about showing more charts—it’s about showing the right ones that answer meaningful questions. The best stories simplify complexity without dumbing it down. They speak to both head and heart—logic and emotion. They make audiences remember and inspire them to act. Before you create any visualization, ask yourself: “What do I want my audience to do after seeing this?” That single question will transform how you communicate insights.

    Here’s my bold prediction: The next generation of communicators and business leaders won’t see analytics as a technical skill. They’ll see it as a form of literacy—as essential as reading, writing, and critical thinking. They won’t use data just to measure. They’ll use it to understand. They’ll blend AI tools with human insight to build communication systems that are more transparent, more empathetic, and more responsive to real human needs. In that world, data analytics isn’t just an advantage. It’s a responsibility.

Dr. Collins asked the kinds of questions that made me think deeply about ethics, about empathy, and about the communicators we want to become. Watch the full conversation and join us in exploring how to harness the power of data and AI while keeping human values at the center of everything we do.

    Questions? Thoughts? Let’s talk. Drop a comment below or connect with me—because the future of communication isn’t just about smarter algorithms. It’s about wiser humans using them.

  • The Data Storyteller: How I Turn Analytics into Insight and Impact


    We’re drowning in data but starving for insight. Here’s how I’m helping communicators bridge that gap.

    I started my recent talk with a question I often pose to audiences: “When you see the word analytics, what’s your honest first reaction—excitement, confusion, or skepticism?”

    The responses vary, but there’s usually a pattern. Some people lean forward with genuine curiosity. Others shift uncomfortably, associating analytics with spreadsheets, technical jargon, and numbers that seem to obscure rather than illuminate. A few remain skeptical—they’ve seen too many dashboards that promise insight but deliver only information overload.

    That range of reactions captures precisely why I do this work. As Associate Professor and Director of the Social Media Analytics Research Team (SMART Lab) at Ohio University, I’ve spent years exploring what I call the data paradox: we live in an era of unprecedented information abundance, yet we’re struggling to transform that data into meaningful understanding.

    My talk addressed this tension directly. How do we move from merely collecting data to actually learning from it? How do we turn metrics into meaning, and numbers into narratives that drive change?

    The Data Paradox: When More Means Less

    Consider this: every single day, our world generates 2.5 quintillion bytes of data. That’s a number so large it becomes almost meaningless. Organizations meticulously track every click, view, share, and scroll. They build sophisticated dashboards that visualize engagement rates, reach metrics, and conversion funnels.

    Yet despite—or perhaps because of—this abundance, many find themselves paralyzed. They have the data but lack the insight. They can tell you what happened but not why it matters or what to do next.

    This is the paradox I’ve devoted my career to addressing. More data doesn’t automatically mean more understanding. In fact, without the right frameworks and questions, more data often means more confusion.

    The solution isn’t gathering even more information. It’s about asking better questions and connecting the answers to human stories. It’s about recognizing that behind every data point is a person, and behind every trend is a community of lived experiences.

    When Metrics Mislead: A Tale of Two Strategies

    Let me illustrate this with two contrasting case studies I often share with my students and colleagues.

    First, consider Meta’s Threads, launched in 2023 with enormous fanfare. The platform achieved record-breaking sign-ups—100 million users in just five days. By conventional metrics, it looked like an unqualified success. Headlines celebrated the numbers. Analysts predicted Twitter’s demise.

    Then reality set in. Within weeks, Threads lost approximately 80 percent of its active users. The vanity metrics—the impressive sign-up numbers—had masked deeper problems. What the raw data didn’t reveal was user retention, community formation, or meaningful engagement patterns. Organizations focused on the excitement of acquisition while ignoring the critical signals that predict long-term viability.

    Now contrast that with Spotify Wrapped. Every December, millions of users eagerly await their personalized year-in-review summaries. They share them across social platforms. They compare listening habits with friends. They discover artists they didn’t know they loved. Some even feel seen and understood by their own data.

    What’s the difference? Spotify transformed analytics into self-expression. They made data emotional. They didn’t just tell users “you listened to 42,000 minutes of music this year.” They said “you were in the top 1% of fans for this artist” and “your music taste took you on a journey through these genres.” The data became personal, meaningful, shareable.

    The insight here is profound: data alone doesn’t inspire action or create connection. Storytelling does. Context does. Empathy does.

    The DAV Framework: From Numbers to Narratives

    This conviction—that analytics must be fundamentally human-centered—drives my new book, The Data Analytics Advantage (Oxford University Press, 2025). In it, I introduce what I call the DAV Framework: Discovery, Analysis, Visualization.

    The framework isn’t just a technical process; it mirrors how communicators naturally think and work:

    Discovery begins with curiosity. What questions matter? What patterns might exist beneath the surface? What stories are waiting to be told? This stage is about purposeful exploration, not aimless data collection. It requires us to approach information with both rigor and imagination.

    Analysis builds meaning from what we discover. It’s where we move from observation to interpretation, from correlation to potential causation. This stage demands critical thinking—the ability to see connections, identify outliers, and resist the temptation to let our assumptions dictate our conclusions.

    Visualization turns insight into impact. It’s where we make the invisible visible, where complexity becomes comprehensible, where evidence becomes persuasive. Good visualization isn’t just about making pretty charts; it’s about designing clarity and facilitating understanding.

    AI as Creative Partner, Not Replacement

    Artificial intelligence has transformed every stage of this storytelling process. I’m often asked whether AI will replace data analysts or make human interpretation obsolete. My answer is always the same: AI augments the storyteller; it doesn’t replace them.

    Consider what AI enables us to do now. We can scrape and process massive datasets that would take humans months to compile manually. We can use natural language processing to cluster topics and detect sentiment across thousands of conversations. We can generate visualizations that reveal patterns invisible to the human eye. We can test hypotheses and iterate analyses at speeds previously unimaginable.

    But—and this is crucial—AI can’t replace human judgment. It can show us patterns, but it can’t tell us which patterns matter or why. It can cluster comments by sentiment, but it can’t understand the cultural context that gives those sentiments meaning. It can generate correlation, but it can’t determine causation or ethical implication.

    The goal isn’t automation for its own sake. It’s collaboration between human curiosity and machine capability. The best data storytelling emerges when we combine AI’s computational power with human wisdom, ethical reasoning, and contextual understanding.

    The Profoundly Human Future of Analytics

    I closed my talk with a message I want every student, communicator, and organization to internalize: behind every dataset is a human story waiting to be understood.

    The future of analytics isn’t purely technical. It’s not about bigger models, faster processing, or more sophisticated algorithms—though those things matter. The future of analytics is profoundly human. It requires us to embrace the tools of artificial intelligence while never losing sight of empathy, ethics, and meaning.

    Data storytelling is ultimately about translation: taking the language of numbers and speaking it in the language of human experience. It’s about building bridges between evidence and emotion, between pattern and purpose, between what happened and why it matters.

    As we move deeper into an age of algorithmic mediation and artificial intelligence, this work becomes more essential, not less. We need people who can read data with both technical skill and humanistic wisdom. We need communicators who understand that analytics isn’t separate from storytelling—it’s a powerful form of it.

    My call to action is simple: learn these tools, but never let them replace your curiosity. Master these methods, but never forget why measurement matters. Build your technical capabilities, but always ground them in empathy.

    Because in the end, the data storyteller isn’t defined by algorithms or visualizations. The data storyteller is defined by the ability to turn information into understanding, and understanding into impact that improves human lives.

    That’s the work. That’s the opportunity. That’s what excites me about analytics.

    Reflections from my talk at the CIS Forum at Ohio University, October 9, 2025

  • Building Epistemic Resilience in an Age of Algorithmic Truth


    The Trust Architect: Building Epistemic Resilience in an Age of Algorithmic Truth

    How do we sustain truth and trust when artificial intelligence shapes every conversation we have?

    We live in an era of algorithmic curation. Every scroll, click, and conversation unfolds within invisible architectures of code—systems that decide what information reaches us, which voices amplify, and ultimately, what we come to believe. In this landscape, trust has become paradoxically abundant and scarce: we trust our devices implicitly while doubting nearly everything they show us.

    This is the central tension of our moment. As a scholar of communication, I’ve spent years examining how technology mediates human understanding. What I’ve come to realize is that we’re not simply experiencing an information crisis—we’re witnessing the transformation of truth itself. The question isn’t whether we can fact-check our way out of misinformation. It’s whether we can redesign the very infrastructure through which knowledge flows.

    The Velocity Problem: When Lies Move Faster Than Light

    Consider how information travels today. A false claim about a public figure can circle the globe in hours, accumulating millions of engagements before any correction emerges. By the time fact-checkers publish their findings, the damage is done—not because people are gullible, but because our digital ecosystems reward speed over accuracy.

    I call this the velocity problem. Misinformation doesn’t succeed merely because it’s false; it succeeds because it’s engineered for velocity. Our platforms privilege emotion, outrage, and novelty—precisely the qualities that make falsehoods spread. Truth, by contrast, is slow. It requires verification, nuance, context. In an attention economy that measures success in milliseconds, accuracy becomes a competitive disadvantage.

    This isn’t an accident of design. It’s the design. Social media platforms optimize for engagement, not enlightenment. Algorithms amplify content that keeps us scrolling, regardless of its veracity. The architecture itself—the recommendation engines, the infinite feeds, the metrics of virality—creates an environment where misinformation thrives.

    As scholars and practitioners in communication and the wider information field, we must develop what I call infrastructural literacy: the capacity to understand not just the content of misinformation, but the systems that enable it to flourish. We need to read the architecture, not just the messages it carries.

    The AI Mediator: When Machines Join the Conversation

    Now add artificial intelligence to this equation. AI has fundamentally altered the nature of communication itself. These systems don’t merely transmit or filter information—they generate it. Large language models write news articles, draft emails, and increasingly, produce the very content we consume and share. Chatbots simulate empathy with uncanny precision. Deepfakes render video evidence unreliable.

    This represents a categorical shift in human communication. For millennia, we developed sophisticated heuristics for evaluating trustworthiness: reading facial expressions, detecting vocal inflections, assessing credentials. These mechanisms evolved in a world where communication was fundamentally human. What happens when the voice on the other end of the conversation isn’t human at all?

    AI-mediated communication raises profound epistemic questions—questions about how we know what we know. When algorithms curate our information environment, where does human judgment reside? When content is algorithmically generated, how do we distinguish authentic expression from synthetic production? The line between what is said and what is computed has dissolved.

    This isn’t simply a technological challenge; it’s a philosophical one. We’re forced to reconsider fundamental assumptions about meaning, intention, and truth. If a machine can generate text indistinguishable from human writing, what does authorship mean? If an algorithm can predict what will persuade us before we know it ourselves, what becomes of autonomy?

    These questions aren’t academic abstractions. They shape whether citizens can engage meaningfully in democratic discourse, whether communities can organize effectively for social change, whether individuals can maintain coherent identities in digital spaces.

    Building Resilience: The Architecture of Trust

    If the velocity problem and AI mediation represent the crisis, what might resilience look like? I propose we need epistemic resilience—not merely the ability to identify individual falsehoods, but the capacity to preserve the conditions that make truth-seeking possible.

    This requires us to become what I call trust architects: designers of systems, pedagogies, and institutions that embed verification, transparency, and human accountability at their foundation. Being a trust architect means asking different questions:

    • How do we design communication infrastructures that resist manipulation rather than reward it?
    • How do we cultivate discernment rather than simply demanding skepticism?
    • How do we foreground context over clickbait, depth over virality?
    • How do we build platforms that amplify marginalized voices rather than concentrate power?

    The answers won’t come from technology alone. They require interdisciplinary collaboration—bringing together computer scientists and ethicists, designers and educators, policymakers and community organizers. We need technical solutions, certainly: better content moderation, transparent algorithms, robust verification systems. But we also need social and educational responses: media literacy programs, ethical frameworks for AI development, institutional mechanisms for accountability.

    Paradoxically, AI itself can be part of the solution. When guided by human-centered, value-driven design, these systems can help identify misinformation patterns, reveal algorithmic bias, surface diverse perspectives, and expand civic understanding. Machine learning can detect coordinated disinformation campaigns. Natural language processing can flag manipulated media. Network analysis can reveal hidden influence operations.

But these capabilities only serve democratic ends when embedded within ethical frameworks that prioritize human dignity, epistemic justice, and collective wellbeing. Technology is never neutral. The question is always: whose values does it encode, and whose interests does it serve?

    Trust as Democratic Infrastructure

    Here’s what I’ve come to believe: Trust is not merely a feeling between individuals; it’s the infrastructure of democracy itself. Without shared mechanisms for establishing truth, democratic deliberation becomes impossible. Without epistemic common ground, we fragment into parallel realities, each with its own facts, its own expertise, its own conception of the possible.

    The erosion of trust we’re witnessing, in institutions, in expertise, in each other, isn’t just a social problem. It’s a crisis of democratic capacity. When citizens can’t agree on basic facts, when every claim is dismissed as propaganda, when expertise is indistinguishable from opinion, collective self-governance fails.

    This makes the work of building epistemic resilience fundamentally political. It’s not about creating systems of control or enforcing orthodoxy. It’s about designing conditions where truth-seeking can flourish—where evidence matters, where good-faith disagreement is possible, where collective learning can occur.

    The Path Forward: Wisdom Over Intelligence

    I want to close with a provocation: We don’t need smarter machines; we need wiser humans.

    The future of trust will not emerge from code alone. It will be co-created through critical thinking, ethical design, and epistemic humility. It will require us to ask not just “Can we build this?” but “Should we build this?” and “Who benefits when we do?”

    As researchers, educators, and practitioners, we have both opportunity and obligation. We can design better platforms. We can develop pedagogies of digital discernment. We can advocate for policies that prioritize human flourishing over corporate profit. We can build institutions that anchor truth-seeking in an age of algorithmic uncertainty.

    This work is urgent. The velocity problem accelerates daily. AI-mediated communication expands hourly. But it’s also deeply hopeful. Because while technology reshapes the landscape of truth, human beings still author its meaning. We remain the architects of trust.

The question is what we’ll choose to build.

    Highlights from my talk delivered at the CommDev Colloquium, September 26, 2024

  • Your Digital Compass: Why Your Online Presence is Your Most Vital PhD Tool


    I’ll never forget the feeling. It was my second year in the PhD program, and I was buried in a mountain of books and half-formed ideas. I’d spent months on a literature review, feeling like I was making progress, but it was all… internal. One afternoon, a professor from another university emailed me. He said, “I came across your conference abstract online. Your approach is fascinating. Have you read Dr. So-and-So’s latest work? It seems directly relevant.”

    Not only had he found me, but he had given me a crucial citation I’d missed. More importantly, he had given me something I was desperately lacking: an external, validating glimpse of my own work. In that moment, my private struggle became part of a public conversation. It was the first time I realized that building an online presence wasn’t about vanity; it was about building a compass.

    For students in communication, this is even more critical. Your field is about the transmission of ideas in a digital age. Your scholarship shouldn’t be hidden in a drawer until graduation; it should be living, breathing, and interacting with the world from day one.

    I now tell every new cohort: your LinkedIn profile, your Google Scholar page, and your personal website are not just for an external audience. They are your most powerful tools for self-reflection, grounding, and focus. They are the mirror that shows you who you are becoming as a scholar.

    Here’s how each platform serves this dual purpose.

    1. LinkedIn: Your Professional Narrative

    For the world, LinkedIn is your dynamic CV. It’s where you announce your research interests, share your latest conference presentation, celebrate a published paper, and connect with scholars, practitioners, and potential collaborators across the globe. It showcases you as a professional.

    But for you, the student, it’s so much more. It’s a living record of your academic journey.

    • It keeps you on track: Writing a post about attending a virtual symposium forces you to synthesize and articulate your key takeaways. This isn’t just sharing; it’s active learning.
    • It shows you where you stand: Seeing your profile evolve from “First-Year Doctoral Student” to “Researcher at X Lab” to “Author of Y Study” provides tangible proof of progress on a path that often feels endless.
    • It shows you where to go: Your feed is a curated stream of what your field is doing. It reveals emerging trends, new methodologies, and job opportunities you might never have found otherwise.

2. Google Scholar: Your Intellectual Fingerprint

    For the world, Google Scholar is a simple, powerful library of your work. It makes your contributions citable and accessible, amplifying your impact and allowing others to build directly upon your research.

    But for you, it’s your intellectual dashboard.

    • It keeps you grounded: Those citation counts, however small they start, are a humble reminder that knowledge is a collective endeavor. You are contributing to a web of ideas. Seeing someone else cite your work tells you what part of it resonated—invaluable feedback for a young scholar.
    • It is your personal mirror: Looking at your own Scholar page is like looking at the skeleton of your academic identity. Is there a gap? Does it accurately reflect your interests? It’s a stark, objective view of your output, pushing you to ask, “What’s next?”

    3. The Personal Website: Your Intellectual Home

    For the world, your website is your command center. It’s the one place you fully control your narrative. You can host your CV, blog posts, teaching philosophy, media appearances, and links to all your other profiles. It’s the definitive answer to “What do you do?”

    But for you, it is your sanctuary for reflection.

    • It forces clarity: Maintaining an “About My Research” page requires you to explain your complex dissertation idea clearly and concisely. You will revise this page a dozen times, and with each revision, your own thinking will become sharper.
    • It shows you where you need to go: That “Publications” or “Projects” section can be a powerful motivator. An empty page is a silent challenge. A growing list is a testament to your resilience.
    • It keeps you focused: Your website represents your best academic self. Updating it regularly is a ritual that reconnects you with your core purpose, especially on those days when you feel lost in the weeds.

    Start Now, Not Later

    I know the impulse. “I’ll build my website after I publish something.” “I’ll update my LinkedIn when I’m on the job market.” This is the biggest mistake you can make.

    The value of these tools compounds over time. The connections you make in your first year could lead to a collaboration in your third. The habit of writing small blog posts about your readings will make writing your dissertation chapters feel more natural. The digital paper trail is your academic journey.

Think of it not as building a brand, but as building your digital compass. It points true north towards your goals, reflects the terrain you’ve already covered, and helps you navigate the inevitable fog of doctoral research. It makes the private public, and in doing so, it makes the solitary journey of a PhD a little less lonely and a whole lot clearer.

    Now, go update your profile. Your future self—and your future collaborators—will thank you for it.

  • Discussing Brand Activism at AEJMC 2025


    I’m honored to serve as a discussant at this year’s AEJMC Conference in San Francisco, where I’ll be contributing to a thought-provoking session titled “Consumer Reactions to Brand Activism.” This session brings together emerging scholarship at the intersection of branding, activism, and consumer psychology—areas that are increasingly vital in understanding the evolving landscape of media and marketing communication.

As a discussant, my role is to critically engage with each paper, offering constructive feedback, synthesizing common themes, and posing questions that advance dialogue and deepen our collective understanding. The goal is not just to respond, but to elevate the conversation: to challenge assumptions, highlight connections, and foster meaningful exchange.

    I genuinely enjoy serving as a discussant because it allows me to engage deeply with emerging research and contribute to the scholarly conversation in a meaningful way. Unlike a moderator, whose role is to facilitate the flow of the session, the discussant offers critical reflection—connecting ideas across papers, raising thought-provoking questions, and offering constructive feedback that can help strengthen the work. It’s an opportunity to both support and challenge fellow scholars, and to help create a more dynamic, engaged, and intellectually rigorous session. I find it especially rewarding to highlight connections that might not be immediately visible and to amplify the potential impact of the research being presented.

    The panel features a diverse lineup of timely and innovative research:

    • Dongjae (Jay) Lim and Samaneh Shirani Lapari (University of Alabama) explore the role of brand identification and moral reasoning in shaping how consumers respond to so-called “woke-washing.”
    • Xinyu Zhao, Hui Shi, and Zhengyan Li (University of Miami) dive into AI influencer activism, unpacking how consumers attribute responsibility and interpret motives in this emerging space.
    • Ashley Johns, Sophia Mueller-Bryson, Alessandra Noli Peschiera, and Julio Velasquez (Florida State and Miami) examine the nuanced line between authentic brand activism and activism-washing in advertising, a phenomenon many consumers are now quick to critique.
    • Tracey Kyles (University of Florida) introduces the concept of mirror branding, investigating how politically congruent consumerism can serve as both a persuasive strategy and a means of agenda setting.
    • Sofia Johansson (University of South Florida) takes us global with a critical analysis of green advertising, using Oatly’s campaigns as a lens to explore transnational appeals to environmental consciousness.

    Across these projects, a recurring question emerges: When brands take a stand, do consumers believe them—and does it matter if they don’t?

    As we grapple with growing consumer skepticism, AI-driven content, and demands for corporate accountability, this session promises rich insights into how brand activism is perceived, judged, and acted upon by increasingly discerning audiences.

    I look forward to engaging with these brilliant scholars and fostering dialogue that not only critiques but also imagines new possibilities for ethical and effective brand communication.

    If you’re attending AEJMC in San Francisco, I invite you to join us for this session. Let’s explore what it truly means for a brand to have a voice—and what happens when consumers start talking back.

  • Transforming Business Education with Data Insights


In a world increasingly shaped by algorithms, dashboards, and digital decisions, the role of data analytics in business education is no longer a luxury; it’s a necessity. Business schools across the globe are recognizing this imperative and actively seeking ways to embed data-driven thinking into their curricula, teaching strategies, and institutional planning.

    Recently, I had the opportunity to contribute to this transformation by leading a national-level training workshop titled “Implementing Data Analytics and Big Data in Business Education,” organized by the National Business Education Accreditation Council (NBEAC) in Pakistan. The workshop brought together faculty, department chairs, and deans from some of the country’s most respected business schools for an in-depth, hands-on session that explored the future of business education in the data age.

    This wasn’t just another training session. It was a continuation of my broader commitment to modernizing education across borders and disciplines. Over the past several years, I’ve had the privilege of:

    • Helping design and enhance a communication program at the American University in Ras Al Khaimah (UAE) in alignment with SACSCOC—a leading accrediting body in the U.S.—focusing on curriculum rigor and quality assurance.
    • Enriching NUST Business School in Pakistan with data analytics coursework and pedagogical strategies rooted in practical application.
    • Offering curriculum and data consulting for a business school in Oman, helping align academic offerings with industry demands and analytics capabilities.
    • Leading student training workshops at the University of Ottawa in Canada, focused on integrating communication and data analytics skills into classroom practice.
    • Contributing to the Data Science BSc program at Ohio University, where I currently serve as Associate Professor and Director of the SMART Lab, focusing on social media analytics, digital strategy, and experiential learning.
(Photo: NBEAC training session led by Dr. Laeeq Khan)

    These efforts reflect a unified vision: education must align with the skills and challenges of our time. Whether students are learning marketing, management, or communication, fluency in data analytics and big data must be foundational—not supplementary.

    The NBEAC workshop embodied this vision. It offered:

    • Practical strategies for curriculum redesign, including how to integrate tools like Tableau, Power BI, and Python into existing courses.
    • Case studies that illustrated successful institutional change through analytics-informed pedagogy.
    • Insights into how administrators can lead with data, using analytics for strategic decisions beyond the classroom.

    What made this experience truly rewarding was the level of engagement and reflection from participants. These were academic leaders who understood the urgency of transformation and were actively seeking frameworks to turn vision into action. The questions were sharp, the dialogue was rich, and the collaborative spirit was unmistakable.

    For me, this reaffirmed that consulting and academic leadership go hand-in-hand. It’s not just about delivering expertise—it’s about co-creating solutions with institutions ready to evolve.

    As the landscape of business and education continues to shift, I remain committed to supporting institutions around the world in their journey toward data-informed innovation. The tools exist. The urgency is clear. The next step is leadership that’s bold, strategic, and grounded in insight.

    If your institution is preparing to take that step—through curriculum modernization, faculty training, or strategic planning—I welcome the opportunity to collaborate.

    Transformation is possible. It starts with asking better questions—and knowing where to find the answers.

  • New Research Publication: Unpacking Credibility & Influence in Health Messaging


    I’m excited to announce the publication of our latest collaborative research in Information, Communication & Society — a leading journal (Q1 Scopus) exploring the social dimensions of communication technologies.

Our article, titled “Credibility and Influence in Health Messaging: Examining Medical Professionals’ Role on X in Promoting N95 Respirators during COVID-19,” takes a deep dive into how medical professionals leveraged X (formerly Twitter) as health influencers during the pandemic.

Abstract: This study explores the role of health influencers on X (formerly Twitter) in promoting N95 respirators, with a focus on the accuracy and completeness of the information shared. It evaluates the impact of X influencers on public perception and policy regarding N95 masks. Using a tripartite model integrating eWOM, health messaging, and opinion leadership, the research analyzed 251,740 tweets through social network analysis (SNA) and content analysis. A systematic random sample of 21,436 tweets reveals that influencers with 100k+ followers and verified accounts achieved higher engagement. While health influencers played a significant role in shaping public understanding, gaps in detailed guidance highlight the need for actionable and precise messaging. Positioned at the intersection of public policy and marketing, our study emphasizes influencer collaboration and standardized communication strategies to improve health information dissemination on digital media.

    Read the full article here: https://www.tandfonline.com/doi/full/10.1080/1369118X.2025.2504605

    What We Explored
    As misinformation spread rapidly during the early stages of COVID-19, especially around mask usage, N95 respirators became a flashpoint for public discourse. Our study set out to examine how trusted voices—particularly medical professionals—navigated the digital space to promote reliable health information.

We analyzed over 250,000 tweets using a mixed-methods approach, combining social network analysis with systematic content analysis, guided by a tripartite framework grounded in:

• Electronic Word of Mouth (eWOM)
• Health Communication Theory
• Opinion Leadership
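For readers curious what the network side of this looks like in practice, here is a minimal, illustrative sketch using Python’s networkx library. It is not our actual pipeline: the accounts and edges below are invented, and it simply shows how in-degree centrality can surface influential accounts in a retweet/mention network.

```python
import networkx as nx

# Toy retweet/mention network (hypothetical accounts, not study data).
# An edge (a, b) means account `a` retweeted or mentioned account `b`.
edges = [
    ("user1", "dr_smith"), ("user2", "dr_smith"), ("user3", "dr_smith"),
    ("user1", "health_org"), ("user4", "health_org"),
    ("user4", "dr_smith"), ("user2", "user3"),
]
G = nx.DiGraph(edges)

# In-degree centrality: how often an account is retweeted/mentioned,
# normalized by network size. A rough proxy for opinion leadership.
for account, score in sorted(
    nx.in_degree_centrality(G).items(), key=lambda kv: -kv[1]
):
    print(f"{account:>12}: {score:.2f}")
```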

    Key Findings
    ✅ Verified users and those with over 100K followers drove the most engagement.

    ❗ Even among top influencers, clear, actionable messaging was often missing—highlighting a crucial gap between credibility and communication effectiveness.

    🤝 Influencers played a pivotal role in shaping not just public understanding, but also broader policy debates around mask usage.

    A Global Collaboration
This project brought together researchers from seven institutions across four continents, including:

• University of Hull
• Ohio University
• Sultan Qaboos University
• Indian Institute of Management
• Durham University
• Tufts University
• University of Sharjah

    Why This Matters
    In a world where digital platforms are central to public health communication, the credibility of messengers matters as much as the message itself. This research underscores the need for standardized, actionable, and accurate messaging—especially during global health crises.

    As we look toward the future of health communication, this work contributes to a deeper understanding of digital influence, trust, and impact in an age of information overload.

    Let’s continue working toward a digital ecosystem where science speaks louder than noise.

#HealthCommunication #SocialMediaAnalytics #COVID19 #N95 #DigitalHealth #eWOM #AcademicResearch #InfluencerEffect

  • Introducing my new book: The Data Analytics Advantage


    I’m excited to introduce my new book, The Data Analytics Advantage: Strategies and Insights to Understand Social Media Content and Audiences. As someone who has spent years researching and teaching social media analytics, I’ve seen firsthand how challenging it can be for students, professionals, and businesses to navigate the overwhelming world of social media data.

    My journey into social media analytics has been shaped by both industry experience and academic exploration. As a social media strategist and consultant, I’ve worked with organizations to navigate the ever-evolving digital landscape, helping them harness data for better audience engagement, crisis communication, and strategic decision-making. At the same time, my role as an educator and researcher has allowed me to dive deep into the theoretical and methodological aspects of digital analytics.

    Serving as the Director of the SMART Lab at Ohio University, I have had the privilege of mentoring students, conducting cutting-edge research on AI-driven analytics, and developing hands-on workshops that bridge academic knowledge with real-world applications. Over the years, I saw a growing need for a resource that simplifies social media analytics while maintaining academic rigor—this realization ultimately led to the publication of The Data Analytics Advantage. The book brings together practical insights, analytical techniques, and AI-powered tools, providing a roadmap for anyone looking to make data-driven decisions in digital media and communication.

    I wrote this book to bridge that gap—to provide a clear, structured approach to understanding and applying social media analytics. Whether you’re a student, social media professional, data analyst, or business leader, this book will equip you with practical strategies and insights to turn raw social media data into meaningful, strategic decisions.

    Why This Book?

    Social media has become one of the most powerful forces shaping communication, marketing, and business strategy. But making sense of the massive volume of data it generates requires more than just intuition—it demands a structured approach, the right tools, and an understanding of data analytics principles.

    That’s why I designed this book to be both theoretically grounded and hands-on. While many analytics books focus heavily on coding or abstract theories, The Data Analytics Advantage provides a practical, step-by-step guide that makes social media analytics accessible—even to those with minimal coding experience. Striking the right balance between depth and accessibility was a major challenge. I wanted to ensure that readers without a technical background could still grasp complex concepts while offering enough depth to be useful to professionals.

I am incredibly honored that The Data Analytics Advantage is published by Oxford University Press, a name synonymous with scholarly excellence and intellectual credibility. The book embodies a perfect blend of practical application and academic rigor, making it an essential resource for students, professionals, and educators alike.

But my passion for analytics and technology didn’t just emerge from my professional career—it traces back to my early education. I pursued my A-Levels in Computer Science (University of Oxford) within the British education system, where I developed a strong foundation in computational thinking, problem-solving, and analytical reasoning. I still remember the excitement of writing my first lines of code and uncovering how data could tell compelling stories—an experience that ignited my lifelong fascination with technology and analytics. That early exposure to structured, logic-based thinking set the stage for my academic and professional journey, ultimately culminating in this book.

With Oxford University Press as the publisher, The Data Analytics Advantage not only meets high academic standards but also serves as a practical guide for those looking to turn social media data into actionable insights.

    What’s Inside?

    This book is structured around a three-stage DAV-framework that simplifies the process of working with social media data:

    Discovery – Learn how to ask the right questions, gather data, and organize it effectively.
    Analysis – Master key analytical techniques, from sentiment analysis to network and image analytics.
    Visualization – Transform data into impactful dashboards, reports, and compelling stories.

    Each chapter introduces real-world case studies, industry applications, and interactive exercises, so readers can apply what they learn immediately. Whether you’re looking to track audience engagement, analyze trends, measure sentiment, or visualize social media interactions, this book provides the tools and techniques to do it.

    Companion Website

    The companion website for The Data Analytics Advantage: Strategies and Insights to Understand Social Media Content and Audiences by Laeeq Khan offers concise tutorials on key concepts and tools covered in the book. Additionally, it includes PowerPoint slides and sample quizzes to enhance your learning experience.

    https://global.oup.com/us/companion.websites/9780197814239/

    Key Topics Covered

    • Setting Goals & Measuring Success – Learn how to track key performance indicators (KPIs) and monitor social media trends.
    • Text Analytics & Sentiment Analysis – Understand audience emotions and reactions.
    • Social Network Analysis – Explore how information spreads and influences social media communities.
    • Image & Video Analytics – Learn how visuals shape engagement and messaging.
    • Data Visualization & Storytelling – Make data-driven insights clear and compelling.
    • AI & the Future of Social Media Analytics – Discover how artificial intelligence is reshaping the field.

    I also address important challenges such as data access, privacy, and ethics, which are becoming increasingly critical in today’s digital landscape.

    Who Should Read This Book?

    Data analytics isn’t just about numbers—it’s about insights that drive action. This book is for anyone looking to develop a strong foundation in social media analytics—without getting lost in technical jargon or heavy coding. It’s particularly useful for:

    📌 Students & Educators – A structured textbook with hands-on exercises for digital media, marketing, and communication courses.
    📌 Social Media Professionals – Actionable strategies to measure and improve engagement.
    📌 Data Analysts & Researchers – Analytical techniques to uncover deep insights from social media data.
    📌 Business Leaders & Marketers – Methods to leverage data for decision-making and brand strategy.

    A social media manager, for instance, could use sentiment analysis techniques from the book to assess public perception of a brand. A journalist could leverage data visualization strategies to tell compelling stories. The book provides practical case studies like these.
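To make that concrete, here is a minimal sentiment-analysis sketch in Python using NLTK’s VADER analyzer. It is an illustrative example rather than code from the book, and the sample posts are invented.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

# Hypothetical brand mentions; in practice, load an export of real posts.
posts = [
    "Absolutely love the new release. Great job!",
    "Customer support was slow and unhelpful.",
    "It's fine, I guess. Nothing special.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # `compound` ranges from -1 (most negative) to +1 (most positive)
    print(f"{scores['compound']:+.2f}  {post}")
```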

    Why Now?

    With artificial intelligence revolutionizing data storytelling, the ability to analyze social media content and extract meaningful insights has never been more critical. AI-driven tools can automate complex processes, uncover hidden patterns, and enhance decision-making—but their effectiveness depends entirely on the user’s ability to ask the right questions and interpret the results. Without a strong foundation in data gathering, cleaning, analysis, and visualization, AI is just an advanced tool without direction. The Data Analytics Advantage ensures that readers build this essential foundation first, enabling them to leverage AI techniques more effectively. By understanding the fundamentals of social media analytics, readers can harness AI not just to process data, but to derive actionable, strategic insights that drive impact.

    This book doesn’t just introduce AI—it equips readers with the skills needed to use AI responsibly and effectively. As AI continues to shape the future of digital engagement, it is essential to maintain a strong grasp of ethical and responsible data use. From identifying biases in AI-driven sentiment analysis to ensuring data privacy in network analytics, The Data Analytics Advantage prepares readers to navigate the opportunities and challenges of AI-powered social media analytics. Whether you are a student, educator, marketer, or analyst, this book offers a roadmap to staying ahead of industry trends while mastering the art of data-driven storytelling in the AI era.

    I hope readers walk away feeling empowered to use data effectively in their careers. Whether it’s making better business decisions, enhancing communication strategies, or simply becoming more data-literate, I want this book to be a practical guide for success.

    A Bit About Me

    As the Director of the SMART Lab at Ohio University, my research focuses on social media analytics, AI, and digital engagement. I have spent years working in both academia and industry, consulting on social media strategy, misinformation, and digital communication. With a Ph.D. from Michigan State University, I’ve dedicated my career to helping professionals and students make sense of the ever-evolving digital landscape.

    Get Your Copy

    Amazon: https://www.amazon.com/Data-Analytics-Advantage-Strategies-Understand/dp/0197814239

    If you’re ready to harness the power of social media data and take your analytics skills to the next level, I invite you to explore The Data Analytics Advantage. This book will empower you to discover, analyze, and visualize social media data with confidence—whether you’re an aspiring analyst, a communication professional, or a business leader looking to make smarter, data-driven decisions.
    https://global.oup.com/academic/product/the-data-analytics-advantage-9780197814222

    Let’s unlock the power of data together!

    Would love to hear your thoughts—how do you currently use social media analytics in your field? Drop a comment or reach out to connect!

  • Honored to Receive the Learn Pillar Award


    I am deeply honored to receive Ohio University’s Learn Pillar Award, which celebrates excellence in teaching, student success, and experiential learning. This recognition holds immense meaning for me, as it reflects the values I strive to embody in my work every day.

    It is truly humbling to be acknowledged by my peers and students within the Scripps College of Communication—a community that champions innovation, critical thinking, and meaningful engagement with the ever-evolving media landscape. I am especially grateful to MDIA Chair Josh Antonuccio and Dean Scott Titsworth for their unwavering support and leadership.

    For me, teaching has always been a collaborative journey. Every semester, I am inspired by my students—their curiosity, enthusiasm, and willingness to challenge ideas. These qualities not only enrich our classroom discussions but also push me to grow as an educator.

    My approach to teaching is rooted in creating an engaging, student-centered environment that combines academic rigor with real-world application. I strive to foster excellence in teaching by designing courses that not only challenge students intellectually but also provide them with hands-on, experiential learning opportunities that bridge theory and practice. By integrating projects, industry collaborations, and data-driven problem-solving into the curriculum, I equip students with the skills and confidence needed to excel beyond the classroom. This commitment to student success is reflected in the achievements of my students, many of whom go on to publish their work, present at conferences, and make meaningful contributions in their fields.

    I often say that I learn just as much from my students as they do from me. Their perspectives, questions, and bold thinking make our learning environment vibrant and dynamic. For that, I am profoundly grateful.


    This award reinforces my commitment to fostering spaces where students can think critically, engage deeply, and explore boldly. I look forward to continuing this journey—together with my colleagues and students—toward creating meaningful learning experiences that prepare them for the challenges and opportunities of the media world.

  • Tutorial: Data Wrangling and Visualization


    Working with large datasets can be overwhelming, but with the right approach, you can extract meaningful insights and create compelling visualizations. In this blog post, we’ll walk through the process of finding a dataset, cleaning and organizing it, summarizing it using pivot tables, and finally visualizing key insights in Datawrapper. We’ll use a dataset on global traffic accidents from 2023 and 2024, containing 10,000 rows of data, as an example.

    Step 1: Finding a Dataset

    The first step is sourcing reliable data. Three excellent sources for open datasets include:

    1. Kaggle – Visit kaggle.com/datasets and search for relevant keywords such as “global traffic accidents.” Choose a dataset that fits your needs and download it.
    2. World Bank – The World Bank Data portal offers datasets on economic, environmental, and societal trends.
    3. Google Dataset Search – Go to datasetsearch.research.google.com to explore datasets from multiple sources.

    Once you find a suitable dataset, download it in CSV format so we can begin processing it. I searched on Kaggle.com and found the following dataset: https://www.kaggle.com/datasets/adilshamim8/global-traffic-accidents-dataset

    To download the dataset, click on “Download” (note: you must register on Kaggle to access the file). The dataset will be downloaded as a .zip file, which needs to be extracted. After extraction, you will find a Microsoft Excel file named “global_traffic_accidents”. Alternatively, you can download the data file here.

    You can open this file using Microsoft Excel or Google Sheets for further analysis.
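If you prefer scripting the download, the official kaggle Python package can fetch the file for you. This sketch assumes you have installed the package (pip install kaggle) and saved a Kaggle API token to ~/.kaggle/kaggle.json:

```python
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the token from ~/.kaggle/kaggle.json

# Download and unzip the dataset into the current directory
api.dataset_download_files(
    "adilshamim8/global-traffic-accidents-dataset", path=".", unzip=True
)
```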

    Step 2: Cleaning and Organizing the Data

    After opening the dataset, it’s important to check its structure. Our dataset includes:

    • Date
    • Time
    • Location
    • Weather Condition
    • Road Condition
    • Vehicles Involved
    • Casualties
    • Cause of accident (e.g., speeding, weather, drunk driving)

    You will notice that this global traffic accident data file has 10,000 rows. Let’s tidy up. Dates might be inconsistent—let’s format them properly. I’ll also check for duplicates or blank cells. Clean data means accurate visuals later!
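If you’d rather script these checks than click through them, here is a minimal pandas sketch. It assumes you’ve saved the sheet as global_traffic_accidents.csv and that the column headers match the list above; adjust the names to fit your actual download.

```python
import pandas as pd

# Load the extracted file (or use pd.read_excel for the .xlsx version)
df = pd.read_csv("global_traffic_accidents.csv")

# Standardize dates: unparseable entries become NaT instead of raising errors
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")

# Drop exact duplicate rows and rows missing critical fields
df = df.drop_duplicates()
df = df.dropna(subset=["Date", "Location", "Casualties"])

print(df.shape)  # confirm how many clean rows remain
```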

    Now, with 10,000 rows, uploading this directly to Datawrapper could crash it or create cluttered charts. Instead, we’ll use pivot tables to shrink the data and uncover key trends.

    Summarizing data in a Pivot Table:

    A Pivot Table is a powerful data analysis tool commonly used in spreadsheet applications like Microsoft Excel and Google Sheets to summarize, analyze, and reorganize large datasets. It allows users to dynamically group, filter, and aggregate data without altering the original dataset. With a few clicks, you can calculate totals, averages, counts, and percentages across different categories, making it an essential tool for business intelligence and reporting. Pivot Tables are particularly useful for spotting trends, comparing values, and generating insights from complex data. They also support drag-and-drop functionality, enabling users to quickly arrange and manipulate data by rows, columns, and values to gain meaningful insights.

    In Microsoft Excel, creating a Pivot Table is a straightforward process that provides powerful data analysis capabilities. To begin, open your dataset and select any cell within the data range. Then, navigate to the “Insert” tab on the ribbon. Here, you have two options:

    Recommended PivotTables – This option allows Excel to automatically analyze your dataset and suggest Pivot Table layouts based on common summarization patterns. It is a great choice for beginners or those looking for a quick way to visualize their data without manually configuring fields.

    PivotTable (Manual Creation) – Selecting this option lets you create a Pivot Table from scratch. A dialog box will appear, prompting you to choose the data range and the destination for the Pivot Table (either in a new worksheet or an existing one). Once the Pivot Table is inserted, the PivotTable Fields pane will appear on the right side, allowing you to drag and drop fields into Rows, Columns, Values, and Filters to customize the report according to your needs.

    Let’s summarize using Pivot Tables! For example:

    1. Casualties by Cause—are speeding or bad weather deadlier?
    2. Casualties and Vehicles Involved by Location—which cities/countries have the most incidents?
    3. Casualties by Road Condition—do icy roads lead to more deaths?

    We have now created a few pivot tables for each of the questions.
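If you work in Python rather than a spreadsheet, pandas can produce the same summaries. A sketch continuing from the cleaning example above, under the same column-name assumptions:

```python
# Pandas equivalents of the three pivot tables
# (the cause column may be named "Cause" or "Cause of Accident" in your file)
casualties_by_cause = (
    df.groupby("Cause")["Casualties"].sum().sort_values(ascending=False)
)
by_location = (
    df.groupby("Location")[["Casualties", "Vehicles Involved"]]
    .sum()
    .sort_values("Casualties", ascending=False)
)
casualties_by_road = df.pivot_table(
    index="Road Condition", values="Casualties", aggfunc="sum"
)

print(casualties_by_cause.head())
```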

    See how grouping data instantly clarifies patterns? Now, instead of 10,000 rows, we have a focused summary. Now that we have summarized data and clear questions, let’s move to visualization.

    Step 3: Visualizing the Data in Datawrapper

    We’ll create three visualizations to answer our key questions.


    Casualties by Cause


    Casualties and Vehicles Involved by Location


    The second visualization focuses on Casualties and Vehicles Involved by Location, highlighting the cities and countries with the highest number of incidents. This visualization will help identify hotspots where road accidents result in significant casualties and involve multiple vehicles. The visualization can also provide insights into the severity of incidents in different locations, aiding policymakers and researchers in developing targeted road safety measures.

To answer this question, we will create a Symbol map in Datawrapper.

    First, we paste the data from the Pivot Table into Datawrapper. The platform will then prompt us to confirm whether it should generate coordinates for the listed cities.
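If your summary lives in pandas instead of a spreadsheet, getting it into Datawrapper is straightforward; for example, reusing the hypothetical by_location table from the earlier sketch:

```python
# Save the location summary as a CSV to upload into Datawrapper...
by_location.reset_index().to_csv("casualties_by_location.csv", index=False)

# ...or copy it straight to the clipboard and paste it into the data tab
by_location.reset_index().to_clipboard(index=False)
```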

    Once the latitude and longitude information is added—either automatically or manually—we can proceed to the next step in the visualization process.

    In this step, we have the option to refine, annotate, and adjust the layout of the map. This allows us to enhance the clarity and presentation of the visualization by fine-tuning details such as color schemes, labels, and data points. The annotation feature helps highlight key insights, such as the most affected locations or notable trends. Additionally, modifying the layout ensures that the map is visually engaging and easy to interpret, making it a more effective tool for analysis and communication.

    I encourage you to explore the various functionalities available to enhance the map’s visual appeal and readability. Experimenting with features like color schemes, labels, tooltips, and layout adjustments can help create a more engaging and informative visualization.

    Here is the final version of the map, complete with an attention-grabbing title and a well-crafted subtitle that provides context. Additionally, I have utilized the tooltip functionality in Datawrapper to highlight Beijing, China, making key insights more accessible and interactive for viewers.


    Casualties by Road Condition

    For the third visualization, we aim to analyze Casualties by Road Condition, specifically exploring whether icy roads lead to more fatalities. To effectively present this data, we will use the “Chart” function in Datawrapper.

    To answer this question, we have multiple visualization options, including a bar chart, a range plot, or a well-structured table. Testing different formats is essential to determine the most effective way to convey insights. I chose the range plot because it provides a clear comparison of accident casualties across six different road conditions.

    From this visualization, we can see that dry roads account for the highest number of accidents, followed by gravel. This suggests that factors beyond road conditions, such as driver behavior, speed, or traffic volume, might play a significant role in accident causation—elements not explicitly covered in this dataset.

    Key Observations:

    • Dry roads have the highest number of casualties (8,838), followed closely by Gravel (8,461) and Wet roads (8,295).
    • Icy roads (8,171) and Snowy conditions (8,037) do not have the highest casualty rates, suggesting that bad weather alone is not the primary cause of accidents.
    • Under construction roads (8,080) also contribute significantly to casualties.
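To sanity-check these totals outside Datawrapper, a quick matplotlib bar chart works well. A minimal sketch using the figures listed above:

```python
import matplotlib.pyplot as plt

# Casualty totals by road condition, from the pivot table above
conditions = ["Dry", "Gravel", "Wet", "Icy", "Under construction", "Snowy"]
casualties = [8838, 8461, 8295, 8171, 8080, 8037]

fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(conditions[::-1], casualties[::-1])  # largest bar on top
ax.set_xlabel("Total casualties")
ax.set_title("Casualties by Road Condition (2023–2024)")
plt.tight_layout()
plt.show()
```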

    The following is a simple table visualization, which effectively presents numerical comparisons in a clear, structured format.

This tutorial provided a step-by-step guide on using Datawrapper to visualize road accident data effectively. We explored different visualization techniques to analyze key factors such as accident causes (including speeding and drunk driving), location-based accident rates, and road conditions contributing to casualties. By experimenting with maps, tables, bar charts, and range plots, we identified the most suitable ways to present insights.

    Through this tutorial, we not only learned how to create engaging and informative visuals but also gained deeper insights into road safety trends. By refining, annotating, and selecting the best chart types, we can make complex datasets more accessible, allowing for better decision-making and policy recommendations.