
As someone who has spent considerable time at the intersection of technology and organizational change, I have found my work in recent years dominated by two interconnected themes: The Technology of Business and the Business of Technology. The first examines how digital tools have transformed from optional enhancements into the essential infrastructure of modern organizations, while the second explores the unprecedented power wielded by the handful of companies that create and control our digital landscape. At the core of these investigations lies a fundamental question that becomes more pressing with the rapid adoption of AI at scale:
How do we take a responsible approach to the adoption of digital technologies such as AI in our lives?
Such concerns are not new. In 1980, technology philosopher Langdon Winner posed a provocative question in his seminal essay: “Do Artifacts Have Politics?” More than four decades later, as we face an increasingly AI-driven world dominated by a handful of technology giants, Winner’s inquiry is worth revisiting. With the resurgence of AI, concerns are being raised about how individuals can be exploited by AI systems and about the power wielded by the companies driving their development. We are being challenged to reconsider how the political dimensions of technology that Winner identified have evolved and intensified in our modern landscape.
Winner argued that technological artifacts are inherently political – not just in how they’re used, but in their very design and existence. He identified two ways this occurs: technologies that require specific social arrangements in order to function, and those deliberately designed to produce particular social outcomes.
His famous example of Robert Moses’ parkway bridges in New York – built intentionally low to prevent buses (and by extension, lower-income and minority populations) from accessing certain areas – illustrates how physical infrastructure can embed discrimination within seemingly neutral design decisions.
Winner’s analysis goes deeper than simply saying “technology can be used for good or ill.” Instead, he demonstrated that certain technologies inherently favor specific distributions of power, authority, and privilege. Some technologies are more compatible with democratic social structures, while others practically demand hierarchical control.
Today, as AI adoption accelerates, we’ve reached a point where digital technology isn’t just a competitive advantage but often existentially necessary for businesses to function. I call this “The Technology of Business” phenomenon. What began as efficiency tools has evolved into the core of every modern organization.
Cloud computing, enterprise software, digital payment systems, and AI-powered analytics aren’t optional add-ons anymore – they’re the foundation upon which contemporary business operates. This creates a profound dependency relationship that Winner would recognize immediately as political – one that embeds values and assumptions into an organization’s core operations.
Consider how workplace surveillance technologies, productivity tracking software, and algorithmic management systems are restructuring power dynamics within organizations. Leadership teams implementing these technologies must now ask: “What kind of workplace are we designing? What values are embedded in these systems?”
The answers aren’t always easy to define or comfortable to consider. Many current AI systems and tools centralize authority, reduce worker autonomy, and enforce particular rhythms of work that benefit certain stakeholders over others. As Winner might observe, they aren’t politically neutral – they actively shape organizational power structures, often without explicit acknowledgment of this function.
Simultaneously, we’re witnessing what I term “The Business of Technology” – where a small number of technology companies have accumulated unprecedented influence over both markets and society. Apple, Amazon, Microsoft, Google, and Meta don’t just produce tools; they establish the underlying digital infrastructure and rules governing large parts of commercial and social activity.
Often, executives only reluctantly acknowledge that they must play by the rules of these platforms: accept their commission structures, optimize for their ranking algorithms, and adapt to their privacy policies. When these tech giants make policy changes, entire industries must scramble to adjust. Their app store guidelines, content moderation decisions, and API access policies effectively function as private regulation with public impact.
This represents the kind of consolidation of power that Winner cautioned against – technological systems that require or create specific social and economic arrangements. The difference is scale: these artifacts now operate globally, across jurisdictions, often outpacing traditional regulatory frameworks.
Winner’s insights are particularly relevant as we reflect on the current wave of AI adoption. Large language models, generative AI tools, and algorithmic decision systems embody not only technical capabilities but also specific values, assumptions, and power structures.
As I’ve worked with organizations to define AI strategies, a key part of that work has been exposing the tensions between efficiency gains and shifts in workplace authority. When algorithms begin making or recommending decisions previously made by humans, we’re not simply improving productivity – we’re restructuring how authority operates.
These systems reflect their creators’ priorities and biases, as Winner’s work predicts. Training data choices, optimization metrics, and design decisions aren’t merely technical specifications – they encode specific worldviews and priorities. When considering AI governance, Winner reminds us to acknowledge that technology architecture is simultaneously social architecture.
So where does this leave us as business leaders and decision-makers? First, we must acknowledge the political dimensions of the AI-based technologies we are deploying today. Pretending they’re neutral tools obscures important questions about power, accountability, and values.
When evaluating new AI technologies, my personal approach is to focus not just on defining ROI and efficiency goals, but also on describing their governance implications: Who gains authority and who loses it? What values are encoded in this system? What dependencies are we creating? Whose interests are prioritized by default?
At an organizational level, I now place a much stronger focus on understanding and supporting policymakers and regulators as they address AI. While individual organizations may feel powerless against Big Tech platforms, coordinated action through industry associations, regulatory engagement, and careful vendor selection can help rebalance the relationship.
In my work with AI technology providers, Winner’s analysis suggests a responsibility to consider the social implications of design choices. For technology adopters, it means shifting attention well beyond functionality toward explicitly declared governance practices that recognize AI adoption’s broad impact.
Digital technologies have already reshaped society profoundly. Winner concluded his essay by arguing that what matters isn’t technology itself but the social systems in which it’s embedded. Today, those social systems include global platform economies, algorithmic governance, and AI-powered decision frameworks far beyond anything he could have imagined in 1980.
Advances in AI will only accelerate these effects. The work of pioneers such as Langdon Winner provides an important reminder that responsible digital leaders must become far more aware of both the technological and social implications of AI adoption.
As we navigate this landscape, we would do well to keep his core insight in mind: our technologies are never politically neutral. They encode values, redistribute authority, and shape human relationships in profound ways. By acknowledging this reality, we can make more thoughtful choices about which technological futures we wish to create.