Jiayu Zou

Artifact 02

AI Agents and Managerial Decision Making in Business Organizations

Publication: ENGW 3304 Research Report

Date: February 28, 2026

Course: ENGW 3304 — Advanced Writing in Business Administration · Dr. Tabitha Clark · Northeastern University

Target Audience: Business professionals, managers, and academic readers with an interest in organizational management and AI governance

Skills Demonstrated: Research Synthesis · Academic Writing · APA Citation · Business Report Format · Critical Analysis

Introduction to Artifact

This formal research report was produced for ENGW 3304: Advanced Writing in Business Administration under Dr. Tabitha Clark at Northeastern University. The assignment required students to identify a significant and timely issue in business or management, conduct a structured review of peer-reviewed scholarship and industry data, and produce a professional report that moves from evidence to analysis to actionable recommendations. The genre demands a different set of skills than opinion writing: here, the writer's voice recedes in favor of rigor, structure, and the disciplined synthesis of multiple sources.

I chose to examine how AI agents — systems capable of autonomous or semi-autonomous action — are reshaping managerial decision-making in contemporary business organizations. This topic sits at the center of my academic interests: it is simultaneously a question about technology, organizational behavior, and business strategy. The challenge was not finding relevant research, but navigating a rapidly evolving literature and constructing a coherent argument from sources that sometimes reached conflicting conclusions.

In producing this report, I developed several skills that are central to professional writing in business contexts: the ability to synthesize peer-reviewed scholarship into a clear, accessible argument; the discipline of APA citation and formal report structure; and the analytical skill of moving from findings to recommendations without overstating what the evidence supports. I also practiced writing an AI disclosure statement — a transparency practice that reflects the ethical standards expected of professional writers in an era of generative AI.


Full Text

Introduction

Artificial intelligence is increasingly shaping how business organizations make decisions. Many companies now rely on advanced AI systems to analyze data, generate insights, and support strategic and operational choices. Recent developments in generative AI have further expanded these capabilities, allowing AI systems to act not only as analytical tools but also as autonomous or semi-autonomous agents that influence decisions in real time. As a result, managers face growing challenges in maintaining control and accountability while leveraging these technologies.

Research shows that AI can significantly improve the speed, scale, and quality of strategic analysis. AI systems are capable of processing complex information more efficiently than human decision-makers and can evaluate a wider range of strategic options (Csaszar et al., 2024). However, the integration of AI into decision-making processes also changes how managers understand authority, responsibility, and control within organizations (Raisch & Krakowski, 2021). These shifts require managers to reassess not only what AI systems can do, but also how they should be governed.

This report evaluates how AI agents influence managerial decision-making, accountability, and control in business organizations and provides recommendations for how organizations should manage these systems effectively.

Key Findings

Recent research demonstrates that AI agents are increasingly shaping managerial decision-making in contemporary business organizations. Studies show that these systems enhance analytical capacity by processing large volumes of structured and unstructured data that exceed human cognitive limits (Csaszar et al., 2024). By aggregating diverse data sources and identifying patterns that are difficult for humans to detect, AI systems enable more comprehensive evaluation of market conditions and strategic alternatives. This allows managers to move beyond intuition-based judgments and rely on more data-driven analysis.

However, improved analytical output does not necessarily lead to better decisions. Research shows that AI-generated recommendations depend heavily on how managers define problems and set objectives (Csaszar et al., 2024). When goals are unclear or constraints are biased, AI agents may reinforce flawed assumptions rather than correct them. This suggests that AI does not replace managerial judgment but instead amplifies it — for better or worse.

Managerial control emerges as a central issue in the adoption of AI agents. As AI systems gain autonomy, managers face challenges in supervising decision processes. Humberd and Latham (2025) describe AI agents as evolving organizational actors that influence outcomes without constant human intervention. Many AI models operate as "black boxes" that resist explanation, limiting transparency in how decisions are generated and complicating performance evaluation.

Trust plays a critical role in how managers interact with AI agents. Research shows that optimal performance occurs when trust is calibrated rather than absolute, requiring managers to remain actively engaged in the decision process (Wen et al., 2025). Excessive reliance on AI can introduce ethical and operational risks, as managers may defer responsibility to AI agents and reduce critical evaluation.

Conclusions and Recommendations

The findings indicate that AI agents significantly enhance managerial decision-making by improving analytical capacity, increasing decision speed, and expanding the scope of strategic analysis. However, these benefits are accompanied by substantial managerial challenges. AI systems do not replace managerial judgment; they amplify it, so the quality of outcomes still depends on the quality of the goals, constraints, and assumptions managers supply. Without clear governance structures, firms face increased risks related to misalignment, ethical failure, and regulatory non-compliance.

Based on these findings, four recommendations emerge. First, organizations should establish formal AI governance frameworks that clearly define roles, responsibilities, and decision boundaries. Second, organizations should maintain active managerial involvement in decision processes rather than relying solely on AI outputs. Third, firms should implement transparency and documentation mechanisms to support accountability. Finally, organizations should adopt a balanced approach to trust in AI systems — calibrating reliance based on context, system performance, and decision importance.

Ultimately, the successful use of AI agents depends on the ability of organizations to balance technological efficiency with human judgment. Firms that achieve this balance are more likely to enhance decision quality, maintain accountability, and sustain long-term strategic performance in an increasingly AI-driven business environment.

AI Disclosure Statement

I used generative AI as a limited support tool during the writing process of this report. Specifically, AI was used to help brainstorm ideas, refine sentence clarity, and improve overall organization. The research, analysis, and final written content were developed independently by me based on my own understanding of the sources and course materials. I ensured that all arguments, interpretations, and conclusions presented in this report reflect my own critical thinking.


Reflection

This report required me to hold a complex, multi-source argument together across more than 2,500 words — a discipline that is very different from opinion writing. I learned that the most important skill in research writing is not finding evidence, but knowing which evidence to use and how to sequence it so that the argument builds naturally toward its recommendations. The experience of writing for a professional business audience also sharpened my awareness of register: every sentence had to be precise, every claim had to be supported, and every recommendation had to be grounded in what the evidence actually showed — not what I wished it showed.