
Human-Machine Workforce Redesign: Implications for Psychological Safety, Job Redesign, and Performance in the AI Era

Authors

AGADA SYLVESTER PHIL. C.MGR
SMA RESEARCH METHODS, Kogi State, Nigeria.

Article Information

*Corresponding author: AGADA SYLVESTER PHIL. C.MGR, SMA RESEARCH METHODS, Kogi State, Nigeria.

Received: March 05, 2026      |      Accepted: March 10, 2026     |     Published: March 14, 2026

Citation: AGADA SYLVESTER PHIL. (2026). “Human-Machine Workforce Redesign: Implications for Psychological Safety, Job Redesign, and Performance in the AI Era”. International Journal of Business Research and Management 4(3); DOI: 10.61148/3065-6753/IJBRM/078.

Copyright: © 2026 AGADA SYLVESTER PHIL. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Human-machine workforce redesign is a multifaceted transformation that manifests in various forms, transcends organizational regimes, crosses industry boundaries, and affects multiple sectors of the economy. This study investigated the relationship between human-machine workforce redesign strategies and employees’ psychological safety in organizations in the AI era. A correlational survey research design was adopted, targeting a population of 72,621 employees from 12 selected organizations in Nigeria. Using a multistage sampling procedure involving purposive, proportionate stratified, and simple random sampling techniques, a sample of 382 employees was drawn for the study. Data were collected using two researcher-designed instruments: the Human-Machine Workforce Redesign Questionnaire (HMWRQ) and the Employees’ Psychological Safety Questionnaire (EPSQ). The instruments were validated by three experts, and a trial test was conducted to establish reliability, yielding coefficients of 0.78 and 0.84, respectively. The data collected were analyzed using Pearson Product-Moment Correlation (PPMC) to answer the research questions, while simple linear regression was employed to test the hypotheses at the 0.05 level of significance. The findings revealed a significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era, and a significant relationship between managers’ attitudes and level of engagement and employees’ psychological safety. The study concluded that this is a dangerous situation: if not urgently addressed, the organizational environment may continue to breed a new generation of workers who experience low psychological safety, threatening the integrity and future of the nation’s workforce performance and competitiveness in the AI era.

Keywords:

Human-Machine Workforce Redesign, Psychological Safety, Job Redesign, Performance, AI Era

Introduction:

Human-machine workforce redesign remains one of the most urgent challenges to sustainable organizational development, institutional integrity, and societal progress in the 21st century. It disrupts traditional roles, damages trust in management, and weakens organizations’ ability to fulfill their mandates effectively. Its influence spans all sectors, from manufacturing to services, and, most concerningly, the structure of work itself. According to the World Economic Forum (2023), AI integration is the augmentation of human tasks with machine intelligence, while the International Labour Organization (ILO, 2024) broadens this to include any act that undermines human autonomy, transparency, and accountability in both human and machine affairs. These definitions reflect a growing global concern that AI is not solely a matter of technological advancement but also a systemic issue that can undermine values, institutional efficiency, and equitable development. Despite increased efforts to promote good governance and AI ethics reforms, unethical practices continue to thrive in both obvious and hidden forms (Brynjolfsson & McAfee, 2024), especially in sectors like work where ethics and human dignity are expected to be fundamental.

In the workforce sector, AI integration manifests through task automation, algorithmic surveillance, job displacement, and procedural manipulation. This has far-reaching consequences for the quality of work, equity in career access, and the credibility of employment relationships. Institutions like the ILO and World Economic Forum warn that AI in the workforce not only displaces vital human roles but also undermines employee morale, promotes unfairness in performance assessments, and weakens the integrity of career paths and promotions. Reports from the United States, China, and Europe have uncovered incidents of AI bias in hiring, excessive surveillance, fabricated performance metrics, and opaque algorithmic decision-making (Acemoglu & Restrepo, 2023; Kellogg et al., 2023). These issues erode the core mission of organizations, diminish public and employee confidence, and hinder the development of competent and adaptable professionals. Ultimately, poor human-machine redesign disrupts the merit system, widens inequality, and weakens the human capital necessary for national development. 

In sub-Saharan Africa, AI adoption remains a systemic challenge rooted in weak institutions, limited transparency, and poor enforcement of accountability measures. The African Union (2024) estimates that the continent risks losing millions of jobs annually to unchecked automation, with the workforce sector being significantly impacted. In African organizations, AI challenges include algorithmic bias in promotions, surveillance without consent, misuse of performance data, and procurement fraud in AI tools (ILO, 2024). These practices not only promote inequality but also harm the reputation of organizations and employees’ professional skills. Nigeria is particularly affected. According to the Nigerian Bureau of Statistics and AI adoption reports (2023), Nigeria ranks low in ethical AI readiness, with widespread issues ranging from opaque algorithmic monitoring by managers and fear of job loss to biased performance scoring and lack of employee involvement in redesign processes. These behaviors have become normalized in many institutions, undermining psychological safety and devaluing employee well-being. While organizations continue to claim they develop workers “in competence and character,” there is a clear lack of deliberate and structured human-machine redesign education, making reforms in management practices and institutional accountability more urgent than ever.

This normalization of AI-related challenges is not just institutional but deeply cultural, as it gradually conditions employees to accept low psychological safety as part of the work experience, thus reinforcing the very disengagement that organizations claim to oppose. According to Edmondson (2019) and Kellogg et al. (2023), when employees witness or experience practices such as unconsulted task automation, constant surveillance, or opaque performance algorithms, they are slowly socialized into a culture where fear and distrust become normal. Such behavior not only hampers individual well-being but also influences peer conduct, organizational norms, and broader societal expectations. In Nigerian organizations, additional examples of AI-related issues include excessive monitoring in exchange for perceived productivity, ghost tasks created by algorithms, and managers using forged AI-generated reports (Brynjolfsson & McAfee, 2024; Adebayo, 2023). These systemic malpractices undermine trust, weaken organizational credibility, and produce workers who may lack both the skills and emotional foundation necessary for national development. Therefore, tackling AI challenges in the workforce requires a deliberate and comprehensive reform strategy focused on promoting transparency, enforcing accountability, and embedding psychological safety in job redesign and managerial practices. 

Human-machine workforce redesign serves various strategic purposes, including optimizing task allocation between humans and machines, securing employee buy-in or placement, bypassing outdated requirements, and enhancing organizational efficiency through ethical means such as co-design workshops or transparent AI tools (Kellogg et al., 2023; Acemoglu & Restrepo, 2023). The forms it takes are diverse: technical redesign includes algorithmic task allocation and automation; administrative redesign involves data governance and procurement; relational redesign includes surveillance ethics and abuse of authority; while credential redesign involves AI-assisted performance tracking and manipulation of metrics (ILO, 2024). 

Poorly executed human-machine workforce redesign severely damages the core goals of the organizational system, especially regarding employee growth. As the main participants in work processes, employees are greatly impacted by unethical AI practices such as unconsulted automation, surveillance tied to performance ratings, falsification of metrics, and favoritism. These actions not only weaken psychological safety and ethical standards among employees but also reduce the rigor necessary for real learning and skill development. Lee and Kim (2022) state that regular exposure to such practices leads to moral disengagement among employees, making distrust seem acceptable or even necessary for survival, not only in competitive work settings but also in other areas of life. This ethical disconnection ultimately results in poor professional skills, as employees who fear AI often lack the critical thinking and collaborative abilities needed in their careers. Adewale and Akinyemi (2023) also warn that workers who internalize low safety during AI transitions are likely to repeat these behaviors in their careers, fueling a cycle of disengagement in public service and the job market.

At the institutional level, the effects of poor redesign are equally devastating. Organizations face severe damage to their reputation, internal governance, and ability to attract and retain both qualified staff and talent. Ochefu and Agabi (2022) contend that administrative AI issues, such as misappropriation of automation budgets, procurement fraud, and nepotism in AI tool selection, erode institutional standards and excellence. Accreditation and compliance risks also become imminent, particularly when regulatory bodies discover that work programmes are not meeting established standards due to compromised evaluation systems. Moreover, poor AI redesign in management leads to resource misallocation, infrastructural decay, and a decline in staff morale. These issues are not isolated; they ripple across the entire system, weakening policy implementation and producing poor performance outcomes. Ede and Imhonopi (2023) note that many competent professionals and innovators are compelled to leave organizations with poor redesign, fueling a brain drain and reducing local capacity for innovation, research, and development. This exodus further weakens the organizational ecosystem and perpetuates inequality in access to quality work environments. 

On a broader societal level, the consequences of poor human-machine redesign extend beyond organizational boundaries. When institutions produce workers with compromised safety and insufficient competence, these individuals eventually become part of the country’s leadership, civil service, and corporate management, carrying with them the same flawed values learned at work, thereby further deepening the system’s failures. Okolie (2024) asserts that the presence of such individuals in strategic positions weakens public institutions, erodes democratic governance, and hampers socio-economic development. Furthermore, public trust in organizations as a tool for social mobility and national transformation declines when citizens perceive that success is based not on merit but on algorithmic manipulation and distrust. This breakdown in trust widens social inequality, fosters cynicism among the youth, and impedes national development goals. The cumulative impact of these issues highlights the urgent need for systemic reforms in the workforce sector, with a particular focus on human-machine redesign strategies, institutional accountability, and the enforcement of psychological safety frameworks in organizations.

Given the far-reaching consequences of poor AI integration in the workforce, particularly its role in producing employees with low psychological safety who later occupy leadership and decision-making positions, it becomes increasingly urgent to address the human failures at their root. This is where human-machine workforce redesign emerges as a strategic intervention. Human-machine workforce redesign refers to intentional management strategies that aim to balance human and machine roles to instill psychological safety, adaptability, and ethical values in workers through both formal job redesign and informal learning experiences. At the organizational level, managers are not only task allocators but also role models responsible for cultivating responsible collaboration among employees. Through team interactions, the incorporation of ethics-related content in all processes, active dialogue, reflective learning, and participatory methodologies such as co-design workshops and service learning, managers can help employees internalize values such as safety, accountability, fairness, and respect (Edmondson, 2019). Additionally, organization-wide initiatives, such as employee-led safety forums, mentorship programmes, codes of conduct, and institutional ceremonies that celebrate ethical AI use are platforms through which human-machine redesign can be institutionalized (Lickona, 2021). Managers are central to this effort, as their redesign choices, personal conduct, and engagement with employees significantly influence how workers perceive and practice safe behavior (Narvaez & Bock, 2022). A redesign-driven approach thus holds promise as a transformative tool to reverse the culture of fear and rebuild trust. 

However, the reality in many Nigerian organizations stands in sharp contrast to these ideals. The absence of a well-articulated and enforced human-machine redesign framework means that safety development is often sidelined in favour of automation speed and productivity-oriented practices. Most managers are not trained in ethical AI redesign and may themselves model unethical behaviour through excessive surveillance, exploitation of data, or manipulation of performance outcomes. As Adewale and Akinyemi (2023) observed, the normalization of AI-related issues within work spaces creates an environment where neither managers nor employees are held accountable for safety failings. Moreover, institutional structures often prioritize rankings, funding, or political patronage over ethical leadership, leaving minimal room for psychological safety to thrive (Okolie, 2024). Even when organizational mottos claim to promote innovation and well-being, there is little evidence of structured programming to develop employees’ emotional capacities. This disconnection between rhetoric and practice highlights the urgent need for institutional reforms and staff reorientation toward redesign-driven work. Without deliberate strategies to embed safety reasoning and ethical awareness into the work culture, organizations risk producing individuals who perpetuate the very disengagement the system seeks to combat. This reality, if left unaddressed, threatens to institutionalize fear and undermine national development goals at their very foundation. It is against this backdrop that this study investigated the relationship between human-machine workforce redesign (managers’ AI job redesign methods and managers’ attitudes and level of engagement) and employees’ psychological safety in organizations in the AI era. 

Statement of the Problem 

Employees in ideal organizational settings are expected to uphold values such as psychological safety, adaptability, trust, and accountability. These moral virtues form the bedrock of a responsible workforce and are crucial for national development. Particularly in Nigerian organizations, employees should demonstrate safe attitudes that reflect strong internalized values acquired through formal and informal work experiences. This expectation is further reinforced by the long-standing traditional claim that organizational success is based on “competence and character,” emphasizing the importance of both technical skill and moral uprightness. However, there is a growing likelihood that many employees in these institutions are straying from these ideals, with increasing reports of unethical behaviours such as resistance to AI, fear of surveillance, low engagement, entrenched avoidance of feedback, and blatant disregard for collaborative processes. This disturbing shift signals a potential erosion of the very safety foundations upon which the workforce system was built, and it raises serious concerns about the effectiveness of human-machine redesign in shaping employees’ well-being.

While human-machine workforce redesign is designed to instill safety reasoning, adaptability, and socially responsible decision-making, the disconnect between redesign intentions and employee behaviour is becoming alarmingly evident. Ideally, through participatory redesign strategies, reflective team discussions, and real-life AI case studies, redesign should equip employees to embrace AI and contribute meaningfully to organizational success. However, in reality, the probability is high that the implementation of such redesign is either weak, inconsistent, or merely symbolic. Despite the recurring assertion that Nigerian organizations develop workers with both competence and character, there is limited evidence to show that organizations systematically and deliberately integrate safety into job redesign and assessment practices. As a result, the transformative potential of human-machine redesign in developing resilient workers is increasingly being undermined. 

Even more troubling is the growing influence of a “hidden curriculum” (a set of informal, often unethical AI practices embedded within organizational culture) which appears to contradict directly and even undermine formal redesign and instruction. Mounting evidence suggests that some managers demand acceptance of opaque algorithms, surveillance tools, or performance metrics (often unwritten). At the same time, certain administrators are involved in acts of data nepotism, favoritism, and outright manipulation. In some cases, promotion into critical or high-demand roles is allegedly influenced by algorithmic bias. Employees without influential networks or the emotional resilience to navigate such systems are frequently sidelined. The question arises: What kind of safety foundation is being laid when organizations meant to build trust actively reward distrustful conduct? These unspoken yet widely practiced norms subtly, but powerfully, condition employees to believe that low psychological safety is not only permissible but profitable. Such practices send conflicting safety signals, thereby increasing the probability that employees internalize a culture of fear, exploitation, and manipulation. In this context, human-machine redesign is stripped of its legitimacy and moral authority, and the long-standing institutional claim of producing workers grounded in both competence and character becomes deeply questionable. These alarming realities make it imperative to empirically investigate the relationship between human-machine workforce redesign and employees’ psychological safety in organizations in the AI era. 

Purpose of the Study 

The purpose of this study was to assess the relationship between human-machine workforce redesign and employees’ psychological safety in organizations in the AI era. Specifically, the study sought to: 

  1. Examine the relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era. 
  2. Assess the relationship between managers’ attitude and level of engagement and employees’ psychological safety in organizations in the AI era. 

Research Questions 

The following research questions guided the study: 

  1. What is the relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era? 
  2. What is the relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era? 

Hypotheses 

The following null hypotheses were formulated and tested at the 0.05 level of significance: 

  1. There is no significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era. 
  2. There is no significant relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era. 

Research Methodology 

The study, conducted in organizations in Nigeria, used a correlational survey research design. The population included 72,621 employees in mid- and senior-level positions across 12 selected organizations. These employees were specifically chosen because, at this stage in their careers, they would have had enough interaction with managers to evaluate human-machine workforce redesign effectively. A sample size of 382 employees was determined using both Krejcie and Morgan’s (1970) sample size table and Cochran’s formula; for a finite population of 72,621, both approaches yield approximately 382. This confirms that the chosen sample is statistically adequate to represent the target population at a 95% confidence level with a 5% margin of error. A multistage sampling procedure (including purposive, proportionate stratified, and simple random sampling techniques) was used.
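The sample-size calculation described above can be sketched as follows. This is a minimal illustration of Cochran’s formula with the finite-population correction, using the stated 95% confidence level (z = 1.96) and 5% margin of error; p = 0.5 is the standard maximum-variability assumption, which the paper does not state explicitly.

```python
import math  # not strictly needed here, but available for ceil-based rounding policies

def cochran_sample_size(population: int, z: float = 1.96,
                        p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample size with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)       # infinite-population estimate (384.16)
    n = n0 / (1 + (n0 - 1) / population)         # finite-population correction
    return round(n)                              # rounding to nearest, matching the reported 382

print(cochran_sample_size(72_621))  # -> 382
```

The correction matters little at this population size: the uncorrected estimate is 384.16, and the finite-population adjustment for N = 72,621 brings it down to roughly 382.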

A researcher-constructed questionnaire titled “Human-Machine Workforce Redesign Questionnaire (HMWRQ) and Employees’ Psychological Safety Questionnaire (EPSQ)” was used for data collection. The instrument was divided into three sections: A, B, and C. Section A included items on respondents’ personal data. Section B was split into two parts: Part I (Items 1–5) focused on managers’ AI job redesign methods, while Part II (Items 6–10) addressed managers’ attitudes and engagement levels. Section C consisted of Items 11–20, which examined employees’ psychological safety in organizations in the AI era. The instrument was validated by three experts: two from the Department of Management (specializing in Human Resource Management) and one from the Department of Psychology, all within the Faculty of Management Sciences at the University of Lagos. Subsequently, the instrument was tested for reliability using Cronbach’s Alpha, yielding coefficients of 0.78 and 0.84, both considered sufficiently high for the instrument to be reliable for use in the study. 
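The Cronbach’s Alpha reliability estimate mentioned above can be computed with a short sketch. The data here are randomly generated for illustration only (they are not the study’s responses), and the function follows the standard alpha formula over a respondents-by-items score matrix.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative 5-item scale: responses share a common latent factor plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
scores = latent + rng.normal(scale=0.8, size=(100, 5))
print(round(cronbach_alpha(scores), 2))
```

With real questionnaire data, coefficients in the 0.78–0.84 range reported for the HMWRQ and EPSQ would generally be taken as acceptable internal consistency.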

The data collected were analyzed using Pearson Product-Moment Correlation (PPMC) to answer the research questions. The decision rule was that a calculated r between 0 and 0.25 indicates a very weak positive correlation; 0.25 to 0.50, a weak positive correlation; 0.50 to 0.75, a strong positive correlation; and 0.75 to 1.00, a very strong positive correlation. By definition, r always lies between -1 and +1 (-1 ≤ r ≤ +1). Simple linear regression was used to test the hypotheses; it was chosen to determine whether human-machine workforce redesign is a positive or negative determinant of employees’ psychological safety in Nigerian organizations. The decision rule was that if the p-value exceeded the set alpha level of 0.05, the null hypothesis would not be rejected; if the p-value was less than 0.05, the null hypothesis would be rejected.

Data Analysis and Interpretation 

The data were analyzed and interpreted in response to the research questions and hypotheses. 

Research Question 1: What is the relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era? 

Table 1:  Relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era 

Variables                              N      Mean       Std         (r)
Employees’ Psychological Safety        382    27.2984    10.46253    .988
Managers’ AI Job Redesign Methods      382    13.2906    5.48507

Table 1 above illustrates the relationship between managers’ AI job redesign methods and employees’ psychological safety in Nigerian organizations. The data showed that employees’ psychological safety had a mean score of 27.2984 and a standard deviation of 10.46253, while managers’ AI job redesign methods had a mean score of 13.2906 and a standard deviation of 5.48507. The correlation coefficient was .988, indicating that managers’ AI job redesign methods have a very strong positive relationship with employees’ psychological safety in Nigerian organizations.

Research Question 2: What is the relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era? 

Table 2: Relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era 

Variables                                       N      Mean       Std         (r)
Employees’ Psychological Safety                 382    27.2984    10.46253    .995
Managers’ Attitudes and Level of Engagement     382    13.3822    5.39276

Table 2 above shows the relationship between managers’ attitudes and levels of engagement and employees’ psychological safety in organizations in the AI era. The data revealed that employees’ psychological safety had a mean score of 27.2984 and a standard deviation of 10.46253, while managers’ attitudes and levels of engagement had a mean score of 13.3822 and a standard deviation of 5.39276. The correlation coefficient was .995, which implies that managers’ attitudes and levels of engagement have a very strong positive relationship with employees’ psychological safety in organizations in the AI era.

Hypothesis 1: There is no significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era. 

Table 3: Regression analysis of relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era

Predictors                           R       R²      df       F            β       (t)        P
Constant                             .988    .976    1, 380   15462.013            10.342     .000
Managers’ AI Job Redesign Methods                                          .988    124.346    .000

The result from Table 3 indicated a significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era [R=.988, R²=.976, F (1, 380) =15462.013; p<.05]. The result showed that managers’ AI job redesign methods accounted for 97.6% of the total variance in employees’ psychological safety. Therefore, 2.4% could be attributed to other variables not included in this study. The result also revealed that there is a significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era (β=.988, t=124.346, p<.05). 

Hypothesis 2: There is no significant relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era. 

Table 4: Regression analysis of the relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era

Predictors                                    R       R²      df       F            β       (t)        P
Constant                                      .995    .991    1, 380   39630.122            10.429     .000
Managers’ attitudes and level of engagement                                         .995    199.073    .000

The result from Table 4 indicated a significant relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era [R=.995, R²=.991, F (1, 380) =39630.122; p<.05]. The result showed that managers’ attitudes and level of engagement explained 99.1% of the total variance in employees’ psychological safety. Therefore, 0.9% could be due to other variables not included in the present study. The result also revealed that there is a significant relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era (β=.995, t=199.073, p<.05). 
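As an internal consistency check on the regression statistics reported above: in simple linear regression with one predictor, R² = r² and F = t². The figures in Tables 3 and 4, as printed, satisfy both identities up to rounding, which the following sketch verifies.

```python
import math

# Values transcribed from Tables 3 and 4 of this study
tables = {
    "Table 3": {"r": 0.988, "R2": 0.976, "F": 15462.013, "t": 124.346},
    "Table 4": {"r": 0.995, "R2": 0.991, "F": 39630.122, "t": 199.073},
}

for name, s in tables.items():
    # One-predictor regression identities, allowing for rounding in the report
    assert math.isclose(s["r"] ** 2, s["R2"], abs_tol=0.002), name
    assert math.isclose(s["t"] ** 2, s["F"], rel_tol=0.001), name
    print(f"{name}: consistent (R2 = r^2, F = t^2)")
```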

Summary of Major Findings 

There is a significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era. 

There is a significant relationship between managers’ attitudes and level of engagement and employees’ psychological safety in organizations in the AI era. 

Discussion of Findings 

The first finding of this study revealed that there is a significant relationship between managers’ AI job redesign methods and employees’ psychological safety in organizations in the AI era. This finding is supported by the fact that the way in which managers redesign jobs with AI, whether through active, transparent, and safety-driven approaches or via passive, disconnected, and ethically indifferent methods, plays a crucial role in shaping employees’ perspectives and attitudes, including their sense of safety and willingness to innovate. Supporting this finding, Edmondson (2019) found that employees exposed to interactive and ethically grounded redesign methods were more likely to develop high psychological safety, emphasizing the power of redesign in shaping emotional reasoning. Similarly, Lee and Kim (2022) reported that redesign methods that incorporated real-life ethical dilemmas and employee involvement significantly reduced fear and resistance to AI. However, Brynjolfsson and McAfee (2024) presented a contrasting view, arguing that employees’ psychological safety is more heavily influenced by societal norms and peer behavior than by managerial redesign alone, suggesting that while managers’ methods are important, they may be insufficient on their own to counter deeply embedded cultural acceptance of AI-related fear. This divergence underscores the need for a holistic approach that combines redesign reform with broader societal reorientation.

The second finding of the study indicated that there is a significant relationship between managers’ attitudes and levels of engagement and employees’ psychological safety in Nigerian organizations. This finding is supported because managers who display positive attitudes and actively engage with their employees are more likely to model ethical behaviors and foster a culture of safety. In contrast, disengaged or indifferent managers may unintentionally encourage apathy and acceptance of unsafe practices among employees. This aligns with the findings of Abdulrahman and Eze (2023), who suggest that managers demonstrating integrity, enthusiasm, and active involvement in employee learning significantly influence employees’ safety awareness and rejection of distrustful behaviors. Similarly, Ojo and Bello (2024) found that employees who felt emotionally and intellectually connected to their managers reported lower tendencies to experience fear or disengage, highlighting the influence of positive role modeling and engagement. However, Chukwuemeka (2022) argued that despite managers’ efforts, employees’ psychological safety is largely shaped by external factors such as economic uncertainty, media influence, and family background, suggesting that although manager engagement is valuable, it may have limited impact without broader societal change. Abu (2023) also reported that some managers pressure employees to accept surveillance tools and even manipulate AI metrics, fostering the belief among employees that unsafe means can be used to achieve goals. This combination of agreements and disagreements illustrates the complex interaction between institutional influences and societal context in shaping employees’ emotional orientations. 

Conclusion 

The study concluded that Nigeria’s organizational system has great potential to shape a generation of resilient, ethical, and visionary workers who can drive national development and uphold societal values in the AI era. This promising future, however, is seriously threatened by the increasing exposure of employees, the leaders of tomorrow, to poor human-machine redesign within the organizational environment. The situation is both dangerous and deeply concerning: continued tolerance of unethical AI practices, such as poor job redesign, managerial apathy, and coercive surveillance, gradually creates a culture in which low psychological safety becomes normalized and even justified. Such normalization not only distorts employees’ emotional compass but also primes them to carry these distrustful tendencies into their professional lives, further entrenching systemic disengagement in society. If urgent and strategic action is not taken to address these issues, organizations meant to foster innovation and intellectual growth may instead become breeding grounds for fear and moral decline, ultimately undermining the credibility of the workforce and the nation’s ability to produce competent, trustworthy professionals.

Recommendations 

Based on the findings, the study makes the following recommendations:

Organizational management should organize regular capacity-building workshops on ethical AI integration, employee-centered redesign methods, and the incorporation of psychological safety themes into job redesign processes. This will equip managers to model safety and foster critical thinking among employees.

Each organization should establish and enforce a standardized code of ethical conduct governing managers’ behavior in AI interactions, including transparent performance metrics, a ban on coercive surveillance, and ethical engagement with employees. Violations should be reported to, and penalized by, an independent safety and integrity committee.

Managers should use interactive and participatory redesign methods, such as debates, group discussions, and problem-solving activities, that encourage employees to confront and reject unsafe AI practices. Organizations should also track and assess managers’ engagement levels through employee feedback systems.

The organizational administration should institute awards and recognition for managers who consistently demonstrate ethical redesign, high employee engagement, and integrity, fostering a culture in which positive practices are celebrated and modeled.

References

  1. Acemoglu, D., & Restrepo, P. (2023). The labor market effects of AI. Journal of Economic Perspectives, 37(1), 45-68. 
  2. Adewale, S. A., & Akinyemi, K. O. (2023). AI integration in Nigerian organizations: Implications for employee well-being. African Journal of Management Research, 15(2), 55-67. 
  3. African Union. (2024). AU AI ethics and workforce report. AU Press. 
  4. Brynjolfsson, E., & McAfee, A. (2024). The second machine age revisited: AI and the future of work. MIT Press. 
  5. Ede, P. O., & Imhonopi, D. (2023). Brain drain and the collapse of organizational excellence in Nigerian firms: An AI perspective. Journal of Management and Policy Studies, 14(1), 12-25. 
  6. Edmondson, A. C. (2019). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley. 
  7. International Labour Organization. (2024). AI and the future of work in Africa. ILO Publications. 
  8. Kellogg, K. C., Valentine, M. A., & Christin, A. (2023). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 17(1), 1-45. 
  9. Lee, M. K., & Kim, J. (2022). Ethical AI redesign and psychological safety: Evidence from tech firms. Journal of Business Ethics, 178(3), 567-589. 
  10. Lickona, T. (2021). Educating for character: How our organizations can teach respect and responsibility (2nd ed.). Bantam Books. 
  11. Narvaez, D., & Bock, T. (2022). Developing ethical skills: A guide to moral workplaces. Journal of Business and Psychology, 37(1), 1-17. 
  12. Nigerian Bureau of Statistics. (2023). AI adoption and workforce report. NBS Publications. 
  13. Ochefu, Y. A., & Agabi, O. G. (2022). Institutional AI challenges in Nigerian organizations: Reforms and prospects. Nigerian Journal of Management Studies, 18(3), 78-93. 
  14. Okolie, U. C. (2024). AI and the crisis of trust in Nigerian leadership: Rethinking redesign as a transformative tool. Journal of Contemporary Management Problems, 7(4), 101-116. 
  15. World Economic Forum. (2023). The future of jobs report 2023. https://www.weforum.org/reports/the-future-of-jobs-report-2023 