Ethical Implications of AI Adoption in HRM: Balancing Automation with Human Values
Abstract
The adoption of artificial intelligence (AI) in human resource management (HRM) raises significant ethical concerns, including privacy, algorithmic bias, and accountability. This study evaluates secondary data sources such as scholarly books, reports, and case studies. Privacy concerns arise when AI systems process large volumes of personal data, which demands robust data protection and transparent data-handling practices. AI systems also risk amplifying biases embedded in historical data, undermining fairness in hiring and performance evaluation. Accountability is weakened because AI programmes are often opaque and difficult to interpret; organisations therefore need transparency and continuous oversight. Regulations such as the GDPR are essential for protecting individual rights. Because AI's effects extend to human rights and personal freedom, it is all the more important to deploy it in ways that support autonomy and prevent discrimination. Societies around the world address these ethical issues in markedly different ways, which suggests that ethical guidelines should be tailored to each country's context.