Artificial intelligence (AI) is rapidly changing the world, and it is essential that academics are involved in the discussion of its development and use. However, there are a number of reasons why academics who have not used AI ethically or who are not familiar with the technology should not be the only voices in this conversation.
First, academics who have plagiarized or otherwise used AI unethically cannot be trusted to discuss the technology fairly and objectively. They have already shown that they are willing to put their own interests ahead of others', and there is no reason to believe they would be any more ethical in their discussions of AI.

Second, academics who feel threatened by AI are likely to be biased in their discussions of the technology. They may focus on its potential negative impacts while downplaying its potential benefits, producing a distorted view of AI that could hinder its development and use.

Third, academics who are unfamiliar with AI technology, or even with the internet itself, are simply not qualified to discuss it. They may not understand the capabilities and limitations of AI, and they may be easily misled by false or misleading information. This could lead to uninformed and even dangerous discussions about AI.
"Why bother discussing AI when you haven't mastered the internet yet? Perhaps conquer the login screen and identify a rogue virus first. Baby steps, my friend, baby steps."~Prof. Dumbassholes, PhD.
Therefore, it is important to include a variety of perspectives in this conversation, including those of AI experts, policymakers, and the general public. This will help ensure that AI is developed and used in a way that benefits all of society. Beyond the reasons mentioned above, academics who are unfamiliar with AI can harm the discussion in other ways: they may spread misinformation about AI, fueling fear and distrust of the technology; promote unrealistic expectations about what AI can do, leading to disappointment and frustration; or fail to identify the potential risks of AI, leading to problems down the road.
It is important to remember that AI is a complex and powerful technology. It has the potential to do great good, but it also has the potential to be misused. It is essential that we have an informed and balanced discussion about AI, and that includes making sure that academics who are not familiar with the technology are not the only voices in the conversation.