
AI is not Your Friend: the Very Real Risks Posed by Generative AI

Top 5 Risks of Generative AI

Whether you are a student asking ChatGPT to do your homework or an international consulting firm writing a $440k government report, AI isn’t always your friend. Yes, that’s right: Deloitte had to refund part of a $440k government report because of AI-related errors. Hallucination is just one of the many risks of using AI. Humanity and AI can work together in harmony, but first we need to understand the risks AI poses to society.

Let’s have a look at why AI is not your friend and the top 5 risks of using it.

1. Loss of Human Connection

Human connection is one of the most important aspects of living a long and healthy life.

However, with the advent of AI, the risk of losing human connection has never been greater. The rise of increasingly sophisticated AI chatbots has led more and more people to build relationships with AI. Building these connections with chatbots instead of humans raises a host of issues, especially since chatbots often replace the need for human interaction.

“Yes, you are absolutely right!”

Does this ring a bell? It is the tell-tale sign of AI sycophancy.

Social media platforms and news websites use AI algorithms to prioritise content that aligns with users’ existing beliefs, opinions, and interactions, creating echo chambers. These echo chambers can diminish an individual's ability to empathise and build connections with others, and undermine meaningful discourse.
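To make that mechanic concrete, here is a minimal, purely illustrative sketch in Python. It is not any real platform's recommendation system, and every name and post in it is made up; it simply shows how a feed that scores content by overlap with a user's existing interests keeps pushing agreeable content to the top.

```python
# Toy illustration (not any platform's real algorithm): a feed ranker that
# scores posts by similarity to a user's existing interests, so content the
# user already agrees with keeps floating to the top -- the basic mechanic
# behind algorithmic echo chambers.

def similarity(user_interests: set[str], post_tags: set[str]) -> float:
    """Jaccard overlap between what the user already likes and a post's topics."""
    if not user_interests or not post_tags:
        return 0.0
    return len(user_interests & post_tags) / len(user_interests | post_tags)

def rank_feed(user_interests: set[str], posts: list[dict]) -> list[dict]:
    """Sort posts so the ones closest to existing beliefs appear first."""
    return sorted(posts, key=lambda p: similarity(user_interests, p["tags"]), reverse=True)

if __name__ == "__main__":
    user_interests = {"ai_sceptic", "privacy"}  # hypothetical user profile
    posts = [
        {"title": "AI will take your job", "tags": {"ai_sceptic"}},
        {"title": "A balanced look at AI benefits", "tags": {"ai_optimist", "research"}},
        {"title": "Smart glasses and surveillance", "tags": {"privacy", "ai_sceptic"}},
    ]
    for post in rank_feed(user_interests, posts):
        print(post["title"])
    # Dissenting or unfamiliar viewpoints sink to the bottom of the feed,
    # and each new click narrows the interest set further.
```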

AI and humanity can work together effectively

2. Privacy

Remember when you had to ask someone for permission to take their photo or even use their words? Those days may be coming to an end. The rise of AI-powered wearable technology blurs the lines of personal privacy. For example, Meta’s Ray-Ban smart glasses use cameras and microphones to record their surroundings, often without people realising they’re being recorded. Taking a photo of somebody and uploading it to your personal Instagram story is one thing, but using AI to surveil your surroundings is another. There is rising concern about what happens to the images and audio recordings these devices capture, and given Meta’s, let’s say, ‘scattered’ history with privacy, people may feel even more worried about breaches of privacy and potential misuse of personal information.

Meta Ray-Ban AI glasses might be great for usability, but with Meta’s track record on privacy, your data might not stay private.

3. Misuse of Data

One of the largest risks of generative AI is data privacy. The dawn of a new technology brings endless possibilities, but also endless risks. Sensitive information is collected and used to create and fine-tune AI and machine learning systems, and policymakers have a responsibility to ensure that personal data is collected, stored, and used correctly.

It is crucial to understand the privacy risks of AI, including how your data may be used to build an AI model. A significant privacy risk is the collection of data without consent. For example, LinkedIn recently received backlash after users discovered they had been automatically opted in to having their data used to train generative AI models. Losing autonomy over how your data is used poses a real threat to privacy rights.

Another major AI privacy risk is data leakage, which occurs when sensitive, private, or proprietary information becomes accessible or exposed through the training, deployment, or usage of AI systems.
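As a purely illustrative sketch of how leakage through training can happen, the toy Python “autocomplete” below memorises its training text, so a secret that slipped into the training data can be pulled back out verbatim by anyone who guesses the right prefix. This is not how any real AI vendor’s model works, and the secret, text, and function names are all invented for the example; real models are vastly more complex, but memorisation of training data is the same underlying concern.

```python
# Toy sketch of training-data leakage (illustrative only, not a real AI system):
# a tiny next-character "autocomplete" memorises its training text, so a secret
# that slipped into the training set can be regurgitated verbatim.
from collections import defaultdict

def train(text: str, context: int = 8) -> dict[str, str]:
    """Map each context window to the character that followed it in the training text."""
    model = defaultdict(str)
    for i in range(len(text) - context):
        model[text[i:i + context]] = text[i + context]
    return model

def complete(model: dict[str, str], prompt: str, length: int = 40, context: int = 8) -> str:
    """Extend the prompt one character at a time; prompt should be at least `context` chars."""
    out = prompt
    for _ in range(length):
        nxt = model.get(out[-context:], "")
        if not nxt:
            break
        out += nxt
    return out

if __name__ == "__main__":
    # A made-up secret accidentally included in the training corpus.
    training_text = "Meeting notes: the staging API key is SECRET-12345-DO-NOT-SHARE. Next item..."
    model = train(training_text)
    # Anyone who guesses the right prefix can pull the secret back out.
    print(complete(model, "API key "))
```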

Data can be easily misused.

4. Political Manipulation/Deepfakes

Deepfakes are synthetic media in which one person’s likeness is swapped for another’s, sometimes used for bringing back beloved movie characters. Whilst the technology can be used for good, not everybody has the same agenda. With just three seconds of audio, a very convincing deepfake of somebody’s voice can be created. A study conducted by McAfee found that over 70% of people could not tell the difference between a real and an AI-cloned voice. The unfortunate possibilities for misusing this technology grow every day.

KPMG identifies three major risks of deepfakes:

  • Fraudulent financial transactions
    • Cybercriminals could use deepfakes to impersonate senior executives during phone calls or video conferences, convincing others that they have the authority to obtain confidential information, or even persuading individuals to transfer significant funds.
  • Disinformation
    • Deepfake recordings can spread fraudulent, false, or defamatory information about individuals and organisations. This could damage stakeholder, customer, and wider public trust.
  • Enhanced social engineering attacks
    • Using deepfakes, bad actors can penetrate organisations by impersonating, say, a Chief Technology Officer (CTO) to persuade staff to grant access to a core technology system, steal confidential information, or plant malware.

Deepfakes can be used for good, like in The Mandalorian. But there are many unethical and problematic uses of deepfakes.

5. Over-dependence on AI in Education

Now that AI is so easily accessible, we must be careful not to become overly dependent on it. The full extent of AI dependence is yet to be realised. Virtual assistants play a noteworthy role in simplifying student tasks, but they can also pose risks.

Students are growing increasingly dependent on AI. This dependency is problematic because it can deter students from engaging in thorough research and forming their own insights, potentially diminishing their critical faculties. Easy access to AI tools can also breed complacency, reducing students’ creativity and innovation.

A study on ‘The effects of over-reliance on AI dialogue systems on students' cognitive abilities’ presents some interesting findings. The article states that ‘Over-reliance on AI can undermine students' capacity for independent critical thinking and problem-solving, as they may become too dependent on AI-generated information.’ Who would have thought? Having all the information in the world at your fingertips could have negative implications... The article continues: ‘When students are using AI to do all of their work rather than as a tool, they risk not developing essential skills, such as analysis of information, constructing logical arguments, and integrating diverse knowledge and skills, which are crucial for both academic and professional success.’

Technology is already in wide use in education, and AI could be next. We need to make sure we don’t overuse AI in education.

Don’t Lose Hope

AI is a wonderful tool, and humanity can work together with it to create amazing things. It just might not be the best answer every time. But whether it is selling you highly personalised ads or hallucinating reports, it is important to take caution when using it.

Post Details

Author: James Paterson

Updated: 10 Oct 2025
