In today’s world, technology grows faster than any of us can follow, and artificial intelligence is suddenly part of everyone’s life. Thanks to huge improvements in computing power and the explosion of big data, large language models have become so advanced that it’s basically impossible to avoid them. Everyone, even my grandma, has interacted with an LLM at least once, and she doesn’t even know what “LLM” stands for.
But with great power comes great responsibility. Yes, data is a kind of universal language. But it also carries real risks: unfairness, exclusion, and hidden biases.
I’m Maryam, an Iranian AI specialist working in an international consulting firm in Milan. My daily job is about mixing technology with business and helping companies understand how to use AI wisely. I moved to Italy around four years ago to study computer science, and somehow destiny decided I should stay here. So here I am, still hanging around, still learning, still building.
When I arrived in Italy, everything was new. I came with my own culture, studied in English, bought groceries in Italian, and lived on campus with international students from all over the world. Every single thing was different, and I had to adapt. And that’s exactly why I started noticing how culture affects the way we think, even the way we think about AI.
Because of my job, I always work with different AI models. And let’s be honest, most of them are built in Western countries, trained on Western data, and shaped by Western perspectives. So of course they struggle with anything outside that bubble.
Ask them something about a smaller ethnic group in Africa? Maybe they were never trained on it. Talk to a voice assistant in a non-English accent? Good luck: they pronounce Italian with a weird American twist. These small things made me realize a bigger truth: AI often doesn’t understand the cultures that didn’t build it.
And the bias doesn’t stop at culture; it hits gender too, in ways that are subtle but incredibly powerful. Ask an AI model to describe a “software developer,” and it mostly imagines a man. Ask it for an image of a nurse, and suddenly it’s always a woman. So a girl who dreams of becoming an engineer might notice that an AI model doesn’t automatically picture someone like her. And a boy who wants to be a nurse might feel like he’s choosing something unusual, because the AI keeps presenting it as a “female job.”
For most of history, women weren’t even allowed to study, work freely, or choose their own path. Many professions were legally or socially restricted to men. So of course the historical data shows “software developer = man,” because only men were permitted to do those jobs in the past. And roles like nursing appear more “female” in old data because, during wars, men were fighting and women were the ones caring for the wounded.
Again, not because of talent or ability, but because society decided who was allowed to do what. In some periods of history, even a curious, educated woman could be punished, silenced, or called a “witch” simply for thinking differently. That’s the kind of world this old data comes from. So, when AI models repeat those patterns, they’re not reflecting our present, they’re repeating a past where choice wasn’t equal.
Without even realizing it, these systems can shape imagination, confidence, and even identity, especially for younger generations who use AI every day. Kids use AI widely now too; it’s everywhere, so what AI “thinks” actually matters. When a model repeats biased patterns, it turns stereotypes into “facts.” And the problem isn’t that the model is evil; the problem is that it has learned from years and years of human data that already carries these biases. We need to educate people on how to use AI responsibly, but we also need to improve what we feed into these models.
This is why updating the data matters. This is why changing mindset matters. And this is why we need more women, from every background and culture, involved in building AI from the start, inside the rooms where these systems are built. Not afterwards, not as “correctors,” but from the beginning.
We need more unbiased data, and new data that actually includes everyone, every gender, every culture, every story. Because AI can only reflect the world it sees, and right now it sees only a small part of it. To balance these old biases, we need people who bring different histories, different languages, different ways of thinking. People who can look at a dataset and say, “Wait, something is missing here.” People who understand what it feels like to not be represented.
And it’s not only about who builds the technology, but also about who uses it. AI learns from us: the more people use it, the more perspectives it absorbs. So we need to make AI accessible to everyone, not just a privileged group. We need kids, adults, immigrants, seniors, women, minorities, all voices, interacting with it. That’s how we push it toward fairness.
But access alone is not enough. People need to be educated on how to use AI, how to question it, and how to guide it. Because the better humans understand how to ask, the better AI can learn how to answer. Healthy usage from diverse groups can help correct old patterns and teach the system new ones.
This isn’t just a technical challenge; it’s a human one. AI becomes fair when the people building it and the people using it are diverse enough to notice what others might overlook. The next generation of AI has to be built by all of us. Immigrant women, cultural minorities, people from places the old data never wrote about: we all have something essential to add. Our voices are not “nice to have.” They’re the missing pieces. And if we show up, use the tools, and shape them with our own hands, we can create AI that finally sees the whole world, not just the part that wrote the history books.
That’s the future I want to help build.
Our story matters.
Bring it.
Be visible.
Shape the systems.
Maryam Asgari, AI Specialist & Ambassador Donne 4.0















