Keepler: “Generative AI adoption means ensuring the proper review of generated content before delivery”


We interviewed Ramiro Manso, Head of Generative AI at Keepler Data Tech, who explained in detail the capabilities of generative AI as well as its potential impact on companies and the ethical challenges they may face.

Generative AI is the topic du jour. Since OpenAI launched ChatGPT, this new evolution of artificial intelligence has played a leading role in conversations across all fields. It has not remained confined to the business world; it has democratized access to AI, placing it within anyone’s reach through thousands of applications, each for a specific use.

Companies are overwhelmed by this wave, but are trying to ride it to capture the benefits it can bring to their business goals. Among the voices leading the conversation on Generative AI is Keepler Data Tech, a cloud data services company that helps its clients become data-driven organizations.

As data specialists, Keepler sees great potential in Generative AI, provided organizations learn how to manage and apply it. The speed at which the field is progressing, and the need to guide its clients through the adoption process, led the company to create a specific department and appoint Ramiro Manso as Head of Generative AI. Ramiro is one of the company’s Data Leads and has led the data department since Keepler began operations.

Now he has the challenge of incorporating Generative AI into the organization, creating the necessary internal capabilities and helping implement this type of solution for clients.

We spoke to him to begin to understand the transformation Generative AI is causing in organizations.

– For those who are not yet familiar with it, what is Generative AI? What is the difference with respect to traditional AI?

Generative AI is a (relatively) new discipline within the field of artificial intelligence that seeks to generate new observations (text, audio, images, etc.) based on a request or prompt. The main difference with respect to other disciplines is that in cases such as an ML classification model, the goal is to find a differentiating criterion or decision boundary to separate the observations into class A or B (or C, D, etc.).

Using an example based on images, this would be the difference between training a detection model with photos of cats so it can identify them in new images compared to training a GenAI model with photos of cats so it can invent new photos of cats.
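Ramiro’s cat example can be sketched with a toy one-dimensional feature standing in for real images: a discriminative model learns a boundary that separates two classes, while a generative model fits a class’s distribution and samples brand-new observations from it. The data, the midpoint threshold and the Gaussian fit below are purely illustrative assumptions, not anyone’s production method.

```python
import random
import statistics

random.seed(0)

# Toy 1-D feature values standing in for "photos of cats" and "photos
# of dogs" — invented numbers, purely for illustration.
cats = [random.gauss(2.0, 0.5) for _ in range(100)]
dogs = [random.gauss(5.0, 0.5) for _ in range(100)]

# Discriminative approach: learn a decision boundary between the classes.
boundary = (statistics.mean(cats) + statistics.mean(dogs)) / 2

def classify(x):
    """Assign a new observation to one side of the boundary."""
    return "cat" if x < boundary else "dog"

# Generative approach: model the distribution of one class and sample
# new, never-before-seen observations from it.
cat_mu, cat_sigma = statistics.mean(cats), statistics.stdev(cats)

def generate_cat():
    """Invent a new 'cat' observation by sampling the fitted distribution."""
    return random.gauss(cat_mu, cat_sigma)
```

The same training data serves both goals; what changes is whether the model learns to *separate* observations or to *produce* them.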

– To what point can generative AI revolutionize the processing of data and the generation of information?

We are at a point where we are assessing the real capacity of these models to process information, mainly natural language: creating templates, rewriting in different conversational tones, identifying elements in content… For me, one of the simplest but most effective examples is the ability to summarize information. I have tested the different approaches that have appeared over time: LDA for topic modeling, tf-idf for key phrases in a text, recurrent networks for summaries… This is the first time you can say that a real summary of the information is being created, the way a person would do it.
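One of the classical, pre-LLM techniques Ramiro mentions, tf-idf for surfacing key terms, fits in a few lines of plain Python. The toy documents and the naive whitespace tokenization here are assumptions for illustration only:

```python
import math

# Three tiny toy documents — invented content, purely illustrative.
docs = [
    "generative models create new text and images",
    "classification models separate observations into classes",
    "generative models can summarize long text",
]

def tfidf(term, doc_tokens, all_docs):
    """Score a term: frequent in this doc, rare across the corpus."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in all_docs if term in d.split())
    idf = math.log(len(all_docs) / df)
    return tf * idf

tokens = docs[0].split()
scores = {t: tfidf(t, tokens, docs) for t in set(tokens)}
```

Terms that appear in every document (like "models" here) score zero, while terms distinctive to one document rise to the top — useful for key phrases, but, as Ramiro notes, a far cry from an actual summary.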

The impact of applying this technology to the data processing field is enormous. And that is without talking about using it for programming code! Autogeneration of tests, comments and even code itself.

– There isn’t a day that goes by without new generative AI applications coming on to the market. Are we looking at a bubble in this field, or is its potential so real that it justifies such an extensive offering?

Probably both. It is a breakthrough technology that affects most areas. The more traditional (although I find it hard to label it “traditional” and would do so with so many clarifications that we would need a lot of time) machine learning discipline had a great impact on business areas in terms of predictive and prescriptive analysis. In this case we have a similar scenario, but with a much lower technical entry barrier and a much broader potential impact.

But all that glitters is not gold. We need to be fully aware that this type of model is especially capable when it comes to reshaping information, but this does not mean that it understands it. Working out how much is real capability and how much is illusion will be what separates those who master the discipline from the victims of the hype.

– What use cases are you mainly identifying in organizations that are likely to use this technology?

Facilitating access to information through new search techniques and summaries of content is probably one of the main use cases an organization might start with. Generating templates, whether for documents or for shaping ideas based on requirements, is the other most advisable use case to start with.

This does not mean that they are the only cases, but they are perhaps the most accessible. Somewhat more complex examples might focus on the personalization of content and/or client experiences, whether by adapting marketing strategies to the characteristics of each person or segment, or based on their purchasing preferences. Automated reports can also be generated on different data sources to facilitate decision-making for a company’s internal teams, allowing the information to be compared.

Not everything has to be written documentation. You could consider the fast prototyping of graphic designs, the improvement of computer vision models through image augmentation techniques based on the generation and completion of images, or even the generation of contextual music based on the content of a video.

All these are cases with multiple levels of complexity (and viability) that these technologies now make possible. One of the main challenges is choosing how best to apply them correctly within a company.

– In terms of knowledge and experience, how does a company like Keepler react to the appearance of a new technology like this?

At Keepler we react to the appearance of technologies like Generative AI through ongoing internal training and the constant identification of emerging technology. We carry out internal assessments to understand its potential and the skills we need to apply it correctly. Additionally, we analyze its possible applications to our business and closely watch the rapid changes in the frameworks and services that support them.

Two of our values as a company are constant improvement and the contribution of value, and these guide us to remain firm in our commitment to continue innovating and offering personalized, cutting-edge solutions as we move forward in the GenAI era.

– And how are you actually integrating generative AI into your services? Could you give us some details?

When it comes to integrating generative AI into our services, we have set ourselves a strategy based on answering the following key questions. First, we assess which aspects of our existing services could benefit from Generative AI in terms of precision, performance or efficiency. We carefully analyze how we can boost and improve the current solutions using Generative AI as an additional tool to optimize results.

In addition, we explore new opportunities that were not viable previously without this technology. We focus on identifying areas where the generation of content, personalization and data-based decision-making can achieve new levels of quality and effectiveness. 

– What should large organizations that are starting to resolve use cases with this technology bear in mind?

When starting to tackle use cases with technology like Generative AI, there are certain key aspects to bear in mind. Firstly, the associated frameworks and tools are constantly changing, are not necessarily production-ready, and require an appropriate assessment of their status, potential problems and limitations. Additionally, generative language models are not deterministic (nothing new for ML models, seeds aside), which implies the need to establish quality control mechanisms and guarantee an appropriate review of the generated content before it reaches users.
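The non-determinism Ramiro points to comes from the sampling step: the model produces a probability distribution over possible next tokens, and the output is drawn from it. A minimal sketch with a hypothetical three-token distribution (the logits below are invented for illustration) shows how temperature and a fixed seed control this:

```python
import math
import random

# Hypothetical next-token logits — invented values, not from any real model.
logits = {"cat": 2.0, "dog": 1.0, "fish": 0.1}

def sample(logits, temperature=1.0, seed=None):
    """Draw one token: softmax over temperature-scaled logits, then sample."""
    rng = random.Random(seed)
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    r, acc = rng.random(), 0.0
    for token, p in probs.items():
        acc += p
        if r < acc + 1e-12:
            return token
    return token  # guard against floating-point rounding

# Pinning the seed makes the draw reproducible; without it, repeated
# calls can return different tokens from the same distribution.
a = sample(logits, seed=42)
b = sample(logits, seed=42)
```

Lowering the temperature concentrates probability on the most likely token, which is one lever a quality-control layer can use; seeding is another, though neither removes the need to review what the model produces.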

Likewise, organizations must consider the risks associated with generative language models, such as the presence of bias and the generation of inappropriate content, which is why it is essential to implement solid AI ethics and governance policies and practices.

Finally, educating internal teams and clients on the capabilities and limitations of these models is key to avoiding the dissemination (internal or external) of erroneous or potentially damaging information. Or simply to managing expectations of the technology correctly, something anyone who has worked long enough in the field of AI will understand.

– Another hot topic is that of the ethical challenges we face with the adoption of generative AI, such as misuse of technology, the creation of fakes, the elimination of jobs, etc. What is the reality in this respect in the short term?

This is a complicated question, especially with such a recent technology, for which we do not yet have a clear idea of the real capabilities and limitations, or how these may evolve in the future.

In the short term, the clearest challenges are the misuse of the technology and the generation of false content. There is a risk of generative models being used to create deceptive or manipulated information (the different types of fakes you mentioned), which can erode public trust and fuel disinformation and the spread of fake news. Additionally, the replacement of certain jobs with generative automation is also a current concern, whether through excessive trust in the capabilities of this technology or because the priority is quantity rather than quality.

– And in the long term?

In the long term, it is likely that these ethical challenges will become more complex as the technology continues to progress. As generative models become more sophisticated, the current concerns in relation to matters like privacy may be accentuated by the technological advances. We could see something similar in processes in which decision-making has become too automated through generative systems. It is essential to tackle these challenges in a proactive manner, implementing appropriate ethical frameworks and regulations to safeguard the responsible, safe use of generative AI.