“This is a quickly evolving landscape, with the potential to leverage broader sets of structured and unstructured data sources through neural network machine learning fast coming into view,” explains Daniel Holness, Head of Data and Analytics, Lab49.
“For this to be realised in financial services, for example, confidence is critical. Results must be explainable, as the risks faced by financial firms are too high for there to be even the smallest element of doubt. Only where firms believe they are insulated from both monetary and compliance-related risks, such as sourcing incorrect data or false advisory recommendations, will they seek to implement these solutions.”
One of the primary benefits of using natural language processing (NLP) tools like ChatGPT for data analytics is the ability to process large amounts of unstructured data quickly. Traditional data analytics tools require data to be structured in a specific format to be analysed effectively. NLP tools are different: they can process unstructured data such as text and speech and extract valuable insights.
For example, imagine a company that wants to analyse customer feedback from social media. Traditionally, the company would have to hire human analysts to read and interpret each comment manually. However, with an NLP tool like ChatGPT, the company can quickly process all customer feedback, extract valuable insights, and make informed decisions.
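The pipeline described above can be sketched in a few lines. This is a minimal, self-contained illustration: in practice the classification step would call an LLM API such as ChatGPT's, but here a toy keyword heuristic stands in so the example is runnable; the keyword lists and comments are invented.

```python
# Toy sketch: classify unstructured customer comments by sentiment.
# In a real deployment the classify() step would call an LLM API;
# a simple keyword heuristic stands in here for illustration only.

POSITIVE = {"love", "great", "excellent", "fast", "helpful"}
NEGATIVE = {"broken", "slow", "terrible", "refund", "disappointed"}

def classify(comment: str) -> str:
    """Label a comment positive, negative, or neutral by keyword counts."""
    words = set(comment.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

comments = [
    "Love the new app, checkout is fast",
    "Delivery was slow and the box arrived broken",
    "It does what it says",
]

# Aggregate the labels into a summary a decision-maker could act on.
summary: dict[str, int] = {}
for c in comments:
    label = classify(c)
    summary[label] = summary.get(label, 0) + 1
```

The point is the shape of the workflow, not the heuristic: raw text goes in, a structured summary comes out, with no manual reading required.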
Another benefit of using natural language processors for data analytics is real-time data analysis. In today’s fast-paced business environment, companies must make decisions quickly to stay ahead of the competition. NLP tools can analyse data in real time, allowing companies to make informed decisions instantly.
Natural language processors can also enhance data analytics by providing valuable insights that may not be apparent through traditional analytics tools. For example, these tools can analyse language patterns and identify trends and sentiments that may not be immediately apparent.
Consider a marketing team that wants to analyse customer feedback to identify potential new product ideas. Using a natural language processor like ChatGPT, the team can quickly identify common themes and topics in customer feedback. The tool can also analyse sentiment, recognise positive and negative feedback, and provide insights into what customers seek in new products.
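The theme-identification step described above can be sketched as follows. A real deployment would ask an LLM to label topics; this stand-in counts recurring keywords, which shows the shape of the output such a step produces. The stop-word list and feedback entries are invented for illustration.

```python
# Toy sketch: surface common themes in customer feedback by counting
# recurring keywords. An LLM-based topic labeller would replace this
# counting step in practice.
from collections import Counter

STOPWORDS = {"the", "a", "is", "i", "it", "and", "to", "but", "too"}

def top_themes(feedback: list[str], n: int = 3) -> list[str]:
    """Return the n most frequently mentioned non-stop-words."""
    counts = Counter(
        word
        for entry in feedback
        for word in entry.lower().split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(n)]

feedback = [
    "battery life is too short",
    "great camera but poor battery",
    "camera quality is excellent",
]
themes = top_themes(feedback, 2)
```

Even this crude version surfaces "battery" and "camera" as the dominant topics, the kind of signal a product team would follow up on.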
Finally, natural language processors can enhance data analytics by enabling more natural human-machine interactions. Traditionally, data analytics tools required users to have a high level of technical expertise to use effectively. However, NLP tools like ChatGPT can enable more natural human-machine interactions, making it easier for non-technical users to analyse data.
The marketplace for machine analyses is massive: according to MarketsandMarkets, the natural language processing market is estimated to grow from $11.6 billion in 2020 to $35.1 billion by 2026. As a result, the number of vendors offering data analysis will mushroom. For businesses, this will mean performing due diligence before purchasing any services.
In addition, several areas of concern must be addressed, including biased output. Natural language processing models like ChatGPT are only as good as the data they are trained on. If the data used to train the model is biased, then the output generated by the model may also be biased. This can lead to unfair or discriminatory outcomes.
Data privacy is another concern. When businesses use natural language processors to analyse data, they must ensure their data is adequately secured and protected. This is especially important when dealing with sensitive or confidential data.
A third is a lack of transparency. Systems like ChatGPT can be very complex, and it can be challenging to understand how they arrive at their conclusions. This lack of transparency can make it difficult for businesses to identify and address any issues or errors in the output generated by the model.
In the future, NLP tools like ChatGPT will analyse data from various sources, including social media, customer reviews, and news articles. These tools will enable companies to extract insights and make informed decisions quickly. For example, a company could use ChatGPT to analyse customer feedback from social media and identify common themes and sentiments. This information could improve customer satisfaction and drive business growth.
Another area where NLP tools will be used in the future is predictive analytics. Predictive analytics involves using historical data to predict future events or trends. NLP tools can analyse text and speech data to identify patterns and trends that may not be immediately apparent. This information can then be used to make predictions about future events.
A company could use ChatGPT to analyse news articles related to its industry to identify trends and predict future market conditions. This information could then be used to make informed decisions about product development and marketing strategy.
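The trend signal described above can be sketched simply: count how often a topic appears in dated headlines and watch the count move over time. The headlines, dates, and search term below are invented for illustration; a real system would have an LLM extract topics rather than match a literal string.

```python
# Toy sketch: track how often a topic appears in dated headlines,
# the kind of trend signal used for predictive analytics.
# All data here is invented.
from collections import defaultdict

headlines = [
    ("2023-01", "Chip shortage eases as supply recovers"),
    ("2023-02", "Chip demand surges on AI boom"),
    ("2023-02", "New chip plant announced"),
]

def mentions_by_month(items: list[tuple[str, str]], term: str) -> dict[str, int]:
    """Count headlines mentioning `term`, grouped by month."""
    counts: dict[str, int] = defaultdict(int)
    for month, text in items:
        if term in text.lower():
            counts[month] += 1
    return dict(counts)

trend = mentions_by_month(headlines, "chip")
```

A rising count from one month to the next is the raw material for a forecast; the prediction itself would sit on top of a series like this one.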
NLP tools will also be used in the future for personalised customer experiences. Personalisation is becoming increasingly important in today’s market, and companies must deliver tailored experiences to their customers. NLP tools can analyse customer data and provide personalised recommendations and suggestions.
A retail company could use ChatGPT to analyse a customer’s previous purchases, search history, and social media activity to provide personalised product recommendations. This level of personalisation could help companies build stronger relationships with their customers and drive customer loyalty.
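One common way to produce such recommendations is co-occurrence: suggest items that other customers bought alongside items in this customer's history. The sketch below shows that idea in miniature; the customers and purchase data are invented, and a real system would blend in the search-history and social signals the article mentions.

```python
# Toy sketch of personalised recommendations via purchase co-occurrence:
# suggest items bought by customers whose baskets overlap with this one.
# Customer names and purchases are invented for illustration.
from collections import Counter

purchases = {
    "alice": {"laptop", "mouse"},
    "bob": {"laptop", "keyboard"},
    "carol": {"mouse", "monitor"},
}

def recommend(customer: str) -> list[str]:
    """Rank items owned by overlapping customers but not by this one."""
    owned = purchases[customer]
    scores: Counter[str] = Counter()
    for other, basket in purchases.items():
        if other == customer:
            continue
        if owned & basket:                  # shares at least one item
            scores.update(basket - owned)   # count items the customer lacks
    return [item for item, _ in scores.most_common()]

suggestions = recommend("alice")
```

Here "alice" shares a laptop with "bob" and a mouse with "carol", so the keyboard and monitor surface as candidates. The ranking logic stays the same as richer behavioural data is folded into the scores.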
Finally, systems like ChatGPT will be used for real-time data analysis, which involves analysing data as it is generated, allowing companies to make informed decisions instantly.
Daniel Holness, Head of Data and Analytics, Lab49 concluded: “In its current form, ChatGPT is being applied most successfully in vertical datasets for simple tasks. For example, drafting email responses to potential clients. Some of the largest names on Wall Street are tapping into its services, most recently with Goldman Sachs announcing its use case proof of concept for document classification and categorisation. Ultimately, explainable AI / ML that enhances the transparency of data sourcing, capturing and analysis will provide a better understanding of how conclusions have been made. We believe that this will enhance confidence in the technology, opening the door to the next phase of advanced AI development.”
Natural language processors like ChatGPT will play a crucial role in the future of data analysis. These tools will be used to analyse unstructured data, provide personalised customer experiences, and make real-time decisions based on insights extracted from data. As NLP technology continues to advance, these tools will become even more essential for data analysis in the future.
“Since the release of ChatGPT, there has been a significant surge in the mass adoption and public discourse of AI and Large Language Models (LLMs). As AI systems evolve, LLMs – such as ChatGPT – will become increasingly sophisticated and make our digital interactions more and more natural. LLMs could help businesses in many ways, from creating new products and services and unearthing fraud to providing better services to their customers. Companies can also use the technology to supercharge their support and helpdesk teams, giving employees time to focus on trickier conversations while leaving AI to handle the more day-to-day queries.
“However, it’s essential to acknowledge that we are still in the nascent stages of this technology, and there is a vast amount to be discovered regarding its impact on businesses.
“With all this excitement, companies must also be aware of the risks that come with such technologies so they can leverage them appropriately. Historically, AI has almost exclusively been used by the technologically proficient.
“Now, it is becoming available to those who have yet to be trained in AI, and it is an excellent opportunity for businesses to increase productivity and quality of service. However, this development has a more worrying aspect: you are putting very powerful tools in the hands of people who have not yet had the chance to get comfortable with the legal, governance and compliance aspects of using AI.
“As this AI is inserted into people’s day-to-day tasks and built into workflows, it has to be done with integrity and accuracy. Otherwise, unknowingly, people could be fed inaccurate and flawed insights by their AI tools yet still treat them as gospel. Instead, businesses should ensure they maintain strict compliance practices, rules and training processes at every stage of the AI implementation process.
“The higher quality the data that the AI is ingesting, and the narrower the remit of data it is given to pull from, the better and more accurate the results. If the data is inaccurate and incomplete, then there’s a significant chance that the responses from these LLMs will be incorrect or biased too.
“Additionally, advanced LLMs are still models – the answers they produce are as good or bad as the data the models were trained on. “Good or bad” in this context is true both in the analytic and moral context. For example, if the data the model was trained on captured opinions that discriminate against certain ethnicities, races, genders and other groups of people, this bias would find its way into the model’s answers and decisions. Businesses must be cognizant of this possibility and ensure that LLMs are sufficiently trained to self-regulate harmful speech or discriminatory decisions.
“Businesses should also know how such technologies could impact their Intellectual Property. If a business integrates technology such as ChatGPT into its tech stack, it could inadvertently expose its own Intellectual Property. What’s more, some of these open-source software providers state in their terms that they own the developments created in the software. So, while a business might think something it created is theirs, that may not be the case.
“Thinking about artificial intelligence (AI) as an analogy, it’s been said that AI is like a puppy. Everyone wants one, but once you get that puppy, you have to devote lots of time and attention to train it. Now, with ChatGPT and generative AI, replace puppy with a toddler. A toddler needs undivided attention, guidance, training and education, just like the new versions of AI available today.”
“Recent versions of LLMs perform very well on tasks that involve text classification, summarisation, translation, and generation. ChatGPT is used to make existing tasks more efficient, freeing up time to spend on other work areas. For example, for a marketer or small business owner, ChatGPT can be used to create a simple Q&A page on a certain topic on a business’s website. Then, in the legal industry specifically, these models could be used to search through law texts and prior cases that would otherwise require hours upon hours of investigation. ChatGPT can solve existing and new problems, but again: only if the data it’s fed is correct.
“With AI being integrated into more and more day-to-day workflows, it is critical that businesses implement robust data governance practices around any AI training data. Companies must ensure that the underlying data the models are being trained on is sufficient, targeted, and of the highest quality and that controls are in place to stop data misuse (Privacy & Compliance Controls).
“A lack of these practices could increase the risk of inaccurate responses, reducing the ability to make well-informed, responsible business decisions. Even with good training and robust data governance, it is still critical that businesses don’t blindly make decisions based on the responses from these models. Instead, businesses should undertake risk assessments when using AI to inform business decisions, understanding the impact of a “bad decision” and what checks and procedures should be in place to mitigate any risk.”
“ChatGPT from OpenAI is just one example of the new generation of Large Language Models. But it’s not the only one. Many technology companies are releasing their LLM models that are being trained on rapidly increasing corpora of data and contain staggering numbers of parameters. We at Dun & Bradstreet have been using LLMs for a while and are pursuing additional opportunities to leverage such technologies to serve our clients better.
“It’s important to remember that the overall advancements in this field all rely on the pre-training, using more data than we can imagine, combined with additional training that uses human feedback. So, this improvement loop must always be on. As a result, it will solve more problems, be more accurate, and interact more and more like a human would. And when trained right, it will get better over time.
“LLMs are unquestionably a powerful tool that, with time, will become even more beneficial for businesses. There is, however, also a consideration for businesses to make about when it is and isn’t necessary to use LLMs in a scenario and which one is appropriate for the business objective. The latest and most powerful model may not be the best for each use case. As powerful as these models are, the onus of thinking is still on people.
“While much about LLMs remains to be understood, the models are the new normal. Businesses that ignore these developments risk falling behind. But with the democratisation of this technology comes increased responsibilities which businesses need to consider. It’s a very exciting time to be in the data and analytics field, with the pace of change accelerating and new opportunities arising. It will be fascinating to see how this continues to develop and how businesses embrace it.”