As Mobile World Congress draws closer, how will AI impact the development of the smartphone? AI’s transformative impact will be profound, from cutting-edge camera enhancements to predictive text suggestions.
In the rapid evolution of technology, the intersection of Artificial Intelligence (AI) and mobile phones has become a focal point, heralding a new era in the development of these indispensable devices.
The journey of mobile phones, from their humble beginnings as communication tools to multifaceted smart devices, has been nothing short of revolutionary. The impact of AI on mobile devices – none more so than the phone – promises enhanced functionality and a complete paradigm shift in how we interact with our handheld companions.
“Imagine a phone that continually authenticates you, learning your physical and behavioural idiosyncrasies,” Patrick Smith, CEO and Founder of Zally, tells Silicon UK. “This leads to fewer frustrating logins and a seamless, more secure experience. It’s about transforming your device from simply being smart to being intuitive, recognising you in a manner that transcends superficial metrics.”
One of the most notable impacts of AI on mobile phone development is the creation of highly personalised user experiences. As AI algorithms become more sophisticated, they can learn user behaviours, preferences, and habits, tailoring the device’s functionalities accordingly. From predicting the apps a user might open at a specific time to adjusting display settings based on individual preferences, AI transforms the mobile phone into an intuitive companion that anticipates and adapts to the user’s needs.
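How might a phone predict which app you'll open next? One simple approach is to count past launches by time of day. The sketch below is a deliberately minimal, hypothetical illustration of that idea (real on-device models use far richer signals such as location, motion and recent activity); the class and log data are invented for the example.

```python
from collections import Counter, defaultdict

class AppPredictor:
    """Toy on-device model: predicts the app a user is most
    likely to open at a given hour, from a log of past launches."""

    def __init__(self):
        # For each hour of the day, count how often each app was opened.
        self.by_hour = defaultdict(Counter)

    def record(self, hour, app):
        self.by_hour[hour][app] += 1

    def predict(self, hour):
        counts = self.by_hour.get(hour)
        if not counts:
            return None  # no history for this hour yet
        return counts.most_common(1)[0][0]

# Hypothetical usage log: (hour of day, app launched)
log = [(8, "news"), (8, "news"), (8, "mail"), (13, "maps"), (22, "video")]
predictor = AppPredictor()
for hour, app in log:
    predictor.record(hour, app)

print(predictor.predict(8))  # → news
```

Because all of the data and computation stay on the handset, even a crude model like this avoids sending behavioural data to the cloud, which is one reason on-device AI is attractive for personalisation.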
AI’s influence extends beyond personalisation; it significantly enhances smartphones’ overall performance and efficiency. Machine learning algorithms enable devices to optimise resource allocation, ensuring smoother multitasking and improved power management. This translates into faster processing speeds, extended battery life, and a seamless user experience. As AI algorithms evolve, smartphones will continue to push the boundaries of performance, providing users with devices that effortlessly handle complex tasks.
The Samsung Galaxy S24, expected to launch later this month, is already being dubbed the ‘AI phone’, while Google is vying for the AI throne with its own Pixel 8, launched last October. This year will also see the fruits of the partnership between Samsung and Meta, with Meta’s Llama 2 large language model running on Samsung devices. The practical upshot for users should be powerful personal assistants that make our phones even more tailored to our specific needs.
In a press release, TM Roh, President and Head of Mobile eXperience Business at Samsung Electronics, said: “The launch of the Galaxy S24 series demonstrates our initial step toward a new era of AI phones that go beyond the current smartphone. Designed to be an essential part of our daily lives, Galaxy AI will permanently change the way people interact with the world. We can’t wait to see how our users enhance and empower their everyday lives with Galaxy AI to open up endless possibilities.”
However, the iPhone X, released six years ago, already had elements of AI built into its A11 Bionic chip. And, of course, the new iPhone 15 Pro has the A17 Pro chip, which includes much of the powerful on-device processing we would call AI today.
Power in your pocket
AI will impact many critical services and aspects of the smartphone as we know it today, improving processing, graphics, and cameras, for example. Indeed, ARM – whose chipsets power over 2.5 billion consumer devices – states: “Levels of realism and immersion in graphics are being profoundly influenced by the rise of AI. Machine learning (ML), which enables AI-based systems to perform tasks based on data-driven learning and decision making, is also crucial in the ongoing evolution of graphics and gameplay.”
“Smartphones, our most personal devices, are poised to leverage multi-modal generative AI models and combine on-device sensor data,” said Qualcomm Technologies’ Senior Vice President and General Manager of Mobile Handset, Chris Patrick. “Using different modalities, these AI assistants will enable natural engagement and process and generate text, voice, images and even videos, solely on-device. This will bring next-level user experience to the mainstream while addressing the escalating costs of cloud-based AI.”
Enhanced image recognition, scene detection, and computational photography techniques are at the forefront of this transformation. AI algorithms can now optimise real-time camera settings, intelligently identify and enhance subjects, and even simulate professional photography effects. The result is an unparalleled imaging experience that empowers users to capture stunning photos effortlessly.
As we transition into the era of 5G connectivity, AI plays a pivotal role in maximising the potential of this high-speed network. AI algorithms are adept at managing network resources, minimising latency, and optimising data transmission, ensuring a seamless and responsive user experience. The synergy between AI and 5G opens doors to innovative mobile phone applications such as augmented reality (AR) and virtual reality (VR), ushering in a new era of immersive and connected experiences.
Nadia Alramli, Vice President of Engineering, HubSpot, sees the digital assistant age being ushered in very soon: “We should expect a new generation of digital assistants powered by the latest generative AI tech. Soon, I expect Apple, Google and Amazon to iterate here – their digital assistants will be able to understand and remember your preferences from past commands and recommend actions for you to take next instead of only reacting to specific instructions.”
Alramli continued: “Imagine you’re someone who usually orders groceries on Fridays. Your assistant won’t just wait for a command; it’ll remind you to place your order, or better yet, suggest items you frequently buy. It’s like having a personal assistant who knows your routine and acts on it without needing to be told.”
Zally’s Patrick Smith also points to voice as a game-changer in the AI mobile space: “While it’s still a question whether these advancements will fully materialise in 2024, the trend is clear. According to Intel’s IDC study, it’s predicted that over 75% of enterprise applications will use AI for speech recognition by 2025. This signifies a massive leap in how we interact with our phones. We’re moving towards a future where our devices understand us more like a human would, making interactions more natural, intuitive, and efficient.”
Coupled with advanced speech recognition will be enhanced AI-enabled search, as Viktor Qvarfordt, Principal Engineer at Sana, outlined to Silicon UK: “Intelligent knowledge retrieval systems will replace search. We are already seeing how intelligent knowledge retrieval systems can enhance the way humans access and utilise information. For example, retrieval-augmented generation and neural search engines – trained on particular pools of knowledge – will unlock new levels of knowledge depth, quality and consistency, providing more factual answers from the right verifiable sources. These systems can also be customised to answer and behave however you want them to.”
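The retrieval-augmented generation Qvarfordt describes has a simple core: fetch the most relevant passages from a trusted pool of documents, then hand them to a language model as context for its answer. The toy sketch below illustrates only that shape; it ranks documents by naive word overlap (a stand-in for the neural embeddings a real system would use) and builds the prompt rather than calling any actual model, and all names and documents are invented for the example.

```python
def retrieve(query, documents, k=1):
    """Rank documents by simple word overlap with the query
    (a crude stand-in for neural embedding similarity)."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, documents):
    """Retrieval-augmented generation in miniature: fetch the most
    relevant passage, then build the prompt a language model would
    complete. Here we return the prompt instead of calling a model."""
    context = " ".join(retrieve(query, documents))
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "The A17 Pro chip powers the iPhone 15 Pro.",
    "Llama 2 is an open large language model from Meta.",
]
print(answer("Which chip powers the iPhone 15 Pro?", docs))
```

Because the model is constrained to answer from the retrieved context, its responses can be traced back to verifiable sources, which is what gives these systems their advantage over free-form generation.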
With mobile phones and other devices embracing AI to improve performance and the user experience, businesses should pay close attention to security. “With growing AI app usage, employees are more likely to expose sensitive data like credentials, personal information, or intellectual property,” said Ray Canzanese, Threat Research Director, Netskope Threat Labs. “For safe enablement of AI apps, organisations must implement reasonable controls and advanced data security capabilities while focusing on how employees can use AI productively.”
James Malcolm, Head of Mobile Engineering for xDesign, also says: “As with most new technologies, the people creating or using it are in a constant battle with those trying to abuse it. People are used to scam calls and emails, but it will be on another level in the next few years. Calls will sound like people they know, images will be generated that look natural, and texts and emails will be sent that mimic how individuals type.
“The role of industry professionals will lie in making these situations clearer using countering software that can show when something is doctored or generated by AI. In this more ambiguous age of content and images, everyone will need to be more aware of everything they read and what they see now more than ever,” Malcolm concluded.
While Netskope expects the total number of users accessing AI apps in the enterprise to continue to rise moderately next year, an emerging population of power users is steadily growing their use of generative AI apps. With use currently growing exponentially, the top 25% of users can be expected to increase generative AI activity significantly in 2024 as this group finds new ways to integrate the technology into their daily lives.
And enterprises looking to enhance their services or apps will need a workforce with the required skills. In an era of chronic shortage of technical skills, the race is on to locate, hire and retain the talent that will be needed to make AI across the mobile space a reality.
Nikolaz Foucaud, Managing Director for Europe, Middle East, and Africa (EMEA) Enterprise at Coursera, commented: “The meteoric rise of AI is only set to continue in 2024—with demand for related skills accelerating alongside it. Businesses and institutions looking to capitalise on the promise of greater productivity and increased competitiveness will seek AI specialists with various skills.
“Yet rapid technological change can also lead to new risks. Cybersecurity has also seen significant growth this year, alongside audit skills, which are vital to keeping data protected, secure, and compliant. Meanwhile, the demand for leadership skills has become more prominent as institutions seek people capable of steering teams and businesses through these times of change and innovation.
“Together, these changing contexts only cement the need for institutions to identify and invest in the right skills for their learners and employees; those that both empower career success and fuel the sustainable growth of businesses and governments in 2024 and beyond.”
While the future of AI in mobile phone development is promising, it also presents challenges that need thoughtful consideration. Privacy concerns, ethical considerations, and the potential misuse of AI-generated data are among the issues that must be addressed. Striking a balance between innovation and responsible development is crucial to ensuring that AI-enhanced mobile phones benefit users without compromising their privacy and security.
The trajectory of AI’s impact on mobile phone development is dynamic and poised for continuous innovation. As AI algorithms become more sophisticated and accessible, we can expect a democratisation of advanced features, making intelligent technologies accessible to a broader demographic. The future holds the promise of mobile phones that seamlessly integrate with our daily lives, anticipate our needs, and empower us with unprecedented capabilities.
And the symbiotic relationship between AI and mobile device development is reshaping the technological landscape. From personalised experiences to enhanced performance, intelligent assistants, and revolutionary photography, the impact of AI on mobile phones is multifaceted. As we navigate the future, the convergence of AI and mobile technology will undoubtedly redefine how we perceive, interact with, and rely on our pocket-sized companions, unlocking a world of possibilities and pushing the boundaries of innovation.
Prof David Berman, Head of AI in Wireless Digital Services, Cambridge Consultants.
How do you foresee integrating artificial intelligence (AI) into mobile phones to enhance user experiences in 2024?
“There are many places AI is and will be integrated that people won’t notice as AI. Cameras with super-resolution; simple models that enhance battery life; AI to predict user needs and behaviours so that sophisticated features are easy to use. But the big elephant in the room is how LLMs will be integrated with phones, and DeepMind’s Gemini seems to be leading the pack in having multimodal LLMs with phones in mind.”
What advancements in natural language processing (NLP) can we expect in mobile AI, and how will it impact voice-activated smartphone interactions?
“Linking voice to large language models (LLMs) will make voice-activated phones even smarter. We’ve already developed voice-activated systems that interface with LLMs like ChatGPT through voice and reply with voice. This obvious approach will no doubt be brought in by all the main phone players, but it leaves Apple slightly exposed. They had real success with Siri, but now, in the wake of the recent LLM boom, Siri may need a face-lift.”
In what ways will AI-driven personalisation evolve on mobile devices, providing users with more tailored and context-aware experiences?
“We will move more and more to personalised bespoke AI models for everything. Our phone experience will be unique to us and phones will adapt to their use and the needs of individuals. AI is great at personalising. We believe that the new frontier in AI is edge learning where the AI model continuously learns after deployment. Phones provide a huge possibility here to adapt to users but also to adapt as the user needs change. For example, if people change location or job, and their habits alter, the phone should alter with you.”
Are there any specific advancements in AI-powered virtual assistants for mobile phones, and how will they evolve to be more integral to daily tasks?
“This next step will be achieved by interfacing with an LLM. There is an interesting business question here – what will people pay for? Given the cost of LLM use, this is going to have to be a paid-for service on top of the usual provision. There is huge potential here, but I doubt it will be worth it for everyone.”
How will AI contribute to optimising network connectivity and data transfer speeds on mobile devices, especially with the deployment of 5G technology?
“There will be huge use of AI behind the scenes in everything from AI powered components for things like channel estimation to AI use in the whole stack to gain efficiencies. Network optimisation using reinforcement learning is also around the corner. To make all this work well we again need edge AI implementations so that we get the needed speeds with hyper low latency. Cloud computing can’t work here.”
What challenges do you anticipate in integrating AI into mobile phones, and how are researchers and industry professionals addressing these challenges?
“Edge learning remains a challenge. The reliance on the cloud is something we must progressively challenge, migrating more of the computing onto the phone. Many people are working at getting effective AI that is cheaper on power consumption and more distributed, so we have fewer server farms. This is at odds with the LLM need, which is compute heavy. These are the key challenges. How do we exploit LLMs with all their value but move to energy-aware AI?”