
SHARING WITHOUT SHOWING: DATA CLEAN ROOMS ALLOW FOR UNPRECEDENTED COLLABORATION

Jennifer Belissent
26 July 2023

Imagine the potential for secure data collaboration. With the boundaries between different companies, organizations, and entire industries blurring, the use cases are endless. Organizations can perform joint data analysis and train machine-learning (ML) models while ensuring that confidential information will stay protected from their sharing partners. It’s all happening in the world of data clean rooms.

Pharmaceutical companies can identify the best hospitals for clinical trials with a look-alike analysis against patient records. Insurance companies can collaborate to identify fraudulent claims. Media outlets can offer premium placement to advertisers to ensure targeted messaging. Loyalty programs can deliver truly personalized services across hotels, airlines, and other services. Telecom operators can collaborate with location data to enrich those personalized services. Emergency and social services can collaborate to help those in need.

Yet in many cases, the relevant data is personal information, protected by privacy laws and bonds of trust. How can that data be shared?

The use cases for secure collaboration with data clean rooms are endless

Imagine the following scenario.

A crowd of spectators is watching a big game and the teams are tied. The tension mounts. The fans grow restless. He shoots. He scores! The roar of the crowd can be heard all the way down the neighborhood street. And all the consumer brands want to know who is watching and how to reach these audiences. Yet, these sports fans are watching the game in the privacy of their homes, and the network they’re watching on must legally protect their data.

How can these media outlets share their viewer data – or the insights from it – without violating data protection laws and the trust of their subscribers?

It turns out that a similar question was posed by an academic in the early 1980s. Professor Andrew Yao introduced the problem: Alice and Bob, both millionaires, want to know which of them is richer but neither wants to reveal his or her exact wealth. Through complex mathematical proofs, Yao’s Millionaires’ problem was solved, proving it is possible to share insights without showing the underlying data. Fortunately, modern methods do not require arduous manual calculations.

“Sharing without showing? You bet!”

Increased demand for data sharing

For potential advertisers or anyone who wants to collaborate with data, that’s great news. Data sharing and collaboration deliver business value. A recent Capgemini study, Data sharing masters, found that companies with collaborative data ecosystems reported better business outcomes including new revenues, reduced costs, increased productivity, and greater customer satisfaction. And that promise has spurred new data ecosystem initiatives.

Companies have long used their own data to better understand their customers or to improve operations. Increasingly, data teams turn to external data sources to enrich their internal data and enhance analytics. Budgets for external data are significant and growing. In a recent survey conducted by external data platform Explorium, 22 percent of respondents said they were spending more than $500,000 on external data, with 13 percent saying they spent more than $1 million (up from 7 percent from a similar survey in 2021).

Customer data was the number one type of data acquisition: 52 percent purchased data on companies, followed by 44 percent purchasing demographic data. And the number of sources has grown as well: 44 percent of firms acquire external data from five or more providers. That’s up from only 9 percent the previous year. However, procuring external data is not without challenges, with regulatory constraints often topping the list. Concerns about GDPR or other privacy regulations loom large, and for good reason.

Introducing modern data clean rooms

Not long ago, data sharing meant copying and sending files to a partner. That practice certainly complicated data governance. Short of a manual audit, knowing who accessed the data and for what purpose was impossible. Now, using the principles demonstrated by Yao’s millionaires, two or more parties can derive insights from data without revealing the underlying information.

With a Snowflake Global Data Clean Room, each party controls its own data, allowing governed, controlled analytics by other parties. That is to say, each party specifies who can access the data and for what purpose. Let’s take a look at how it would work with Yao’s two millionaires, Alice and Bob.

First, each party creates a table with the data to be shared. Then one party, let’s say Bob, creates a table to store allowed statements. This is where the queries that Bob will allow another party to run against his data will be maintained. He then creates an access policy granting use of these statements and applies this access policy to his data table.

Next, Bob defines the exact statement or query he will allow, and inserts it into his “allowed statements” table. The statement includes the comparison of their wealth and the answers that will be returned in each case: “Bob is richer,” “Alice is richer,” or “Neither is richer.” Finally, he grants Alice permission to access and use his data for only this specific purpose. Alice then asks the question in the form of the specified query and receives the response: Bob is richer. Sorry, Alice.
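To make the flow concrete, here is a minimal Python sketch of the allowed-statements idea. This is a toy illustration, not Snowflake's actual API: Bob registers the one query Alice may run, and any other access to his data is refused.

```python
class CleanRoomTable:
    """Toy sketch of the allowed-statements pattern; not Snowflake's actual API."""

    def __init__(self, data):
        self._data = data      # stays private: never returned directly
        self._allowed = {}     # name -> approved query function

    def allow(self, name, query_fn):
        """The data owner registers the exact statements a partner may run."""
        self._allowed[name] = query_fn

    def run(self, name, *args):
        """A partner may only execute a pre-approved statement."""
        if name not in self._allowed:
            raise PermissionError(f"'{name}' is not an allowed statement")
        return self._allowed[name](self._data, *args)

# Bob shares a comparison, never the underlying figure.
bob = CleanRoomTable({"wealth": 3_000_000})
bob.allow(
    "who_is_richer",
    lambda data, other: "Bob is richer" if data["wealth"] > other
    else "Alice is richer" if data["wealth"] < other
    else "Neither is richer",
)

# Alice asks the one permitted question with her own wealth as input.
print(bob.run("who_is_richer", 2_000_000))  # prints: Bob is richer
```

Any query outside the allowed list raises an error, which is the whole point: the owner of the data defines who may ask what.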

Now imagine a more realistic business scenario where two companies want to know which customers they have in common – an overlap analysis. They would put the data in tables, establish the statements to compare their customer lists, and specify the information to be returned. Or one company might be interested in finding new prospects among a partner’s customers and would perform a look-alike analysis comparing customer attributes.
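A highly simplified Python sketch of such an overlap analysis follows. In a real clean room the platform enforces this inside governed infrastructure, and salted hashing alone is not a complete privacy guarantee; the emails and the salt here are invented for illustration.

```python
import hashlib

def pseudonymize(customers, salt):
    """Replace raw emails with salted hashes so raw identities are never exchanged."""
    return {hashlib.sha256((salt + email).encode()).hexdigest() for email in customers}

SHARED_SALT = "agreed-offline"  # both parties use the same pre-agreed salt

brand_customers = pseudonymize({"ana@mail.com", "bo@mail.com", "cy@mail.com"}, SHARED_SALT)
outlet_customers = pseudonymize({"bo@mail.com", "cy@mail.com", "di@mail.com"}, SHARED_SALT)

# The agreed statement returns only an aggregate count, never who matched.
overlap_count = len(brand_customers & outlet_customers)
print(overlap_count)  # prints: 2
```

Because both parties hash with the same salt, matching pseudonyms reveal that a customer is shared without revealing who that customer is.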

Data clean rooms transform the ad world

In a real use case, commonly seen in media and advertising these days, brands want to optimize their ad spend through better targeting to specific customers or personas – like the fans watching that exciting game. Media outlets want to offer premium placements by knowing exactly which programming the brand’s customers are watching. Comparing customers is a win-win. However, neither wants to show the underlying data. The clean room allows them to share without showing. In this case, as illustrated in the diagram, the returned information would include a customer count for each of the media outlet’s programs, but not specific customer data, in order to ensure compliance with privacy regulations. All queries of the data would be monitored and logged for audit purposes.

In the past, this scenario required data to be copied and moved across the AdTech value chain, from enrichment to activation to attribution. Not only were there the aforementioned governance concerns, but that data was also immediately stale. With Snowflake, live, near real-time data can be shared where it resides – no copies necessary. Data governance capabilities allow all parties to assign access and use policies that limit both who can query the data and exactly which queries are allowed. Additional capabilities add further security to the clean room. Data can be encrypted, anonymized, tokenized, or pseudonymized with built-in hashing functions, or obfuscated with data masking or by injecting differential privacy.
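To give a feel for those protections, here is a hedged Python sketch of masking, tokenization, and differential-privacy-style noise. These are toy functions for illustration only – Snowflake provides such capabilities built in, and real differential privacy requires careful noise calibration and budget accounting.

```python
import hashlib
import math
import random

def mask_email(email):
    """Partial masking: keep the domain, hide most of the local part."""
    local, domain = email.split("@")
    return local[0] + "***@" + domain

def tokenize(value, secret="clean-room-secret"):
    """Deterministic pseudonymization via keyed hashing (illustrative, not production crypto)."""
    return hashlib.sha256((secret + value).encode()).hexdigest()[:12]

def laplace_noise(scale=1.0):
    """Inverse-CDF Laplace sample, the classic differential-privacy noise mechanism."""
    u = max(random.random() - 0.5, -0.4999999)  # keep u strictly inside (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count, scale=1.0):
    """Release an aggregate with noise added, instead of the exact figure."""
    return true_count + round(laplace_noise(scale))

print(mask_email("ana@mail.com"))  # prints: a***@mail.com
```

The same query can then return masked fields or noisy aggregates, so the analytical signal survives while individual records stay protected.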

With today’s technology, data clean rooms allow parties across teams, companies, government agencies, and international organizations to collaborate and securely share sensitive or regulated data. As Thomas Edison said, “The value of an idea lies in the use of it.” The more data is used, the more value is created. Secure data collaboration accelerates value creation.

INNOVATION TAKEAWAYS

CROSS-INDUSTRY COLLABORATION AND DATA SHARING

A growing trend that’s here to stay.

DATA CLEAN ROOMS FACILITATE JOINT DATA ANALYSIS AND ML

While ensuring that confidential information will stay protected from sharing partners.

DATA ECOSYSTEMS AND SECURE DATA COLLABORATION

They accelerate value creation.

Interesting read?

Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 6, features 19 such fascinating articles, crafted by leading experts from Capgemini and key technology partners like Google, Starburst, Microsoft, Snowflake, and Databricks. Learn about generative AI, collaborative data ecosystems, and an exploration of how data and AI can enable the biodiversity of urban forests. Find all previous Waves here.

#DATACLEANROOMS

#DATACOLLABORATION

#DATAECONOMY

#DATAECOSYSTEMS

Jennifer Belissent

Ph.D., Principal Data Strategist, Snowflake
Jennifer Belissent joined Snowflake as Principal Data Strategist in 2021. Prior to joining Snowflake, Jennifer spent 12 years at Forrester Research as an internationally recognized expert in data sharing and the data economy, data leadership and literacy, and best practices in building world-class data organizations.


    We elevate your possible with Generative AI

    Mark Oost
    20 Jul 2023

While there is huge adoption of Generative AI across organizations and industries – our research reveals that over 95% of executives are engaged in Generative AI discussions in their boardrooms – we can clearly observe a shift in the way people perceive AI.

I have been working in the field for many years, and the unprecedented enthusiasm around Gen AI is impressive – 74% of executives believe the benefits of generative AI outweigh the associated risks. Beyond the positive feedback around it, there is a massive need for information, education, and guidance – especially for organizations to successfully and responsibly implement Generative AI across their data value chain, considering ethics, privacy, and security from the start.

However, when you leverage Generative AI in a secure and trusted environment, the opportunities are immense. From task and workflow optimization to content production, product innovation, and R&D, it is revolutionizing the way we create, interact, and collaborate, completely shifting the way organizations operate. What if you, as a CXO, could leverage Gen AI across your organization, in a safe, secure, and controlled manner, to fit your business reality?

    Creative and Generative AI

In Why consumers love generative AI, we explore the potential of generative AI, its reception by consumers, and their hopes for the technology.

    Building on your unique skills and knowledge

    By combining your company’s unique knowledge with foundational models to create tailored Gen AI solutions, you can deliver reliable outcomes at scale while addressing your specific business needs. Together, we can unlock this full potential and rewrite the boundaries of what’s achievable with our new offer Custom Generative AI for Enterprise.

    We help you elevate and focus on your excellence to unleash new possibilities. This is what our Custom Generative AI for Enterprise is all about: building on the unique skills and knowledge that make you, you. And it’s because we are tailoring from your data, business knowledge and context that results will create maximum impact and benefit your organization.
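The idea of combining a company's own knowledge with a foundational model can be sketched as a retrieval step placed in front of the model. Everything below is hypothetical and deliberately simplified – the keyword retrieval, the `llm` callable, and the documents are stand-ins for illustration, not our actual offer:

```python
def answer_with_context(question, documents, llm):
    """Ground a general model in company documents (toy keyword retrieval)."""
    words = set(question.lower().split())
    relevant = [doc for doc in documents if words & set(doc.lower().split())]
    prompt = "Context:\n" + "\n".join(relevant) + "\n\nQuestion: " + question
    return llm(prompt)

# Usage with a stand-in "model" that simply echoes its prompt:
docs = [
    "Our warranty covers two years of repairs.",
    "Offices are closed on public holidays.",
]
reply = answer_with_context("How long does the warranty last?", docs, llm=lambda p: p)
```

The point of the sketch: the model stays general-purpose, while the answers are shaped by your data, business knowledge, and context.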

Rather than sharing client examples, I prefer to illustrate this with our partnership with Dinara Kasko, an extraordinary creative talent and architect-designer. At the intersection of Gen AI and 3D printing, she is building on her skills to create unique art pieces in the shape of patisserie, unleashing her creative process with the power of technology.

We are collaborating with her on a bespoke solution to elevate her possible with Generative AI. Stay tuned for exciting updates!

    And if you are curious about the new possibilities of Generative AI and the rapid pace of its technological advancements, connect with me!

    Author

    Mark Oost

    Global Offer Leader, AI Analytics & Data Science
    Prior to joining Capgemini, Mark was the CTO of AI and Analytics at Sogeti Global, where he developed the AI portfolio and strategy. Before that, he worked as a Practice Lead for Data Science and AI at Sogeti Netherlands, where he started the Data Science team, and as a Lead Data Scientist at Teradata and Experian. Throughout his career, Mark has had the opportunity to work with clients from various markets around the world and has used AI, deep learning, and machine learning technologies to solve complex problems.


      Simplifying visual inspection

      Daniel Davenport & Satheesh Sebastian
      4 Jul 2023

      IBM Maximo application and iPhone deliver defect detection at a fraction of the cost.

      Visual inspection is an opportunity. It is estimated that the market will be worth more than $25 billion worldwide by 2027. Companies are excited about the prospect of cost-savings, error reduction, better accuracy, and higher quality output. Automated quality assurance is the utopia all manufacturers want to achieve. The pandemic helped the process along, as companies tried to reduce the amount of human intervention. But finding the right skills and experience to manage a visual-inspection system can be challenging and it is an expensive investment.

      Visual inspection is an application of computer vision with machine learning and artificial intelligence that helps humans make better products with fewer defects. It is much more efficient to find the defects during the process rather than fixing issues later or having to initiate recalls.

      We have been working with Professor John Ward and his team at the University of South Carolina on a new way to perform visual inspections. Rather than waiting months for a system to be designed and installed, Professor Ward has created an easier answer. This leverages the IBM Maximo iPhone application as an input device to create a visual model using photos. The iOS devices are installed on the production line.

      There is a beauty in the simplicity of using smartphones, but any visual-inspection system creates significant amounts of data. High-speed, 5G, millimeter wave, low-latency networks have to move high-resolution images to a system to analyze the information and communicate results in split seconds to stop a line in case of a defect. Having a system to collect information is only the first step. The power is in the data.
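The collect-analyze-stop loop described above can be sketched in a few lines of Python. This is a generic illustration, not the IBM Maximo application; the scoring model, the threshold, and the stop callback are all hypothetical:

```python
DEFECT_THRESHOLD = 0.8  # hypothetical confidence cutoff

def run_inspection(frames, score_defect, stop_line):
    """Score each frame as it arrives; halt the line on the first confident defect."""
    for index, frame in enumerate(frames):
        confidence = score_defect(frame)  # model returns defect probability
        if confidence >= DEFECT_THRESHOLD:
            stop_line(index, confidence)  # must happen within the latency budget
            return index                  # unit that triggered the stop
    return None                           # line ran clean

# Usage with a fake model: the third frame is defective.
hits = []
stopped_at = run_inspection(
    [0.1, 0.2, 0.95],            # stand-in "frames": precomputed scores
    score_defect=lambda f: f,
    stop_line=lambda i, c: hits.append(i),
)
```

In production, the hard part is not this loop but the network and compute budget around it: moving high-resolution images and returning a verdict before the next unit passes the camera.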

Our contribution is a comprehensive business-intelligence analytics suite that allows companies to drill down all the way to the image of each defect. Companies can see the return on investment (ROI) per defect caught, and aggregate up to the line, the plant, or a group of plants, so quality data can be collected and used more broadly. Instead of being a point solution, visual inspection becomes a company-wide node in the manufacturing machine.

      We also bring an agile way of working and a focus on collaboration, iterative development, and continuous learning. This enables the rapid delivery of business value so ideas get to market faster. In addition, we further amplify the impact of agile methods by providing specialized expertise in the latest technologies, including cloud, automation, artificial intelligence, and high-speed private networks. From idea to launch, the advanced technology delivery team working from the development center in Columbia, South Carolina, helps visionary companies turn ideas into business value at industry-leading speed.

      Dashboards make managing corrective actions easy. This can include identifying areas that need more training, since people are a huge variable in building products. The immediate feedback of real-time data makes operators more efficient. It also paves the way to a continuous improvement culture and mindset.

      Visual inspection is about more than just quality; it can also be applied to sustainability initiatives that seek to decrease waste and increase accountability through managing more variables at greater levels of detail. As manufacturers trend towards Industry 4.0 with smart plant-control towers, private 5G, and edge computing, this visual-inspection application can provide the rationalization for these foundational technologies to finally present a clear business case for adoption.

      Find out more about our partnership with IBM.

      This article was originally published via Capgemini United States.

      Meet our experts

      Daniel Davenport

      Client Partner, NA Automotive, Capgemini
      I am a passionate and experienced leader with extensive experience in the automotive industry. I collaborate with mobility providers to create the next generation of transportation products and services. This includes understanding their business models as well as their future trends so that we can be an active part of shaping these new markets together.

      Satheesh Sebastian


        Deep stupidity – or why stupid is more likely to destroy the world than smart AI

        Steve Jones
        7 Jun 2023

        The hype in AI is about whether a truly intelligent AI is an existential risk to society. Are we heading for Skynet or The Culture? What will the future bring?

        I’d argue that the larger and more realistic threat is from Deep Stupidity — the weaponization of Artificial General Intelligence to amplify misinformation and create distrust in society.

        Social media is the platform, AI is the weapon

One of the depressing things about the internet is how it has made conspiracy theories spread. Before, people were lone idiots, potentially subscribing to some bizarre magazine or conspiracy society in a given area; you really didn't have the ability to scale these things industrially. Social media and the Internet have increased the spread of such ideas. So while some AI folks talk about the existential threat of AGI, personally I'm much more concerned about Artificial General Stupidity.

So I thought it was worth looking at why it is much easier to build an AI that is a flat earther than one that is a high-school physics teacher, let alone a Stephen Hawking.

        It is easier being confidently wrong and not understanding

LLMs are confidently wrong, and that inability to actually understand is a great advantage when being a conspiracy theorist. Because when you understand stuff, conspiracy theories are dumb.

This means the training data set for our AI conspiracy theorist must be incomplete. What we need is not something that has access to a broad set of data, but something that has access to an incredibly small and specific set of data that repeats the same point over and over again.

Being a conspiracy theorist means denying evidence and ignoring contradictions; this is much easier to learn and code for than actually receiving new information that challenges your current model and altering it.

        Small data set for a single topic

So this is a massive advantage for LLMs when trying to create a conspiracy theorist. What we need is a limited set of data that repeats a given conclusion and continually lines up all evidence with that conclusion. We can apply this to lots of conspiracy theorists out there, for instance those folks who scream “false flag” after every single mass shooting incident in the US. In other words, we have a small set of data, possibly only a few hundred data points, that always results in the same conclusion. This means the association our custom-trained conspiracy theorist always knows is: “whatever the data, the answer is the conspiracy.”

Now we could get fancy and have a number of conspiracies, but given that very few of them are logically consistent with each other, let alone with reality, it is more effective to have a model per conspiracy and just switch between them. That a conspiracy theorist is inconsistent with what they've previously said isn't a problem, but we don't want inconsistencies between conspiracies on a single topic. What we need to add are the standard “rebuttals of reality” like “Water finds its level”, “We don't see the curve”, “NASA is fake” or “Spurs are a top Premier League club”.

        Hallucinations help

This small set of data really helps us take advantage of the largest flaw in LLMs: hallucinations, where the LLM just makes stuff up, either because it has no data on the topic or because the actual answer is rare, so the weightings bias it towards an invalid answer. This is where LLMs really can scale conspiracy theories: because the probabilities are already weighted towards the conspiracy theory (as that is the only “correct” answer within the model), any information we are provided with is recast within that context. So if someone tells us that the Greeks proved the earth was round in the 2nd century BC, our LLM can easily reply with a rebuttal that recasts that fact within the conspiracy.

        Context makes hallucinations doubly annoying

Our LLMs can go beyond the average conspiracy theorist thanks to context and hallucinations. While an average conspiracy person will only have a fixed set of talking points, and will potentially be constrained at some level by reality, the hallucinations and the context of the conversation enable our conspiracy LLM to keep building its conspiracy and adding elements to it. Because our LLM is unconstrained by reality and counterarguments, and can instead reframe any counterargument using a hallucination, it will be significantly more maddening. It will also create new justifications for the conspiracy that have never been put forward before. These will, of course, be total nonsense, but new total nonsense is manna from heaven to other conspiracy theorists.

        Reset and start again

The final piece that makes a conspiracy LLM much easier to create is that if the LLM goes truly bonkers and you need to reset... well, this is exactly what conspiracy theorists do today. So if our LLM is creating hallucinations that fail some form of basic test, or just every 20 responses, we can reset the conversation in a totally different direction. Making my generative LLM detect either frustration or an “ah ha” moment from the person it is annoying is a trivial task, and it enables my conspiracy bot to jump to another topic – and to do so in a much smoother way than most conspiracy theorists do today.
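The reset-and-switch behavior just described really is trivially easy to script. A toy Python sketch, with made-up cues and topics (a real system would use a classifier rather than keyword matching):

```python
RESET_EVERY = 20  # the cadence mentioned above; purely illustrative

# Hypothetical push-back cues, invented for illustration.
FRUSTRATION_CUES = ("ridiculous", "not true", "ah ha")

def should_reset(turn_count, user_message):
    """Reset on a fixed cadence, or when the user pushes back or has an 'ah ha' moment."""
    if turn_count > 0 and turn_count % RESET_EVERY == 0:
        return True
    message = user_message.lower()
    return any(cue in message for cue in FRUSTRATION_CUES)

TOPICS = ["flat earth", "moon landing", "birds aren't real"]

def next_topic(current):
    """Jump to a different, self-contained conspiracy model."""
    return TOPICS[(TOPICS.index(current) + 1) % len(TOPICS)]
```

That a few lines of control logic suffice is exactly the worry: the switching behavior needs no intelligence at all.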

        This is a much smoother transition for a flat earth conspiracy than you’ll hear on TikTok or YouTube.

        We have achieved AGS, that isn’t a good thing

        I’ve argued that the current generation of AIs aren’t close to genuinely passing the Turing test, let alone more modern tests. Turing set the bar of intelligence as the CEO of a Fortune 50 company, and made it have awareness of what it didn’t know.

        Some folks are concerned about a coming existential crisis where Artificial General Intelligence becomes a threat to humanity.

But for me that assumes the current generation of technologies is not a threat, and that intelligence is a greater threat than weaponized stupidity. Many people in AI are in fact arguing that GPT passes the Turing test – not because it replicates an intelligent human, but because it can either pass a multiple-choice or formulaic test, or convince people they are speaking to a not very bright person.

        We can today make an AI that is the equivalent of a conspiracy theorist, someone untethered to reality and disconnected from logic. This isn’t General Intelligence, but it is General Stupidity.

        Deep fakes and deep stupidity

        Where Deep Fakes can make us not trust sources, Deep Stupidity can amplify misinformation and constantly give it justification and explanation. Where Deep Fakes imitate a person or event, Deep Stupidity can imitate the response of the crowd to that event. Spinning up a million conspiracy theorists to amplify not just the Deep Fake but the creation of an alternative reality around it.

        The internet and particularly social media has proven a fertile ground for human created stupidity and conspiracy theories. Entire political movements and groups have been created based on internet created nonsense. These have succeeded in gaining significant mindshare without having the capacity to really generate either convincing material or convincing narratives.

        AIs today have the ability to change that.

        Stupidity and misinformation are today’s existential threat

We need to stop talking about the challenge with AI as arising only when it becomes “intelligent”, because it is already sufficiently stupid to have massive negative consequences for society. It is madness to think that companies, and especially governments, aren't looking at these technologies and how they can use them to achieve their ends – even if their ends are simply to sow chaos.

        Stupidity is the foundation for worrying about intelligence

        Worrying about an AI ‘waking up’ and threatening humanity is a philosophical problem, but addressing Artificial Stupidity would give us the framework to deal with that future challenge. Everything about controlling and managing AI in future can be mapped to controlling and avoiding AGS today.

When we talk about frameworks for Trusted AI and legislation on things like Ethical Data Sourcing, these are elements that apply to General Stupidity just as much as to intelligence. So we should stop worrying simply about some amorphous future threat and instead start worrying about how we avoid, detect, and control Artificial General Stupidity, because in doing that we lay the platform for controlling AI overall.

        This article first appeared on Medium.


        ChatGPT and I have trust issues

        Tijana Nikolic
        30 March 2023

Disclaimer: This blog was NOT written by ChatGPT, but by a group of human data scientists: Shahryar Masoumi, Wouter Zirkzee, Almira Pillay, Sven Hendrikx, and myself.

        Stable diffusion generated image with prompt = “an illustration of a human having trust issues with generative AI technology”

Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as DALL-E, GPT-3, and, notably, ChatGPT, which racked up one million users in one day. Recently, on March 14th, 2023, OpenAI released GPT-4, which caused quite a stir, with thousands of people lining up to try it.

        Generative AI can be used as a powerful resource to aid us in the most complex tasks. But like with any powerful innovation, there are some important questions to be asked… Can we really trust these AI models? How do we know if the data used in model training is representative, unbiased, and copyright safe? Are the safety constraints implemented robust enough? And most importantly, will AI replace the human workforce?

        These are tough questions that we need to keep in mind and address. In this blog, we will focus on generative AI models, their trustworthiness, and how we can mitigate the risks that come with using them in a business setting.

Before we lay out our trust issues, let's take a step back and explain what this new generative AI era means. Generative models are deep learning models that create new data. Their predecessors are chatbots, VAEs, GANs, and transformer-based NLP models. They hold an architecture that can fantasize about and create new data points based on the original data that was used to train them – and today, we can do all of this based on just a text prompt!
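The “create new data points based on the original data” idea can be shown with a deliberately tiny example: a character-level bigram model. It is orders of magnitude simpler than a GPT, but the principle is the same – learn a distribution from training data, then sample new data from it. The toy corpus and function names are ours, for illustration only.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Learn next-character counts: a miniature stand-in for what generative models learn."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        # '^' marks the start of a word, '$' the end.
        for current, nxt in zip("^" + word, word + "$"):
            counts[current][nxt] += 1
    return counts

def generate(counts, max_len=10, seed=None):
    """Sample a brand-new string from the learned distribution."""
    rng = random.Random(seed)
    out, ch = "", "^"
    while len(out) < max_len:
        options = counts[ch]
        nxt = rng.choices(list(options), weights=list(options.values()))[0]
        if nxt == "$":
            break
        out += nxt
        ch = nxt
    return out

model = train_bigram(["data", "dare", "date"])
print(generate(model, seed=0))  # a new word-like string built from learned statistics
```

The sampled strings were never in the training set as such; they are fantasized from its statistics, which is also why such models can produce fluent nonsense.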

        The evolution of generative AI, with 2022 and 2023 bringing about many more generative models.

We can consider chatbots the first generative models, but looking back, we've come very far since then, with ChatGPT and DALL-E being easily accessible interfaces that everyone can use in their day-to-day. It is important to remember these are interfaces with generative pre-trained transformer (GPT) models under the hood.

        The widespread accessibility of these two models has brought about a boom in the open-source community where we see more and more models being published, in the hopes of making the technology more user-friendly and enabling more robust implementations.

        But let’s not get ahead of ourselves just yet — we will come back to this in our next blog. What’s that infamous Spiderman quote again?

        With great power…

The generative AI era has so much potential in moving us closer to artificial general intelligence (AGI), because these models are trained on understanding language but can also perform a wide variety of other tasks, in some cases even exceeding human capability. This makes them very powerful in many business applications.

Starting with the most common: text applications, fueled by GPT and GAN models. These include everything from text generation to summarization and personalized content creation, and can be used in education, healthcare, marketing, and day-to-day life. The conversational component of text applications is used in chatbots and voice assistants.

Next, code-based applications are fueled by the same models, with GitHub’s Copilot as the most notable example. Here we can use generative AI to complete our code, review it, fix bugs, refactor, and write code comments and documentation.

On the topic of visual applications, we can use DALL-E, Stable Diffusion, and Midjourney. These models can be used to create new or improved visual material for marketing, education, and design. In the health sector, we can use these models for semantic translation, where semantic images are taken as input and a realistic visual output is generated. 3D shape generation with GANs is another interesting application in the video game industry. Finally, text-to-video editing with natural language is a novel and interesting application for the entertainment industry.

        GANs and sequence-to-sequence automatic speech recognition (ASR) models (such as Whisper) are used in audio applications. Their text-to-speech application can be used in education and marketing. Speech-to-speech conversion and music generation have advantages for the entertainment and video game industry, such as game character voice generation.

        Some applications of generative AI in industries.

        Although powerful, such models also come with societal limitations and risks, which are crucial to address. For example, generative models are susceptible to unexplainable or faulty behavior, often because the data can have a variety of flaws, such as poor quality, bias, or just straight-up wrong information.

        So, with great power indeed comes great responsibility… and a few trust issues

        If we take a closer look at the risks regarding ethics and fairness in generative models, we can distinguish multiple categories of risk.

The first major risk is bias, which can occur in different settings. An example of bias is the use of stereotypes around race, gender, or sexuality, which can lead to discrimination and unjust or oppressive answers generated by the model. Another form of bias lies in the model’s word choice: its answers should be formulated without toxic or vulgar content, and without slurs.

        One example of a language model that learned a wrong bias is Tay, a Twitter bot developed by Microsoft in 2016. Tay was created to learn, by actively engaging with other Twitter users by answering, retweeting, or liking their posts. Through these interactions, the model swiftly learned wrong, racist, and unethical information, which it included in its own Twitter posts. This led to the shutdown of Tay, less than 24 hours after its initial release.

        Large language models (LLMs) like ChatGPT generate the most relevant answer based on the constraints, but it is not always 100% correct and can contain false information. Currently, such models provide their answers written as confident statements, which can be misleading as they may not be correct. Such events where a model confidently makes inaccurate statements are also called hallucinations.

In 2023, Microsoft released a GPT-backed model to empower their Bing search engine with chat capabilities. However, there have already been multiple reports of undesirable behavior by this new service. It has threatened users with legal consequences or exposed their personal information. In another situation, it tried to convince a tech reporter he was not happily married and that he was in love with the chatbot (it also proclaimed its love for the reporter) and consequently should leave his wife (you see why we have trust issues now?!).

Generative models are trained on large corpora of data which, in many cases, are scraped from the internet. This data can contain private information, causing a privacy risk, as it can unintentionally be learned and memorized by the model. This private data concerns not only people, but also project documents, code bases, and works of art. When using medical models to diagnose a patient, it could also include private patient data. This ties into copyright when memorized private data is reproduced in a generated output. For example, there have been cases where image diffusion models included slightly altered signatures or watermarks learned from their training sets.

        The public can also maliciously use generative models to harm/cheat others. This risk is linked with the other mentioned risks, except that it is intentional. Generative models can easily be used to create entirely new content with (purposefully) incorrect, private, or stolen information. Scarily, it doesn’t take much effort to flood the internet with maliciously generated content.

        Building trust takes time…and tests

To mitigate these risks, we need to ensure the models are reliable and transparent through testing. Testing of AI models comes with some nuances compared to testing of software, which need to be addressed in an MLOps setting with data, model, and system tests.

        These tests are captured in a test strategy at the very start of the project (problem formulation). In this early stage, it is important to capture key performance indicators (KPIs) to ensure a robust implementation. Next to that, assessing the impact of the model on the user and society is a crucial step in this phase. Based on the assessment, user subpopulation KPIs are collected and measured against, in addition to the performance KPIs.

        An example of a subpopulation KPI is model accuracy on a specific user segment, which needs to be measured on data, model, and system levels. There are open-source packages that we can use to do this, like the AI Fairness 360 package.
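As a toy illustration of such a subpopulation KPI (the AI Fairness 360 package offers far richer metrics; the segment names and predictions below are entirely hypothetical), per-segment accuracy can be computed directly from labelled predictions:

```python
# Illustrative sketch only (not the AI Fairness 360 API): measuring a
# subpopulation KPI -- model accuracy broken down by user segment.

def accuracy_by_segment(records):
    """records: iterable of (segment, y_true, y_pred) tuples."""
    totals, correct = {}, {}
    for segment, y_true, y_pred in records:
        totals[segment] = totals.get(segment, 0) + 1
        if y_true == y_pred:
            correct[segment] = correct.get(segment, 0) + 1
    return {s: correct.get(s, 0) / totals[s] for s in totals}

# Hypothetical labelled predictions for two user segments
records = [
    ("segment_a", 1, 1), ("segment_a", 0, 0), ("segment_a", 1, 0),
    ("segment_b", 1, 1), ("segment_b", 0, 1),
]
print(accuracy_by_segment(records))
# A large accuracy gap between segments flags a fairness issue to investigate.
```

A real pipeline would compute this at the data, model, and system levels, and compare the gap against the thresholds agreed in the test strategy.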

        Data testing can be used to address bias, privacy, and false information (consistency) trust issues. We make sure these are mitigated through exploratory data analysis (EDA), with assessments on bias, consistency, and toxicity of the data sources.

        The data bias mitigation methods vary depending on the data used for training (images, text, audio, tabular), but they boil down to re-weighting the features of the minority group, oversampling the minority group, or under-sampling the majority group.
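One of the mitigation methods listed above, oversampling the minority group, can be sketched in a few lines (the grouped rows here are hypothetical; real training data would of course carry features and labels):

```python
import random

# Illustrative sketch: oversample each under-represented group until all
# groups match the size of the largest one.

def oversample(rows, group_key):
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw (with replacement) enough extra rows to reach the target size
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

rows = [{"group": "a"}] * 8 + [{"group": "b"}] * 2
balanced = oversample(rows, "group")
counts = {g: sum(1 for r in balanced if r["group"] == g) for g in "ab"}
print(counts)  # both groups now the same size
```

Under-sampling the majority group or re-weighting features follows the same pattern in reverse; either way, the resampled dataset is what gets committed to data version control.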

        These changes need to be documented and reproducible, which is done with the help of data version control (DVC). DVC allows us to commit versions of data, parameters, and models in the same way “traditional” version control tools such as git do.

Model testing focuses on model performance metrics, which are assessed through training iterations with validated training data from previous tests. These need to be reproducible and saved with model versions. We can support this through open-source MLOps packages like MLflow.

        Next, model robustness tests like metamorphic and adversarial tests should be implemented. These tests help assess if the model performs well on independent test scenarios. The usability of the model is assessed through user acceptance tests (UAT). Lags in the pipeline, false information, and interpretability of the prediction are measured on this level.

        In terms of ChatGPT, a UAT could be constructed around assessing if the answer to the prompt is according to the user’s expectation. In addition, the explainability aspect is added — does the model provide sources used to generate the expected response?

        System testing is extremely important to mitigate malicious use and false information risks. Malicious use needs to be assessed in the first phase and system tests are constructed based on that. Constraints in the model are then programmed.

OpenAI is aware of possible malicious uses of ChatGPT and has incorporated safety as part of its strategy. It has described how it tries to mitigate some of these risks and limitations. In a system test, these constraints are validated on real-life scenarios, as opposed to the controlled environments used in previous tests.

        Let’s not forget about model and data drift. These are monitored, and retraining mechanisms can be set up to ensure the model stays relevant over time. Finally, the human-in-the-loop (HIL) method is also used to provide feedback to an online model.
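As one minimal illustration of drift monitoring, the population stability index (PSI) is a common heuristic for comparing a feature's binned distribution at training time against production; the bin proportions and the 0.2 alert threshold below are illustrative conventions, not values mandated by any particular tool:

```python
import math

# Sketch of data-drift monitoring with the population stability index.
# PSI near 0 means the live distribution still matches training data;
# values above ~0.2 are commonly treated as significant drift.

def psi(expected, actual):
    """expected/actual: bin proportions that each sum to 1."""
    eps = 1e-6  # guard against log(0) for empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

train_bins = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live_bins = [0.10, 0.20, 0.30, 0.40]   # distribution observed in production

score = psi(train_bins, live_bins)
if score > 0.2:
    print(f"PSI={score:.3f}: drift detected, consider retraining")
```

In practice this check runs on a schedule against fresh production data, and a sustained breach of the threshold triggers the retraining mechanism mentioned above.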

        ChatGPT and Bard (Google’s chatbot) have the possibility of human feedback through a thumbs up/down. Though simple, this feedback is used to effectively retrain and align the underlying models to users’ expectations, providing more relevant feedback in future iterations.

        To trust or not to trust?

Just as on the internet, truth and facts are not always a given, and we’ve seen (and will continue to see) instances where ChatGPT and other generative AI models get it wrong. While it is a powerful tool, and we completely understand the hype, there will always be some risk. It should be standard practice to implement risk and quality control techniques to minimize these risks as much as possible. And we do see this happening in practice: OpenAI has been transparent about the limitations of its models, how they have been tested, and the governance that has been set up. Google also has responsible AI principles that it abided by when developing Bard. As both organizations release new and improved models, they also advance their testing controls to continuously improve quality, safety, and user-friendliness.

        Perhaps we can argue that using generative AI models like ChatGPT doesn’t necessarily leave us vulnerable to misinformation, but more familiar with how AI works and its limitations. Overall, the future of generative AI is bright and will continue to revolutionize the industry if we can trust it. And as we know, trust is an ongoing process…

        In the next part of our Trustworthy Generative AI series, we will explore testing LLMs (bring your techie hat) and how quality LLM solutions lead to trust, which in turn, will increase adoption among businesses and the public.

        This article first appeared on SogetiLabs blog.

        The post ChatGPT and I have trust issues appeared first on Capgemini Australia.

        ]]>
        The need for wealth-as-a-service https://www.capgemini.com/au-en/insights/expert-perspectives/the-need-for-wealth-as-a-service/ https://www.capgemini.com/au-en/insights/expert-perspectives/the-need-for-wealth-as-a-service/#respond Mon, 06 Mar 2023 08:32:45 +0000 https://www.capgemini.com/?p=868712 Recent times have witnessed the popularity of white-label banking, or as it is widely known – Banking-as-a-Service (BaaS).

        The post The need for wealth-as-a-service appeared first on Capgemini Australia.

        ]]>

        THE NEED FOR WEALTH-AS-A-SERVICE

        Shreya Jain
        06 March 2023

        Recent times have witnessed the popularity of white-label banking, or as it is widely known – Banking-as-a-Service (BaaS).

        BaaS allows banks to expand their reach by catering to wider and newer segments of customers – made possible by the integration of their APIs with non-bank services. Both incumbents and new-age banks are leveraging the BaaS model extensively, as is made apparent by the projected global market size of BaaS – on target to reach USD 74.55 billion by 2030.

The banking business today expects agility with quick, tangible results. Banks have thus been increasingly reluctant to commit heavy IT spending to programs that are costly, complex, and come with a high risk of failure. In this climate, the BaaS model has proved to be an asset to the Financial Services (FS) industry by providing customers with the financial services they require, delivered at the time of need and through the appropriate means. Some in the FS industry are now wondering: could the same as-a-service model be applied to wealth management? A Wealth-as-a-Service (WaaS) model could allow wealth managers to expand their reach to hitherto-inaccessible markets, for instance by offering services modularly to clients without spending a fortune or sacrificing valuable time building the capabilities in-house.

        Apart from this, there are many traditional challenges that a WaaS offering could help tackle in the WM industry:​

• Costly and inflexible servicing: Every evolution to a service or product requires a vast amount of energy across siloed applications. Moreover, IT relies on legacy platforms with barriers between front office, middle office, and back office, and between data and production. This status quo favors ballooning back-office compliance and risk costs.
• Limited digital maturity: Traditional wealth solutions rely on ageing platforms and are complex to maintain and upgrade. Even as banks embrace digitalization, their efforts are often either customer-centric or bank-centric, but rarely both. Also, information that could drive personalization in wealth offerings by building on commonalities in advisory and investment is rarely used to its utmost benefit.
        • One product, one price: Pricing of WM products has historically been complex and tightly coupled to the product. Since the client base for wealth managers varies from Mass Affluent to Ultra High Net Worth Individuals (UHNWIs), the pricing of products should ideally be customized and variated across customer segments and profiles.
        • Scattered wealth players: The complexity of wealth investment requires expertise and technology from very different areas to be pooled together, with no common ground to play with. This further results in each player having its own tools, limiting their ability to interact without extreme (costly) customization.

BaaS was made possible by technology. Today, WealthTechs perform that role, providing services across the Wealth Management value chain and serving as an ecosystem conducive to implementing the WaaS model. Firms such as Temenos and InvestCloud already offer platforms that can be modularly deployed across the entire WM value chain. These extended bank services enable institutions and third parties to collaborate better through a “Wealth Marketplace” that adds value for end clients.

        As with any other SaaS model, an ideal WaaS offering should leverage new technological paradigms to enable modularity and be adaptable to customer needs and ambitions. It should thus have the option to be offered either as a turnkey solution on a shared platform with low customization, or as a personalized platform that is custom-made for advanced client needs with extended bank capabilities. Offers could range from a full WaaS on a shared platform, a Hybrid WaaS that can be deployed modularly, to a private WaaS on a personalized platform. In any form, WaaS must have the necessary features to append to the capabilities of an FS provider:

• Open: It should allow omni-channels to be easily plugged into third-party APIs
        • Modular: It should be built on new architectures that offer a modular approach – to deploy progressively, only the required components
        • Multi-tenant: It should be able to serve multi-entities in a vast array of geographical, legal, and financial combinations
• Cloud-native: It should be designed to reside in the cloud, across any cloud service provider, offering the benefits of microservices and auto-scaling
        • Pay-as-you-use model: Pricing models should be evolutive – usage based, packages, subscriptions – to be able to serve the needs of every client firm​
        • Continuously enriching: To remain competitive, the WaaS ecosystem should be continuously enriching its service catalogue with best-of-breed solutions​
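The pay-as-you-use requirement above can be sketched in a few lines; the per-call rate and subscription cap here are invented purely for illustration, not any vendor's actual pricing:

```python
# Illustrative sketch of an evolutive pricing model: bill for usage,
# but never charge more than a flat subscription cap.

def monthly_charge(api_calls, per_call=0.002, subscription_cap=500.0):
    """Usage-based billing with a subscription ceiling."""
    return min(api_calls * per_call, subscription_cap)

print(monthly_charge(50_000))     # light usage: billed per call
print(monthly_charge(1_000_000))  # heavy usage: capped at the subscription
```

A real WaaS catalogue would layer packages and tiered discounts on top, but the principle is the same: the client firm's bill tracks its actual consumption.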

        As many providers of WaaS models emerge in the market, banks have already started to join the trend and grab the early mover’s advantage. With the Wealth Management industry ripe for harnessing the benefits of a WaaS model, the success of BaaS models has already paved the way for its adoption. With the technology and a marketplace already established, it will be interesting to witness the trend unfold.

        Author

        Shreya Jain

        Manager, Global Banking Industry

          The post The need for wealth-as-a-service appeared first on Capgemini Australia.

          ]]>
          https://www.capgemini.com/au-en/insights/expert-perspectives/the-need-for-wealth-as-a-service/feed/ 0
          The rise of the mass affluent https://www.capgemini.com/au-en/insights/expert-perspectives/the-rise-of-the-mass-affluent/ https://www.capgemini.com/au-en/insights/expert-perspectives/the-rise-of-the-mass-affluent/#respond Sat, 25 Feb 2023 02:20:28 +0000 https://www.capgemini.com/?p=866259 Over the last few years, a growing middle class has led to a steep growth in the number of mass affluent customers across the world.

          The post The rise of the mass affluent appeared first on Capgemini Australia.

          ]]>

          THE RISE OF THE MASS AFFLUENT

          Capgemini
          28 Feb 2023

          Over the last few years, a growing middle class has led to a steep growth in the number of mass affluent customers across the world.

This segment of wealth customers, described as those having investable assets in the range of US$100,000 to $1 million, accounted for about 11% of the overall global population in 2020, a healthy proportion of which are digitally engaged young professionals. They account for about 40% of global wealth, and are expected to replace the middle class as growth drivers in the coming decade. As per a report from Global Data Wealth Markets Analytics, the US mass affluent wealth band alone is expected to account for upwards of US$47 trillion of wealth by 2025.

          However, despite their significant scale and the immense potential of the mass affluent segment, it has thus far not been a top priority segment for Wealth Management (WM) firms. Capgemini’s 2022 WM executive survey found that only 27% of WM firms currently serve mass affluent clients, and only 36% firms are exploring mass affluent services.

In recent years, several different FinTechs have seized this opportunity and started to offer cost-effective solutions to help clients reach their investment goals. While traditional banks see the promise of this segment, they are not sure how to approach it. This segment is financially and digitally savvy, is fee-sensitive, and likes to shop around for various options, not hesitating to spread assets across providers. Hence, a generic cookie-cutter approach to targeting such clients is unlikely to create much stickiness in the relationship. However, the investable wealth levels of this segment do not justify the traditional one-to-one personal wealth advisor approach.

          Given this conundrum, WM firms must consider three steps to attract and retain clients from this segment:

          • Leverage actionable data for insights. Develop a client-centric strategy to create cost-effective yet bespoke offerings with an optimal balance of digital and personal interactions.
• Invest in advanced tech solutions. Given the rapid rise of FinTechs, hybrid robo-services, and high client expectations for digital, tailored solutions, WM firms will need to leverage the latest in tech innovation to differentiate and compete. Investments need to be made in digital channels, AI, and machine learning to know customers and serve them better. At the same time, a high degree of process automation will be needed to remain cost-effective and nimble.
          • Invest in an agile operating model. Having a modular architecture centered on an aggregation layer leveraging capabilities from legacy systems, as well as partner components and third parties, will allow WM firms to better leverage their ecosystem. It will also enable them to be better prepared for an expanding product universe consisting of not just traditional asset classes, but also newer ones such as alternatives, private markets, various digital assets (such as cryptos and NFTs), and ESG investments.

          While several firms are attempting to build these capabilities in-house, many others are acquiring these capabilities. Morgan Stanley acquired Solium Capital Inc. to enhance its workplace wealth solutions in 2019. JP Morgan Chase announced the acquisition of Nutmeg in 2021 to boost its digital wealth management capabilities.

          As the size of this segment and the investable wealth it possesses increases over the next few years, competition between banks and WM firms serving this segment will continue to heat up. Banks will need to differentiate themselves on the client relevancy of their offerings (advising what is best for the client rather than pushing specific products), pricing, and the ability to adapt their offerings based on the lifecycle stage of their clients. Additionally, the ability to detect retail banking clients who may soon join the ‘mass affluent’ club, and to start engaging with them early, will position banks to start working with mass affluent clients from the inception of their first portfolios onward. As the wealth of these new prospects grows, so too will the potential business for the banks that have earned their trust.

          Author

          Anuj Agarwal

          Director, Global Banking Industry
          I bring value to our clients by helping them understand the rapidly changing financial services landscape, and advise on emerging trends, technologies, and markets. I leverage my domain and industry knowledge to support them in developing strategies that can address their business objectives.

            The post The rise of the mass affluent appeared first on Capgemini Australia.

            ]]>
            https://www.capgemini.com/au-en/insights/expert-perspectives/the-rise-of-the-mass-affluent/feed/ 0
            New-age wealth management models set to make an Impact! https://www.capgemini.com/au-en/insights/expert-perspectives/new-age-wealth-management-models-set-to-make-an-impact/ https://www.capgemini.com/au-en/insights/expert-perspectives/new-age-wealth-management-models-set-to-make-an-impact/#respond Wed, 22 Feb 2023 14:04:27 +0000 https://www.capgemini.com/?p=864933 The wealth management industry is embarking on its next evolution. For a long time, the focus of the wealth management industry has been on UHNW and HNWI. This focus is now shifting towards the lower end of HNWI and the higher end of mass affluents.

            The post New-age wealth management models set to make an Impact! appeared first on Capgemini Australia.

            ]]>

            New-Age Wealth Management Models Set to Make an Impact!

            Capgemini
            21 Feb 2023

The wealth management industry is embarking on its next evolution. For a long time, the industry’s focus has been on ultra-high-net-worth (UHNW) and high-net-worth individuals (HNWIs). This focus is now shifting towards the lower end of HNWIs and the higher end of the mass affluent. The race is on to acquire customers from these segments. Financial firms are now looking within their businesses to cross-sell to these groups. JPMC has launched a wealth plan for its over 60 million retail bank customers in the US to provide access to investment advisors.


The advisors will guide them to a personalized financial plan and provide investment recommendations with access to a pre-built investment portfolio. Cross-selling to one’s own customers is not new – but cross-selling to them at such scale is.

            Models That Change the Game

            The challenge is to acquire new wealth management customers in large numbers and serve them effectively at a lower cost. To overcome this, three service models have been introduced: In Person, Digital, and Hybrid.

            • ‘In Person’ Model: Wealth advisors identify customers, create financial plans, decide investment allocation, and onboard them.
            • ‘Digital’ Model: Also known as Robo-advisory.
            • ‘Hybrid’ Model: A combination of “In Person” and “Digital”

Customers are looking for different levels of support for financial planning, portfolio construction, portfolio tracking, and rebalancing. Wealth Management firms have typically responded with the cost-efficient models categorized above.

            Going Beyond “Business as Usual”

Besides these core areas, there are a few critical adjacent areas: customer reporting, on-demand advice, tax, and estate planning. Wealth management firms are now modularizing their offerings so that customers can choose the services they want. Initial digital services were limited to the customer experience layer. Newer services are fully digital, through process re-design and automation of manual handoffs. The next wave of customer experience transformation will go beyond customer profiling, onboarding, and reporting.

Investment Planning, Investment Analysis, and Portfolio Management Will Become More Sophisticated in the Hybrid Model

            While the hybrid model already exists, the Do-It-Yourself tools are often rudimentary. The sophisticated tools are only available in the advisor-guided model. The new customer experience transformation will provide these tools to the customers with built-in investment guardrails and a spectrum of advisor oversight.

            • Mapping financial goals to investment options: Brokerage firms and retail banks offer some tools for defining financial goals and sophisticated tools for investment options. The connection between these two is often lacking. The new customer experience will strengthen this connection and enable much more sophisticated self-planning by the mass market. The next wave of sophistication will build on the current foundational tools.
            • Guided portfolio construction: In the current hybrid model, mass-market customers get pre-built portfolios with a few reviews by advisors each year. The next wave of customer experience can enhance the portfolio options themselves. It is conceivable to have many more options for stocks and bonds and present a what-if analysis beyond the limited advisor review. It will open more value-added services from the advisors for complex portfolios of alternate assets, commodities, and real estate at higher fees.
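The goal-to-investment link described above can be illustrated with a hypothetical what-if projection: how long does a fixed monthly contribution take to reach a goal under different assumed annual returns? All figures below are invented for illustration, not investment advice:

```python
# Illustrative what-if sketch: project a monthly contribution compounding
# at an assumed annual return until a financial goal is reached.

def years_to_goal(goal, monthly, annual_return, max_years=60):
    balance, months = 0.0, 0
    r = annual_return / 12  # simple monthly compounding assumption
    while balance < goal and months < 12 * max_years:
        balance = balance * (1 + r) + monthly
        months += 1
    return months / 12

# What-if across three hypothetical portfolio mixes
for rate in (0.03, 0.05, 0.07):
    print(f"{rate:.0%} return: ~{years_to_goal(100_000, 500, rate):.1f} years")
```

A guided tool would wrap projections like this in advisor-defined guardrails (allowed asset mixes, risk bands) and surface them alongside the pre-built portfolios.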

AUM-based pricing is not flexible enough for this à la carte service model, and clients are not willing to pay significantly higher fees for bundled services. Wealth management firms should realize that there is no such thing as a perfect pricing model and should focus on smart pricing that is clear, comprehensible, and reasonable.

            The main aim must be to match propositions to the client’s needs and charge a fair price for the same to build a strong client-customer relationship in the long run. Hence, clear cost attribution and transparent pricing are needed in the new model. This will also help firms adjust to dynamic commercial environments.

            Author

            Nilesh Vaidya

            Global Industry Head – Retail Banking & Wealth Management

              The post New-age wealth management models set to make an Impact! appeared first on Capgemini Australia.

              ]]>
              https://www.capgemini.com/au-en/insights/expert-perspectives/new-age-wealth-management-models-set-to-make-an-impact/feed/ 0
              Harnessing data in ADM services to drive digital transformation https://www.capgemini.com/au-en/insights/expert-perspectives/harnessing-data-in-adm-services-to-drive-digital-transformation/ https://www.capgemini.com/au-en/insights/expert-perspectives/harnessing-data-in-adm-services-to-drive-digital-transformation/#respond Fri, 17 Feb 2023 09:30:00 +0000 https://www.capgemini.com/au-en/?p=509086 The post Harnessing data in ADM services to drive digital transformation appeared first on Capgemini Australia.

              ]]>

              HARNESSING DATA IN ADM SERVICES TO DRIVE DIGITAL TRANSFORMATION

              David McIntire
              17 Feb 2023

              How you can utilize your data to optimize your applications – and build the foundation for your future business

Digital transformation strategies that fail to recognize and apply the power that data holds risk operating in the dark, or, at best, leaving a lot of potential opportunities untapped. A recent Capgemini Research Institute (CRI) study entitled The data-powered enterprise: Why organizations must strengthen their data mastery highlights how companies can exploit data to drive real business value. The study found that companies that use data as a foundation for their operations – so-called “Data Masters” – realize a significant performance advantage relative to their peers. This advantage spans customer engagement, revenue growth, operational efficiency, and cost savings – including 70% higher revenue per employee and 22% higher profitability overall.
               
              However, becoming a Data Master is a journey – not a one-off project with an immediate ROI. A focus on leveraging data within application landscapes and the wider IT ecosystem enables companies to build the foundation for their evolutive journeys to becoming Data Masters.

              Digital transformation that puts you in the driver’s seat – Harnessing the true potential of data-driven ADM

              Building application development and maintenance (ADM) services that can fully utilize data is the first step in a company’s data modernization journey. The systems that are core to the delivery of data-enabled ADM contain a wealth of data and insights to accelerate the delivery of services. For example, the data residing in an ITSM tool can be extracted and analyzed to understand the nature of incidents that typically make up the bulk of an application maintenance team’s workload. This can help in identifying the highest-impact incidents to target automated resolution or enhance monitoring to drive down incident volumes.

              Additionally, analyzing ticket data for recurring incidents targets root-cause analysis initiatives on the highest-impact problems. Extending this data analysis can also facilitate an AI-enabled capability to identify not just the root cause – but also the resolution that eliminates these incidents from even occurring.

              The combination of automating the resolution of one batch of high-frequency incidents, and pre-emptively eliminating another batch of recurring incidents can bring a material reduction in application support effort.
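The first step of the analysis described above, mining ticket exports for the highest-frequency incident categories, is straightforward to sketch (the ticket data and categories here are hypothetical; a real ITSM export would carry many more fields):

```python
from collections import Counter

# Sketch: rank incident categories by volume to find the best candidates
# for automated resolution or root-cause analysis.

tickets = [  # hypothetical ITSM export
    {"id": 1, "category": "password reset"},
    {"id": 2, "category": "batch job failure"},
    {"id": 3, "category": "password reset"},
    {"id": 4, "category": "disk space alert"},
    {"id": 5, "category": "password reset"},
]

top_targets = Counter(t["category"] for t in tickets).most_common(2)
print(top_targets)  # highest-volume categories first
```

In a real engagement the same counting would be weighted by resolution effort and business impact, so that automation lands on the incidents that consume the most support capacity.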

              The resources freed up through this process can then be applied to further the data modernization journey. Assessing the “as-is” and then modernizing data landscapes to eliminate data silos and redundancies enables the further exploitation of data. This newly standardized and sanitized data provides fresh insights into further transformation opportunities – particularly on the business side – that enhance the value of data to drive real change.

              Capgemini’s ADMnext^Data – Bringing data to light to help you successfully navigate your digital transformation journey

              Capgemini’s ADMnext^Data integrates all the assets and capabilities of our market-leading ADM services with our unique insights and data capabilities. These combined capabilities enable us to help guide you on your data modernization journey as part of a long-term relationship.

              Firstly, our Enterprise Automation Fabric (EAF) offering specifically focuses on incorporating data into the heart of the ADM services we offer. EAF is the foundational automation suite that underpins the delivery of services across technology and business process operations. It works with your ITSM to extract incident data and identify the highest value transformation and automation initiatives. It also possesses the AIOps capabilities to automate the resolution of incidents and root-cause-analysis processes.

As support requirements fall and resources are freed thanks to EAF, Capgemini can then leverage assets such as our eAPM and Advantage-ROI tools to help you better understand your current maturity and implement the highest-value transformation opportunities across your data estate. Value can be identified both from modernizing data landscapes (for example, through migration to cloud or application rationalization) and from business process transformation efforts.

Data-enabled digital transformation provides companies with an unprecedented opportunity to leverage data in ways that set them apart from the competition. Capgemini’s ADMnext^Data gives you the tools and expertise to guide you on your data-enabled digital transformation journey.

To start down the path to data-enabled digital transformation as a Data Master, drop me a line and visit us here to learn more.

              Author

              David McIntire

              ADMnext North America Offer Lead
              As part of the North American ADM Center of Excellence, I focus on developing innovative go-to-market offerings, thought leadership, and client solutions. I possess more than 20 years of experience in both shaping ADM solutions that help clients achieve their business objectives and defining performance management programs that demonstrate the value realized. I also develop thought leadership that enables clients to better understand the current state – and future direction – of the ADM market.

                Will organizations need to change at a fundamental level?

                Susana Rincón
                17 Feb 2023

                Open ecosystems that bring startups into the mix offer huge potential advantages for all players. But will organizations need to change at a fundamental level to facilitate them?

As part of our new series of blogs and vlogs focused on startups and their role as a catalyst for sustainable innovation, Capgemini Ventures is exploring the benefits of opening up the conversation. But, of course, there’s a lot to take on board during this journey, and the first consideration is the introduction of new ways of working.

                For many organizations, there’s a notion that innovation from startups can be positioned at the periphery and not the heart of the bigger picture. In our opinion, however, this represents a gap in corporate strategy that needs to be plugged. Startups are no longer shiny objects but are now embedding themselves firmly into business value propositions.

However, collaborating successfully with startups requires more than a change of mindset. Because collaboration through an open ecosystem speeds up time to market, enterprises may need to explore new approaches to old challenges – rapidly adapting their organization, processes, systems, and even business models to respond with agility. But this isn’t necessarily as daunting as it sounds.

                Many organizations are beginning to realize the value in blurring their boundaries and including startups as part of their value proposition. Salesforce, for example, has augmented part of their business to embed startups. They’ve created AppExchange, where startups and independent software vendors (ISVs) can sell their services and grow. AppExchange is the leading enterprise cloud marketplace to help extend Salesforce – and customers can find proven apps and experts to quickly solve their business challenges.

                Dominique Gillies, Regional Vice-President, Strategic ISV Partnerships, EMEA, Salesforce, explains: “This is the fastest way to bring innovation to our customers and ensure their success in the long run.”

                Bringing startups into an effective ecosystem

If the benefits of bringing startups into an effective ecosystem are clear, and other organizations are starting to reap the rewards, then the next question is: how? Because even though startups bring in disruptive innovations and exciting new technologies, they also bring niche market focus and higher risk. So, how do you overcome the challenges?

                Here are a few ways to help organizations create an open ecosystem that fosters powerful collaborations and partnerships with startups – while avoiding many of the associated growing pains:

                1.    Establish internal sponsorship and strategic buy-in

The first thing you must do is work out your mission and define your unique objectives, because it’s vital to have strategic clarity on the business need.

                Next, it’s important to map out the stakeholders across leadership and the operational teams who will execute the strategic ambition – and take the newly defined objectives to them. These can not only be used to achieve the buy-in that’s required, but also act as a tool to learn more about the business and its needs. When working with a partner like Capgemini, you can then bring the objectives to us to help sharpen the proposition, too.

                It should all be part of an ongoing process that happens over time, as opposed to a one-off strategic play. This will help to foster strong collaborations that nurture long-term associations with startup ecosystems.

                2.    Scout the right startup

                New startups appear all the time and spotting the right one can be a challenge. You should therefore have a pre-defined and time-bound process that provides decision-making support to your business before any startup engagement begins.

                The methodology that’s employed should help your organization understand the strengths, synergies, and risks of engaging with a particular startup – and should ultimately be tied to your overarching strategic ambition.

                Once a suitable startup has been identified, and the due diligence carried out, you can then work with the startup closely to jointly develop a value proposition that will deliver the right outcomes before any actual work begins. This can be followed throughout the entire engagement to ensure everything remains on track.

                3.    Test the water

                Along with gathering anecdotal evidence and assessment outcomes, it’s important to run proof of concept (POC) testing to demonstrate feasibility, while continually interrogating the value proposition that’s been defined.

This will enable you to outline the constraints and parameters that will ultimately help validate how the startup’s solution infuses innovation into, and complements, your market position and go-to-market strategy. It’ll also provide insight into whether you should invest in, or nurture, a strategic alliance.

                4.    Collaborate with agility and rigor

                Once all the preparation stages are complete, the process itself can begin. Here, it’s crucial to be clear on decision-making and timelines – and simplify them wherever possible.

The most successful organizations also make sure to propose simplified contract templates that are specific to the startup and approved by the purchasing department. This provides the structure required for everyone to collaborate with agility, moving freely within agreed guidelines and expectations.

                It’s a good idea to establish a dedicated single point of contact: someone within your organization who can connect the startup with the relevant contact they need based on the project type.

The business collaboration framework defined by Capgemini Ventures industrializes this set of guidelines for collaborating with startups, speeding up go-to-market (GTM) and providing risk-mitigation strategies ready to be implemented.

                5.    Accelerate to adopt at scale

                When the priorities are established, the business alignment model is defined, and the contracts are signed off, it’s time to scale and accelerate innovation across the global enterprise.

                You can mobilize and access the relevant resources required to make scaling easier. You can promote collaboration internally and externally to help the startup grow. And you can start working towards reaching the desired outcomes of the collaboration.

However, you must remember to keep working at it to achieve success…

                Bridging the gap between business and technology

                In conclusion, it’s become clear that bilateral partnerships are no longer enough. Each partner must understand every other partner’s wider ecosystem. If we all do that successfully, then we’ll collectively benefit from some truly exciting and innovative ideas.

                As the bridge between business and technology, Capgemini is here to help our clients adopt startup solutions at scale. A startup is a partner unlike any other and our open framework is fine-tuned to your organizational requirements to enable you to adapt and build – it’s a strong foundation for creating a mutually beneficial relationship for both sides. For further insight into the other major considerations around bringing startups into an effective ecosystem, don’t forget to keep your eyes peeled for the forthcoming blogs and vlogs in this series. Meanwhile, you can catch up on the opening blog article of the series here.


                Susana Rincón

                Global Startup Manager at Capgemini Ventures
I have 11 years of professional experience in innovation, open and social innovation, strategic alliances, and business development, with a focus on executing business plans for high-social-impact initiatives, transformational change, and project management.
