The Promise of AI
A SEERIST WHITE PAPER
6 Opportunities + 6 Challenges
As the CEO of Seerist, I have the good fortune of sitting side by side with some of the pioneers who have been at the heart of the technological advancements that have reshaped the world in recent decades. One of the most transformative forces driving this change is Artificial Intelligence (AI) and its impact on operations, security, defense, and intelligence applications.
AI is altering industries, economies, and societies and will be fully embedded in our daily lives in more disruptive ways than we currently comprehend. From speeding the discovery of events to enabling predictive analytics that enhance the quality of decision making, AI is now an integral part of how threat and security professionals stay ahead of global events. It’s crucial for us, as leaders and decision-makers, to fully comprehend the opportunities and challenges that AI brings.
The rapid expansion and easy accessibility of AI have stimulated active conversations in academia, among technologists, and among the experts who build and work with AI tools about the limitations and risks of rapidly proliferating generative AI tools. In this paper, I will dive into the importance of embracing AI, explore the risks it could entail, and discuss strategies to mitigate those risks in order to bridge the gap between the experts and everyday users of AI.
“AI is altering industries, economies, and societies and will be fully embedded in our daily lives in more disruptive ways than we currently comprehend.”
Jim Brooks
CEO, Seerist
The Promise of AI: Opportunities and Transformations
Undeniably, AI offers a wealth of opportunities for businesses. It can automate mundane tasks, enhance decision-making processes so that time becomes a strategic advantage, and open entirely new avenues for innovation, most of which are completely unforeseen at this point. But to fully leverage these opportunities, we must not just understand AI and its myriad applications but embrace it by educating ourselves and our teams.
At Seerist, we are focused on automating data-rich tasks, like distilling the massive influx of open-source intelligence into something usable, so that our customers can focus on more strategic, higher-value work.
AI-powered systems can handle massive volumes of data, analyze trends, spot anomalies and distortions in volatility, and enable analysts to focus on the most relevant information with additional context. This not only increases operational efficiency but also paves the way for more timely and informed business decisions.
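To make the anomaly-spotting idea concrete, here is a toy sketch in Python. It is illustrative only, not a description of Seerist’s models, and the figures are invented: it simply flags a day whose event volume deviates sharply from its recent baseline.

```python
# A toy anomaly check: flag the latest day if its event count sits far
# outside the recent baseline. All figures are invented for illustration.
import statistics

daily_event_counts = [12, 15, 11, 14, 13, 12, 58]  # hypothetical feed

baseline = daily_event_counts[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
latest = daily_event_counts[-1]

z = (latest - mean) / stdev  # standard deviations from the baseline
if z > 3:
    print(f"Anomaly: {latest} events vs. baseline ~{mean:.0f} (z = {z:.1f})")
```

Production systems are far more sophisticated, but the principle is the same: let the machine surface the deviation so the analyst can spend time on context, not counting.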
Generative AI, a subset of AI that has become the “AI of the day” with the launch of ChatGPT and Bard, has shown immense potential in various fields. These large language models (LLMs), with their ability to generate human-like text, design, and even code, have significant implications for content creation, customer interaction, and software development.
Nonetheless, the key to leveraging generative AI is to understand its limitations and ensure its responsible and ethical use.
The Flip Side: Understanding and Mitigating Risks
Like any technological advancement, AI brings with it a set of risks that need to be carefully thought through and managed. We’ve seen this with the advent of cars, which led to the development of seatbelts, airbags, proximity detectors, and other safety features. In the digital age, the introduction of emails necessitated spam filters. The proliferation of software created an industry of virus detection.
Similarly, as generative AI becomes more pervasive, we need to identify its potential risks and devise strategies to mitigate them.
Some of the generative AI challenges include:
1. Data Dependency (Quality and Accuracy)
Language models are heavily dependent on data quality and quantity. AI learns to generate statistical outputs based on patterns identified in the input data. If the training data is biased or unrepresentative, the output can be inaccurate or inappropriate and negatively impact downstream decision making.
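As a concrete illustration, here is a minimal, hypothetical sketch in Python using scikit-learn (invented data, not Seerist’s implementation). Because the training set only ever pairs one region with security events, the classifier learns to associate the region, rather than the event type, with risk.

```python
# A toy illustration of data dependency: "region alpha" only ever appears
# alongside security events, so the model learns the region, not the
# event type. All data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reports = [
    "protest in region alpha",
    "unrest in region alpha",
    "strike in region alpha",
    "festival in region beta",
    "concert in region beta",
]
labels = [1, 1, 1, 0, 0]  # 1 = flagged as a security event

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reports)
model = LogisticRegression().fit(X, labels)

# A peaceful event in region alpha: the flag probability is inflated by
# the region token alone, a biased and inaccurate output.
test = vectorizer.transform(["festival in region alpha"])
print(model.predict_proba(test)[0][1])  # probability of "security event"
```

A downstream decision built on that output inherits the bias in the training set, which is exactly the risk described above.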
2. Lack of Understanding
As we’ve seen with ChatGPT, language models can produce impressive results, but they may not always comprehend context or implicit nuances, potentially leading to responses that appear accurate but miss the subtleties important for taking the right action at the right time.
3. Prompt Dependency
AI models depend on prompts (e.g., “generate a picture of a brown dog on a skateboard”) and on engineering those prompts correctly. Poorly constructed prompts can be misinterpreted or manipulated, leading to unintended consequences and outcomes. Prompting, which began as a language-learning technique for children, has evolved into the primary mechanism for maximizing the accuracy of outputs and reasoning from language models.
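To illustrate, here is a minimal sketch using the OpenAI Python client; the model name and prompt wording are illustrative choices, and any chat-completion API follows the same pattern. The second prompt pins down scope, evidence, and output format, leaving far less room for misinterpretation.

```python
# A sketch of prompt dependency: the same model, asked two ways.
# Assumes OPENAI_API_KEY is set; model and wording are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Vague: the model must guess at scope, timeframe, sources, and format.
vague = ask("Tell me about the protests.")

# Engineered: scope, evidence, and output format are pinned down.
specific = ask(
    "Summarize reported protest activity in the last 48 hours in three "
    "bullet points. Cite a source for each item; if none is available, "
    "say so explicitly rather than guessing."
)
print(vague, specific, sep="\n---\n")
```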
4. Generative Dilemma
The dependency on data and properly engineered prompts, coupled with AI’s inherent lack of understanding, creates the potential for generative models to manufacture information that deviates from the facts when trying to satisfy a prompt (often called “hallucination”). AI can do this by manipulating the factual data in the model to create the outcome requested in the prompt.
5. Unintended Propagation of False Information
Given this dependency on the underlying dataset, AI models that do not have a highly curated “fact base” have a higher probability of generating and disseminating false information.
6. Intentional Propagation of False Information
The generative capabilities of ChatGPT, Bard, and similar models rapidly accelerate the ability of malicious actors to manipulate, generate, and proliferate disinformation, digital deepfakes, and false narratives, leading to a host of trust and security issues.
Embracing the Future: Preparing for AI-Driven Transformation
As we stand on the brink of a new era of AI-driven disruption, businesses must take proactive steps to prepare for this transformation. This involves rethinking all aspects of business, from the skills we hire for, to the tools we use, to the processes we automate.
Areas that leaders must focus on:
1. Education and Awareness
Develop a comprehensive understanding of AI technologies and their potential impact on your industry, and ensure that all employees understand the fundamentals of AI and its implications for their roles. It’s an opportunity to refocus on higher-order tasks and human ambitions and to leverage the machines to get us there.
2. Hiring for the Right Skills
Ensure your workforce is equipped with the skills necessary to work alongside AI systems and leverage their capabilities. This is not about training data scientists and AI engineers, but about creating an AI-literate workforce. The world has gone through many technical revolutions. This one is just moving at a more disruptive pace!
3. Rethinking Business Processes
Identify areas in your business that can benefit from AI integration and develop strategies for implementation. Early adopters have a sustainable competitive advantage. Those entrusted with seeing, sensing, and protecting people and assets globally will increasingly struggle to do their jobs effectively without leveraging AI.
4. Humans In-the-Loop
Don’t ignore the increasingly important role of humans! Humans remain the largest value-add to the development of AI tools, using methodologies such as reinforcement learning from human feedback (RLHF) to dramatically improve their outputs. Humans are also the governance layer, the contextualization and verification, the interpreters, and the ultimate decision makers. AI accelerates decision making, but shouldn’t make the decisions! Not yet, at least.
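One way to operationalize this is a simple review gate, sketched below in Python. This is a hypothetical illustration, not Seerist’s implementation; the threshold and field names are assumed policy choices. The model accelerates triage, but low-confidence or unsourced outputs are always routed to an analyst, and even auto-approved items stay auditable.

```python
# A toy human-in-the-loop gate. Threshold and field names are assumed
# policy choices for illustration, not a real product's schema.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    summary: str
    confidence: float          # model's self-reported confidence, 0.0-1.0
    sources: list[str] = field(default_factory=list)

REVIEW_THRESHOLD = 0.85  # assumed value; tune per use case and risk

def route(a: Assessment) -> str:
    # Low-confidence or unsourced outputs always go to a human analyst.
    if a.confidence < REVIEW_THRESHOLD or not a.sources:
        return "queue_for_human_review"
    # Even auto-approved items are logged so decisions remain auditable.
    return "auto_approve_with_audit_log"

print(route(Assessment("Unrest likely near the port", 0.62, ["rpt-41"])))
# -> queue_for_human_review: the analyst, not the model, owns the call
```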
5. Due Diligence
Don’t forget to do your homework on the AI tools you’re considering. Just as we protect against information security and privacy risks, we should guard against AI tools where the provenance of the fact base is unknown or unsourced. Ensure that bias has been mitigated, especially where ethical concerns could surface, and that the application of the tools does not create other regulatory or legal risks (think privacy, copyright, intellectual property, etc.).
6. Defensibility and Sourcing Data to Origin
AI applications employed in critical decision making, particularly where life-safety issues are potentially at stake, need to be defensible and, ideally, source information back to its origin. While AI is speeding up decision making, it is important that humans can quickly ascertain the credibility and reliability of outputs. At the end of the day, it’s humans who own the outcomes.
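As a sketch of what “sourcing to origin” can look like in practice (the structures and IDs here are invented, not a real product’s schema), every generated claim below carries a pointer back to the document it came from, and anything unsourced is refused.

```python
# A toy "defensibility" check: every claim must be traceable to an
# origin source before it is emitted. Names and IDs are invented.
from dataclasses import dataclass

@dataclass
class SourcedClaim:
    text: str
    source_id: str  # e.g., a report ID or URL in the curated fact base

def defensible_answer(claims: list[SourcedClaim]) -> str:
    # Refuse to emit anything that cannot be traced back to a source.
    if any(not c.source_id for c in claims):
        raise ValueError("every claim must cite an origin source")
    return "\n".join(f"{c.text} [source: {c.source_id}]" for c in claims)

print(defensible_answer([
    SourcedClaim("Road closures reported near the port.", "rpt-2231"),
    SourcedClaim("Local authorities advise avoiding the area.", "rpt-2234"),
]))
```

The design choice is simple but powerful: if a claim cannot be traced, it never reaches the decision maker, which keeps outputs defensible when the stakes are high.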
The rapid emergence of AI represents a double-edged sword, teeming with immense opportunities for growth and innovation on one side, while carrying significant challenges and risks on the other. For early adopters, senior decision-makers, and leaders, understanding AI’s transformative power and its associated risks is not just critical for driving innovation and maintaining a competitive edge; it is also a core responsibility: ensuring the responsible and ethical use of AI without compromising values and safety.
Subscribe for more Seerist insights!
Act today. Prevail tomorrow. See what Seerist sees so that you can save lives, assets, resources and time.