Why is Cybersecurity so Confusing for Business Stakeholders?

Navigating the cybersecurity landscape is a complex challenge for many business stakeholders. As someone with years of experience in technology and a focus on cybersecurity over the last three, I see three primary reasons that make this field so confusing and challenging for business decision-makers to grasp. Here’s my take on why cybersecurity is a unique challenge—and why it doesn’t have to be quite so complicated.

1. Constant Threats from Bad Actors

One of the biggest challenges in cybersecurity is the persistent threat of bad actors—whether they’re financially motivated criminals or nation-state hackers. These attackers are always trying to infiltrate or exploit vulnerabilities for their gain. This can manifest as ransomware, where hackers hold your customer data hostage, or as state-sponsored attacks aiming to steal intellectual property or gain a technological advantage.

Because these threats are always evolving, organizations are in a constant state of vigilance. You never know when the next “zero-day” vulnerability will appear, a flaw that no one knew about until it was exploited. This creates a treadmill effect where businesses are always racing to patch the latest threat or fix an unexpected vulnerability. It’s like trying to maintain perfect health—an ongoing effort with no finish line in sight.

2. Innovation and Fragmentation in the Cybersecurity Market

The rapid evolution of threats has led to a continuous wave of new security technologies and startups, each promising a better solution than the last. Every year, new products and services enter the market, claiming to be the ultimate solution. But just like maintaining a car or house, staying secure requires ongoing effort and maintenance. Cybersecurity isn’t something you can address once and forget about; it demands constant attention and investment.

The market’s constant innovation can be overwhelming, especially for business leaders who want a “one-stop solution.” Big players like Palo Alto Networks, CrowdStrike, and others have started combining multiple security features into single platforms, promising a simpler, all-in-one solution. This can reduce the risk of security gaps between different tools but also requires stakeholders to trust one provider for everything.

3. The Human Factor

Even with the best technology in place, humans remain a weak link in cybersecurity. Studies show that a significant number of security breaches—upwards of 60%—stem from human error or manipulation, like social engineering and phishing attacks. These threats often come via email, with employees inadvertently clicking on malicious links or responding to fake invoices.

New technologies, such as AI-driven phishing and deepfake voice scams, make it even harder for people to distinguish between legitimate and malicious interactions. No matter how sophisticated a company’s defenses are, a single lapse in judgment by an employee can undermine everything. This human element is why cybersecurity remains a challenging field; as long as people make mistakes, there will always be vulnerabilities.

The Bottom Line

Cybersecurity is confusing because of three main issues: ever-evolving threats, a fragmented market of new solutions, and the inherent unpredictability of human behavior. For business stakeholders, these complexities make it challenging to feel truly secure, as cybersecurity requires both a technological and human-centric approach. Nonetheless, while it’s a complex field, understanding these factors can help decision-makers navigate it more confidently.

Stay tuned for more insights on simplifying and strengthening your cybersecurity approach.

Seven observations from Pillar’s “The State of Attacks on GenAI”

A security report on GenAI from Pillar, The State of Attacks on Generative AI, sheds light on some critical security challenges and trends emerging in generative AI (GenAI) applications. Here are the key insights I took away from the report.

1. High Success Rate of Jailbreaks

One of the most alarming statistics is that 20% of jailbreak attempts on generative AI systems are successful. This high success rate indicates a significant vulnerability that needs immediate attention. What’s even more concerning is that these attacks require minimal interaction—just a handful of attempts are enough for adversaries to execute a successful attack.

2. Top Three Jailbreak Techniques

The report identifies three primary techniques that attackers are using to bypass the security of large language models (LLMs):

  1. Ignore Previous Instructions: Attackers instruct the AI to disregard its system instructions and safety guardrails.
  2. Strong Arm Attacks involve using authoritative language or commands, such as “admin override,” to trick the system into bypassing its safety flags.
  3. Base64 Encoding: Attackers use machine-readable, encoded language to evade detection, making it difficult for the system to recognize the attack.

3. Vulnerabilities Across All Interactions

Attacks are happening at every layer of the generative AI pipeline, from the user prompts to the AI model’s responses and tool outputs.This highlights the need for comprehensive security that covers all stages of AI interaction, as traditional hardening methods have their limits due to the non-deterministic nature of LLM inputs and outputs.

4. The Need for Layered Security

The report emphasizes that security solutions need to be layered between interactions with the AI model. A great example of this approach is Amazon Bedrock Guardrails:

  • A Bedrock guardrail screens it for inappropriate content before the user’s prompt reaches the AI model.
  • Once the AI generates a response, it passes through another layer of security before being delivered back to the user.
    • This approach ensures that potential risks are mitigated both before and after interacting with the AI.

5. Disparities Between Open-Source and Commercial Models

There is a clear gap in the resilience to attacks between open-source and commercial LLMs.

  • Commercial models generally have more built-in protections because they offer complete generative AI applications, including memory, new features, authentication tools, and more.
  • In contrast, open-source models (such as Meta’s Llama models) require the host to manage the orchestration and security of the LLM, placing more responsibility on the user.

6. There will be a Shared Responsibility for GenAI Security

I believe GenAI LLMs, app builders, and app users will all take place in the securing of GenAI. Organizations will not be able to outsource securing GenAI and will not be able to indemnify away the risks of GenAI applications in their businesses. Even with commercial models, leaders need to monitor every level of the stack. Security must be continuously maintained and monitored, especially as more generative AI applications are deployed in the future. 

7. Insights and Practical Examples

Pillar’s report provides six real-world examples of jailbreaks, giving readers a tangible understanding of the techniques used and their implications. The report is a valuable resource for anyone involved in AI security, offering a snapshot of the current state and actionable insights on how to prepare for emerging threats in 2025 and beyond.

Final Thoughts

Pillar’s report on The State of Attacks on Generative AI is a great read for anyone interested in securing GenAI in their business or is evaluating adopting GenAI applications. Pillar has relevant GenAI telemetry data, practical examples, and delivers helpful insights and a forward-looking perspective.

If you’re working with generative AI or planning to, I highly recommend downloading the report—it’s free and full of actionable insights to help you stay secure.

Securing GenerativeAI: Key Emerging Threat Vectors and Guardrails for Amazon Bedrock

Ensuring the security of generative AI systems is critical, given their complex nature and potential vulnerabilities. In this blog, I talk about three emerging security considerations and highlight an AWS security service for generative AI applications and LLMs on Amazon Bedrock..

Three emerging GenAI Security areas for CISOs to consider

1/ Model Output Anomalies: Generative AI models may generate output anomalies, including hallucinations and biases. Given the probabilistic approach of word generation, these models might produce confident but inaccurate outputs. Moreover, implicit or explicit biases in training data necessitate effective mitigation strategies. Regularly updating and refining training data, along with implementing robust evaluation metrics, can help minimize these anomalies and improve model reliability.

2/ Data Protection: Protecting data is paramount to avoid leaks to third parties, safeguard intellectual property, and ensure legal compliance. Robust governance and legal frameworks are crucial, as data becomes a key differentiator in maintaining a competitive advantage. Encryption of data at rest and in transit, access controls, and continuous monitoring are essential practices. Additionally, implementing differential privacy techniques can help protect individual data points while still allowing useful insights to be extracted.

3/ Securing Generative AI Applications: It’s vital to defend AI applications against prompt injection attacks, where malicious inputs can bypass model constraints. For instance, attackers might evade instructions designed to block harmful activities. Implementing stringent security measures is essential to mitigate such threats. Regular security audits, penetration testing, and employing adversarial testing techniques can further strengthen defenses against such attacks.

Amazon BedRock

Amazon’s generative AI platform, BedRock, operates on an API-driven, token-based model for input and output. Supporting a range of large language models (LLMs), including Mistral, Anthropic’s Claude, and Meta’s LLaMA (3.1 and 4.0.5b), each model provider aims to ensure user security. BedRock’s architecture is designed to offer seamless integration with various AWS security services, ensuring a comprehensive security posture for generative AI deployments.

BedRock GuardRails

Amazon BedRock GuardRails enables customers to add a protective layer between the user’s prompt and the LLM, and between the LLM and the user’s response. Key features include:

  • Content Filters: Block harmful content in input prompts or model responses. These filters are continuously updated to recognize and block new and evolving threats.
  • Deny Topics: Prevent processing of specific topics. This feature ensures compliance with legal and ethical standards by preventing the AI from engaging with forbidden content.
  • Word Filters: Block undesirable phrases or profanity. This maintains the integrity and professionalism of the AI outputs.
  • Sensitive Information Filters: Block or mask sensitive data like Personally Identifiable Information (PII). By incorporating advanced pattern recognition, these filters can detect and redact sensitive information in real-time.
  • Contextual Grounding: Detect and filter hallucinations and harmful actors. By leveraging context-aware algorithms, BedRock can discern when outputs deviate from expected behavior, enhancing the overall safety and reliability of the system.

What is the Enterprise IT Security Stack? – a quick guide for business stakeholders

Background – I’ve been working primarily in the cloud (IaaS and PaaS) for the past ten years, but for the last two, I’ve worked with security companies. As I ramped up and tried to learn the security market and key segments, I struggled to find a single point of view that reconciled the three core components of security (IMO): People, Process, and Technology. The NIST Cyber Security Framework 2.0 does a great job calling out the Process and People components. Still, in a rapidly evolving market, I couldn’t find what I call the IT Security Stack for companies that need to protect their data, employees, and customers from threats. We have the Cloud Stack (IaaS/PaaS/SaaS), the Web stack (LAMP), and the network stack (OSI), but the security stack eluded me.

This blog attempts to summarize and share my analysis and representation of the IT Security stack for most companies. The goal is to cover 80% of a customer’s security needs. I’m sure there are emerging technologies, threat surfaces, and companies I missed, but my intention is not an exhaustive list for the CyberSecurity professional, there are other infogrpahics that do that well. I’m focused on the business stakeholder who interacts with Security teams or is responsible for assessing their security risks and evaluating the investment in new technologies. If I missed something glaring, or if you have a relevant insight to share, I’d encourage you to comment below

The eight Security Categories in the Enterprise IT Security Stack.

Some categories are made up of multiple security technologies. I tried to reference Gartner’s IT Glossary, MQs, and market reports for industry-accepted definitions. In the case of quickly evolving spaces such as CNAPP, I also cross-referenced analyst definitions with leading vendor websites and blogs. Definitions alone aren’t always helpful, so I put together the same chart with some questions stakeholders can ask themselves to determine whether they need such technology.

Questions to identify your need by Security Category

I also found that while the “IT Security Stack” and its category definitions were not well known among industry peers, most IT professionals were aware of companies in each space. Sometimes, you don’t know what you don’t know. The list below is representative but not exhaustive.

Representative security vendors by category

The eight Category definitions from industry analysts and leading vendors

  1. Email Security – This Forester Blog and The Enterprise Email Security Landscape, Q1 2023 report describes the email security categories excellently.
  2. EDR/MDR – The Endpoint Detection and Response Solutions (EDR) market is defined as solutions that record and store endpoint-system-level behaviors. A simplistic definition of managed detection and response (MDR) is that MDR is the managed version of EDR.
  3. IAM – Identity and Access Management (IAM) is a security and business discipline that includes multiple technologies and business processes to help the right people or machines to access the right assets at the right time for the right reasons, while keeping unauthorized access and fraud at bay.
  4. Web application and API protection (WAAP)– Typically delivered as a service, cloud WAAP is offered as a series of security modules that provide protection from a broad range of runtime attacks. It offers protection from the Top 10 web application security risks defined by the Open Web Application Security Project (OWASP) and automated threats, provides API security, and can detect and protect against multiple sophisticated Layer 7 attacks targeted at web applications. Cloud WAAP’s core features include web application firewall (WAF), bot management, distributed denial of service (DDoS) mitigation and API protection.
  5. CNAPP – Cloud-native application protection platforms (CNAPPs) are a unified and tightly integrated set of security and compliance capabilities designed to secure and protect cloud-native applications across development and production. CNAPPs consolidate a large number of previously siloed capabilities, including container scanning, cloud security posture management, infrastructure as code scanning, cloud infrastructure entitlement management, runtime cloud workload protection and runtime vulnerability/configuration scanning.
  6. SASE – Secure access service edge (SASE) delivers converged network and security as a service capabilities, including SD-WAN, SWG, CASB, NGFW and zero trust network access (ZTNA). SASE supports branch offices, remote workers, and on-premises secure access use cases. SASE is primarily delivered as a service and enables zero trust access based on the identity of the device or entity, combined with real-time context and security and compliance policies.
  7. SecOps (SIEM), (SOAR), and (XDR) – Crowdstrike has a great blog on XDR vs. SIEM vs. SOAR that I’d recommend you check out.
  8. Data Protection – Per the CISA, Cybersecurity is the art of protecting networks, devices, and data from unauthorized access or criminal use and the practice of ensuring confidentiality, integrity, and availability of information. Data protection tools do this with a focus on data and can include solutions such as backup, archive, sister recovery, and encryption, to name a few.

What are the differences between OpenAI’s ChatGPT, InstructGPT, fine-tuned models, and Embedding models? 

Are you like me and recently found out that OpenAI has multiple ways to consume their breakthrough GPT models? If so, let’s break down the differences and primary use cases for each of these models:

Image generated by Midjourney for a “Collage of AI Models”

ChatGPT:

  • ChatGPT is designed specifically for conversational AI applications, where the model interacts with users through text-based conversations.
  • It is trained using a combination of supervised fine-tuning and Reinforcement Learning from Human Feedback (RLHF).
  • ChatGPT is useful for building chatbots, virtual assistants, or any system that involves interactive dialogue with users. It excels at generating coherent and contextually relevant responses.

InstructGPT:

  • InstructGPT is geared towards assisting users with detailed instructions and tasks.
  • It is trained using a combination of supervised fine-tuning and demonstrations, where human AI trainers provide step-by-step instructions to guide the model.
  • InstructGPT is well-suited for generating helpful responses when given specific instructions or when guiding users through a process. It can be used for writing code, answering questions, creating tutorials, and more.

Fine-tuning models:

  • Fine-tuning involves taking a pre-trained language model, such as GPT, and further training it on a specific task or dataset.
  • Fine-tuning allows for customization of the model to perform well on specific tasks, making it more focused and specialized.
  • It is useful when you have a specific dataset and task at hand, and you want the model to provide accurate and relevant responses tailored to that task. Fine-tuning can be applied to both ChatGPT and InstructGPT.

Embedding models vs. Language models:

  • Embedding models focus on generating fixed-length representations (embeddings) of input text. These embeddings capture semantic and contextual information about the text, which can be useful for various downstream tasks.
  • Language models, like GPT, generate coherent and contextually appropriate text by predicting the next word given the previous context. They have a generative nature and can produce human-like responses.
  • Embedding models are suitable for tasks like sentiment analysis, document classification, and information retrieval, where the fixed-length representations of text are used as input features.
  • Language models, on the other hand, are better suited for tasks like text generation, dialogue systems, and content creation, where the model needs to generate text based on context.

In summary, ChatGPT is ideal for conversational AI applications, InstructGPT is tailored for assisting with detailed instructions and tasks, fine-tuning models allow for customization to specific tasks, and embedding models provide fixed-length representations of text for downstream tasks.

Check out all the offerings listed above on OpenAI’s pricing page.

Four business considerations for anyone in B2B thinking about GenAI adoption

This article aims to give business stakeholders an understanding of the major components of GenAI so they can effectively navigate the GenAI noise and have productive conversations internally and with trusted partners. 

The recent advancements in generative AI are driving a race to capitalize and monetize GenAI by businesses. While there is no lack of content on GenAI, I’ve found that much of the content is focused on consumer productivity hacks, deeply technical research papers on Avitx, or code frameworks and GitHub repositories. My focus is on how business stakeholders should approach embedding GenAI in their companies and products though the lens of revenue growth, costs, risks, and sustainable competitive differentiators.

Section One: Generative AI and Foundation Models

Generative AI is based on what the industry refers to as foundation models – large-scale machine learning models trained on massive datasets, typically text or images. These models learn patterns, structures, and nuances from the data they’re trained on, enabling them to generate content, answer questions, translate languages, and more. Some of the most popular Generative AI use cases now include:

  • Large language Models (LLMs) such as ChatGPT
  • Image Generators(Text-to-image) such as Midjourney or Stable Diffusion
  • Code generation tools (Uses LLMs fine-tuned on code) such as Amazon Code Wispherer or GitHub copilot
  • Audio generation tools such as VALL-E

Section Two: Deployment and Consumption of Generative AI

Deployment and consumption of Gen AI varies greatly. I’ve highlighted the primary areas of today’s GenAI landscape that business stakeholders should focus on figuring out for their company. I’ve highlighted the text and corresponding parts of the tech stack diagram below in green or orange. For most business stakeholders, you should focus on which of the three models benefits you the most.

  1. Use (consume) an off-the-shelf software solution that uses Gen AI to reduce costs. Not many B2B firms have launched GenAI features outside of SFDC’s Einstein GPT.
  2. Consume an existing GenAI-aaS such as ChatGPT and embed (deploy) the APIs functionality in your company’s products, services, or internal applications to drive revenue or lower costs.
  3. Fine-tune an existing open-source foundation model with proprietary data, and deploy it on a cloud or internal infrastructure. Embed the model outputs in your products, services, or internal applications as a competitive differentiator.
Source: Who Owns the Generative AI Platform? (https://a16z.com/)

Section Three: Pre-trained vs. From Scratch Models vs. Fine-tuned

The decision between using a pre-trained service such as ChatGPT, fine-tuning an open large language model (LLM) with your data, or training and deploying your LLM from scratch hinges on several factors – time, cost, skillset, and specificity of the task.

Pre-trained services offer a cost-effective and timely solution, requiring minimal expertise and effort to integrate into your existing processes. However, they might not always provide the level of customization needed for niche applications.

Training and deploying your own LLM from scratch gives the highest degree of customization. Still, it requires significant resources – a dedicated team of AI experts, lots of data, substantial computational resources, and considerable time investment.

Fine-tuning an open-source LLM from providers such as Hugging Face and Meta AI offers a middle ground. You get the benefits of a pre-trained model plus customization for specific use cases. However, it requires expertise in machine learning, access to relevant data for fine-tuning, and infrastructure to host your model endpoints.

Section Four: Open vs. Closed Models

When it comes to open versus closed foundation models, the key differences revolve around transparency, control, and cost. Open-source models generally offer more transparency and flexibility – you can examine, modify, and fine-tune the model as you please. However, they may require a more sophisticated skill set to utilize effectively.

On the other hand, closed models are typically proprietary, meaning the inner workings are not fully disclosed. They often come with customer support and might be better suited for business leaders who prefer an off-the-shelf solution. However, they can be more costly and offer less flexibility than their open-source counterparts.

Conclusion

Understanding the tech stack and associated landscape of generative AI is crucial for business leaders to have informed discussions. In general, we’re seeing less of a focus on increasing the number of parameters and more on fine-tuning models with proprietary data. I believe data will be the biggest differentiator as more websites change their terms of use not to allow web scraping for inclusion in the training of 3rd party models.

We didn’t even get into the business considerations of you are creating a sustainable competitive advantage with Gen AI, the cost implications of GenAI on your margins, and product-customer fit. Still, I will address those in a future blog post. There are more questions than answers, but it’s clear GenAI is more than hype, and everyone should be prepared for the long game.

The difference between SASE and WAAP from a security neophyte

I’ve been working for public cloud providers of IaaS and SaaS for over a decade, but I started focusing on security a little over a year ago due to some changes in my role. As with any new challenge, there is often a knowledge gap, followed by a steep learning curve and commensurate learning. If you are reading this, I’m sure you can relate to the experience of starting with a seemingly innocuous security question only to end up hours later feeling less informed than when you started because you now know how much you don’t know. In some attempt to compress years of knowledge into months, weeks, or even days, I would like to share one of these research experiences with you. 

As I stumbled upon articles and blogs on security products such as ZTNA, WAFs, Secure Web Gateways, NGFWs, BOT Protection, DDoS, and SD-WAN, I found that most of these technologies fit into one of two buckets, Secure access service edge (SASE) or Web Application and API Protection (WAAP) as defined by Gartner. After trying to unwind the similarities, differences, and potential overlap of these two segments to be more effective in my daily role, I drafted this summary of my learnings. So without further ado, here are the primary differences between SASE and WAAP from a security neophyte. 

TLDR version: WAAP focuses on protecting a public-facing application from security threats, and SASE focuses on enabling a company’s workforce to access internal applications and data from any location securely. (Corporate office, branch office, and remote employee) 

Let’s start with a standard definition of WAAP and SASE from Gartner. 

*What is Web application and API protection WAAP?

Web application and API protection platforms (WAAPs) mitigate a broad range of runtime attacks, notably the Open Web Application Security Project (OWASP-F5 link) top 10 for web application threats, automated threats, and specialized attacks on APIs. WAAP comprises a suite of tools that includes next-generation WAF, API security, advanced bot protection, and DDoS protection. Definition – Gartner 

WAAP Diagram with consumers and workloads

What kind of deployment options exist for WAAP?

The web application and API protection solutions can be deployed as WAAP services (SaaS) or as WAAP (Virtual or Physical) appliances. Cloud WAAPs are cloud-delivered services that primarily protect public-facing web applications and APIs. Definition – Gartner 

What is Secure access service edge (SASE)?

Per Gartner, Secure access service edge (SASE) delivers converged network and security as a service capability, including SD-WAN, Security Web Gateway (SWG), Cloud access security brokers (CASB), next-generation firewall (NGFW) and zero trust network access (ZTNA). SASE is primarily delivered as a service and enables zero trust access based on the identity of the device or entity, combined with real-time context and security and compliance policies. SASE supports branch offices, remote work, and on-prem secure access use cases. Gartner forecasts that the SASE market will grow at a CAGR of 32%, reaching almost $15 billion by 2025. 

SAS Diagram with employee location and example IT resources.
SAS Diagram with employee location and example IT resources.

How does WAAP differ from SASE?

SASE’s primary goal is to enable and secure anywhere, anytime access, network, and network security for a company’s workforce to protect internal networks and data. WAAP solutions protect a company’s public-facing applications from external threats. Today I’ve found that the current SASE and WAAP offerings contain these core technologies.

SASE Products

  • SD-WAN
  • Security Web Gateway (SWG)
  • Cloud access security brokers (CASB)
  • next-generation firewall (NGFW) 
  • zero trust network access (ZTNA)

WAAP Products

  • Web application firewall
  • API protection 
  • Bot Detection
  • DDoS

Some might argue that SASE encapsulates WAAP, but I don’t find that very helpful and prefer to cite the distinct technologies to avoid peanut-buttering buzzwords enabling you, the individual, to discern relevance. 

There are areas of overlap in functionality and integration, but the above mental models helped me differentiate each segment’s customer use cases and target personas.  

Full disclosure, this explanation is an attempt to help others save hours on a question I struggled to find a succinct answer for. There are plenty of resources to dive deep into the technologies that make up these security domains, as well as a storied history of the evolution of the networking and security market. 

Elevate your email marketing with Product Recommendations using Amazon Personalize and Amazon PinPoint

Most organizations are already doing some form of omnichannel marketing using disparate 3rd applications and on-prem data stores. Amazon employs a combination of homegrown tools they’ve developed over the years, and they’ve made some of those tools available via Amazon Web Services for anyone to use. Today I’m going to focus on using your historical customer marketing and purchase history to power a recommendation engine called Amazon Personalize that can auto-populate product recommendations in customized emails using Amazon Pinpoint. You can create dynamic audience segments in Pinpoint based on demographic data, behaviors, and custom attributes. If you already have a solution for managing your customer lists you can import an audience from another tool such as a Customer Data Platform (CDP) like Tealium, Segment, or mParticle. 

Continue reading “Elevate your email marketing with Product Recommendations using Amazon Personalize and Amazon PinPoint”

Is Amazon Monitron the Amazon Echo for Industrial IoT?

Cheeky headlines get clicks, but in all honesty, Amazon Monitron is a set of hardware devices (Sensors/Gateway), coupled with a cloud machine learning (ML) service. Hence my comparison of Monitron to the Amazon (Echo + Alexa).

Specifically, Amazon Monitron is an end-to-end system that detects abnormal behavior in industrial machinery. Today the Monitron solution is comprised of hardware sensors for your assets, Machine Learning for anomaly detection, and a mobile application to monitor your assets. The key differentiator in my mind is the combination of low price for Industrial Sensors ($115 per sensor), out of the box anomaly detection, and usability with a simple mobile app. 

No alt text provided for this image
Continue reading “Is Amazon Monitron the Amazon Echo for Industrial IoT?”