Cloud Security – Securing Cloud AI Data Pipelines
Recent attacks on Jupyter Notebook deployments show that threat actors are actively exploiting cloud-based data science platforms. As companies transition from on-premises data centers to private cloud, multi-cloud, or hybrid cloud environments, they need data security controls to mitigate the risks of unauthorized access and theft posed by the expanded attack surface. Cloud computing, and the AI data pipelines it supports, gives companies tremendous flexibility and opportunity, but the extensive data sets required for analytics significantly heighten the risk of data loss within the cloud environment.
Let’s look at the AI data pipeline and what organizations should be doing to protect it.
Gartner’s take on AI data security
Examining this issue for a 2021 report, Gartner clarified that an AI data pipeline is only as secure as its weakest link. Echoing “shift left” security in DevSecOps, Gartner recommends building security controls into AI data pipelines from the beginning of the development process. Entitlement, Gartner’s term for user access management, must be carefully implemented across all stages of the pipeline before platforms and analytics tools are even considered.
Gartner’s analysis divided AI data pipelines into three primary security concerns:
- Model security refers to the continuous analysis of data to identify any potential tampering or data poisoning that might harm data-driven systems.
- Infrastructure security policies focus on securing networks and data storage and ingestion points, which can typically be handled with mature technologies.
- Data security, the most prominent of the three security concerns, is protecting data at all points in the AI pipeline from malicious or unintended access.
The report also points out that regulatory compliance is an increasingly pressing data security issue that companies should consider when designing an AI pipeline. Now that regulations covering customer data privacy have passed in five U.S. states—and many more are in the works—best-in-class data protection is no longer a differentiator but a requirement for many organizations and the cloud applications they run.
Creating an AI data pipeline security stack
Gartner emphasizes two subsets of data security for cloud service providers: data theft and unintended disclosure (data breaches, malware) and compliance requirements (GDPR, PCI, HIPAA). The report recommends implementing metadata creation, differential privacy, data masking, tokenization, and fully homomorphic encryption to mitigate these risks and facilitate cybersecurity compliance.
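To make one of those recommendations more concrete, the snippet below is a minimal sketch of differential privacy using the classic Laplace mechanism. The data set, query, and epsilon value are hypothetical, and a production pipeline would rely on a vetted privacy library rather than hand-rolled noise.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private answer to a numeric query.

    The Laplace mechanism adds noise with scale sensitivity / epsilon:
    a smaller epsilon means stronger privacy and a noisier answer.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical query: how many customer records are in this batch?
customer_ages = [34, 51, 29, 42, 61, 38]
true_count = len(customer_ages)

# A counting query changes by at most 1 when a single record is added or
# removed, so its sensitivity is 1.
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private count: {private_count:.1f}")
```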
Finding the right security vendor is the place to start for companies looking to follow Gartner’s recommendations and develop an AI data pipeline security stack. The ideal vendor offers continuous defense against data theft through three mutually reinforcing methods:
- Masking: If a piece of data contains sensitive information—a Social Security number, for example—but will not undergo analytics, it is masked. Masking obfuscates the actual value of the data, but it comes at a price: that data can never be processed downstream. Instead, it is stored for a fixed period, typically for compliance purposes.
- Tokenization: Data that contains sensitive information and will be analyzed is tokenized. This protection method replaces a piece of data with other characters in the same format. For example, a tokenized Social Security number consists of nine random digits, so it looks like an actual Social Security number. Tokenization allows existing applications to analyze the data, even though it is not the real data. If a hacker accesses tokenized data, the actual values remain protected from exposure.
- Privacy-enhancing Computation: Sensitive data can be fully encrypted and safely processed through the analytics pipeline in the cloud without risk of exposure; the data is never decrypted, even during use. When an authorized user performs analysis, the output is identical to the result that would have been produced by applying the same analytics and transformations to the plaintext. (A brief code sketch of all three methods follows this list.)
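The sketch below illustrates all three methods under simplifying assumptions. The mask_ssn and tokenize_ssn helpers are hypothetical stand-ins (a real deployment would typically rely on format-preserving encryption and a secure token vault), and the privacy-enhancing computation step uses the open-source python-paillier (phe) library, which provides additive rather than fully homomorphic encryption, purely to show that analytics can run on data that is never decrypted.

```python
import secrets
from phe import paillier  # open-source additive homomorphic encryption

def mask_ssn(ssn: str) -> str:
    """Masking: obfuscate the value; the result is not usable for analytics."""
    return "***-**-" + ssn[-4:]

def tokenize_ssn(ssn: str, vault: dict) -> str:
    """Tokenization: replace the value with random digits in the same format,
    keeping the real value in a (hypothetical) secure token vault."""
    digits = "".join(secrets.choice("0123456789") for _ in range(9))
    token = f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"
    vault[token] = ssn
    return token

vault: dict = {}
print(mask_ssn("123-45-6789"))             # ***-**-6789
print(tokenize_ssn("123-45-6789", vault))  # e.g. 804-31-9957

# Privacy-enhancing computation: the cloud sums encrypted salaries without
# ever seeing the plaintext values.
public_key, private_key = paillier.generate_paillier_keypair()
salaries = [72000, 85000, 64000]
ciphertexts = [public_key.encrypt(s) for s in salaries]

encrypted_total = ciphertexts[0]
for ciphertext in ciphertexts[1:]:
    encrypted_total = encrypted_total + ciphertext

assert private_key.decrypt(encrypted_total) == sum(salaries)
```

The last example captures the property described above: the computation runs entirely on ciphertext, yet the decrypted result matches what the same analysis would have produced on the plaintext.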
While these methods may seem cumbersome and challenging to integrate into an existing security stack, companies can adopt them through a no-code or low-code deployment. When evaluating a data protection solution, especially for AI projects, companies should consider the following attributes:
- As defined by Gartner, continuous real-time data protection at the file level throughout the AI data pipeline.
- Smooth integration without disrupting existing applications.
- The ability to maintain compliance with industry and government regulations.
- Data protection without requiring data clones or schema changes, which minimizes data proliferation and the exposure that legacy encryption systems not designed for the cloud can create.
Data’s value is ever-increasing, and companies rely on AI projects to pinpoint areas where they can increase market share. Protecting data in a manner that does not interrupt or slow down AI data analysis is critical to success. Baffle’s Data Protection Services handles large AI data sets while maintaining processing speed, securing cloud environments that contain more than 100 billion records. For more about the challenges of de-identifying cloud data pipelines and how Baffle can help you secure your data analytics pipeline, watch this webinar: De-identifying Cloud Data Pipelines: Approaches, Best Practices, and Learnings.