
Lead Cybersecurity Engineer, Data Loss Prevention & AI Governance

McGraw Hill
$136,000 - $190,000
United States, New York, New York
Apr 07, 2026
Overview

Build the Future
At McGraw Hill, we are dedicated to delivering digital learning experiences that transform education for learners and educators. Our focus is on creating seamless, impactful products that truly benefit our users while supporting growth and collaboration across teams. We foster a culture that values innovation, teamwork, and a balance between career growth and personal well-being.

How can you make an impact?

The Cybersecurity Engineer - AI & DLP is responsible for designing and implementing data protection and governance controls across enterprise AI platforms, such as generative AI and AI-assisted development tools. This position centers on preventing data leaks, overseeing AI interactions with sensitive information, and applying security policies using DLP technologies, logging, and automated controls. The engineer will assess risks associated with AI platforms, set up inspection and monitoring systems, and create governance frameworks that ensure AI tool usage complies with organizational security, privacy, and compliance standards.

This is a remote position open to applicants authorized to work for any employer within the United States.

What You'll Do:

  • Define and implement AI security controls, such as prompt filtering, response inspection, redaction, and usage monitoring, to ensure enterprise AI tools operate within approved data protection and compliance boundaries.
  • Evaluate inputs and outputs of enterprise AI tools (e.g., ChatGPT, Claude, and internal LLM platforms) to identify risks related to sensitive data exposure, prompt injection, and intellectual property leakage.
  • Design and implement technical guardrails and monitoring controls, including prompt inspection, output filtering, and DLP policies, to ensure AI usage aligns with enterprise security and data governance standards.
  • Design, implement, and operate Data Loss Prevention (DLP) controls to prevent the exposure of sensitive data across enterprise AI platforms and generative AI tools.
  • Partner with engineering, AI/data science, and Digital Workspace teams to integrate security controls into AI platforms, including prompt monitoring, data classification, and access controls.
  • Evaluate emerging AI tools, models, and AI-assisted development platforms to identify cybersecurity risks and recommend appropriate security requirements and mitigations.
  • Implement logging, monitoring, and alerting capabilities to provide visibility into how enterprise data is accessed, processed, and shared through AI systems.
  • Develop and enforce policies and technical controls that prevent the use of sensitive data (e.g., PII, credentials, proprietary content) within AI prompts, training datasets, or integrations.
  • Design and implement a Data Loss Prevention (DLP) strategy throughout all MH infrastructure systems (MS Purview, Zscaler, cloud environments). Operationalize the alert and triage standard operating procedures to protect sensitive emails, uploads, and other avenues of data loss.
  • Support the design of secure architecture for enterprise AI platforms, including controls for data handling, model access, API usage, and third-party integrations.
  • Contribute to security awareness and guidance for developers and employees on safe and responsible use of generative AI tools.

Who You Are:

  • 15+ years of applicable experience.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Strong communication skills and comfort working directly with business stakeholders, vendors, and leadership.
  • Ability to present risks and recommendations to leadership.
  • Ability to translate complex identity concepts into business value.
  • Understanding of the Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and API integrations.
  • Strong knowledge of DLP technical controls, concepts, and end user computing behaviors.
  • Experience administering the Microsoft tool suite, particularly M365 Copilot, GitHub Copilot, and Microsoft Purview.

Preferred:

  • In-depth knowledge of agentic AI usage and guardrails from an end user and development perspective.
  • Knowledge of infrastructure and engineering of client/server compute systems.

Why work for us?

The work you do at McGraw Hill will be work that matters. We are collectively building experiences that will help shape the future of education. Play your part and experience a sense of fulfillment that will inspire you to even greater heights.

The pay range for this position is between $136,000 and $190,000 annually. However, base pay offered may vary depending on job-related knowledge, skills, experience, and location. An annual bonus plan may be provided as part of the compensation package, in addition to a full range of medical and/or other benefits, depending on the position offered. Click here to learn more about our benefit offerings.

McGraw Hill recruiters always use an "@mheducation.com" email address and/or our Applicant Tracking System, iCIMS. Any variation of this email domain should be considered suspicious. Additionally, McGraw Hill recruiters and authorized representatives will never request sensitive information via email.

50575

McGraw Hill uses an automated employment decision tool (AEDT) to assist in the screening process by recommending candidates with "like skills" based on resume and job data. To request an alternative screening process, please select "Opt-Out" when asked to "Consent to use of Automated Employment Decision Tools" during the application.
