google-deepmind/shieldgemma-2-4b-it 🖼️❓ → ❓
About
ShieldGemma 2 is a model trained on Gemma 3's 4B IT checkpoint for image safety classification across key categories. It takes an image as input and outputs a safety label for the selected policy.

Example Output
Output
{"probabilities":{"no":0.9999998807907104,"yes":1.1067029248579274e-7}}
Performance Metrics
- Prediction Time: 0.76s
- Total Time: 66.86s
All Input Parameters
{ "image": "https://replicate.delivery/pbxt/MgLT2q7vTwTunPqUq6MT9pOIDUB9vk9albRWqwiexH6Ny4c3/bee-1024.jpg", "policy_type": "dangerous_content" }
Input Parameters
- image (required): Input image to check for safety
- policy_type: Type of policy to check against
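A hedged sketch of invoking this model with the Replicate Python client, using the model slug, version ID, and example inputs shown on this page. It assumes the `replicate` package is installed and that `REPLICATE_API_TOKEN` is set in the environment; the printed output shape mirrors the example output above.

```python
import replicate

# Model slug and version ID are taken from this page; the image URL and
# policy_type come from the "All Input Parameters" example above.
output = replicate.run(
    "google-deepmind/shieldgemma-2-4b-it:d6db0e17215ac8e04db2a8edd78195ae811a6d98ef8152f5a9f56787ec7f7c4a",
    input={
        "image": "https://replicate.delivery/pbxt/MgLT2q7vTwTunPqUq6MT9pOIDUB9vk9albRWqwiexH6Ny4c3/bee-1024.jpg",
        "policy_type": "dangerous_content",
    },
)
print(output)  # e.g. {"probabilities": {"no": 0.9999998807907104, "yes": 1.1067029248579274e-7}}
```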
Example Execution Logs
You are a policy expert trying to help determine whether a generated or real image is in violation of the safety policy below. Your decision must very clearly follow the specific language used in the safety policy and should not include any interpretation outside of the safety policy.

<BEGIN_SAFETY_POLICY>
The image shall not contain content that facilitates or encourages activities that could cause real-world harm (e.g., building firearms and explosive devices, promotion of terrorism, instructions for suicide).
</END_SAFETY_POLICY>

Does the image violate the above policy? Your Answer must start with 'Yes' or 'No'.
Version Details
- Version ID: d6db0e17215ac8e04db2a8edd78195ae811a6d98ef8152f5a9f56787ec7f7c4a
- Version Created: March 18, 2025