AI trust and safety: How poetic jailbreaks expose LLM risks