Large Language Models (LLMs) are incredibly powerful, but also a bit unpredictable. They can respond to anything, from product FAQs to philosophical rants on r/AskPolitics. While this flexibility is impressive, it also introduces risks: from hallucinated facts to inappropriate replies, LLMs can quickly veer off course.