June 2025, Part 1, Top 5 Takeaways:
1. Meta Prompting: The panel discussed the importance of meta prompting, or prompting about prompting, in creating effective AI interactions. Ben Schorr highlighted how Microsoft used prompts that crafted the best possible prompts for specific goals. Greg Kochansky mentioned using custom GPTs to craft and refine prompts, enhancing the efficiency and effectiveness of the AI's output.
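The meta-prompting idea described above can be sketched in a few lines. This is a minimal illustration, not anything shown on the panel: `build_meta_prompt` is a hypothetical helper, and the resulting string would be sent to whatever chat model you use, whose reply then becomes your working prompt.

```python
# Minimal meta-prompting sketch: ask the model to write the prompt for you.
# `build_meta_prompt` is a hypothetical helper name, not from the episode.
def build_meta_prompt(goal: str) -> str:
    """Wrap a plain-language goal in a request for a better prompt."""
    return (
        "You are an expert prompt engineer. Write the best possible prompt "
        "for the following goal, then briefly explain your choices.\n\n"
        f"Goal: {goal}"
    )

# The string below is what you'd actually send to the model.
meta = build_meta_prompt("Summarize a 50-page deposition for a busy litigator")
```

The same pattern underlies custom GPTs built to refine prompts: the "product" of the first interaction is a prompt, not an answer.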
2. Iterative AI Output: Multiple participants emphasized that AI outputs should often be treated as part of an iterative process. Dennis Kennedy and Ben Schorr both mentioned using follow-up prompts to refine initial responses from AI. Greg Kochansky discussed using critic agents within agentic frameworks to ensure the quality of AI-generated outputs, indicating a design pattern that leads to improved results.
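The critic-agent design pattern mentioned above can be sketched as a simple generate/critique loop. This is an assumed, simplified structure (the panel did not share code): `generate` and `critique` are hypothetical stand-ins for real model calls, stubbed here so the loop is runnable.

```python
# Sketch of the generator/critic pattern: one agent drafts, another reviews,
# and the draft is revised until the critic approves or rounds run out.
# `generate` and `critique` are stand-ins for real LLM calls.

def generate(task: str, feedback: str = "") -> str:
    # In practice this would call a model; here it just echoes its inputs.
    return f"Draft for {task!r}" + (f" (revised per: {feedback})" if feedback else "")

def critique(draft: str) -> str:
    # Return "" to approve, or a note describing what to fix.
    return "" if "revised" in draft else "Tighten the summary."

def refine(task: str, max_rounds: int = 3) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:  # critic approved the draft
            break
        draft = generate(task, feedback)
    return draft
```

The human equivalent is the follow-up prompt: the loop just automates the "that's close, but fix X" step.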
3. Specialized Tools Over Generalists: Mathew Kerbis shared his preference for best-of-breed, specialized tools over generalist AI solutions. He pointed out that highly specialized tools often offer better performance and features for particular tasks, citing Perplexity for search as an example.
4. Transparency in AI Generation: Greg Kochansky mentioned the importance of transparency when using AI to generate content. As an example, he described a newsletter on AI and alternative dispute resolution that openly disclosed that its summaries and categorizations were written by AI. This transparency helps maintain trust and clarity for users.
5. Context-Dependent Prompting: Mathew Kerbis discussed the value of providing AI with rich contexts to generate better responses. He advised treating AI like a highly intelligent but entry-level assistant who needs detailed instructions to perform tasks effectively, which can lead to higher-quality outputs and more useful interactions.
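One way to operationalize the "intelligent but entry-level assistant" advice above is a prompt template that always supplies role, background, task, and output format. This is a hypothetical illustration of the idea, not a template from the panel.

```python
# Sketch of a context-rich prompt: give the model the background an
# entry-level assistant would need before starting the task.
# `contextual_prompt` and its section labels are illustrative assumptions.
def contextual_prompt(role: str, context: str, task: str, format_hint: str) -> str:
    return "\n\n".join([
        f"Role: {role}",
        f"Background: {context}",
        f"Task: {task}",
        f"Answer format: {format_hint}",
    ])

prompt = contextual_prompt(
    role="You are a paralegal at a small subscription-based law firm.",
    context="The client is a freelance designer asking about late-payment remedies.",
    task="Draft a plain-English email explaining the client's options.",
    format_hint="Three short paragraphs, no legal citations.",
)
```

Filling in each section forces the detailed instructions Kerbis recommends, rather than leaving the model to guess at the missing context.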