Since I got Perplexity Pro I've been messing around with seeing how it breaks down tasks and handles reasoning. Ironically, it messes up on this query:
Phi-3 clearly has more than 3 billion parameters (even the smallest variant, Phi-3-mini, has 3.8B), but it fails to reason through this check in its answer, as any human would. It's interesting that such a simple design choice, adding a checker agent that verifies whether outputs are logically coherent, isn't implemented in Perplexity.
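The checker-agent idea above can be sketched as a simple generate-then-verify loop. This is purely a hypothetical illustration, not Perplexity's actual architecture; `generate` and `check` are stand-ins for real LLM calls, and the toy consistency test is an assumption for the example.

```python
# Minimal sketch of a generate-then-verify loop (hypothetical; the stub
# functions stand in for real LLM calls).

def generate(query: str) -> str:
    # Stand-in for the answering model.
    return "Phi-3-mini has about 3.8 billion parameters."

def check(query: str, answer: str) -> bool:
    # Stand-in for a checker agent that tests logical coherence,
    # e.g. does the answer actually contain a parameter count?
    return "billion parameters" in answer  # toy coherence test

def answer_with_checker(query: str, max_retries: int = 2) -> str:
    # Generate an answer; if the checker rejects it, regenerate
    # up to max_retries times before giving up.
    answer = generate(query)
    for _ in range(max_retries):
        if check(query, answer):
            return answer
        answer = generate(query)
    return answer

print(answer_with_checker("How many parameters does Phi-3 have?"))
```

In a real system the checker would be a second model call prompted to spot contradictions between the query, the retrieved sources, and the draft answer, which is exactly the kind of "more than 3B?" sanity check a human would do instinctively.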