

a CoT means externally iterating an LLM
Not necessarily. Yes, a chain of thought can be provided externally, for example through user prompting or another source, which can even be another LLM. One of the key observations behind the models commonly referred to as reasoning models is: if an external LLM can be used to provide those “thoughts”, could an LLM provide those steps itself, without depending on external sources?
To do this, the model generates “thoughts” around the user’s prompt, essentially exploring the space around it and trying different options. These generated steps are added to the context window and are usually much larger than the prompt itself, which is why these models are sometimes referred to as long chain-of-thought models. Some frontends will show a summary of the long CoT, although this is normally not the raw context itself, but rather a version that is summarised and re-formatted.
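A minimal sketch of that idea: instead of a user (or a second LLM) supplying the intermediate thoughts, the model generates them itself and each one is appended to the context before the final answer is produced. `call_model` here is a hypothetical stand-in for a real LLM API call, stubbed out so the loop structure is visible; real reasoning models do this inside a single generation rather than via an external loop.

```python
def call_model(context: str, mode: str) -> str:
    """Hypothetical LLM call; stubbed here purely for illustration."""
    if mode == "think":
        # Pretend the model proposes a reasoning step about the latest line.
        return f"Thought: consider one approach to '{context.splitlines()[-1]}'"
    return "Final answer based on the accumulated thoughts."

def long_cot_answer(prompt: str, n_thoughts: int = 3) -> str:
    context = prompt
    # The model generates its own reasoning steps; each is appended to the
    # context window, so the context grows well beyond the prompt itself.
    for _ in range(n_thoughts):
        thought = call_model(context, mode="think")
        context += "\n" + thought
    # The final answer conditions on the prompt plus all generated thoughts.
    return call_model(context, mode="answer")

print(long_cot_answer("What is 17 * 24?"))
```

The same structure also shows why the raw context can be much larger than the prompt: every generated thought stays in the window the final answer conditions on.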
Historically, Firefox has had fewer security measures than Chrome. For example, full site isolation was only implemented recently in Firefox, many years after Chrome. MV3-only extensions in Chrome also reduce the attack surface from that perspective.
The counterpoint to this is that Firefox has far fewer users, so it is a less attractive target to exploit. Finding a vulnerability in Chrome is much more lucrative, since it has the potential to reach more targets.