Techno Frontiers (@technofrontiers)'s Twitter Profile
Techno Frontiers

@technofrontiers

The future is bright 🔥

ID: 1679635059354087425

Joined: 13-07-2023 23:33:10

1.1K Tweets

183 Followers

6 Following


"delve" is still used in the output of the new OpenAI o1 model

• For summarization use cases, the OpenAI o1 is more beneficial for longer inputs, as GPT-4o misses key information

• You can't really push the system, as full documents can't be uploaded "yet"

The new OpenAI o1 model is multimodal.

The two biggest reasons it hasn't been deployed yet are due to compute constraints and cost concerns.

The inference speed on the Grok models seems to be increasing.

Grok 2 mini is about double the speed it had when first launched.

Uploading images & files on the Grok models will be the next big unlock of their usefulness.


OpenAI o1 model rate limits have been increased:

o1-preview: 30 per week → 50 per week

o1-mini: 50 per week → 50 per day

x.com/OpenAI/status/…


When asking the o1-mini model to iterate on an answer, there appears to be some slippage, as it doesn't fully internalize this as part of its "thinking process."

Occasionally, it provides the iterations as the final answer, revealing its chain of thought, at least in relation to

The GPT-4o model has also been improved.

Since its launch in May, the Omni models have improved by 4.36% over the past 5 months.

x.com/reah_ai/status…

The UI for feedback when choosing between models in the testing phase has changed.

• The model on the left refused to search the internet, which could be the o1-preview model with internet tools disabled and the "thinking process" not visible in the output.

• The model on the left did

Custom instructions applied under ChatGPT settings don't seem to apply to the o1-preview & mini models.

Without them, you run into problems similar to those of the earlier ChatGPT 3.5 and ChatGPT 4 models. Prompting the system with "you are an expert in x domain" vastly
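The workaround above can be sketched as follows — a minimal, hypothetical example assuming the openai Python SDK and an o1-class model name (both assumptions, not confirmed details from the tweet). Since system-level custom instructions reportedly don't reach these models, the expert persona is folded directly into the user message:

```python
def build_expert_prompt(domain: str, question: str) -> str:
    """Fold the expert persona into the user message itself, since
    ChatGPT custom instructions / system prompts reportedly don't
    reach the o1 models."""
    return f"You are an expert in {domain}. {question}"


def ask(domain: str, question: str, model: str = "o1-mini") -> str:
    # Assumption: the openai SDK is installed and OPENAI_API_KEY is set.
    from openai import OpenAI  # imported lazily so the helper above works standalone

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        # A single user message carries both the persona and the question,
        # standing in for the custom instructions the o1 models appear to ignore.
        messages=[{"role": "user", "content": build_expert_prompt(domain, question)}],
    )
    return response.choices[0].message.content
```

The same `build_expert_prompt` helper can be reused verbatim in the ChatGPT UI by pasting its output as the first message of a conversation.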


Previously, typos in prompts would not affect the speed of the output.

With the o1 models, the system occasionally uses up tokens correcting typos in its thinking process. This step will probably be optimized away in updated models.