I wish I could search through my chats more easily.
But I don't want to install extensions; they're too much of a security risk.
I learned today that o1 is able to search through all my chats and can find and verify whether the findings are relevant to the current context. I found that very useful, as I have a lot of very long chats about a single project. ChatGPT lists the findings with the date and context and searches further back if asked (in my case, back to summer 2024).
Wait, how do you do that?
I just asked it to recall from our last chats whether we had discussed this particular bug before.
Hey, all chats are stored locally. Also, it is open source: https://github.com/s-sameer/chatgpt-chat-saver
Ok, but oh, I use Firefox ...
I’ve always wanted better search and chat organization.
But I’m at a place where I can’t determine if the ephemeral UX of chatting with AI (ChatGPT, Claude) isn’t actually better. Most chats I want to save these days are things like code snippets that I’m not ready to integrate yet.
You could join my native, cross-platform client waitlist[1] if you're looking to use your OpenAI API key. Work-in-progress but it's coming along pretty fast.
[1] https://www.get-vox.com/
That is a perfect use case for having an extension like this. It makes it easier to jump back into a previous conversation, which is primarily what I use it for as well.
I regularly use the request data button on ChatGPT, then parse the JSON, output HTML for each conversation, and use a DB to search for the file(s) when needed.
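For anyone curious, a minimal sketch of that workflow could look something like this (Python plus SQLite FTS; the conversations.json field names used here, title, mapping, message.author.role, and message.content.parts, are my assumption of what the export currently contains, so adjust if the schema differs):

    import json, sqlite3, html, pathlib

    EXPORT = pathlib.Path("conversations.json")   # from the ChatGPT data-export zip
    OUT = pathlib.Path("chats_html")
    OUT.mkdir(exist_ok=True)

    conversations = json.loads(EXPORT.read_text(encoding="utf-8"))

    db = sqlite3.connect("chats.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS chats USING fts5(title, body, file)")

    for i, conv in enumerate(conversations):
        title = conv.get("title") or f"conversation-{i}"
        parts = []
        # "mapping" is a tree of message nodes; this flattens it naively
        # without worrying about branch order.
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg or not (msg.get("content") or {}).get("parts"):
                continue
            role = msg["author"]["role"]
            text = "\n".join(p for p in msg["content"]["parts"] if isinstance(p, str))
            if text.strip():
                parts.append((role, text))
        body = "\n".join(f"{r}: {t}" for r, t in parts)
        fname = OUT / f"{i:04d}.html"
        fname.write_text(
            "<html><body><h1>{}</h1>{}</body></html>".format(
                html.escape(title),
                "".join(f"<p><b>{html.escape(r)}:</b> {html.escape(t)}</p>" for r, t in parts),
            ),
            encoding="utf-8",
        )
        db.execute("INSERT INTO chats VALUES (?, ?, ?)", (title, body, str(fname)))

    db.commit()
    # Later: SELECT file, title FROM chats WHERE chats MATCH 'retry backoff';

FTS5 availability depends on how your Python's SQLite was built, but it ships with most standard distributions.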
Personally I'll wait for OpenAI to add this feature directly. I'm sure they're working on it.
I don't want this solution delivered in the form of an extension (one practical reason is I use ChatGPT from mobile a lot of the time). I have 0 extensions installed in general.
Why do you subject yourself to web ads?
I built my own client (llmpad.com) originally to solve this problem, as well as to use other LLMs and features. A little surprised others haven't done this too? You can try llmpad soon; feel free to message me.
The fact that you even need something like this shows how far we are from truly useful language models. Ideally they would hold the context of all of your messages in mind; so far we've had to manage that context for them manually.
To be fair this is less a language model problem, and more in the application layer around them.
Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
I think long-term AI chat is just relatively new as a UI pattern, and so it takes time to build patterns around it.
Ex: in 2023 I told GPT to answer all questions like a pirate. I never told it to stop doing that, so if we're loading every historical chat in memory, should it still be answering as a pirate?
> Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
Nope, with an infinite context window the LLM would take forever to give you an answer, since the cost of attending to the context grows with its length. Therefore it would be useless.
We don't really have such a thing as a context window; it's an artifact of LLM architecture. We are building a ton of technology around it, but who's to say it's the right approach?
Maybe the best AIs will only use a very tiny LLM for actual language processing while delegating storage and compression of memories to something that's actually built for that.
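To make the search/retrieval idea from upthread concrete, here is a toy sketch (plain Python; the Memory store, the scoring, and the character budget are all made up for illustration) of pulling only relevant past snippets into a bounded context instead of the whole history:

    from dataclasses import dataclass

    @dataclass
    class Memory:
        chat_id: str
        text: str

    def score(query: str, memory: Memory) -> float:
        # Naive keyword overlap; a real system would use embeddings or BM25.
        q = set(query.lower().split())
        m = set(memory.text.lower().split())
        return len(q & m) / (len(q) or 1)

    def build_context(query: str, memories: list[Memory], budget_chars: int = 2000) -> str:
        # Rank stored snippets by relevance and pack the best ones into a
        # fixed-size context, instead of shoving the whole history in.
        ranked = sorted(memories, key=lambda m: score(query, m), reverse=True)
        picked, used = [], 0
        for m in ranked:
            if used + len(m.text) > budget_chars:
                break
            picked.append(f"[{m.chat_id}] {m.text}")
            used += len(m.text)
        return "\n".join(picked)

    # The bounded context is then prepended to the prompt sent to the model.

That's the same shape as a retrieval-augmented setup, just without an embedding model doing the ranking.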
Is there something like that for Firefox?
Is it just me or is ChatGPT down?
There are two technologies I use every day that demonstrate a company is capable of solving an incredibly hard problem, X, while completely dropping the ball on the presumably easier UX part, Y. ChatGPT is one of those. Driving in my Tesla is the other. I'm not sure how or why it happens, but I think about it daily.
Ineffective dogfooding. PMs might use it every day, but they only use a subset of the functionality. Some engineers may intentionally never use it when they get home because they're so sick of looking at it. Some engineers do crazy esoteric things with it, but that doesn't propagate because their heads are down within the org. Most people showcase exclusively happy paths to leadership, sorry, I meant management. Executives only use it for emails, demos, and, again, a limited subset of happy paths.
Just burnout, siloing, and a lack of creativity. We can't solve these problems in the industry because we are greedy short-term thinkers who believe we're long-term innovators. To say nothing of believing we are smarter and more entitled than we are.
perhaps Y is harder to solve than you are assuming
"Harder [for the organization in question] to solve" is definitely right
Not really an excuse though, since a product company's mandate is to create a product that doesn't leave its customers baffled about apparently missing functionality.
But each chat has a unique link that you can just bookmark, right?
Is it possible to bookmark a chat on mobile? Haven't found a way to do so on iOS.
Claude has a way to star important conversations. Don't think ChatGPT has that.
My only solution so far has been aggressively deleting conversations once I find an answer and know I don't need it for reference.
In the ChatGPT iOS app, I can long-press on a chat in the left sidebar, and one of the options is "share chat".
Ah. That one actually creates a public link, and it doesn't work if there are images, or under some other circumstances.
On desktop you can directly copy the URL for reference and open it later.
Of course. But I use it dozens of times a day across dozens of projects. Many of the concepts are linked together. Intelligently indexing, linking, and referencing them seems like a pretty obvious feature. I doubt I'm in the minority in expecting this.
It basically offers a much better user experience than manually bookmarking each link.
lol ikr, it's crazy this doesn't already exist