Explain Image functionality #147
Closed
leventmolla started this conversation in Ideas
Replies: 2 comments
-
Please see issue report #174 for an example. Support was just added and is being fixed by this pull request.
-
There seems to have been progress on this since the issue was created; see #175.
-
OpenAI has a new model, gpt-4-1106-vision-preview, which can explain a collection of images. I think it can be used through the general chat completions endpoint, but the documentation is not very clear about the message structure. There should be an initial text prompt describing the task, followed by messages that contain the images. I tried this and passed base64-encoded images, but got errors (for some reason the number of tokens requested is a very large number and the query fails). I then tried passing the URLs of the image files, which failed as well. So I am at a loss about how to use this functionality.
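For what it's worth, a minimal sketch of the message structure that the vision-capable chat completions endpoint expects: the text prompt and the images go into the *same* user message, as a list of content parts, rather than into separate follow-up messages. Base64 images must be wrapped in a `data:` URL (raw base64 is rejected). The model name, `max_tokens` value, and the `build_vision_messages` helper below are illustrative assumptions, not confirmed against issue #174/#175:

```python
import base64

def build_vision_messages(prompt, image_urls=(), image_paths=()):
    """Build a chat-completions message list mixing text and images.

    Remote images are passed as plain URLs; local files are read and
    embedded as base64 data URLs (the "data:image/...;base64," prefix
    is required -- raw base64 alone causes errors).
    """
    parts = [{"type": "text", "text": prompt}]
    for url in image_urls:
        parts.append({"type": "image_url", "image_url": {"url": url}})
    for path in image_paths:
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("ascii")
        parts.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    # All parts belong to one user message, not separate messages.
    return [{"role": "user", "content": parts}]

# Hypothetical usage (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4-vision-preview",  # or the dated preview snapshot
#     messages=build_vision_messages(
#         "Compare these charts.",
#         image_urls=["https://example.com/a.png"],
#     ),
#     max_tokens=300,  # the vision preview defaults to very few tokens
# )
# print(resp.choices[0].message.content)
```

Note that the huge token counts reported for base64 uploads are consistent with the raw base64 string being tokenized as text when the data-URL wrapper is missing.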