Problems integrating OpenAI GPT models

I’ve been playing around with the “openai” R package for various classification and completion tasks, and I’ve run into a couple of issues trying to integrate it into an Exploratory project.

  1. Too many parallel requests: my API access is limited to 20 parallel requests. When I create a calculated field that passes values from other columns into the script, it errors out immediately because that limit is exceeded. (I’ve sketched a possible workaround just below this list.)
  2. Same output for every row: by limiting the rows with filtering/sampling I can get past that error, but the output is then identical for every row.
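
To stay under the 20-request limit, one direction I’m considering (untested, and assuming Exploratory’s R environment has purrr available) is a hypothetical throttled_summarize wrapper that reuses the api_summarize function below but makes the calls one at a time, with a short pause between requests:

# Untested sketch: sequential calls with a pause so requests never run in parallel
throttled_summarize <- function(prompts) {
  purrr::map_chr(prompts, function(p) {
    Sys.sleep(0.5)     # brief pause between calls
    api_summarize(p)   # one API call per prompt
  })
}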

My script:
api_summarize <- function(sumprompt) {
  # Send one prompt and return only the text of the first completion choice
  output <- openai::create_completion(model = "text-ada-001", prompt = sumprompt,
                                      max_tokens = 100, openai_api_key = "{secret key}")
  output$choices$text[1]
}

Calculated field:
mutate(Value = api_summarize(`Prompt Text`))
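
If that sketch pans out, I’m guessing the calculated field would become something like this, so each row gets its own sequential API call instead of the whole column being passed to the API at once:

mutate(Value = throttled_summarize(`Prompt Text`))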

Has anyone had success doing this?