
OpenAI: What Can We Do With The API?

It’s been a little over a year since the OpenAI API was opened without a waiting list. What features does the OpenAI API provide access to, and how do you put them to work? At the beginning of 2022, we took a quick tour of the offering. Since then, its functional scope has evolved. The structure and endpoints of the API have changed, as has, to a lesser extent, the terminology. For example, to categorize the underlying models, the concept of engines has given way to that of families. There are three of them:

  • GPT-3 (natural language processing and production)
  • Codex (analysis and production of computer code)
  • Content Filter (determination of the sensitivity level of a text)

The GPT-3 family still includes four models, under the same names as initially. And on the same principle: as you progress in alphabetical order, they become more capable, but also more expensive. OpenAI advises experimenting with Davinci, then gradually working your way down to the right compromise, keeping these task-to-model mappings in mind:

  • Davinci: thematic summary, content creation, cause-and-effect detection
  • Curie: translation, sentiment analysis, summary and generic Q&A
  • Babbage: simple classification, semantic search
  • Ada: keyword detection, address correction …
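The mapping above can be captured as a simple lookup table. This is a sketch of ours, not an OpenAI artifact: the task labels are paraphrased from the list, and the fallback choice follows the article's advice to start from Davinci.

```python
# Task-to-model lookup based on the suggested mappings above.
TASK_TO_MODEL = {
    "thematic_summary": "davinci",
    "content_creation": "davinci",
    "cause_effect_detection": "davinci",
    "translation": "curie",
    "sentiment_analysis": "curie",
    "generic_qa": "curie",
    "simple_classification": "babbage",
    "semantic_search": "babbage",
    "keyword_detection": "ada",
    "address_correction": "ada",
}

def suggest_model(task: str, default: str = "davinci") -> str:
    """Return the suggested model for a task, falling back to Davinci."""
    return TASK_TO_MODEL.get(task, default)
```

Starting from the returned model, you would then test the next cheaper one until quality drops below your threshold.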

GPT-3 models are used with the /completions endpoint. It can now take on tasks previously assigned to other endpoints: /classifications, /search (semantic search), and /answers (Q&A). Since our first overview, two experimental options have been added to /completions. One lets Davinci insert text inside the original prompt. The other edits that same prompt; it relies on a specific model (text-davinci-edit-001).
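To give an idea of the shape of a /completions call, here is a minimal sketch that only builds the JSON body (nothing is sent). The field names follow the public API reference; the helper, prompt, and parameter values are ours.

```python
import json

def build_completion_request(prompt: str, model: str = "text-davinci-003",
                             max_tokens: int = 64, temperature: float = 0.7) -> dict:
    """Build the JSON body for a POST to /v1/completions (not sent here)."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_completion_request(
    "Summarize: the OpenAI API exposes several model families.")
print(json.dumps(body, indent=2))
```

In practice this dictionary would be posted to the endpoint with an `Authorization: Bearer <API key>` header.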

Older versions of Ada, Babbage, Curie, and Davinci remain available. Rather than using them as-is, it is preferable to fine-tune them. This is done via the /fine-tunes endpoint, possibly after uploading training data (/files). By default, it is Curie that gets fine-tuned. A command-line tool can help validate and reformat the dataset.
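Fine-tuning data takes the form of JSON Lines with prompt–completion pairs. As a sketch (the helper and sample pairs are ours), writing such a file before uploading it via /files could look like this:

```python
import json
import os
import tempfile

def write_finetune_file(pairs, path):
    """Write (prompt, completion) pairs as JSON Lines, the format fine-tuning expects."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, completion in pairs:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

pairs = [("Capital of France?", " Paris"), ("Capital of Italy?", " Rome")]
path = os.path.join(tempfile.gettempdir(), "train.jsonl")
write_finetune_file(pairs, path)

with open(path, encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
```

The command-line tool mentioned above performs the same kind of conversion, plus validation, from CSV, TSV, XLSX, or JSON sources.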

Content Filter still exists, with the same role as initially: estimating the “sensitivity” of the output produced by the GPT-3 and Codex models. However, the recommended endpoint has changed to /moderations. Two models are available, depending on whether you want the most recent version (text-moderation-latest) or the latest stable one (text-moderation-stable).
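A small sketch of how a moderation call could be assembled and its response read. The request fields and the `flagged`/`results` response shape follow the public API reference; the helpers and the sample response are ours.

```python
def build_moderation_request(text: str, stable: bool = False) -> dict:
    """JSON body for the moderation endpoint; pick the latest or the stable model."""
    model = "text-moderation-stable" if stable else "text-moderation-latest"
    return {"model": model, "input": text}

def is_flagged(response: dict) -> bool:
    """Read the 'flagged' boolean from a moderation response."""
    return any(r.get("flagged", False) for r in response.get("results", []))

# Illustrative response, shaped like the documented output.
sample_response = {"results": [{"flagged": False, "categories": {"hate": False}}]}
```

Typically you would run this check on model output before showing it to end users.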

Another endpoint, another function: /embeddings creates vector representations of character strings. These vectors can then be used to assess the proximity between strings, for example in search engines, recommendation systems, or anomaly detection tools. With /embeddings, you can use no fewer than 16 first-generation models… and one second-generation model (the default: text-embedding-ada-002).
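The standard way to assess proximity between embedding vectors is cosine similarity. A minimal sketch with toy 3-dimensional vectors (real text-embedding-ada-002 vectors have 1536 dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (higher = closer)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": v1 and v2 point in similar directions, v3 does not.
v1 = [0.1, 0.9, 0.0]
v2 = [0.2, 0.8, 0.1]
v3 = [0.9, 0.0, 0.1]
```

A search engine built on embeddings ranks documents by this score against the query's vector.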


DALL-E Has Arrived

The “big” addition of 2022 is the DALL-E API. In public beta since November, it offers three options: creating images (/images/generations), editing them (/images/edits), or producing variations (/images/variations). The first generates, from a prompt of at most 1000 characters, square images of 256, 512, or 1024 pixels per side. By default one at a time, but you can push it up to ten. Two output formats are possible: either Base64 or a URL that remains valid for one hour.
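The limits quoted above can be enforced client-side before any call goes out. A sketch (the helper is ours; the field names follow the public API reference):

```python
VALID_SIZES = ("256x256", "512x512", "1024x1024")

def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """JSON body for an image-generation call, enforcing the documented limits."""
    if len(prompt) > 1000:
        raise ValueError("prompt is limited to 1000 characters")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {VALID_SIZES}")
    return {"prompt": prompt, "n": n, "size": size, "response_format": "url"}
```

Switching `response_format` to `"b64_json"` would request Base64 output instead of a one-hour URL.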

The “edit” option involves uploading both an image and a mask: in fact a second image of the same dimensions, whose transparent areas mark the regions to be edited. Image and mask must be PNG files, square, and weigh less than 4 MB. The limit for the textual instruction is the same: 1000 characters. The third option uses the same settings, minus the mask.
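Those file constraints can also be checked before upload. A sketch of ours that validates metadata only (reading actual PNG dimensions would require an imaging library):

```python
MAX_BYTES = 4 * 1024 * 1024  # 4 MB ceiling for image and mask

def validate_edit_input(width: int, height: int, size_bytes: int, fmt: str) -> None:
    """Check the constraints quoted above for an edit image or its mask."""
    if fmt.lower() != "png":
        raise ValueError("image and mask must be PNG")
    if width != height:
        raise ValueError("image must be square")
    if size_bytes >= MAX_BYTES:
        raise ValueError("file must weigh less than 4 MB")
```

The same checks apply to both the base image and the mask, since they must share dimensions and format.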

Four Default Training Cycles

There are still two channels maintained by OpenAI to access the API: Python bindings and a Node.js library. The others stem from community initiatives, covering around ten platforms from C#/.NET to Unreal Engine. The billing unit is still the token, equivalent to a “piece of word” of approximately 4 characters. OpenAI offers an online tool to check the weight of a request.
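The four-characters-per-token rule of thumb is enough for a rough cost estimate before sending a request. A sketch of ours (real tokenization varies by text; the online tool mentioned above gives exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters per token rule of thumb."""
    return max(1, round(len(text) / 4))

# A 400-character prompt weighs roughly 100 tokens.
rough = estimate_tokens("x" * 400)
```

Multiplying the estimate by the per-token price of the chosen model gives an order-of-magnitude cost per call.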

The OpenAI API In Early 2022

Engines, models, endpoints… It is not so easy to navigate OpenAI’s commercial offering, which has grown in volume since its launch in mid-2020. It wasn’t that long ago (November 2021) that the API became accessible without a waiting list. The underlying models are categorized into three engines:

  • GPT-3 (natural language processing and production)
  • Codex (analysis and production of computer code)
  • Content Filter (determination of the sensitivity level of a text)

In the GPT-3 category are four models: Ada, Babbage, Curie, and Davinci. Their old versions – brought together under the Instruct banner – remain accessible. All accept text as input and produce text as output. As you progress in alphabetical order, the models become more capable: they need fewer instructions to do as much as those before them. But they also cost more to use, and can lead to longer processing times. In general, OpenAI advises experimenting with Davinci and gradually working your way down to the right compromise, keeping these task-to-model mappings in mind:

  • Davinci: thematic summary, content creation, cause-and-effect detection
  • Curie: translation, sentiment analysis, summary and generic Q&A
  • Babbage: simple classification, semantic search
  • Ada: keyword detection, address correction …

Codex Prefers Python

The Codex family counts two models, also 100% text, descended from the first GPT-3:

  • Davinci

Accepting up to 4096 tokens per request, it is ideal for translating natural language into code.

  • Cushman

With up to 2048 tokens per request, it is better suited to real-time applications.

What exactly are tokens? They are OpenAI’s basic inference unit: text, both input and output, is split such that roughly four characters equal one token. The token is also the billing unit. The third engine (Content Filter) currently consists of a single model.

It is provided with content that it classifies as safe, sensitive, or inappropriate. Several options make it possible to adjust its strictness, including the definition of a minimum certainty threshold. The content filter struggles with certain text styles (fiction, code, poetry, etc.) and certain formats (frequent line breaks, word repetitions, etc.). Furthermore, as with all the other models, its knowledge base stops in 2019. A continuous training mechanism is in the works at OpenAI.

College-Level AI?

To use the Content Filter, go through the reference endpoint: /completions. There are three others, intended respectively for classification, semantic search, and Q&A. Two official channels reach these HTTP endpoints: the Python and Node.js libraries. The community has developed others (C#/.NET, Crystal, Dart, Go, Java, PHP, Ruby, Unity, and Unreal Engine). Models are given instructions and, if possible, context, while optionally configuring certain parameters. For example, “temperature”: the closer it is to 0, the more deterministic the model; the closer it is to 1, the more risks it takes.
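Temperature is easiest to grasp on a toy next-token distribution. This sketch of ours applies the standard temperature-scaled softmax: low temperature sharpens the distribution toward the top candidate (more deterministic), high temperature flattens it (more risk-taking).

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: low T sharpens, high T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.1)  # near-deterministic
hot = softmax_with_temperature(logits, 1.0)   # more exploratory
```

Whether the API implements sampling exactly this way internally is not documented here; the sketch only illustrates the deterministic-versus-risky trade-off the parameter controls.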

By default, with /completions, the API is stochastic (it produces different results on each call). The idea is to talk to it as you would to a schoolchild. The result is numerous potential uses: classification, discourse production, transformation (summary, translation, reformulation of concepts, etc.), factual responses, and so on. The same goes for Codex, which can turn instructions into code as well as add comments, complete a line, or suggest a useful element (library, API call, etc.). OpenAI gives some advice, including:

  • Specify the language used and its version
  • Stylize comments according to the language

With Python, for example, Codex handles comments written with triple quotes better than those using the pound sign.

  • Prefer flow to batch to improve latency
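The first two pieces of advice can be folded into a small prompt builder. This is a sketch of ours, not OpenAI code: it names the language and version, and frames the instruction with the comment style that suits it (triple quotes for Python, as noted above).

```python
def build_codex_prompt(instruction: str, language: str = "Python 3") -> str:
    """Frame an instruction for Codex: state the language and version,
    and use that language's preferred comment style."""
    if language.lower().startswith("python"):
        # Triple-quote comments work better than pound signs with Python.
        return f'"""\n{language}\n{instruction}\n"""\n'
    # Fallback for C-style languages.
    return f"// {language}\n// {instruction}\n"

prompt = build_codex_prompt("Return the squares of the numbers 1 to 10.")
```

The resulting string would be sent as the `prompt` of a /completions call against a Codex model.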

A Taste Of AutoML

In beta, the /classifications endpoint resembles AutoML. You provide it with labeled examples, either on the fly (200 maximum) or through preloaded files (150 MB maximum per file, 1 GB in total). Without requiring ad hoc training, it returns the most relevant examples for a given query – after prior filtering of the examples by semantic scoring.
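A sketch of how a request with on-the-fly labeled examples could be assembled (the helper is ours; the example-pair layout and the 200-example ceiling come from the description above):

```python
def build_classification_request(query: str, examples, model: str = "curie") -> dict:
    """JSON body for a classification call with on-the-fly labeled examples.
    Each example is a (text, label) pair; at most 200 may be inlined."""
    if len(examples) > 200:
        raise ValueError("at most 200 on-the-fly examples")
    return {
        "model": model,
        "query": query,
        "examples": [[text, label] for text, label in examples],
    }

body = build_classification_request(
    "A great movie, I loved every minute",
    [("Loved it", "Positive"), ("Awful and boring", "Negative")],
)
```

For larger example sets, the file route (preloaded via /files) replaces the inline `examples` list.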

OpenAI: Models Also Train

Rather than providing them with examples on each request, OpenAI models can be trained with custom datasets. Billing again depends on the tokens used (number of tokens in the training files × number of training cycles). Here too, the format is JSON Lines, with prompt–completion pairs. OpenAI offers a command-line tool to help prepare data from other formats (CSV, TSV, XLSX, JSON).
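To illustrate the kind of conversion that preparation tool performs, here is a sketch of ours turning a two-column CSV into JSON Lines (the real CLI also validates and cleans the data):

```python
import csv
import io
import json

def csv_to_jsonl(csv_text: str) -> str:
    """Convert a CSV with 'prompt' and 'completion' columns into JSON Lines."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        lines.append(json.dumps(
            {"prompt": row["prompt"], "completion": row["completion"]}))
    return "\n".join(lines)

sample_csv = "prompt,completion\nCapital of France?, Paris\nCapital of Italy?, Rome\n"
jsonl = csv_to_jsonl(sample_csv)
```

The resulting JSONL file is what gets uploaded via /files before launching a fine-tuning job.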

By default, it is Curie that gets trained, but the other three representatives of the GPT-3 family are compatible.

Once a model has been fine-tuned, it can be passed as a parameter to /completions. Depending on the task, training will require more or fewer examples: at least 100 per category for classification, at least 500 for conditional text production, several thousand for unconstrained production, and so on. OpenAI reserves the right to use the data provided to improve its models. New users have an initial spending limit that rises as usage develops. When more than five people use the API through your application, it is time to go live – a transition that is not automatic and involves risk-assessment-type checks.

