Creates a new custom model and returns the newly created metadata record for it.
All custom models must support at least one target type (binaryClassification or
regression), and a custom inference model can support only a single target type.
A regression model is expected to produce predictions that are arbitrary
floating-point or integer numbers.
A classification model is expected to return predictions with probability scores for each
class. For example, a binary classification model might return:

.. code:: python

    positiveClassLabel: probability
    negativeClassLabel: 1.0 - probability

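As a minimal sketch, the per-class scores for one binary prediction could be assembled like this (the class label names `"yes"` and `"no"` are illustrative, not mandated by the API):

```python
probability = 0.73  # the model's score for the positive class

# Hypothetical per-row prediction: one probability per class label.
# For binary classification the two scores must sum to 1.0.
prediction = {
    "yes": probability,        # positiveClassLabel
    "no": 1.0 - probability,   # negativeClassLabel
}

assert abs(sum(prediction.values()) - 1.0) < 1e-9
```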
For Custom Inference Models, the file parameter must be either a tarball or zip
archive containing, at minimum, a script named start_server.sh. The archive may
contain additional files, including scripts, precompiled binaries, and data
files, which start_server.sh may execute. When start_server.sh runs, it runs as
part of an Environment (specified via subsequent API calls), so all included
scripts and binaries can take advantage of any programming language
interpreters, compilers, libraries, or other tools included in that Environment.
start_server.sh must be marked as executable.
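A minimal packaging sketch, assuming the archive launches a Python server named server.py (both filenames other than start_server.sh are illustrative):

```shell
# Build a model directory containing the required start_server.sh.
mkdir -p custom_model

cat > custom_model/start_server.sh <<'EOF'
#!/bin/sh
# exec replaces the shell, keeping the web server as the
# foreground process for the lifetime of the container.
exec python3 server.py
EOF

# The script must be marked as executable before packaging.
chmod +x custom_model/start_server.sh

# Package the directory as a tarball for the file parameter.
tar -czf custom_model.tar.gz -C custom_model .
```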
When start_server.sh is launched, it must launch and maintain
(in the foreground) a Web server that listens on two URLs:
The first route must immediately return a 200 response code with an empty body
once the server is ready to respond to prediction requests. Until then, it
should either not accept the request, not respond to the request, or return a
503 response code.
The second route must accept as input a JSON object of the form:
The predictions data must correspond 1:1 to the rows in the input data lists.
$URL_PREFIX is provided as an environment variable. The Web server process must
re-read its value every time the process starts, since it may change between
launches. It is an opaque string that is guaranteed to be a valid URL component,
but it may contain path separators (/).
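The server contract above can be sketched with only the standard library. This is an illustration, not the official scaffolding: the readiness path, the prediction path (`/predict/`), and the payload keys `data` and `predictions` are assumptions for the example, and the model itself is a placeholder.

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Re-read $URL_PREFIX every time the process starts -- it may change
# between launches. Normalize to "" or "/some/prefix".
_prefix = os.environ.get("URL_PREFIX", "").strip("/")
URL_PREFIX = "/" + _prefix if _prefix else ""


class ModelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Readiness route: 200 with an empty body once the server
        # is ready to respond to prediction requests.
        if self.path == URL_PREFIX + "/":
            self.send_response(200)
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # Prediction route (path and payload keys are assumptions):
        # return exactly one prediction per input row.
        if self.path == URL_PREFIX + "/predict/":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            rows = payload.get("data", [])
            predictions = [0.0 for _ in rows]  # placeholder model output
            body = json.dumps({"predictions": predictions}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


def make_server(port=8080):
    return HTTPServer(("127.0.0.1", port), ModelHandler)
```

In a real start_server.sh process, `make_server().serve_forever()` would be the final call, keeping the Web server running in the foreground.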