New in version 2.9.
The below requirements are needed on the host that executes this module.

- python >= 2.6
- requests >= 2.18.4
- google-auth >= 1.3.0
Parameter | Choices/Defaults | Comments
---|---|---
auth_kind (string / required) | Choices: application, machineaccount, serviceaccount | The type of credential used.
auto_scaling (dictionary) | | Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Care should be taken to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
auto_scaling / min_nodes (integer) | | The minimum number of nodes to allocate for this model.
deployment_uri (string / required) | | The Cloud Storage location of the trained model used to create the version.
description (string) | | The description specified for the version when it was created.
env_type (string) | | Specifies which Ansible environment you're running this module within. This should not be set unless you know what you're doing. This only alters the User Agent string for any API requests.
framework (string) | Choices: FRAMEWORK_UNSPECIFIED, TENSORFLOW, SCIKIT_LEARN, XGBOOST | The machine learning framework AI Platform uses to train this version of the model.
is_default (boolean) | Choices: no, yes | If true, this version will be used to handle prediction requests that do not specify a version. Aliases: default
labels (dictionary) | | One or more labels that you can add to organize your model versions.
machine_type (string) | Choices: mls1-c1-m2, mls1-c4-m2 | The type of machine on which to serve the model. Currently only applies to online prediction service.
manual_scaling (dictionary) | | Manually select the number of nodes to use for serving the model. You should generally use autoScaling with an appropriate minNodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if the traffic exceeds the capability of the system to serve it based on the selected number of nodes.
manual_scaling / nodes (integer) | | The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed.
model (dictionary / required) | | The model that this version belongs to. This field represents a link to a Model resource in GCP. It can be specified in two ways. First, you can place a dictionary with key 'name' and the value of your resource's name. Alternatively, you can add `register: name-of-resource` to a gcp_mlengine_model task and then set this model field to "{{ name-of-resource }}" (see the sketch after this table).
name (string / required) | | The name specified for the version when it was created. The version name must be unique within the model it is created in.
prediction_class (string) | | The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field.
project (string) | | The Google Cloud Platform project to use.
python_version (string) | Choices: 2.7, 3.5 | The version of Python used in prediction. If not set, the default version is '2.7'. Python '3.5' is available when runtimeVersion is set to '1.4' and above. Python '2.7' works with all supported runtime versions.
runtime_version (string) | | The AI Platform runtime version to use for this deployment.
scopes (list) | | Array of scopes to be used.
service_account (string) | | Specifies the service account for resource access control.
service_account_contents (jsonarg) | | The contents of a Service Account JSON file, either in a dictionary or as a JSON string that represents it.
service_account_email (string) | | An optional service account email address if machineaccount is selected and the user does not wish to use the default email.
service_account_file (path) | | The path of a Service Account JSON file if serviceaccount is selected as type.
state (string) | Choices: present (default), absent | Whether the given object should exist in GCP.
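As noted in the model row above, the model link can be given as a plain dictionary with a 'name' key instead of a registered task result. The sketch below is not taken from the original examples: the project, bucket, model, and version names are placeholders, and it also shows auto_scaling with min_nodes for illustration.

```yaml
# Minimal sketch (placeholder names): reference an existing model by name and
# request autoscaling with at least one node.
- name: create a version for an existing model
  gcp_mlengine_version:
    name: my_version
    model:
      name: my_model            # assumed value; use your Model resource's name
    deployment_uri: gs://my-bucket/my-trained-model/
    runtime_version: 1.13
    python_version: 3.5
    auto_scaling:
      min_nodes: 1
    project: my-project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
```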
Note

- For authentication, you can set service_account_email using the GCP_SERVICE_ACCOUNT_EMAIL env variable.
- For authentication, you can set auth_kind using the GCP_AUTH_KIND env variable.
- For authentication, you can set scopes using the GCP_SCOPES env variable.

Examples

```yaml
- name: create a model
  gcp_mlengine_model:
    name: model_version
    description: My model
    regions:
    - us-central1
    online_prediction_logging: 'true'
    online_prediction_console_logging: 'true'
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: model

- name: create a version
  gcp_mlengine_version:
    name: "{{ resource_name | replace('-', '_') }}"
    model: "{{ model }}"
    runtime_version: 1.13
    python_version: 3.5
    is_default: 'true'
    deployment_uri: gs://ansible-cloudml-bucket/
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
```
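Because state accepts both present and absent, removing a version follows the same task shape. The sketch below is illustrative only and reuses the placeholder credentials from the example above; it assumes that parameters marked required (name, model, deployment_uri, auth_kind) must still be supplied when deleting.

```yaml
# Illustrative only: remove the version created in the example above.
- name: delete a version
  gcp_mlengine_version:
    name: "{{ resource_name | replace('-', '_') }}"
    model: "{{ model }}"
    deployment_uri: gs://ansible-cloudml-bucket/
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent
```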
Common return values are documented here; the following are the fields unique to this module:
Key | Returned | Description
---|---|---
autoScaling (complex) | success | Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Care should be taken to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
autoScaling / minNodes (integer) | success | The minimum number of nodes to allocate for this model.
createTime (string) | success | The time the version was created.
deploymentUri (string) | success | The Cloud Storage location of the trained model used to create the version.
description (string) | success | The description specified for the version when it was created.
errorMessage (string) | success | The details of a failure or cancellation.
framework (string) | success | The machine learning framework AI Platform uses to train this version of the model.
isDefault (boolean) | success | If true, this version will be used to handle prediction requests that do not specify a version.
labels (dictionary) | success | One or more labels that you can add to organize your model versions.
lastUseTime (string) | success | The time the version was last used for prediction.
machineType (string) | success | The type of machine on which to serve the model. Currently only applies to online prediction service.
manualScaling (complex) | success | Manually select the number of nodes to use for serving the model. You should generally use autoScaling with an appropriate minNodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if the traffic exceeds the capability of the system to serve it based on the selected number of nodes.
manualScaling / nodes (integer) | success | The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed.
model (dictionary) | success | The model that this version belongs to.
name (string) | success | The name specified for the version when it was created. The version name must be unique within the model it is created in.
packageUris (list) | success | Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code.
predictionClass (string) | success | The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field.
pythonVersion (string) | success | The version of Python used in prediction. If not set, the default version is '2.7'. Python '3.5' is available when runtimeVersion is set to '1.4' and above. Python '2.7' works with all supported runtime versions.
runtimeVersion (string) | success | The AI Platform runtime version to use for this deployment.
serviceAccount (string) | success | Specifies the service account for resource access control.
state (string) | success | The state of a version.
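These fields are returned on the task result, so they can be inspected after registering the task. The sketch below is an assumption-laden illustration (placeholder names, and it assumes the camelCase keys above appear at the top level of the registered variable), not an excerpt from the original examples.

```yaml
# Hypothetical follow-up: register the result and print fields documented above.
- name: create a version and keep the API response
  gcp_mlengine_version:
    name: my_version
    model: "{{ model }}"
    deployment_uri: gs://my-bucket/my-trained-model/
    project: my-project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
  register: version

- name: show when the version was created and whether it is the default
  debug:
    msg: "{{ version.createTime }} (default: {{ version.isDefault }})"
```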