Introduction to Gear Computing Containers
Container execution model
A computing container is a computing unit that can perform a variety of computational tasks, including data preprocessing, training machine learning models, and running inference on unlabeled data with existing models. It is composed of the following key elements:
- Basic hardware configuration: currently the four major elements CPU, GPU, memory, and storage, specified via the [computing power type](/docs/concepts/#Computing power).
- Basic runtime environment: mainly the deep learning framework you want to use and its supporting peripheral dependencies, specified via the image. For the specific list of dependencies, see Runtime Environment.
- Required code and data: provided by binding [data](/docs/concepts/#data warehouse -data-warehouse), binding the ["working directory"](/docs/concepts/#The container's "working directory" ) of another container's execution, or uploading the code directly.
The container allocates storage anew and saves the data stored in it on every execution, so each execution under a container is independent of the others. Combined with tools such as "Custom Parameters" and "Parameters", this can, when used properly, greatly improve the reproducibility of machine learning experiments. Without a good understanding of these concepts, however, data may end up being copied back and forth between executions, which not only slows the container down but also greatly increases unnecessary extra storage overhead.
Container creation
Containers currently support two modes: "Python script execution" and "Jupyter workspace". In both, the default working directory is the system's `/output` directory (at the same time a soft link is set up at `/hyperai/home`, i.e. `/hyperai/home` and `/output` are the same directory).
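A minimal sketch, assuming a standard Python runtime inside a running container, showing that the two paths point at the same directory (the file name `hello.txt` is hypothetical):

```python
import os

# Write through one path...
with open("/output/hello.txt", "w") as f:
    f.write("saved from the container\n")

# ...and read through the other: /hyperai/home is a soft link to /output.
print(os.path.realpath("/hyperai/home"))       # expected to resolve to /output
print(open("/hyperai/home/hello.txt").read())  # same file, same content
```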
A container can have multiple "executions", each of which is an independent container run with its own computing power configuration and image. After each "execution" is closed, the contents of its working directory `/hyperai/home` are saved; you can view them in the page's "working directory" tab.
The container's working directory during execution is `/hyperai/home`, so references to other bound data warehouses under `/hyperai/input/input0-4` must use absolute paths, while the uploaded code itself can be run with relative paths.

For instance, suppose that when creating a "Python script execution" you upload a file named `train.py` that needs to read data under the `/hyperai/input/input0` directory. During execution that data must be referenced with an absolute path such as `/hyperai/input/input0/<file name>`. To execute the file itself, however, a relative path suffices: `python train.py`, with no need for `python /hyperai/home/train.py`.
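A minimal `train.py` sketch of these path rules; `data.csv` is a hypothetical file name, and pandas is assumed to be available in the chosen image:

```python
import pandas as pd  # assumed to be provided by the selected image

# Bound data warehouses must be referenced by absolute path:
df = pd.read_csv("/hyperai/input/input0/data.csv")  # hypothetical file name

# Files uploaded alongside the script live in the working directory,
# so relative paths work for them (this lands in /hyperai/home/stats.csv):
df.describe().to_csv("stats.csv")
print(df.shape)
```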
Data binding
See Gear Data Binding.
Jupyter workspace
The Jupyter workspace is an interactive runtime environment built on JupyterLab. The first programming language it supported was Python, and it has since become the default working environment for a large number of data scientists. Accessing a computing container through the Jupyter workspace lets you use the computing power resources Gear has allocated just as you would in other environments.

The Jupyter workspace supports two environments, Notebook and Lab; here we default to Lab. If you do not yet know how to use the Jupyter workspace, consult its documentation or related translated materials. Rather than covering the use of the Jupyter workspace exhaustively, this section emphasizes several key characteristics of the Jupyter workspace on HyperAI.
For more information, see Jupyter Workspace.
Continue Execution
Generally speaking, the multiple executions under the same "container" have a great deal in common. To make it easy to create a new "execution" based on the execution history, HyperAI currently provides a "Continue Execution" option.

HyperAI will do the following for us:
- Bind the data warehouses that the last "execution" had bound, at the same locations
- If the last "execution" was a "Python script task", bind the same code with the same binding
- Bind the "working directory" of the last "execution" to the `/hyperai/home` directory
On HyperAI, you can bind a "working directory" containing Gear output to a new container, achieving a "pipeline" effect; here, the "working directory" of a previous "model training" execution serves as the input to a "model inference" task. However, this usage copies everything in the previous execution's "working directory" into the new container, which doubles the amount of storage used. Therefore, if you do not need to write to the previous execution's "working directory", it is recommended to bind it into one of the "input0-4" directories instead; the data is then linked into the new container read-only, and no additional data usage is incurred, as sketched below.
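A hedged sketch of this pipeline pattern, assuming a PyTorch image and a hypothetical checkpoint name `model.pt`; the training execution's "working directory" is taken to be bound read-only at `/hyperai/input/input0`:

```python
import torch  # assumed to be provided by the selected image

# The previous "model training" working directory, bound read-only:
state = torch.load("/hyperai/input/input0/model.pt", map_location="cpu")

model = torch.nn.Linear(10, 1)  # hypothetical architecture; must match training
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    preds = model(torch.randn(4, 10))  # placeholder inputs for illustration

# Results go to this execution's own (writable) working directory:
torch.save(preds, "/hyperai/home/predictions.pt")
```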
Besides using "Continue Execution" on the execution page, this operation can also be performed on the "Execution History" page.
Modifying code after choosing Continue Execution
"Continue to execute" Intended to facilitate users to continue their previous training on the basis of unchanged code. If "Continue to execute" Special attention should be paid to updating the code in the scenario.
After clicking "Continue Execution", attempting to upload new code may conflict with code in the currently bound "working directory of the last execution". For example, suppose the previous execution uploaded a file named `main.py`, which was saved into that execution's "working directory". If a modified file with the same name `main.py` is uploaded again, HyperAI will ignore the modification and keep the existing file.
Therefore, if you use "Continue Execution" and find that the executed code does not match your expectations, it may be because the uploaded code was overridden by the working directory bound from the previous container. If you do not want this to happen, change the directory to which the default "working directory of the last execution" binding points. The sketch below shows one way to check which version of a file actually ended up in the working directory.
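A small diagnostic sketch, assuming a Python runtime inside the container: print a hash of the `main.py` that actually landed in the working directory and compare it with a hash of your local copy to see which version won:

```python
import hashlib

# Hash the file as it exists in the container's working directory;
# compare against the hash of your local main.py.
with open("/hyperai/home/main.py", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())
```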
How to accelerate container startup
Saving a large number of files in the container's working directory (`/hyperai/home`, or equivalently `/output`; the two are the same) will affect the container's startup speed; copying a large number of small files in particular is very time-consuming. While the container is copying data at startup, the execution status changes to "Synchronizing data" and the corresponding synchronization speed is displayed.
You can create a separate "data warehouse" for data or models and bind it through data binding to `/input0-4` to avoid this copying. See [Gear working directory - Create a working directory as a data warehouse version](/docs/gear/output#Create a working directory as a data warehouse version) for how to create a new dataset version from a container's "working directory".
Setting notifications
HyperAI currently provides two notification channels: email and SMS. Email notification is selected by default and cannot be turned off; users can enable SMS notifications according to their preferences.
Combining the Task and Jupyter workspace modes
The Jupyter workspace mode is well suited to executing and modifying files in real time, but it does not use computing resources efficiently: resources are often wasted while the user edits and debugs. The Python script upload mode runs the Python code immediately after the container starts, using computing resources efficiently, but it is inconvenient to modify: every code update requires a re-upload.
It is therefore recommended to first create a Jupyter workspace in a low computing power mode (CPU computing power). Once the code is confirmed to run correctly, shut down the resource and download its "working directory". Then create a GPU computing container in "Python script execution" mode, upload the downloaded code, and execute the script; a hedged sketch of a script that runs in both environments follows.
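A sketch of this CPU-first workflow, assuming a PyTorch image; the same script runs unchanged on the CPU Jupyter container and on the GPU script container:

```python
import torch  # assumed to be provided by the selected image

# Pick whichever device the current container actually offers,
# so the script needs no edits between the CPU and GPU runs.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"running on {device}")

model = torch.nn.Linear(10, 1).to(device)  # hypothetical model for illustration
```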
The Jupyter workspace now has the [HyperAI command-line tools](/docs/cli/#stay-jupyter-Using command-line tools in the editor) built in, making it very convenient to create tasks from the Jupyter environment through the command-line tools.
Converting .ipynb files to .py files
choice "File" - "Export Notebook As..." - "Export Notebook to Executable Script" You can take the current one ipynb File to py Download format locally. Drag it again to Jupyter You can upload the file to the container again from the file directory on the left side of the workspace:
You can see that the code snippets of the .ipynb file have been concatenated and saved in the .py file.
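As an alternative to the menu export, here is a sketch using the nbconvert package (which ships with JupyterLab) to do the same conversion programmatically; `train.ipynb` is a hypothetical notebook name:

```python
from nbconvert import ScriptExporter

# Concatenate the notebook's code cells into a single executable script,
# mirroring "Export Notebook to Executable Script" in the menu.
body, _ = ScriptExporter().from_filename("train.ipynb")
with open("train.py", "w") as f:
    f.write(body)
```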
Creating tasks with the Jupyter workspace's built-in command-line tools
See the [Create Python script](/docs/cli/#establishpython-Script execution) section.
Public container
A container defaults to being a "private container" when created; it can be set as a public container on the container's "Settings" page. For safety reasons, a public container only exposes the executions of closed containers to registered users.
Terminating a container
A container can be terminated at any point during execution. Note, however, that terminating a container may leave some data results unsynchronized. Please confirm the integrity of the current data on the container's "working directory" tab before terminating it.
Deleting a container
After a container finishes executing, it automatically releases the computing power resources it occupied. Generally, however, an execution saves some files for future use, and these working directories occupy the user's storage resources. If the data of an entire "container" is no longer needed, the whole container can be deleted from the container's "Settings" tab. After the container is deleted, all the user storage resources it occupied are released.
This operation is very dangerous: data deleted with the container cannot be recovered!