I installed TensorFlow by following the install guide provided by TensorFlow (https://www.tensorflow.org/install/install_linux).
Determine how to install TensorFlow
Docker completely isolates the TensorFlow installation from pre-existing packages on your machine. The Docker container contains TensorFlow and all its dependencies. Note that the Docker image can be quite large (hundreds of MBs). You might choose the Docker installation if you are incorporating TensorFlow into a larger application architecture that already uses Docker.
Take the following steps to install TensorFlow through Docker:
- Install Docker on your machine as described in the Docker documentation.
- Optionally, create a Linux group called docker to allow launching containers without sudo, as described in the Docker documentation. (If you don't do this step, you'll have to use sudo each time you invoke Docker.)
- To install a version of TensorFlow that supports GPUs, you must first install nvidia-docker, which is hosted on GitHub.
- Launch a Docker container that contains one of the TensorFlow binary images.
The remainder of this section explains how to launch a Docker container.
Before installing the GPU version through Docker, nvidia-docker must be installed first.
The system I was installing on runs CentOS 7, so I followed the "CentOS/RHEL 7 x86_64" instructions at https://github.com/NVIDIA/nvidia-docker, but something did not work properly.
So I turned to googling. Following the guide at http://blog.exxactcorp.com/installing-using-docker-nv-docker-centos-7/, the installation went through without problems.
Installing and getting DOCKER and NV-DOCKER running in CentOS 7 is a straightforward process:
# Assumes CentOS 7
# Assumes NVIDIA Driver is installed as per requirements ( >= 340.29 )
# Install DOCKER
sudo curl -fsSL https://get.docker.com/ | sh
# Start DOCKER
sudo systemctl start docker
# Add dockeruser, usermod change
sudo adduser dockeruser
sudo usermod -aG docker dockeruser
# Install NV-DOCKER
# GET NVIDIA-DOCKER
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
# INSTALL
sudo rpm -i /tmp/nvidia-docker*.rpm
# Start NV-DOCKER Service
sudo systemctl start nvidia-docker
With that, nvidia-docker is installed and running.
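A quick sanity check at this point is to run nvidia-smi inside a throwaway container (this assumes the public nvidia/cuda image, which will be pulled on first run, and a host with the NVIDIA driver installed):

```shell
# Verify that containers can see the GPU; --rm deletes the container when it exits
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

If the familiar nvidia-smi table listing your GPU appears, the nvidia-docker plumbing is working and you can move on to the TensorFlow images.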
GPU support
Prior to installing TensorFlow with GPU support, ensure that your system meets all NVIDIA software requirements. To launch a Docker container with NVIDIA GPU support, enter a command of the following format:
$ nvidia-docker run -it -p hostPort:containerPort TensorFlowGPUImage
where:
- -p hostPort:containerPort is optional. If you plan to run TensorFlow programs from the shell, omit this option. If you plan to run TensorFlow programs as Jupyter notebooks, set both hostPort and containerPort to 8888.
- TensorFlowGPUImage specifies the Docker container. You must specify one of the following values:
- gcr.io/tensorflow/tensorflow:latest-gpu, which is the latest TensorFlow GPU binary image.
- gcr.io/tensorflow/tensorflow:latest-devel-gpu, which is the latest TensorFlow GPU binary image plus source code.
- gcr.io/tensorflow/tensorflow:version-gpu, which is the specified version (for example, 0.12.1) of the TensorFlow GPU binary image.
- gcr.io/tensorflow/tensorflow:version-devel-gpu, which is the specified version (for example, 0.12.1) of the TensorFlow GPU binary image plus source code.
We recommend installing one of the latest versions. For example, the following command launches the latest TensorFlow GPU binary image in a Docker container from which you can run TensorFlow programs in a shell:
$ nvidia-docker run -it gcr.io/tensorflow/tensorflow:latest-gpu bash
The following command also launches the latest TensorFlow GPU binary image in a Docker container. In this Docker container, you can run TensorFlow programs in a Jupyter notebook:
$ nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
The following command installs an older TensorFlow version (0.12.1):
$ nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:0.12.1-gpu
Docker will download the TensorFlow binary image the first time you launch it. For more details see the TensorFlow docker readme.
Referring to the material above, the commands I actually ran are as follows.
$ sudo nvidia-docker run -d -p 8888:8888 -p 8889:8889 -p 6901:6901 --name tensorflow_gpu gcr.io/tensorflow/tensorflow:latest-gpu
Running the command above gives you a bare-bones Ubuntu 16.04 OS with TensorFlow, CUDA, and Jupyter Notebook installed.
※ The important point here is that -p sets the ports to forward, so you need to think carefully in advance about how you will use the server's ports. As things stand, Docker does not let you add ports to a container at runtime. There are various ways to add a port afterwards;
in my case, I committed the current container as an image and then created a new container from it with the additional ports.
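The commit-and-recreate workflow just described can be sketched roughly as follows. The container name tensorflow_gpu is the one used in this post; the snapshot tag and the extra port 6006 (commonly used for TensorBoard) are illustrative choices, not fixed values:

```shell
# Save the current container's filesystem state as a new image
sudo docker commit tensorflow_gpu tensorflow_gpu:snapshot

# Remove the old container, then recreate it from the snapshot with an extra port
sudo docker stop tensorflow_gpu && sudo docker rm tensorflow_gpu
sudo nvidia-docker run -d -p 8888:8888 -p 8889:8889 -p 6901:6901 -p 6006:6006 \
    --name tensorflow_gpu tensorflow_gpu:snapshot
```

Anything installed inside the container before the commit is preserved in the snapshot image, so the new container picks up where the old one left off.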
$ sudo nvidia-docker exec -it tensorflow_gpu bash
Running this command attaches you to a bash shell inside the tensorflow_gpu container created above.
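One note: because the container was started detached (-d), the Jupyter server's login URL and token are not printed to your terminal. Assuming the image's default notebook entrypoint, they can be read from the container log:

```shell
# The TF image's Jupyter server prints its login token to the container log
sudo docker logs tensorflow_gpu
```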
Run a short TensorFlow program
Invoke python from your shell as follows:
$ python
Enter the following short program inside the python interactive shell:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
If the system outputs the following, then you are ready to begin writing TensorFlow programs:
Hello, TensorFlow!
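To confirm that TensorFlow inside the container actually sees the GPU, you can also turn on device-placement logging. This is a sketch using the same TF 1.x session API as the hello-world snippet above:

```python
import tensorflow as tf

# log_device_placement=True makes TF report which device (CPU or GPU)
# each operation is assigned to, on stderr, when the session runs.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    print(sess.run(a + b))
```

If the GPU is visible, the placement log lines will show devices like /device:GPU:0 instead of only /device:CPU:0.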
As mentioned above, creating a container with nvidia-docker gives you a bare-bones Ubuntu install,
so the basic utilities have to be installed by hand.
Basic tools to install after the Ubuntu setup
The packages I installed beyond those listed at http://cafe.naver.com/telcosn/562 are below.
apt-get install net-tools
apt-get install tcpdump
apt-get install apt-utils
apt install -y vim htop iftop tree openssh-server lrzsz openvswitch-switch
sudo apt-get install git
Installing a desktop environment after the Ubuntu setup
apt-get update
apt-get upgrade
apt-get install tasksel
Reference: https://imitator.kr/Linux/1305
Installing tightvncserver after the Ubuntu setup
Reference: http://cafe.naver.com/telcosn/437
Command: vncserver :100 -rfbport 6901 -geometry 1920x1080
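A note on the numbers in that command: by convention a VNC display :N listens on TCP port 5900 + N, so display :100 would default to port 6000. The -rfbport 6901 flag overrides that default so the server listens on the port that was forwarded with -p 6901:6901 when the container was created:

```shell
# Default RFB listening port for VNC display :N is 5900 + N
display=100
echo $((5900 + display))   # 6000, which is why -rfbport 6901 is needed here
```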
Time synchronization after the Ubuntu setup
sudo dpkg-reconfigure tzdata -> select Asia, then Seoul