Building a Machine Learning Environment
Hardware: GTX 1070 (8 GB), i5-7500 CPU, 32 GB memory
OS: Ubuntu 16.04 LTS 64-bit
Install Driver
1. NVIDIA GTX 1070 driver
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt-get install nvidia-378
2. Install CUDA
2.1 Download CUDA runfile:
cuda_8.0.61_375.26_linux.run from https://developer.nvidia.com/cuda-release-candidate-download
2.2 Install
sudo sh cuda_8.0.61_375.26_linux.run
Note: decline the bundled driver, since the newer driver from the PPA is already installed:
- Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 375.26? (y)es/(n)o/(q)uit: n
2.3 Add export variables in ~/.bashrc
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
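A quick sketch of how the parameter expansion in these exports behaves (DEMO is a made-up variable for illustration): the `${VAR:+:${VAR}}` form appends the old value with a leading colon only when the variable is already set and non-empty, so the export is safe even in a shell where the variable was never defined.

```shell
# Demonstrate the ${VAR:+:${VAR}} idiom used in the exports above:
# the separating colon appears only when the variable is non-empty.
unset DEMO
echo "base${DEMO:+:${DEMO}}"    # base
DEMO=/existing/path
echo "base${DEMO:+:${DEMO}}"    # base:/existing/path
```

Remember to run `source ~/.bashrc` (or open a new shell) so the exports take effect.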
2.4 Test1
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 378.13                 Driver Version: 378.13                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 0000:01:00.0      On |                  N/A |
|  0%   29C    P8     6W / 151W |    266MiB /  8110MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1075    G   /usr/lib/xorg/Xorg                             135MiB |
|    0      1526    G   compiz                                          84MiB |
|    0     10323    G   /usr/lib/vmware/bin/vmware-vmx                  45MiB |
+-----------------------------------------------------------------------------+
2.5 Test2
cd ~/NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery
make
./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1070"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 8111 MBytes (8504868864 bytes)
  (15) Multiprocessors, (128) CUDA Cores/MP:     1920 CUDA Cores
  GPU Max Clock rate:                            1785 MHz (1.78 GHz)
  Memory Clock rate:                             4004 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS
3. Install cuDNN
3.1 Download cuDNN: cudnn-8.0-linux-x64-v5.1.tgz
3.2 Install
tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
cd cuda
sudo cp include/* /usr/local/cuda/include/
sudo cp lib64/* /usr/local/cuda/lib64/
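After copying the files, refreshing the linker cache and checking the installed header is a quick sanity test. This is a sketch assuming the default /usr/local/cuda prefix used above:

```shell
# Refresh the dynamic linker cache so the newly copied cuDNN
# libraries in /usr/local/cuda/lib64 are picked up
sudo ldconfig

# Print the cuDNN version macros from the installed header
# (should report major version 5 for the v5.1 tarball)
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h
```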
Install Machine Learning Software
Install Anaconda:
bash Anaconda3-4.3.0-Linux-x86_64.sh
Create a virtual environment
conda update conda
conda create -p YourEnvDir python=3.6 anaconda
source activate YourEnvDir
Note:
Deactivate your virtual environment:
> source deactivate
Delete a virtual environment that is no longer needed:
> conda remove -n yourenvname --all
4. Install TensorFlow
4.1 Make sure you are inside the virtual environment, then install the TensorFlow wheel:
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0-cp36-cp36m-linux_x86_64.whl
4.2 Validate your installation
Enter python. Then enter the following short program in the Python interactive shell:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
Install Caffe
1. Create a virtual environment
conda create -p YourEnvDir python=2.7 anaconda
source activate YourEnvDir
2. Install OpenCV 3.1.0
conda install -c menpo opencv3=3.1.0
3. Install Caffe
3.1 Install Caffe dependencies
sudo apt-get install --no-install-recommends build-essential cmake git gfortran libatlas-base-dev libboost-all-dev libgflags-dev libgoogle-glog-dev libhdf5-serial-dev libleveldb-dev liblmdb-dev libopencv-dev libprotobuf-dev libsnappy-dev protobuf-compiler python-all-dev python-dev python-h5py python-matplotlib python-numpy python-opencv python-pil python-pip python-protobuf python-scipy python-skimage python-sklearn
3.2 Setting the CAFFE_HOME environment variable helps DIGITS automatically detect your Caffe installation; this is optional. Add this to your ~/.profile:
export CAFFE_HOME=~/caffe
3.3 Download source
git clone https://github.com/NVIDIA/caffe.git $CAFFE_HOME
3.4 Install requirements into virtualenv
pip install -r $CAFFE_HOME/python/requirements.txt
cd $CAFFE_HOME
3.5 Create a config file
cp Makefile.config.example Makefile.config
3.6 Make the following changes to the Makefile.config file
Uncomment USE_CUDNN := 1
Uncomment OPENCV_VERSION := 3
Change CUDA_DIR := /usr/local/cuda-8.0
Change PYTHON_INCLUDE := ~/caffe/include/python2.7 \
        ~/caffe/lib/python2.7/dist-packages/numpy/core/include
Change PYTHON_LIB := ~/caffe/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
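With those edits applied, the relevant lines of Makefile.config look roughly like this. The paths are the document's examples; point PYTHON_INCLUDE and PYTHON_LIB at wherever your Python 2.7 headers, NumPy headers, and libpython actually live (for instance inside the conda environment created earlier):

```makefile
# cuDNN acceleration and OpenCV 3 support
USE_CUDNN := 1
OPENCV_VERSION := 3

# CUDA toolkit installed in step 2
CUDA_DIR := /usr/local/cuda-8.0

# Python 2.7 headers, NumPy headers, and libpython (adjust to your env)
PYTHON_INCLUDE := ~/caffe/include/python2.7 \
                  ~/caffe/lib/python2.7/dist-packages/numpy/core/include
PYTHON_LIB := ~/caffe/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
```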
3.7 Add the following to your ~/.profile file
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
3.8 Build
mkdir build
cd build
cmake ..
make all -j8
make runtest -j8
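DIGITS talks to Caffe through its Python bindings, so after the build succeeds it is worth making sure they exist and are importable. This is a sketch, assuming the checkout at $CAFFE_HOME from step 3.3:

```shell
# Build the Python bindings (only needed if the build above did not
# already produce them) and make them importable from Python
cd $CAFFE_HOME
make pycaffe -j8
export PYTHONPATH=$CAFFE_HOME/python${PYTHONPATH:+:${PYTHONPATH}}
```

Adding the export line to ~/.profile keeps the bindings available in new shells.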
Install DIGITS
1. Enter the Caffe virtual environment
2. Download dependencies
sudo apt-get install --no-install-recommends git graphviz gunicorn python-dev python-flask python-flaskext.wtf python-gevent python-h5py python-numpy python-pil python-protobuf python-scipy
3. Setting a DIGITS_HOME environment variable is done here for tutorial purposes and is completely optional. Add this to your ~/.profile:
export DIGITS_HOME=~/digits
4. Download DIGITS
git clone https://github.com/NVIDIA/DIGITS.git $DIGITS_HOME
5. Install the Python dependencies
pip install --ignore-installed -U setuptools
pip install -r $DIGITS_HOME/requirements.txt
6. Run
From the DIGITS directory, run the following to fire up a DIGITS server on port 5000:
./digits-devserver
Note: if this fails with an error, try: conda install libgcc