How to Mount S3/Wasabi/DigitalOcean Storage Bucket on CentOS and Ubuntu using S3FS

November 10, 2018
in Tutorials

In this article, we will use S3FS to mount cloud storage services locally on a CentOS or Ubuntu box. Mounting them locally allows us to interact with the cloud storage as if it were a local file system.

We will be covering three cloud storage providers, namely AWS S3, Wasabi Hot Storage and DigitalOcean Spaces. What makes this interesting is that all three providers expose the same interface: the S3 API.

S3FS is a FUSE (Filesystem in Userspace) based solution for mounting Amazon S3 or S3-compatible storage. As mentioned above, once mounted, the bucket behaves like another disk in the system: on an s3fs-mounted file system we can use basic Unix commands such as cp, mv and ls, just as we would on a locally attached disk.

Step 1 – Update Packages

First, we will update the system to make sure we have the latest packages. We will then check whether any existing s3fs or fuse packages are installed on the system, remove them, and install everything anew.

### CentOS and RedHat Systems ###
yum update
yum remove fuse fuse-s3fs

### Ubuntu Systems ###
sudo apt-get update
sudo apt-get remove fuse

Step 2 – Install Dependencies

Once the old packages have been removed, we will install all the dependencies needed to build fuse and s3fs from source. Install the required packages using the following commands.

### CentOS and RedHat Systems ###
yum install gcc gcc-c++ libstdc++-devel make automake git fuse fuse-devel curl-devel libxml2-devel openssl-devel mailcap

### Ubuntu Systems ###
sudo apt-get install build-essential automake autotools-dev pkg-config git fuse libfuse-dev libcurl4-openssl-dev libxml2-dev libssl-dev mime-support

Step 3 – Download and Compile Latest S3FS

In this step we will download and compile the latest version of s3fs from its GitHub repository (version 1.74 at the time of writing). Clone the repository and compile the source code using the commands below.

cd /usr/src/
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make && make install
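
To confirm that the build and install succeeded, you can run a quick check of the installed binary (by default, make install places s3fs under /usr/local/bin):

### Verify the installation ###
which s3fs
s3fs --version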

Step 4 – Configuring Access Key

In order to configure s3fs, we need the Access Key and Secret Key for your Amazon S3, Wasabi or DigitalOcean account. You can find these keys in the account dashboard of the respective service. Once you have them, create a password file; for the purposes of this tutorial we will name it .pwd-s3fs and store it in the home directory.

echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.pwd-s3fs
### Note: Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.

After creating the password file, we will restrict its permissions for security reasons so that only the owner can read it.

chmod 600 ~/.pwd-s3fs
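
If you plan to mount buckets from more than one provider or account, the s3fs password file also accepts per-bucket entries, one per line, in the form bucket:ACCESS_KEY:SECRET_KEY (see the s3fs-fuse documentation). For example, the bucket names and keys below are placeholders:

### Optional: one credentials line per bucket ###
echo test-bucket:AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY >> ~/.pwd-s3fs
echo wasabi-bucket:WASABI_ACCESS_KEY:WASABI_SECRET_KEY >> ~/.pwd-s3fs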

Step 5 – Mount S3 Bucket

Finally, we can mount the S3 bucket using the following set of commands. For this example we use a bucket named test-bucket, a mount point of /s3mnt, and the password file created in Step 4.

mkdir /tmp/cache /s3mnt
chmod 777 /tmp/cache /s3mnt
s3fs -o use_cache=/tmp/cache test-bucket /s3mnt -o passwd_file=${HOME}/.pwd-s3fs

Once this command has run successfully, you will be able to access all the files in test-bucket simply by changing directory into /s3mnt.
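
As a quick test, you can treat the mount like any other directory (test.txt below is just a placeholder file name):

### Basic usage test on the mounted bucket ###
df -h /s3mnt
echo "hello from s3fs" > /s3mnt/test.txt
ls -l /s3mnt
cp /s3mnt/test.txt /tmp/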

Step 6 – Mounting a Wasabi Hot Storage Bucket

If you are not familiar with Wasabi, it is an S3-compatible storage service that, according to their website, is 1/5th the price of Amazon S3 and 6x faster. Since it supports the S3 API, the same command works; you only need to change the endpoint, as described below:

s3fs test-bucket /s3mnt -o passwd_file=${HOME}/.pwd-s3fs -o url=https://s3.wasabisys.com
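
The mount options from Step 5 can be combined with the endpoint option as well; for example, to keep using the local cache while mounting from Wasabi:

s3fs test-bucket /s3mnt -o use_cache=/tmp/cache -o passwd_file=${HOME}/.pwd-s3fs -o url=https://s3.wasabisys.com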

Step 7 – Mounting a DigitalOcean Space

Just like the approach with Wasabi, we specify the endpoint in the command. Use the command below to mount a DigitalOcean Space onto your local file system (the endpoint depends on the region of your Space; here we use nyc3):

s3fs test-bucket /s3mnt -o passwd_file=${HOME}/.pwd-s3fs -o url=https://nyc3.digitaloceanspaces.com
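
When you are finished, the bucket can be unmounted like any other FUSE file system:

### Unmount the bucket ###
umount /s3mnt
### Or, as a non-root user ###
fusermount -u /s3mnt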

In Conclusion

In this article we have reviewed how to use s3fs to mount cloud storage services locally. By doing so, we can use regular Linux commands to interact with the files in a remote bucket as if they were local. Some use cases include running backups, copying files and a whole lot more.
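
For example, a simple backup of a local directory into the mounted bucket could look like this (the /var/www path is only an illustration):

### Example: back up a local directory into the bucket ###
rsync -av /var/www/ /s3mnt/backups/www/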
