File upload is a common requirement. We generally don't upload files directly to the application server, because a single server has limited storage space and is hard to scale.


We will use a separate OSS (Object Storage Service) to upload and download files.

 For example, you would typically buy the OSS service from AliCloud.

Local file storage is organized as directories and files:

And the storage structure of the OSS service looks like this:

A bucket holding some files.


The AliCloud OSS console also mentions that there is no directory hierarchy for object storage:

But the console clearly supports directories below:

This is really just a simulated implementation.


Each object stores a key (id), the file contents, and metadata:


AliCloud OSS just simulates directories with a piece of meta-information.


It's like a tag: the file isn't actually stored under the tag; the tag just lets you retrieve the file.
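As a rough sketch of this idea (the helper below is illustrative, not an OSS API): listing a "directory" just means filtering the flat key space by a prefix up to the next delimiter, which is what OSS's list query with a prefix and delimiter does server-side.

```javascript
// Object storage only has flat keys; "/" in a key just looks like a path.
// Listing a "directory" = grouping keys by prefix up to the next delimiter.
function listCommonPrefixes(keys, prefix, delimiter = '/') {
  const dirs = new Set();
  const files = [];
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    const idx = rest.indexOf(delimiter);
    if (idx === -1) {
      files.push(key);                            // a "file" directly under the prefix
    } else {
      dirs.add(prefix + rest.slice(0, idx + 1));  // a simulated "directory"
    }
  }
  return { dirs: [...dirs], files };
}

const keys = ['docs/a.txt', 'docs/img/cat.png', 'readme.md'];
const result = listCommonPrefixes(keys, 'docs/');
console.log(result); // { dirs: [ 'docs/img/' ], files: [ 'docs/a.txt' ] }
```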


In addition to object storage (OSS), AliCloud also offers file storage and block storage:


Block storage just gives you a raw disk to use; you need to format it yourself, and its capacity is limited.


File storage gives you a directory hierarchy for uploading and downloading files, again with limited capacity.


Object storage is key-value storage, implemented in a distributed way, so its capacity is effectively unlimited.


These are easy to understand, and in the vast majority of cases we use OSS object storage.

 Let’s buy an AliCloud OSS service and give it a try:


I bought the 40G OSS domestic general-purpose resource pack for 5 yuan.

 Then we create a Bucket:


The Bucket is created in Beijing, so the files are stored on servers there.

We set it to public read, which means anyone can access these files directly.

Otherwise, with private access, every file request needs to carry identifying information:


Some students may ask: shouldn't static files be quickly accessible across the whole country? Won't a server in Beijing make access slow for distant users?

That's the CDN's job:


After connecting a CDN, requests to the domain name first hit the cloud provider's DNS, which returns the address of the nearest cache server. That cache server fetches the file from the origin, caches it, and from then on requests no longer hit the origin.

The origin here can be the OSS service.

 After creating the Bucket, let’s try uploading a file:

 You can see the file in the file list after uploading:


Click on it to see the file details; the file can be accessed via this URL:


Of course, in production we won't access the OSS URL directly; we'll enable the CDN and access files through the site's domain name, which ultimately resolves back to the OSS service:

Uploading in the console is easy, but what if you want to upload from code?

 There is sample code in the official documentation, so let’s give it a try:

mkdir oss-test
cd oss-test
npm init -y

 Install the packages used:

npm install ali-oss

 Write the code:

const OSS = require('ali-oss')

const client = new OSS({
    region: 'oss-cn-beijing',   // shown in the bucket overview
    bucket: 'guang-333',
    accessKeyId: '',            // fill in your accessKeyId
    accessKeySecret: '',        // fill in your accessKeySecret
});

async function put () {
  try {
    // upload the local file ./mao.png as the object cat.png
    const result = await client.put('cat.png', './mao.png');
    console.log(result);
  } catch (e) {
    console.log(e);
  }
}

put();

 region can be seen in the overview:


What are the accessKeyId and accessKeySecret here?

 Originally, we were authenticated by username and password:


But that's not secure enough, so we create an accessKey to represent our identity and use it for authentication; even if it leaks, it doesn't affect anything else:

We create an accessKey:


Once it's created, take the accessKeyId and accessKeySecret and run the code:

 Here’s what mao.png looks like:

 You can see in the console that the upload was successful:


This is how files are uploaded to OSS through the API.


It’s just that the accessKey we just used wasn’t secure enough.


You're warned about this when you open the accessKey management page:


Instead of using this accessKey directly, we should create a subuser and create an accessKey under it.

So let's create the sub-user:


Then just use the id and secret of that accessKey:

But you can't swap it in directly:

You'll get a 403: no permission.

You need to grant the subuser permissions:

 Add a new authorization:

Give this subuser OSS management and read permissions:

 Then try again:

This time the upload succeeds.


Looking back, I have to say AliCloud designed this security piece quite cleverly.

What if we only authenticated with a username and password?

Wouldn't a leak be a disaster?


But if you create an accessKey and use it for authentication instead:

you can disable it even if it leaks:


Going a step further: this accessKey still has all permissions.

If we create a RAM subuser and assign it only certain permissions, wouldn't a leak let an attacker do much less?

Of course that's safer.


So AliCloud's accessKey and RAM-subuser authentication scheme is quite well designed.


Back to OSS: ordinary files can be uploaded directly, but large files have to be uploaded in chunks.


The principle behind chunked (multipart) upload is a common front-end interview question that most people can answer.


The file is split into smaller chunks with the slice method, and after all chunks are uploaded, an interface is called to merge them.

Large-file multipart uploads on AliCloud are implemented the same way:

How to use it is all in the documentation, so we won't try it here:
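The slicing step itself can be sketched as follows. In the browser you would call file.slice on a File object from a file input; here a Buffer stands in so the snippet runs in Node, and the 4 MB chunk size is just an example.

```javascript
// Split data into fixed-size chunks: the core of multipart upload.
function sliceIntoChunks(data, chunkSize) {
  const chunks = [];
  for (let start = 0; start < data.length; start += chunkSize) {
    chunks.push(data.slice(start, start + chunkSize));
  }
  return chunks;
}

// A fake 10 MB "file" split into 4 MB chunks -> 3 chunks (4 + 4 + 2 MB).
const fakeFile = Buffer.alloc(10 * 1024 * 1024);
const chunks = sliceIntoChunks(fakeFile, 4 * 1024 * 1024);
console.log(chunks.length); // 3

// Each chunk is then uploaded separately; once all succeed, the client
// asks the server (or OSS's multipart-complete API) to merge them in order.
```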


Now that we have an OSS service, do uploads still need to go through the application server?

They may or may not.


If they go through the application server, then after the client uploads a file, the server receives it and forwards it to OSS:


This certainly works, and it keeps the accessKey from being stolen.

It just wastes the application server's bandwidth.

And what if uploads don't go through the server?

Then the client uses the accessKey to upload the file directly to OSS and just sends the resulting URL to the application server.


This reduces the application server's traffic consumption but increases the risk of exposing the accessKey.

Each approach has its own drawback.

 So is there any way to get the best of both worlds?

 AliCloud’s documentation mentions this as well.

The solution it gives is to generate a temporary signature for the client to use.

 The code looks like this:

const OSS = require('ali-oss')

async function main() {

    const config = {
        region: 'oss-cn-beijing',
        bucket: 'guang-333',
        accessKeyId: '',        // the RAM subuser's accessKeyId
        accessKeySecret: '',    // the RAM subuser's accessKeySecret
    }

    const client = new OSS(config);

    // let the signature expire one day from now
    const date = new Date();
    date.setDate(date.getDate() + 1);

    // generate the temporary signature and policy for browser-side POST uploads
    const res = client.calculatePostSignature({
        expiration: date.toISOString(),
        conditions: [
            ["content-length-range", 0, 1048576000],  // allow 0 B to ~1000 MB
        ]
    });

    console.log(res);

    // work out the endpoint the browser will POST the file to
    const location = await client.getBucketLocation();

    const host = `http://${config.bucket}.${location.location}.aliyuncs.com`;

    console.log(host);
}

main();
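Incidentally, the policy in that output is nothing mysterious: it's just base64-encoded JSON describing what the browser is allowed to upload. A quick round-trip sketch:

```javascript
// Build a policy shaped like the one calculatePostSignature produces:
// an expiration time plus a list of upload conditions.
const policy = {
  expiration: new Date(Date.now() + 24 * 3600 * 1000).toISOString(),
  conditions: [['content-length-range', 0, 1048576000]], // 0 B to ~1000 MB
};

// The policy travels as base64-encoded JSON.
const encoded = Buffer.from(JSON.stringify(policy)).toString('base64');
console.log(encoded);

// Decoding it recovers the same JSON.
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString());
console.log(decoded.conditions[0][2]); // 1048576000
```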


Along with the temporary signature and policy, it prints the OSS address to upload to:

There's no need to memorize this code; it's all in the documentation:


Using this information, we can upload files to OSS directly from a web page:

 Create an index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <script src="https://unpkg.com/[email protected]/dist/axios.min.js"></script>
</head>
<body>
    <input id="fileInput" type="file"/>
    
    <script>
        const fileInput = document.getElementById('fileInput');

        // in production this would request a server-side endpoint;
        // here we hardcode the values the Node script printed
        async function getOSSInfo() {
            return {
                OSSAccessKeyId: 'LTAI5tDemEBPwQkTx65jZCdy',
                Signature: 'NfXgq/qLIR2/v87j/XC9sjrASOA=',
                policy: 'eyJleHBpcmF0aW9uIjoiMjAyNC0wMS0yMFQwMzoyNjowOC4xMDZaIiwiY29uZGl0aW9ucyI6W1siY29udGVudC1sZW5ndGgtcmFuZ2UiLDAsMTA0ODU3NjAwMF1dfQ==',
                host: 'http://guang-333.oss-cn-beijing.aliyuncs.com'
            }
        }

        fileInput.onchange = async () => {
            const file = fileInput.files[0];

            const ossInfo = await getOSSInfo();


            const formdata = new FormData()
 
            formdata.append('key', file.name);
            formdata.append('OSSAccessKeyId', ossInfo.OSSAccessKeyId)
            formdata.append('policy', ossInfo.policy)
            formdata.append('signature', ossInfo.Signature)
            formdata.append('success_action_status', '200')
            formdata.append('file', file)

            const res = await axios.post(ossInfo.host, formdata);
            if(res.status === 200) {
                
                const img = document.createElement('img');
                img.src = ossInfo.host + '/' + file.name
                document.body.append(img);

                alert('suc');
            }
        }
    </script>
</body>
</html>


Here, getOSSInfo should really request a server-side endpoint that returns the values we just printed to the console.

To keep things simple, we just hardcode them.

We include axios and use this information to upload the file.

 Run a static server:

npx http-server .

At this point you'll get a cross-origin (CORS) error when uploading the file:

So let's enable CORS in the console:

 Then try again:

 The upload was successful!

 This file can also be seen in the console file list:

This is the ideal OSS upload solution.


The server uses the RAM subuser's accessKey to generate a temporary signature and returns it to the client, which uses it to upload files directly to OSS.


Because the temporary signature expires after a short period (we set it to one day), exposure carries little risk.


This way the server bears no load from receiving files; it just waits for the client to finish uploading and send over the URL.


Case code uploaded github: github.com/QuarkGluonP…


Uploaded files generally aren't stored directly in a server directory, because that's hard to scale. Instead we generally use AliCloud's OSS, which scales elastically on its own, so storage space is effectively unlimited.


OSS object storage is organized into buckets; a bucket holds multiple files.


Files are stored as key-value pairs; there is no real concept of directories. AliCloud OSS directories are just simulated with meta-information.


We've tried uploading files in the console, uploading from Node with the ali-oss package, and uploading to OSS directly from a web page.


The accessKeyId and accessKeySecret are required no matter where you upload from.


This is AliCloud's security design: a leaked username and password is very troublesome, while a leaked accessKey can simply be disabled. It's also recommended to generate the accessKey under a RAM subuser with minimal permissions, further reducing the damage from a leak.


Uploading directly from the client to OSS doesn't consume server resources, but it risks leaking the accessKey, so the usual approach is to generate a temporary signature and related information on the server, then upload with that.

This is the ideal OSS upload scheme.

By lzz
