beduin01, 2019-02-05 15:46:31

How to deploy small projects?

How does anyone deploy small projects? I don't see the point in setting up a full-fledged CI/CD pipeline given the size, but I would still like to automate everything somehow. I'm developing in a single branch, with the front end and back end in separate repositories.
The questions are:
1. Is it a good idea to tie everything together exclusively by tags? I put a v0.4 tag on both the front and the back, and a script on the server pulls both.
2. Is a self-written script that constantly polls GitLab for new tags a good idea? What are the pros and cons of deploying by tags?
3. What to do about addresses and ports? For example, in index.js on the development machine I have window.base_url = "localhost:1234", while on the server it needs to be "10.1.2.6:9000". How do I automate this?


6 answers
Stanislav Pugachev, 2019-02-05
@beduin01

Your questions are philosophical; you could spend hours discussing each one. Either way you are going to write some scripts and invent something, so what difference does it make whether it is cron scripts on the server or a job in Jenkins? In terms of writing speed it will be the same. So in my opinion size does not matter here; the only thing that matters is how clearly you describe the process (algorithm) of building and deploying the application.
From that point of view, my vision is roughly this:
1) Git is not a tool for deploying software; git is only for versioning code. In theory, the result of your work should not be code on GitHub but some kind of sane artifact ready for deployment (a Docker image, a pip package, an npm package, a deb package, a jar, a war, a zip as a last resort, etc.). If you produce artifacts, the issue with tags disappears by itself: you will have an artifact of a certain version, and the server should not know about any git or any tags in it.
Here I would recommend packing everything into Docker images, if only because the server then ends up knowing nothing about the application's dependencies or required libraries, nothing at all; you only need to install Docker.
A huge advantage of using Docker is that the Dockerfile forces you to describe exactly and explicitly all the steps required to install the application. And what is most remarkable, it is all stored in the same repository, under git control. Gorgeous.
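To make the Dockerfile point concrete, here is a minimal sketch for a hypothetical Node.js backend; the base image, port, and entry point are assumptions for illustration, not details from the question:

```dockerfile
# Sketch only: a hypothetical Node.js backend packed into an image.
FROM node:10-alpine
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# the port and entry point are assumptions for the example
EXPOSE 9000
CMD ["node", "index.js"]
```

Building this with `docker build -t myapp:v0.4 .` gives exactly the kind of versioned artifact described above.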
It is desirable to store artifacts in some kind of artifact repository, but if everything is really simple, you can keep the last few versions directly on the server in some folder.
2) As soon as you have the artifact, you can deploy it. It would help to know the specifics of your project, but roughly speaking, let's say it is enough to upload it to the server and put it in the right place. Again, Jenkins handles this with a bang, and setting the whole thing up will take you about 10 minutes. If you describe the logic in a Jenkinsfile, you win again, because the deployment process (algorithm) is once again described EXPLICITLY. And it too is under git control. (Jenkins only needs to know in which repository and where to look for the Jenkinsfile.)
If you run some hidden cron script on the server, no one will know anything about it. Believe me, in a short time this whole thing will start getting more complicated, something will be forgotten, something will change, and all of it together will come back to bite you painfully.
Another advantage of this approach: if you need to roll back to the previous version, you do not need to build the project again by downloading everything from git, because you still have the previous artifacts. Rollback in this case is not a problem at all: just specify the previous version of the artifact, deploy once more, and that's it.
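The rollback idea can be sketched as a tiny dry-run helper that only prints the command it would run; the image name, container name, and port are hypothetical, the point is that redeploying an old version is just running an existing tag:

```shell
# Dry-run sketch: print the command that would redeploy a given version.
# No rebuild from git is involved; the old artifact already exists.
rollback_cmd() {
    version="$1"
    echo "docker run -d --name myapp -p 9000:9000 registry.example.com/myapp:${version}"
}
```

For example, `rollback_cmd v0.3` prints the run command for the previous tag.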
3) Environment variables.
When the application starts, it reads everything it needs from environment variables. The deploy job can set these variables each time before deploying; that would be cool too, because it makes this knowledge just as explicit.
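For question 3 from the original post, that could look like the following sketch; BASE_URL is an assumed variable name, with the dev value from the question as the fallback:

```shell
# Sketch: resolve the backend address from the environment, falling back to
# the dev value from the question when the variable is not set.
base_url() {
    echo "${BASE_URL:-localhost:1234}"
}
```

The deploy job on the server would then export something like `BASE_URL=10.1.2.6:9000` before starting the app, and nothing is hard-coded in the source.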
In total we have:
- the project build logic is described in the Dockerfile and is under git
- the deployment logic is in the Jenkinsfile and is under git, and most importantly it is code (the Jenkinsfile is written in Groovy; for simple things 30 minutes of study is enough)
- we did not install anything on the server except Docker itself
- we store several versions of our application just in case and can quickly roll back without resorting to git at all
- the server does not know anything about git
- there is NO additional logic on the server for deploying your application
- with all this it is very easy to add more servers to deploy to: roughly speaking, you specify another IP and its own set of env variables (if they differ, of course)

Denis Bedoyar, 2019-02-05
@PulpiRZVK

3. You should have separate configs for prod, test, and the dev machine.
1. A reasonable idea. The upside is that a tag name is more descriptive than a commit hash.

Dmitry Larin, 2019-02-05
@fanrok

1. Possible, but I don't see the point.
2. Why?
3. Make separate configs in different files.
My suggestion is to have a separate branch for deployment. As soon as you are sure that everything works, you merge the changes into the production branch. A webhook on the git server catches this and launches a script (let's say the production branch is master), which connects to the production server via ssh and runs the commands:
git fetch --all
git reset --hard origin/master
Then run migrations and so on, to taste. Simple, cheap, and cheerful. Perfect for small things.
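The steps above can be sketched as a small function that a webhook handler would call; the deploy directory is a hypothetical parameter, and "master" is the production branch from the answer:

```shell
# Sketch of the webhook-triggered update described above: bring the server's
# checkout to the exact state of origin/master.
deploy() {
    deploy_dir="$1"
    git -C "$deploy_dir" fetch --all --quiet
    git -C "$deploy_dir" reset --hard --quiet origin/master
    # run migrations etc. here, to taste
}
```

Note that `reset --hard` discards any local edits on the server, which is exactly the property you want for this style of deployment.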

Vitsliputsli, 2019-02-05
@Vitsliputsli

Since you are working with GitLab, you already have a tool for organizing CI/CD.
1. No, this is not an obvious win and it will complicate the work: if the repositories are separate, their versioning should be separate too. If you need the latest version, pull master. And to update connected external software, use specialized tools (composer in PHP, for example).
2. Why check constantly? GitLab has webhooks if you need to perform some actions on push.
3. All settings must live in a config; when deploying, the config is set up depending on the environment. You can make different files for each environment, but you will still have to adjust the config during the build (for example, you won't keep database passwords in the repository).

Vitaly Musin, 2019-02-14
@vmpartner

We use the free GitLab CI: you install a GitLab Runner on the target machine and bind it to the project, and in the project you describe the deployment in a YAML file that you put in the repository root. It is a very convenient and easy way.
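A minimal .gitlab-ci.yml along those lines might look like this sketch; the job name, script line, branch, and runner tag are all assumptions:

```yaml
# Sketch only: one deploy job executed by a runner registered on the server.
deploy:
  stage: deploy
  script:
    # whatever actually updates the app; this command is hypothetical
    - ./deploy.sh
  only:
    - master
  tags:
    - production-runner
```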

roxblnfk, 2019-02-22
@roxblnfk

Stanislav Pugachev, in my opinion, described the best modern solution. However, if you really don't want to mess with Docker, you can read this article.
