netsky800, 2014-06-08 06:18:50

How to properly configure automatic updates on a live Ubuntu Server?

For a couple of years I have been the owner and (beginner) administrator of several production web servers running Debian and Ubuntu. They run the usual things: nginx, apache, unicorn, mysql, postgresql.
I haven't graduated to chef/puppet yet, so I do everything manually over ssh. The problem is that every time I run aptitude update, it offers me some unrealistic number of packages to upgrade, and every time I don't know what to do about it.
On the one hand, it's scary to run an upgrade, because something somewhere might break when a package jumps to a new major version. The situation is further complicated by the fact that some of the servers are VPSes where the kernel cannot be updated, because it was built by the hosting provider with the xen module.
On the other hand, there is the nagging paranoia that at any moment something terrible could surface in apache, ssh or mysql itself, like the recent heartbleed.
So, the actual question for the experts: how do you properly update packages on a debian-based server so that nothing breaks?
And even better: how do you properly configure automatic updates on a debian-based server so that you can safely forget about them until the next LTS comes out?
P.S. On top of that, recommendations and links are welcome on setting up other things on the server (for example, iptables) that also need to be "configured and forgotten".


3 answers
Semyon Voronov, 2014-06-08
@netsky800

1. From experience, I'll say that on production servers it is better to update manually if you don't have a mirror server for testing. I recommend updating every 15-20 days. Of course, in case of an emergency security problem like the heartbleed you mentioned, you need to update immediately, as soon as patches are released (a sketch of such a manual pass follows this list).
2. If you can't master chef and puppet yet, read about Ansible. Its entry threshold is significantly lower than theirs, it is much easier to maintain over time, and in terms of functionality it is, frankly, not inferior (see the ad-hoc example after the list).
3. General advice.

  1. Disable unused services (for example, rpcbind and nfs-common).
  2. Lock unused users and check whether system users have passwords (in /etc/shadow the second field holds the hash; if an account has one and does not need access, replace it with '!'). A quick check is sketched after the list.
  3. It is advisable to allow ssh authentication only by keys and to enable the AllowUsers option with the list of users allowed to log in. Disable root login over ssh and create an unprivileged system user with an sh shell; add it to AllowUsers and, after logging in, run /bin/su - with the full path (see the sshd_config fragment after the list).
  4. Where possible, control all exposed services with TCP wrappers (/etc/hosts.{allow,deny}); an example pair of files follows the list.
  5. Install the snoopy library (snoopy logger) for very detailed logging of executed commands.
  6. Accordingly, configure rsyslog/syslog-ng and forward the logs, preferably to a separate server (a forwarding fragment follows the list). There are various web frontends with reports and sorting for browsing logs.
  7. To keep configuration files under version control, I advise using git in /etc, or splitting the configs of different services into separate repositories; of course, set up .gitignore properly (a sketch follows the list). Also take a look at the inotify file-notification subsystem and the excellent incron daemon; it is useful well beyond this point, as it lets you react to many interesting events.
  8. If possible, run services in a chroot environment.
  9. iptables. You can build rules directly on it or, for example, on ipset. The approach is simple: write a script on a test machine first, set the default policy to deny everything, then allow only what you need (a sketch follows the list). There is no point in recommending specific rules if you just copy them like a monkey without understanding. There is excellent literature on iptables; search the Internet for iptables tips and configuration examples and dig into what they do and why. But don't over-complicate things: remember that packets run through all the chains of rules, and that is expensive. Consider using ipset.
    I will not recommend any fail2ban-style tools; they are too controversial and unnecessary if you properly protect every service running on top of netfilter. Besides, many hosting providers have out-of-the-box protection against DDoS and brute force at the level of their hardware, such as Cisco ASA appliances.
    Periodically take captures with tcpdump (especially in suspicious situations) and analyze them with wireshark.
  10. Regular backups: rsync at the very least; fsbackup, unison, bacula, etc. are a great choice (an example follows the list).
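
A minimal sketch of such a manual pass on Debian/Ubuntu, run as root; checkrestart comes from the optional debian-goodies package:

    aptitude update
    aptitude --simulate safe-upgrade   # dry run: shows the planned actions first
    aptitude safe-upgrade              # upgrades without removing packages
    checkrestart                       # lists services still using old libraries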
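
For the Ansible suggestion, an update can already be done as an ad-hoc command, with no playbook at all; the group name webservers and the inventory file hosts are assumptions:

    # apt update + conservative upgrade on every host in the "webservers" group
    ansible webservers -i hosts -b -m apt -a "update_cache=yes upgrade=safe"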
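
For tip 2, one way (among others) to spot accounts whose shadow entry is not locked, and to lock one; the user name is a placeholder:

    # print accounts whose second /etc/shadow field is not '!' or '*'
    awk -F: '$2 !~ /^[!*]/ {print $1}' /etc/shadow
    # lock an account that should not log in
    usermod -L someuser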
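
For tip 3, the relevant sshd_config directives; deploy is a placeholder user, and keep an existing session open until you have confirmed that key-based login works:

    # /etc/ssh/sshd_config (fragment)
    PermitRootLogin no
    PasswordAuthentication no
    PubkeyAuthentication yes
    AllowUsers deploy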
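
For tip 4, the classic default-deny pair of files; the network range is a documentation placeholder, and only services linked against libwrap honor these:

    # /etc/hosts.deny
    ALL: ALL

    # /etc/hosts.allow
    sshd: 203.0.113.0/24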
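
For tip 6, forwarding everything to a remote collector takes one rsyslog line; the host name is hypothetical ('@@' means TCP, a single '@' means UDP):

    # /etc/rsyslog.d/90-remote.conf
    *.* @@logs.example.com:514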
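
For tip 7, a hand-rolled sketch of /etc under git; the etckeeper package automates the same idea and hooks into apt runs:

    cd /etc
    git init
    printf 'mtab\n*.swp\n' > .gitignore   # example ignore list; extend as needed
    git add -A
    git commit -m "baseline /etc"
    # or simply: apt-get install etckeeper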
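
For tip 9, a default-deny sketch to adapt, not to copy blindly; the open ports and the admin address are assumptions:

    #!/bin/sh
    # default-deny inbound; test on a non-production machine first
    iptables -F
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
    # ssh only from a named ipset, which keeps the chain short
    ipset create admins hash:ip
    ipset add admins 203.0.113.10
    iptables -A INPUT -p tcp --dport 22 -m set --match-set admins src -j ACCEPT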
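
For tip 10, the simplest variant as a cron-driven rsync push; the host and paths are placeholders:

    # e.g. from /etc/cron.daily/backup
    rsync -az --delete /etc /var/www backup@backup.example.com:/srv/backups/$(hostname)/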

In general, this is not everything, but "set it and forget it" on a production server is a bad idea.
Set up monitoring.
For that I can recommend four systems (these are my personal priorities; there are many more).
A good option is to install zabbix_agent rather than a full server and monitor remotely from another machine. Add your own scripts on top, analyze the syslog logs, and you get a reasonably stable and, in my opinion, well-protected system (a minimal agent fragment is below).
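
A minimal agent-side fragment with standard zabbix_agentd.conf parameters; the monitoring host and the host name are assumptions:

    # /etc/zabbix/zabbix_agentd.conf (fragment)
    Server=monitor.example.com         # passive checks: who may poll this agent
    ServerActive=monitor.example.com   # active checks: where the agent pushes data
    Hostname=web01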

Sergey, 2014-06-08
@edinorog

I work with *nix only for myself and don't run services on them, but in any case a couple of tips will help you.
0. Timeliness
1. Test machine
2. Time factor
3. Uniformity
4. Qualification
Everything starts from point 1: a test machine, preferably a copy of a production one. After the update, the test machine keeps running and you watch its health. If everything went well, roll the updates out to ALL the machines; keep them uniform and don't get clever, or you will get stuck later. And there is always the last stage: something will still go wrong, and then you either build crutches or do real work to fix what you didn't plan for. AND EVERYTHING SHOULD BE DONE IN TIME! Don't wait until the very end.
Note: sometimes it is easier to migrate the server to another virtual machine than to keep patching up the old one.

Igor, 2014-06-08
@merryjane

Most likely, the answer is: no automatic updates.
Of course, one can assume you have the same OS version and the same set of software on all your servers. Then you can spin up one more server for tests, run the update there and see what happens. If everything is OK, run the update on the remaining servers.
As an option, consider a scheme with your own repository to which you connect your servers. The servers are set to auto-update, and the workflow is the same: you verify the update on the test server, push the updated packages into your repository, and the rest of the servers update from it (a sketch is below).
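
A sketch of that scheme with the stock unattended-upgrades mechanism; "MyRepo:stable" is a placeholder for the origin of your own tested repository:

    // /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    // /etc/apt/apt.conf.d/50unattended-upgrades (fragment)
    Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "MyRepo:stable";   // placeholder: origin of your own repository
    };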
Now let's look at the real world. With a pile of servers running different projects, all of this is from the realm of fantasy, because different projects mean different code: one project uses one thing, another uses something else. When a major version changes, something may turn out to be deprecated and fail to start when the service restarts, if your project works with the new version at all. You also have to give up software installed on the server itself via make install/checkinstall.
