linux

kolyanovikov, 2020-12-09 21:40:57

How can you work with Linux entirely without Internet access?

I need to run a number of servers and user machines connected to a local network. All of the computers, including the workstations, run Linux (Debian or Mint). The situation is complicated by a complete lack of Internet access.

Various services need to be deployed on the servers (a file-sharing service, web servers, project management systems, etc.). In the course of work, users may also need to install packages from the standard repository or from Python's pip.

What is the easiest way to set up your own repository mirror? Is there a way to create an offline PyPI, at least with a limited list of packages? What solutions are there in general?

3 answers
Vitaly Karasik, 2020-12-09
@vitaly_il1

This is the standard setup in many organizations.
You need to bring up local repositories for system packages and for pip ( https://packaging.python.org/guides/hosting-your-o... ).
The only issue is updates. The simplest way is to update manually from a flash drive, say once a week.
Sometimes it is automated, if security policy allows: a machine with two interfaces sits between the internal and external networks, but the interfaces are never up at the same time, only one or the other.
https://ru.wikipedia.org/wiki/%D0%92%D0%BE%D0%B7%D...
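For the pip part, a minimal sketch of the flash-drive workflow; the package names (requests, flask) and the /mnt/usb/wheels path are just placeholders:

    # On the machine with Internet access: download the packages and all of
    # their dependencies into a directory on the flash drive.
    pip download --dest /mnt/usb/wheels requests flask

    # On the offline machine: install from that directory only, never
    # contacting a remote index.
    pip install --no-index --find-links=/mnt/usb/wheels requests flask

The same --find-links directory can also be served over HTTP on the local network (pip accepts a URL there as well) if you want one shared location instead of a flash drive per machine.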

ValdikSS, 2020-12-09
@ValdikSS

Bring up a mirror of your distribution's repository (apt-mirror for Debian/Ubuntu) and a PyPI mirror. Programs that are not in the repository and pull in a large number of dependencies can be packaged as flatpaks, also served from your own repository.
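A sketch of the apt-mirror side of this, assuming Debian; the release name (bookworm), the mirror.local hostname and the web root are placeholders:

    # /etc/apt/mirror.list on the mirror host (the machine that periodically
    # gets Internet access or is fed from a flash drive)
    set base_path /var/spool/apt-mirror
    deb http://deb.debian.org/debian bookworm main contrib
    deb http://security.debian.org/debian-security bookworm-security main
    clean http://deb.debian.org/debian

    # Run apt-mirror (usually from cron), then publish the mirrored tree
    # over HTTP with any web server.
    apt-mirror

    # /etc/apt/sources.list on the clients, assuming /var/spool/apt-mirror/mirror
    # is served as the web root of http://mirror.local/
    deb http://mirror.local/deb.debian.org/debian bookworm main contrib
    deb http://mirror.local/security.debian.org/debian-security bookworm-security main

A flatpak repository exported the same way can be added on the clients with flatpak remote-add pointing at its URL or path.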

Nicholas, 2020-12-29
@romancelover

In Gentoo you can override the download command (FETCHCOMMAND) so that instead of fetching a source archive it records the download URL, together with the filename that emerge expects. The resulting list is written to a flash drive, the files are downloaded on a computer with Internet access, then copied into the distfiles directory, where emerge picks them up.
I can't say whether a similar method can be applied to other distributions so that you don't have to mirror the entire repository. In Gentoo the repository is split into two parts, the portage tree and distfiles. The portage tree is the index part: it describes how packages are built and what the dependencies between them are, and it is relatively small, about 200 MB, or 50 MB as a compressed archive. Distfiles are just the source archives, which can be large. Debian keeps both parts in the same tree. It should be possible to download only the index (Packages.gz) and fetch packages only as needed, as described above, by redefining the download command.
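A sketch of the FETCHCOMMAND override described above; the /var/tmp/fetch-list path is an arbitrary choice, and the distfiles directory is assumed to be the current default, /var/cache/distfiles:

    # /etc/portage/make.conf on the offline machine: instead of fetching,
    # append the URL and expected filename to a list and report failure.
    FETCHCOMMAND="echo \"\${URI} \${FILE}\" >> /var/tmp/fetch-list; false"

    # Trigger the fetch phase to collect the URLs (the "downloads" will fail,
    # which is expected here):
    emerge --fetchonly <package>

    # Take /var/tmp/fetch-list to a machine with Internet access, download the
    # files, copy them into /var/cache/distfiles, then run emerge again.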
If there are many computers on the network with the same Gentoo configuration (architecture, USE flags, processor-specific build options), packages are built on one of them using the scheme above (you also need to set FEATURES="buildpkg" so that binary packages are produced), and the rest get PORTAGE_BINHOST pointing at that computer, so they install the binaries instead of building everything on each machine.
https://wiki.gentoo.org/wiki/Binary_package_guide
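A sketch of that setup, with buildhost.local as a placeholder hostname and the default package locations assumed:

    # make.conf on the build host: produce a binary package for every build.
    FEATURES="buildpkg"
    # Packages land in PKGDIR (/var/cache/binpkgs by default); serve that
    # directory over HTTP, or export it via NFS.

    # make.conf on the other machines (same arch, USE flags and CFLAGS):
    PORTAGE_BINHOST="http://buildhost.local/binpkgs"
    # Prefer binaries when they are available:
    EMERGE_DEFAULT_OPTS="--usepkg --getbinpkg"

Newer Portage versions can also configure the binhost through /etc/portage/binrepos.conf, but PORTAGE_BINHOST is the variable the guide above describes.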
If the configuration differs between computers, you can simply share the distfiles directory over NFS so that the files don't have to be copied around manually.
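A sketch of the shared-distfiles variant; the subnet and hostname are placeholders, and /var/cache/distfiles is assumed as DISTDIR on all machines:

    # /etc/exports on the machine that holds the distfiles:
    /var/cache/distfiles  192.168.1.0/24(rw,sync,no_subtree_check)

    # /etc/fstab on the other machines:
    nfsserver.local:/var/cache/distfiles  /var/cache/distfiles  nfs  defaults  0 0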
