K
Computer networks
KTG, 2019-01-16 23:17:25

What is the best way to organize the IT infrastructure of an enterprise?

I have been offered the chance to try my hand at organizing the IT infrastructure of an enterprise from scratch: network, servers, data storage, and everything else that goes with it. In each of these areas (except the PBX) I have done small, specific tasks and fixed individual problems, but I have never had to put it all together. So the main motivation is to upgrade my skills in this area; the preference is not for familiar technologies but for the most correct solutions. UPD: for now I am only briefly googling articles on the topic, so of course I may be mistaken; I will dig deeper as I go through the stages. Right now I am just sketching a rough plan and determining what I have to work with.
First I will lay out my own vision, but mainly I wanted to consult the community and get some recommendations. There are also specific questions about some of the decisions, mostly about the choice of equipment.
There is some slack in the schedule: one month for thinking it over, then a month for a detailed survey of the current state and for developing a draft and a working plan. I think a lot can be learned in that time.
At the moment there is:
A small four-floor building with small offices for 2-3 people each, plus a similarly small annex. Staff: 40-50 people.
An analog PBX with ten external numbers (serviced by the provider), with all the wiring done on twisted-together joints.
An old rack server (I won't list its specs, but one look at them convinced me to keep it only for minor needs) and an external drive used as file storage.
An improvised, home-grade network: the provider's cable goes into the server, from there to a 5-port switch, from that to more switches of the same kind... in short, 2-3 switches in a chain before a cable finally reaches a computer. None of them are managed, but they hand out DHCP addresses dynamically, which regularly causes conflicts.
On the software side, a couple of 1C configurations with 5-10 users each.
A zoo of self-assembled PCs spanning some 15 years, interspersed with all-in-ones from different manufacturers, and complete anarchy in access policies.
And nobody knows what is going on at the remote branches.
If you can picture all this, then even if you are far removed from this field you have no doubt winced and concluded that it cannot go on like this.
What we are aiming for:
First of all, of course, stable operation with a fault-tolerant setup.
After the major upgrade I would like the cost (time/effort/money) of support, troubleshooting, and unforeseen situations to go down.
Scalability and further development in the form of additional software services (EDMS, access control, ticket systems, etc.) and nice-to-haves like IT resource monitoring.
Keeping the entire modernization down to a one-time, reasonable spend on hardware, by switching to free software.
Well, a very strong desire to do everything according to Feng Shui.
What I see in my head (with questions):
When choosing equipment and technologies I would like to allow for the company doubling in size, both in headcount and in equipment. If I am wrong anywhere, feel free to correct me.
We take a proper rack for the server and network equipment.
1. Network
First of all I would like to re-lay it properly, with decent trunking and wall outlets, instead of stuffing the wires into thin cable ducts.
We choose shielded twisted-pair cable.
Here comes the choice:
1. Either install a cheap managed switch on each floor (some 16-24-port D-Link), which I don't really like from the expansion point of view, so I lean towards the second option;
2. Or splurge on a couple of multiport rack-mount switches and run a dedicated cable to each machine.
From what I have dug up on the Internet, you can take two 48-port switches and stack them.
In addition to user PCs there will be network MFPs, possibly cameras, IP telephony, etc.
Further, since there are branches, we cannot do without a router. In the future it will also help to organize an active backup communication channel, hence two WAN ports plus a Wi-Fi module.
Wi-Fi will be for users' mobile devices and will go through the backup channel, since the main channel is planned to be kept free of "parasitic" traffic - work resources only. Why repeaters rather than separate modems on each floor? To have more flexible control: if the main channel goes down I can cut off the "unofficial" traffic, and I can also log traffic from devices to reduce the risk of information leaking to the Internet (protection against the fool who uploads photos of documents to a public service from his phone, forgetting that it automatically connected to the office Wi-Fi). All devices will have to be registered.
As for the branches, it is still unknown what exactly is there and what will happen to them; the main thing is to be able to bring up a VPN from the main building and, if necessary, let them into the network. I don't know equipment well, but I keep hearing two names: MikroTik and Cisco - one cheaper, the other more expensive. How sound is this sketch of the network? And if it is basically right, what network equipment would you recommend? Naturally we lean towards budget options; we are not chasing brands.
2. Server
Here I am torn by vague doubts: take two servers and combine them into a cluster for quick failover, or, to save money, take one and assume nothing will ever happen to it.
Server wishlist: an inexpensive rack KVM console with IP access.
The idea for the servers is this:
I have already mentioned the cluster of two new servers, budget permitting (I have not gone deep yet, but as I understand it the nodes stay synchronized during operation, you can set up load distribution, and when one server fails there is a full switchover to the second). We use virtualization on free software; I settled on Citrix XenServer (I have clicked around in it a couple of times and read comparisons with similar hypervisors). We add the servers to a pool. (As for Linux itself, I have only used it a couple of times as a desktop user through KDE, but I realize that to grow professionally I need to master it properly.)
Virtual server 1, Ubuntu Linux: acts as the DHCP server (or is it better to give that role to a switch/router?), the domain controller and directory service (LDAP; a quick look around showed Samba is popular), and a license server. (A small sketch of checking the directory follows the list of virtual servers below.)
Virtual server 2: the OS will depend on whether licenses for MS SQL and Windows Server turn up. If not, we install Ubuntu Linux and dedicate the server to a PostgreSQL database. Resource priority: memory.
Virtual server 3, Ubuntu Linux: dedicated to 1C. Resource priority: CPU.
The old, weak server is also virtualized, with two virtual servers on it:
Virtual server 1, Ubuntu Linux: a web server for internal needs - EDMS, the internal corporate portal, etc.
Virtual server 2, Ubuntu Linux: Asterisk for internal telephony - a plan for the future.
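As a small illustration of the directory-service idea, here is a minimal sketch of querying a Samba-backed LDAP directory from Python with the ldap3 library; the host name, base DN, and credentials are hypothetical placeholders, not anything from an existing setup:

```python
# Minimal check that the Samba AD/LDAP directory answers queries.
# Assumes the ldap3 library (pip install ldap3); the host, base DN,
# and credentials below are hypothetical placeholders.
from ldap3 import ALL, NTLM, Connection, Server

server = Server("dc1.example.lan", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\administrator",  # placeholder account
    password="change-me",           # placeholder password
    authentication=NTLM,
)

if not conn.bind():
    raise SystemExit(f"bind failed: {conn.result}")

# List user accounts to confirm the directory is populated.
conn.search(
    "dc=example,dc=lan",
    "(objectClass=user)",
    attributes=["sAMAccountName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName)

conn.unbind()
```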
The external site will be on a paid hosting.
Looks like I didn't forget anything.
The questions here are:
- Is the overall approach to organizing the servers and distributing their "duties" correct?
- How justified is the choice of software, given that there is little time to learn it? Of course I am also counting on the large and friendly Linux community and the abundance of manuals on the net.
- What advice can you give on hardware and software?
3. Data storage
I see this as a separate RAID array in the rack; there is no desire to clutter the server disks with everything else.
We will build RAID 10 on SSDs (although there is a catch here too: opinions on the Internet are split 50/50 over whether SSDs in RAID arrays give any advantage).
We will store:
1. Installers for the main user software and drivers (with PXE deployment as a plan for the future).
2. Server configuration images and network equipment configuration files.
3. Database and service backups.
4. Final versions of working and internal documents.
The existing external drive, divided into two partitions, will serve as the day-to-day file share:
1. Current files; cleaned out once a week, with the files moved to the second partition.
2. Files from the previous week. (A rough sketch of this weekly rotation is shown below.)
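Purely as an illustration of that weekly rotation (not a backup strategy in itself), a minimal Python sketch; the mount points /mnt/current and /mnt/lastweek are assumptions and would have to match however the two partitions are actually mounted:

```python
# Weekly rotation of the two partitions on the external drive:
# wipe last week's copy, then move everything from "current" into it.
# /mnt/current and /mnt/lastweek are assumed mount points.
import shutil
from pathlib import Path

CURRENT = Path("/mnt/current")
LAST_WEEK = Path("/mnt/lastweek")

def rotate() -> None:
    # Drop last week's snapshot.
    for item in LAST_WEEK.iterdir():
        if item.is_dir():
            shutil.rmtree(item)
        else:
            item.unlink()
    # Move current files over; the "current" partition is left empty.
    for item in CURRENT.iterdir():
        shutil.move(str(item), str(LAST_WEEK / item.name))

if __name__ == "__main__":
    rotate()
```

Run from cron once a week; anything more serious belongs in a proper backup tool, as the answers below point out.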
There will also be an additional, manually connected external drive for quarterly data dumps: nothing protects data better than writing it to a disk and physically disconnecting it. The main question here is how rational the use of SSDs is and what it does to the budget.
4. The PBX - lowest priority, after the first three items.
My knowledge here is still a complete zero, apart from knowing that Asterisk exists.
What equipment is needed to keep all the external numbers and also set up internal IP telephony?
There is an option to hand PBX management over to the telecom provider, but I would rather learn by making my own mistakes with the setup.
5. Uninterruptible power
A good uninterruptible power supply. I would like to keep all the equipment above running for at least 30-60 minutes. Since the choice depends on the load and on the built-in reserve (~30%), I will first settle the first three points and only then move on to the UPS. One thing is clear: single-phase, with SNMP and temperature/humidity sensors. I would like to receive a message on my phone about a power outage and be able to shut everything down in an emergency from that same phone, or have the sensors tell the servers to shut down.
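The "sensors tell the servers to shut down" part could be a small script polling the UPS over SNMP. A minimal sketch, assuming pysnmp's classic synchronous hlapi and the standard UPS-MIB (RFC 1628); the UPS address, community string, OID, and threshold are placeholders to verify against the actual UPS, and in practice ready-made tools like apcupsd or NUT do this job better:

```python
# Poll the UPS over SNMP and power the host off once it has been
# running on battery for too long. Assumes pysnmp's synchronous hlapi
# (pip install pysnmp) and the UPS-MIB upsSecondsOnBattery object;
# the address, community, and threshold are placeholders.
import subprocess

from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

UPS_ADDRESS = "192.0.2.10"   # placeholder UPS management IP
COMMUNITY = "public"         # placeholder SNMP community
SECONDS_ON_BATTERY_OID = "1.3.6.1.2.1.33.1.2.2.0"  # upsSecondsOnBattery.0
SHUTDOWN_AFTER = 300         # seconds on battery before shutting down

def seconds_on_battery() -> int:
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY),
        UdpTransportTarget((UPS_ADDRESS, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(SECONDS_ON_BATTERY_OID)),
    ))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP query failed: {error_indication or error_status}")
    return int(var_binds[0][1])

if __name__ == "__main__":
    if seconds_on_battery() > SHUTDOWN_AFTER:
        subprocess.run(["systemctl", "poweroff"], check=False)
```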
That is basically it. After that the organization can gradually be kitted out with all sorts of MFPs and the like. In parallel I am already working on automation proper, which I am much better at. The PC zoo also needs to be dealt with.
Remaining questions:
1. Is it worth bothering with software RAID inside the virtual machines if their images are stored on an external RAID?
2. Where is it better to put a mail server? Spin up another virtual machine?
I only know one thing: mailboxes limited in size and cleaned out once a month.
3. Will rack cooling systems handle all this equipment?
4. I know that Cisco, for example, has a network simulator where you can draw a diagram and configure it, and I assume the settings can then be exported to real equipment. The question is whether similar simulators can be tied in with virtual machines, or am I asking for too much? Are there tools for designing an IT infrastructure like this with the ability to then deploy it into production?
5. What could have been overlooked in this outline of the plan?

8 answers
R
Rodion Kudryavtsev, 2019-01-17
@rodkud

The question is a complex one - split it into sub-questions so that it can at least be read. For the infrastructure, draw a diagram, at least by hand. Try MikroTiks - they can do L3, VLANs, and a lot more (their courses will also be useful to you). Monitoring is important. Draw up an addressing plan and separate networks - voice, data, management (for the devices themselves). Well, that is somewhere to start...

C
CityCat4, 2019-01-17
@CityCat4

Maybe it's time to introduce a "long post" icon like on Pikabu?
The idea is definitely a good one. But... how much money is there? The budget may well turn out to be elastic. Or it may not.
There are business requirements, and there is everything beyond those requirements - all the nice-to-haves. It is better to budget those in right away - later they simply won't get signed off :)
For each floor, a managed switch with an optical uplink; lay fiber between the floors. The number of ports should be a third more than the maximum number of devices you can imagine on a floor.
What is this nonsense, sorry? What backup channel over Wi-Fi? Two providers terminated on the front-end MikroTik solve that. Wi-Fi is done only with MAC-address control and a decent-length WPA2 key, and only if you really need it - better without it at all. Documents can be leaked just as easily by photographing them and dumping the pictures into Google Drive.
Take Mikrotik, of course.
Well, if you are a fan of the command line, you can buy a Cisco. On nag.ru you can get relatively inexpensive used Ciscos.
Since I see only Linux here, it is better to use KVM (qemu + libvirt), if of course you can get the hang of it, or Proxmox. Although if there is only one server you can use VMware - it is free for a single server, and in 6.7 they seem to have finally finished the HTML client.
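For what it's worth, a minimal sketch of talking to KVM through libvirt's Python bindings (the libvirt-python package); it only lists the defined guests and their state, and assumes it runs locally on the hypervisor host with access to qemu:///system:

```python
# List KVM guests and their state via libvirt's Python bindings
# (libvirt-python). Assumes local access to qemu:///system on the host.
import libvirt

STATE_NAMES = {
    libvirt.VIR_DOMAIN_RUNNING: "running",
    libvirt.VIR_DOMAIN_PAUSED: "paused",
    libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
}

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        print(f"{dom.name():20s} {STATE_NAMES.get(state, 'other')}")
finally:
    conn.close()
```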
True, the question immediately arises: what, how, and where will you back up? It is by no means an idle question :)
There is no such thing as a "RAID array in a rack". You can put a device in the rack - a Synology, a QNAP, or something more expensive and specialized. A RAID array is a software concept.
RAID 10 on SSDs? Is the budget really that healthy? Then instead of SSDs for RAID 10 I would consider buying VMware licenses so that live migration works between hosts, plus backup software like Nakivo.
First of all, people generally do not build RAID on SSDs - it is expensive, and RAID 10 on SSDs is very expensive. To store everything you list, RAID 6 on ordinary disks (plus one hot spare) is quite enough, and a shelf like a Synology sharing it over the network will do. By the way, it is better to have a separate shelf for backups - keeping backups in the same place as the documents is... unwise :)
Asterisk, of course. And a dedicated box for it - not a VM, bare metal. To take in external lines over copper, if there are any (ordinary city lines), you will need FXO cards. To connect ordinary (non-IP) telephones in the office you will need FXS cards. For IP phones nothing extra is needed :) If the office is large and there are many incoming numbers, you can take a PRI (E1) trunk from the telephony provider - 15 or 30 voice channels - which will require a separate E1 card. All of this is easily put together on Linux.
No. It will be extra work for the hardware disk controller - and that's it.
Certainly. Or even more than one - for example, how will antispam work?
On Cisco gear - maybe. If you want to play around in the Cisco simulator, go ahead; if not, just build it by hand.
VLANs
Binding MAC addresses to switch ports
A proxy
Internet access control

A
athacker, 2019-01-17
@athacker

1) Foiled twisted pair - only between floors, and with mandatory grounding, otherwise there is no point in it. Everywhere else you can run regular unshielded UTP;
2) There should be switches on each floor. If 16 ports are not enough, put in 24; if 24 are not enough, put in 48. But life really is easier with access switches on each floor, because then you can buy lower-port-count aggregation switches for the server room - lower-count because not every computer in the building is cabled into them, only the access switches. Connect each access switch to the aggregation layer with two links in an EtherChannel, for redundancy;
3) Citrix XenServer... Well, look at the picture and find XenServer there:
4) "SSD in raid arrays does not provide advantages" - shta? O_o Collecting SSD in RAID10 is somewhat wasteful. Although, of course, if your budgets are not limited ... I would build an array like this - as many small disks as possible, which I would already collect in RAID6 or RAID60 (depending on the number of disks and the required volumes). Here are the spindle disks - it makes sense to collect them in RAID10, because they are painfully slow. But the approach is the same - smaller disks, but in larger quantities. In recent years, I constantly observe situations - there are up to a fig of volume, but there is not enough productivity. The opposite - there is performance, but there are few volumes, so far it has not come across. Hence the recommendation to take smaller discs, but more in quantity. Although it all depends on the tasks, of course. And from budgets.
Yes, and "RAID-array in a rack" not to deliver. You can put a piece of iron. Some. Industrial storage, for example. Or self-assembled storage (consider a regular server, with Linux or FreeBSD on board, configured to return volumes using some kind of storage protocol). Or a DAS basket. Or NAS. Listed in order of preference. Again, it all depends on the budget.
5) UPS should be taken with a margin of not 50%, but 100% or even 200%. Because when you add more servers, then your 50% will be eaten by the moment. Or when your load rises, you will also begin to rest against the UPS power ceiling. And the fact that the load will grow and the server will have to be added is not to go to the grandmother. Therefore, you need to take with a large margin, at least 100%. As an option - take a UPS to which you can connect external battery packs. But the UPS power must still have a margin of at least 100%.
6) Humidity, temperature sensors without registration and SMS and the ability to send SMS messages are better implemented using devices like UniPing server v3/SMS . There you can separately buy all kinds of sensors, up to infrared controllers for air conditioners in the server room.
7) RAIDs in virtual machines are nonsense, of course. Enough to provide fault-tolerant disk storage. Best of all, with fault tolerance, things are with industrial storage systems, of course. HPE MSA, Dell MF - that's it.
8) The rules for distributing services among virtual machines are very simple - one shot - one corpse service - one machine. That is, under the mail server - one virtual machine. Under DNS - still the machine. Even two. Under domain controllers - two virtual machines (minimum). Under DHCP - another machine.
XXX) If the customer wants everything to work well, then it is better, of course, to invite professionals. Because judging by your questions, you are not yet one. If he is ready to endure and reap the fruits of experimentation for quite a long time - then of course.
XXX+1) Yes, and don't forget about backup. What to back up, and where to put it. Hint: to add to the same storage on which the virtual machines work is bad manners and is fraught with sad consequences.

A
Artem, 2019-01-16
@Jump

If you can picture all this, then even if you are far removed from this field you have no doubt winced and concluded that it cannot go on like this
If it works - don't touch it!
If it doesn't work - fix the problem.
If management has set the task of putting things in order - then put them in order.
Why repeaters rather than separate modems on each floor?
I don't know about separate modems - I have never come across such a perversion. But repeaters are evil; they are used only as a last resort.
And they will in no way let you control anything flexibly - a repeater is not managed at all.
There will also be an additional, manually connected external drive for quarterly data dumps: nothing protects data better than writing it to a disk and physically disconnecting it.
Storage like that "helps" a lot when the production database has died and the latest backup is a year old, because once the drive was unplugged it was never plugged back in.
A proper backup is done automatically!
The main question here is how rational the use of SSDs is and what it does to the budget.
It is easy enough to look at the price per gigabyte of SSD on Yandex Market and compare it with HDD prices.
Then the budget becomes clear.
Let me spell it out: it is expensive. Much more expensive than HDD.
SSDs are meant for active work with data, not for storing archives.
And if you intend to store them powered off - an SSD without power loses data over time.
Is it worth bothering with software RAID inside the virtual machines if their images are stored on an external RAID?
To start with, you need to define precisely what the RAID is for; then the answer becomes fairly obvious. Since it is not clear why you need RAID, there is no answer.
opinions on the Internet are split 50/50 over whether SSDs in RAID arrays give any advantage
An SSD is faster than an HDD in any case, even in RAID - though often not as fast as a standalone SSD. Besides, you need to know how to build a RAID from SSDs properly.
Will rack cooling systems handle all this equipment?
To answer this question you need to know: a) the heat output of the equipment in watts, and b) the capacity of the specific cooling system.
There are requirements dictated by business.

D
d-stream, 2019-01-17
@d-stream

Here comes the choice:
1. Either install a cheap managed switch on each floor (some 16-24-port D-Link), which I don't really like from the expansion point of view, so I lean towards the second option;
2. Or splurge on a couple of multiport rack-mount switches and run a dedicated cable to each machine.
From what I have dug up on the Internet, you can take two 48-port switches and stack them.
In addition to user PCs there will be network MFPs, possibly cameras, IP telephony, etc.

The truth is somewhere in the middle. 1. All switches should be managed (VLANs have already been mentioned). 2. As for "core" versus "access" - here you really have to look at the geometry and distances: it is by no means guaranteed that every office can be reached from the server-room switch with less than 90 m of cable...
In any case, proper feng shui is for the core switches to have some extra intelligence - for example, ones that can do limited routing at switching speed (so-called L3 switches), and fast ones at that. The user-facing part can then be simpler, cheaper L2 switches, say 100 Mbps ports with gigabit uplinks.
A good uninterruptible power supply. I would like to keep all the equipment above running for at least 30-60 minutes. Since the choice depends on the load and on the built-in reserve (~30%),
Generally a UPS is sized to ride through 5-10-15 minutes, and the rest of the time is covered by standby feeds, diesel generator sets, and so on.
And that is not fashion but plain economics: a diesel generator plus a 10-minute UPS is often noticeably cheaper than a 2-hour UPS.

S
Saboteur, 2019-01-17
@saboteur_kiev

What we are aiming for:
First of all, of course, stable operation with a fault-tolerant setup.
For that you can quite happily look at used (second-hand) equipment.

1. Network
First of all I would like to re-lay it properly, with decent trunking and wall outlets, instead of stuffing the wires into thin cable ducts.
What difference does it make where the cables run? The main thing is that they do not share a duct with the power cables and that there are enough of them.
Shielded cable is a niche solution: it is used either where there is strong interference, which is not common in office premises, or where the cable runs outdoors. And where the interference is really bad, it is not that expensive nowadays to lay fiber for the nastiest stretch.
In your case shielded cable may well be completely unnecessary.
Here comes the choice:
1. Either install a cheap managed switch on each floor (some 16-24-port D-Link), which I don't really like from the expansion point of view, so I lean towards the second option;
Run several cable pairs to each floor. If you need to expand, connect 2-3 switches on the floor. Besides, if any cable gets damaged there will be a spare, and some switches can also increase throughput by aggregating two ports.
2. Or splurge on a couple of multiport rack-mount switches and run a dedicated cable to each machine.
From what I have dug up on the Internet, you can take two 48-port switches and stack them.
Internal reshuffles - walls knocked down, new offices created - happen very often. So it is easier to put a small cabinet with a switch on each floor and fan everything out across the floor from there; from the server room to that cabinet, several cable pairs or fiber.
If need be, some providers offer a "corporate network" service and will organize the VPN to your branches for you; it depends on their locations and connectivity.
In the future it will also help to organize an active backup communication channel, hence two WAN ports plus a Wi-Fi module.
Wi-Fi will be for users' mobile devices and will go through the backup channel, since the main channel is planned to be kept free of "parasitic" traffic - work resources only.

VLANs
2. Server
Here I am torn by vague doubts: take two servers and combine them into a cluster for quick failover, or, to save money, take one and assume nothing will ever happen to it.
Server wishlist: an inexpensive rack KVM console with IP access.
When you say "server" you must immediately say a server for what - what exactly it will do.
A cluster is needed when downtime costs much more than restoring from a backup - and these days bringing a virtual machine back up from a backup is very fast.
The idea for the servers is this:
I have already mentioned the cluster of two new servers, budget permitting (I have not gone deep yet, but as I understand it the nodes stay synchronized during operation, you can set up load distribution, and when one server fails there is a full switchover to the second)
The load on a domain controller, DHCP, and LDAP is usually so small that it makes absolutely no sense to build a cluster for it. An ordinary VM on VirtualBox will cope with an office of a couple of hundred seats. And given that new users and machines are added to the domain infrequently, even a week-old backup may still be perfectly usable. So instead of a cluster, just copy the entire virtual machine to another physical computer every night; if anything happens, you bring the domain-controller VM up on the other server in a couple of minutes. That immediately removes all the complications of clustering.
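As an illustration of that nightly copy (a sketch, not a finished backup job), assuming the VM's disk is a single qcow2 file and that the guest is shut down or snapshotted first so the image is consistent; the paths and destination host are placeholders:

```python
# Nightly copy of a domain-controller VM image to another physical host.
# Assumes the disk is a single qcow2 file and the VM is shut down or
# snapshotted beforehand so the copy is consistent; paths and the
# destination host are placeholders.
import datetime
import subprocess

IMAGE = "/var/lib/libvirt/images/dc1.qcow2"  # placeholder image path
BACKUP_HOST = "backup-host"                  # placeholder SSH host
BACKUP_DIR = "/backups/vms"                  # placeholder remote directory

def copy_image() -> None:
    stamp = datetime.date.today().isoformat()
    target = f"{BACKUP_HOST}:{BACKUP_DIR}/dc1-{stamp}.qcow2"
    # rsync over SSH; a new file per night, so a failed run cannot
    # corrupt an older copy.
    subprocess.run(["rsync", "-a", "--partial", IMAGE, target], check=True)

if __name__ == "__main__":
    copy_image()
```

Hooked into cron, this gives exactly the "even a week-old backup is still usable" scheme described above.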
What difference does the database make if you can move from one to the other so easily? If it really does not matter, take PostgreSQL straight away - why bother with licenses and Windows?
Seriously? SSDs - or rather, a RAID of SSDs - for storing drivers and backups?
An SSD is only needed for PostgreSQL or 1C, and without any RAID; everything else goes on regular HDDs.
Backups are planned based on the estimated losses from downtime.
RAID only makes sense at the lowest, physical level. If your disks are already virtualized, you do not need RAID on top of them.
RAID is only needed for data that is critical minute by minute. If the company can wait an hour, why RAID? Especially since many setups can bring a virtual machine back from a backup in 5 minutes.
Hire a sysadmin or company in your city who will set up or develop a detailed plan for you and advise you for the first couple of months.

A
ashv24, 2019-01-17
@ashv24

What city is the author in? I ask because there is some used equipment available - a 15U Cabeus cabinet and a big tower server case...

B
BAF285, 2021-09-26
@BAF285

The most important thing is to make a cluster of DHCP servers for load balancing and redundancy
https://bafista.ru/otkazoustojchivyj-dhcp-klaster/
