Looking for a technological solution to optimize mass production of an embedded system?
Hello.
There is a software system consisting of:
1. VMWare ESXi 4.1
2. Two virtual machines that run under the aforementioned hypervisor.
All of this runs on an embedded board whose only external interfaces are USB and a serial port. The board also has a processor, memory, and a 500 GB disk.
The task is the following:
I need a technological solution to optimize the mass production process by replicating the entire system described above as quickly as possible, i.e. some miraculous tool that would let me wrap everything [ESXi + the 2 virtual machines] into a single container, Norton Ghost style, and then deploy it onto standard hardware in seconds or minutes. Alternatively, a system that can do the same from a master disk.
As a temporary solution I wrote an unattended installation procedure: it is triggered by autorun when a USB flash drive is connected, boots a Linux live distribution into memory, and from there first installs ESXi, then copies the virtual machines to the right location and adds them to the inventory. This whole setup works, but it is categorically unsuitable for mass production because of the unacceptable time it takes. In other words, if I only had to install 3-5 machines this way, it would be tolerable; with an order of 100 or 200 units, however, you might as well turn off the lights and go home.
There is also a hardware disk-duplication platform, but it does not understand VMware's proprietary file system, so in its default mode it copies the 500 GB disk block by block regardless of how much space is actually occupied, which is, of course, terribly inefficient.
Thanks in advance for possible solutions to the problem.
Try developing the idea of sector-by-sector disk copying further: the key is to avoid copying empty data (after all, you are not actually using all 500 GB, are you?).
How about writing a utility of 50-100 lines of code that analyzes the copied partition, identifies runs of consecutive empty sectors, and simply skips them during deployment? Of course, there is a risk of mistaking zeros inside useful data for empty space, but that can be ruled out by additional analysis from within the virtual image.
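If that zero-skipping approach is of interest, here is a minimal sketch in Python of what such a utility might look like. Everything in it is an assumption rather than something from the post: the master disk is presumed to have been dumped to a raw image file (master.img), the 1 MiB block granularity is chosen arbitrarily, and an all-zero block is treated as "empty", with the same caveat about zeros occurring inside useful data.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: copy only the non-empty (non-zero) blocks of a raw image.

Assumptions (not from the original post): the master disk has been dumped to a
raw image file, a 1 MiB block size is acceptable, and all-zero blocks may be
treated as empty.
"""
import sys

BLOCK = 1024 * 1024          # 1 MiB granularity; adjust to taste
ZERO = bytes(BLOCK)

def build_extent_map(image_path):
    """Return a list of (offset, length) extents that contain non-zero data."""
    extents = []
    start = None
    offset = 0
    with open(image_path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            is_empty = chunk == ZERO[:len(chunk)]
            if not is_empty and start is None:
                start = offset                      # open a new extent
            elif is_empty and start is not None:
                extents.append((start, offset - start))
                start = None                        # close the current extent
            offset += len(chunk)
    if start is not None:
        extents.append((start, offset - start))
    return extents

def deploy(image_path, target_path, extents):
    """Write only the recorded extents to the target device or file."""
    with open(image_path, "rb") as src, open(target_path, "r+b") as dst:
        for offset, length in extents:
            src.seek(offset)
            dst.seek(offset)
            remaining = length
            while remaining:
                buf = src.read(min(BLOCK, remaining))
                dst.write(buf)
                remaining -= len(buf)

if __name__ == "__main__":
    master, target = sys.argv[1], sys.argv[2]       # e.g. master.img /dev/sdX
    ext = build_extent_map(master)
    print(f"{len(ext)} non-empty extents, "
          f"{sum(l for _, l in ext) // (1024 * 1024)} MiB to copy")
    deploy(master, target, ext)
```

The extent map only needs to be built once per master image, so on the production line each unit costs just the write pass over the occupied extents.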
P.S. You could also try the following algorithm:
1. Shrink the container with the source data to the minimum size (by removing the free space; the gparted/qparted utility handles almost all modern file systems).
2. During deployment, copy the reduced container and then expand it back to 500 GB (a sketch of this step follows below).
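For step 2, a possible sketch, with everything assumed rather than taken from the post (the shrunk image is called shrunk.img, the target is /dev/sdX, the data partition is number 3, and GNU dd plus parted are available on the deployment system), could drive the copy and the re-expansion like this:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of step 2: deploy a pre-shrunk master image and then
grow the last partition back to fill the 500 GB target disk.

Assumptions (not from the original post): the master was already shrunk in
step 1 and dumped to shrunk.img, the target disk is /dev/sdX, the partition
to re-expand is number 3, and GNU dd and parted (>= 3.x, for resizepart)
are present on the deployment system.
"""
import subprocess

SHRUNK_IMAGE = "shrunk.img"   # image produced after step 1 (shrinking)
TARGET_DISK  = "/dev/sdX"     # blank 500 GB disk in the production unit
DATA_PART    = "3"            # number of the partition to grow back

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Copy only the shrunk image; the rest of the 500 GB disk is never touched.
run(["dd", f"if={SHRUNK_IMAGE}", f"of={TARGET_DISK}", "bs=4M", "conv=fsync"])

# Grow the last partition to the end of the disk.
run(["parted", "-s", TARGET_DISK, "resizepart", DATA_PART, "100%"])

# The filesystem inside the partition still has to be grown with its own tool
# (e.g. resize2fs for ext4); a VMFS datastore would be extended from ESXi itself.
```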