Is it worth storing files in the database in this case?
This is a database-design question. The file system holds 98 thousand files (xls, csv, doc) with a total size of slightly more than 30 GB. The information system that uses these files is obsolete, and a decision was made to migrate to a new one. The Microsoft technology stack is actively lobbied in the public sector, so management decided to use SharePoint and MS SQL Server.
I don't have much experience in these matters, but this solution does not seem optimal to me in terms of file access speed (although it is claimed that caching on the web server solves this issue). I would like to hear the opinion of more experienced specialists in this field.
The fewer layers, the faster the work.
It all depends on what you need to do with the files and how.
It is considered bad form to store binary data (i.e., the files themselves) in the database. Store metadata that lets you uniquely locate each file.
The overhead of selecting file contents from the database will be much larger than the overhead of going to the database for metadata and then reading the file from the file system.
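The "metadata in the database, bytes on disk" pattern described above can be sketched roughly as follows. This is a minimal illustration, not SharePoint's actual mechanism: the table layout, the content-addressed file naming, and the helper names are all my own assumptions.

```python
import hashlib
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical sketch: the DB stores only metadata, the bytes live on disk.
storage_root = Path(tempfile.mkdtemp())

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE files (
        id     INTEGER PRIMARY KEY,
        name   TEXT    NOT NULL,
        sha256 TEXT    NOT NULL,
        size   INTEGER NOT NULL,
        path   TEXT    NOT NULL   -- location on the file system
    )
""")

def save_file(name: str, data: bytes) -> int:
    """Write the bytes to the file store; record only metadata in the DB."""
    digest = hashlib.sha256(data).hexdigest()
    path = storage_root / digest          # content-addressed layout (illustrative)
    path.write_bytes(data)
    cur = db.execute(
        "INSERT INTO files (name, sha256, size, path) VALUES (?, ?, ?, ?)",
        (name, digest, len(data), str(path)),
    )
    return cur.lastrowid

def load_file(file_id: int) -> bytes:
    """Go to the DB for metadata, then read the file from the file system."""
    (path,) = db.execute(
        "SELECT path FROM files WHERE id = ?", (file_id,)
    ).fetchone()
    return Path(path).read_bytes()

fid = save_file("report.xls", b"example spreadsheet contents")
```

The SELECT here touches only a few small columns; the 30 GB of binary content never passes through the database engine.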
Some people do store files in the database anyway, citing the convenience of storage and backup and the ease of solving the concurrent-access problem.
If the files do not change, there are no concurrent-access problems. If they do change, there will be problems whether they live in the file system or in the database.
SP2013 works well in Shredded Storage + RBS + SQL FILESTREAM mode: the file itself is kept in the storage, and at the database level it is simply mapped by a GUID. With this approach you can keep as many documents as you like in the file storage while the content database stays many times smaller; the trade-off is that retrieving a fairly old file can sometimes take extra time, for example +2 seconds.
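The GUID mapping mentioned above can be illustrated with a toy sketch: the database row holds only a GUID, and the bytes live in a separate store addressed by that GUID. This is loosely modeled on the RBS idea, not the real SharePoint/RBS API; all names here are illustrative.

```python
import sqlite3
import tempfile
import uuid
from pathlib import Path

# Hypothetical GUID-keyed blob store standing in for the RBS-managed storage.
blob_store = Path(tempfile.mkdtemp())

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (title TEXT PRIMARY KEY, blob_id TEXT NOT NULL)")

def put_doc(title: str, data: bytes) -> str:
    """Store the bytes under a fresh GUID; the DB row records only the GUID."""
    blob_id = str(uuid.uuid4())
    (blob_store / blob_id).write_bytes(data)
    db.execute("INSERT INTO docs VALUES (?, ?)", (title, blob_id))
    return blob_id

def get_doc(title: str) -> bytes:
    """Resolve the GUID via the DB, then fetch the bytes from the store."""
    (blob_id,) = db.execute(
        "SELECT blob_id FROM docs WHERE title = ?", (title,)
    ).fetchone()
    return (blob_store / blob_id).read_bytes()

put_doc("old-contract.doc", b"scanned contract")
```

The content database grows only by one small row per document, which is why it stays many times smaller than the blob store itself.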