SQL SERVER – BLOB – Pointer to Image, Image in Database, FILESTREAM Storage

When it comes to storing images in a database, there are two common methods. I had previously blogged about this subject during my visit to Toronto. With SQL Server 2008, we have a new method: FILESTREAM storage. However, the answer to when to use FILESTREAM and when to use the other methods is still vague in the community.

Let us look into the two traditional methods first, along with their advantages and disadvantages.

Method 1) Store image in filesystem and store pointer in database

This is quite an old method, and you can find it implemented in many places, even though SQL Server 2005 was released years ago.
The advantage of this method is that there is no limit on the size of the image files, and it is the fastest way to retrieve images.
The drawback of this method is that application and database security act separately, and the application/OS has to back up the images on its own.
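A minimal sketch of Method 1, using a hypothetical ProductImages table — the database stores only the pointer (path), while the file itself lives on disk:

```sql
-- Method 1 (sketch): only the pointer to the file lives in the database.
-- Table, column, and path names are illustrative.
CREATE TABLE dbo.ProductImages
(
    ImageID  INT IDENTITY(1,1) PRIMARY KEY,
    FileName NVARCHAR(260) NOT NULL,  -- e.g. N'product_42.jpg'
    FilePath NVARCHAR(400) NOT NULL   -- e.g. N'\\fileserver\images\product_42.jpg'
);

-- The application writes the file to disk itself, then records the pointer:
INSERT INTO dbo.ProductImages (FileName, FilePath)
VALUES (N'product_42.jpg', N'\\fileserver\images\product_42.jpg');
```

Note that with this design, backups of the image files themselves must be handled separately at the OS level.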

Method 2) Store images in a database
SQL Server 2005 introduced the varbinary(max) data type, which allows images (binary data) to be stored directly in the database.
The advantages of this method mirror the disadvantages of Method 1: security, permissions, and backups are handled by the database.
The disadvantages of this method mirror the advantages of Method 1: retrieval is slower, and the maximum size is limited to 2 GB.
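A minimal sketch of Method 2, storing the binary data itself in a varbinary(max) column (table, column, and file names are hypothetical):

```sql
-- Method 2 (sketch): the binary data itself is stored in the database.
CREATE TABLE dbo.ProductImagesInDb
(
    ImageID   INT IDENTITY(1,1) PRIMARY KEY,
    FileName  NVARCHAR(260) NOT NULL,
    ImageData VARBINARY(MAX) NOT NULL  -- capped at 2 GB
);

-- OPENROWSET ... SINGLE_BLOB reads a file from disk into the column:
INSERT INTO dbo.ProductImagesInDb (FileName, ImageData)
SELECT N'product_42.jpg', BulkColumn
FROM OPENROWSET(BULK N'C:\images\product_42.jpg', SINGLE_BLOB) AS img;
```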

Observing Method 1 and Method 2, it is clear that they complement each other, yet each is a very powerful method on its own.

Although both methods are very good, each has limitations that only the other can cover. This is where FILESTREAM storage comes into action.

FILESTREAM storage can hold BLOBs (Binary Large OBjects), such as image files larger than 2 GB, within the database itself. FILESTREAM data is backed up as part of the regular database backup, and the security of the BLOB is maintained along with the database application. On top of all this, access is as fast as the filesystem. This is what makes it superior to the other methods described earlier.
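A sketch of how a FILESTREAM-enabled database and table might be created. All names and paths below are hypothetical; FILESTREAM must also be enabled at the Windows level via SQL Server Configuration Manager:

```sql
-- Enable FILESTREAM T-SQL and Win32 streaming access at the instance level.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- Create a database with a FILESTREAM filegroup (paths are hypothetical;
-- the FILESTREAM FILENAME is a folder that SQL Server will manage).
CREATE DATABASE ImageStore
ON PRIMARY
    (NAME = ImageStore_data, FILENAME = 'C:\Data\ImageStore.mdf'),
FILEGROUP FsGroup CONTAINS FILESTREAM
    (NAME = ImageStore_fs, FILENAME = 'C:\Data\ImageStoreFs')
LOG ON
    (NAME = ImageStore_log, FILENAME = 'C:\Data\ImageStore.ldf');

-- A FILESTREAM table requires a ROWGUIDCOL column
-- and stores the BLOB in a varbinary(max) FILESTREAM column.
CREATE TABLE dbo.FileStreamTable
(
    RowID    UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName NVARCHAR(260) NOT NULL,
    FileData VARBINARY(MAX) FILESTREAM NULL
);
```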

Summary: We can use FILESTREAM storage to overcome the traditional limitations of storing BLOBs in SQL Server as varbinary or image columns.

Reference : Pinal Dave (https://blog.sqlauthority.com)


21 Comments

  • Again, a well-organized explanation.
    Please help us with an example.



  • You forgot to mention another disadvantage of storing images in the database, and that’s partitioning.

    Once your server starts feeling the load, you need to add another database server. That can happen quicker than you think. If you store images in the filesystem, you can just add a lightweight web server that's responsible for serving only images. You can even offload the images to a CDN.

    Another disadvantage is the relatively long connections you keep open in your database. Unless the database does some prefetching to memory, you need to keep the connection open while the user downloads the file. And if the database does prefetch the image into memory, that increases the memory requirements of your database server.

  • Jacob Sebastian
    August 1, 2009 10:51 pm

    And for those using SQL Server Express, the 4 GB limit per database limitation does not apply to FILESTREAM data.

    Also, FILESTREAM data can be stored in a compressed disk volume as well. FILESTREAM data can be used in Full-Text Indexing, and in replication as well as log shipping.

    However, you cannot do database mirroring on a database that uses the FILESTREAM feature.

  • Hey Pinal,

    I have a question.

    Can we ALTER a database to enable the FILESTREAM option?
    Please let me know.


  • Jacob Sebastian
    August 19, 2009 9:30 pm

    Yes, you can alter a non-FILESTREAM database, add a FILESTREAM filegroup, and make it a FILESTREAM-enabled database.
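    The steps in Jacob's answer can be sketched as follows (the database name and path are hypothetical):

```sql
-- Add a FILESTREAM filegroup to an existing database...
ALTER DATABASE ExistingDb
ADD FILEGROUP FsGroup CONTAINS FILESTREAM;

-- ...then add a container (an NTFS folder managed by SQL Server) to it.
ALTER DATABASE ExistingDb
ADD FILE (NAME = ExistingDb_fs, FILENAME = 'C:\Data\ExistingDbFs')
TO FILEGROUP FsGroup;
```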

  • @Jacob
    Thanks for your reply.

  • Hi, All

    Do I get any performance gain if I create the table containing large binary data in a separate filegroup?

  • @Saud,
    There is every chance that you will get better performance if the separate filegroup is on a separate disk volume. The performance gain comes from being able to do parallel reads and writes.

  • I have a T-SQL query that dumps data and images in XML format. Images vary substantially in size due to a lack of restrictions on image uploads to the DB.

    I would like to extract all DB-resident images and force them to a certain height/width as part of the XML extraction.

    I know there are many ways to do this, but we would like to keep it internal to the DB query if possible.


  • I have a dilemma.

    I am designing a database for an application that stores 20,000 images (100–200 KB each).

    On the one hand, the recommendation is not to use FILESTREAM if the file size is smaller than 250 KB; on the other hand, I have 20,000 images a day, which would require an enormous DB size…

    So now the question is: should those images be saved as varbinary(max) BLOBs, or should we use the FILESTREAM attribute?

    Images are not accessed frequently after they are saved.


  • Hi Pinal ,

    Does FILESTREAM support replication?

  • jacob sebastian
    January 4, 2010 5:40 pm

    FILESTREAM columns can be replicated. However, the subscribers need to be running SQL Server 2008.

  • Hi Pinal

    I have come to know that the security of files saved on the local drive using FILESTREAM is managed by SQL Server itself.

    However, when I deleted a file directly from NTFS, it allowed me to delete the file. Then, when I executed the query SELECT * FROM dbo.FileStreamTable,
    it gave me this error:

    Msg 233, Level 20, State 0, Line 0
    A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 0 – No process is on the other end of the pipe.)

    Why was I able to delete the file from the local path (shouldn't SQL Server have prevented the deletion, since the security is managed by FILESTREAM)?


  • jacob sebastian
    January 15, 2010 3:00 pm

    This is a known behavior. SQL Server currently does not restrict unauthorized access to the files through Explorer or the command line. Deleting/editing the files directly is not recommended, and it can corrupt the database.

    Consider it as bad as modifying the mdf/ldf files. SQL Server does not allow you to modify those while the server is up and running, because the files are opened exclusively by SQL Server. In the case of FILESTREAM data, SQL Server opens a FILESTREAM data file only when needed. It does not open all the files and keep handles open, as that would be a huge waste of resources.

    FILESTREAM data is supposed to be accessed through T-SQL or through the API exposed by SQL Server. When the data is accessed through these methods, the permissions set by the SQL administrator come into the picture, and that is the security part mentioned in the post.

    In a real-world environment, a Windows server administrator (not the SQL administrator) might protect the FILESTREAM folder against unauthorized direct harmful access.
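    The supported access paths Jacob describes can be sketched like this, assuming the hypothetical dbo.FileStreamTable with a FILESTREAM column named FileData — plain T-SQL reads the BLOB directly, or the managed path plus a transaction context are handed to the Win32 streaming API (SqlFileStream in .NET):

```sql
-- Supported access 1: plain T-SQL reads the BLOB like any other column.
SELECT FileName, FileData
FROM dbo.FileStreamTable;

-- Supported access 2: for Win32 streaming (e.g. SqlFileStream), obtain the
-- managed path and a transaction context inside an open transaction.
BEGIN TRANSACTION;

SELECT FileData.PathName() AS ManagedPath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
FROM dbo.FileStreamTable;

COMMIT TRANSACTION;
```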

  • When using log shipping with some columns of FILESTREAM type, does the secondary server need access to the files stored on the primary server, or are the files transferred to the secondary server?

  • Hi Pinal,

    Does FILESTREAM work only with Integrated Security?


  • What is the best technique to store and retrieve images into/from a database? Please advise, thanks…

  • Hello Pinal Sir,
    Hope you are doing well.
    Sir, I have one table, and in this table I have one field with data type,
    but now my database size is increasing because of this table.
    Can you please advise me on how to decrease the database size?
    Thanks and regards,
    Raj Thapliyal

  • Hey Pinal,

    Nice article. I have always used your blog to find answers to most of my SQL questions.

    I have a question on the filestream backup.

    If I am not mistaken, a FILESTREAM backup backs up the data as well as the filesystem managed under FILESTREAM. So, in essence, as the number of files keeps growing, the time to do a complete backup, and then a complete restore, keeps growing.

    What would be the right approach, then, if I anticipate my files will keep growing as part of document management? Even if I do a partial backup and back up the data and the filesystem separately, it would essentially boil down to Method 1: storing on the filesystem and storing the pointer in the database.

    Please let me know your thoughts. The concept of FILESTREAM is good, but there is hardly any mention of backups and restores from a practical standpoint.


