Greetings Arild!

Because opening a file in Java can be an expensive (time-wise) operation, the TDS employs a file handle cache. What you are likely seeing is a result of that particular cache. The cache is controlled by settings in threddsConfig.xml, which are outlined here:

https://www.unidata.ucar.edu/software/tds/current/reference/ThreddsConfigXMLFile.html#RafCache

The default behavior is:

* If the number of open files goes over 500 OR every 11 minutes (whichever happens first), reduce the number of file handles in the cache to a total of 400, removing the least recently used first.

Appropriate values for this setting are very much server dependent. If you want to be sure files are released as soon as possible, you can disable the file handle cache altogether by setting the maximum number to zero (which will impact server performance). Otherwise, the open question revolves around the balance between:

1. how many files are repeatedly accessed over a given period of time, and
2. how often those files are deleted.

Again, that's highly dependent on the server and the typical access patterns of the users.

There is also a NetcdfFile object cache that works the same way but has a different default behavior:

* If the number of NetcdfFile objects goes over 150 OR every 12 minutes (whichever happens first), reduce the number of NetcdfFile objects in the cache to a total of 100, removing the least recently used first.

Each NetcdfFile object, I believe, corresponds to one or more files (an aggregation, for example, may span more than one actual file), so there could be situations where the settings of these two caches, combined with usage patterns, result in unexpected behavior with regard to changing data files on disk. (A sketch of the relevant threddsConfig.xml entries is included after the quoted message below.)

In any case, the cache-related links in the admin/debug interface of the TDS might be helpful:

https://www.unidata.ucar.edu/software/tds/current/reference/RemoteManagement.html

Cheers,

Sean

On Fri, Apr 3, 2020 at 7:41 AM Arild Burud <arildb@xxxxxx> wrote:

> We are using TDS 4.6.14 and I have recently become aware of a problem:
> we use an NFS server for our repository of netCDF files, but in some
> datasets older files are frequently deleted to make space for more
> recent files, some as frequently as several times per day.
>
> Now we see that the TDS server keeps file handles open after files have
> been deleted, and I can see messages like the following in
> threddsServlet.log:
>
> ... ERROR - ucar.nc2.util.cache.FileCache - FileCache RandomAccessFile
> close failed on foobar.nc
>
> The file handle to foobar.nc is then kept open until the next restart of
> the TDS server. To make this worse, the file server behind the NFS will
> not (permanently) delete files from the disks before all file handles
> are closed, so these files keep filling up disk space long after
> deletion.
>
> This failure to close files seems like a bug in the TDS; has anyone seen
> this before and found a solution to avoid it?
>
> Arild Burud
> MET Norway
>
> Arild.Burud@xxxxxx - Senior Engineer - https://met.no
> MET Norway, Box 43 Blindern, NO-0313 Oslo
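For reference, here is a sketch of what the relevant cache entries in threddsConfig.xml might look like with the defaults described above. The element names (RandomAccessFile, NetcdfFileCache, minFiles, maxFiles, scour) are taken from the ThreddsConfigXMLFile reference page linked earlier rather than from a live configuration, so please check them against the documentation for your TDS version before applying:

  <!-- File handle (RandomAccessFile) cache: when the number of open
       handles exceeds maxFiles, or on each scour interval, evict the
       least recently used handles until minFiles remain. Per the note
       above, setting the maximum to zero disables this cache entirely,
       at a cost in server performance. -->
  <RandomAccessFile>
    <minFiles>400</minFiles>
    <maxFiles>500</maxFiles>
    <scour>11 min</scour>
  </RandomAccessFile>

  <!-- NetcdfFile object cache: same eviction policy, different defaults.
       One NetcdfFile object may correspond to more than one file on disk
       (aggregations, for example). -->
  <NetcdfFileCache>
    <minFiles>100</minFiles>
    <maxFiles>150</maxFiles>
    <scour>12 min</scour>
  </NetcdfFileCache>

A smaller maxFiles or a shorter scour interval would release handles to deleted files sooner, at the cost of reopening frequently accessed files more often.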