Java Heap Space (CMS with huge files)


2

EDIT:

Got the directory working. Now there's another issue:

The files in the storage are saved with their DB id as a prefix to their file names. Of course I don't want users to see those prefixes.

Is there a way to combine the response redirect with setting the headers for filename and size?
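
For what it's worth, a redirect alone won't do it: headers set on a 302 response don't apply to the request the browser then makes for the redirect target. The usual route is to keep streaming the file and put only the user-facing name into the Content-Disposition header. A minimal sketch; the "<dbId>_" naming scheme and the paths here are assumptions, not taken from the post:

// Sketch only: prefix convention and paths are assumed, not from the original post.
File storageDir    = new File("/data/cms-storage");              // hypothetical storage location
String storedName  = "4711_report.pdf";                          // name on disk, DB id as prefix
String displayName = storedName.substring(storedName.indexOf('_') + 1); // name the user sees
File file          = new File(storageDir, storedName);

response.setContentType("application/octet-stream");
response.setContentLength((int) file.length());                  // size for the client
response.setHeader("Content-Disposition",
                   "attachment;filename=\"" + displayName + "\"");
// ...then copy the file to response.getOutputStream() as in the download code further down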

best,

     A

Hi again,

new approach:

Is it possible to create an IIS-like virtual directory within Tomcat, in order to avoid streaming and only make use of a header redirect? I played around with contexts but couldn't get it going...
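
For reference, the closest Tomcat equivalent to an IIS virtual directory is a separate context whose docBase points at the storage folder. A minimal sketch of such a context descriptor; the file name and paths are made up, and Tomcat derives the /files URL path from the descriptor's name:

<!-- conf/Catalina/localhost/files.xml : exposes the storage directory under /files -->
<!-- docBase is a placeholder; point it at the real storage path -->
<Context docBase="/data/cms-storage" />

Tomcat's default servlet would then stream the files itself, although the DB-id prefixes in the stored file names would still be visible unless the links are rewritten.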

any ideas?

    thx

A

Hi all,

I'm facing a weird issue with the Java heap space that has me close to the ropes.

The short version is:

I've written a ContentManagementSystem which also needs to handle huge files (>600 MB). Tomcat heap settings:

-Xmx700m -Xms400m

The issue is that uploading huge files works, even though it's slow, while downloading files results in a Java heap space exception.

Trying to download a 370 MB file makes Tomcat's heap jump to 500 MB (which should be OK) and ends in a Java heap space exception.

I don't get it: why does upload work but download not? Here's my download code:

byte[] byt = new byte[1024*1024*2];
int read;

response.setHeader("Content-Disposition", "attachment;filename=\"" + fileName + "\"");

FileInputStream fis = null;
OutputStream os = null;

fis = new FileInputStream(new File(filePath));
os = response.getOutputStream();

BufferedInputStream buffRead = new BufferedInputStream(fis);

while ((read = buffRead.read(byt)) > 0)
{
    os.write(byt, 0, read);
    os.flush();
}

buffRead.close();
os.close();

If I'm getting it right, the buffered reader should take care of any memory issues, right?

Any help would be highly appreciated, since I've run out of ideas.

Best regards,

W

2009-06-16 08:38
by NoName
Hi all,

and many thx for the useful hints. I've played around a lot and think that the problem relates to JSF, which buffers the whole downloadable content before prompting the user with the 'Save as' dialog. Is there a way to tell JSF not to do so?

Maybe some ExtensionsFilter tweaking?
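
One approach that often comes up for this is to write the file directly in the action method and then tell JSF that the response is already complete, so the framework skips its own render (and buffering) phase. A minimal sketch, assuming the download is triggered from a managed-bean action method; the paths and names are made up:

// Hypothetical managed-bean action; the key call is FacesContext.responseComplete().
public String download() throws IOException {
    FacesContext ctx = FacesContext.getCurrentInstance();
    HttpServletResponse response =
            (HttpServletResponse) ctx.getExternalContext().getResponse();

    File file = new File("/data/cms-storage/4711_report.pdf"); // placeholder path

    response.setContentType("application/octet-stream");
    response.setContentLength((int) file.length());
    response.setHeader("Content-Disposition", "attachment;filename=\"report.pdf\"");

    InputStream in = new FileInputStream(file);
    OutputStream out = response.getOutputStream();
    try {
        byte[] buf = new byte[8 * 1024];
        int read;
        while ((read = in.read(buf)) > 0) {
            out.write(buf, 0, read);
        }
        out.flush();
    } finally {
        in.close();
    }

    ctx.responseComplete(); // JSF skips the render phase for this request
    return null;            // stay on the current view
}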

thx again and best regards

    A
- NoName 2009-06-16 16:39
Were you able to fix the issue? - Chinmoy 2016-06-07 05:28


5

If I'm getting it right, the buffered reader should take care of any memory issues, right?

No, that has nothing to do with memory issues, it's actually unnecessary since you're already using a buffer to read the file. Your problem is with writing, not with reading.

I can't see anything immediately wrong with your code. It looks as though Tomcat is buffering the entire response instead of streaming it. I'm not sure what could cause that.

What does response.getBufferSize() return? You should also try calling response.setContentLength() with the file's size; I vaguely remember that a web container under certain circumstances buffers the entire response in order to determine the content length, so maybe that's what's happening. It's good practice to set it anyway, since it enables clients to display the download size and give an ETA for the download.
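
A minimal sketch of both checks, reusing filePath and fileName from the question:

File file = new File(filePath);

System.out.println("response buffer size: " + response.getBufferSize()); // see what Tomcat allocates
response.setContentLength((int) file.length()); // announce the size up front
response.setHeader("Content-Disposition", "attachment;filename=\"" + fileName + "\"");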

2009-06-16 08:53
by Michael Borgwardt
+1 for setContentLength - dfa 2009-06-16 09:52
Tried this one. The buffer is 8192; setting the content length did not have any impact. thx anyway - NoName 2009-06-16 16:43


1

Try using the setBufferSize and flushBuffer methods of the ServletResponse.
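
A minimal sketch of what that would look like, assuming it runs before anything has been written to the response body (setBufferSize() only works on an uncommitted response):

response.setBufferSize(8 * 1024); // must be called before any content is written
// ... set headers and write part of the file ...
response.flushBuffer();           // commits the response and sends the buffered bytes to the client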

2009-06-16 09:09
by kgiannakakis


1

You'd better use java.nio for that, so you can read the resource in parts and free what has already been streamed!

Otherwise you end up with memory problems despite the settings you've applied to the JVM environment.

2009-06-16 09:14
by cafebabe


1

My suggestions:

The Quick-n-easy: Use a smaller array! Yes, it loops more, but this will not be a problem. 5 kilobytes is just fine. You'll know if this works adequately for you in minutes.

byte[] byt = new byte[1024*5];

A little bit harder: If you have access to sendfile (like in Tomcat with the Http11NioProtocol -- documentation here), then use it.

A little bit harder, again: Switch your code to Java NIO's FileChannel. I have very, very similar code running on equally large files with hundreds of concurrent connections and similar memory settings with no problem. NIO is faster than plain old Java streams in these situations. It uses the magic of DMA (Direct Memory Access), allowing the data to go from disk to the NIC without being copied through the JVM heap. Here is a code snippet from my own code base...I've ripped out much to show the basics. FileChannel.transferTo() is not guaranteed to send every byte, so it is in this loop.

WritableByteChannel destination = Channels.newChannel(response.getOutputStream());
FileChannel         source      = file.getFileInputStream().getChannel();

long start  = 0;              // offset within the file
long length = source.size();  // total bytes to send
long total  = 0;              // bytes sent so far

// transferTo() may send fewer bytes than requested, hence the loop
while (total < length) {
    long sent = source.transferTo(start + total, length - total, destination);
    total += sent;
}
2009-06-16 09:23
by Stu Thompson
Hi Stu,

I tried your 'hardest' code. ;) Sadly I still experience heap exceptions. I'm pretty sure this is a configuration issue. Since I have JSF on board, I could imagine that the framework interferes with my Java settings. The exception always occurs on the first run of the iteration, no matter whether I use your code or mine. Do you have any idea which config option might be the faulty one?

tia

     A
< - NoName 2009-06-17 13:52
@Andras: If you are still having heap exhaustion issues then I don't think this loop is the source of your problem. 99.87% sure of this. Add -XX:+HeapDumpOnOutOfMemoryError to your JVM start-up parameters, make the application fail, and analyze the output. - Stu Thompson 2009-06-18 07:06


1

The following code is able to stream data to the client while allocating only a small buffer (BUFFER_SIZE; this is a tuning point, since you may want to adjust it):

private static final int OUTPUT_SIZE = 1024 * 1024 * 50; // 50 MB
private static final int BUFFER_SIZE = 4096;

@Override
protected void doGet(HttpServletRequest request,HttpServletResponse response) 
                     throws ServletException, IOException {
    String fileName = "42.txt";

    // build response headers
    response.setStatus(200);
    response.setContentLength(OUTPUT_SIZE);
    response.setContentType("text/plain");
    response.setHeader("Content-Disposition", 
                        "attachment;filename=\"" + fileName + "\"");
    response.flushBuffer(); // write HTTP headers to the client

    // streaming result
    InputStream fileInputStream = new InputStream() { // fake input stream
        int i = 0;

        @Override
        public int read() throws IOException {
            if (i++ < OUTPUT_SIZE) {
                return 42;
            } else {
                return -1;
            }
        }
    };

    ReadableByteChannel input = Channels.newChannel(fileInputStream);
    WritableByteChannel output = Channels.newChannel(
                                    response.getOutputStream());
    ByteBuffer buffer = ByteBuffer.allocate(BUFFER_SIZE);

    while (input.read(buffer) != -1) {
        buffer.flip();
        output.write(buffer);
        buffer.clear();
    }

    input.close();
    output.close();
}
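In a real download, the fake InputStream above would of course be replaced by a FileInputStream (or a FileChannel) and OUTPUT_SIZE by the file's actual length; the streaming loop itself stays the same.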
2009-06-16 09:31
by dfa


1

Are you required to serve files using Tomcat? For this kind of task we have used a separate download mechanism. We chained Apache -> Tomcat -> storage and then added rewrite rules for downloads. That way you bypass Tomcat and Apache serves the file to the client (Apache -> storage). But it only works if you have the files stored as files; if you read from a DB or another kind of non-file storage, this solution cannot be used. The overall scenario is that you generate download links for files as e.g. domain/binaries/xyz... and write a redirect rule for domain/files using Apache mod_rewrite.

2009-06-16 11:13
by Rastislav Komara


0

Do you have any filters in the application, or do you use the tcnative library? You could try profiling it with jvisualvm.

Edit: Small remark: note that you have an HTTP response splitting attack possibility in the setHeader call if you do not sanitize fileName.
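
A minimal way to defuse that, assuming fileName can contain user-controlled data: strip CR/LF (and the quote character, which would break the quoted header value) before using it:

String safeName = fileName.replaceAll("[\\r\\n\"]", "_"); // remove header-injection characters
response.setHeader("Content-Disposition", "attachment;filename=\"" + safeName + "\"");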

2009-06-16 08:46
by akarnokd


0

Why don't you use Tomcat's own FileServlet?

It can surely serve files much better than you could possibly imagine.

2009-06-16 09:37
by alamar
What do you mean by that? I did not find the right thing on Google, I guess. ;) - NoName 2009-06-16 16:43
Tomcat has a default servlet called FileServlet or whatever. If you figure out its full name you can instantiate it, init() it, and ask it to serve files for you. Or just include() paths to files if they're under the web-app root; the file servlet would serve them. - alamar 2009-06-16 17:39
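
A minimal sketch of the include()/forward() variant from the comment, assuming the files sit under the web-app root in a /files folder (that folder name is made up); the container's default servlet then does the actual streaming:

// Hand the request over to the container's default servlet; "/files/" is a hypothetical folder under the web-app root.
RequestDispatcher dispatcher = request.getRequestDispatcher("/files/" + fileName);
dispatcher.forward(request, response);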


-1

A 2-MByte buffer is way too large! A few KB should be ample. Megabyte-sized objects are a real issue for the garbage collector, since they often need to be treated separately from "normal" objects (normal == much smaller than a heap generation). To optimize I/O, your buffer only needs to be slightly larger than your I/O buffer size, i.e. at least as large as a disk block or network packet.

2009-06-16 10:02
by mfx