Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.

Page 96/126 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering, what is the longest possible name length allowed by the Windows kernel? E.g.: I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored inside a USHORT, that allows for a maximum path length of 2^15 - 1 characters. Is there a similar, hard restriction on a file name (rather than path)? (I don't care if NTFS or FAT32 imposes a particular restriction; I'm looking for the longest possible theoretically allowed name in the kernel, assuming no additional file system or shell restrictions.) (Edit: For those wondering why this even matters, consider that normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given the function named NtQueryDirectoryFile, which is the underlying system call and which returns multiple file names per call, it's actually possible to take advantage of this maximum-length restriction on the path to make an extremely-fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)
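    A minimal sketch of the arithmetic behind that, assuming only the documented UNICODE_STRING layout (Length is a byte count held in a USHORT, and each UTF-16 unit is two bytes); the separate per-component name limit the question asks about is not answered here:

        #include <cstdint>
        #include <cstdio>

        // Mirrors the documented layout of the kernel's UNICODE_STRING:
        // Length and MaximumLength are byte counts stored in 16-bit fields,
        // and the buffer is not necessarily NUL-terminated.
        struct UnicodeString {
            std::uint16_t Length;
            std::uint16_t MaximumLength;
            wchar_t*      Buffer;
        };

        int main() {
            constexpr unsigned maxBytes = 0xFFFF;       // largest USHORT value
            constexpr unsigned maxChars = maxBytes / 2; // 32767 == 2^15 - 1
            std::printf("longest path a UNICODE_STRING can describe: %u characters\n", maxChars);
        }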

    Read the article

  • Writing my own iostream utility class: Is this a good idea?

    - by Alex
    I have an application that wants to read word by word, delimited by whitespace, from a file. I am using code along these lines: std::istream in; string word; while (in.good()) { in>>word; // Processing, etc. ... } My issue is that the processing on the words themselves is actually rather light. The major time consumer is a set of mySQL queries I run. What I was thinking is writing a buffered class that reads something like a kilobyte from the file, initializes a stringstream as a buffer, and performs extraction from that transparently to avoid a great many IO operations. Thoughts and advice?
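    A rough sketch of that approach, assuming a plain std::ifstream source; BufferedWordReader is a made-up name, and it keeps any word split at a block boundary for the next refill:

        #include <cstddef>
        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        // Pulls ~1 KiB at a time from the underlying stream and serves words
        // out of an in-memory std::istringstream, so operator>> does not hit
        // the file for every single word.
        class BufferedWordReader {
        public:
            explicit BufferedWordReader(std::istream& in) : in_(in) {}

            bool next(std::string& word) {
                while (!(buf_ >> word)) {          // buffer exhausted: refill
                    if (!refill()) return false;
                }
                return true;
            }

        private:
            bool refill() {
                char block[1024];
                in_.read(block, sizeof block);
                std::streamsize got = in_.gcount();
                if (got <= 0) {
                    if (carry_.empty()) return false;
                    buf_.clear();
                    buf_.str(carry_);              // flush the final partial word
                    carry_.clear();
                    return true;
                }
                std::string chunk = carry_ + std::string(block, static_cast<std::size_t>(got));
                std::string::size_type cut = chunk.find_last_of(" \t\n\r");
                if (cut == std::string::npos) {    // no whitespace yet: keep everything
                    carry_ = chunk;
                    chunk.clear();
                } else {                           // keep the trailing partial word
                    carry_ = chunk.substr(cut + 1);
                    chunk.resize(cut + 1);
                }
                buf_.clear();
                buf_.str(chunk);
                return true;
            }

            std::istream&      in_;
            std::istringstream buf_;
            std::string        carry_;
        };

        int main() {
            std::ifstream file("words.txt");       // hypothetical input file
            BufferedWordReader reader(file);
            std::string word;
            while (reader.next(word))
                std::cout << word << '\n';
        }

    That said, if the MySQL queries dominate the runtime, buffering the reads will shave off very little; batching the queries is likely the bigger win.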

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file itself loads into memory quickly. In other words, if I run: import cPickle as pickle f = open("bigNetworkXGraph.pickle","rb") binary_data = f.read() # This part doesn't take long graph = pickle.loads(binary_data) # This takes ages How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux /proc/meminfo shows a number of memory usage statistics.
        MemTotal:   4040732 kB
        MemFree:      23160 kB
        Buffers:     163340 kB
        Cached:     3707080 kB
        SwapCached:       0 kB
        Active:     1129324 kB
        Inactive:   2762912 kB
    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (which belongs to "cached" and "active") and inactive page cache ("inactive" + "cached"). What I want to do is measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first, I was inclined to use "free" + "inactive", but Linux's "free" utility uses "free" + "cached" in its "buffer-adjusted" display, so I am curious what a better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric to measure available memory?
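    As a rough illustration (not an answer to the drop-priority question), here is a sketch that parses /proc/meminfo and reports both the classic free + buffers + cached estimate and, on kernels new enough to provide it (3.14+), the MemAvailable line, which is the kernel's own estimate of how much memory can be reclaimed without noticeable impact:

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream meminfo("/proc/meminfo");
            std::map<std::string, long long> kb;    // field name -> value in kB
            std::string line;
            while (std::getline(meminfo, line)) {
                std::istringstream fields(line);
                std::string key;
                long long value = 0;
                if (fields >> key >> value) {
                    if (!key.empty() && key.back() == ':') key.pop_back();
                    kb[key] = value;
                }
            }

            long long naive = kb["MemFree"] + kb["Buffers"] + kb["Cached"];
            std::cout << "free + buffers + cached: " << naive << " kB\n";

            // MemAvailable already discounts pages that cannot be dropped cheaply.
            if (kb.count("MemAvailable"))
                std::cout << "MemAvailable:            " << kb["MemAvailable"] << " kB\n";
        }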

    Read the article

  • nxhtml and geben: debug mode stops responding to keystrokes upon step into html/php mixed line

    - by artistoex
    I'm using the PHP debugger geben and nxhtml-mode. While debugging, as soon as I step into a mixed line such as <foo><?php bar(); ?></foo>, the debugger no longer accepts any keystrokes. However, the mode line still indicates the debugger's presence (*debugging* entry). I guess this is due to nxhtml's mode changes, because it's the exact same behavior geben shows after disabling and re-enabling it. Does anybody use nxhtml together with geben and has fixed this? Or is it possible to configure Emacs to enable nxhtml conditionally, so that php-mode is used instead when the buffer was opened by geben?

    Read the article

  • Double Buffering with AWT

    - by DDP
    Is double buffering (in Java) possible with AWT? I'm aware that Swing should not be mixed with AWT, so I can't use BufferStrategy and whatnot. If double buffering is possible with AWT, do I have to write the buffer by hand? Unlike Swing, AWT doesn't seem to have the same built-in double-buffering capability. If I do have to write the code by hand, is there a good tutorial to look at? Or is it just easier/advisable for a novice programmer to use Swing instead? Sorry about the multi-part question. Thanks for your time :)

    Read the article

  • SSIS process files from folder

    - by RT
    Background: I have a folder that gets pumped with files continuously. My SSIS package needs to process the files and delete them. The SSIS package is scheduled to run once every minute. I'm picking up the files in ascending order of file creation time, building an array of files, and then processing and deleting them one at a time. Problem: If an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance has in its buffer. By the time the second instance of the package gets around to processing a file, it may already have been deleted by the first instance, creating an exception condition. I was wondering whether there is a way to avoid the exception condition. Thanks.

    Read the article

  • "Streaming" MJPG using python.

    - by tyler
    I have a webcam that I want to do some image processing on using Python. It's coming through as a Motion-JPEG. I want to try to process the stuff "live," but really what I want to do is this:
        1. Open the URL and start streaming data into some buffer
        2. Read x bytes (where x is the image size) into an image
        3. Process that image
        4. Display it in the result panel
        5. Return to step 2
    The problem is that, while I do have the resolution, I have no idea how many bytes to read. I've tried googling the M-JPEG specification but can't find anything on whether the images are separated by some header or what. Anybody have any ideas?

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is: SELECT table1.term, table2.keyword FROM table1 INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%'); This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is larger than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8 GB of RAM, fine-tuned for the MySQL workload.

    Read the article

  • What is the return value of BeautifulSoup.find?

    - by prosseek
    I run the following to get some value as score: score = soup.find('div', attrs={'class' : 'summarycount'}) Running 'print score' gives the following: <div class="summarycount">524</div> I need to extract the number part. I used the re module but it failed: m = re.search("[^\d]+(\d+)", score) TypeError: expected string or buffer function search in re.py at line 142 return _compile(pattern, flags).search(string) What's the return type of the find function? How do I get the number from the score variable? Is there any easy way to make BeautifulSoup return the value (in this case 524) itself?

    Read the article

  • How to play multiple online videos on iOS continuously

    - by Matt.Z
    The scenario is like this: I have a long video and slice it into small files (MP4, for example 5 minutes per file), then put them on a website. I want to play these MP4 videos continuously on iOS, one by one, without letting the user feel a pause between the video pieces. So I need to buffer the next video while I play the current one. But I don't know where to start. What should I do? Can anyone point me to related documentation or source code I can study?

    Read the article

  • Open a file with su/sudo inside Emacs

    - by Chris Conway
    Suppose I want to open a file in an existing Emacs session using su or sudo, without dropping down to a shell and doing sudoedit or sudo emacs. One way to do this is (require 'tramp) C-x C-f /sudo::/path/to/file but this requires an expensive round-trip through SSH. Is there a more direct way? [EDIT] @JBB is right. I want to be able to invoke su/sudo to save as well as open. It would be OK (but not ideal) to re-authorize when saving. What I'm looking for are variations of find-file and save-buffer that can be "piped" through su/sudo.

    Read the article

  • Considering getting into reverse engineering/disassembly

    - by Zombies
    Assuming a decent understanding of assembly on common CPU architectures (e.g. x86), how can one explore a potential path (career, fun and profit, etc.) into the field of reverse engineering? There are so few educational guides out there that it is difficult to understand what potential uses this has today (e.g. is searching for buffer overflow exploits still common, or do stack monitoring programs make this obsolete?). I am not looking for a step-by-step program, just some relevant information such as tips on how to efficiently find a specific area of a program, basic things in the trade, as well as what it is currently being used for today. So to recap: what current uses does reverse engineering have today? And how can one find some basic information on how to learn the trade (again, it doesn't have to be step by step; anything that can throw out a clue would be helpful)?

    Read the article

  • C++ to VB.NET, problem with callback function

    - by johan
    I'm having a hard time here trying to find a solution for my problem. I'm trying to convert a client API funktion from C++ to VB.NET, and i think have some problems with the callback function. parts of the C++ code: typedef struct{ BYTE m_bRemoteChannel; BYTE m_bSendMode; BYTE m_nImgFormat; // =0 cif ; = 1 qcif char *m_sIPAddress; char *m_sUserName; char *m_sUserPassword; BOOL m_bUserCheck; HWND m_hShowVideo; }CLIENT_VIDEOINFO, *PCLIENT_VIDEOINFO; CPLAYER_API LONG __stdcall MP4_ClientStart(PCLIENT_VIDEOINFO pClientinfo,void(CALLBACK *ReadDataCallBack)(DWORD nPort,UCHAR *pPacketBuffer,DWORD nPacketSize)); void CALLBACK ReadDataCallBack(DWORD nPort,UCHAR *pPacketBuffer,DWORD nPacketSize) { TRACE("%d\n",nPacketSize); } ..... aa5.m_sUserName = "123"; aa5.m_sUserPassword="w"; aa5.m_bUserCheck = TRUE; MP4_ClientSetTTL(64); nn1 = MP4_ClientStart(&aa5,ReadDataCallBack); if (nn1 == -1) { MessageBox("error"); return; } SDK description: MP4_ClientStart This function starts a connection. The format of the call is: LONG __stdcall MP4_ClientStart(PCLIENT_VIDEOINFO pClientinfo, void(*ReadDataCallBack)(DWORD nChannel,UCHAR *pPacketBuffer,DWORD nPacketSize)) Parameters pClientinfo holds the information. of this connection. nChannel holds the channel of card. pPacketBuffer holds the pointer to the receive buffer. nPacketSize holds the length of the receive buffer. Return Values If the function succeeds the return value is the context of this connection. If the function fails the return value is -1. Remarks typedef struct{ BYTE m_bRemoteChannel; BYTE m_bSendMode; BYTE m_bImgFormat; char *m_sIPAddress; char *m_sUserName; char *m_sUserPassword; BOOL m_bUserCheck; HWND m_hShowVideo; } CLIENT_VIDEOINFO, * PCLIENT_VIDEOINFO; m_bRemoteChannel holds the channel which the client wants to connect to. m_bSendMode holds the network mode of the connection. m_bImgFormat : Image format, 0 is main channel video, 1 is sub channel video m_sIPAddress holds the IP address of the server. m_sUserName holds the user’s name. m_sUserPassword holds the user’s password. m_bUserCheck holds the value whether sends the user’s name and password or not. m_hShowVideo holds Handle for this video window. If m_hShowVideo holds NULL, the client can be record only without decoder. If m_bUserCheck is FALSE, we will send m_sUserName and m_sUserPassword as NULL, else we will send each 50 bytes. The length of m_sIPAddress and m_sUserName must be more than 50 bytes. ReadDataCallBack: When the library receives a packet from a server, this callback is called. 
My VB.Net code: Imports System.Runtime.InteropServices Public Class Form1 Const WM_USER = &H400 Public Structure CLIENT_VIDEOINFO Public m_bRemoteChannel As Byte Public m_bSendMode As Byte Public m_bImgFormat As Byte Public m_sIPAddress As String Public m_sUserName As String Public m_sUserPassword As String Public m_bUserCheck As Boolean Public m_hShowVideo As Long 'hWnd End Structure Public Declare Function MP4_ClientSetNetPort Lib "hikclient.dll" (ByVal dServerPort As Integer, ByVal dClientPort As Integer) As Boolean Public Declare Function MP4_ClientStartup Lib "hikclient.dll" (ByVal nMessage As UInteger, ByVal hWnd As System.IntPtr) As Boolean <DllImport("hikclient.dll")> Public Shared Function MP4_ClientStart(ByVal Clientinfo As CLIENT_VIDEOINFO, ByRef ReadDataCallBack As CALLBACKdel) As Long End Function Public Delegate Sub CALLBACKdel(ByVal nPort As Long, <MarshalAs(UnmanagedType.LPArray)> ByRef pPacketBuffer As Byte(), ByVal nPacketSize As Long) Public Sub CALLBACK(ByVal nPort As Long, <MarshalAs(UnmanagedType.LPArray)> ByRef pPacketBuffer As Byte(), ByVal nPacketSize As Long) End Sub Public mydel As New CALLBACKdel(AddressOf CALLBACK) Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Dim Clientinfo As New CLIENT_VIDEOINFO() Clientinfo.m_bRemoteChannel = 0 Clientinfo.m_bSendMode = 0 Clientinfo.m_bImgFormat = 0 Clientinfo.m_sIPAddress = "193.168.1.100" Clientinfo.m_sUserName = "1" Clientinfo.m_sUserPassword = "a" Clientinfo.m_bUserCheck = False Clientinfo.m_hShowVideo = Me.Handle 'Nothing MP4_ClientSetNetPort(850, 850) MP4_ClientStartup(WM_USER + 1, Me.Handle) MP4_ClientStart(Clientinfo, mydel) End Sub End Class here is some other examples of the code in: C# http://blog.csdn.net/nenith1981/archive/2007/09/17/1787692.aspx VB ://read.pudn.com/downloads70/sourcecode/graph/250633/MD%E5%AE%A2%E6%88%B7%E7%AB%AF%28VB%29/hikclient.bas__.htm ://read.pudn.com/downloads70/sourcecode/graph/250633/MD%E5%AE%A2%E6%88%B7%E7%AB%AF%28VB%29/Form1.frm__.htm Delphi ://read.pudn.com/downloads91/sourcecode/multimedia/streaming/349759/Delphi_client/Unit1.pas__.htm

    Read the article

  • How to name variables which are structs

    - by evilpie
    Hello, I often work on private projects using the WinAPI, and as you might know, it has thousands of named and typedefed structs like MEMORY_BASIC_INFORMATION. I will stick to this one in my question: what is preferred, or better, when you want to name a variable of this type? Is there some kind of style guide for this case? For example, say I need that variable for the VirtualQueryEx function. Some ideas: MEMORY_BASIC_INFORMATION memoryBasicInformation; MEMORY_BASIC_INFORMATION memory_basic_information; Just use the name of the struct, not capitalized, with or without the underscores. MEMORY_BASIC_INFORMATION basicInformation; MEMORY_BASIC_INFORMATION information; Short form? MEMORY_BASIC_INFORMATION mbi; I often see this style, using the abbreviation of the struct name. MEMORY_BASIC_INFORMATION buffer; VirtualQueryEx defines the third parameter lpBuffer (where you pass the pointer to the struct), so using this name might be an idea, too. Cheers

    Read the article

  • Emacs: Define a function which loads the file where the function itself is defined

    - by damd
    I'm refactoring a bit in my Emacs set up and have come to the conclusion that I want to use a different init file than the default one. So basically, in my ~/.emacs file, I have this: (load "/some/directory/init.el") Up until now, that's been working just fine. However, now I want to redefine an old command that I've used for ages, which opens my init file: (defun conf () "Open a buffer with the user init file." (interactive) (find-file user-init-file)) As you can see, this will open ~/.emacs no matter what I do. I want it to open /some/directory/init.el, or wherever the conf command itself is defined. How would I do that?

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual Socket() { } private: struct Deleter; // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { // Close the connection. If an error occurs, delete the socket // and throw an exception. delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); // ... socket.reset();

    Read the article

  • Does malloc() allocate a contiguous block of memory?

    - by user66854
    I have a piece of code written by a very old-school programmer :-). It goes something like this: typedef struct ts_request { ts_request_buffer_header_def header; char package[1]; } ts_request_def; ts_request_buffer_def* request_buffer = malloc(sizeof(ts_request_def) + (2 * 1024 * 1024)); The programmer is basically working on a buffer-overflow concept. I know the code looks dodgy. So my questions are:
        1. Does malloc always allocate a contiguous block of memory? Because in this code, if the blocks are not contiguous, the code will fail big time.
        2. Doing free(request_buffer), will it free all the bytes allocated by malloc, i.e. sizeof(ts_request_def) + (2 * 1024 * 1024), or only the bytes of the size of the structure, sizeof(ts_request_def)?
        3. Do you see any evident problems with this approach? I need to discuss this with my boss and would like to point out any loopholes.
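    To the first two points: a single malloc call does hand back one contiguous block, and free(request_buffer) releases that whole block (header plus the extra 2 MB), because the allocator tracks the block size itself rather than relying on sizeof. A small sketch of the same over-allocation idiom (the classic "struct hack"; C99 flexible array members are the cleaner modern equivalent), with placeholder type names:

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        // Placeholder for the original ts_request_buffer_header_def.
        struct RequestHeader {
            unsigned length;
        };

        // Same "struct with a 1-byte tail array" idiom as in the question.
        struct Request {
            RequestHeader header;
            char          package[1];  // payload actually extends past the struct
        };

        int main() {
            const std::size_t payload = 2 * 1024 * 1024;

            // One malloc, one contiguous block: header immediately followed by payload.
            Request* req = static_cast<Request*>(std::malloc(sizeof(Request) + payload));
            if (!req) return 1;

            req->header.length = static_cast<unsigned>(payload);
            std::memset(req->package, 0, payload);  // the over-allocated bytes are ours to use

            // free() releases the entire allocation, sizeof(Request) + payload bytes,
            // not just sizeof(Request).
            std::free(req);
            std::printf("allocated and released %zu payload bytes\n", payload);
        }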

    Read the article

  • Interop Structure: Should Unsigned Short be Mapped to byte[]?

    - by Ngu Soon Hui
    I have this C++ structure: typedef struct _FILE_OP_BLOCK { unsigned short fid; // objective file ID unsigned short offset; // operating offset unsigned char len; // buffer length(update) // read length(read) unsigned char buff[MAX_BUFF_SIZE]; } FILE_OP_BLOCK; And now I want to map it in .NET. The tricky thing is that I should pass a 2-byte array for fid, and an integer for len, even though in C# fid is an unsigned short and len is an unsigned char. I wonder whether my structure (in C#) below is correct? public struct File_OP_Block { [MarshalAs(UnmanagedType.ByValArray, SizeConst = 2)] public byte[] fid; public ushort offset; public byte length; [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)] public char[] buff; }
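    One way to settle the mapping is to print the native layout and compare it with what Marshal.SizeOf reports for the managed struct. A sketch of that check on the C++ side, with MAX_BUFF_SIZE assumed to be 240 to match the SizeConst in the question (an unsigned short normally maps straight to ushort in C#; the 2-byte-array trick is only needed if the field must be handled as raw bytes):

        #include <cstddef>
        #include <cstdio>

        #define MAX_BUFF_SIZE 240   // assumed from the C# SizeConst above

        typedef struct _FILE_OP_BLOCK {
            unsigned short fid;     // objective file ID
            unsigned short offset;  // operating offset
            unsigned char  len;     // buffer length (update) / read length (read)
            unsigned char  buff[MAX_BUFF_SIZE];
        } FILE_OP_BLOCK;

        int main() {
            // Print the native layout so the managed StructLayout/MarshalAs
            // declaration can be checked field by field.
            std::printf("sizeof(FILE_OP_BLOCK) = %zu\n", sizeof(FILE_OP_BLOCK));
            std::printf("offsetof(fid)    = %zu\n", offsetof(FILE_OP_BLOCK, fid));
            std::printf("offsetof(offset) = %zu\n", offsetof(FILE_OP_BLOCK, offset));
            std::printf("offsetof(len)    = %zu\n", offsetof(FILE_OP_BLOCK, len));
            std::printf("offsetof(buff)   = %zu\n", offsetof(FILE_OP_BLOCK, buff));
        }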

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way?
        - The text file can be massive (gigabytes long).
        - Assume a multiple-server/clustered environment, so this can be done in a distributed manner.
        - Open-source libraries to consider are encouraged.
    I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to treat the file as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel

    Read the article

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./myProgram) from a Python script and get its output. Actually I do this: import subprocess proc = subprocess.Popen('./generate_out', shell=False, stdout=subprocess.PIPE, ) while proc.poll() is None: out = proc.stdout.readline() data = doStuff(out) print(data) But it is slow; sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think there is some buffer slowing down my pipe... Notes: ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few chars are put in the pipe between the two processes, nothing happens; then, when enough is produced, I get a huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 and more) between the generate_out output and the Python print. What can I do? Maybe communicate() is faster? Anything else? Thank you a lot!

    Read the article

  • producing a typewriter-like effect

    - by Tony Ennis
    Android newb here. Please use small words :-) I'd like to simulate typewriter output on my Android. The output being displayed is generated by a game and is somewhat freeform. The effect I want is to see individual characters appear at a rate of about 6 characters a second. When a 'carriage return' is seen, I'd like to insert a delay and then resume typing on the left. What are some suggestions on views? Would the view of choice for this be a TextView? Even that seems like overkill for this read-only, coarsely scrolling output. I saw something on this thread about an AsyncTask. That looks useful. Perhaps my game will write to some manner of buffer, and a subclass of AsyncTask will pull characters out every .15 seconds or so, add them to the TextView, then invalidate() the TextView? Sound like a plan?

    Read the article

  • Thread Local Memory for Scratch Memory.

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies -- similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism for successive calls in the same thread by placing it in thread-local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization of these cookie strings ("my algorithms") use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism, I want these characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts?
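    A bare-bones sketch of one way to do the "common scratch memory" part: keep a single per-thread scratch object whose std::string members are cleared but never shrunk between calls, so the capacity acquired on the first call (much like protobuf's own buffer caching) is reused by every later call on that thread. The names are invented for illustration, thread_local stands in for whatever thread-local mechanism is already in use (e.g. boost::thread_specific_ptr), and the real protobuf/OpenSSL steps are reduced to stand-ins:

        #include <cstdio>
        #include <string>

        // Per-thread scratch area: clear() resets size but keeps capacity, so
        // allocations made by the first call on a thread are reused afterwards.
        struct ScratchMemory {
            std::string serialized;  // e.g. where SerializeToString() output lands
            std::string ciphertext;  // e.g. where the HMAC/CBC output lands

            void reset() {
                serialized.clear();  // size -> 0, capacity untouched
                ciphertext.clear();
            }
        };

        inline ScratchMemory& scratch() {
            thread_local ScratchMemory s;  // one instance per thread, built lazily
            return s;
        }

        std::string make_cookie(const std::string& payload) {
            ScratchMemory& s = scratch();
            s.reset();

            // Stand-ins for the serialize + HMAC + CBC-encrypt steps.
            s.serialized.append(payload);
            s.ciphertext.assign(s.serialized.rbegin(), s.serialized.rend());

            return s.ciphertext;     // copy out; the scratch buffers stay warm
        }

        int main() {
            std::printf("%s\n", make_cookie("session-data").c_str());
        }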

    Read the article

  • Determine how much I can write into a filehandle; copying data from one FH to the other

    - by Vi
    How to determine if I can write the given number of bytes to a filehandle (socket actually)? (Alternatively, how to "unread" the data I had read from other filehandle?) I want something like: n = how_much_can_I_write(w_handle); n = read(r_handle, buf, n); assert(n==write(w_handle, buf, n)); Both filehandles (r_handle and w_handle) have received ready status from epoll_wait. I want all data from r_handle to be copied to w_handle without using a "write debt" buffer. In general, how to copy the data from one filehandle to the other simply and reliably?
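    There is no portable call that reports in advance how many bytes a write will accept, so the usual pattern is the inverse of the pseudo-code above: read a bounded chunk, then loop the write, treating a short write or EAGAIN as "the writer is full". A sketch under those assumptions, for non-blocking POSIX descriptors:

        #include <unistd.h>
        #include <cerrno>
        #include <cstddef>

        // Drain whatever is currently readable from r_handle into w_handle.
        // Returns 0 when the reader is drained for now, -1 on error. If write()
        // reports EAGAIN this sketch just stops with errno set; in a real program
        // the unwritten tail buf[off..n) is exactly the "write debt" that has to
        // be kept until epoll reports w_handle writable again.
        int pump(int r_handle, int w_handle) {
            char buf[4096];
            for (;;) {
                ssize_t n = read(r_handle, buf, sizeof buf);
                if (n == 0) return 0;                        // EOF on the reader
                if (n < 0) return errno == EAGAIN ? 0 : -1;  // drained, or hard error

                for (ssize_t off = 0; off < n;) {
                    ssize_t w = write(w_handle, buf + off,
                                      static_cast<std::size_t>(n - off));
                    if (w > 0) { off += w; continue; }
                    if (w < 0 && errno == EINTR) continue;
                    return -1;                               // EAGAIN or hard error
                }
            }
        }

    On Linux sockets specifically, getsockopt(SOL_SOCKET, SO_SNDBUF) minus the queued-byte count from ioctl(fd, SIOCOUTQ, ...) gives a rough, racy estimate of how much the next write could take without blocking, but it is not portable and the code still has to tolerate short writes.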

    Read the article

  • Export to Excel taking a long time from ASP pages

    - by ricky
    I am using the following code to export to Excel from an ASP page: GMID = Request.QueryString ("GMID") Response.Buffer = False Response.ContentType = "application/vnd.ms-excel" DIR_YR = Request.QueryString ("DIR_YR") CD = Request.QueryString("CD") YEAR = Request.QueryString("IND") The problem I am facing is that when there are around 2,000 records or more, the export to Excel asks for the Open option. When I click on that option, only "Download in progress..." is shown, but no Excel window actually opens. How can I fix this bug? For 700-800 rows it works fine. I am not looking to change the whole code, because the problem occurs only for one sales rep who has more than 2,000 rows; I am looking for a change of one or two lines.

    Read the article
