Search Results

Search found 4759 results on 191 pages for 'depth buffer'.


  • ASP.NET error message: Unable to validate data

    - by Amitabh
    We have an ASP.NET WebForms page which contains a GridView inside an UpdatePanel and refreshes every minute. Every minute we get the following error in the event log:

        Error message: Unable to validate data.
        Stack trace:
           at System.Web.Configuration.MachineKeySection.GetDecodedData(Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Int32& dataLength)
           at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)

    We have tried the following: adding a static machine key in the Web.config (did not work), and disabling the view state MAC in the Web.config with the following entry (did not work):

        <pages buffer="true" enableViewStateMac="false">

    Is there something else that might cause this?
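
    For reference, a sketch of what the static machineKey entry in Web.config looks like (the key values below are placeholders, not real keys):

        <system.web>
          <machineKey validationKey="[64 hex characters]"
                      decryptionKey="[48 hex characters]"
                      validation="SHA1" decryption="AES" />
        </system.web>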


  • design patterns for hierarchical structures

    - by JLBarros
    Does anyone know of design patterns for hierarchical structures? For example, to manage inventory categories, an accounting chart of accounts, divisions of human resources, etc. Thank you very much in advance. EDIT: Thanks for your interest. I am looking for a better way of dealing with hierarchical items, where the operations applied to an item depend on its level in the hierarchy. I have been studying the patterns by Martin Fowler, for example Accounting, but I wonder if there are others that are more generic. The problem is that the operations applied to the items must be changeable, even at run time, and may depend on other external variables. I thought of a kind of Strategy pattern, but I would like to combine it with the fact that the scheme is hierarchical. I would appreciate any reference to hierarchical patterns and will look into them in depth.
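
    One shape this could take (a sketch only; the names are illustrative, not from any pattern catalog) is a Composite whose traversal picks a swappable Strategy by depth:

        using System.Collections.Generic;

        interface IOperation { void Apply(Node node, int depth); }

        class Node
        {
            public List<Node> Children = new List<Node>();

            // one strategy per level, replaceable at run time
            public static Dictionary<int, IOperation> OperationsByDepth =
                new Dictionary<int, IOperation>();

            public void Walk(int depth)
            {
                IOperation op;
                if (OperationsByDepth.TryGetValue(depth, out op))
                    op.Apply(this, depth);        // behavior varies by level
                foreach (var child in Children)
                    child.Walk(depth + 1);
            }
        }

    Swapping an entry in OperationsByDepth changes the behavior of a whole level without touching the tree itself.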


  • Recommendations with hierarchical data on non-relational databases?

    - by Luki
    I'm developing a web application that uses a non-relational database as a backend (django-nonrel + AppEngine). I need to store some hierarchical data (projects/subproject_1/subproject_N/tasks), and I'm wondering which pattern I should use. For now I have thought of:

        - Adjacency list (store the item's parent id)
        - Nested sets (store left and right values for the item)

    In my case, the depth of nesting for a normal user will not exceed 4-5 levels. Also, on the UI, I would like to have pagination for the items on the first level, to avoid loading too many items at the first page load. From what I understand so far, nested sets are great when the hierarchy is used mostly for displaying, while adjacency lists are great when the tree is edited often. In my case I guess I need displaying more than editing (and even though nested sets would display well, the pagination above could complicate editing). Do you have any thoughts or advice, based on your experience with non-relational databases?
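
    A third option that fits read-mostly trees on non-relational stores is a materialized path kept next to the parent id; a sketch in django-nonrel style (model and field names, and the variables in the queries, are illustrative):

        from django.db import models

        class Item(models.Model):
            parent = models.ForeignKey('self', null=True)           # adjacency list
            path = models.CharField(max_length=500, db_index=True)  # e.g. "0001/0007/0042"
            depth = models.IntegerField(default=0)

        # the whole subtree under a node is one indexed prefix query...
        subtree = Item.objects.filter(path__startswith=node.path)

        # ...and first-level pagination is just a filter on depth
        first_level = Item.objects.filter(depth=0)[offset:offset + page_size]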


  • hierarchical data from a self-referencing table in tree form

    - by Beta033
    It looks like this has been asked and answered in all the simple cases, excluding the one I'm having trouble with. I've tried using a recursive CTE to generate this; maybe a cursor would be better, or maybe a set of recursive functions would do the trick? Can this be done in a CTE? Consider the following table:

        PrimaryKey  ParentKey
        1           NULL
        2           1
        3           6
        4           7
        5           2
        6           1
        7           NULL

    which should yield

        PK
        1
        -2
        --5
        -6
        --3
        7
        -4

    where the number of - marks equals the depth. My primary difficulty is the ordering.
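
    A sketch of one CTE that gets this ordering right by carrying a zero-padded path down the recursion (the table name is illustrative; the padding assumes keys fit in five digits):

        WITH tree AS (
            SELECT PrimaryKey, ParentKey, 0 AS depth,
                   CAST(RIGHT('00000' + CAST(PrimaryKey AS VARCHAR(5)), 5) AS VARCHAR(100)) AS sort_path
            FROM MyTable
            WHERE ParentKey IS NULL
            UNION ALL
            SELECT c.PrimaryKey, c.ParentKey, t.depth + 1,
                   CAST(t.sort_path + '/' + RIGHT('00000' + CAST(c.PrimaryKey AS VARCHAR(5)), 5) AS VARCHAR(100))
            FROM MyTable c
            JOIN tree t ON c.ParentKey = t.PrimaryKey
        )
        SELECT REPLICATE('-', depth) + CAST(PrimaryKey AS VARCHAR(10)) AS PK
        FROM tree
        ORDER BY sort_path;

    Sorting by the padded path keeps every node directly under its ancestors, which is the part a plain ORDER BY on depth cannot do.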


  • Most frustrating programming style you've encountered

    - by JaredPar
    When it comes to coding style I'm a pretty relaxed programmer. I'm not firmly dug into a particular coding style. I'd prefer a consistent overall style in a large code base, but I'm not going to sweat every little detail of how the code is formatted. Still, there are some coding styles that drive me crazy. No matter what, I can't look at examples of these styles without reaching for a Vim buffer to "fix" the "problem". I can't help it. It's not even wrong; I just can't look at it for some reason. For instance, the following comment style almost completely prevents me from actually being able to read the code:

        if (someConditional) // Comment goes here
        {
            other code
        }

    What's the most frustrating style you've encountered?


  • How to use C# nested structures to access a tree of data

    - by zotty
    I'm importing some XML into C#, and want to be able to access data from the XML in the form of what I think is a nested structure. (I may be wrong!) What I have in my XML is of the following form:

        <hardwareSettings initial="true">
          <cameraSettings width="1024" height="768" depth="8" />
          <tiltSettings theta="35" rho="90" />
        </hardwareSettings>

    I can import each setting alright, so I have them all in individual ints, but I would like to be able to access them in the form

        int x = hardwaresettings.camerasettings.width;
        int rho = hardwaresettings.tiltsettings.rho;

    I've tried various arrangements of structs within structs, but I don't seem able to declare a new object (hardwaresettings) that contains the appropriate children (camerasettings.width & tiltsettings.rho). Sorry if I'm not using the right lingo... I'm reading myself in circles here!
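
    One low-friction route is to let XmlSerializer build the nesting; a sketch, assuming the XML above (class and property names are illustrative):

        using System.IO;
        using System.Xml.Serialization;

        [XmlRoot("hardwareSettings")]
        public class HardwareSettings
        {
            [XmlAttribute("initial")] public bool Initial { get; set; }
            [XmlElement("cameraSettings")] public CameraSettings Camera { get; set; }
            [XmlElement("tiltSettings")] public TiltSettings Tilt { get; set; }
        }

        public class CameraSettings
        {
            [XmlAttribute("width")] public int Width { get; set; }
            [XmlAttribute("height")] public int Height { get; set; }
            [XmlAttribute("depth")] public int Depth { get; set; }
        }

        public class TiltSettings
        {
            [XmlAttribute("theta")] public int Theta { get; set; }
            [XmlAttribute("rho")] public int Rho { get; set; }
        }

        // usage: deserialize once, then read the nested properties
        var serializer = new XmlSerializer(typeof(HardwareSettings));
        using (var reader = new StreamReader("settings.xml"))
        {
            var hw = (HardwareSettings)serializer.Deserialize(reader);
            int width = hw.Camera.Width;
            int rho = hw.Tilt.Rho;
        }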


  • Using .dll methods to load data from file in C# code

    - by Espinas.iss
    I want to use these methods in C#:

        int LibRaw::open_datastream(LibRaw_abstract_datastream *stream)
        int LibRaw::open_file(const char *rawfile)
        int LibRaw::open_buffer(void *buffer, size_t bufsize)
        int LibRaw::unpack(void)
        int LibRaw::unpack_thumb(void)

    which are stored in libraw.dll. These functions, one by one, load data from a file... I've been reading about P/Invoke but I'm not sure how to invoke them. Can anyone show me an example of how to use all of these functions together in C# to load a file (a raw image stored in a folder), or just how to P/Invoke one of them? Thanks!
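
    One caveat: these are C++ member functions, and P/Invoke can only call C-style exports, so the usual route is LibRaw's flat C API (or a hand-written C wrapper). A sketch, assuming the DLL exports that C API (verify the export names against your libraw.dll build):

        using System;
        using System.Runtime.InteropServices;

        class LibRawNative
        {
            [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern IntPtr libraw_init(uint flags);

            [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl,
                       CharSet = CharSet.Ansi)]
            public static extern int libraw_open_file(IntPtr data, string file);

            [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int libraw_unpack(IntPtr data);

            [DllImport("libraw.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int libraw_unpack_thumb(IntPtr data);
        }

        // usage sketch: open, then unpack the raw image
        IntPtr raw = LibRawNative.libraw_init(0);
        if (LibRawNative.libraw_open_file(raw, @"C:\images\photo.raw") == 0 &&
            LibRawNative.libraw_unpack(raw) == 0)
        {
            // pixel data is now held inside the native libraw data block
        }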


  • SDL+OpenGL app: blank screen

    - by Lococo
    I spent the last three days trying to create a small app using SDL + OpenGL. The app itself runs fine -- except it never outputs any graphics; just a black screen. I've condensed it down to a minimal C file, and I'm hoping someone can give me some guidance. I'm running out of ideas. I'm using Windows Vista, MinGW & MSYS. Thanks in advance for any advice!

        #include <SDL/SDL.h>
        #include <SDL_opengl.h>

        size_t sx=600, sy=600, bpp=32;

        void render(void) {
            glEnable(GL_DEPTH_TEST);                             // enable depth testing
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);                // clear to black
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear color/depth buffer
            glLoadIdentity();                                    // reset modelview matrix
            glColor3b(255, 0, 0);                                // red
            glLineWidth(3.0);                                    // line width=3
            glRecti(10, 10, sx-10, sy-10);                       // draw rectangle
            glFlush();
            SDL_GL_SwapBuffers();
        }

        int input(void) {
            SDL_Event event;
            while (SDL_PollEvent(&event))
                if (event.type == SDL_QUIT ||
                    (event.type == SDL_KEYUP && event.key.keysym.sym == SDLK_ESCAPE))
                    return 0;
            return 1;
        }

        int main(int argc, char *argv[]) {
            SDL_Surface* surf;
            if (SDL_Init(SDL_INIT_EVERYTHING) != 0) return 0;
            if (!(surf = SDL_SetVideoMode(sx, sy, bpp, SDL_HWSURFACE|SDL_DOUBLEBUF))) return 0;
            glViewport(0, 0, sx, sy);          // reset the viewport to new dimensions
            glMatrixMode(GL_PROJECTION);       // set projection matrix to be current
            glLoadIdentity();                  // reset projection matrix
            glOrtho(0, sx, sy, 0, -1.0, 1.0);  // create ortho view
            glMatrixMode(GL_MODELVIEW);        // set modelview matrix
            glLoadIdentity();                  // reset modelview matrix
            for (;;) {
                if (!input()) break;
                render();
                SDL_Delay(10);
            }
            SDL_FreeSurface(surf);
            SDL_Quit();
            exit(0);
        }

    UPDATE: I have a version that works, but it changes orthographic to perspective. I'm not sure why this works and the other doesn't, but for future reference, here's a version that works:

        #include <SDL/SDL.h>
        #include <SDL_opengl.h>

        size_t sx=600, sy=600, bpp=32;

        void render(void) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            // set location in front of camera
            glTranslated(0, 0, -10);
            glBegin(GL_QUADS);   // draw a square
            glColor3d(1, 0, 0);
            glVertex3d(-2,  2, 0);
            glVertex3d( 2,  2, 0);
            glVertex3d( 2, -2, 0);
            glVertex3d(-2, -2, 0);
            glEnd();
            glFlush();
            SDL_GL_SwapBuffers();
        }

        int input(void) {
            SDL_Event event;
            while (SDL_PollEvent(&event))
                if (event.type == SDL_QUIT ||
                    (event.type == SDL_KEYUP && event.key.keysym.sym == SDLK_ESCAPE))
                    return 0;
            return 1;
        }

        int main(int argc, char *argv[]) {
            SDL_Surface *surf;
            if (SDL_Init(SDL_INIT_EVERYTHING) != 0) return 0;
            if (!(surf = SDL_SetVideoMode(sx, sy, bpp, SDL_OPENGL))) return 0;
            glViewport(0, 0, sx, sy);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, (float)sx / (float)sy, 1.0, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glClearColor(0, 0, 0, 1);
            glClearDepth(1.0);
            glEnable(GL_DEPTH_TEST);
            for (;;) {
                if (!input()) break;
                render();
                SDL_Delay(10);
            }
            SDL_FreeSurface(surf);
            SDL_Quit();
            return 0;
        }


  • Write to pipe deadlocking program

    - by avs3323
    Hi, I am having a problem in my program that uses pipes. What I am doing is using pipes along with fork/exec to send data to another process. What I have is something like this:

        //pipes are created up here
        if (fork() == 0) //child process
        {
            ...
            execlp(...);
        }
        else
        {
            ...
            fprintf(stderr, "Writing to pipe now\n");
            write(pipe, buffer, BUFFER_SIZE);
            fprintf(stderr, "Wrote to pipe!");
            ...
        }

    This works fine for most messages, but when the message is very large, the write into the pipe deadlocks. I think the pipe might be full, but I do not know how to clear it. I tried using fsync but that didn't work. Can anyone help me?
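
    A pipe only buffers a limited amount of data (commonly around 64 KB), so write() blocks once it fills and nobody is reading. The fix is to make sure the child actually consumes the pipe and that each side closes the descriptor end it doesn't use; a sketch (the descriptor names and the "consumer" program are illustrative):

        #include <unistd.h>

        int fds[2];
        pipe(fds);
        if (fork() == 0)
        {
            close(fds[1]);               /* child: close the write end it won't use */
            dup2(fds[0], STDIN_FILENO);  /* child reads the pipe as its stdin */
            execlp("consumer", "consumer", (char *)NULL);
        }
        else
        {
            close(fds[0]);               /* parent: close the read end it won't use */
            /* the child must read concurrently; once the kernel buffer is
               full, write() blocks until the reader drains it */
            write(fds[1], buffer, BUFFER_SIZE);
            close(fds[1]);               /* EOF for the reader */
        }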


  • emacs: is there a semantic-jump-to-declaration (using semantic.el)?

    - by Cheeso
    Suppose I am editing a buffer containing C code. I have started semantic with semantic-load-enable-code-helpers. I have point placed on the name of a function. If I then invoke senator-jump, I can jump to the place where that function is first declared in that module. What if it is an extern? Is it possible to use senator to jump to the definition of the function, which resides in a separate module? Thanks.
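
    For reference, a sketch of the kind of binding meant here, assuming your CEDET build ships semantic-ia-fast-jump (an assumption worth verifying against your version):

        ;; jump to the real definition of the symbol at point,
        ;; assuming semantic-ia.el is part of your CEDET install
        (require 'semantic-ia nil t)
        (global-set-key (kbd "C-c j") 'semantic-ia-fast-jump)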


  • Using the Windows API and C++, how could I load an exe from the hard drive and run it in its own thread?

    - by returneax
    For the sake of learning I'm trying to do what the OS does when launching a program, i.e. parsing a PE file and giving it a thread of execution. If I have two exes, one called foo.exe and the other bar.exe, how could I have foo.exe load the contents of bar.exe into memory and then have it execute from there in its own thread? I know how to get it into memory using MapViewOfFile or by simply loading the contents on the hard drive into a buffer. I'm assuming that simply copying the contents of bar.exe on disk into its own suspended thread and running it wouldn't work. I am semi-familiar with PE file internals. All help is very much appreciated, of course :)
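
    For contrast, a sketch of the route the OS itself exposes (a new process rather than a thread inside foo.exe); doing it truly in-process means walking the PE headers yourself to map sections, resolve imports, and apply relocations before pointing a thread at the entry point:

        #include <windows.h>

        // launch bar.exe as its own process with its own primary thread
        STARTUPINFO si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (CreateProcessA("bar.exe", NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        {
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }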


  • Named keywords in decorators?

    - by wheaties
    I've been playing around in depth with attempting to write my own version of a memoizing decorator before I go looking at other people's code. It's more of an exercise in fun, honestly. However, in the course of playing around I've found I can't do something I want with decorators.

        def addValue( func, val ):
            def add( x ):
                return func( x ) + val
            return add

        @addValue( val=4 )
        def computeSomething( x ):
            #function gets defined

    If I want to do that I have to do this:

        def addTwo( func ):
            return addValue( func, 2 )

        @addTwo
        def computeSomething( x ):
            #function gets defined

    Why can't I use keyword arguments with decorators in this manner? What am I doing wrong and can you show me how I should be doing it?
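
    The usual shape for this is a decorator factory: the outer call takes the arguments and returns the decorator itself. A sketch (the body of computeSomething is illustrative):

        def addValue(val):
            def decorator(func):
                def add(x):
                    return func(x) + val
                return add
            return decorator

        @addValue(val=4)          # addValue(val=4) returns the actual decorator
        def computeSomething(x):
            return x * 2

    With @addValue(val=4), Python first evaluates addValue(val=4), then applies the returned decorator to the function, which is why the two-argument version fails.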


  • Oracle PL/SQL: Dump query result into file

    - by CC
    Hi. I'm working on a PL/SQL stored procedure. What I need is to do a select, use a cursor, and for every record build a string from its values. At the end I need to write this into a file. I tried to use dbms_output.put_line('toto'), but the buffer size is too small because I have about 14 million lines. I call my procedure from a Unix ksh. I'm thinking of something like using "spool on" (on the ksh side) to dump the result of my procedure, but I don't know how to do it (if it is even possible). Anyone has any idea? Thanks a lot. C.C.
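
    One server-side alternative is UTL_FILE, which writes directly to a directory on the database host; a sketch (the directory object, table, and column names are illustrative):

        -- one-time setup, as a privileged user:
        --   CREATE DIRECTORY dump_dir AS '/tmp';
        --   GRANT READ, WRITE ON DIRECTORY dump_dir TO myuser;
        DECLARE
          f UTL_FILE.FILE_TYPE;
        BEGIN
          f := UTL_FILE.FOPEN('DUMP_DIR', 'result.txt', 'w', 32767);
          FOR rec IN (SELECT col1, col2 FROM my_table) LOOP
            UTL_FILE.PUT_LINE(f, rec.col1 || ';' || rec.col2);
          END LOOP;
          UTL_FILE.FCLOSE(f);
        END;
        /

    This sidesteps the dbms_output buffer entirely, which matters at 14 million lines.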


  • How to run shell script with live feedback from PHP?

    - by Highway of Life
    How would I execute a shell script from PHP while giving constant/live feedback to the browser? I understand from the system function documentation: "The system() call also tries to automatically flush the web server's output buffer after each line of output if PHP is running as a server module." I'm not clear on what they mean by running it as a 'server module'. I attempted to run the script in the cgi-bin, but either I'm doing it wrong, or that's not what they mean. Example PHP code:

        <?php
        system('/var/lib/script_test.sh');

    Example shell code:

        #!/bin/bash
        echo "Start..."
        for i in {1..10}
        do
            echo "$i..."
            sleep 1
        done
        echo "Done."
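
    If system() doesn't flush for your setup, a sketch using popen() so each line can be pushed out by hand (standard PHP functions only):

        <?php
        // stream the script's output to the browser line by line
        while (@ob_end_flush());                 // drop PHP's own output buffers
        $proc = popen('/var/lib/script_test.sh 2>&1', 'r');
        while (!feof($proc)) {
            echo fgets($proc);
            flush();                             // push this line to the client now
        }
        pclose($proc);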


  • Strange behaviour with fputs and a loop.

    - by Jonathan
    When running the following code I get no output, but I cannot work out why.

        #include <stdio.h>

        int main()
        {
            fputs("hello", stdout);
            while (1);
            return 0;
        }

    Without the while loop it works perfectly, but as soon as I add it in I get no output. Surely it should output before starting the loop? Is it just on my system? Do I have to flush some sort of buffer or something? Thanks in advance.
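
    stdout is typically line-buffered on a terminal (and fully buffered when redirected); "hello" has no trailing newline, so the bytes sit in the buffer while the loop spins forever. A sketch of the flush:

        #include <stdio.h>

        int main()
        {
            fputs("hello", stdout);
            fflush(stdout);   /* force the buffered bytes out before looping */
            while (1);
            return 0;
        }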


  • Increase the TCP receive window for a specific socket

    - by rursw1
    Hi, how do I increase the TCP receive window for a specific socket? I know how to do it for all sockets by setting the registry key TcpWindowSize, but how do I do it for a specific one? According to Microsoft's documentation, the way is to call the Windows Sockets function setsockopt, which sets the receive window on a per-socket basis. But the setsockopt documentation says this about SO_RCVBUF: "Specifies the total per-socket buffer space reserved for receives. This is unrelated to SO_MAX_MSG_SIZE and does not necessarily correspond to the size of the TCP receive window." So is it possible? How? Thanks.
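
    In practice the advertised window is derived from the receive buffer, so setting SO_RCVBUF before the connection is established is the usual per-socket knob; a sketch (the buffer size is illustrative):

        /* sock is an already-created, not-yet-connected TCP socket */
        int rcvbuf = 256 * 1024;
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                       (const char *)&rcvbuf, sizeof(rcvbuf)) == SOCKET_ERROR)
        {
            /* handle WSAGetLastError() */
        }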


  • php multidimensional array problem

    - by ntan
    Hi to all, I am trying to set up a multidimensional array, but my problem is that I cannot control the order of the incoming data. To explain:

        $x[1][11] = 11;
        $x[1] = 1;
        var_dump($x);

    In the above code I end up with only $x[1]. The right order would be

        $x[1] = 1;
        $x[1][11] = 11;
        var_dump($x);

    But in my case I cannot ensure that $x[1] will come first and $x[1][11] will come after. Is there any way that I can use the first example and still get the array right? Keep in mind that the array depth is large. Thanks.
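
    One way to make the assignments order-independent is to reserve a key for the scalar so every slot stays an array; a sketch ('_value' is an illustrative name):

        function set_value(&$slot, $val) {
            if (is_array($slot)) {
                $slot['_value'] = $val;          // keep any existing children
            } else {
                $slot = array('_value' => $val);
            }
        }

        set_value($x[1][11], 11);   // $x[1][11]['_value'] = 11
        set_value($x[1], 1);        // $x[1]['_value'] = 1, children stay intact

    Because each value lives under its own key, the two calls produce the same array in either order.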


  • ASP.NET web page still displaying a cached version

    - by user279521
    My web page is still displaying a previously cached version of the page. I have this in the Page_Load event:

        Response.Clear();
        Response.Buffer = true;
        Response.ExpiresAbsolute = DateTime.Now.AddDays(-1d);
        Response.Expires = -1;
        Response.CacheControl = "no-cache";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);

    I have this in the Page_Init:

        protected void Page_Init(object Sender, EventArgs e)
        {
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetExpires(DateTime.Now.AddDays(-1));
        }

    Any idea what I might be missing?
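
    One directive missing from the snippets above is no-store; a sketch of adding it (SetNoStore and AppendHeader are standard ASP.NET APIs):

        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetNoStore();                  // adds Cache-Control: no-store
        Response.AppendHeader("Pragma", "no-cache");  // for older HTTP/1.0 proxies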


  • FtpWebResponse and StreamReader - specifying an offset

    - by AJ
    Hi, I am using the FtpWebRequest / FtpWebResponse objects in C# to download files from a server -- so far, so good. I create a StreamReader object from the response stream and use a StreamWriter to create a local file. Now, the file I am reading happens to be in a very simple 'archive' format: there is a small TOC at the start of the file followed by the actual file data. I can therefore read the TOC and get a file offset and size for the data I want to download. My question is: supposing the offset is 1024, I would use StreamReader.Read(buffer, 1024, length), but will .NET and the FTP protocol actually allow me to skip bytes 0-1023, or does the reader still go through the (relatively) slow process of downloading and discarding the bytes I don't need? This may make the difference between whether I want to use a single archive file, or a TOC file with the data files stored separately. As a bit of a secondary question, would my mileage vary using the Http classes instead of Ftp? Cheers, Adam
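
    FtpWebRequest has a ContentOffset property for exactly this: it issues the FTP REST command, so a server that supports restarts begins the transfer at that byte and bytes 0-1023 never cross the wire. A sketch (the URL is illustrative):

        var request = (FtpWebRequest)WebRequest.Create("ftp://server/archive.bin");
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.ContentOffset = 1024;   // server-side seek past the TOC

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        {
            // the stream now starts at byte 1024 of the remote file
        }

    (For HTTP, the equivalent is a Range request via HttpWebRequest.AddRange.)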


  • Vim and clang_complete, how to explicitly compile my code

    - by puller
    I use Vim with clang_complete for omni completion. The plugin is triggered automatically when I need completion, e.g., after I type . or -> to access an object's members or methods (see attached screenshot). The plugin works really well; however, I need a way to trigger it manually (i.e., to compile my code for syntax checking). This is useful for two reasons: static syntax checking, and clearing previous errors which have since been fixed (and which otherwise remain in their buffer). See the two screenshots below for a better understanding. Any help is appreciated. Thanks. Screenshot 1 Screenshot 2
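
    A sketch of one mapping, assuming your clang_complete version ships the quickfix integration (g:ClangUpdateQuickFix() and the g:clang_complete_copen option; both are worth verifying in the plugin's doc before relying on them):

        " re-run clang and refresh the quickfix window with current diagnostics
        let g:clang_complete_copen = 1
        nnoremap <Leader>q :call g:ClangUpdateQuickFix()<CR>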


  • Python: Pickling highly-recursive objects without using `setrecursionlimit`

    - by cool-RR
    I've been getting RuntimeError: maximum recursion depth exceeded when trying to pickle a highly-recursive tree object, much like this asker here. He solved his problem by setting the recursion limit higher with sys.setrecursionlimit. But I don't want to do that: I think that's more of a workaround than a solution, because I want to be able to pickle my trees even if they have 10,000 nodes in them. (It currently fails at around 200.) (Also, every platform's true recursion limit is different, and I would really like to avoid opening that can of worms.) Is there any way to solve this at the fundamental level? If only the pickle module pickled using a loop instead of recursion, I wouldn't have this problem. Maybe someone has an idea how I can cause something like this to happen, without rewriting the pickle module? Any other ideas on how I can solve this problem will be appreciated.
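
    One loop-based escape hatch that doesn't touch the pickle module: give the tree a flat, non-recursive pickled form via __getstate__/__setstate__. A sketch, assuming each node holds a payload and a list of children (all names are illustrative, and payloads must themselves be picklable without deep nesting):

        class Node:
            def __init__(self, payload):
                self.payload = payload
                self.children = []

        class Tree:
            def __getstate__(self):
                # flatten iteratively so pickle never recurses node-to-node
                flat, stack = [], [(None, self.root)]
                while stack:
                    parent_id, node = stack.pop()
                    flat.append((parent_id, id(node), node.payload))
                    stack.extend((id(node), c) for c in node.children)
                return {'flat': flat}

            def __setstate__(self, state):
                # parents always precede children in the flat list
                nodes = {}
                for parent_id, node_id, payload in state['flat']:
                    node = nodes[node_id] = Node(payload)
                    if parent_id is None:
                        self.root = node
                    else:
                        nodes[parent_id].children.append(node)

    pickle then only ever sees a list of tuples, so its recursion depth stays constant no matter how deep the tree is.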


  • Preventing FIN_WAIT2 when closing socket

    - by patrickvacek
    I have a server program that connects to another program via a given socket, and in certain cases I need to close the connection and almost immediately re-open it on the same socket. This by and large works, except that I have to wait exactly one minute for the socket to reset. In the meantime, netstat indicates that the server sees the socket in FIN_WAIT2 and the client sees it as CLOSE_WAIT. I'm already using SO_REUSEADDR, which I thought would prevent the wait, but that isn't doing the trick. Setting SO_LINGER to zero also does not help. What else can I do to resolve this? Here are the relevant code snippets:

        SetUpSocket()
        {
            // Set up the socket and listen for a connection from the exelerate client.
            // Open a TCP/IP socket.
            m_baseSock = socket(PF_INET, SOCK_STREAM, IPPROTO_IP);
            if (m_baseSock < 0)
            {
                return XERROR;
            }

            // Set the socket options to reuse local addresses.
            int flag = 1;
            if (setsockopt(m_baseSock, SOL_SOCKET, SO_REUSEADDR, &flag, sizeof(flag)) == -1)
            {
                return XERROR;
            }

            // Set the socket options to prevent lingering after closing the socket.
            //~ linger li = {1,0};
            //~ if (setsockopt(m_baseSock, SOL_SOCKET, SO_LINGER, &li, sizeof(li)) == -1)
            //~ {
            //~     return XERROR;
            //~ }

            // Bind the socket to the address of the current host and our given port.
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(m_port);
            if (bind(m_baseSock, (struct sockaddr*)&addr, sizeof(addr)) != 0)
            {
                return XERROR;
            }

            // Tell the socket to listen for a connection from client.
            if (listen(m_baseSock, 4) != 0)
            {
                return XERROR;
            }
            return XSUCCESS;
        }

        ConnectSocket()
        {
            // Add the socket to a file descriptor set.
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(m_baseSock, &readfds);

            // Set timeout to ten seconds. Plenty of time.
            struct timeval timeout;
            timeout.tv_sec = 10;
            timeout.tv_usec = 0;

            // Check to see if the socket is ready for reading.
            int numReady = select(m_baseSock + 1, &readfds, NULL, NULL, &timeout);
            if (numReady > 0)
            {
                int flags = fcntl(m_baseSock, F_GETFL, 0);
                fcntl(m_baseSock, flags | O_NONBLOCK, 1);

                // Wait for a connection attempt from the client. Do not block - we shouldn't
                // need to since we just selected.
                m_connectedSock = accept(m_baseSock, NULL, NULL);
                if (m_connectedSock > 0)
                {
                    m_failedSend = false;
                    m_logout = false;
                    // Spawn a thread to accept commands from client.
                    CreateThread(&m_controlThread, ControlThread, (void *)&m_connectedSock);
                    return XSUCCESS;
                }
            }
            return XERROR;
        }

        ControlThread(void *arg)
        {
            // Get the socket from the argument.
            socket sock = *((socket*)arg);
            while (true)
            {
                // Add the socket to a file descriptor set.
                fd_set readfds;
                FD_ZERO(&readfds);
                FD_SET(sock, &readfds);

                // Set timeout to ten seconds. Plenty of time.
                struct timeval timeout;
                timeout.tv_sec = 10;
                timeout.tv_usec = 0;

                // Check if there is any readable data on the socket.
                int num_ready = select(sock + 1, &readfds, NULL, NULL, &timeout);
                if (num_ready < 0)
                {
                    return NULL;
                }
                // If there is data, read it.
                else if (num_ready > 0)
                {
                    // Check the read buffer.
                    xuint8 buf[128];
                    ssize_t size_read = recv(sock, buf, sizeof(buf));
                    if (size_read > 0)
                    {
                        // Get the message out of the buffer.
                        char msg = *buf;
                        if (msg == CONNECTED)
                        {
                            // Do some things...
                        }
                        // If we get the log-out message, log out.
                        else if (msg == LOGOUT)
                        {
                            return NULL;
                        }
                    }
                }
            } // while
            return NULL;
        }

        ~Server()
        {
            // Close the sockets.
            if (m_baseSock != SOCKET_ERROR)
            {
                close(m_baseSock);
                m_baseSock = SOCKET_ERROR;
            }
            if (m_connectedSock != SOCKET_ERROR)
            {
                close(m_connectedSock);
                m_connectedSock = SOCKET_ERROR;
            }
        }

    SOCKET_ERROR is equal to -1. The server object gets destroyed, at which point the connection should close, and then recreated, at which point the SetUpSocket() and ConnectSocket() routines are called. So why do I have to wait a minute for the socket to clear? Any ideas would be appreciated.
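
    For what it's worth, a FIN_WAIT2/CLOSE_WAIT pair usually means the peer never closed its end: the server's close() sends a FIN (hence FIN_WAIT2), and the state can't clear until the client's own close() arrives. A sketch of making the teardown explicit on the server side (shutdown() is standard POSIX), though the client still has to close its socket:

        /* signal end-of-stream in both directions, then release the descriptor */
        shutdown(m_connectedSock, SHUT_RDWR);
        close(m_connectedSock);
        m_connectedSock = SOCKET_ERROR;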


  • The implicit function __strcpy_chk() call

    - by Summer_More_More_Tea
    Hi everyone: I'm now performing a stack buffer overflow attack test on my own PC (Ubuntu 9.10, gcc-4.4.1) based on the article http://www.tenouk.com/Bufferoverflowc/Bufferoverflow4.html. Yet I haven't achieved the goal; each time a segfault is thrown, accompanied by some error information. I compiled the source code and wanted to get further information using objdump. The function __strcpy_chk is invoked in the dumped assembly code, and it's said that "The __strcpy_chk() function is not in the source standard; it is only in the binary standard." Is this a mechanism the compiler employs to protect the runtime stack? To finish my test, how can I bypass the protection? Regards.
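
    __strcpy_chk comes from glibc's _FORTIFY_SOURCE checking, which Ubuntu's toolchain enables by default at -O1 and above; it is separate from the stack protector. A sketch of the flags commonly used to switch both off for this kind of exercise (plus an executable stack, if the shellcode needs one):

        gcc -O0 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0 \
            -fno-stack-protector -z execstack -o victim victim.c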


  • What strategies are efficient to handle concurrent reads on heterogeneous multi-core architectures?

    - by fabrizioM
    I am tackling the challenge of using both the capabilities of an 8-core machine and a high-end GPU (Tesla 10). I have one big input file, one thread for each core, and one thread for the GPU handling. The GPU thread, to be efficient, needs a big number of lines from the input, while each CPU thread needs only one line to proceed (storing multiple lines in a temp buffer was slower). The file doesn't need to be read sequentially. I am using boost. My strategy is to have a mutex on the input stream, which each thread locks and unlocks. This is not optimal, because the GPU thread should have higher precedence when locking the mutex, being the fastest and most demanding one. I can come up with different solutions, but before rushing into implementation I would like to have some guidelines. What approach do you use / recommend?
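
    One sketch of the "GPU goes first" idea using boost primitives (the names are illustrative; the flag mutex only guards the priority flag, the stream mutex guards the file):

        #include <boost/thread.hpp>

        boost::mutex stream_mtx;           // guards the input stream
        boost::mutex flag_mtx;             // guards gpu_waiting
        boost::condition_variable cv;
        bool gpu_waiting = false;

        void gpu_read_batch() {
            { boost::lock_guard<boost::mutex> g(flag_mtx); gpu_waiting = true; }
            {
                boost::lock_guard<boost::mutex> g(stream_mtx);
                // read a large batch of lines
            }
            { boost::lock_guard<boost::mutex> g(flag_mtx); gpu_waiting = false; }
            cv.notify_all();               // let the CPU threads back in
        }

        void cpu_read_line() {
            {
                boost::unique_lock<boost::mutex> l(flag_mtx);
                while (gpu_waiting) cv.wait(l);   // yield to the GPU thread
            }
            boost::lock_guard<boost::mutex> g(stream_mtx);
            // read one line
        }

    A CPU thread already inside stream_mtx still finishes its one line, but no new CPU reader starts while the GPU thread is waiting, which is usually enough of a priority boost.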


  • FFmpeg + iPhone - Interesting (incorrect?) video encoding results

    - by jtrim
    I'm encoding some video on the iPhone by running the PNG image data through swscale to get YUV420P data, then encoding that frame using the MSMPEG4V1 codec. According to the API docs, avcodec_encode_video should return the number of bytes used from the output buffer by that encode operation. There are 234,000 bytes going into the encoder, but the result returned by avcodec_encode_video is simply "4". The result is exactly the same over 24 frames. Something seems fishy here... any insight? Here's a pastebin link to the code: http://pastebin.com/ht94FWva (sorry for the link away from SO, I just didn't want to have the code duplicated in several places) EDIT: Also, I've set up a custom log callback for ffmpeg to use, and I have the log level set to "Verbose" (libavutil/log.h), so libavcodec should be logging any goofs to the console, but avcodec is quiet throughout the whole operation. (Note: I did test to make sure my log callback was working.)
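
    One thing worth ruling out: encoders may buffer frames internally and return only a few bytes per call until the delayed frames are drained. A sketch against the old avcodec_encode_video signature, flushing with NULL frames after the last real one (variable names are illustrative):

        /* after the final input frame, drain anything the encoder buffered */
        int out_size;
        do {
            out_size = avcodec_encode_video(codec_ctx, outbuf, outbuf_size, NULL);
            if (out_size > 0)
                fwrite(outbuf, 1, out_size, outfile);
        } while (out_size > 0);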

