Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.

Page 223/283

  • Tracing or Logging Resource Governor classification function behavior in Sql Server 2008

    - by nganju
    I'm trying to use the Resource Governor in SQL Server 2008, but I find it hard to debug the classification function and figure out what its inputs will contain; for example, does SUSER_NAME() include the domain name? What does the APP_NAME() string look like? It's also hard to verify that the function is working correctly: which group did it return? The only way I can see that is to fire up Performance Monitor and watch unblinkingly for little blips in the right CPU counter. Is there some way I can either run it in debug mode, where I can set a breakpoint and step through and look at variable values, or at least fall back on the old-school method of writing trace statements to a file so I can see what's going on? Thanks...

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

        - Use GL face culling (the OpenGL function, not the scene logic)
        - Send only one matrix to the GPU (the projection-model-view combination), reducing the MVP calculations from per vertex to once per model (as it should be)
        - Use interleaved vertices
        - Make as few GL calls as possible, and batch where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. Whilst I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling at around 20-50% during rendering, so I assume I am GPU bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • Book resources for x86/x64 assembly programming on Win platform

    - by Scott Davies
    Hello, I ran a search for assembly language resources on stackoverflow.com and found some interesting results, but they seemed to boil down to two groups:

        1) Assembly references to the old IA-32 architecture, from the 80386 through the Pentium
        2) Windows-agnostic books

    Most of the commenters make the point that assembler is CPU dependent and that the OS is irrelevant, but it seems pointless to me to pick a book whose assembly examples refer to MS-DOS interrupts and memory layouts. Likewise, learning assembler on Linux would seem to produce Linux executables. Are there any book resources available that are 1) modern, 2) x86/x64, and 3) targeted at the Windows platform? The reason I am targeting the Windows platform is that I would like to do low-level, OS-internals programming to supplement my Win C/C++ work. Thanks

    Read the article

  • Simple "Hello World!" console application crashes when run by windows TaskScheduler (1.0)

    - by user326627
    I have a batch file which starts multiple instances of a simple console application (Hello World!). I work on Windows Server 2008 64-bit. I configure it to run in Task Scheduler, at startup, whether a user is logged in or not. The latter setting means that the instances run without a GUI (i.e. no window). When I run this task, some of the instances just fail after consuming 100% CPU. The application event log shows the following error: "Faulting module KERNEL32.dll, version 6.0.6002.18005, time stamp 0x49e0421d, exception code 0xc0000142, fault offset 0x00000000000b8fb8, process id 0x29bc, application start time 0x01cae17d94a61895." Running the batch file directly works just fine. It seems to me that the OS has a problem loading many instances of the application when no window is displayed. However, I can't figure out why... Any idea??

    Read the article

  • Progressbar: Force element.innerHTML update before javascript sort call

    - by maras
    Hi, what is the best practice for this scenario:

        1) User clicks "Sort huge javascript array"
        2) Browser shows "Sorting..." through element.innerHTML = "Sorting..."
        3) Browser sorts the huge javascript array (100% CPU for several seconds) while displaying the "Sorting..." message
        4) Browser shows the result

    Pseudo code:

        ...
        <a href="#" onclick="sortHugeArray();return false">Sort huge array</a>
        ...

        function sortHugeArray() {
            document.getElementById("progress").innerHTML = "Sorting...";
            // ...do huge sort...
            // ...render result...
            document.getElementById("progress").innerHTML = result;
        }

    When I do it this way, the browser never shows "Sorting..."; it freezes for several seconds and then shows the result without ever notifying the user... Thank you for advice.

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long-polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long-polling limit. I know this limit is hit well before memory or CPU is used up. From experimentation, I've already verified that the number of long polls open is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.
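
    One low-overhead way to eyeball this from outside the application is to count established TCP connections to the port the long-poll server listens on. A rough Python sketch using psutil (the port number is a placeholder, root may be needed to see other users' sockets, and every established connection is counted, long poll or not):

        import psutil  # third-party: pip install psutil

        APP_PORT = 8080  # placeholder: the port the long-poll server listens on

        def count_established(port=APP_PORT):
            """Count established inbound TCP connections to the given local port."""
            return sum(
                1
                for c in psutil.net_connections(kind="tcp")
                if c.laddr and c.laddr[1] == port and c.status == psutil.CONN_ESTABLISHED
            )

        if __name__ == "__main__":
            print(count_established())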

    Read the article

  • Launch x64 Windows application in C# while the project is set to x86

    - by daydr3am3r
    Hi all. I'm trying to launch osk.exe and I keep getting a "Could not start osk" message. The problem is that my project is set to x86 (I'm using an MS Access database). If I switch to x64 or Any CPU everything works fine, but then the database no longer works. I tried this:

        using System.Diagnostics;

        private void btnOSK_Click(object sender, EventArgs e)
        {
            Process.Start("osk.exe");
            Process.Start(@"C:\windows\system32\osk.exe");
        }

    I also tried to run SysWOW64\osk, but that didn't work either. Besides, my application should run on both x86 and x64 machines. Is there any way to bypass this? It's really frustrating.

    Read the article

  • Running my web site in a 32-bit application pool on a 64-bit OS.

    - by Jeremy H
    Here is my setup:

        Dev:
        - Windows Server 2008 64-bit
        - Visual Studio 2008
        - Solution with 3 class libraries, 1 web application

        Staging web server:
        - Windows Server 2008 R2 64-bit
        - IIS 7.5, integrated application pool with 32-bit applications enabled

    In Visual Studio I have set all 4 of my projects to compile as 'Any CPU', but when I run this web application on the web server with the 32-bit application pool it times out and crashes. When I run the application pool in 64-bit mode it works fine. The production web server requires me to run a 32-bit application pool on a 64-bit OS, which is why I have the staging web server configured this way. (I considered posting on ServerFault, but the server part seems to be working fine. It is my code specifically that doesn't seem to want to run in a 32-bit application pool, which is why I am posting here.)

    Read the article

  • Single static HTML file: how can I serve it efficiently?

    - by Stevo M.
    Good afternoon. I have a domain and server that I've procured for non-HTML services. Should anyone stumble across port 80 of this server, I'd like to serve them a page explaining what the domain and server are for, instead of just 404'ing them. How can I serve one small, static, independent (i.e. no images or CSS) HTML file with a minimum of effort, meaning both a smooth setup for me and minimal CPU expense? The server contains an untouched installation of Arch Linux, and I'm open to solutions in any language. (Note: I am a slight newbie when it comes to this, so forgive me if this question seems trivial or obvious.) Thank you.
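
    As one data point, a single static file can be served with nothing but a scripting runtime's standard library. A minimal Python sketch, assuming the page sits next to the script as index.html (binding port 80 needs root, so a higher port plus a redirect may be more convenient):

        # serve_index.py: answer every GET with one static HTML file
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from pathlib import Path

        PAGE = Path("index.html").read_bytes()   # assumed location of the page

        class OnePageHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.send_header("Content-Length", str(len(PAGE)))
                self.end_headers()
                self.wfile.write(PAGE)

        if __name__ == "__main__":
            HTTPServer(("", 80), OnePageHandler).serve_forever()

    A tiny nginx or busybox httpd configuration would do the same job with even less ongoing CPU cost; the point is only that one file plus one small process is enough.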

    Read the article

  • Monitoring .NET ASP.NET Applications

    - by James Hollingworth
    I have a number of applications running on top of ASP.NET that I want to monitor. The main things I care about are:

        - Exceptions: we currently have some custom code which emails us when an exception occurs. If the application is failing hard it will crash our Outlook... I know (and use) ELMAH, which partly solves the problem, but it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups exceptions, alerts when new ones occur, tells me which common ones I should fix, etc.).
        - Logging: we currently log to files which are then accessible via a shared folder, which devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions.
        - Performance: request times, memory usage, CPU, etc.; whatever stats I can get.

    I'm guessing this is probably going to be solved by a number of tools. Has anyone got any suggestions?

    Read the article

  • Why doesn't infinite recursion hit a stack overflow exception in F#?

    - by Amazingant
    I know this is somewhat the reverse of the issue people usually have when they ask about stack overflows, but if I create a function and call it as follows, I never receive any errors, and the application simply grinds up a core of my CPU until I force-quit it:

        let rec recursionTest x = recursionTest x
        recursionTest 1

    Of course I can change it so it actually does something, like this:

        let rec recursionTest (x: uint64) = recursionTest (x + 1UL)
        recursionTest 0UL

    This way I can occasionally put a breakpoint in my code and see that the value of x is going up rather quickly, but it still doesn't complain. Does F# not mind infinite recursion?
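
    For contrast, a runtime that does not eliminate tail calls behaves very differently with the same shape of function. A quick Python sketch (CPython keeps every call frame on the stack, so this fails almost immediately instead of spinning):

        import sys

        def recursion_test(x):
            # Every call adds a new stack frame; nothing is optimised into a loop.
            return recursion_test(x + 1)

        try:
            recursion_test(0)
        except RecursionError as err:
            print("gave up near frame", sys.getrecursionlimit(), ":", err)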

    Read the article

  • Unwanted SQLite inserted in \bin

    - by wilk
    I am using Visual Studio 2010 and using web deployment to promote the .NET MVC site to specific environments. I installed ELMAH, and it worked great in my DEV environment, but when I pushed to TEST, I got exceptions because the SQLite assembly was not in the correct format. I am not using SQLite in ELMAH or otherwise, as far as I know. I have removed all visible references to SQLite, and I have removed the .dll from all configuration bin directories, but it still gets inserted with each build. I realize the underlying problem is that SQLite cannot be built for Any CPU and my environments vary from x86 to x64, but I would prefer SQLite not to be present at all. I have since uninstalled ELMAH, and SQLite is still inserted into the \bin directory. I have now re-installed ELMAH, and I manually delete SQLite.dll from \bin after each build. How can I determine what is causing SQLite to be inserted into my \bin after each build?

    Read the article

  • Parallelize Bash Script

    - by thelsdj
    Let's say I have a loop in bash:

        for foo in `some-command`
        do
            do-something $foo
        done

    do-something is CPU bound and I have a nice shiny 4-core processor. I'd like to be able to run up to 4 do-something's at once. The naive approach seems to be:

        for foo in `some-command`
        do
            do-something $foo &
        done

    This will run all the do-something's at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O which, performed all at once, might slow things down a bit. The other problem is that this code block returns immediately, so there is no way to do other work once all the do-something's are finished. How would you write this loop so there are always X do-something's running at once?
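
    The pattern being asked for is a fixed-size worker pool draining a list of jobs. As a language-neutral illustration rather than a bash answer, here is a minimal Python sketch of that idea, with the question's some-command and do-something standing in as external programs:

        from multiprocessing import Pool
        import subprocess

        def do_something(foo):
            # Stand-in for the CPU-bound external command from the question.
            subprocess.run(["do-something", foo], check=True)

        if __name__ == "__main__":
            items = subprocess.run(
                ["some-command"], capture_output=True, text=True, check=True
            ).stdout.split()
            with Pool(processes=4) as pool:      # at most 4 jobs in flight at any time
                pool.map(do_something, items)
            # pool.map only returns once every job has finished,
            # so follow-up work can safely go here.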

    Read the article

  • Compilation hangs for a class with field double d = 2.2250738585072012e-308

    - by 01es
    I have come across an interesting situation. A coworker committed some changes which would not compile on my machine, either from the IDE (Eclipse) or from the command line (Maven). The problem manifested as the compilation process taking 100% CPU; only killing the process would stop it. After some analysis the cause of the problem was located and resolved. It turned out to be the line "double d = 2.2250738585072012e-308" (without a semicolon at the end) in one of the interfaces. The following snippet reproduces it:

        public class WeirdCompilationIssue {
            double d = 2.2250738585072012e-308
        }

    Why would the compiler hang? A language edge case?

    Read the article

  • Load balancing and scheduling algorithms.

    - by Lukas Šalkauskas
    Hello there, here is my problem: I have several servers with different configurations, and I have different calculations (jobs). I can predict approximately how long each job will take to calculate, and jobs also have priorities. My question is how to keep all machines loaded at 99-100% and schedule the jobs in the best way. Each machine can run several calculations at a time; jobs are pushed to the machines, and the central machine knows the current load of each machine. I would also like to apply some kind of machine learning here, because I will have statistics for each job (started, finished, CPU load, etc.). How can I distribute the jobs (calculations) in the best possible way, keeping the priorities in mind? Any suggestions, ideas, or algorithms? FYI: my platform is .NET.
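
    Before reaching for machine learning, a common baseline for this kind of problem is a greedy assignment: order jobs by priority and predicted duration, then always hand the next job to the least-loaded machine (the classic longest-processing-time heuristic). A rough Python sketch, with all names and numbers purely illustrative:

        import heapq

        def schedule(jobs, machines):
            """jobs: list of (name, predicted_seconds, priority); machines: list of names.
            Returns {machine: [job names]} via greedy least-loaded assignment."""
            # Higher priority first, then longer jobs first (LPT heuristic).
            ordered = sorted(jobs, key=lambda j: (-j[2], -j[1]))
            heap = [(0.0, m) for m in machines]   # (current predicted load, machine)
            heapq.heapify(heap)
            plan = {m: [] for m in machines}
            for name, seconds, _prio in ordered:
                load, machine = heapq.heappop(heap)
                plan[machine].append(name)
                heapq.heappush(heap, (load + seconds, machine))
            return plan

        print(schedule(
            [("render", 120, 1), ("report", 30, 2), ("backup", 300, 0)],
            ["node-a", "node-b"]))

    Per-machine capacity, push-time feedback from actual load, and learned duration estimates can then be layered on top of the same skeleton.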

    Read the article

  • Inheriting System.Windows.Controls.Panel

    - by modosansreves
    I use the Panel to provide custom layout of UIElements. For this, I override MeasureOverride and ArrangeOverride; in ArrangeOverride the children are given their proper locations. In my application there is a bunch of rather heavy children (in terms of visual count) inside the panel, but only 2 of them are located inside the visible region at a time. Working with my application, it feels like all of the visuals are being drawn. I need to introduce a kind of 'virtualization' and make the children render (and consume CPU cycles) selectively. How do I do that?

    Read the article

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL on Python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket:

        ctx = SSL.Context(SSL.SSLv23_METHOD)
        ctx.use_privatekey_file("mykey.pem")
        ctx.use_certificate_file("mycert.pem")
        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        addr = ('', int(8081))
        sock.bind(addr)
        sock.listen(5)

    Here is how it accepts clients:

        sock.setblocking(0)
        while True:
            if len(select([sock], [], [], 0.25)[0]):
                client_sock, client_addr = sock.accept()
                client = ClientGen(client_sock)

    And here is how it sends/receives on the connected sockets:

        while True:
            (r, w, e) = select.select([sock], [sock], [], 0.25)
            if len(r):
                bytes = sock.recv(1024)
            if len(w):
                n_bytes = sock.send(self.message)

    It's compacted, but you get the general idea. The problem is that once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see, anyway):

        Traceback (most recent call last):
          File "ClientGen.py", line 50, in networkLoop
            n_bytes = sock.send(self.message)
        WantReadError

    The manual's description of WantReadError is very vague, saying it can come from just about anywhere. What am I doing wrong?
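
    For reference, with a non-blocking connection WantReadError is OpenSSL's way of saying the TLS layer needs to read more data from the peer before the current call can finish, so callers are generally expected to wait for readability and retry. A minimal sketch of that retry pattern (not the poster's code):

        import select
        from OpenSSL import SSL

        def ssl_send_all(conn, data, timeout=0.25):
            """Send all of data on a non-blocking SSL.Connection, retrying when TLS stalls."""
            while data:
                try:
                    sent = conn.send(data)
                    data = data[sent:]
                except SSL.WantReadError:
                    # The TLS layer needs to read first (e.g. during the handshake);
                    # wait until the socket is readable, then retry the same send.
                    select.select([conn], [], [], timeout)
                except SSL.WantWriteError:
                    select.select([], [conn], [], timeout)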

    Read the article

  • How Two Programs Can Talk To Each Other In Java?

    - by Arnon
    My first time here... I want to reduce the CPU usage/ROM usage/RAM usage (in general, all the system resources that my app uses) - who doesn't? :) For this reason I want to split the preferences window from the rest of the application and let the preferences window run as an independent program. The preferences program should write to a properties file (not a problem at all) and send an "update signal" to the main program, i.e. call the update method (that I wrote) found in the Main class. How can I call the update method in the main program from the preferences program? Or, on the other hand, is there a way to build a preferences window that takes system resources only while it is visible? Is this approach of separating programs and letting them talk to each other (somehow) the right approach for speeding up my programs? tnx

    Read the article

  • Very Different Execution Times of SQL Query in C# and SQL Server Management Studio

    - by Paul
    I have a simple SQL query that, when run from C#, takes over 30 seconds and then times out every time, whereas when run in SQL Server Management Studio it completes instantly. In the latter case, the query execution plan reveals nothing troubling, and the execution time is spread nicely across a few simple operations. I've run 'EXEC sp_who2' while the query is running from C#, and it is listed as taking 29,000 milliseconds of CPU time and is not blocked by anything. I have no idea how to begin solving this. Does anyone have some insight? The query is:

        SELECT c.lngId, ...
        FROM tblCase c
        INNER JOIN tblCaseStatus s ON s.lngId = c.lngId
        INNER JOIN tblCaseStatusType t ON t.lngId = s.lngId
        INNER JOIN [Another Database]..tblCompany cm ON cm.lngId = cs.lngCompanyId
        WHERE t.lngId = 25
          AND c.IsDeleted = 0
          AND s.lngStatus = 1

    Read the article

  • Efficiently remove points with same slope

    - by Ram
    Hi, in one of my applications I am dealing with graphics objects. I am using the open source GPC library to clip/merge two shapes. To improve accuracy I am sampling the existing shapes (adding multiple points between two edges). Before displaying the merged shape, though, I need to remove the added points again, but I am not able to find an efficient algorithm that removes all the points between two edges that have the same slope, with minimum CPU utilization. Currently all points are of type PointF. Any pointer on this would be a great help.
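
    A common way to test for this without computing slopes (and without the divisions that hurt both precision and CPU) is a cross-product check: a middle point is redundant when the cross product of the two surrounding segments is near zero. A small Python sketch over plain (x, y) tuples rather than PointF:

        def drop_collinear(points, eps=1e-9):
            """Remove interior points that are collinear with their neighbours.
            points: list of (x, y) tuples along an open polyline."""
            if len(points) < 3:
                return list(points)
            kept = [points[0]]
            for i in range(1, len(points) - 1):
                ax, ay = kept[-1]
                bx, by = points[i]
                cx, cy = points[i + 1]
                cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
                if abs(cross) > eps:      # a real turn: keep the corner point
                    kept.append(points[i])
            kept.append(points[-1])
            return kept

        print(drop_collinear([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))
        # -> [(0, 0), (2, 0), (2, 2)]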

    Read the article

  • ZeroMQ REQ/REP on ipc:// and concurrency

    - by Metiu
    I implemented a JSON-RPC server using a REQ/REP 0MQ ipc:// socket, and I'm experiencing strange behaviour which I suspect is due to the fact that the unix socket underlying ipc:// is not a real socket but rather a single pipe. From the documentation, one has to enforce strict zmq_send()/zmq_recv() alternation, otherwise the out-of-order zmq_send() will return an error. However, I expected the enforcement to be per client, not per socket. Of course with a Unix socket there is just one pipeline from multiple clients to the server, so the server won't know whom it is talking with. Two clients could zmq_send() simultaneously and the server would see this as an alternation violation. The sequence could be:

        Client A: zmq_send()
        Client B: zmq_send()   <- will it block until the other send/receive completes? will it return -1?
        Client A: zmq_recv()
        Client B: zmq_recv()

    (I suspect it will fail with ipc:// due to inherent low-level problems, but with TCP it could distinguish the two clients.) So what about tcp:// sockets? Will it work concurrently? Should I use some other locking mechanism to work around this?
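
    One way to observe how the REQ/REP contract behaves with several clients is a toy echo experiment. A Python sketch using the pyzmq binding (the ipc path is arbitrary, and the fixed request counts are only there so the script exits):

        import threading
        import zmq  # third-party: pip install pyzmq

        ENDPOINT = "ipc:///tmp/reqrep-test"   # arbitrary path for the experiment

        def server(ctx):
            rep = ctx.socket(zmq.REP)
            rep.bind(ENDPOINT)
            for _ in range(4):                # two requests from each of two clients
                msg = rep.recv()
                rep.send(b"echo: " + msg)

        def client(ctx, name):
            req = ctx.socket(zmq.REQ)
            req.connect(ENDPOINT)
            for i in range(2):
                req.send(f"{name}-{i}".encode())   # REQ enforces send/recv per socket
                print(req.recv())

        ctx = zmq.Context()
        srv = threading.Thread(target=server, args=(ctx,))
        srv.start()
        clients = [threading.Thread(target=client, args=(ctx, n)) for n in ("A", "B")]
        for t in clients:
            t.start()
        for t in clients:
            t.join()
        srv.join()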

    Read the article

  • Identifying idle state on a windows machine

    - by SaM
    I know about the GetLastInputInfo method, but that only gives me the time since the last user input (keyboard or mouse). If the last user input was received 10 minutes ago, that doesn't mean the system has been idle for 10 minutes: scans, downloads, movies, there are lots of reasons. So how can we identify whether the system is truly idle or not? Does anyone know what Windows' definition of "idle" is? It must be a combination of thresholds (say, less than 5% CPU utilization, less than 3% disk utilization, etc.) along with the user-input parameter... does anyone know the exact definition?
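
    There is no single documented formula, but a practical sketch of the "no recent input AND the machine is quiet" combination looks roughly like the following Python (Windows-only, via ctypes plus the third-party psutil package; the 10-minute and 5% thresholds are arbitrary assumptions, not Windows' own definition):

        import ctypes
        from ctypes import wintypes
        import psutil  # third-party: pip install psutil

        class LASTINPUTINFO(ctypes.Structure):
            _fields_ = [("cbSize", wintypes.UINT), ("dwTime", wintypes.DWORD)]

        def seconds_since_input():
            # Windows-only: asks user32 for the tick count of the last input event.
            info = LASTINPUTINFO()
            info.cbSize = ctypes.sizeof(info)
            ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
            return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

        def looks_idle(min_input_idle=600, max_cpu=5.0):
            """Heuristic only: no input for 10 minutes AND average CPU below 5%."""
            if seconds_since_input() < min_input_idle:
                return False
            return psutil.cpu_percent(interval=1.0) < max_cpu

        print(looks_idle())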

    Read the article

  • How to hide helper functions from a public API in C

    - by emge
    I'm working on a project and I need to create an API. I am using sockets to communicate between the server (my application) and the clients (the other applications using my API). This project is in C, not C++. I come from a Linux background and this is my first project using Windows, Visual Studio 2008, and DLL libraries. I have communication working between the client and server, but I have some code that is duplicated in both projects. I would like to create a library (probably a DLL) that both projects can link to, so I don't have to maintain extra code. I also have to create the library that exposes the API I need to make available to my clients. The API functions that I want public call these helper functions, the "duplicated code"; I don't want to expose those helpers to my clients, but I do want my server to be able to use them. How can I do this? I will try to clarify with an example. This is what I started with:

        // Server project:
        int Server_GetPacket(SOCKET sd);
        int ReceiveAll(SOCKET sd, char *buf, int len);
        int VerifyLen(char *buf);

        // Client project:
        int Client_SendCommand(int command);
        int Client_GetData(int command, char *buf, int len);
        int ReceiveAll(SOCKET sd, char *buf, int len);
        int VerifyLen(char *buf);

    This is kind of what I would like to end up with:

        // Server project:
        int Server_GetPacket(SOCKET sd);

        // Library with public and private parts:
        // private API (not exposed to my clients)
        int ReceiveAll(SOCKET sd, char *buf, int len);
        int VerifyLen(char *buf);
        // public API (header file available to clients)
        int Client_SendCommand(int command);
        int Client_GetData(int command, char *buf, int len);

    Thanks, any help would be appreciated.

    Read the article

  • What is the optimal number of threads for performing IO operations in java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes "For computational problems like this that do not I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is, when performing I/O operations such as file writing, file reading, file deleting, etc, are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into this. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-cpu machine?

    Read the article

  • How do I open a hardware accelerated DirectX window on a secondary screen

    - by user567021
    I'm looking to create a hardware-accelerated DirectX (9 at the moment) window on a secondary screen. This screen is connected to the same graphics adapter as the primary screen (at least at the moment). Currently, when I try to open the window on the secondary screen, either by window position or by dragging it there, CPU usage jumps by about 10%, which seems to indicate that Windows is switching to a software fallback rather than hardware acceleration. The machines run Windows XP with NVIDIA graphics cards (the cards vary, as this runs on several machines) and the latest drivers. They are also running CUDA at the same time to produce the images, if that matters. The programming language is C++, with manual window and message-queue creation; no toolkit is used at the moment to manage the GUI. Thanks

    Read the article
