Search Results

Search found 7213 results on 289 pages for 'multi processor'.


  • Modern monitor technologies - need to find a new monitor

    - by Michal Minicki
    I'm preparing to replace my old LCD monitor with a new one. I have an old NEC 20WGX2 Pro, based on an IPS panel. I'm looking for a screen that gives good color output but is also very good for gaming (since gaming is its primary use). I tend to switch monitors between my different computers at home, so it has to be multi-purpose, hence the IPS panel in the past. Now, where can I read up on the newest monitor technologies so I can make an informed decision? I need to find the best fit for myself, and my knowledge is very outdated at the moment. Any hints are greatly appreciated, be it info on technologies, web sources, links to other questions, etc.

  • What is application and process?

    - by Lu Lu
    An application consists of one or more processes. A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread. Source: http://msdn.microsoft.com/en-us/library/ms684841%28VS.85%29.aspx I understand threads, but I can't distinguish between an application and a process. What is an application? What is a process? How can an application have more than one process? And please give me an example in C#. Thanks.
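    For what it's worth, here is a minimal C# sketch of the distinction (notepad.exe stands in for any second executable your application might launch): the program below is itself one process, starts a second process, and runs a second thread inside its own process. An "application" is then whatever set of cooperating processes the user perceives as one program; a browser such as Chrome presents one window but runs many processes.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class AppProcessDemo
{
    static void Main()
    {
        // This executable runs as exactly one process.
        Console.WriteLine("My process id: " + Process.GetCurrentProcess().Id);

        // Start a second process. If both binaries ship as one product,
        // the user would call the pair a single "application".
        Process child = Process.Start("notepad.exe");
        Console.WriteLine("Child process id: " + child.Id);

        // A second thread inside *this* process; it shares the process's
        // memory and may run any of its code.
        Thread worker = new Thread(() => Console.WriteLine("Hello from a second thread"));
        worker.Start();
        worker.Join();

        child.Kill(); // tidy up the extra process
    }
}
```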

  • Is a MacBook powerful enough to do iPad development, or do I need a MacBook Pro?

    - by ronaldwidha
    The title probably says it all. Considering that an iPad's processor is nothing compared to a MacBook's, I would think a MacBook should be more than capable of running the simulator. However, not knowing much about iPhone/iPad development, I'd like to get some opinions on this. For example: how many apps typically need to be running for iPad dev (editor, debugger, perf monitor, trace log, etc.)? Are these apps resource (memory, CPU) intensive? Please do not take into consideration actual image, 3D, video, and sound development; I understand one would need quite a beefy machine to produce those kinds of creative assets. What I'm looking at is a machine for code development, physics, and putting together the produced assets (images, vector graphics, 3D video, sound, etc.).

  • Image processing in multithreaded mode using Java

    - by jadaaih
    Hi folks, I am supposed to process images in multithreaded mode using Java. I may have a varying number of images, whereas my number of threads is fixed, and I have to process all the images using that fixed set of threads. I am stuck on how to do it; I had a look at ThreadPoolExecutor, BlockingQueue, etc., and I am still not clear. What I am doing is: get the images and add them to a LinkedBlockingQueue holding the runnable image-processor tasks; create a ThreadPoolExecutor, one of whose constructor arguments is that LinkedBlockingQueue; then iterate in a for loop up to the queue size, calling threadPoolExecutor.execute(linkedBlockingQueue.poll()). All I see is that it processes only 100 images, which is the minimum thread count passed in. I am clearly misunderstanding something; how do I process all the images in sets of 100 (threads) until they are all done? Any examples or pseudocode would be highly helpful. Thanks! J
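    For what it's worth, the usual pattern is not to poll the work queue yourself: hand every image task to the executor and let its internal queue hold whatever the fixed threads cannot run yet. A minimal sketch along those lines (the Runnable list stands in for your existing image-processor tasks):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ImageBatch {
    public static void processAll(List<Runnable> imageTasks) throws InterruptedException {
        // A fixed pool of 100 worker threads, however many images there are.
        ExecutorService pool = Executors.newFixedThreadPool(100);

        // Submit every task; the executor's internal (unbounded) queue
        // holds the ones no thread is free to run yet, so none are dropped.
        for (Runnable task : imageTasks) {
            pool.execute(task);
        }

        // Accept no new work, then wait for everything queued to finish.
        pool.shutdown();
        if (!pool.awaitTermination(1, TimeUnit.HOURS)) {
            pool.shutdownNow(); // gave up waiting; cancel what's left
        }
    }
}
```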

  • Caching DNS server (BIND 9.4.2) CPU usage is extremely high

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs: Xeon dual-core 2.8GHz, 4GB of RAM, CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE), BIND 9.4.2. rndc status reports: recursive clients: 666/4900/5000. There are about 300 new queries (not in cache) per second. BIND always uses 100% of one core with the single-threaded build; after I recompiled it multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't find any info about how BIND uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage from cat /proc/meminfo: MemTotal: 4147876 kB, MemFree: 1863972 kB, Buffers: 143632 kB, Cached: 372792 kB, SwapCached: 0 kB, Active: 1916804 kB, Inactive: 276056 kB. I've set max-cache-size to 0 to make sure BIND can use as much RAM as it wants, but it always stops at ~2GB. Since we get uncached queries every second, theoretically RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk

  • Translating external api results in Drupal

    - by Chuck Vose
    We're building a multi-language Drupal stack, and one of our concerns is that our payment processor is going to have to send some information back to us. We've been able to narrow this down so that the strings they send back look like <country code>-<number of months>, so we can easily translate that into any number of languages, except English. t('FR-12') is all well and good if we want to translate that into a French description, but because English is the source language, a similar string like t('EN-12') is not translatable. The same goes for a generic string like #API_Connection_Error. This sort of generic-string approach seemed really compelling to me at first, but it seems not to work in Drupal. Do you have any suggestions about how to translate generic strings like these into both English and other languages? Thank you; I've been looking through Google all morning.

  • New vhost shows the main host's AWStats

    - by vn
    Hi, I just began working at this new job and I have to configure a new host for stats with AWStats. I once used AWStats on my own server, no biggie. Now I'm on a multi-site server with the access_log files nicely split per site. I copied an awstats.conf file from one of the sites that already has (working) stats. I changed the LogFile and SiteDomain values as described at http://awstats.sourceforge.net/docs/awstats_setup.html#BUILD_UPDATE, saved the conf, and ran the commands perl awstats.pl -config=mysite -update and perl awstats.pl -config=mysite -output -staticlinks > awstats.mysite.html (yes, I changed them with my own info). PROBLEM IS: whenever I try to access the HTML file or the dynamic page (with the config option on awstats.pl, like my working site uses), I get the stats of the MAIN site, generated from access.log itself (and not access_log-mysite), judging from what it says at the top of the page and from the hostname in the left tab ("stats for mysite.com")... What did I do wrong? There are no errors as far as I can see... Thanks a lot for any help.

  • Speech.Recognition GrammarBuilder/Choices Tree Structure

    - by user2210179
    In playing around with C#'s speech recognition, I've stumbled across a roadblock in the creation of an effective GrammarBuilder with Choices (more specifically, Choices of Choices), i.e., considering a set of logical commands such as the ones quoted below. One solution would be to hard-code every combination of speech lines and add them to a GrammarBuilder (e.g. "SET LEFT COLOR RED" and "SET RIGHT CLEAR"); however, this would quickly max out the limit of 1024 entries, especially when dealing with number combinations. Another solution would be to append all 'columns' as Choices (and filter out incorrect paths upon recognition); however, this seems processor-heavy and unnecessary. The middle ground seems like the best path, with Choices of Choices, like a tree structure on a GrammarBuilder, but I'm not sure how to proceed. Any suggestions?
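    For what it's worth, here is a minimal sketch of the Choices-of-Choices approach built from just the two commands quoted above (the extra colors are placeholders). The key point is that a Choices can be constructed from whole GrammarBuilder objects, which is what gives the grammar its tree structure without enumerating every combination:

```csharp
using System.Speech.Recognition;

class CommandGrammar
{
    public static Grammar Build()
    {
        Choices side = new Choices("LEFT", "RIGHT");
        Choices color = new Choices("RED", "GREEN", "BLUE");

        // Branch 1: "SET <side> COLOR <color>"
        GrammarBuilder setColor = new GrammarBuilder("SET");
        setColor.Append(side);
        setColor.Append("COLOR");
        setColor.Append(color);

        // Branch 2: "SET <side> CLEAR"
        GrammarBuilder clear = new GrammarBuilder("SET");
        clear.Append(side);
        clear.Append("CLEAR");

        // A Choices of whole GrammarBuilders is the "tree" node:
        // 2*3 + 2 = 8 phrases from a handful of objects.
        return new Grammar(new GrammarBuilder(new Choices(setColor, clear)));
    }
}
```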

  • 1 Gigabit vs 1.25 Gigabit mismatch

    - by Joel Coel
    I need to re-connect the network to a small old outbuilding that hasn't been used in several years. I have to use the existing 62.5µm multi-mode fiber run. This end of the fiber is already connected. For the end in the building, I was looking at this pair: http://www.tp-link.com/products/productDetails.asp?class=switch&content=spe&pmodel=TL-SM311LM http://www.tp-link.com/products/productDetails.asp?class=&content=spe&pmodel=TL-SL2210WEB If you look at the SFP (first link), it's listed at 1.25Gbps. That's odd, because IIRC the fiber should really only do 1Gbps. It's also supposed to work with the switch I posted (second link), but the GBIC port on the switch also only shows 1Gbps. What am I missing here?

  • Erlang: Interfacing with Xalan: port driver or nif?

    - by mwt
    I'd like to get a real XSLT processor working with Erlang. Which would be the better interface, a NIF or a port driver? According to the NIF documentation, NIF calls block the runtime, so they should not take long; is processing a long XML document too long? Also, I'd like to allow Erlang callbacks during the transformation. Does that seem possible? Possible with a NIF but not a port driver, or vice versa? I've never written any C, so I figured this would be a good introduction. Xalan is C++; I assume a NIF can work with that, right?

  • Fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need the fastest possible performance from XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import only works with files). Alternatively, I could cache many instances of XslCompiledTransform, one per template. Is that reasonable? Are there other ways? Which is best? And what other tips are there for improving the performance of the MS XSLT processor?
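    One pattern that sidesteps the xsl:import limitation is to cache one compiled transform per stylesheet path; XslCompiledTransform is thread-safe for Transform once loaded, and the Load overload that takes an XmlResolver lets imports and includes be fetched (a custom resolver could even serve them from memory rather than disk). A rough sketch, assuming the stylesheets are addressable by file path:

```csharp
using System.Collections.Generic;
using System.Xml;
using System.Xml.Xsl;

static class XsltCache
{
    static readonly Dictionary<string, XslCompiledTransform> cache =
        new Dictionary<string, XslCompiledTransform>();
    static readonly object sync = new object();

    public static XslCompiledTransform Get(string stylesheetPath)
    {
        lock (sync)
        {
            XslCompiledTransform xslt;
            if (!cache.TryGetValue(stylesheetPath, out xslt))
            {
                xslt = new XslCompiledTransform();
                // The resolver argument is what allows xsl:import and
                // xsl:include inside the stylesheet to be resolved.
                xslt.Load(stylesheetPath, XsltSettings.Default, new XmlUrlResolver());
                cache[stylesheetPath] = xslt;
            }
            // Safe to share: Transform() is thread-safe after Load().
            return xslt;
        }
    }
}
```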

  • Rolling Back Microsoft CRM during testing

    - by npeterson
    Process-related question: currently we have a multi-tenant installation of MS CRM 4.0 on three servers: Dev, Test, and Live. We are actively working on customizing one of the tenants, but the others are static. During user testing, we often find it necessary to 'start fresh' in one of the tenants. Is it better to try to delete the changes from the tenant (created accounts, leads, etc.), or to just revert the database to a backup from before the testing started? Are there compelling reasons why bulk delete is not advisable for MS CRM, or why reverting the database frequently could cause issues?

  • In what way does non-"full n-key rollover" hinder fast typists?

    - by Michael Kjörling
    Wikipedia claims (although the latter claim does not cite a source) that: High-end keyboards that provide full n-key rollover typically do so via a PS/2 interface as the USB mode most often used by operating systems has a maximum of only six keys plus modifiers that can be pressed at the same time.[4] This hinders fast typists, ... In what way would the system being able to recognize only six non-modifier keys at once hinder a fast typist? I consider myself a relatively fast typist and I usually press one key, plus modifiers, at once; I can't imagine any real-life situation in which the system only recognizing six non-modifier keys being pressed at once has been a limiting factor in my keyboard usage. (Multi-stroke keyboard shortcuts as used by high-end software like Visual Studio, Emacs and the like are a different matter.) Note that I am not really interested in answers centered around multiplayer computer games; I'm looking for answers that give reasons that would be relevant to typists, somehow supporting the statement made on Wikipedia.

  • Problems with merge replication

    - by jess
    Hi, we are developing a multi-user desktop application with users located in different countries. The platform is .NET 3.5, SQL Server 2008, and WinForms. My client has enlisted the help of a DBA, who has implemented merge replication. To facilitate replication, we made all our primary keys GUIDs. Now we are facing these issues with replication: subscriber expiration sometimes stops replication, and we have found no clean way to re-add the subscriber; every change to the db schema requires polling all the data all over again, which seems strange (what could be the problem here?); and we sometimes get duplicate keys, which also stops replication. I am sure these issues can be resolved; maybe we have not gone about the implementation the right way. Can you suggest how to go about implementing it? Or is the above information enough to diagnose the problem?

  • How can I stop Flash from leaving full-screen mode when it loses focus due to a mouse-click on the other monitor?

    - by therefromhere
    On a multi-monitor system, if I'm viewing a full-screen video in Flash on one monitor, clicking the mouse on the other monitor causes Flash to leave full-screen mode and revert to normal size. What's the easiest way of preventing this that works on my version of Flash? My system is Flash 10 (10.0.12.36), in Firefox 3.5 on Windows Vista 64, but I think it affects all current versions. This is very annoying behaviour, but unfortunately, according to this bug report response, it seems to be a security feature rather than a bug: We understand that many users would like fullscreen on one monitor and to be able to interact with your OS on another monitor. However, due to security requirements, we require that Flash and Browser must be the current focus of your OS.

  • skip-limit ignored for skippable exception thrown from writer

    - by ck
    I am working on a project with Spring Batch 2.0.2 and have skippable exceptions set up in the config. For exceptions thrown from the processor, everything works fine: items are skipped, and once the limit is exceeded the job fails (or stops). For exceptions thrown from the writer (same chunk), it keeps skipping; the skip-limit doesn't seem to matter. Maybe I have misunderstood and skipping and the writer don't go together, or maybe I am missing some configuration. Does anyone know how to properly skip items, with a limit, within the writer?

  • What's wrong with my logic here?

    - by stu
    In Java they say don't concatenate Strings; instead you should make a StringBuffer, keep appending to that, and when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings makes lots of temporary objects. But if the goal were performance, you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is: where is the flaw in the logic that you either use a faster chip or you spend extra time writing more efficient software?
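    One piece missing from the trade-off: repeated concatenation is not a constant-factor cost that a faster chip can hide, it is quadratic, because each += copies every character built so far, whereas appending to a buffer is amortized linear. A small illustration (StringBuilder is the unsynchronized sibling of StringBuffer):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        // Each += allocates a fresh String and copies all previous
        // characters: roughly n*n/2 character copies over the loop.
        String s = "";
        for (int i = 0; i < 10000; i++) {
            s += i;
        }

        // One growable buffer: each append copies only the new digits.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            sb.append(i);
        }
        String t = sb.toString();

        System.out.println(s.length() + " " + t.length()); // same result
    }
}
```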

  • Should I Use PHP as FastCGI?

    - by Synetech inc.
    Hi, I am running an Apache web server on my Windows machine. It is not generally a public server (most of the little bit of traffic comes from the machine itself, and most of the public traffic comes from crawlers). Basically, it is mostly just for use as a test-bed/development system. I have read that running PHP as FastCGI is better (i.e. faster and more stable) than as an Apache module. However, I really don’t like the idea of multiple php.exe processes (I don’t like that Apache has two processes, and I’m not even too thrilled with Chromium’s multi-process model). So I’m wondering if it would be worthwhile to change PHP to FastCGI for this scenario. If it is, how would I configure it? Pretty much all of the information I have seen has been either for non-Windows or for IIS. As I said, I’m running Windows + Apache. Thanks a lot.
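    For what it's worth, a rough mod_fcgid sketch for Windows + Apache; the paths and limits are placeholders, and the directive spellings are the mod_fcgid 2.3 ones (older builds used FCGIWrapper and similarly renamed directives):

```apache
# Load the FastCGI process manager (ships separately from Apache).
LoadModule fcgid_module modules/mod_fcgid.so

# Route .php requests to php-cgi.exe instead of the mod_php module.
AddHandler fcgid-script .php
FcgidWrapper "C:/php/php-cgi.exe" .php

# Tell php-cgi.exe where php.ini lives and let each process serve
# many requests before exiting (keep the two limits in sync).
FcgidInitialEnv PHPRC "C:/php"
FcgidInitialEnv PHP_FCGI_MAX_REQUESTS 1000
FcgidMaxRequestsPerProcess 1000

<Directory "C:/Apache/htdocs">
    Options +ExecCGI
</Directory>
```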

  • Linux: How to break a large file into smaller files?

    - by Runcible
    I have a giant file (20 gigs) sitting on my source machine and I need to transfer it to my target machine. For the purposes of this question, let's assume that I do not have network connectivity between the two machines. I need to break this file into a series of smaller files, write the smaller files to DVD(s), then re-assemble everything on the target machine. Both source and destination machines are Linux boxes. Is there a way to accomplish this using tar? I have a feeling that I need to use the --multi-volume parameter. What are my options? I need to be able to specify the size of the volume files, in order to make sure that each one will fit onto a single DVD. Thanks!
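    tar can do this with --multi-volume (-M) plus --tape-length, but since a single file needs no archiving, plain split is simpler. A sketch, assuming single-layer DVDs (4 GiB pieces leave comfortable headroom):

```sh
# Source machine: cut the file into 4 GiB pieces named
# bigfile.part.aa, bigfile.part.ab, ... -- one piece per DVD.
split -b 4G bigfile bigfile.part.
md5sum bigfile > bigfile.md5

# Target machine, after copying the pieces off the DVDs:
cat bigfile.part.* > bigfile
md5sum -c bigfile.md5   # verify the reassembled file is intact
```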

  • Automated payment notification with PHP

    - by Rob Y
    I'm about to integrate automated payments into a site. To date, I've successfully used PayPal for a number of projects, but these have always been sites selling physical goods: I upload the cart contents, the user pays, and a person physically ships the goods. This site is a one-off payment to enable extra features on a web app. My current thinking is to go down the PayPal IPN route to get a notification back and update the user's account based on the successful payment. The question is in two parts: 1) is there a better/simpler way (any payment processor considered)? 2) does anyone know of a code library or plugin for PHP which will speed up my integration? Thanks for your help. Rob

  • XSLT workflow with variable number of source files

    - by chiborg
    I have a bunch of XML files with a fixed, country-based naming scheme: report_en.xml, report_de.xml, report_fr.xml, etc. Now I want to write an XSLT stylesheet that reads each of these files via the document() XPath function, extracts some values, and generates one XML file with a summary. My question is: how can I iterate over the source files without knowing the exact names of the files I will process? At the moment I'm planning to generate an auxiliary XML file that holds all the file names and use that auxiliary file in my stylesheet to iterate. The file list would be generated with a small PHP or bash script. Are there better alternatives? I am aware of XProc, but investing much time into it is not an option for me at the moment; maybe someone can post an XProc solution. Preferably the solution would include workflow steps where the reports are downloaded as HTML and tidied up :) I will be using Saxon as my XSLT processor, so if there are Saxon-specific extensions I can use, those would also be OK.
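    Since Saxon is the processor, one alternative to the auxiliary file list: Saxon's default collection URI resolver accepts a directory URI with a select pattern, so the stylesheet can enumerate the reports itself. A sketch, assuming the files sit in a reports/ directory relative to the stylesheet:

```xml
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Invoke with Saxon's -it:main; no principal input document needed. -->
  <xsl:template name="main">
    <summary>
      <xsl:for-each select="collection('reports?select=report_*.xml')">
        <!-- "." is one report document; extract the needed values here. -->
        <report source="{document-uri(.)}"/>
      </xsl:for-each>
    </summary>
  </xsl:template>
</xsl:stylesheet>
```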

  • Why do my download speeds drastically vary during a download?

    - by J. Anthony Carter
    I watch the download speed rise and fall like waves in a storm. At night, during low bandwidth usage, I have achieved speeds as high as 3.23 M/sec, but then watched them decline to 250 K/sec and climb back up, over and over. During the day my best is around 1.67 M/sec, with lows down to 65 K/sec. On top of this, why does a download need to slow down when approaching the end of the download? It's not like a multi-hundred-ton train needing to decrease speed as it approaches the station.

  • Computer freezes for 2+ seconds, mouse still moves

    - by xsaero00
    I have this problem on my workstation: the computer will effectively freeze for 2-5 seconds for no apparent reason, then continue as normal. While frozen, the mouse is still movable, but only on one of the screens in my multi-screen setup. What could be the likely cause? System: CPU: i7-920; Memory: 12GB of Patriot DDR3, 6 modules; OS: SLED 11 (SUSE Linux Enterprise Desktop), using GNOME; Main board: Asus P6T; Video: two Nvidia 9500GT cards connected to three displays. I am using the memory at the recommended settings of 8-8-8-1333; it has an XMP profile. The CPU is a bit overclocked to 3.3 GHz, but my cooling more than allows for it. I ran the computer with all overclocking off and a lower memory speed, but the issue was still there. Any ideas? Where should I start looking?

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 or more catalogs, I can't have someone manually profile and index each one as its usage patterns require. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open-source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I have also implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of CPU power, which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which makes the application unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with experience using this tool, or Kinect in general, has made it more efficient and less of a performance hog. Right now I have removed the sections which display the RGB video and the depth video, but even doing that did not make a big impact. Any help is appreciated, thanks!
