Search Results

Search found 8429 results on 338 pages for 'batch processing'.

Page 198/338 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • Predictive vs Least Connection Load Balancing Techniques

    - by Mani
    I have a Windows-based desktop application that communicates via TCP with the application servers (Windows 2003). There are no sticky sessions between client calls. We have exactly two servers to load balance, and we are considering an F5 hardware load balancer. The application is of the heavy-load type: not much business logic in the services, but it retrieves quite large amounts of data most of the time, maybe 5,000 to 10,000 records on average. It is used mainly for storing and retrieving data, with no special processing or calculations on the server side. I am favouring 'predictive', since my services sometimes take a while to return data, so tracking server feedback should yield better routing. I am not sure whether this is enough information to go on, but given the above, what are some suggestions and things to consider when choosing between Predictive and Least Connections? Thanks.

    Read the article

  • How to add flags to RC.EXE through QMake .pro makefiles

    - by Hernán
    I have the following definition in my .pro file:

        RC_FILE = app.rc

    This RC file contains a global include at the top:

        #include "version_info.h"

    The version_info.h header lives in a common header directory. Since RC.EXE honours the INCLUDE environment variable, according to the MS documentation, my build batch script sets it up accordingly:

        SET INCLUDE=%PROJECTDIR%\version;%INCLUDE%
        ...
        QMAKE project.pro -spec win32-msvc2008 -r CONFIG += release

    This works perfectly: RC reads the INCLUDE variable, so "version_info.h" is included in every RC file properly. The problem arises when I generate a VS solution (or import the project through the VS add-in). The RC invocation contains no /I flag (as I expected), but it also does not read the INCLUDE variable, even when I set it through the system 'Environment Variables' dialog in XP. So I'm stuck with this problem, with two alternatives I could not get to work: make the VS RC.exe invocation honour the INCLUDE variable (it worked neither as a user nor as a system variable), or force QMAKE to pass a /I flag to the RC invocation and get that flag imported into the project settings (Resource Compiler properties). Thanks in advance.
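
    For what it's worth, later qmake releases grew a dedicated variable for exactly this; below is a hedged sketch (RC_INCLUDEPATH may not exist in the Qt 4.x / win32-msvc2008-era qmake, and the path shown is a stand-in for the real common header directory):

        # RC_INCLUDEPATH entries are emitted as /I flags on the rc.exe
        # command line, so they should also carry into generated VS projects
        RC_FILE = app.rc
        RC_INCLUDEPATH += $$PWD/version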

    Read the article

  • Is Flash typically slow on Linux?

    - by CSarnia
    Specifically, I'm running Mint 8 (Helena). I'm extremely new to Linux and was looking for something user-friendly and GUI-oriented. The box won't be used for much other than web browsing and word processing. Anyway, it runs relatively smoothly, except for YouTube videos, especially in full screen, which run at something like 1 FPS and, even after being closed, slow Firefox to a crawl until I restart it. I'd seen an xkcd comic on the matter but regarded it as a joke until now. Is this actually a known problem? Are there any remedies I can try to smooth things out?

    Read the article

  • Opscode Chef nginx compile from source issue reports successful run but does nothing

    - by v_abhi_v
    I am trying to install nginx from source with Opscode Chef, and it is a bit weird: the run completes without complaint, but nginx never gets installed. This is what my role attributes look like:

        "nginx": {
          "default_site_enabled": false,
          "version": "1.2.6",
          "init_style": "init",
          "install_method": "source",
          "configure_flags": [
            "--without-http_access_module",
            "--without-http_auth_basic_module",
            "--without-http_autoindex_module",
            "--without-http_browser_module",
            "--without-http_charset_module",
            "--without-http_fastcgi_module",
            "--without-http_memcached_module",
            "--without-http_referer_module",
            "--without-http_scgi_module",
            "--without-http_split_clients_module"
          ],
          "log_dir": "/var/log/nginx",
          "binary": "/opt/nginx/sbin/nginx",
          "source": {
            "prefix": "/opt/nginx/dist",
            "modules": [
              "http_ssl_module",
              "http_gzip_static_module"
            ]
          }
        },

    The Chef log shows:

        [2012-12-19T02:37:44+00:00] INFO: Processing bash[compile_nginx_source] action run (nginx::source line 82)
        [2012-12-19T02:37:45+00:00] INFO: bash[compile_nginx_source] ran successfully

    I am clueless as to what's going on :(

    Read the article

  • Netcat server output with multiple greps

    - by Sridhar-Sarnobat
    I'm trying to send some data from my web browser to a txt file on another computer. This works fine:

        echo 'Done' | nc -l -k -p 8080 | grep "GET" >> request_data.txt

    Now I want to do some further processing (involving regex manipulation) before writing the HTTP request data to my txt file. But if I try something like the following, nothing is written to the file:

        echo 'Done' | nc -l -k -p 8080 | grep "GET" | grep "HTTP" >> request_data.txt

    (For simplicity of explanation I've used another grep instead of, say, awk.) Why does the second grep not get any data from the output of the first grep? I'm guessing piping with netcat works differently from what I've assumed to get this far. How do I perform a second grep before writing to my txt file? My debugging so far suggests: it is nothing to do with stderr vs stdout, and parentheses don't help.
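
    A likely culprit (a hedged guess, but a classic one) is output buffering rather than netcat itself: when GNU grep writes to a pipe instead of a terminal it block-buffers, so the second grep sees nothing until several kilobytes have accumulated. Forcing line buffering on the intermediate grep is a minimal fix to try:

        # --line-buffered flushes after every matching line instead of waiting
        # for a full block, so the downstream grep sees data promptly
        echo 'Done' | nc -l -k -p 8080 \
          | grep --line-buffered "GET" \
          | grep "HTTP" >> request_data.txt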

    Read the article

  • Best way to produce automated exports in tab-delimited form from Teradata?

    - by Cade Roux
    I would like to be able to produce a file by running a command or batch script that exports a table or view (SELECT * FROM tbl) in text form (default conversions to text for dates, numbers, etc. are fine): tab-delimited, with NULLs converted to empty fields (i.e. a NULL column would leave nothing between the tab characters), with appropriate line termination (CRLF, Windows-style), and preferably with column headings. This is the same export I can get in SQL Assistant 12.0 by choosing the export option, using a tab delimiter, setting my NULL value to '' and including column headings. I have been unable to find the right combination of options; the closest I have gotten is by building a single column with CAST and '09'XC, but the rows still have a leading 2-byte length indicator in most settings I have tried. I would prefer not to have to build large strings for the various tables.
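
    For what it's worth, that leading 2-byte length indicator is characteristic of BTEQ's DATA (record-mode) export; REPORT (field-mode) export writes plain text without it. Below is a hedged, untested BTEQ sketch along those lines; the logon string, table and column names are placeholders, COALESCE maps NULLs to empty fields, and '09'XC supplies the tab:

        .LOGON mytdpid/myuser,mypassword;
        .SET TITLEDASHES OFF;               /* suppress the ---- underline row */
        .SET WIDTH 65531;                   /* widen so rows are not wrapped   */
        .EXPORT REPORT FILE = export.txt;   /* REPORT mode: no length prefix   */
        SELECT TRIM(COALESCE(CAST(col1 AS VARCHAR(64)), '')) || '09'XC ||
               TRIM(COALESCE(CAST(col2 AS VARCHAR(64)), ''))
               (TITLE 'col1 col2')          /* heading row; use a real tab here */
        FROM tbl;
        .EXPORT RESET;
        .LOGOFF;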

    Read the article

  • Samsung HMX-H100P camcorder and video encoding with mencoder

    - by jskg
    Hi everyone; my background is totally unrelated to video, so pardon my newbie style. I own a Samsung HMX-H100P camcorder and I'm trying to encode videos to upload to YouTube and Vimeo. First problem: videos straight off the camera, with no processing, look like this when I play them with Totem (Linux) or VideoLAN: http://www.youtube.com/watch?v=AANbl_DTuzE. Second problem: when I try to encode the camera's videos with mencoder, I get the resolution I chose, but those ugly lines and the lagging are still present. Here's the command I use:

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp \
          -oac lavc -ovc lavc \
          -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
          -vf scale=1280:720 -ofps 25000/1001 -o $outputFile

    Any ideas? Thanks in advance.
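
    The 'ugly lines' described are the classic combing artifacts of interlaced footage (camcorders in this class typically record 1080i), so a hedged tweak worth trying is to deinterlace before scaling, e.g. with mencoder's yadif filter; everything else below is the original command unchanged:

        # yadif deinterlaces; it must come before scale in the filter chain
        mencoder "$inputFile" -aspect 16:9 -of lavf -lavfopts format=psp \
          -oac lavc -ovc lavc \
          -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
          -vf yadif,scale=1280:720 -ofps 25000/1001 -o "$outputFile"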

    Read the article

  • MySQL settings - using the available resources

    - by Christian Payne
    I've got a lot of processing work I need to run on a MySQL server. I've installed MySQL 5.1.45-community on a Win 2007 64-bit box. It's running on a Xeon: 3 GHz, 6 processors, 8 GB of RAM. It doesn't seem to matter what queries I run (or how many I run at the same time); when I look in Task Manager, I see one processor out at 100% while the other five are idle. Memory is static at 1.54 GB. When I installed MySQL, I used the wizard and selected the default "server" (not workstation) option. I feel like I should be getting more bang for my buck. Is there something else I should be monitoring, or something I should change to use the other system resources?
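
    One detail that may explain the single pegged core (a hedged observation, but it matches the symptom): MySQL executes each query on a single thread, one thread per connection, so work submitted over one connection can never use more than one core. Parallelism has to come from multiple connections; a rough illustration from a Windows command prompt, where process_chunk is a hypothetical stored procedure that handles one slice of the workload and credentials are omitted:

        REM launch several clients in parallel, one slice of the work each
        start "chunk1" mysql mydb -e "CALL process_chunk(1);"
        start "chunk2" mysql mydb -e "CALL process_chunk(2);"
        start "chunk3" mysql mydb -e "CALL process_chunk(3);"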

    Read the article

  • Firefox "auto-complete" is very slow

    - by netvope
    Firefox version: 3.6. My places.sqlite is rather big (114 MB, after being optimized by SpeedyFox). If I turn on auto-complete, it can take 1 or 2 seconds for Firefox to accept a newly typed URL. To reproduce the issue: type a URL into the URL bar and press Enter. Nothing happens, and Firefox consumes 100% CPU (actually 50% of 2 cores) for 1 to 2 seconds. Then Firefox starts the network connection and loads the web page. Since it consumes 100% CPU, I don't think the bottleneck is the disk. I have some experience with SQLite, and I know a 100 MB DB is very small; to produce this delay, Firefox must be doing some expensive processing or running inefficient queries. The issue does not appear if auto-complete is turned off, or the URL is frequently used, or a new profile with no history is used. Does anyone have any idea how to solve the problem? Should I file this as a bug? I don't want to give up my 100 MB of history, but I don't want to give up auto-complete either :)
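
    Since a vacuum only compacts the file, one cheap extra step to try (a hedged suggestion; close Firefox and back up the profile first) is refreshing SQLite's query-planner statistics, which can steer the auto-complete queries onto better indexes:

        # run from the Firefox profile directory
        sqlite3 places.sqlite "ANALYZE;"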

    Read the article

  • In SharePoint, why can I "multiple document upload" a 47,297 byte file, but not a 47,298 byte file?

    - by Jim
    It's strange: I can upload a document named 47k.txt that is 47,297 bytes using the "Multiple Document Upload" feature. However, if I add a single character to the end of the text file, the upload fails. Also, if I rename the file to 47kx.txt and try to upload it, it fails. This is the error I get in the SharePoint logs:

        Category: General
        Event ID: 8jzm
        Level: High
        Message: #90012: An error was encountered while processing files on the server. Try uploading one file at a time by using the single upload page.

    The same error is reported in a message box on the client side. Does anybody know why this would happen?

    Read the article

  • MySQL fast insert dependent on a flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data, and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations (auto-increment int, PK)
        X (float)
        Y (float)
        AlwaysIgnore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates (int, PK, "FK")
        DATA1 (float)
        DATA2 (float)
        ...
        DATA7 (float)

    Ideally, I'd like to find a method where I can do something like:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE)
        ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1)

    That is, for my large batch of input data (250 values in a block), ignore the inserts where idLocations matches an idLocations value flagged with AlwaysIgnore. Anyone have any suggestions? Cheers. -Stuart. Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
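
    INSERT ... VALUES cannot carry a WHERE clause, but the same effect is available through INSERT ... SELECT with the filter expressed as a join. A hedged sketch (the column list is trimmed to two DATA columns for brevity, and the literal rows stand in for the 250-row block):

        INSERT INTO tblData (idLocations, idDates, DATA1, DATA2)
        SELECT v.idLocations, v.idDates, v.DATA1, v.DATA2
        FROM (
              SELECT 17 AS idLocations, 3 AS idDates, 1.5 AS DATA1, 2.5 AS DATA2
              UNION ALL SELECT 18, 3, 0.0, 4.25
              -- ... the rest of the 250-row block ...
             ) AS v
        JOIN tblLocation L ON L.idLocations = v.idLocations
        WHERE L.AlwaysIgnore = FALSE   -- flagged locations never reach tblData
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1), DATA2 = VALUES(DATA2);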

    Read the article

  • What value does SenderID provide over SPF and DKIM?

    - by makerofthings7
    I understand that: SPF "binds" a message envelope to a set of permitted IP addresses; SenderID (with the default pra option) "binds" the message headers to a set of permitted IPs, in addition to the SPF logic; and DKIM "binds" the From address header (and any additional headers the sender chooses), plus the body, to a DNS domain name. I'm using the word "bind" above instead of "authorized" because it makes more sense (to me). Questions: If SPF already verifies the message FROM in the envelope, why is there a need to check the headers? When would verifying the envelope (SPF) need to differ from verifying the headers (SenderID)? If I'm already verifying the headers with DKIM, why do I need SenderID? Most large companies I've checked don't disable SenderID with an explicit record; eBay is a notable example of one that does. What is the rationale for disabling SenderID "pra" processing of outbound messages?

    Read the article

  • Understanding this error: apr_socket_recv: Connection reset by peer (104)

    - by matthewsteiner
    So, when I do some benchmarking with ApacheBench (ab) using large numbers of requests, sometimes in the middle of a test I get this error. I don't even know what it means. So how can I fix it? Or is it just something that will happen if the server gets too many hits anyway? The problem is, if I run 10,000 hits, it may all run perfectly; if I run it again, it will get to 4,000 and fail with:

        apr_socket_recv: Connection reset by peer (104)

    A little about my setup: I have nginx serving static requests and passing dynamic ones to Apache. The file in question is served from cache by nginx, so I guess it probably has to do with how nginx is handling the requests? Ideas?
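
    Two hedged observations: the error itself just means the server side (or something in between, e.g. a full listen backlog or a connection-tracking limit) closed a TCP connection mid-test, and ab aborts the whole run on the first such error unless told otherwise. Its -r flag makes it record the failure and carry on, which at least lets the run complete so the error rate can be measured (the URL below stands in for the tested one):

        # -r: do not exit on socket receive errors; failures are counted instead
        ab -r -n 10000 -c 50 http://example.com/cached-file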

    Read the article

  • Strange requests coming from Korean Site

    - by Jim Jeffers
    Lately I've been finding a lot of strange requests like this coming into my Rails app:

        Processing ApplicationController#index (for 189.30.242.61 at 2009-12-14 07:38:24) [GET]
          Parameters: {"_SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt???"}}
        ActionController::RoutingError (No route matches "/browse/brand/nike ///" with {:method=>:get}):

    It looks automated, given how many I get and the strange parameters they try to send. Is this something malicious, and if so, what should I do about it?

    Read the article

  • SMO missing DLL on clients

    - by Dale
    I've created an app that connects remotely to SQL Server 2008. SQL connections work, and all traditional oCommand.ExecuteNonQuery() calls work great! But my SMO class using

        server.ConnectionContext.ExecuteNonQuery(scriptfile);

    fails with an error about the missing batch-parsing DLL. I can't install the standalone utilities on client machines and then take them all off when done, as suggested by: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=228de03f-3b5a-428a-923f-58a033d316e1. Since my bulk inserts are large memory hogs containing complete tables, I wrote the tables out to temptable.sql files and used the SQLCMD utility, and later switched to SMO, and I have the same problem: neither of these can be leveraged on the clients' PCs. Any suggestions? Thanks :-)

    Read the article

  • Mini-ITX board for AM3 Athlon X4 600e processor.

    - by Kamil Zadora
    Hello, I am planning to build a PC to control a robotic platform that I am building (about 50% complete). I need more power than the Atom platform can provide, as the robot will need to do on-the-fly image processing to work as intended. I was considering the Athlon X4 600e, as it is rated at a 45 W maximum; underclocked it would probably go below 30-35 W. I am not aiming at very long battery life, but the 17 Ah, 12 V battery should keep it running for a few hours. My problem is the motherboard. I am space-limited, so I am looking for a nice mini-ITX AM3 motherboard to match the processor. It is hard to find tests of the power usage of the motherboards themselves (for example, using the same processor on different motherboards; tests are usually done the other way around). Could you suggest any motherboards, or which chipset to look for? Thank you in advance.

    Read the article

  • Can Acer Aspire Revo (Atom 330) be used with two monitors simultaneously?

    - by LeeD
    I'm so attracted to the Acer Revo for the price and the look. As long as I can work on two monitors simultaneously, I'll be happy. I'm not planning to do heavy video editing or gaming; occasional movie streaming would be fine. I will mainly use it for trading, lots of word processing, some photo editing, and keeping in touch with friends. Does anyone have experience using a Revo with two or more monitors? The spec says it has VGA and HDMI outputs, but an Acer salesperson over the phone told me it can support only one monitor..??

    Read the article

  • Schema Inheritance in BizTalk Server

    - by newbtdev
    Hi, I'm just wondering whether anyone has already tried something like schema inheritance with BizTalk schemas. I am using the WCF adapter and 'Consume Adapter Service' to generate a schema automatically. Instead of always generating a schema, and since most of my schemas are the same, I want something like a base schema. My scenario is testing flat-file debatching: for debatching I need to set the schema's maxOccurs property to '1', but for batch processing it should be '*'. Instead of creating two different schemas, I want to create a base schema, inherit from it, and change the maxOccurs property in the derived schema. Any help would be appreciated. Many thanks.

    Read the article

  • automatic IIS worker process recycle fails

    - by Sander Rijken
    The server is at its default configuration, which recycles the app pool every 1740 minutes. When this happens, the following message is logged:

        A worker process with process id of '1234' serving application pool 'XX' has requested a recycle because the worker process reached its allowed processing time limit.

    Directly after logging this message, the web site is unresponsive. The only way to get it back online is to run iisreset manually. Does anyone know a fix for this behaviour, other than turning the recycle feature off? Is it a known problem?
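
    A hedged thing to verify first: on IIS 6 a scheduled recycle is normally overlapped (the replacement worker process is started before the old one is shut down), so a site that hangs at recycle time may have overlapped rotation disabled on that pool. The relevant metabase property can be inspected with the stock adsutil.vbs script, with 'XX' standing in for the real pool name:

        REM FALSE (the default) means overlapped recycling is in effect
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs ^
          GET W3SVC/AppPools/XX/DisallowOverlappingRotation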

    Read the article

  • Explanation of WCF application life cycle in IIS 6 hosting environment.

    - by David Christiansen
    Hi all, and thanks for reading. I have a delay issue: my application takes a long time to start up when first called after an indeterminate period since the last call. The web application is a WCF service, and we are talking about a delay of ~18 seconds before the actual processing starts. Now, I believe I know how to reduce this delay, so that is not my question (it's more of a Stack Overflow matter anyway). My question is: can anyone explain why, despite my disabling worker-process shutdown and worker-process recycling, the application still 'winds down' after an indeterminate period of inactivity? To understand this I need to know more about the inner workings of WCF services hosted in IIS. I fully expect there to be a straightforward answer to this. Thank you very much for any help you may offer, DC
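
    One hedged possibility: besides the worker-process timers, ASP.NET hosts the service in an AppDomain with its own idle shutdown, and the ~18 s is the AppDomain and ServiceHost spinning back up on the next request. A common stop-gap, treating the symptom rather than the cause, is a scheduled keep-alive that pings the service more often than any idle timer; the endpoint URL below is hypothetical:

        ' keepalive.vbs - request the service so the host never goes idle.
        ' Schedule with, e.g.:
        '   schtasks /create /sc minute /mo 5 /tn WcfKeepAlive
        '            /tr "cscript //nologo C:\scripts\keepalive.vbs"
        Set http = CreateObject("MSXML2.XMLHTTP")
        http.Open "GET", "http://localhost/MyApp/MyService.svc", False
        http.Send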

    Read the article

  • Securely wiping a file on a tmpfs

    - by Nanzikambe
    I have a script that decrypts some data to a tmpfs; the directory is secured (permissions), the machine's swap is encrypted (random key on boot), and when the script is done it does a 35-pass (Peter Gutmann) wipe of the cleartext on the tmpfs. I do this because I'm aware that wiping files on a journaling file system is insecure: data may be recovered. For discussion, here are the relevant bits extracted:

        # make the tmpfs
        mkdir /mnt/tmpfs
        chmod 0700 /mnt/tmpfs
        mount -t tmpfs -o size=1M tmpfs /mnt/tmpfs
        cd /mnt/tmpfs

        # decrypt the data
        gpg -o - <crypted_input_file> | \
          tar -xjpf -

        # do processing stuff

        # wipe contents
        find . -type f -exec bcwipe -I {} ';'

        # nuke the tmpfs
        cd ..
        umount -f /mnt/tmpfs
        rm -fR /mnt/tmpfs

    So, my question: assuming for the moment that nobody is able to read the cleartext in the tmpfs while it exists (I use umask to set the cleartext to 0600), is there any way a trace of the cleartext could remain, either in memory or on disk, after the snippet above completes?

    Read the article

  • Database server hardware components (order of importance): CPU speed vs CPU cache vs RAM vs disk

    - by nulltorpedo
    I am new to the database world and would like to know which hardware specs are crucial for database performance. I have searched the internet and found this so far (in order of decreasing importance):

        1) Hard disk: basically, get an SSD (far more IOPS than spinning disks).
        2) Memory: get as much as you can afford.
        3) CPU: for the same money spent, prefer a larger cache size over speed.

    Are these findings sensible? EDIT: I would like to focus on CPU speed vs CPU cache size. EDIT 2: The database is used to store some combination of ints and int arrays, with a few text fields. There are a lot of SELECT queries looking for existing entries; if an entry is not found, it is inserted. I would say most of the processing is trying to find a match across a table with 200 columns and 20k rows. The INSERT statements are very few. EDIT 3: Also, we have a lot of views (basically SELECT queries).

    Read the article

  • How to send EML data in chunks to Google Apps Mail using Google API ver 2?

    - by Preeti
    Hi, I am migrating EML mails to Google Apps. When I try to migrate an EML file with two attachments (2.1 MB and 1.96 MB), it throws the exception: "The request was aborted: The request was canceled." I am using the code below:

        MailItemEntry[] entries = new MailItemEntry[1];
        String msg = File.ReadAllText(EmlPath);
        entries[0] = new MailItemEntry();
        entries[0].Rfc822Msg = new Rfc822MsgElement(msg);
        ........
        MailItemFeed feed = mailItemService.Batch(domain, UserName, entries);

    I think sending the data in chunks might resolve this issue. So, how can I send this EML data in chunks to Google Apps? Thanks.

    Read the article

  • How to check for an existing executable before running it in a post-build event in VS2008?

    - by wtaniguchi
    Hey all, I'm trying to use SubWCRev to get the current revision number of our SVN repository and put it in a file so I can show it in the UI. As I'm working with a web app, I use the following post-build command line:

        "SubWCRev.exe" "$(SolutionDir)." "$(ProjectDir)Content\js\revnumber.js.tpl" "$(ProjectDir)Content\js\revnumber.js"

    It works great, but now I want to make sure SubWCRev exists before running it, so the post-build step can be skipped when a fellow developer is not running TortoiseSVN. I tried a few batch snippets here but couldn't figure this out. Any ideas? Thanks!
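
    A hedged sketch of one way to do it: a post-build event is just a batch script, so an if exist test against the usual TortoiseSVN install path (an assumption; adjust if SubWCRev.exe lives elsewhere or is resolved via PATH) lets the step degrade gracefully:

        if exist "%ProgramFiles%\TortoiseSVN\bin\SubWCRev.exe" (
          "%ProgramFiles%\TortoiseSVN\bin\SubWCRev.exe" "$(SolutionDir)." "$(ProjectDir)Content\js\revnumber.js.tpl" "$(ProjectDir)Content\js\revnumber.js"
        ) else (
          echo SubWCRev.exe not found - skipping revision stamping.
        )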

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries depend on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like:

        db.users[32] = something

    An array of 500K users is not that big an effort for RAM. The pros are: no problematic asynchronicity (instant results); easy export/import; dealing with the database like a native object, literally. P.S. and considerations: would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file or files. There is only ONE application/process accessing this data, and the code is executed line by line, so please don't elaborate about locking. Please don't answer with database recommendations; the question is why to use an external DB over a native array/object. I have experience with a few databases, so that's not the issue. What I am building is a client/gateway/server(s) game. The gateway deals with all user data: processing, authenticating, writing statistics, etc. No other part of the software needs direct access to this data/database.

    Read the article
