Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • Is there an application I can run that will tell me how fast my computer is running?

    - by Robert Hume
    At work I log into a virtual Windows machine. I'm told it runs as fast as a PC, but I'm skeptical. Is there an application I can run on the machine that will tell me how fast it is actually running? It would be nice if it ran on Windows and on Mac. Updating with more details -- I was asked "why does it matter", so here's why: it matters because I'm a programmer and I need as much speed (CPU and memory) as possible to do my work. IMO the virtual machine is noticeably slower than a basic $800 PC would be, but I need a way of proving it. Websites like Bandwidthtest.com can show me my internet speed, so I'm wondering if there's an app that can test my computer's speed.
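
    (Editor's sketch, not from the thread: for a quick-and-dirty comparison you can time a fixed workload yourself. The Python script below runs unchanged on Windows and Mac; the workloads and iteration counts are arbitrary choices, not a standard benchmark. Run it on the VM and on a physical PC and compare the wall-clock times.)

        import time

        def timed(label, fn):
            # Wall-clock time for a fixed workload; lower is faster.
            start = time.perf_counter()
            fn()
            print(f"{label}: {time.perf_counter() - start:.3f} s")

        # CPU-bound: arbitrary integer arithmetic
        timed("cpu   ", lambda: sum(i * i for i in range(2_000_000)))

        # Memory-bound: allocate and traverse a large list
        timed("memory", lambda: sum(list(range(5_000_000))))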

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables chain. ipt_recent would be awesome for doing this, and it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
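
    (Editor's note, not from the thread: where ipset is available it loads a whole table in one shot via its restore format, much like pf's tables, so thousands of addresses become a single iptables rule. A rough sketch of generating that input from a plain list of addresses; the set name, file names and the hash:ip set type are illustrative assumptions, and the exact iptables match syntax varies between ipset versions.)

        # Build input for `ipset restore` from a file of IPv4 addresses, one per line.
        SET_NAME = "blocklist"  # illustrative name

        with open("ips.txt") as src, open("blocklist.restore", "w") as dst:
            dst.write(f"create {SET_NAME} hash:ip\n")
            for line in src:
                ip = line.strip()
                if ip:
                    dst.write(f"add {SET_NAME} {ip}\n")

        # Then, from the shell (one rule matches the whole set):
        #   ipset restore < blocklist.restore
        #   iptables -I INPUT -m set --match-set blocklist src -j DROP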

    Read the article

  • Linux running games in another x-session

    - by mnml
    I have been trying to optimize my settings to the maximum lately, and someone told me that running a game in another X session with another user would improve performance. It would also allow me to kill the game from the other X session at any time, without having to restart the computer when it gets stuck. Today I tried to do that in a Xephyr "screen" and got a tenth of the fps in glxgears; I haven't tried it with a real game run by Wine yet. Just looking for some advice on this.

    Read the article

  • address-chunk: separate data stored in one column into three (street, postal code, town)

    - by zero
    Hello dear community, hello dear friends from all over the planet! First of all: this is a great forum and a great place to share ideas. Today I have the following thing to discuss: I want to separate data that is stored in one column of a Calc spreadsheet. Each entry combines three categories (street, postal code and town) in a single column, and I want to split them into three columns: street, postal code, town. Note that there is a comma between the street and the rest of each entry, and the postal code always has four digits; perhaps we can use that as a marker to help separate the data? There are some exceptions, e.g. towns of two words combined with a "-", or of several words with no joining character at all, as in:

        Max-Bader-Platz 1, 5620 Schwarzach im Pongau
        Pestalozzistraße 4, 9990 Nussdorf-Debant
        Schulstraße 4, 5162 Obertrum am See

    But I guess that should be no problem. What do you think? I am very interested in your opinion! Here is a snippet of the dataset, all stored in one column. Goal: separate the data into three columns:

        Schulweg 6, 9871 Seeboden
        Khevenhüllerstraße 45, 4861 Schörfling
        Franz Xaver Rennstr.18, 6460 Imst
        Schulstraße 4, 5162 Obertrum am See
        Schulweg 6, 7432 Oberschützen
        Pestalozzistraße 4, 9990 Nussdorf-Debant
        Niederndorf bei Kufstein 53c, 6342 Niederndorf bei Kufstein
        Hauptschulstraße 18, 2183 Neusiedl an der Zaya
        Seeweg 14, 5202 Neumarkt am Wallersee
        Europaplatz 1, 8820 Neumarkt in Steiermark
        Schulstraße 7, 4212 Neumarkt im Mühlkreis
        Schulstraße 20, 4720 Neumarkt im Hausruckkreis
        Bahnhofstr. 10, 4872 Neukirchen an der Vöckla
        Schulstraße 5b, 4780 Schärding
        Reitbergstraße 2, 4311 Schwertberg
        Europaplatz 1, 2320 Schwechat
        Am Schulberg 5, 3931 Schweiggers
        Waidach 8, 6130 Schwaz
        Waidach 8, 6130 Schwaz
        Max-Bader-Platz 1, 5620 Schwarzach im Pongau
        Markt 29, 2662 Schwarzau im Gebirge
        Hofsteigstraße 68, 6858 Schwarzach
        Gmundner Straße 7, 4690 Schwanenstadt
        Mühlfeldstraße 1, 4690 Schwanenstadt
        Mainsdorferstraße 18, 8541 Schwanberg
        Jakob Stemer-Weg 3, 6780 Schruns
        Obere Umfahrungsstraße 16, 2432 Schwadorf bei Wien
        Battloggstraße 54, 6780 Schruns
        Schloss-Straße 19, 5020 Salzburg
        Schillerplatz 2, 8280 Fürstenfeld
        Erzherzog-Johann-Str. 400, 8970 Schladming
        Schulgasse 261, 8811 Scheifling

    I look forward to hearing from you! Regards
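
    (Editor's sketch, not from the thread: the comma plus the four-digit postal code really can serve as the marker. One way to do the split outside Calc is a regular expression over the exported column; file handling is omitted, sample rows are taken from the data above, and rows that don't match the pattern are flagged for manual review. Within Calc itself, Data > Text to Columns can do the first split on the comma.)

        import re

        # street, then a comma, then a 4-digit postal code, then the town
        PATTERN = re.compile(r"^(?P<street>.+?),\s*(?P<postal>\d{4})\s+(?P<town>.+)$")

        rows = [
            "Max-Bader-Platz 1, 5620 Schwarzach im Pongau",
            "Pestalozzistraße 4, 9990 Nussdorf-Debant",
            "Schulstraße 4, 5162 Obertrum am See",
        ]

        for row in rows:
            m = PATTERN.match(row)
            if m:
                print(m.group("street"), m.group("postal"), m.group("town"), sep=" | ")
            else:
                print("NO MATCH:", row)  # irregular row: review by hand

    Hyphenated and multi-word towns are handled automatically, since everything after the four digits lands in the town column.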

    Read the article

  • perf tuning for vmfs3 on RAID

    - by maruti
    Looking for recommendations for ESX4 with VMFS version 3: should the RAID-5 stripe size match the VMFS block size (64K, 128K, etc.)? "Adaptive read ahead, write-back" is enabled on the PERC 6i. 90% of the VMs on the server are Windows (2008, 2003, Vista, SQL 2005, etc.). I have read that smaller stripes are good for writes and larger ones for reads; since this is a virtual environment, I'm not sure what's best.

    Read the article

  • Hardware profiling [closed]

    - by mgroves
    I'd like to upgrade my computer so that it's faster when editing/rendering video. I'm thinking of first getting a faster hard drive, but I'd like to be able to run some sort of profiling software to tell me what the bottleneck is when rendering video. Any suggestions for software that can do this, preferably on Windows XP and preferably for free?

    Read the article

  • Why does dstat fail with the --tcp option?

    - by hugemeow
    Why does the --tcp option fail? Should I install some package, and if so, which one? (This question is not the same as http://stackoverflow.com/questions/10475991/tcp-sockets-double-messages.)

        dstat --tcp
        Module tcp has problems. (Cannot open file /proc/net/tcp6.)
        None of the stats you selected are available.

    What is the problem with the tcp module, and how do I track it down? What should I do to fix this issue?
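
    (Editor's note, not from the thread: dstat's tcp plugin simply reads /proc/net/tcp and /proc/net/tcp6 and counts the hex state column, so this error usually just means the kernel has no IPv6 support loaded. A sketch of the same count done by hand, skipping tcp6 when it is absent:)

        from collections import Counter

        # Hex state codes from the kernel's TCP state table
        STATES = {"01": "established", "06": "time_wait", "0A": "listen"}

        counts = Counter()
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            try:
                with open(path) as f:
                    next(f)  # skip the header line
                    for line in f:
                        st = line.split()[3]  # 4th column is the state, in hex
                        counts[STATES.get(st, "other")] += 1
            except FileNotFoundError:
                print(f"{path} missing (IPv6 not enabled?) - skipping")

        print(dict(counts))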

    Read the article

  • CentOS running Apache Tomcat keeps getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with Java version 1.7.0_21. We were getting a lot of "too many open files" errors, so I did some research. The consensus was that it was to do with the number of open files. So I did the following:

    1. Increased max files in /etc/security/limits.conf:

           soft nofile 100000
           hard nofile 100000

    2. Rebooted the server.
    3. Checked the limits were valid for the user which was to run the process:

           [app_admin@xxx ~]$ ulimit -Hn
           100000
           [app_admin@xxx ~]$ ulimit -Sn
           100000

    4. Monitored open files on the server using the lsof command.

    What I observed was that when the total open files reached circa 13000, with Tomcat holding around 4500 of them, the error reappeared. I am confused: I thought this would have resolved the problem, but clearly I don't fully understand the root cause, or how to set the parameter correctly. To (maybe) help: I have not modified Tomcat's server.xml (although I'm tempted); I don't want to start fiddling with that and make things worse. I'm more than happy to share more information if someone can give me some hints on where to start looking.
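
    (Editor's sketch, not from the thread: one more check worth adding to the list. The limit that matters is the one the running Tomcat process actually inherited, which can differ from your login shell's ulimit when the service is started by init rather than through a PAM session. /proc/<pid>/limits tells the truth for a live process; the PID below is a placeholder.)

        import os

        PID = 12345  # placeholder: substitute Tomcat's process id

        # The limit the process actually inherited (may differ from your shell's ulimit)
        with open(f"/proc/{PID}/limits") as f:
            for line in f:
                if "open files" in line:
                    print(line.rstrip())

        # How many descriptors it currently holds
        print("open fds:", len(os.listdir(f"/proc/{PID}/fd")))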

    Read the article

  • How to import Evolution application data from home folder backup

    - by Wolter Hellmund
    I had many folders, filters and mail in general in Evolution on my previous Ubuntu install, which I thought would be available to me since I had backed up my home directory. I have now copied the Evolution-related folders to my new home directory, but Evolution is showing neither the folders I had nor any of the filters. To be more precise, I have copied the mail folder to ~/.config/evolution/, but that hasn't changed anything.

    Read the article

  • HP Probook 4530s: great specs, but lagging. Hard drive?

    - by Mark
    I have this laptop, which has an i3 processor, 4 GB of memory and a 7200 rpm hard drive, so there is nothing wrong with the specs. Even when I have no applications open, simply closing and opening windows lags, as does opening the start menu or dragging icons across the desktop; sometimes even the cursor lags. So I checked the Resource Monitor, and the processes generating disk activity are svchost, Avast (my antivirus, but not much) and System (PID 4), which is using a huge chunk. The total disk activity fluctuates between 50% and 100%.

    Read the article

  • Bad idea to keep htop running?

    - by Michael T. Smith
    I'm now monitoring multiple servers (3) and in the coming weeks that'll increase (towards 5 or 6). I've been keeping three terminal windows open running htop via SSH and I'm now wondering if there are any downsides to having a connection constantly open to production servers?

    Read the article

  • Brasero Burns Data, Not Time - or Piles of Discs

    Linux Insider: "There are a lot of CD/DVD burners for Linux out there, but Brasero stands out as a straightforward, easy-to-use burner that has some nice extra features but won't make you relearn a lot of complex commands if you only use it occasionally. One nicety is the option to start a burn project and finish it much later, even if you're not using a rewritable disc."

    Read the article

  • Delete Data From Your Hard Drive With DBAN

    This week, I felt the need to republish an article that my web designer wrote a few years ago. It was written about securely erasing files from a hard drive to make them unrecoverable. This is very import... [Author: Chris Holgate - Computers and Internet - April 09, 2010]

    Read the article

  • Card deck and sparse matrix interview questions

    - by MrDatabase
    I just had a technical phone screen with a start-up. Here are the technical questions I was asked... and my answers. What do you think of these answers? Feel free to post better ones :-)

    Question 1: how would you represent a standard 52-card deck in (basically any language)? How would you shuffle the deck?

    Answer: use an array containing a "Card" struct or class. Each instance of Card has some unique identifier: either its position in the array or a unique integer member variable in the range [0, 51]. Shuffle the cards by traversing the array once from index zero to index 51, randomly swapping the ith card with "another card" (I didn't remember exactly how this shuffle algorithm works). Watch out for using the same probability for each card... that's a gotcha in this algorithm. I mentioned the algorithm is from Programming Pearls.

    Question 2: how would you represent a large sparse matrix? The matrix can be very large, like 1000x1000, but only a relatively small number (~20) of the entries are non-zero.

    Answer: condense the array into a list of the non-zero entries. For a given entry (i,j) in the array, "map" (i,j) to a single integer k, then use k as a key into a dictionary or hashtable. For the 1000x1000 sparse array, map (i,j) to k using something like f(i,j) = i + j * 1001, where 1001 is just one plus the maximum of all i and j. I didn't recall exactly how this mapping worked, but the interviewer got the idea (I think).

    Are these good answers? I'm wondering because after I finished the second question the interviewer said the dreaded "well, that's all the questions I have for now." Cheers!
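
    (Editor's sketch of both answers in Python, for comparison. The shuffle the poster half-remembered is Fisher-Yates: swap index i with a uniformly random index in [i, n-1], which yields every permutation with equal probability; swapping with an index in [0, n-1] instead is the classic biased variant, and that is the "gotcha" mentioned above. For the sparse matrix, a dictionary keyed on the (i, j) tuple gives the same effect as the i + j * 1001 packing without the arithmetic.)

        import random

        # Question 1: a 52-card deck and a Fisher-Yates shuffle
        deck = [(rank, suit)
                for suit in ("clubs", "diamonds", "hearts", "spades")
                for rank in range(1, 14)]

        def fisher_yates(cards):
            for i in range(len(cards) - 1):
                # swap i with a random index in [i, n-1]; using [0, n-1] here
                # is the classic mistake that biases the shuffle
                j = random.randint(i, len(cards) - 1)
                cards[i], cards[j] = cards[j], cards[i]

        fisher_yates(deck)

        # Question 2: a 1000x1000 matrix with ~20 non-zero entries.
        # A dict keyed on the (row, col) tuple replaces the manual
        # "map (i, j) to k" packing; missing keys read as zero.
        sparse = {(3, 997): 1.5, (500, 500): -2.0}

        def get(matrix, i, j):
            return matrix.get((i, j), 0.0)

        print(get(sparse, 3, 997), get(sparse, 0, 0))  # 1.5 0.0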

    Read the article

  • Goal Tracking data seems to be inaccurate?

    - by Khuram Malik
    I set up some Goal Tracking about one week ago, with multiple goals in one set. The goal itself was the "send" button being pressed on the callback form (I did that by pushing a pageview to Google Analytics every time the send button is pressed). For each goal, I listed the first step as a required step; so, for example, the ILR page was step 1 and set as required, and the goal was "/CallbackFormFilled". Looking at the stats a week later, I'm getting some very inflated numbers, especially when comparing them to my manually filled Excel spreadsheet, and I'm struggling to understand the cause of this behaviour. (I'm unable to attach screenshots, since my StackExchange account for this site is brand new.)

    My own thoughts: maybe it's because I have set up multiple goals with the same end goal URL, but I thought that was a valid setup, since I want to track multiple routes, so to speak. I've disabled all other goals for now to confirm this, but I'm waiting for stats to come in as I write this. I also wonder if the contact form I'm using in WordPress is causing a problem, but I've simply added one JavaScript line on the send button that pushes a pageview, so I'm not sure that should cause an issue. Here is a link to setting up analytics on this contact form plugin in WordPress, for reference (see the JavaScript action hook section): http://ideasilo.wordpress.com/2009/05/31/contact-form-7-1-10/

    Read the article

  • Disable Offline Files (mobsync.exe) on Windows 7 Home

    - by Synetech
    This morning I was watching the CPU graph of a Windows 7 Home laptop and noticed that every few seconds the CPU would spike several percent. I watched the processes and determined that the culprit was mobsync.exe (Offline Files). I tried the usual steps that Googling turns up, but clicking the Manage Offline Files link to bring up the Offline Files dialog and clicking Disable Sync does not work, because the dialog will not display. This makes sense, since everything I have read indicates that Offline Files is not even included/supported in the Home version, so I am at a loss as to why it is running at all, let alone why it is sucking up CPU cycles. (My best guess is that it was started when they pressed Win+X to access the Mobility Center.) Of course I can just kill mobsync, but it could always come back. How/why would mobsync be running on a Home version, and how can it be disabled (the Group Policy editor, of course, not being available on a Home version)?

    Read the article

  • SQL Server 2008 CPU usage goes to 100% - troubleshooting help needed?

    - by Ali
    I have a fairly powerful database server with SQL Server 2008 R2 installed. There is only one database on it, which is being accessed from 2 servers (around 5 or 6 applications). The problem is that as soon as the applications start pointing at the database, the system's CPU usage goes up to 100%, with sqlserver itself using 95+%. I have checked Profiler; there aren't any heavy queries running. I have checked active connections; there are hardly 150. Still, CPU usage is around 100%, applications are experiencing slow responses, and connections to the database server are getting refused. Database gurus, I really need some ideas here. Thanks.
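
    (Editor's sketch, not from the thread: Profiler only shows statements as they execute, whereas the plan cache remembers which ones have burned the most CPU since the instance started. Below, a query against sys.dm_exec_query_stats, wrapped in Python via pyodbc; the connection string is a placeholder and the account needs VIEW SERVER STATE.)

        import pyodbc

        # Connection string values are placeholders
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes"
        )

        # Top statements by accumulated CPU (worker time is in microseconds)
        TOP_CPU = """
        SELECT TOP 10
               qs.total_worker_time / 1000 AS total_cpu_ms,
               qs.execution_count,
               SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_worker_time DESC;
        """

        for row in conn.cursor().execute(TOP_CPU):
            print(row.total_cpu_ms, row.execution_count, row.query_text)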

    Read the article

  • New Oracle Database Performance and Tuning Specialization version available to all partners!

    - by Catalin Teodor
    Any partner working with the Oracle database should take advantage of the new Oracle Database Performance and Tuning specialization version. Have partners review the criteria and see what’s new in this specialization solution update. Partners should plan to take the new version as the Oracle Database 11g Performance Tuning specialization becomes non-qualifying as of August 1, 2015.

    Read the article

  • ft_stopword_file not picked up

    - by Alex Holsgrove
    I have a VPS server with a company called Webfusion. I want to remove some or all of the FULLTEXT stopwords, because some specific words need to be searchable in my DB content. I opened /etc/mysql/my.cnf and added the line ft_stopword_file="". I restarted the mysql service, ran a REPAIR TABLE and then tried my MATCH query, with no success. I ran SHOW VARIABLES LIKE 'ft_%' and it simply shows (built-in) next to the stopword file. I am running WAMP on my workstation, and whilst I realise this isn't configured the same as a commercial VPS, the above method worked just fine there. Could someone please offer some guidance?
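
    (Editor's note, not from the thread, with two things worth verifying: the line only takes effect if it sits under the [mysqld] section of the my.cnf the server actually reads, and after changing the stopword list the FULLTEXT index has to be rebuilt, e.g. with REPAIR TABLE ... QUICK on MyISAM; a plain restart is not enough. A quick check via the mysql-connector-python package, with placeholder credentials and table name:)

        import mysql.connector  # assumes the mysql-connector-python package

        conn = mysql.connector.connect(user="root", password="secret",
                                       host="localhost")  # placeholders
        cur = conn.cursor()

        # "(built-in)" here means the override was not picked up at all.
        cur.execute("SHOW VARIABLES LIKE 'ft_stopword_file'")
        print(cur.fetchone())

        # Once the variable reads back correctly, rebuild the FULLTEXT index:
        cur.execute("REPAIR TABLE mydb.articles QUICK")  # placeholder table
        print(cur.fetchall())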

    Read the article

  • Torque2D, Class vs Datablock

    - by Max Kielland
    I'm scripting my first game with Torque2D and have not fully understood the difference between a "Class" and a Datablock. To me it seems that a Datablock is similar to a struct in C/C++ or a record in Pascal. If I create Datablocks with new, are they instantiated in the same way as a "Class"? I have a large TileMap and need to attach some information to each tile. I was thinking of using a Datablock, as a struct, to attach this information to the tile's CustomData property. My two questions are: what is a Datablock, and should I use a Datablock or a "Class" for this tile information?

    Read the article

  • Optimize php-fpm and Varnish for a powerful server

    - by Jim
    My setup is an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that after 55 hits per second (according to blitz.io) Varnish starts giving out timeouts. CPU usage at this time is hardly 1%, and free memory at all times remains 10 GB+. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts, but after that the CPU usage goes to 100% and it stops responding. Can you help me optimize it to handle more? As I understand it, nginx has nothing to do here, so I don't include its config.

    php-fpm config:

        listen = /tmp/php5-fpm.sock
        listen.allowed_clients = 127.0.0.1
        user = nginx
        group = nginx
        pm = dynamic
        pm.max_children = 150
        pm.start_servers = 7
        pm.min_spare_servers = 2
        pm.max_spare_servers = 15
        pm.max_requests = 500
        slowlog = /var/log/php-fpm/www-slow.log
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on

    APC config:

        extension = apc.so
        apc.enabled=1
        apc.shm_size=512MB
        apc.num_files_hint=0
        apc.user_entries_hint=0
        apc.ttl=7200
        apc.use_request_time=1
        apc.user_ttl=7200
        apc.gc_ttl=3600
        apc.cache_by_default=1
        apc.filters
        apc.mmap_file_mask=/tmp/apc.XXXXXX
        apc.file_update_protection=2
        apc.enable_cli=0
        apc.max_file_size=1M
        apc.stat=1
        apc.stat_ctime=0
        apc.canonicalize=0
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix=upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.include_once_override=0
        apc.lazy_classes=0
        apc.lazy_functions=0
        apc.coredump_unmap=0
        apc.file_md5=0
        apc.preload_path

    Varnish VCL:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .connect_timeout = 6s;
            .first_byte_timeout = 6s;
            .between_bytes_timeout = 60s;
        }

        acl purgehosts {
            "localhost";
            "127.0.0.1";
        }

        # Called after a document has been successfully retrieved from the backend.
        sub vcl_fetch {
            # Uncomment to make the default cache "time to live" 5 minutes; handy,
            # but it may cache stale pages unless purged. (TODO)
            # By default Varnish will use the headers sent to it by Apache (the
            # backend server) to figure out the correct TTL.
            # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
            set beresp.ttl = 24h;

            # Strip cookies for static files and set a long cache expiry time.
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset beresp.http.set-cookie;
                set beresp.ttl = 24h;
            }

            # If WordPress cookies found then page is not cacheable
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set beresp.cacheable = false;  # versions less than 3
                # beresp.ttl > 0 is cacheable, so 0 will not be cached
                set beresp.ttl = 0s;
            } else {
                # set beresp.cacheable = true;
                set beresp.ttl = 24h;  # cache for 24hrs
            }

            # if ttl is not > 0 seconds then the object is not cacheable
            if (!beresp.ttl > 0s) {
                # set beresp.http.X-Cacheable = "NO:Not Cacheable";
            } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # You don't wish to cache content for logged in users
                set beresp.http.X-Cacheable = "NO:Got Session";
                return(hit_for_pass);  # previously just pass, but changed in v3+
            } else if (beresp.http.Cache-Control ~ "private") {
                # You are respecting the Cache-Control=private header from the backend
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                return(hit_for_pass);
            } else if (beresp.ttl < 1s) {
                # You are extending the lifetime of the object artificially
                set beresp.ttl = 300s;
                set beresp.grace = 300s;
                set beresp.http.X-Cacheable = "YES:Forced";
            } else {
                # Varnish determined the object was cacheable
                set beresp.http.X-Cacheable = "YES";
                if (beresp.status == 404 || beresp.status >= 500) {
                    set beresp.ttl = 0s;
                }
                # Deliver the content
                return(deliver);
            }
        }

        sub vcl_hash {
            # Each cached page has to be identified by a key that unlocks it.
            # Add the browser cookie only if a WordPress cookie found.
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set req.hash += req.http.Cookie;
                hash_data(req.http.Cookie);
            }
        }

        # vcl_recv is called whenever a request is received
        sub vcl_recv {
            # Remove ?ver=xxxxx strings from urls so css and js files are cached.
            # Watch out when upgrading WordPress: need to restart Varnish or flush the cache.
            set req.url = regsub(req.url, "\?ver=.*$", "");
            # Remove "replytocom" from requests to make caching better.
            set req.url = regsub(req.url, "\?replytocom=.*$", "");

            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;

            # Exclude this site because it breaks if cached
            if (req.http.host == "sr.ituts.gr") {
                return(pass);
            }

            # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
            set req.grace = 120s;

            # Strip cookies for static files:
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset req.http.Cookie;
                return(lookup);
            }

            # Remove has_js and Google Analytics __* cookies.
            set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
            # Remove a ";" prefix, if present.
            set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
            # Remove empty cookies.
            if (req.http.Cookie ~ "^\s*$") {
                unset req.http.Cookie;
            }

            if (req.request == "PURGE") {
                if (!client.ip ~ purgehosts) {
                    error 405 "Not allowed.";
                }
                # in previous versions ban() was purge()
                ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
                error 200 "Purged.";
            }

            # Pass anything other than GET and HEAD directly.
            if (req.request != "GET" && req.request != "HEAD") {
                return(pass);
            }
            /* We only deal with GET and HEAD by default */

            # Remove the comments cookie to make caching better.
            set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

            # Never cache the admin pages, the server-status page, or your feed?
            # You may want to... I don't.
            if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
                return(pipe);
            }

            # Don't cache authenticated sessions
            if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
                return(lookup);
            }

            # Don't cache ajax requests
            if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
                return(pass);
            }

            return(lookup);
        }

    Varnish daemon options:

        DAEMON_OPTS="-a :80 \
                     -T 127.0.0.1:6082 \
                     -f /etc/varnish/ituts.vcl \
                     -u varnish -g varnish \
                     -S /etc/varnish/secret \
                     -p thread_pool_add_delay=2 \
                     -p thread_pools=8 \
                     -p thread_pool_min=100 \
                     -p thread_pool_max=1000 \
                     -p session_linger=50 \
                     -p session_max=150000 \
                     -p sess_workspace=262144 \
                     -s malloc,5G"

    I'm not sure where to start: should I first optimize php-fpm and then move on to Varnish, or is php-fpm already at its maximum, meaning I should start looking for the problem in Varnish?

    Read the article
