Search Results

Search found 24353 results on 975 pages for 'test coverage'.

Page 425/975

  • Reinitialize GPU on RADEON HD 7970 under linux

    - by user1610662
    I have got a RADEON HD 7970 Sapphire on Debian Squeeze. Since I often use it to run GPU code, the performance sometimes drops sharply, as I can see when I test it with the "glxgears" tool (I get only 20 FPS in fullscreen). So I would like to be able to reinitialize the GPU without rebooting the system. I know of the "clinfo" tool, which displays the features of the graphics card. Is there a tool that allows me to do this reinitialization?
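    A hedged sketch of one approach that is sometimes suggested: unloading and reloading the GPU kernel module resets the card, but it only works while nothing (X, a running compute job) holds the device open, so treat this as a rough idea rather than a tested recipe. The driver and display manager names below are assumptions:

        # stop the display manager and any running GPU jobs first
        /etc/init.d/gdm3 stop        # display manager name is an assumption
        modprobe -r fglrx            # or "radeon" if the free driver is loaded
        modprobe fglrx               # reload the driver, reinitializing the GPU
        /etc/init.d/gdm3 start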

    Read the article

  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    Error log entry:

        *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

    nginx configuration:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6
            server_name .site.com;
            root /var/www/site;
            error_page 404 /404.php;
            access_log /var/log/nginx/site.access.log;
            index index.html index.php;

            if ($http_host != "www.site.com") {
                rewrite ^ http://www.site.com$request_uri permanent;
            }

            location ~* \.php$ {
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 4k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            }

            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            location ~ /(libraries|setup/frames|setup/libs) {
                deny all;
                return 404;
            }

            location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
                alias /var/www/site/images/missing.gif;
                # I need to modify this so it shows missing.gif only for missing files;
                # right now it is shown for all files.
            }

            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                access_log off;
                expires 20d;
            }

            location /user_uploads/ {
                location ~ .*\.(php)?$ {
                    deny all;
                }
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The php-fpm config is the default and has not been touched. The problem is a little strange to me: error pages show "File not found" only for .php files; other missing files correctly end up at 404.php.

        site.com/test      -> calls 404.php
        site.com/test.php  -> File not found

    I keep searching and making changes, but nothing has solved the problem so far.
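    A hedged reading of the symptom: a missing .php URL still matches the "location ~* \.php$" block and is passed to php-fpm, which then answers "Primary script unknown" ("File not found") before nginx ever gets to raise its own 404. A commonly suggested fix is to have nginx check for the file first, e.g. (abbreviated; the buffer/timeout directives from the original block stay as they are):

        location ~* \.php$ {
            try_files $uri =404;   # let nginx return 404 (and thus 404.php) for missing scripts
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:9000;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        }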

    Read the article

  • Ghost/Acronis/Clonezilla Live Image Creation (Without Rebooting)

    - by user39621
    I know Ghost and Clonezilla aren't able to build images of a system while the system is running (without rebooting). I haven't checked Acronis, but I don't sympathize with proprietary solutions. Question: is there a software solution that can build a "live" image? I would appreciate answers, since I'm one step away from building a Clonezilla test environment and this will help with my decision. Thank you.
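    A hedged sketch of one technique that is sometimes used instead of a dedicated "live imaging" product: if the filesystem sits on LVM, a snapshot gives a consistent, frozen view that can be imaged while the system keeps running. The volume group, logical volume and target names here are made-up examples:

        # create a 5 GB snapshot of the root LV (assumes vg0/root exists and the VG has free extents)
        lvcreate --size 5G --snapshot --name root_snap /dev/vg0/root
        # image the frozen snapshot while the live system keeps writing to vg0/root
        dd if=/dev/vg0/root_snap of=/mnt/backup/root.img bs=4M
        # drop the snapshot when done
        lvremove -f /dev/vg0/root_snap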

    Read the article

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is:

        Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos)
        Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
        Target machines: VMs in VirtualBox on Windows 7

    System provisioning works OK, but I have some problems. The first one is that cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, the target system continues to PXE boot after the reboot! The second problem is that the target system is not correctly configured! See my setup:

        cobbler system report --name=test
        Name : test
        TFTP Boot Files : {}
        Comment :
        Fetchable Files : {}
        Gateway : 192.168.0.1
        Hostname : testcob1.example.com
        Image :
        IPv6 Autoconfiguration : False
        IPv6 Default Device :
        Kernel Options : {}
        Kernel Options (Post Install) : {}
        Kickstart : <<inherit>>
        Kickstart Metadata : {}
        LDAP Enabled : False
        LDAP Management Type : authconfig
        Management Classes : []
        Management Parameters : <<inherit>>
        Monit Enabled : False
        Name Servers : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path : []
        Netboot Enabled : False
        Owners : ['admin']
        Power Management Address :
        Power ID :
        Power Password :
        Power Management Type : ipmitool
        Power Username :
        Profile : RHEL-6.5-x86_64
        Proxy : <<inherit>>
        Red Hat Management Key : <<inherit>>
        Red Hat Management Server : <<inherit>>
        Repos Enabled : False
        Server Override : <<inherit>>
        Status : testing
        Template Files : {}
        Virt Auto Boot : <<inherit>>
        Virt CPUs : <<inherit>>
        Virt Disk Driver Type : <<inherit>>
        Virt File Size(GB) : <<inherit>>
        Virt Path : <<inherit>>
        Virt RAM (MB) : <<inherit>>
        Virt Type : <<inherit>>
        Interface ===== : eth0
        Bonding Opts :
        Bridge Opts :
        DHCP Tag :
        DNS Name :
        Master Interface :
        Interface Type :
        IP Address : 192.168.0.200
        IPv6 Address :
        IPv6 Default Gateway :
        IPv6 MTU :
        IPv6 Secondaries : []
        IPv6 Static Routes : []
        MAC Address :
        Management Interface : True
        MTU :
        Subnet Mask : 255.255.255.0
        Static : True
        Static Routes : []
        Virt Bridge :

    So, although I have set up the hostname and the network interface of the target system, after the installation the hostname is set to localhost.localdomain and eth0 is configured for DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted cobbler a couple of times, but the problems persist.
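    A hedged note on the pxe_just_once part (not a verified fix for this exact setup): pxe_just_once only takes effect when the freshly installed system calls back to the Cobbler server so that "Netboot Enabled" gets switched off, and the stock kickstart templates do this from %post. A rough sketch of that callback, with the server address as an assumption and "test" being the system name from the report above:

        %post
        # tell cobbler the install finished, so it disables netboot for this system
        wget -q -O /dev/null "http://192.168.0.1/cblr/svc/op/nopxe/system/test"
        %end

    If the kickstart in use never makes such a call (or cannot reach the server), the machine will keep PXE-booting; the same kickstart is also where the hostname and static network configuration would normally be written into the installed system.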

    Read the article

  • I have been told to accept one error with Memtest86+

    - by DustByte
    Bought a new computer back in August with 4x4 GB RAM. Had problems with the RAM. They sent me four new sticks, which also generated errors. Singled out four sticks (from the eight I now had) that didn't generate any errors. Discovered by coincidence a new RAM error last week (this time no BSOD). Contacted the company. According to them there have been issues with a bad stock from last summer, so I got two tested 8 GB sticks sent to me.

    Been running Memtest86+ over the weekend. After 20 hours I got an error (see attached photo). The test has now been running for 37 hours, but so far only this one error. I contacted the company where I bought the computer. They wrote back:

        I wouldn't worry about that one fail. We have had similar situations here whereby it passes numerous times but then fails once. We think it's an issue with memtest; after all, memory is faulty or it isn't, so you can't really have it pass a few times, fail the next time around and then pass again! Please trust me on this and continue with the memory we sent you, and if your problems continue we'll look at getting it replaced again.

    I gather from other forum posts that many people do not accept a single error. What could this single error signify: faulty RAM, or a glitch in the Memtest program (or something else)?

    Update: From the helpful comments below I conclude that an occasional (and rare) "random" error could occur and be acceptable, but repeated errors at the same address would indicate a malfunction. Memtest has now run for 45 hours and I still have only one error. For everyone's information, I will keep running the test. In less than two days I am going away for a month, and I will most likely leave Memtest running. As I do not have a UPS, there is a risk that a power outage will ruin the experiment. The computer is a desktop, so I cannot bring it with me (which would curiously have exposed it to more cosmic rays, as I will be flying ;)).

    Read the article

  • crontab: question about a special case of the dash character in the time field spec

    - by mdpc
    In the SuSE /etc/crontab the entry that runs the cron.{hourly,daily,monthly,weekly} scripts is coded as:

        -*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

    Notice that the very first character of the specification is a dash character (-), and this is NOT a typo. Can somebody explain what the time spec '-*/15' means? BTW, the job seems to be running fine. Thanks
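    A hedged aside, since the dash is easy to misread as part of the schedule: in the cron shipped with SuSE (and in cronie) a leading '-' on a crontab line is described as suppressing syslog logging for that job, while the time spec that follows is just the ordinary */15 (every 15 minutes). A sketch of the two variants, assuming that behaviour:

        # logged to syslog on every run (every 15 minutes)
        */15 * * * *   root  /usr/lib/cron/run-crons >/dev/null 2>&1

        # identical schedule, but the leading '-' asks cron not to log the run
        -*/15 * * * *  root  /usr/lib/cron/run-crons >/dev/null 2>&1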

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't much of a problem security-wise because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit) and it's going to take me a while to find an acceptable solution.

    In the meantime I would like to reduce the performance impact of serving those 404 pages. We're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), which isn't ideal.

    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / {
            }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why the numbers are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)

        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is about 12% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not found" string from nginx. I have tried many things so far but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx (without hacking the source, which may be my next step)?
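    A hedged sketch of an alternative that avoids the echo module altogether: nginx's stock return directive can send both the status code and a short body, so the 404 never touches the disk. This assumes a reasonably recent nginx (one where return accepts a body text); the message string is just an example:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            return 404 "404 - Page not found\n";
        }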

    Read the article

  • Install Lubuntu 10.04 to a USB Drive

    - by Nironan12
    I'm looking to install Lubuntu 10.04 onto my flash drive to test it out without the risk of altering my system, which runs Windows 7. I think I need what is called a "persistent install", so that Lubuntu will run off my flash drive and any changes made will stay there, unlike the demo feature of the CD.
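    A hedged sketch of how persistence usually works on a casper-based live USB such as Lubuntu 10.04 (the mount point and overlay size below are examples): the live system looks for an ext-formatted overlay named casper-rw on the stick and is booted with the "persistent" option, and changes are then saved into that overlay:

        # on an existing live USB (made with usb-creator/UNetbootin), create a 2 GB overlay file
        dd if=/dev/zero of=/media/USBSTICK/casper-rw bs=1M count=2048
        mkfs.ext3 -F /media/USBSTICK/casper-rw
        # then add the word "persistent" to the kernel line in the stick's syslinux.cfg

    Tools such as usb-creator also offer a "reserved extra space" option that creates this overlay for you.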

    Read the article

  • Simulate printers on AD network

    - by MikeyB
    I need to create an Active Directory lab that contains printers and to test various printer-related functionality (adding printers in AD, clients attaching to printers, etc.). Is there a good way to properly simulate printers on a network, or does there need to be a real physical printer somewhere that is eventually attached, even if no output ever comes out of it? How would you solve this problem?

    Read the article

  • How to Run vnc4server on Ubuntu in Certain Resolution?

    - by Boris Pavlovic
    There's a headless Ubuntu instance used as the host for our build server. I have some UI code that requires graphical output. Installing vnc4server and redirecting DISPLAY to it worked like a charm: now my UI tests are running and the test scripts can take screenshots. The problem is that I need to set the resolution that vnc4server uses to serve the graphical content. Does anybody know how to configure this on an Ubuntu server?
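    A hedged sketch of the usual way this is done: vnc4server accepts -geometry (and -depth) when the virtual display is started, so the resolution is chosen per display rather than in a config file. The display number and resolution below are just examples:

        # start virtual display :1 at 1280x1024, 24-bit colour
        vnc4server :1 -geometry 1280x1024 -depth 24
        # point the build/test jobs at it
        export DISPLAY=:1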

    Read the article

  • Run Jar in Background on Linux

    - by Benny
    I have a jar that runs forever (infinite loop with socket listening thread) and need it to run in the background at all times. An example would be: "java -jar test.jar" How do I do this? Thanks in advance!
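    A hedged sketch of the simplest way to do this (an init/systemd script or a process supervisor would be the more robust long-term option):

        # start the jar detached from the terminal; output goes to app.log and the prompt returns immediately
        nohup java -jar test.jar > app.log 2>&1 &
        disown    # optional: also drop it from the shell's job table so logging out can't touch it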

    Read the article

  • DELETE method not allowed in IIS (7)?

    - by DucDigital
    Somehow, in the ASP.NET MVC application I'm developing, the DELETE method works fine on the Visual Studio development server, but when I test it on IIS it doesn't work and always returns a 405 error. Currently I don't know where or how I can get IIS to allow the DELETE/PUT HTTP methods in my application. Can someone help me please?
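    A hedged sketch of one frequently reported cause and workaround: on IIS 7 the WebDAV module often intercepts PUT and DELETE before they reach the application and answers 405. Removing it for the site is a common fix; the module/handler names below are the IIS defaults, which is an assumption about this particular server:

        <!-- web.config, inside <configuration> -->
        <system.webServer>
          <modules>
            <remove name="WebDAVModule" />
          </modules>
          <handlers>
            <remove name="WebDAV" />
          </handlers>
        </system.webServer>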

    Read the article

  • Is it possible to add wildcard serveralias to virtualhost without modifying httpd.conf manually?

    - by Favourite Chigozie Onwuemene
    Is it possible to add a wildcard ServerAlias (example: *.somesite.com) on an Apache server without modifying httpd.conf manually? I use a DNS provider separate from my hosting server and I have added a wildcard A record to my DNS to point all requests like test.somesite.com and test2.somesite.com to my hosting server's IP, but I don't see any way of adding wildcard ServerAliases to Apache's httpd.conf file in my cPanel. Please, is there a solution?
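    A hedged sketch of one route cPanel itself provides, so httpd.conf never has to be edited by hand; every path and script name below is an assumption based on a typical cPanel layout and should be checked against the installed version:

        # drop a per-vhost include file, e.g.
        #   /usr/local/apache/conf/userdata/std/2/USERNAME/somesite.com/wildcard.conf
        # containing just:
        ServerAlias *.somesite.com

    and then regenerate and restart Apache (for instance with /scripts/rebuildhttpdconf followed by /scripts/restartsrv_httpd, script names again assumed from a stock cPanel install). Another commonly mentioned route is creating a subdomain literally named "*" in the cPanel interface, which makes cPanel write the wildcard ServerAlias itself.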

    Read the article

  • Can someone access my locally ran website even if I haven't specified any port forwarding?

    - by user701510
    I am using XAMPP so I can test my web application directly on my own computer. I am concerned that someone could access my XAMPP site since I am still connected to the internet. However, I have NOT explicitly enabled any port forwarding for my XAMPP site in my router's firewall settings. Furthermore, I am using a dynamic IP address. Given the factors already stated, can someone from outside my local network still access my locally run website?
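    Not an answer to the reachability question itself, but a hedged sketch of a precaution that is often suggested for this situation: binding Apache to the loopback interface in XAMPP's httpd.conf makes the test site unreachable from any other machine no matter what the router does (the exact location of httpd.conf depends on where XAMPP was installed):

        # httpd.conf -- accept connections only from this machine
        Listen 127.0.0.1:80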

    Read the article

  • PHP ftp_connect

    - by Dude Lebowski
    I am trying to use the PHP ftp_connect function on my dedicated server and I'm unable to establish a connection:

        $conn_id = ftp_connect($ftp_server, 21) or die("Unable to connect to $ftp_server");

    I'm sure the function is available, as I tested with function_exists('ftp_connect') and it returns true. When I FTP to the server through the shell I can reach it, so I guess it's not a firewall issue. Am I missing something else? Thanks for your precious advice.
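    A hedged sketch of how this is often narrowed down (the host name and credentials below are placeholders): ftp_connect() expects a bare hostname with no "ftp://" scheme, and an explicit timeout plus passive mode helps separate DNS/firewall problems from slow or NAT-ed servers:

        <?php
        $ftp_server = 'ftp.example.com';              // bare hostname, no "ftp://" prefix
        $conn_id = ftp_connect($ftp_server, 21, 10);  // 10-second timeout instead of the 90 s default
        if ($conn_id === false) {
            die("Unable to connect to $ftp_server");
        }
        if (!ftp_login($conn_id, 'user', 'password')) {  // placeholder credentials
            die('Connected, but login failed');
        }
        ftp_pasv($conn_id, true);                     // passive mode is often needed behind NAT/firewalls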

    Read the article

  • Visual Studio not auto-building when I press the debug button

    - by Kurru
    Hi, I'm writing code in Visual Studio, but whenever I want to test the application and press the green arrow for "Start Debugging", Visual Studio does not automatically recompile the active solution for me, and I have to manually build the solution and then debug it. Visual Studio used to build automatically before debugging, and I want this back, as constantly having to build manually is a serious pain. Thanks

    Read the article

  • Automatically restore windows network drive (red "X") via batch file

    - by user33958
    I need to check whether a network drive is mapped and accessible. From time to time Windows displays a red X on the drive, and I then have to manually click the drive in Explorer to reconnect it. I have already found solutions that involve editing the registry, which unfortunately isn't possible here. So I need a batch file that checks the connection and (re-)mounts the drive. What I'm using at the moment:

        IF NOT EXIST z: net use z: \\10.211.55.5\test
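    A hedged sketch building on that one-liner: a "red X" drive letter can still pass an EXIST check, so a more defensive variant probes the drive, drops any stale mapping and re-creates it (the share path is taken from the line above; /persistent is optional):

        @echo off
        rem if Z: exists and is actually readable, there is nothing to do
        if exist z:\nul (
            dir z:\ >nul 2>&1 && goto :eof
            rem stale mapping: remove it before re-mapping
            net use z: /delete /y >nul 2>&1
        )
        net use z: \\10.211.55.5\test /persistent:no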

    Read the article

  • Wireless USB devices compatible with Windows 7 64-bit

    - by BrynJ
    I want to upgrade my Vista 64-bit edition to Windows 7 64-bit, so I've installed and run the Windows 7 compatibility test. The only item being flagged as incompatible is my ZyXEL G-202 wireless USB network adapter. Does anyone know of a wireless USB device that is compatible with Windows 7 64-bit?

    Read the article

  • vsftp hangs at "150 Here comes the directory listing."

    - by Rikr
    In a vsftpd server environment, which shares various directories from NFS mount points, I can log in without problems, but when I send the first "ls" vsftpd gives me the directory listing:

        lftp [email protected]:~> ls
        -rw-rw-rw-    1 1160     1016          392 Jun 06 09:28 test.gif

    but it never gives me the prompt back (lftp client). In the server log I can see that the last message is: "150 Here comes the directory listing." Why does this happen?
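    A hedged sketch of the usual suspect when a listing stalls right after "150 ...": the FTP data connection (as opposed to the control connection) is being blocked, typically by a firewall or NAT. Pinning vsftpd's passive port range and opening it on the firewall is the common remedy; the port numbers are examples:

        # vsftpd.conf
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        # pasv_address=<public IP>   # only if the server sits behind NAT (address intentionally left out)

    with the matching 40000-40100 range allowed through the firewall in front of the server.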

    Read the article
