Search Results

Search found 15605 results on 625 pages for 'exchange cached mode'.


  • Windows XP dual screen problems, user account related

    - by Chris
    I have had this issue with a few laptops now and it looks like it is some sort of user account problem. Specifics of the system are: Dell laptop, Windows XP Pro SP3, non-domain member computer, DLP projector connected to the laptop via VGA. I use this setup almost daily to do presentations, always in the mirrored display mode where the laptop monitor shows the same thing that is displayed on the projector. Today, when I boot up, I get the mirrored display at the login screen, but after I log in, it switches to Extended Desktop (like two desktops side-by-side). Fn+F8 just cycles through all the normal settings except the mirrored display. I created a new user account on the computer and it performs normally; the mirrored display works as expected. I have run into this about 4 times now and it can always be solved by creating a new user account on the computer, and then all is well. I would like to either: 1. Find a way to reset the customized display settings for a specific user account, which would hopefully make this go away, or 2. Find the specific setting that causes this so that I can easily fix it when the problem comes up. Creating new user accounts is kind of a pain and an easy fix must be out there somewhere.

    Read the article

  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation. System specs: Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10000 RPM) Motherboard: Gigabyte Z77X-UD3H BIOS SATA mode set to AHCI (not RAID), with disk connected to SATA0 (6 Gb/s port). Windows 7 Enterprise SP1 64-bit The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?). Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating 2 partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data. Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.
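
    One quick thing to check from an elevated command prompt is whether Windows has flagged the disk read-only. A diagnostic sketch with diskpart (disk 1 is assumed here; use whatever number list disk reports for the VelociRaptor):

      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> attributes disk
      DISKPART> attributes disk clear readonly
      DISKPART> rescan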

    Read the article

  • Can I have 2 Gbit over 1 Gbit NICs?

    - by Daniel
    So this really baffles me. Apparently, because 1 Gbit Ethernet can transmit data in both directions simultaneously, it should be possible to get 2 Gbit of data transfer on a single NIC (1 Gbit send and 1 Gbit receive). People claim that because 1 Gbit is full-duplex (almost always), it is really 2 Gbit in total. My intuition and electrical background tell me that something is not right here: 4 twisted pairs with 250 Mbit capacity each gives 1 Gbit, unless it is really possible to transfer data in both directions simultaneously. I did a test with iperf, Ubuntu Server 12.04 <-- MacBook Pro, both with decent CPU speed. Testing the speed of the connection in each direction individually, on the Mac I can see 112 MB/s regardless of which direction the data is going; on Ubuntu, with vnstat and ifstat, I got 970 Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using 2 iperf clients shows that the Ubuntu box is, for example, sending at 600 Mbit and receiving 350 Mbit, which adds up to pretty much a 1 Gbit link. So to me there is no magical 2 Gbit. Can someone confirm that, or tell me why I'm wrong? Another thing that confuses me is the fact that a 24-port switch datasheet states, for example: Throughput up to: 50.6 Mpps; Switching capacity: 68 Gbps; Switch fabric speed: 88 Gbps. Which would suggest they can handle 2 Gbit per port.
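
    A minimal way to exercise both directions at once, assuming iperf 2.x on both ends (the -d flag runs the reverse stream simultaneously rather than sequentially; the server address is a placeholder):

      # on the Ubuntu box
      iperf -s

      # on the MacBook: 30-second bidirectional test against the server
      iperf -c <ubuntu-ip> -d -t 30

      # watch per-direction throughput on the server while the test runs
      ifstat -i eth0 1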

    Read the article

  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before: Is turning off hard disks harmful? What's the effect of standby (spindown) mode on modern hard drives? Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics. Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically, if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down? ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. This study is no help here since all the disks surveyed were powered on 24/7.
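
    If you want to quantify this on your own drives, SMART already counts spin cycles; a quick sketch, assuming smartmontools and hdparm are installed and the drive exposes these attributes:

      # how many spin-ups the drive has already done, and how long it has run
      smartctl -A /dev/sda | egrep 'Start_Stop_Count|Load_Cycle_Count|Power_Cycle_Count|Power_On_Hours'

      # set the standby (spin-down) timeout to 10 minutes (values 1-240 are multiples of 5 seconds)
      hdparm -S 120 /dev/sda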

    Read the article

  • Which ports are needed for NTLM (Windows Authentication) to connect to SQL Server?

    - by Adam Bellaire
    I've got SQL Server running on a machine which is not in a domain and which is not operating in mixed mode (it's running with "Windows Authentication"). I'm trying to connect to it from a Linux web server running FreeTDS via TCP/IP, using NTLM to authenticate. The firewall on the SQL Server machine is very restrictive. 1433 is open to my web server, but I'm getting conflicting information from the web on which additional ports (TCP/UDP) are needed for NTLM to succeed. It currently fails: I can talk on 1433 and request NTLM, but the actual authentication always fails. One source says 137, 138, 139, but those are just the NetBIOS ports. Do I really need those? Another source says 135. Still others seem to say 1434... I can't make heads or tails of it. Dammit Jim, I'm a programmer, not a network administrator! EDIT: The exact error message:
      Msg 18452, Level 14, State 1, Server , Line 0
      Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
      Msg 20002, Level 9, State -1, Server OpenClient, Line -1
      Adaptive Server connection failed
    I am attempting to connect with a remote machine username, i.e. 'servername\username'. Some sources recommend that I set up mirrored accounts on the local and remote machines, but the local machine is running Linux, not IIS under Windows.
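
    One way to take the web application out of the picture while testing is FreeTDS's own tsql client; a sketch (host name, user, and password are placeholders):

      # connect straight to the host/port, bypassing freetds.conf
      tsql -H sqlserver.example.com -p 1433 -U 'SERVERNAME\someuser' -P 'secret'

      # enable a protocol dump to see where the NTLM exchange stops
      TDSDUMP=/tmp/freetds.log tsql -H sqlserver.example.com -p 1433 -U 'SERVERNAME\someuser' -P 'secret'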

    Read the article

  • Is there any special way to force GoBack to work with Windows Vista and 7?

    - by dfree
    Norton/Roxio's GoBack doesn't work with Vista/7 for reasons unknown. I have tried several alternatives (Norton Ghost, RollbackRX, Norton Save and Restore), none of which offer the same functionality as GoBack. Not only does GoBack avoid eating up all your hard drive space while creating a legitimate fail-safe for any PC problems, it also allows you to see ACTIVELY, EXACTLY WHAT PROCESSES ARE BEING EXECUTED ON YOUR COMPUTER. This feature (called Advanced Disk Drive Restore) also allows you to troubleshoot problems and determine causes for things in about half a second by seeing what is happening on your machine. It's how I learned everything I know about computers. GoBack also features something called SafeTry Mode, where you can put it in SafeTry and then mess up the whole computer, and when you come out of it, your computer will be exactly how it was before. Amazing for people who like to tinker without risking their machine's stability. It also helps with that accidentally erased paper or whatever else you may have erased. I believe GoBack installs a type 44 partition around the drive, which loads prior to Windows to allow this functionality. If you're going to recommend another program, please don't (unless it does all of the above). I've tried all the competition and nothing is as good. I just want my GoBack to work with 7 :) Any ideas of crazy ways to make this work?

    Read the article

  • How to configure a Web.Config file to allow custom 404 handling while still displaying on-page 500 errors

    - by Mark
    To customize 404 handling, and based on the hosting company's suggestion, we are currently using the following web.config setup. However, we quickly realized that with this configuration, any page errors (500 errors) are also getting redirected to this custom error page. How can I modify this config file so we can continue to handle 404s with the custom file while still being able to view on-page errors?
      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.webServer>
          <httpErrors errorMode="DetailedLocalOnly" defaultPath="/Custom404.html" defaultResponseMode="ExecuteURL">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="/Custom404.html" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>
        <system.web>
          <customErrors mode="On">
            <error statusCode="404" redirect="/Custom404.html" />
          </customErrors>
        </system.web>
      </configuration>
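
    One possible arrangement (a sketch only, not tested against this site): drop defaultPath so httpErrors only rewrites 404s, and relax customErrors so ASP.NET's own 500 error pages still render:

      <system.webServer>
        <httpErrors errorMode="DetailedLocalOnly">
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" path="/Custom404.html" responseMode="ExecuteURL" />
        </httpErrors>
      </system.webServer>
      <system.web>
        <!-- RemoteOnly shows detailed errors to local requests; use Off to show them everywhere -->
        <customErrors mode="RemoteOnly">
          <error statusCode="404" redirect="/Custom404.html" />
        </customErrors>
      </system.web>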

    Read the article

  • Windows installation repair option not showing up

    - by Jason
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx this should work:
      1. When the "Press any key to boot from CD" message is displayed on your screen, press a key to start your computer from the Windows XP CD.
      2. Press ENTER when you see the message "To setup Windows XP now, and then press ENTER" displayed on the Welcome to Setup screen.
      3. Do not choose the option to press R to use the Recovery Console.
      4. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement.
      5. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP.
      6. Follow the instructions on the screen to complete Setup.
    On step 5, pressing R does nothing and there is nothing on the screen saying it would. When I just select to install, I get a message that a previous installation is there and proceeding will destroy it and any installed applications; I can optionally select a directory other than C:\Windows, and I can optionally format before continuing. I had tried to go from SP2 to SP3. It failed, and then I couldn't get into Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk, I just have the non-boot upgrade package.)

    Read the article

  • OpenVPN: ifup tap0 drops all connections

    - by raspi
    I'm trying to create a star-shaped "virtual" LAN with OpenVPN which is not connected to the physical network, i.e. tap0 packets should not go to eth0; packets should only go through OpenVPN to connected clients. This setup works on my OpenVPN test machine, which runs on VirtualBox, but not on my actual server, which is running on top of Xen. Both servers are running Ubuntu Intrepid.
      /etc/network/interfaces:
        iface tap0 inet manual
          address 10.10.10.1
          netmask 255.255.255.0
          gateway 10.10.10.1
      /etc/openvpn/server.conf:
        mode server
        tls-server
        port 1194
        proto udp
        dev tap
        client-to-client
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/servername.crt
        key /etc/openvpn/easy-rsa/keys/servername.key
        dh /etc/openvpn/easy-rsa/keys/dh384.pem
        ifconfig-pool-persist ipp.txt
        server-bridge 10.10.10.1 255.255.255.0 10.10.10.128 10.10.10.250
        push "route 10.10.10.1 255.255.255.0"
        keepalive 5 60
        comp-lzo
        persist-key
        persist-tun
        status /var/log/openvpn-status.log
        log-append /var/log/openvpn.log
        verb 3
        user nobody
        group nogroup
    After ifup tap0 on the VirtualBox machine everything is OK and SSH keeps running, but on Xen the SSH connection drops and I have to reboot the whole machine. What am I missing?
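
    One thing worth checking (an assumption on my part, not something stated in the post): a gateway line on tap0 installs a default route through the tap device when the interface comes up, which on a remote server would drop existing connections. A stanza without it might look like:

      # /etc/network/interfaces -- sketch: static addressing, no default route via tap0
      iface tap0 inet static
          address 10.10.10.1
          netmask 255.255.255.0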

    Read the article

  • Hard drives (SATA/ATA) corrupting

    - by JC Denton
    Hello All, 2 years ago I bought relatives a new computer for Christmas and installed Ubuntu on it. Ever since then it has been experiencing problems with its hard drives. The hard drive supplied with the machine was a SATA drive. When it appeared to be having problems (files and folders with invalid encoding started appearing), I replaced the SATA drive with the drive from their previous computer. I later replaced that one as well, the drive being rather old and thus more prone to failure. The current replacement drive is an IDE drive, but the same problems have started to appear (files and folders showing up in Nautilus with invalid encoding). I fear the files and folders that are showing up are existing FS entries that are starting to corrupt. As it's happening to both the IDE and SATA drives, it's unlikely to be the drives themselves or the IDE/SATA controller, I believe. Any ideas as to what could be causing the (assumed) corruption? EDIT: You're right about the paragraphs. They were there in edit mode but I'm still getting to grips with the whitespace format codes. The system is a "Primo Pro" AMD Phenom II X4 Quad Core 920 2.80GHz SILENT DDR2, ordered from overclockers.co.uk, and nothing has been added to it except for the replacement of the SATA drive with an IDE drive. It would seem unlikely for a barebones system to be underpowered.

    Read the article

  • Android failure to boot on LG [migrated]

    - by Ukavi
    I need to recover data from my AT&T LG Thrill Android phone.
    Background: My AT&T LG Thrill phone's battery died a couple of days ago because I forgot to charge it. When I charged the phone and tried to turn it on, it showed the LG logo followed by the dropping balls and the AT&T "Rethink Possible" screen. I then get a message that the application Google Services Framework has crashed, and the phone goes into a loop with the dropping balls showing again, followed by the "Rethink Possible" screen. This sequence repeats itself over and over and the phone does not get out of this loop. I have been able to get into the recovery screen (both Safe Mode and the Android Recovery Service) and have cleared the cache, etc. However, I DO NOT want to wipe user data and restore to factory settings, as this will wipe all of my data (pictures, application data, etc).
    Solution Needed: I need a suggestion for a way of accessing my data so that I can back it up onto an SD card/computer. I DO NOT want to root the phone as this may void the warranty. What I'm looking for is a way of perhaps putting the original flash image on the micro SD card and then having the phone read that image, or some other similar solution that will get the phone out of this loop and allow me to get to the data.

    Read the article

  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA 5. I'm a pretty clever dude and I planned on building the necessary tools myself. So I built a local copy of gcc, binutils, and glibc: everything CUDA 5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in the .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't do simple commands like ls:
      [ex@uid377 ~]$ ls
      ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument
    and I can't run commands to fix the problem, like rm or vim! Is there a way for me to ssh in but skip parsing the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could get administrator support.
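
    A couple of ways in that avoid the broken environment (general sketches, not tested against this cluster; whether a non-interactive ssh command sources ~/.bashrc at all depends on how bash was built on the remote side, and the user/host names below are just taken from the prompt above):

      # run a single command over ssh and move the offending file out of the way
      ssh ex@uid377 'mv ~/.bashrc ~/.bashrc.broken'

      # or start an interactive shell that skips the startup files entirely
      ssh -t ex@uid377 /bin/bash --noprofile --norc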

    Read the article

  • Selenium server won't start

    - by moff
    I'm getting the following error when trying to start selenium:
      C:\Temp\selenium-server-1.0.3>java -jar selenium-server.jar
      22:02:07.615 INFO - Java: Sun Microsystems Inc. 16.0-b13
      22:02:07.617 INFO - OS: Windows 7 6.1 x86
      22:02:07.625 INFO - v2.0 [a2], with Core v2.0 [a2]
      22:02:07.811 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
      22:02:07.813 INFO - Version Jetty/5.1.x
      22:02:07.815 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
      22:02:07.817 INFO - Started HttpContext[/selenium-server,/selenium-server]
      22:02:07.818 INFO - Started HttpContext[/,/]
      22:02:07.866 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@2bbd86
      22:02:07.867 INFO - Started HttpContext[/wd,/wd]
      22:02:07.870 WARN - Failed to start: [email protected]:4444
      Exception in thread "main" org.openqa.jetty.util.MultiException[java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind]
        at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:686)
        at org.openqa.jetty.util.Container.start(Container.java:72)
        at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396)
        at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234)
        at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198)
      java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind
        at java.net.PlainSocketImpl.socketBind(Native Method)
        at java.net.PlainSocketImpl.bind(Unknown Source)
        at java.net.ServerSocket.bind(Unknown Source)
        at java.net.ServerSocket.<init>(Unknown Source)
        at org.openqa.jetty.util.ThreadedServer.newServerSocket(ThreadedServer.java:391)
        at org.openqa.jetty.util.ThreadedServer.open(ThreadedServer.java:477)
        at org.openqa.jetty.util.ThreadedServer.start(ThreadedServer.java:503)
        at org.openqa.jetty.http.SocketListener.start(SocketListener.java:204)
        at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:716)
        at org.openqa.jetty.util.Container.start(Container.java:72)
        at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396)
        at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234)
        at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198)
    Java is installed:
      C:\Temp\selenium-server-1.0.3>java -version
      java version "1.6.0_18"
      Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
      Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing)
    Thanks in advance
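
    JVM_Bind often points at the listen port being unavailable; a quick diagnostic sketch using standard Windows commands to see whether something else is already holding 4444 (the PID is whatever netstat reports):

      C:\> netstat -ano | findstr :4444
      C:\> tasklist /FI "PID eq <pid-from-netstat>"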

    Read the article

  • CentOS OpenVZ fail to boot after kernel update

    - by SkechBoy
    After upgrading to the latest OpenVZ kernel, the CentOS server won't boot. When I try to boot the latest kernel, the server is stuck at this point (note that the images are taken from a virtual KVM): http://i.stack.imgur.com/4lusz.jpg When I then try to start the server on some of the old kernels, I get the error message "kernel panic - not syncing - attempted to kill init", better shown on this image: http://i.stack.imgur.com/2SReF.jpg Here is some useful information.
      fdisk -l:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sda: 2995.7 GB, 2995739688960 bytes
        255 heads, 63 sectors/track, 364211 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004c4e4
        Device Boot    Start     End       Blocks        Id  System
        /dev/sda1      1         523       4199044+      82  Linux swap / Solaris
        /dev/sda2      524       785       2104515       83  Linux
        /dev/sda3      786       261869    2097157230    83  Linux
        /dev/sda4      261870    364211    822062115     83  Linux
      /etc/fstab:
        proc        /proc      proc    defaults        0 0
        none        /dev/pts   devpts  gid=5,mode=620  0 0
        /dev/sda1   none       swap    sw              0 0
        /dev/sda2   /boot      ext3    defaults        0 0
        /dev/sda3   /          ext3    defaults        0 0
        /dev/sda4   /home      ext3    defaults        0 0
      and the grub config file:
        title OpenVZ (2.6.18-274.18.1.el5.028stab098.1)
          root (hd0,1)
          kernel /vmlinuz-2.6.18-274.18.1.el5.028stab098.1 ro root=/dev/sda3 vga=0x317 selinux=0
          initrd /initrd-2.6.18-274.18.1.el5.028stab098.1.img
        title OpenVZ (2.6.18-274.7.1.el5.028stab095.1)
          root (hd0,1)
          kernel /vmlinuz-2.6.18-274.7.1.el5.028stab095.1 ro root=/dev/sda3 vga=0x317 selinux=0
          initrd /initrd-2.6.18-274.7.1.el5.028stab095.1.img
        title OpenVZ (2.6.18-194.8.1.el5.028stab070.4)
          root (hd0,1)
          kernel /vmlinuz-2.6.18-194.8.1.el5.028stab070.4 ro root=/dev/sda3 vga=0x317
          initrd /initrd-2.6.18-194.8.1.el5.028stab070.4.img
    Any help is greatly appreciated. Thanks.

    Read the article

  • Upgrade Nokia Maps from v2 to v3 fails

    - by ssollinger
    I'm trying to install Nokia Maps 3.0 on my Nokia N82, without much success. I believe other similar Nokia phones have the same problem. My phone is connected through USB in "PC Suite" mode, and the latest firmware available for the N82 is installed. I currently have Maps 2.0 installed. I'm installing from a Windows XP PC, and tried the update first from within Ovi Suite (latest version) and then from Nokia Maps Updater (latest version). In both cases it detects that there is an update available (Maps 3.0), downloads it and starts the install. On my phone, I then get the following error message: "Unable to install. Component is built in." And on the PC I get the error: "Error: Cannot update maps application. The installation failed or was cancelled on the phone (18)." I found an entry for Maps in the App. manager and deleted it (and turned the phone off and on again afterwards), but this didn't make any difference (and I don't think it changed the version of Maps installed either). This is the release version of Maps 3.0, not the beta. I found the problem mentioned many times on various web sites, but couldn't find a solution anywhere. Has anybody any ideas how to get the upgrade from Maps 2.0 to Maps 3.0 to work?

    Read the article

  • How to use the AWUS036H on MacBook Pro with Lion and Backtrack in VM?

    - by Swader
    I have the AWUS036H USB WiFi adapter and have recently upgraded OS X to Lion. The thing is, there are no Lion drivers for the AWUS036H, and I would have to boot into 32-bit mode every time I want to use the adapter, as per the instructions here: http://www.youtube.com/watch?v=n9_HAGi1ce0 I also want to install BackTrack, as I deal with networks a lot for my company. While this would be a simple matter on any other laptop, the company-issued MacBook does not allow booting into any OS other than Mac OS X or Windows with Boot Camp. Now, since dual-booting into BT is not an option, I would like BackTrack to run in a VM inside Mac OS X Lion, and this it does: it works like a charm inside VirtualBox. But since there are no 64-bit drivers for the WiFi adapter, Lion doesn't recognize it and cannot install it. This, in turn, means that BackTrack cannot see it, even though the AWUS036H usually works flawlessly with BT. How can I make my VM-based BT see the WiFi adapter even if the host OS doesn't see it, if at all? Is there a way, or am I better off buying a new WiFi adapter that supports OS X 10.7, such as the AWUS036NHR?
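
    Note that VirtualBox can hand the raw USB device to a guest even when the host has no driver for it; a sketch of a USB filter (the VM name is a placeholder, and 0bda:8187 is my assumption for the RTL8187-based AWUS036H; check VBoxManage list usbhost for the real values):

      # see what VirtualBox thinks is plugged in on the host
      VBoxManage list usbhost

      # attach a permanent filter to the BackTrack VM (placeholder name "BackTrack")
      VBoxManage usbfilter add 0 --target "BackTrack" --name "Alfa AWUS036H" \
          --vendorid 0bda --productid 8187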

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this here as a reference for other people, as I have seen this error reported often enough online. I had to change the path smtpd_sasl_path = private/auth in my /etc/postfix/main.cf to relative, instead of absolute. This is because on Debian Postfix runs chrooted (and how does this affect the path structure?! Anyone?) -- I am trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to be able to host multiple e-mail accounts on my (Debian Lenny 64-bit) server, using virtual users. Whenever I test my current configuration by running telnet server-IP smtp, I get the following error in mail.log:
      warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory
    Now, Dovecot is supposed to create the auth socket file, yet it doesn't. I have given the right privileges to the private directory, and even tried creating an auth file manually. The output of postconf -a is:
      cyrus
      dovecot
    Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds:
      client {
        path = /var/spool/postfix/private/auth
        mode = 0660
        user = postfix
        group = postfix
      }
    I have tried every solution out there, and am pretty much desperate after a full day of struggling with the issue. Can anybody help me, pretty please?
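
    For reference, the matching Postfix side usually looks something like the sketch below (these are standard Postfix Dovecot-SASL settings; the key point, as noted above, is that when smtpd runs chrooted the relative path is resolved under the Postfix queue directory, normally /var/spool/postfix):

      # /etc/postfix/main.cf (sketch)
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      smtpd_sasl_auth_enable = yes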

    Read the article

  • Nginx + PHP5-FPM repeated cut outs 502

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all the questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am doing this across 3 1GB machines, each with the same configuration and the same problems.
      fastcgi_params:
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;
      /etc/php5/fpm/main.conf:
        ; FPM Configuration ;
        ;include=/etc/php5/fpm/*.conf
        ; Global Options ;
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ;daemonize = yes
        ; Pool Definitions ;
        include=/etc/php5/fpm/pool.d/*.conf
      /etc/php5/fpm/pool.d/www.conf:
        [www]
        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        ;pm.max_children = 50
        pm.max_children = 15
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        ;pm.max_spare_servers = 35
        pm.max_spare_servers = 10
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes

    Read the article

  • Why is 10-year-old software still so slow even today?

    - by Cawas
    I just noticed this question because of a game (which happens to be Diablo 2), but the matter of fact is: why can't my brand-new MacBook Pro, made in 2009 with the latest technology (though it's the cheapest model), rival the computer that used to run this game much faster back in 2000? Really, it was much faster on my AMD K6 450 back in those days, and I could even run two clients at the same time with no slowdown. I've always had the feeling this machine was slow, but this is a very odd way to confirm it. Granted, the machine is smaller, runs on WiFi, and "boots" way faster thanks to sleep mode. But other than that, what have we evolved after all?! I'm pretty sure this shouldn't be the graphics card's fault. Sure, if I buy the latest technology it will run fast, and probably most people here can confirm this and won't even understand my question. But the thing is, all the hardware is supposedly much faster and better than the stuff from 10 years ago. The software and operating system became more complex, but also more refined. Now I'm trying a piece of software that is actually 10 years old and it's not getting any better results! Why?

    Read the article

  • Word 2013 can't compare readonly files

    - by Moshe Katz
    I am using Tortoise SVN to work with a repository that contains some documentation saved as Word documents. On my old computer, with Office 2010, I was able to compare with previous revisions. Tortoise would open Word in compare view so I could see the differences between the files. I have installed Office 2013 (final version from Technet, not the preview version) on my new laptop for testing and now I can no longer compare Word Documents. Tortoise pops up a generic error that it was unable to compare the two files. Tortoise uses a JScript file to interface with Word, so I ran that file through a debugger and found that the actual error is: The Compare method or property is not available because this command is not available for reading. Some Googling followed by some testing revealed that the error is caused by the first file opened (in this case, the previous version) being opened as Read-Only. If I change the JScript code to open in normal mode, and I find the file on the system and un-check the "Read Only" property (if necessary), then the comparison opens as expected. I was unable to find any documentation about this change to Word on any Microsoft site. Does anyone know why this has been changed, and if it is intentional and not a bug, what the benefit is of requiring the file to be writable in order to compare it with another? Note: This is tagged word-2013-preview but it is actually for the release version of Word that is available on MSDN and Technet. I do not have enough rep. on this site to create new tags (yet).
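
    For anyone patching the script locally, the relevant calls look roughly like the sketch below (this is not the exact TortoiseSVN diff script, just the Word automation pattern; the file-path variables are placeholders, and the third argument of Documents.Open is ReadOnly, which is the flag that matters here):

      var word = new ActiveXObject("Word.Application");
      word.Visible = true;
      // Open the base (previous) revision writable: FileName, ConfirmConversions, ReadOnly
      var baseDoc = word.Documents.Open(sBaseFile, false, false);
      // Compare against the working-copy revision; the result opens as a new document
      baseDoc.Compare(sNewFile);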

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".

    Read the article

  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using Dante for testing purposes. However, I can't even get it to work with a web browser, after looking at several tutorials on how to do that. I've tried in both IE and Firefox, in both cases using "Manual proxy configuration", leaving everything blank except the SOCKS host, where I put in the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page" and don't see anything in danted, even running it in debug mode. If I do a "telnet 10.0.0.40 1080" I do see the connection open in the danted debug output, so I know that much is working. Here's my config:
      logoutput: stdout /var/log/danted/danted.log
      internal: eth0 port = 1080
      external: eth0
      method: username none #rfc931
      user.privileged: proxy
      user.notprivileged: nobody
      user.libwrap: nobody
      connecttimeout: 30 # on a lan, this should be enough if method is "none".
      client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
      client pass { from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
      client block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }
      block { from: 0.0.0.0/0 to: 127.0.0.0/8 log: connect error }
      pass { from: 10.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
      pass { from: 127.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
      block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }
    I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late 90's.
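
    To take the browsers out of the equation, curl can talk SOCKS directly; a quick sketch (the target URL is just an example):

      # SOCKS5 with DNS resolved locally
      curl -v --socks5 10.0.0.40:1080 http://example.com/

      # SOCKS5 with DNS resolved by the proxy (what browsers do when told to proxy DNS)
      curl -v --socks5-hostname 10.0.0.40:1080 http://example.com/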

    Read the article

  • Authenticated User Impersonation in Classic ASP under IIS7

    - by user52663
    I've recently moved one of our servers from Server 2003 and IIS6 to Server 2008 R2 and IIS7 (technically IIS7.5 I suppose). In doing so I am transitioning a small account management tool written in classic ASP, and have run into a problem with user impersonation. Extensive searching hasn't been much help so far. Under IIS6, the site was configured to impersonate the logged-in user. Thus, if a domain admin logged in, he was able to run commands to create user directories, adjust permissions, etc. Using Procmon you can see the processes executing as that user. This worked fine. However, with the same code under IIS7, I am unable to get this behavior. I have enabled Basic Authentication, disabled Anonymous Authentication, enabled impersonation, and have changed the app pool to classic instead of integrated pipeline. Everything seems to be configured correctly; however, all the processes launched by the classic ASP site continue to run as the default app pool identity and not the logged-in user. If it matters, programs are being launched with code such as:
      set Wsh = Server.CreateObject("WScript.Shell")
      Wsh.Run("cmd.exe /C mkdir D:\users\foo")
    Monitoring via Procmon shows cmd.exe being run as either "Classic .NET AppPool" or "DefaultAppPool" depending on the pipeline mode. Any suggestions on how to get the classic ASP site to impersonate and execute as the authenticated user would be great. Thanks!

    Read the article

  • OpenSwan (IPSEC) on Fedora 13 with Snow Leopard as a client

    - by sicn
    I recently installed OpenSwan on my Fedora 13 machine. I want to use it to connect from Mac OS X with L2TP over IPSEC; unfortunately I am already stuck on the IPSEC negotiation part. My server is running behind a NATted firewall, so my external IP differs from the server's IP. The server has a fixed IP on the network and the same is almost always valid for the clients (they are usually behind a NATted firewall). I installed OpenSwan on Fedora 13 and have the following configuration:
      config setup
        protostack=netkey
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
        oe=off
        nhelpers=0
      conn L2TP-PSK-NAT
        rightsubnet=vhost:%priv
        also=L2TP-PSK-noNAT
      conn L2TP-PSK-noNAT
        authby=secret
        pfs=no
        auto=add
        keyingtries=3
        rekey=no
        ikelifetime=8h
        keylife=1h
        type=transport
        left=my.servers.external.ip
        leftprotoport=17/1701
        right=%any
        rightprotoport=17/0
    IPSEC starts fine and listens on UDP 500 and 4500. These two ports are opened in the firewall and are forwarded fine to the server. In my /etc/ipsec.secrets file I have:
      my.servers.external.ip %any: "LongAndDifficultPassword"
    And finally in my sysctl.conf (the redirect entries are there because OpenSwan was strongly protesting about send/accept_redirects being active) I have:
      net.ipv4.ip_forward = 1
      net.ipv4.conf.all.send_redirects = 0
      net.ipv4.conf.all.accept_redirects = 0
    Running "ipsec verify" gives me "all greens" (except Opportunistic Encryption Support, which is DISABLED). However, when trying to connect, my Mac gives me the following in the logs:
      Nov 1 19:30:28 macbook pppd[4904]: pppd 2.4.2 (Apple version 412.3) started by user, uid 1011
      Nov 1 19:30:28 macbook pppd[4904]: L2TP connecting to server 'my.servers.ip.address' (my.servers.ip.address)...
      Nov 1 19:30:28 macbook pppd[4904]: IPSec connection started
      Nov 1 19:30:28 macbook racoon[4905]: Connecting.
      Nov 1 19:30:28 macbook racoon[4905]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
      Nov 1 19:30:31 macbook racoon[4905]: IKE Packet: transmit success. (Phase1 Retransmit).
      Nov 1 19:30:38: --- last message repeated 2 times ---
      Nov 1 19:30:38 macbook pppd[4904]: IPSec connection failed
    Any ideas at all?

    Read the article

  • How to create btrfs RAID-1 filesystem (assertion error in mkfs.btrfs)?

    - by amcnabb
    I tried to make a btrfs RAID-1 filesystem in "degraded mode" by following the btrfs UseCases instructions, but hit a fatal assertion error. Why is this failing, and is there any workaround? The instructions I followed are at: https://btrfs.wiki.kernel.org/articles/u/s/e/UseCases_8bd8.html The output of the mkfs.btrfs and btrfs filesystem show commands is:
      # mkfs.btrfs -m raid1 -d raid1 /dev/sdd1 /dev/loop1
      WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
      WARNING! - see http://btrfs.wiki.kernel.org before using
      failed to read /dev/sr0
      adding device /dev/loop1 id 2
      mkfs.btrfs: volumes.c:802: btrfs_alloc_chunk: Assertion `!(ret)' failed.
      zsh: abort (core dumped)  mkfs.btrfs -m raid1 -d raid1 /dev/sdd1 /dev/loop1
      # btrfs filesystem show
      failed to read /dev/sr0
      Label: none  uuid: 773908b8-acca-4c30-85c5-6642b06de22b
        Total devices 1 FS bytes used 28.00KB
        devid    1 size 223.13GB used 2.04GB path /dev/sda5
      Label: none  uuid: 0f06f1a8-5f5f-4b92-a55c-b827bcbcc840
        Total devices 2 FS bytes used 24.00KB
        devid    2 size 2.00GB used 0.00 path /dev/loop1
        devid    1 size 1.36TB used 20.00MB path /dev/sdd1
      Btrfs Btrfs v0.19
      #
    EDIT: It turns out that the filesystem isn't mountable:
      # mount /dev/sdd1 /mnt/big2
      mount: wrong fs type, bad option, bad superblock on /dev/sdd1,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail  or so
      #
    So, why did the mkfs fail, and is there any workaround?
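
    If the goal is simply to end up with a two-device RAID-1 without creating it degraded, an alternative path is to build a single-device filesystem and convert it after adding the second device. A sketch (this relies on balance conversion filters, i.e. a reasonably recent kernel and btrfs-progs, which is my assumption rather than something stated in the post):

      mkfs.btrfs /dev/sdd1
      mount /dev/sdd1 /mnt/big2
      btrfs device add /dev/loop1 /mnt/big2
      # rewrite existing data and metadata as RAID-1 across both devices
      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/big2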

    Read the article
