Search Results

Search found 19256 results on 771 pages for 'boost log'.

Page 646/771 | < Previous Page | 642 643 644 645 646 647 648 649 650 651 652 653  | Next Page >

  • How can I confirm that the burn process completed successfully?

    - by infant programmer
    I just tried to make a duplicate copy of my Windows XP Pro SP3 CD. I set the burn speed to 32X (the maximum allowed is 48X) and checked "Verify the Data after burning". I didn't watch how far the burn had progressed, but after about 15 minutes (long enough to burn a disc) I heard the "Fatal Error" sound. On the monitor there was a message box saying "Error while burning Disc". I saved the error log, which you can find here: click_me. How do I confirm that the burning process actually finished? My PC is able to read the CD and it is bootable too, so I am not sure whether the burning process failed or the verify process did. Please let me know if you have come across or are aware of such a situation. As it is an XP installation CD, I don't want to resort to trial and error. Regards,

    Read the article

  • Disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used when I see only 3.6G in files/directories? I'm running as root on an Amazon Linux AMI (appears to be CentOS-based), with lots of free memory available, no swapping going on, and no apparent file-descriptor issue. The only thing I can think of is a log file that was deleted while applications were still appending to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min, with very small decreases from time to time). Any explanation? Solution?
    du --max-depth=1 -h /
    1.2G /usr    4.0K /cgroup    22M /lib64    11M /sbin    19M /etc    52K /dev    2.1G /var    4.0K /media    0 /sys    4.0K /selinux
    du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory (and the same for /proc/14024/task/14024/fdinfo/4, /proc/14024/fd/4 and /proc/14024/fdinfo/4)
    0 /proc    18M /home    4.0K /logs    8.1M /bin    16K /lost+found    12M /tmp    4.0K /srv    35M /boot    79M /lib    56K /root    67M /opt    4.0K /local    4.0K /mnt    3.6G /
    df -h
    Filesystem   Size  Used  Avail  Use%  Mounted on
    /dev/xvda1   7.9G  6.5G  1.4G   84%   /
    tmpfs        3.7G     0  3.7G    0%   /dev/shm
    sysctl fs.file-nr
    fs.file-nr = 864 0 761182
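    A deleted-but-still-open log file matches these symptoms exactly: df counts the space until the last file handle is closed, while du no longer sees the file. A minimal way to check (assuming lsof is installed; the rsyslog restart is only an illustrative example of releasing the handle):
    # List open files whose link count is 0, i.e. deleted but still held open
    lsof -nP +L1
    # Equivalent quick-and-dirty check
    lsof -nP | grep '(deleted)'
    # Restarting (or signalling) the offending process releases the space, e.g.:
    service rsyslog restart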

    Read the article

  • How to continue an HTTrack mirroring session from the command line?

    - by isme
    I want to drive my mirroring project using the Command Prompt instead of the WinHTTrack interface so that I can script and schedule the mirroring session more easily. The output of httrack --help gives a simple command for continuing an interrupted mirroring session:
    example: httrack --continue
    continues a mirror in the current folder
    When I try httrack --continue in my HTTrack project folder, all I get is output like this:
    Example: -%F "<!-- Mirrored from %s by HTTrack Website Copier/3.x [XR&CO'2010], %s -->"
    * Option %F needs to be followed by a blank space, and a footer string
    With each parameter on a new line for readability, the first line of my doit.log file looks like this:
    -qiC1%P0s0b0u1j0%s%u0N0%I0p1DaK0c1T30H0%kf2E1800A25000%c0.1%f#f
    -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
    -%F ""
    -%l "en, en, *"
    http://saa.gov.uk/search.php?SEARCHED=1&SEARCH_TABLE=council_tax&SEARCH_TERM=City+of+Edinburgh&DISPLAY_COUNT=100
    -O1 "C:\\Users\\Iain\\Projects\\Council Tax Analysis\\Code\\HTTrack\\Council Tax Valuation List"
    -* \
    +*search.php?SEARCHED=1*
    -*DISPLAY_MODE=FULL*
    The parameter %F "" should tell HTTrack to use an empty footer. I used the WinHTTrack interface to create the project and start the mirroring session. I can interrupt and continue the mirroring session using the interface. The HTML files saved by WinHTTrack have no footer.
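    For reference, the smallest scriptable version of the resume is just two Command Prompt lines run from the project folder that holds doit.log and hts-cache (the path below is taken from the -O1 option above and is an assumption about where the project lives):
    cd /d "C:\Users\Iain\Projects\Council Tax Analysis\Code\HTTrack\Council Tax Valuation List"
    httrack --continue
    That pair of lines can be saved as a .cmd file and pointed at by Task Scheduler; the %F parsing error shown above is what still needs solving before it runs cleanly.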

    Read the article

  • Cannot configure hostname, keeps changing after reboot on CentOS 6 + nginx [on hold]

    - by The Wolf
    I just finished this tutorial I found online: http://www.unixmen.com/install-lemp-nginx-with-mariadb-and-php-on-centos-6/ Now I am having trouble setting a hostname; you can see the result at http://www.intodns.com/busilak.com. Here are my configs:
    /etc/hosts
    127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
    # Auto-generated hostname. Please do not remove this comment.
    198.49.66.204 host.busilak.com busilak.com host
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    /etc/sysconfig/network
    NETWORKING="yes"
    GATEWAYDEV="venet0"
    NETWORKING_IPV6="yes"
    IPV6_DEFAULTDEV="venet0"
    HOSTNAME="host.busilak.com"
    /etc/nginx/conf.d/default.conf
    server {
        #listen 80;
        #server_name host.busilak.com;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 404 /404.html;
        location = /404.html {
            root /usr/share/nginx/html;
        }
    Question: Is there anything I should have done? I just want to use my domain busilak.com as the default domain for my server, so that when I open busilak.com it points straight to my VPS IP address.
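    A minimal sketch of making the hostname stick on CentOS 6 (the name and IP are the ones from the configs above; run as root):
    # Set the hostname for the running system
    hostname host.busilak.com
    # Keep it across reboots
    sed -i 's/^HOSTNAME=.*/HOSTNAME="host.busilak.com"/' /etc/sysconfig/network
    # Make sure the public IP resolves to the name locally
    grep -q host.busilak.com /etc/hosts || echo "198.49.66.204 host.busilak.com host" >> /etc/hosts
    If the value still reverts after a reboot, something else (for example the VPS provisioning scripts on an OpenVZ/venet host) is rewriting these files, which is worth checking before touching nginx.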

    Read the article

  • Is Ubuntu a bad distro for a standalone mysql database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ It says: "On the other end there are Debian and Ubuntu. Both use a tool called dpkg for package management. There isn’t a month that I log in to a system based on either distribution where there are no issues with package consistency. Unfinished installations and unresolved conflicts are so common that it’s just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left a system in such shape, you prepared for downtime, stopped MySQL and… error – a text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when the downtime clock ticks – annoying at best." We prefer Ubuntu Server because of familiarity, and Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to run one distro (e.g. Ubuntu) on the web server and another (say Red Hat) on the database server? Or is a homogeneous server pool the better choice?

    Read the article

  • Scheduled Task unable to create/update any files

    - by East of Nowhere
    I have several tasks in Task Scheduler on Windows Server 2008 SP2 (32-bit) and they all successfully "do their work", except that none of them can create or update any files. All the tasks point to simple .cmd files that do the real work, but beyond that there's no pattern: some call robocopy with the /LOG option, some call .exe files I wrote that manipulate XML files, some just use > redirection. With all of them, if I double-click the .cmd file myself, it works fine and the files are created or updated as expected. If I run it from Task Scheduler (on schedule or just by clicking Run), the task always completes "successfully" but without any of the desired changes to files. I don't see any "unable to create file" errors in Event Viewer either. The tasks all run as a specific account, but I have logged in as that account and verified that it has permission to do everything it needs to. Further details: the task is set to run whether the user is logged on or not, and is configured for "Windows Vista or Windows Server 2008" (no other "Configure for" option is available).

    Read the article

  • How does one debug Windows network share authentication?

    - by ajs410
    I have machine0 with 32-bit Vista, logged in as a domain user, running a VMWare image of 32-bit Vista, logged in as a local user, with the VM set to bridge the network. From an administrator account (called admin) within the VM, I try to access the hidden C$ share on machine0 (i.e. start - run - "\\machine0\C$\"). I get no prompts for credentials. Worse, machine0 has an admin account (different password), and machine0\admin gets locked out when VM\admin tries to access the network share. I get a message several seconds later, which feels like a cached credential failure leading to the lockout. I have checked several places for cached credentials; net use, Stored Usernames and Passwords, mapped shares. I rebooted (both machine0 and VM) to make sure the session was clear of any cached credentials. I can force net use to use my domain credentials when accessing machine0, and then I can see the share. I can also see shares that do not require credentials. I decided to try another machine on the network (machine1), 64-bit Vista, local user. This machine has no lockout policy, and after several seconds (feels like failed cached credentials again) it prompts me for credentials. After I enter them, it re-prompts me, saying "logon unsuccessful" (tried my domain credentials, and also machine1\admin's). Which is bogus, because I proceed to log on with remote desktop using the machine1\admin credentials. I have tried this on another machine (machine2, 64-bit Vista), running a copy of the same 32-bit VM, and I don't remember having this problem. machine0 has a fingerprint reader...could that try storing passwords and interfere? Are there any places I'm missing where there could be cached credentials? Is there a way to see what credentials are flying around when I try to connect?

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with flash games. To say the least, this isn't working for them. Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for openvpn on it? I don't like the idea of the Domain Controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself, when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • Win 7 Remote Desktop connection failure when already logged in.

    - by Andy E
    I have a bit of a strange problem, magnified recently by my broadband dropouts. I wasn't sure whether to post this on SU or SF, so I thought I'd start here as more users would be likely to know what the problem is. In short, when I try and connect to my server (Windows Server 2008) from my laptop running Windows 7, I can only connect if my remote account was previously logged out. If I'm still logged in I get the error message: Windows cannot connect to the remote server. No explanation or anything. If my IP address is the same, I don't have this problem. If I boot up Windows XP Mode and run XP's remote desktop connection it works just fine -- I think the difference there is it takes me to the remote server's logon screen. With Win 7 RDC you never see the logon screen, it asks you for credentials before entering full screen mode. The real problem is that I'm having random broadband dropouts and my IP isn't static. If I logon via Win XP RDC, log out and then run Win 7 RDC then it works fine. I realize I can just use Win XP's RDC for now, but I don't really like keeping XP Mode open if I can help it. Does anyone know a way around this problem? Maybe forcing Win 7 RDC to go to the logon screen, or changing some server-side settings to work around the IP address issue?

    Read the article

  • Logging Remote Server Access via Remote Desktop

    - by Nate Bross
    The objective here is to start a simple .NET application I've written which captures some environment variables (time, username, computername, etc) upon login. This .NET application subscribes to the Windows "User logout" event. Upon launch, the application captures the above variables, and creates a record in my database, upon logout (which I'm capturing) I update another field in the same record, with the logout time. The above is working exactly as I would like, when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record. Restrictions, the .NET binary should be able to live on a share point (\server\share\myapp\v1) so I can update the application to (\server\share\myapp\v2) and simply update the GPO/Logon script. My initial thought was to use the \domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach? My question is this: First, is there anything wrong with my idea above? Second, if so, what is the best way (through group policy or otherwise) to ensure this application launches whenever a session is started on a server?

    Read the article

  • SSH attack on CentOS Amazon EC2

    - by user37143
    I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security had been compromised (I had added hosts.allow and hosts.deny entries for ssh). So I created new instances, restricted ssh to our IPs only through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to another port. But two days back I found I was not able to log in to the server; when I tried port 22 the ssh connection succeeded, and I found that sshd_config had been changed. When I tried to edit sshd_config I found that root had no write permission on the file. I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and the shell history and found nothing informative. I have PHP, Ruby on Rails, Java and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances running CentOS (5.2, 5.4); my instances on Debian are working fine. Is this a CentOS/RightScale issue? What security measures should I take to prevent this? Please help, this is very critical. Thanks
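    One detail worth checking when even root cannot write a file: intruders (and some rootkits) set the ext2/ext3 immutable attribute, which chmod neither shows nor clears. A minimal check, using the file from the question:
    # An 'i' in the attribute list means the file is immutable, even for root
    lsattr /etc/ssh/sshd_config
    # Clear the flag so the file can be edited again
    chattr -i /etc/ssh/sshd_config
    If the flag was indeed set, treat the whole instance as compromised rather than just repairing the file.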

    Read the article

  • Securing phpmyadmin: non-standard port + https

    - by elect
    Trying to secure phpMyAdmin, we have already done the following: cookie auth login, firewalled off TCP port 3306, running on a non-standard port. Now we would like to implement HTTPS... but how can that work with phpMyAdmin already running on a non-standard port? This is the Apache config:
    # PHP MY ADMIN
    <VirtualHost *:$CUSTOMPORT>
        Alias /phpmyadmin /usr/share/phpmyadmin
        <Directory /usr/share/phpmyadmin>
            Options FollowSymLinks
            DirectoryIndex index.php
            <IfModule mod_php5.c>
                AddType application/x-httpd-php .php
                php_flag magic_quotes_gpc Off
                php_flag track_vars On
                php_flag register_globals Off
                php_value include_path .
            </IfModule>
        </Directory>
        # Disallow web access to directories that don't need it
        <Directory /usr/share/phpmyadmin/libraries>
            Order Deny,Allow
            Deny from All
        </Directory>
        <Directory /usr/share/phpmyadmin/setup/lib>
            Order Deny,Allow
            Deny from All
        </Directory>
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/phpmyadmin.log combined
    </VirtualHost>
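    HTTPS and the non-standard port are independent: TLS can be enabled on the same custom port. A minimal sketch for a Debian/Ubuntu-style Apache (the certificate paths are assumptions; the SSL directives go inside the existing <VirtualHost *:$CUSTOMPORT> block):
    # Enable mod_ssl once
    a2enmod ssl
    # Then add to the VirtualHost above:
    #   SSLEngine on
    #   SSLCertificateFile    /etc/ssl/certs/phpmyadmin.crt
    #   SSLCertificateKeyFile /etc/ssl/private/phpmyadmin.key
    service apache2 restart
    After that the panel is reached as https://host:$CUSTOMPORT/phpmyadmin instead of plain http.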

    Read the article

  • SSH password-less login to multiple machines when you already have one

    - by tandu
    I'm a little bit confused about setting up a password-less login for multiple machines to begin with, but I think I could do it from scratch. The problem is I already have it set up for one machine and I don't want that to be blown away when I try to set it up for the other machine. Let's clarify:
    Machine A: the machine I'm connecting from
    Machine B: the machine I'm connecting to. Password required
    Machine C: the machine I'm connecting to. Password-less ssh
    I have read some tutorials on setting up password-less ssh to a certain site, but they usually start with "move id_rsa out of the way so it doesn't get blown away," but then at the end of the tutorial it's not moved back. If I had no help at all, here is what I would do:
    Log into B
    ssh-keygen -t rsa -f ~/id_rsa.other
    scp id_rsa.other.pub A:~/.ssh
    echo "Host A \n Identity File ~/.ssh/id_rsa.other" > ~/.ssh/config
    (Note that I realize these commands may not be exactly correct, but this is just the idea). What I'm not quite clear on is if I need to update the config for A, B, or both. I'm fairly certain to do a password-less login from A to B, it is A that needs the public key .. but I also suppose I need B to use the correct id_rsa file for that public key. Finally, I don't want the password-less login for C to be affected at all .. it's using id_rsa. Am I going wrong anywhere?
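    For reference, a minimal sketch of the usual way to add a second key for B without disturbing the existing id_rsa used for C (all commands run on A; the key filename and the user name are illustrative):
    # Generate a separate key pair just for B
    ssh-keygen -t rsa -f ~/.ssh/id_rsa_b
    # Install the new public key into B's ~/.ssh/authorized_keys
    ssh-copy-id -i ~/.ssh/id_rsa_b.pub user@B
    # Tell the client on A to use that key only for B (~/.ssh/config on A)
    cat >> ~/.ssh/config <<'EOF'
    Host B
        IdentityFile ~/.ssh/id_rsa_b
    EOF
    The existing ~/.ssh/id_rsa and the setup for C are never touched; the configuration lives on A (the connecting side), while B only receives the public key.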

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job, and these working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and over the course of the backup slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually only back up one day's changed files, the job is normally small, finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they had me run RALUS in debug mode, I sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea, the working-set job in question has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any idea what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configuration. Our company has two email domains, xyz.com and xyz.biz. Each employee has an email address on one of these domains but not both, depending on the project they are working on. The problem we are facing is that when a communication email is sent from the Accounts, HR, Admin, etc. departments, they need to send the email twice: once through the xyz.com account to all employees with an email address on xyz.com, and once through xyz.biz to all employees with an email address on xyz.biz. I am not sure why they have to send two separate emails, but the IT team has directed all departments to do so as there is no other solution according to them. Even though two different groups have been created, sending an email to employees in an xyz.biz group from xyz.com does not seem to work. I want to know if Outlook provides a feature whereby we can configure some kind of rule so that an email sent through an address on xyz.com to all users on xyz.com also gets sent automatically to users on xyz.biz through an address on xyz.biz. The only technical detail I know is that we are using Exchange 2003, and the IT team claims that this is a limitation causing the problem. Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ whereas the employees in the other division use domain ABC to log into the Windows machine or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC but not the other way around.

    Read the article

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    How do I make Time Machine perform a new complete backup without losing older versions of backed-up files? Verbose: I am using Time Machine on OS X (Snow Leopard) to back up the whole computer to an external drive. I especially like the "history", i.e. the feature that allows you to restore an older version of a file. Problem: I had some data corruption on my external backup drive; I repaired it with the disk repair tool, which found and fixed some faults. After that, the external drive was OK and I could use Time Machine again, and I let Time Machine do one more backup. Then I ran a verification according to http://superuser.com/questions/47628/verifying-time-machine-backups, namely sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log However, after running the command above, several differences and missing files were reported, approximately 200 files in total. While some of the missing files were caches or excluded directories, the differences do bother me, especially as some of my important documents are listed as differing. How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine do a complete new backup without losing the history? Or to have Time Machine compare all files for differences and re-copy all files that are different? Or can I set some flag on the files that do not match to have them copied again (like the archive flag in Windows/DOS)? I'd rather not touch the files because I would like to keep the date of last change/date of creation. Thank you for your thoughts!

    Read the article

  • Issue with SSH on Ubuntu - local connection OK, remote connection fails - is it me or my ISP?

    - by Benjamin
    I have an issue with a server running Ubuntu 12.04. I am trying to set up a remote connection so I can access the server at my work from out of town. I have installed the SSH server and all that, and I have moved the default port from 22 to 3399. A local connection from any OS can connect on the 192.168... address, but there is no way I can get a connection on the actual public IP address. I believe my configuration is correct, and I am attaching it; if I have done something wrong in the config, please tell me and I will change it. I honestly think that the router my ISP provided is horrible, and although the port for ssh is forwarded, it might be blocking inbound traffic. Is there anything I can try to verify this? /var/log/auth does not show any error when I connect via our static IP. All values not commented out in sshd_config are included below:
    Port 3399
    ListenAddress 0.0.0.0
    Protocol 2
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_dsa_key
    HostKey /etc/ssh/ssh_host_ecdsa_key
    UsePrivilegeSeparation yes
    KeyRegenerationInterval 3600
    ServerKeyBits 768
    SyslogFacility AUTH
    LogLevel INFO
    LoginGraceTime 120
    PermitRootLogin yes
    StrictModes yes
    UseDNS no
    RSAAuthentication yes
    IgnoreRhosts yes
    RhostsRSAAuthentication no
    HostbasedAuthentication no
    PermitEmptyPasswords no
    ChallengeResponseAuthentication no
    PasswordAuthentication yes
    GSSAPIAuthentication no
    X11Forwarding yes
    X11DisplayOffset 10
    PrintMotd no
    PrintLastLog yes
    TCPKeepAlive yes
    AcceptEnv LANG LC_*
    Subsystem sftp /usr/lib/openssh/sftp-server
    UsePAM yes
    Am I doing this wrong? [port forwarding image]
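    A quick way to tell a server problem from a router/ISP problem (the placeholder public IP is an assumption; run the first and last commands on the server, the middle one from a machine outside the LAN):
    # Confirm sshd is actually listening on the new port
    netstat -tlnp | grep 3399
    # From outside (e.g. a tethered phone connection), test whether the port is reachable at all
    nc -zv <public-ip> 3399
    # Watch the auth log while the remote connection attempt is made
    tail -f /var/log/auth.log
    If the port is unreachable from outside while nothing ever appears in auth.log, the traffic is being dropped before it reaches the server, which points at the router or at the ISP blocking the port.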

    Read the article

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of open space, and there was plenty of available memory and swap space. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues. But SHOW PROCESSLIST only showed my own connection, no others. Worst of all, in the MySQL log, no errors even remotely coincided with the unavailability of the server. On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed that one (in case it was holding too many open connections or something), but it didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again. What else should I check in these circumstances? Any idea what the problem might be? And how might I verify that is in fact the problem?
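    A few things worth capturing the next time it happens, before restarting mysqld (a sketch using standard MySQL status variables and the default port):
    # Is mysqld still listening on the network, or only on the local socket?
    netstat -tlnp | grep 3306
    # Compare the current connection count against the configured limit
    mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"
    # Aborted/failed connection attempts accumulating can also point at the cause
    mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted%';"
    If Threads_connected sits at max_connections while the local PROCESSLIST looks empty, something is exhausting connections without showing up as running queries; if mysqld is not listening on 3306 at all, the problem is at the network/bind level rather than inside MySQL.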

    Read the article

  • Uninstall GlassFish and Metro completely

    - by user775829
    I decided to update my GlassFish server from 2.1 to 3.1.1 on a Linux machine and downloaded the .ZIP package. However, while uninstalling GlassFish v2.1 I did not find the uninstall.sh file in the "bin" directory. Here are the steps I took: I removed the glassfish folder (rm -rf ...). At the end of removing the files it warned me that it could not remove 2 files used by Metro. I can't recall those file names, but I manually deleted that folder. My mistake was not uninstalling Metro first; I uninstalled Metro completely afterwards, but it seemed pointless (it did uninstall successfully :P). I then transferred the GlassFish 3.1.1 ZIP file, unzipped it and configured it. The problems I am now facing: I cannot deploy any of my WAR files; it gives errors such as "Error creating bean, Instantiation of bean failed etc etc." (the same WAR file deploys successfully on another Linux machine). When I try installing Metro v2.1 separately, it does not show the admin console, or it times out while starting the domain. The domain's log file says it has started the domain successfully and the process is also created, but the asadmin command takes forever and times out without showing "Domain Started Successfully". There is no uninstall.sh in the GlassFish v3.1.1 bin directory either. How do I completely uninstall GlassFish v3.1.1 and Metro 2.1? What files will I have to remove manually?
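    A minimal sketch for hunting down leftovers from the old installs before reinstalling (the patterns are illustrative; review the output before deleting anything):
    # Find stray GlassFish/Metro files outside the fresh unzip location
    find / -xdev \( -iname '*glassfish*' -o -iname '*metro*' \) 2>/dev/null
    # Old init scripts and per-user asadmin files are common leftovers
    ls /etc/init.d/ | grep -i glassfish
    ls -a ~ | grep -i asadmin
    Since the ZIP distribution of GlassFish 3.1.1 has no installer, "uninstalling" it amounts to stopping the domains and deleting the extracted directory plus whatever the find command above turns up.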

    Read the article

  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things I'm in charge of a Debian GNU/Linux (Wheezy) DomU for the mail services of the company I work for. Yesterday one of the HDDs used for this particular server died. After reinstalling Debian, Dovecot has decided to no longer listen on any ports (checked with netstat -l). Other services (like Postfix and MySQL) work without problems. dovecot -n:
    # 2.1.7: /etc/dovecot/dovecot.conf
    # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3
    auth_mechanisms = plain login
    disable_plaintext_auth = no
    first_valid_uid = 150
    last_valid_uid = 150
    mail_gid = mail
    mail_location = maildir:/var/vmail/%d/%n
    mail_uid = vmail
    namespace inbox {
      inbox = yes
      location =
      prefix =
    }
    passdb {
      args = /etc/dovecot/dovecot-sql.conf.ext
      driver = sql
    }
    plugin {
      sieve = ~/.dovecot.sieve
      sieve_dir = ~/sieve
    }
    service auth {
      unix_listener /var/spool/postfix/private/auth {
        group = postfix
        mode = 0660
        user = postfix
      }
      unix_listener auth-userdb {
        group = mail
        mode = 0666
        user = vmail
      }
    }
    service imap-login {
      inet_listener imaps {
        port = 993
        ssl = yes
      }
    }
    service pop3-login {
      inet_listener pop3s {
        port = 995
        ssl = yes
      }
    }
    ssl_cert = </etc/ssl/private/mail.crt
    ssl_key = </etc/ssl/private/mail.key
    userdb {
      args = /etc/dovecot/dovecot-sql.conf.ext
      driver = sql
    }
    protocol imap {
      mail_max_userip_connections = 25
    }
    UID 150 is vmail (I double-checked file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for:
    Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled)
    Any ideas?
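    Some quick checks that narrow down why nothing is bound (standard Dovecot 2.x / Linux tooling; the port list is just the IMAP/POP3 defaults):
    # Confirm nothing is listening on the mail ports
    netstat -tlnp | egrep ':(143|993|110|995) '
    # Ask Dovecot for the errors it logged most recently
    doveadm log errors
    # Run the master process in the foreground so startup failures print straight to the terminal
    dovecot -F
    Given that only the imaps/pop3s listeners are declared in the output above, it is also worth comparing the protocols setting (doveconf protocols) against the old box, in case the reinstall left a protocols line that disables imap/pop3 entirely.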

    Read the article

  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail. I'll call this "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive I have available is a 500 GB WD hard drive, which I'll call "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB is allocated to a Windows partition. Both are formatted using NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using ddrescue, but I've only done it with hard drives of the same size. For reference, I'd normally use the command "ddrescue -v -r 3 /dev/sdb /dev/sdc example.log". I'd like to know if it's possible to do this with ddrescue. I've read the manual from GNU (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and I haven't seen anything indicating that it is possible. I'm just looking for some confirmation that this is a correct impression. If it's not possible, it would be helpful if any of y'all could make some workaround suggestions, but please don't feel obligated to do that. I don't want my one thread bogged down with too many questions.
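    For what it's worth, a plain whole-disk ddrescue cannot work here, because the 700 GB source simply does not fit on the 500 GB target; the usual workaround is to shrink the NTFS filesystem first and then rescue partition by partition. A rough sketch, assuming /dev/sdb1 is the recovery partition, /dev/sdb2 the Windows partition, and matching (smaller) partitions have already been created on /dev/sdc; the 450G figure is illustrative, and resizing a failing filesystem carries its own risk:
    # Shrink the NTFS filesystem so its data fits the smaller target
    ntfsresize --size 450G /dev/sdb2
    # Rescue each partition individually, keeping separate map/log files
    ddrescue -v -r 3 /dev/sdb1 /dev/sdc1 recovery.log
    ddrescue -v -r 3 /dev/sdb2 /dev/sdc2 windows.log
    If the drive is too far gone to survive a resize, copying only the used data at the file level (or rescuing to an image file on a bigger disk elsewhere) is the safer fallback.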

    Read the article

  • Chrome Lockups Windows 7 64-bit

    - by Mike Chess
    I'm running Google Chrome (6.0.427.0 dev) on Windows 7 Home Premium 64-bit (AMD Phenom 3.00 GHz, 8 GB RAM). The computer locks up hard after running Chrome for about five minutes. The lockup happens whether Chrome is actually being used to browse web sites or is just idling. No programs can be started or interacted with when this happens; the computer must be power-cycled to recover. The lockup happens regardless of which web sites are being browsed. The system event logs do not show any events around the time of the lockup. All other applications run just fine on this system. In fact, Chrome ran without issue for several months on this system (the system was brand new in 03-2010). I also run the same version of Chrome on other computers (Windows XP SP3) without issue. I've come to really like Chrome and use it as my default browser whenever possible. What could be causing Chrome to lock up the system as it does? Does Chrome have any logs that aren't part of the Windows event log? Does Chrome have a debug command-line switch that might reveal more about what happens?
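    On the last question: Chrome does accept verbose-logging switches, which write chrome_debug.log into the user data directory; a sketch (the install path is an assumption, the flags are standard Chromium ones):
    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --enable-logging --v=1
    For a hard freeze of the whole machine, though, the log may stop before anything useful is written, so it is worth pairing this with hardware-level checks (GPU driver, memory test), since a user-mode browser should not normally be able to lock up Windows by itself.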

    Read the article

  • How to correctly set up home directories and permissions on a mounted partition

    - by user36505
    I'm setting up a Fedora 12 server. I have a root (/) partition where the boot (/boot) partition is mounted and then a separate partition (/files) for separating home directories and shares away from the other partitions. The filesystem mounts fine and users can be created to have home directories in /files/home/[user] just fine. However, when I log in as one of those users, I get an error saying "Cannot chdir in to /files/home/[user]: permission denied". If I create a user under the default /home using the same process, everything works fine. The same goes for when I try and browse a share in windows; I can see the shares, but cannot access them. The permissions and owners on /files and /files/home are the same as /home. When the user is created, the user directory owner and permissions are also the same. How can I set the /files partition up so that it can be used as a home directory and for samba sharing rather than using the root (/) partition? Thanks.
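    Because this is Fedora, the difference between /home and /files/home is very often the SELinux label rather than the classic permissions. A quick way to confirm or rule that out (all standard tools; re-enable enforcing mode after the test):
    # Compare labels on the working and non-working locations
    ls -Zd /home /files/home
    # Look for recent AVC denials mentioning the new path
    grep denied /var/log/audit/audit.log | tail
    # Temporarily switch to permissive mode and retry the login
    setenforce 0
    If logins and Samba access work in permissive mode, relabel /files/home to match /home (semanage fcontext plus restorecon) and then run setenforce 1 again; if they still fail, the problem really is ownership/permissions on /files or /files/home.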

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size, no matter how far I raise the value (I have tried up to 512MB). On the other hand it warns that: "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:
    >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
    >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
    >> Run with '--help' for additional options and output filtering
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in InnoDB tables: 6G (Tables: 195)
    [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
    [!!] Total fragmented tables: 51
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
    [--] Reads / Writes: 89% / 11%
    [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
    [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
    [OK] Slow queries: 0% (2K/254M)
    [OK] Highest usage of available connections: 32% (391/1200)
    [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
    [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
    [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
    [!!] Query cache prunes per day: 1033203
    [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
    [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
    [OK] Thread cache hit rate: 99% (676 created / 5M connections)
    [OK] Table cache hit rate: 22% (1K open / 8K opened)
    [OK] Open file limit used: 0% (49/13K)
    [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
    [OK] InnoDB data size / buffer pool: 6.1G/19.5G
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Increasing the query_cache size over 128M may reduce performance
    Variables to adjust:
      *** MySQL's maximum memory usage is dangerously high ***
      *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 192M) [see warning above]
    The server has 76GB of RAM and dual E5-2650 CPUs, and the load is usually below 2. I'd appreciate your hints on how to interpret the recommendation and optimize the database config.
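    Rather than following the tuner blindly, the raw counters show whether the cache is actually helping; a minimal check (standard status/variable names):
    # Many Qcache_lowmem_prunes relative to Qcache_hits means the cache is churning
    mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'query_cache%';"
    With 79.9% hit efficiency but over a million prunes per day, entries are being evicted for lack of space faster than they can be reused; whether a bigger cache helps depends on whether the working set ever fits, and watching these counters after a change answers that more reliably than the tuner's one-line recommendation.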

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. No support is provided for that product by the vendor anymore. Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.
    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".
    The analysis:
    MySQL contains a description truncated at the € sign, so it's not PHP's fault.
    The Access databases contain the full description with the € sign, so it's not the fault of the webmaster writing a bad description or of eDisplay cutting it.
    The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign.
    The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    The vsftpd log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c, which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    [Add] Manually uploading the SQL file via SFTP also shows the euro sign mangled.
    [Add2] Manually uploading with the Xftp client in explicit ASCII mode doesn't fix it either.
    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.
    The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.
    The question: without asking the customer to manually upload files using FileZilla or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
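    Since SFTP (which never alters bytes) shows the same corruption, it is worth confirming what encoding the dump actually has on each side before blaming the transfer mode; a minimal sketch (the file path is the one from the vsftpd log, and the source charset in the iconv line is an assumption to adjust after checking):
    # What encoding does the uploaded dump appear to be, as seen on the server?
    file -i /srv/www/domains/xxxxxx.it/htdocs/db.sql
    # Count euro signs stored as UTF-8 (bytes e2 82 ac); zero hits suggests a single-byte encoding
    grep -c $'\xe2\x82\xac' /srv/www/domains/xxxxxx.it/htdocs/db.sql
    # If the dump turns out to be Windows-1252 while MySQL expects UTF-8, convert it server-side
    iconv -f WINDOWS-1252 -t UTF-8 db.sql > db.utf8.sql
    A mismatch between the dump's character set and the charset declared when the SQL script is executed would truncate strings at the first invalid byte, which matches the "cut before the euro" symptom better than an ASCII/binary FTP issue does.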

    Read the article

< Previous Page | 642 643 644 645 646 647 648 649 650 651 652 653  | Next Page >