Search Results

Search found 26742 results on 1070 pages for 'linux kernel'.

Page 379 of 1070

  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora.

      # cat /etc/redhat-release
      Fedora release 14 (Laughlin)

    Installed offlineimap from yum, cuz I'm lazy these days.

      # yum info offlineimap
      Loaded plugins: langpacks, presto, refresh-packagekit
      Adding en_US to language list
      Installed Packages
      Name        : offlineimap
      Arch        : noarch
      Version     : 6.2.0
      Release     : 2.fc14
      Size        : 611 k
      Repo        : installed
      From repo   : fedora
      Summary     : Powerful IMAP/Maildir synchronization and reader support
      URL         : http://software.complete.org/offlineimap/
      License     : GPLv2+
      Description : OfflineIMAP is a tool to simplify your e-mail reading. With
                  : OfflineIMAP, you can read the same mailbox from multiple
                  : computers. You get a current copy of your messages on each
                  : computer, and changes you make one place will be visible on all
                  : other systems. For instance, you can delete a message on your home
                  : computer, and it will appear deleted on your work computer as
                  : well. OfflineIMAP is also useful if you want to use a mail reader
                  : that does not have IMAP support, has poor IMAP support, or does
                  : not provide disconnected operation.

    And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc.

      [general]
      ui = TTY.TTYUI
      accounts = Personal, Work
      maxsyncaccounts = 3

      [Account Personal]
      localrepository = Local.Personal
      remoterepository = Remote.Personal

      [Account Work]
      localrepository = Local.Work
      remoterepository = Remote.Work

      [Repository Local.Personal]
      type = Maildir
      localfolders = ~/mail/gmail

      [Repository Local.Work]
      type = Maildir
      localfolders = ~/mail/companymail

      [Repository Remote.Personal]
      type = IMAP
      remotehost = imap.gmail.com
      remoteuser = [email protected]
      remotepass = password
      ssl = yes
      maxconnections = 4
      # Otherwise "deleting" a message will just remove any labels and
      # retain the message in the All Mail folder.
      realdelete = no

      [Repository Remote.Work]
      type = IMAP
      remotehost = server.company.tld
      remoteuser = username
      remotepass = password
      ssl = yes
      maxconnections = 4

    I have tried TTY.TTYUI, NonInteractive.Quiet, and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems.

      $ crontab -l
      */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1
      */5 * * * * offlineimap

    I always get the same damn error: ERROR: No UIs were found usable!. What am I doing wrong!?
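
    A minimal sketch of a cron-friendly entry, on the assumption that the error comes from TTY.TTYUI needing a terminal that cron does not provide; the -u flag overrides the ui setting in .offlineimaprc (UI names and their exact capitalization vary between offlineimap versions, so check offlineimap --help on 6.2.0):

      # run once (-o) with a UI that does not need a tty; full path because
      # cron's PATH is minimal
      */5 * * * * /usr/bin/offlineimap -o -u Noninteractive.Basic >> $HOME/mail/logs/offlineimap.log 2>&1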

    Read the article

  • manual install perl-5.10.1-3.tar.bz2 on cygwin

    - by Jason
    Hi, I have the binaries perl-5.10.1-3.tar.bz2, and I don't have access (admin privileges) to run setup.exe for Cygwin. My question is: how can I manually install the Perl package perl-5.10.1-3.tar.bz2 inside Cygwin? (Or maybe I can get the sources and compile? Can someone please tell me how?) Thanks
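
    A minimal sketch of the manual route, assuming perl-5.10.1-3.tar.bz2 is a Cygwin binary package archive (these contain paths like usr/bin/... relative to the Cygwin root, so they unpack in place; the download location below is a placeholder):

      # from a Cygwin shell; no admin rights needed for files in a Cygwin
      # tree you own
      cd /
      tar -xjf /cygdrive/c/Users/you/Downloads/perl-5.10.1-3.tar.bz2
      # sanity check
      perl -v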

    Read the article

  • Is it reasonable that a random disk seek & read costs ~16ms?

    - by fzhang
    I am frustrated by the latency of random reads from a non-SSD disk. Based on the results of the following test program, it takes ~16 ms for a random read of just 512 bytes without help from the OS cache. I tried changing 512 to larger values, such as 25k, and the latency did not increase much. I guess that is because the disk seek dominates the time. I understand that random reading is inherently slow, but I just want to be sure that ~16 ms is reasonable, even for a non-SSD disk.

      #include <sys/stat.h>
      #include <sys/time.h>
      #include <sys/types.h>
      #include <sys/unistd.h>
      #include <fcntl.h>
      #include <limits.h>
      #include <stdio.h>
      #include <string.h>

      int main(int argc, char** argv) {
          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) {
              fprintf(stderr, "Failed open %s\n", argv[1]);
              return -1;
          }
          enum { count = 512 };            /* a true constant, so buffer is not a VLA */
          const off_t offset = 25990611 / 2;
          char buffer[count] = { '\0' };

          struct timeval start_time;
          gettimeofday(&start_time, NULL);

          off_t ret = lseek(fd, offset, SEEK_SET);
          if (ret != offset) {
              perror("lseek error");
              close(fd);
              return -1;
          }
          ssize_t n = read(fd, buffer, count);
          if (n != count) {
              fprintf(stderr, "Failed reading all: %zd\n", n);
              close(fd);
              return -1;
          }

          struct timeval end_time;
          gettimeofday(&end_time, NULL);
          /* combine the fields so a negative tv_usec delta cannot appear */
          long elapsed_us = (end_time.tv_sec - start_time.tv_sec) * 1000000L
                          + (end_time.tv_usec - start_time.tv_usec);
          printf("elapsed: %ld us\n", elapsed_us);

          close(fd);
          return 0;
      }
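
    For context, a back-of-the-envelope figure, assuming a typical 7,200 RPM drive (the question does not say which disk): average seek is roughly 8-9 ms, and average rotational latency is half a revolution, (60 / 7200) / 2 ≈ 4.2 ms, so one random read costs about 12-14 ms before transfer time; ~16 ms is in that ballpark.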

    Read the article

  • Multithread http downloader with webui [closed]

    - by kiler129
    I'm looking for software similar to JDownloader or pyLoad. JD is pretty good, but it is heavy Java and for now has a very weak web interface. pyLoad is awesome and includes a simple but powerful web UI, but downloading 10 files (10 threads each, so 100 connections in total, running at around 8 MB/s altogether) consumes a lot of CPU - a whole core for me. Do you know any lightweight alternatives? aria2c is good for the console, but I failed to find any good web UI for it; the official one is good, but after adding more files it almost crashes Chrome :)
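
    For what it's worth, a minimal sketch of running aria2c headless so that any of its web frontends can attach over JSON-RPC (the token value is a placeholder, and --rpc-secret only exists in newer aria2 releases):

      # aria2c has a built-in RPC server; web UIs talk to it over JSON-RPC
      aria2c --enable-rpc --rpc-listen-all \
             --rpc-secret=changeme \
             --max-concurrent-downloads=10 \
             --max-connection-per-server=10 -D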

    Read the article

  • phpmyadmin port change?

    - by Rajat
    How do I change my default phpMyAdmin port to 443 or 9999? Is it possible, or do I have to use port 80 only? If it is possible, how do I set it up? Apache is listening on port 9999 for sure. However, going to the URL http://<webserver>:9999/phpmyadmin/ gives the following error (with Firefox):

      An error occurred during a connection to webserver:9999.
      SSL received a record that exceeded the maximum permissible length.
      (Error code: ssl_error_rx_record_too_long)

    Anyone have any clue what is going on?
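
    For reference, ssl_error_rx_record_too_long usually means the browser spoke HTTPS to a port serving plain HTTP (or the reverse). A minimal sketch of serving phpMyAdmin over plain HTTP on 9999, assuming an RPM-style install under /usr/share/phpMyAdmin (paths and the 2.4-style access directive are assumptions; 443 would additionally require mod_ssl and a certificate):

      Listen 9999
      <VirtualHost *:9999>
          Alias /phpmyadmin /usr/share/phpMyAdmin
          <Directory /usr/share/phpMyAdmin>
              Require all granted   # Apache 2.4; on 2.2 use Order allow,deny + Allow from all
          </Directory>
      </VirtualHost>

    Then reach it at http://<webserver>:9999/phpmyadmin/ (https:// only if the vhost is SSL-enabled).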

    Read the article

  • Manage song metadata on CentOS from command line

    - by Puddingfox
    I am making a simple Pandora.com alternative for myself and a few friends where the user can upload his/her songs and listen to them anywhere. My intent is to make a lightweight, simple player in HTML5, so all the user needs is a current Firefox or Chrome to use it. I have set it up so that all uploaded songs get converted to .ogg and added to a database, but I also want some metadata (not sure if that is the correct term) for the songs to be stored in the database so the player can tell the user what song he/she is listening to. I know there are several GUI tools for managing the title/artist/album info for songs, but I'm having trouble finding any good ones I can use from the command line. If the song has the information already in the file, I think I can use mplayer to retrieve the information, but it would be really great if there was one that would look up the song information online. I don't mind interfacing with an API (that would be pretty interesting, actually). Do you guys have any suggestions?
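
    A minimal sketch of tag handling from the shell, assuming vorbis-tools is available (its vorbiscomment utility reads and writes Ogg Vorbis tags; the file name is a placeholder):

      # list existing tags
      vorbiscomment -l song.ogg
      # replace tags (-w overwrites all existing comments)
      vorbiscomment -w -t "TITLE=Some Song" -t "ARTIST=Some Artist" song.ogg

    For online lookup there is no single obvious CLI tool; MusicBrainz exposes a web-service API that fingerprint-based taggers use, which may be the interfacing-with-an-API route mentioned above.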

    Read the article

  • Removing DS_Store files and variants?

    - by Ron Gejman
    Hi, I am running an Ubuntu 10.04.1 LTS server. Frequently I open files from my Mac over AFP. Inevitably this creates .DS_Store files on the server (although for some reason they are named :2eDS_Store). However, it also creates variants of DS_Store files. These variants are often named similarly to other files in that directory. E.g.:

      ~$ ls
      total 60K
      -rw-r--r-- 1 tarakhovsky  16K 2010-11-30 18:28 :2eDS_Store
      drwx--S--- 4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/
      lrwxrwxrwx 1 tarakhovsky   15 2010-10-19 17:44 bigdisk -> /media/bigdisk//
      ...
      drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/
      drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/
      ...

    I've disabled creation of DS_Store files using:

      defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    so hopefully this won't continue to occur - but I really want to get rid of all of the existing variants of DS_Store files already on the server. Any ideas as to why these variants are being created and how I can get rid of them all?
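
    A minimal sketch of a cleanup pass, assuming the share lives under /home and that the :2e prefix is the AFP server hex-encoding a leading dot (netatalk's CAP-style name encoding); review the match list before deleting:

      # see what would go
      find /home \( -name ':2eDS_Store' -o -name '.DS_Store' \) -print
      # then remove them
      find /home \( -name ':2eDS_Store' -o -name '.DS_Store' \) -delete

    The variants named after neighboring files may be AppleDouble resource-fork siblings rather than DS_Store files proper, so it is worth checking a few by hand before widening the pattern.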

    Read the article

  • How to diagnose causes of oom-killer killing processes

    - by dunxd
    I have a small virtual private server running CentOS and www/mail/db duties, which has recently had a couple of incidents where the web server and ssh became unresponsive. Looking at the logs, I saw that oom-killer had killed these processes, possibly due to running out of memory and swap. Can anyone give me some pointers on how to diagnose what may have caused the most recent incident? Is the first process killed likely to be the culprit? Where else should I be looking?
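
    A minimal sketch of a first pass, assuming stock CentOS log locations: the kernel logs each kill together with a dump of every process's memory use at that moment, which usually identifies the real consumer (the first victim is chosen by a heuristic score, not necessarily because it caused the shortage):

      # oom-killer reports end up in the kernel log
      grep -i -B5 -A20 'oom-killer' /var/log/messages
      dmesg | grep -i 'out of memory'
      # ongoing memory visibility over time; requires the sysstat package
      sar -r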

    Read the article

  • Rsync backup - detect new directory and backup only from that directory

    - by Pracovek
    The new cPanel daily backup creates a separate directory for each day's backup. This is a problem when I try to use rsync for an offsite backup, since I would like to rsync only the latest data. E.g. on the backup server I have a directory "backup", and on the server from which we are pulling backups I get directories 2013-11-07, 2013-11-08, etc. in the backup directory. If I rsync the whole /backup directory on the server, it will use a lot more space, so I would like to back up only the latest directory in the backup directory, e.g. 2013-11-08. Is there a way to detect the latest directory in the backup directory and pass that directory name to rsync for backup?
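
    A minimal sketch of that detection, assuming the directory names stay in YYYY-MM-DD form (so lexical sort order is also chronological order; host and paths are placeholders):

      #!/bin/bash
      # newest backup directory on the remote cPanel server
      latest=$(ssh user@cpanel-server 'ls -1d /backup/????-??-?? | sort | tail -n 1')
      # pull only that one into a fixed local location
      rsync -a --delete "user@cpanel-server:${latest}/" /backup/latest/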

    Read the article

  • How to upgrade XBMC Live from 9.04.1 to 9.11?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive. Every time it fails at the Install System step. But I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what are the next commands to run?
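
    A hedged sketch only, since XBMC Live is an Ubuntu-based appliance and upgrade paths differ between releases; from the Ctrl+Shift+F2 console, the package-manager route would look roughly like:

      sudo apt-get update
      sudo apt-get upgrade          # pulls newer xbmc packages if a repo carries them
      # jumping the underlying Ubuntu release (9.04-based to 9.10-based)
      # would need update-manager-core:
      sudo do-release-upgrade

    Whether 9.11 packages were published for in-place upgrades is worth verifying against the XBMC wiki before trying this on a working install.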

    Read the article

  • Upgraded from fc10 to fc12 now I have eth0_rename, how do I get back to plain old eth0?

    - by shank
    I upgraded from Fedora 10 to Fedora 12. Unfortunately, my ethernet interface eth0 is now named eth0_rename. I'd like to get back to having it named plain old eth0. I googled a bit but the solution of removing the eth0 entry from /etc/udev/rules.d/70-persistent-net.rules seems to have no effect (I restarted the network service but didn't reboot). The interface works just fine although I could see a script or two having a problem with the format. So, it's more of an inconvenience thing than anything else. Any ideas? Thanks.
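
    A minimal sketch of the usual fix, assuming the rename comes from a stale MAC-to-name mapping in the persistent-net rules (udev renames interfaces at device-add time, so editing the file shows no effect until the rule is re-applied, i.e. after a reboot or a driver reload; the MAC below is a placeholder):

      # /etc/udev/rules.d/70-persistent-net.rules - keep exactly one rule
      # for the NIC's MAC, pointing at the name you want:
      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

      # then reboot, or reload the rules before re-plugging the driver:
      udevadm control --reload-rules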

    Read the article

  • bash script with permanent ssh connection

    - by samuelf
    I use a bash script which runs

      /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 [email protected]

    However, when I run the bash script, it waits... I see the connection coming up, but the script doesn't exit. It's like it's waiting for the SSH process to finish, because when I manually kill it, the bash script finishes as well. Any ideas how to resolve this?

    UPDATE: I have cronned this script, and the cron process is the one that becomes a zombie; the actual script runs just fine, sorry about that. With ps -auxf I get:

      root       597  0.0  0.7   2372   912 ?   Ss   Jul12   0:00 cron
      root      2595  0.0  0.8   2552  1064 ?   S    02:09   0:00  \_ CRON
      1001      2597  0.0  0.0      0     0 ?   Zs   02:09   0:00      \_ [sh] <defunct>
      1001      2603  0.0  0.0      0     0 ?   Z    02:09   0:00          \_ [cron] <defunct>

    and when I kill the ssh, the defuncts disappear. Why would they become defunct?
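
    A minimal sketch of the usual remedy, on the assumption that the hang comes from ssh -f keeping the inherited stdout/stderr pipes open, so cron (and the parent shell) keep waiting on them; detaching all three streams lets the parent exit cleanly (user and host are placeholders):

      /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 user@remotehost \
          < /dev/null > /dev/null 2>&1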

    Read the article

  • Configuring Apache for multiple clients

    - by Chris_K
    Last week I had a question here about suexec / suphp but I tried to accomplish too much. I'm going to narrow the scope a bit and try again. I'd like to configure a LAMP server to host multiple clients. I'd like it to seem (from the client's viewpoint) just like any other shared hosting environment. Web sites in their home directory, no need to muck around with file ownerships to get pages served, etc. It would seem that a configuration that involves suexec and suphp is the way to go(?) I'm specifically looking for a current/modern guide on how to accomplish this (I'll be using CentOS if it matters) and I'm afraid I need more than a link to Apache docs. Are there any good How-To's out there? The few I've found have been pretty out of date, but it is quite possible my search was weak.
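
    Not a guide, but a minimal sketch of the shape such a setup takes per client with mod_suphp (directive names follow mod_suphp's docs; user names and paths are placeholders, the exact handler type depends on how suphp was built, and suexec matters mainly for CGI):

      <VirtualHost *:80>
          ServerName client1.example.com
          DocumentRoot /home/client1/public_html

          suPHP_Engine on
          suPHP_UserGroup client1 client1
          AddHandler application/x-httpd-suphp .php
          suPHP_AddHandler application/x-httpd-suphp

          SuexecUserGroup client1 client1
      </VirtualHost>

    Each client then owns their own tree under /home, and PHP runs as that user, which is what gives the no-chown shared-hosting feel.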

    Read the article

  • Using curl -s in *nix command line not working for some reason

    - by JM4
    I am trying to install Composer (though to be honest I really have no idea how it fully works, and documentation seems to be quite poor) on my MediaTemple DV machine. I am using their [instructions][1], trying to install globally using:

      $ curl -s https://getcomposer.org/installer | php

    My command line (again, using PuTTY and logged into my server as root) thinks for a second, then sets up for the next prompt. I run a simple ls -l to check for the file it should have downloaded, with no luck. Any idea what could be causing the issue? I have tested and do in fact have curl installed.

    UPDATE 1: Based on the first answer, the verbose response is:

      $ curl -vs https://getcomposer.org/installer | php
      * About to connect() to getcomposer.org port 443
      *   Trying 37.59.4.156... connected
      * Connected to getcomposer.org (37.59.4.156) port 443
      * successfully set certificate verify locations:
      *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
          CApath: none
      * SSLv2, Client hello (1):
      * SSLv3, TLS handshake, Server hello (2):
      * SSLv3, TLS handshake, CERT (11):
      * SSLv3, TLS handshake, Server key exchange (12):
      * SSLv3, TLS handshake, Server finished (14):
      * SSLv3, TLS handshake, Client key exchange (16):
      * SSLv3, TLS change cipher, Client hello (1):
      * SSLv3, TLS handshake, Finished (20):
      * SSLv3, TLS change cipher, Client hello (1):
      * SSLv3, TLS handshake, Finished (20):
      * SSL connection using DHE-RSA-AES256-SHA
      * Server certificate:
      *   subject: /C=CH/CN=dl.packagist.org/[email protected]
      *   start date: 2012-07-07 23:25:35 GMT
      *   expire date: 2013-07-10 02:55:12 GMT
      * SSL: certificate subject name 'dl.packagist.org' does not match target host name 'getcomposer.org'
      * Closing connection #0
      * SSLv3, TLS alert, Client hello (1):

      [1]: http://getcomposer.org/doc/00-intro.md
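
    The verbose log shows why nothing arrives: curl aborts on the certificate-name mismatch, and -s hides the error. A minimal sketch of making the failure visible (and of working around it only if you accept the risk):

      # -S re-enables error messages even when -s is used
      curl -sS https://getcomposer.org/installer | php

      # or, accepting the mismatched certificate (insecure):
      curl -sk https://getcomposer.org/installer | php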

    Read the article

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from Puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky: for example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, Puppet will try to change it, which I don't want. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make Puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now:

      define extra_mount_point(
        $device,
        $location = "/mnt",
        $fstype   = "xfs",
        $owner    = "root",
        $group    = "root",
        $mode     = 0755,
        $seltype  = "public_content_t",
        $options  = "ro,relatime,nosuid,nodev,noexec",
      ) {
        file { "${location}/${name}":
          ensure  => directory,
          owner   => "${owner}",
          group   => "${group}",
          mode    => $mode,
          seltype => "${seltype}",
        }
        mount { "${location}/${name}":
          atboot  => true,
          ensure  => mounted,
          device  => "${device}",
          fstype  => "${fstype}",
          options => "${options}",
          dump    => 0,
          pass    => 2,
          require => File["${location}/${name}"],
        }
      }

      extra_mount_point { "sda3":
        device  => "/dev/sda3",
        fstype  => "xfs",
        owner   => "ciupicri",
        group   => "ciupicri",
        options => "relatime,nosuid,nodev,noexec",
      }

    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
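
    A minimal sketch of one way to sidestep the issue: Puppet only manages the attributes a resource actually declares, so a file resource that declares nothing but ensure leaves owner, group, and mode alone:

      file { "${location}/${name}":
        ensure => directory,
        # no owner/group/mode/seltype here, so whatever permissions exist
        # (or get mounted over) are left untouched
      }

    Whether this holds exactly on 0.25.x is worth a quick test, but it is the documented semantics of unmanaged attributes.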

    Read the article

  • Getting VSFTP running on Fedora 14

    - by Louis W
    Having trouble getting vsftpd running on Fedora 14. Here is what I have done so far; please let me know if I am missing something. When I try to connect through FTP, it says the connection timed out.

    Installed vsftpd with yum:

      yum install vsftpd

    Edited the config file:

      vi /etc/vsftpd/vsftpd.conf

    Started the service and made sure it would always start up:

      service vsftpd start
      chkconfig vsftpd on

    Added and configured a new user:

      /usr/sbin/useradd upload
      /usr/bin/passwd upload
      usermod -c "This user cannot login to a shell" -s /sbin/nologin upload

    Added firewall rules:

      iptables -A INPUT -p tcp --dport 21 -j ACCEPT
      iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT
      service iptables save
      service iptables restart

    Checked netstat (in reply to the comment below):

      tcp   0   0 0.0.0.0:21   0.0.0.0:*   LISTEN   23752/vsftpd
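
    One hedged observation, as a sketch: -A appends, so if the INPUT chain ends in a REJECT rule (the Fedora default), the new rule lands after it and never matches; inserting it instead, and letting the FTP connection-tracking helper handle passive-mode data connections, is the usual shape of the fix:

      # put the rule before any final REJECT, instead of after it
      iptables -I INPUT -p tcp --dport 21 -j ACCEPT
      # passive-mode data connections use ephemeral ports; the helper
      # marks them RELATED so a state rule can admit them
      modprobe nf_conntrack_ftp
      iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT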

    Read the article

  • Postfix smtp error 450 (failed to add recipient)

    - by culter
    I have a Debian server with Postfix and Roundcube. After an attack we are on 2 blacklists, but I don't think that is the main problem: I can't send mail to any address. Trying to find the cause, I checked /var/spool/postfix/etc/resolv.conf and /etc/resolv.conf, and they're the same, with this content:

      nameserver 127.0.0.1
      nameserver localhost

    In /var/log/mail.err I found:

      cyrus/imap[25452]: DBERROR: opening /var/lib/cyrus/user/m/[email protected]: cyrusdb error
      cyrus/imap[25452]: DBERROR: skiplist recovery /var/lib/cyrus/user/m/[email protected]: ADD at 1FC0 exists

    When I try to send email from Roundcube, I get the message from the title. When I send it from Opera or any other mail client, it reports nothing, but the email isn't sent. Thank you for any advice.
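
    A minimal sketch of narrowing this down, assuming standard Debian locations (note that resolv.conf expects IP addresses, so the "nameserver localhost" line is ignored and everything rides on 127.0.0.1 actually answering):

      # what postfix itself says about the deferred recipients
      postqueue -p
      tail -n 50 /var/log/mail.log
      # does the resolver postfix sees actually resolve MX records?
      dig +short mx gmail.com @127.0.0.1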

    Read the article

  • Easily recreate a server's "state" [closed]

    - by Brandon Wamboldt
    I want the ability to set up new servers for dev/testing/prod very easily. The reasons for being able to set up a new dev VM are obvious, but for prod my concern is adding a new production server or migrating to a new one. I assume a traditional backup solution won't work, as the hardware may be different, so the binaries/config might be different. I want to get experience with Puppet anyway, so I was thinking about creating a manifest that would set up my users, install Postgres, Nginx, PHP-FPM, etc., and configure them the way I specify. Then I could install Puppet on a new server, copy down my manifest, and apply it locally. This would make keeping my server configs in sync easier too. Is there a better approach I'm not aware of, and does my approach have any pitfalls?
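
    A minimal sketch of the masterless workflow described above (package names follow common distro conventions and may need adjusting):

      # site.pp - illustrative only
      package { ['nginx', 'php-fpm', 'postgresql-server']:
        ensure => installed,
      }
      service { 'nginx':
        ensure  => running,
        enable  => true,
        require => Package['nginx'],
      }

      # on the new box, after installing puppet and copying the manifest:
      #   puppet apply site.pp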

    Read the article

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git to let a developer pull a branch or release without changing the group of the modified files? An example: let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permissions on site.php.

      -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php
      drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    After robin-server1$ git pull origin master:

      -rw-rw-r-- 1 robin robin     46068 Nov 16 12:35 site.php
      drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    Now david does not have write permission on site.php, because the group changed from git-users to robin. From now on, david will get a permission denied error when he tries to pull into this repository.
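
    A minimal sketch of the usual arrangement, assuming the goal is group-shared write access (the setgid bit makes new files inherit the directory's group instead of the creating user's primary group; the path is a placeholder):

      # make the tree group-owned and setgid so group ownership sticks
      chgrp -R git-users /var/www/staging
      find /var/www/staging -type d -exec chmod g+s {} +
      # tell git to keep repository files group-readable/writable
      git config core.sharedRepository group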

    Read the article

  • script not run after reboot from /etc/rc3.d

    - by yael
    I created a symbolic link in /etc/rc3.d to the file /var/tmp/platform.bash:

      ln -s /var/tmp/platform.bash /etc/rc3.d/platform.bash

    The script exists under /var/tmp:

      -rwxr-xr-x 1 root root 58442 Aug 30 08:49 platform.bash

    View from /etc/rc3.d:

      lrwxrwxrwx 1 root root 31 Aug 30 06:33 S99platform.bash -> /var/tmp/platform.bash

    My goal is to run platform.bash after reboot (on Solaris 10). For some reason the script does not run after reboot. Please advise what I need to check in order to diagnose the problem. My script (platform.bash):

      #!/bin/bash
      echo test > /var/tmp/log.txt
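
    A minimal sketch of what to check, with the caveat that these are Solaris 10 specifics worth verifying on your system: legacy rc scripts show up as lrc: instances once their milestone has run, and /sbin/rc3 invokes them with /sbin/sh, which ignores the #!/bin/bash line:

      # was the script seen at boot at all?
      svcs -a | grep -i platform
      # does it survive being run the way init runs it?
      /sbin/sh /etc/rc3.d/S99platform.bash start

    Also note that only names starting with a capital S are run with the start argument: the ln command above created platform.bash, while the listing shows S99platform.bash, so make sure the S99 link is the one actually present.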

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup setting up new environments for a product to be released soon. The planned server structure with the planned release flow is shown in the image below. Ideally it has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and production servers have their own SVN copy. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So for developers, there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a server under our direct control. Although there are some technical concerns, like how code on the local server will be updated from the local SVN and committed to the production SVN, the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What are other possible options to achieve that? Another minor question, if suitable here: if the above structure is correct, is it possible for an SVN checkout to get updated from one SVN (local SVN) but commit to the other (production SVN)? If yes, how?

    Edit: An answer has been accepted, but for the bounty I'm still looking for an answer to: Is this structure correct? What are its pros/cons? A technical solution is already provided by the accepted answer.
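
    On the minor question: a working copy is tied to one repository UUID, so it cannot update from one repository and commit to another; the usual workaround is repository-level mirroring. A minimal sketch with svnsync (URLs are placeholders, and the mirror needs its pre-revprop-change hook enabled):

      # one-time: point an empty production repository at the local one
      svnsync initialize https://prod-server/svn/repo http://local-server/svn/repo
      # on each release: replay new local revisions into production
      svnsync synchronize https://prod-server/svn/repo

    Developers then only ever see the local repository, and the sync step is run by whoever controls production.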

    Read the article
