Search Results

Search found 26434 results on 1058 pages for 'folder options'.

Page 157/1058 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Setting up autotest with RSpec in Ubuntu

    - by Reactor5
    I'm trying to set up autotest on Ubuntu, and no matter what my configuration is, I get this:

        loading autotest/rails_rspec2
        style: RailsRspec2
        /home/brian/.rvm/gems/ruby-1.9.2-rc2@rails3tutorial/gems/redgreen-1.2.2/lib/redgreen/autotest.rb:6:in `<top (required)>': uninitialized constant Object::PLATFORM (NameError)

    The .autotest file I have (~/.autotest) is as follows:

        #!/usr/bin/env ruby
        require 'redgreen/autotest'

        def self.notify title, msg, img, pri='low', time=3000
          `notify-send -i #{img} -u #{pri} -t #{time} '#{msg}'`
        end

        Autotest.add_hook :ran_command do |at|
          results = [at.results].flatten.join("\n")
          output = results.slice(/(\d+)\s+examples?,\s*(\d+)\s+failures?(,\s*(\d+)\s+not implemented)?(,\s*(\d+)\s+pending)?/)
          folder = "~/Pictures/autotest/"
          if output =~ /([123456789]|[\d]{2,})\sfailures?/
            notify "FAIL:", "#{output}", folder+"rails_fail.png", 'critical', 10000
          elsif output =~ /[1-9]\d*\spending?/
            notify "PENDING:", "#{output}", folder+"rails_pending.png", 'normal', 10000
          else
            notify "PASS:", "#{output}", folder+"rails_ok.png"
          end
        end

    What am I doing wrong here?
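
    One thing I'm considering trying, assuming the failure is just redgreen 1.2.2 referencing the old top-level PLATFORM constant that Ruby 1.9 removed (RUBY_PLATFORM replaced it), is to define the constant before the require in ~/.autotest. A rough, untested sketch:

        #!/usr/bin/env ruby
        # Workaround sketch (assumption): give the old gem the constant it
        # expects before requiring it. RUBY_PLATFORM is the Ruby 1.9
        # replacement for the removed PLATFORM constant.
        PLATFORM = RUBY_PLATFORM unless defined?(PLATFORM)
        require 'redgreen/autotest'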

    Read the article

  • Slow to sync music files?

    - by pst007x
    I have created a folder in the Ubuntu One sync folder and called it music. I have added various albums and the folder has started to sync. However, the files were added back in October and still only the folders have synced, no music files. I tested this service before and added a single music file directly into the Ubuntu One folder (no sub-folders), and within a few days it synced; however, anything in sub-folders seems to stall or take a very long time. My Ubuntu One client always says syncing and the progress bar creeps along, but still no files have synced. I know there are issues with speed, but a month and counting to sync? I recently tested the same files with Dropbox and it took 9 hours. I have forwarded the HTTPS (443) port both in the software firewall and in my router, and I have tried disabling both firewalls too; either way it makes no difference. I have also tried both from home and from the office, on different Ubuntu systems. Is there anything anyone has done to improve this service? I am trying to integrate the Ubuntu One service into the office to share project files, but the syncing is taking too long. I am using the latest Ubuntu 10.10 (fully updated, fresh install). I love Ubuntu and wish to continue to support it any way I can, so a solution would be good :-) Any help would be appreciated. Thanks Paul

    Read the article

  • Logitech USB headphones detected and selected in Debian Squeeze but sound still coming from speakers

    - by mattalexx
    I have a pair of Logitech wireless USB headphones that work with Ubuntu Natty but aren't working in Debian Squeeze. When they are selected as the default audio output, the sound comes out of the speakers instead of the headphones. I have rebooted and tried using a different USB port. My computer is a Thinkpad T510. How can I fix this problem?

    Here is lsusb:

        Bus 002 Device 005: ID 046d:0a29 Logitech, Inc.
        Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 005: ID 046d:c52f Logitech, Inc. Wireless Mouse M305
        Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Here is cat /proc/asound/cards:

        0 [Intel   ]: HDA-Intel - HDA Intel
                      HDA Intel at 0xf2420000 irq 17
        1 [Headset ]: USB-Audio - Logitech Wireless Headset
                      Logitech Logitech Wireless Headset at usb-0000:00:1d.0-1.1, full speed
        2 [NVidia  ]: HDA-Intel - HDA NVidia
                      HDA NVidia at 0xcdefc000 irq 17

    Here is the gnome-volume-control GUI (screenshot).

    Here's lsmod | grep usb:

        snd_usb_audio   50670  0
        snd_usb_lib     11192  1 snd_usb_audio
        usbhid          28008  0
        hid             50909  1 usbhid
        snd_rawmidi     12513  2 snd_usb_lib,snd_seq_midi
        snd_hwdep        4054  2 snd_usb_audio,snd_hda_codec
        snd_pcm         47226  3 snd_usb_audio,snd_hda_intel,snd_hda_codec
        usbcore         98969  5 snd_usb_audio,snd_usb_lib,usbhid,ehci_hcd
        snd             34423 11 snd_usb_audio,snd_rawmidi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_seq,snd_timer,snd_seq_device
        nls_base         4541  1 usbcore

    Here's cat /etc/modprobe.d/alsa-base.conf:

        # autoloader aliases
        install sound-slot-0 /sbin/modprobe snd-card-0
        install sound-slot-1 /sbin/modprobe snd-card-1
        install sound-slot-2 /sbin/modprobe snd-card-2
        install sound-slot-3 /sbin/modprobe snd-card-3
        install sound-slot-4 /sbin/modprobe snd-card-4
        install sound-slot-5 /sbin/modprobe snd-card-5
        install sound-slot-6 /sbin/modprobe snd-card-6
        install sound-slot-7 /sbin/modprobe snd-card-7

        # Cause optional modules to be loaded above generic modules
        install snd /sbin/modprobe --ignore-install snd && { /sbin/modprobe --quiet snd-ioctl32 ; /sbin/modprobe --quiet snd-seq ; }
        install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi && { /sbin/modprobe --quiet snd-seq-midi ; : ; }
        install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 && { /sbin/modprobe --quiet snd-emu10k1-synth ; : ; }

        # Prevent abnormal drivers from grabbing index 0
        options bt87x index=-2
        options cx88_alsa index=-2
        options snd-atiixp-modem index=-2
        options snd-intel8x0m index=-2
        options snd-via82xx-modem index=-2
        # Keep snd-pcsp from beeing loaded as first soundcard
        options snd-pcsp index=-2
        # Keep snd-usb-audio from beeing loaded as first soundcard
        options snd-usb-audio index=-2

    EDIT: In VLC, I reset the VLC prefs (Output: Default) and sound still comes out of the speakers, as expected. Then I change it to "Output: ALSA Audio output" and a Device menu appears. I select the headphones. When I then save the prefs, the audio switches to the headphones! But here's what's weird: I go back to prefs, change it to "Output: Default" and the headphones keep working. Maybe the ALSA option is actually what is being chosen as the "Default" option, but the Device menu (whose selection is still being used) is still set to the headphones. Anyway, now I need to figure out how to make it work as the default for the whole system.
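
    The next thing I plan to try, purely as a guess, is pointing ALSA's default device at the USB card via ~/.asoundrc, since the headset shows up as card 1 ("Headset") in /proc/asound/cards above. A minimal, untested sketch:

        # ~/.asoundrc (sketch): route the default PCM and control device to
        # the USB headset card named "Headset" in /proc/asound/cards.
        pcm.!default {
            type hw
            card Headset
        }
        ctl.!default {
            type hw
            card Headset
        }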

    Read the article

  • .htaccess does not work without index.php on CodeIgniter

    - by Mattia
    I have read a lot of topics with the same problem, but I have not found the solution. I have a LAMP stack on an Ubuntu server. My document root is /home/utente/, and inside this dir I have another dir (turni) with a CodeIgniter web app. The web app works fine with index.php in the URL, but I want to eliminate it. I have this configuration:

    config.php in CodeIgniter:

        $config['index_page'] = '';

    .htaccess:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^system.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_URI} ^application.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]

    /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/utente
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/utente/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    When I open a link of the web app without index.php in the URL, the server shows me this error:

        The requested URL /turni/auth/login was not found on this server.

    Why? If I put index.php back in, like /turni/index.php/auth/login, everything works fine.
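
    For what it's worth, two things I suspect (guesses, not confirmed): with AllowOverride None on /home/utente/, the .htaccess is never read at all, and since the app lives in the /turni subfolder, the RewriteBase and rewrite targets probably need the /turni/ prefix. A sketch of what I mean:

        # In /etc/apache2/sites-available/default (sketch):
        <Directory /home/utente/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All        # let the .htaccess be processed
            Order allow,deny
            allow from all
        </Directory>

        # In /home/utente/turni/.htaccess (sketch):
        RewriteEngine On
        RewriteBase /turni/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]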

    Read the article

  • Installation experiences with NDepend under Win7/64 with restricted user permissions

    - by Marko Apfel
    Today Patrick gave me a new license for his static code analysis tool NDepend, for my fresh machine with Win7/64. This platform is new to me, so some things are different from Win XP, and some of them I may not have understood well enough yet, so I stepped into a few traps. Here are my notes on getting NDepend running:

    - Downloaded NDepend Professional Edition from http://www.ndepend.com/NDependDownload.aspx
    - Extracted it to c:\program files (x86)\NDepend
    - Started NDepend.Install.VisualStudioAddin.exe; this failed. Okay – sounds plausible.
    - Copied NDependProLicense.xml to this folder
    - Next try with NDepend.Install.VisualStudioAddin.exe: it opens the integration dialog
    - Registering in Visual Studio failed
    - Manually unblocked the file as described (first solution hint), and here comes my largest point of confusion: after unblocking this file and closing the dialog, the next time I open it the file shows as blocked again. Why? So the same error during integration pops up.
    - Okay – tried the second solution hint, copying the folders: copied everything to a fully accessible folder under c:\temp\
    - Now the installation works; looks good
    - Copied the folders back to c:\program files (x86)\NDepend
    - Starting Visual Studio failed
    - Okay – copied the folder to a private application folder, c:\users\apf\My Applications\NDepend
    - Installed again
    - Now Visual Studio runs and NDepend is integrated

    Nevertheless, even though my machine is only used by me, I prefer "all user" installations. The described way sadly works only for my account.

    Read the article

  • Windows File Sharing - Long Initial Delay

    - by Isaac Sutherland
    I have two Windows 7 machines connected to a router. I created a shared folder on machine A, and I can access it from machine B. The transfer speed is great. However, there is sometimes a long initial delay when I try to access the shared folder from machine B. I'll click to open the folder, and Windows Explorer pauses for a few minutes before actually loading the contents of the folder. After it loads, however, I can navigate the subfolders and edit files with no noticeable delay. Then, some time later, I will get the huge delay on saving a file, after which subsequent saves have no delay. What is the problem here, and how can I fix it?

    Read the article

  • Backing up Excel Files to a different Directory

    - by Joe Taylor
    In Excel 2007, in the Save As box there is an option to 'Create a Backup', which simply backs up the file whenever it is saved. Unfortunately it backs up the file to the same directory as the original. Is there a simple way to change this directory to another drive / folder? I have messed about with macros to do this, coming up with:

        Private Sub Workbook_BeforeClose(Cancel As Boolean)
        'Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
            'Saves the current file to a backup folder and the default folder
            'Note that any backup is overwritten
            Application.DisplayAlerts = False
            ActiveWorkbook.SaveCopyAs Filename:="T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & _
                ActiveWorkbook.Name
            ActiveWorkbook.Save
            Application.DisplayAlerts = True
        End Sub

    This creates a backup of the file OK the first time, however if this is tried again I get:

        Run-Time Error '1004':
        Microsoft Office Excel cannot access the file 'T:\TEC_SERV\Backup file folder - DO NOT DELETE\Test Macro Sheet.xlsm'. There are several possible reasons:
        - The file name or path does not exist
        - The file is being used by another program
        - The workbook you are trying to save has the same name as a...

    I know the path is correct, and I also know that the file is not open anywhere else. The workbook has the same name as the one I'm trying to save over, but it should just overwrite. I have posted the question about the coding on Stack Overflow but wondered if there is an easier way to do this. Any help would be much appreciated. Joe
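
    One thing I might try next (just a guess, not a confirmed fix) is deleting any existing copy before SaveCopyAs, in case overwriting the old backup is what raises error 1004. A sketch:

        Private Sub Workbook_BeforeClose(Cancel As Boolean)
            ' Sketch only - assumes the error comes from overwriting the old copy.
            Dim backupPath As String
            backupPath = "T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & ActiveWorkbook.Name
            On Error Resume Next   ' ignore "file not found" if no old backup exists
            Kill backupPath
            On Error GoTo 0
            Application.DisplayAlerts = False
            ActiveWorkbook.SaveCopyAs Filename:=backupPath
            ActiveWorkbook.Save
            Application.DisplayAlerts = True
        End Sub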

    Read the article

  • Certain Japanese characters aren't displayed properly

    - by Nisto
    On the following site: http://www.nciku.com/search/radical the first 2 characters on the second row of the "Step 2" table aren't displayed properly. All other characters look fine. I tried re-installing the Asian fonts via the checkboxes regarding Asian fonts in the "Regional and Language Options" control panel applet. I have also tried removing every single font from the Fonts folder (some were of course not possible to remove) and re-installing them all again. I did this by:

    - Running cmd
    - Closing down the explorer process
    - In cmd, using the command DEL /F /S /Q * in the Fonts folder
    - Putting in my XP SP3 Retail disc
    - In cmd, using expand -r *.tt_ in the I386 folder on the XP disc (and any other font file, in the I386\LANG folder)

    I also tried installing this pack from Microsoft, but that solved nothing either. I even tried running my browser (Firefox) through AppLocale, and changing the character encoding - again, that does not help. I've also tried viewing the page in Internet Explorer. What could be wrong? I have checked my Fonts folder to make sure that every single font available on the XP disc is available in WINDOWS\Fonts. What shows in the first square on the second row - I can't really tell what it's supposed to look like (but it's not the proper character)... but the second square shows a rectangular symbol containing hex code. I've been in this situation before, and it has been when I've been missing fonts. But how could I possibly be missing a necessary font? Shouldn't it be provided in the Asian "font packages"? I've talked to some other users who have viewed the page, and they had no problems displaying the characters on the second row - even though they're only using the fonts provided on the Windows installation disc.

    Windows XP Professional Service Pack 3 (x86 - with latest updates)
    Firefox 3.6.15

    Read the article

  • Running shortcut from command prompt without the .lnk extension (Windows)

    - by Abbas
    I have created a folder (d:\shortcuts), created shortcuts for most applications in this folder, and appended the folder path to the Path environment variable. Now all my applications are available from Run and the command window without messing around with full paths. However, I now have to type the name of the shortcut along with its extension (e.g. vlc.lnk) to invoke it. Is there any way to do this without typing the extension?
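
    One idea I've come across but haven't fully verified: the command prompt only resolves extension-less names for extensions listed in the PATHEXT variable, so adding .LNK there might be enough. Something like:

        rem Sketch (unverified): allow "vlc" to resolve to vlc.lnk on the PATH.
        rem Current session only:
        set PATHEXT=%PATHEXT%;.LNK

        rem Or persist it for the current user (the expanded value gets stored):
        setx PATHEXT "%PATHEXT%;.LNK"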

    Read the article

  • Setup symbolic link where users can access it with FTP

    - by Dan Shields
    I have a folder on a server where a client of mine has a bunch of folders into which they upload images and whatnot for a site. I create symbolic links from those folders to the root of the website. This way I can give them FTP access to upload whatever they need without giving them access to the root level of the website. I have another folder that I can't set up as a symbolic link to their folder, which has images they need to upload to. I know that if I create a symbolic link the other way around, where the symlink is in their folder, they can't access it through FTP. There has to be a way, without creating two separate FTP accounts, to give a user the ability to upload to a directory that is outside of their home directory. I see that it is FTP-specific and that there are some settings that can be changed, but I haven't seen any clear-cut answers for the best way to handle this.
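
    The closest thing I've found so far (haven't tried it yet) is to use a bind mount instead of a symlink, since FTP daemons that chroot users generally won't follow a symlink out of the home directory but will serve a bind-mounted directory. Roughly, with made-up paths for illustration:

        # Sketch with hypothetical paths: expose a directory outside the
        # client's FTP home by bind-mounting it into their tree.
        sudo mount --bind /var/www/site/extra-images /home/client/extra-images

        # To make it survive reboots, an /etc/fstab line along these lines:
        # /var/www/site/extra-images  /home/client/extra-images  none  bind  0 0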

    Read the article

  • Using the link command to keep backups on another drive

    - by Xavier
    I have a folder, /data/backup, that does not have much space available. I have been told that if I link that folder (/data/backup) to a folder on a much bigger volume, such as /bigdata/backup, I will still be able to run backups into /data/backup. It will then just be a link, but the data will be visible in both folders, and the latter one (/bigdata/backup) will actually hold the backup results while showing in both places. Since /bigdata/backup has far more disk space, the backups will no longer fail because of space problems on /data/backup. Is this true?
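
    If I've understood the suggestion correctly, it amounts to something like the following (my own sketch of it, so the exact steps may be off):

        # Move the existing backups onto the bigger volume
        # (assumes /bigdata/backup does not already exist).
        mv /data/backup /bigdata/backup

        # Replace the old path with a symbolic link to the new location;
        # anything written to /data/backup now lands on the /bigdata volume.
        ln -s /bigdata/backup /data/backup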

    Read the article

  • OOP for unit testing: The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it's stated that you should limit the object instantiations in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are sent as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):

        class Some_class
        {
            private $_data;
            private $_options;
            private $_locale;

            public function __construct($data, $options = null)
            {
                $this->_data = $data;
                if ($options != null) {
                    $this->_options = $options;
                }
                $this->_init();
            }

            private function _init()
            {
                if (isset($this->_options['locale'])) {
                    $locale = $this->_options['locale'];
                    if ($locale instanceof Zend_Locale) {
                        $this->_locale = $locale;
                    } elseif (Zend_Locale::isLocale($locale)) {
                        $this->_locale = new Zend_Locale($locale);
                    } else {
                        $this->_locale = new Zend_Locale();
                    }
                }
            }
        }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale in my class but push it through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and as well if I want to move away from Zend Framework. Thanks in advance
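
    To make the question more concrete, here is roughly what I imagine the injected version would look like (just my own sketch of constructor injection, not something taken from the guide):

        class Some_class
        {
            private $_data;
            private $_options;
            private $_locale;

            // Sketch: the locale is injected, with a default only as a fallback,
            // so a test can pass in a stub/mock Zend_Locale.
            public function __construct($data, array $options = array(), Zend_Locale $locale = null)
            {
                $this->_data    = $data;
                $this->_options = $options;
                $this->_locale  = ($locale !== null) ? $locale : new Zend_Locale();
            }
        }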

    Read the article

  • libsasl2 change paths

    - by mk_89
    I have been following the tutorial https://help.ubuntu.com/community/Postfix for installing Postfix on Ubuntu. I'm stuck at the Authentication section of the tutorial, where you change paths to live in the false root. If you look at the link above, I have a file (/etc/default/saslauthd) which is pretty much the same as the one from the tutorial:

        # This needs to be uncommented before saslauthd will be run automatically
        START=yes

        PWDIR="/var/spool/postfix/var/run/saslauthd"
        PARAMS="-m ${PWDIR}"
        PIDFILE="${PWDIR}/saslauthd.pid"

        # You must specify the authentication mechanisms you wish to use.
        # This defaults to "pam" for PAM support, but may also include
        # "shadow" or "sasldb", like this:
        # MECHANISMS="pam shadow"
        MECHANISMS="pam"

        # Other options (default: -c)
        # See the saslauthd man page for information about these options.
        #
        # Example for postfix users: "-c -m /var/spool/postfix/var/run/saslauthd"
        # Note: See /usr/share/doc/sasl2-bin/README.Debian
        #OPTIONS="-c"
        # make sure you set the options here otherwise it ignores params above and will not work
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    When I run the following command in Ubuntu:

        dpkg-statoverride --force --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd

    I get the following error:

        dpkg-statoverride: warning: An override for '/var/spool/postfix/var/run/saslauthd' already exists, but --force specified so will be ignored.
        dpkg-statoverride: warning: --update given but /var/spool/postfix/var/run/saslauthd does not exist

    I don't know why this is happening; I literally followed the tutorial step by step and have installed all the necessary packages. What could be the problem? Do I have to manually create the directory?
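
    My current guess, going by the second warning, is that the directory under the Postfix chroot simply doesn't exist yet, so it may just need creating before the override is applied - something like:

        # Sketch: create the saslauthd run directory inside the Postfix chroot,
        # then apply the override again and restart the service.
        sudo mkdir -p /var/spool/postfix/var/run/saslauthd
        sudo dpkg-statoverride --force --update --add root sasl 755 /var/spool/postfix/var/run/saslauthd
        sudo service saslauthd restart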

    Read the article

  • Problems serving SVN over HTTPS on Ubuntu 10.04

    - by odd parity
    We've been experiencing some problems with our Subversion server after upgrading to Ubuntu 10.04. When trying to access a repository, regardless of client (I've tried git-svn and svn on Windows as well as svn on Ubuntu 10.04, from different computers and network locations), I get a 400 Bad Request. Here's the output from svn:

        svn: Server sent unexpected return value (400 Bad Request) in response to OPTIONS request for 'https://svn.example.org/svn/programs'

    Here are the relevant entries from the Apache logs (I'm running Apache 2.2):

    error.log:

        [Mon Jun 14 11:29:31 2010] [error] [client x.x.x.x] request failed: error reading the headers

    ssl_access.log:

        x.x.x.x - - [14/Jun/2010:11:29:28 +0200] "OPTIONS /svn/programs HTTP/1.1" 401 2643 "-" "SVN/1.6.6 (r40053) neon/0.29.0"
        x.x.x.x - - [14/Jun/2010:11:29:31 +0200] "ction-set/></D:options>OPTIONS /svn/programs HTTP/1.1" 400 644 "-" "SVN/1.6.6 (r40053) neon/0.29.0"

    If anyone has run into similar problems or could give me a pointer to track down the cause of this I'd be very grateful - I'd really like to avoid having to downgrade the box again.

    Read the article

  • Bridge and OpenVPN with shorewall

    - by Javier Martinez
    I have this scenario and everything is working OK, but I want to configure my Shorewall and I can't get it right. My interfaces are:

    - br0 (bridge of eth0)
    - tun0 (OpenVPN)
    - vnet* (each one of the bridged interfaces, with public IPs)

    Public main IP: 188.165.X.Y
    OpenVPN IPs: 172.28.0.x
    Bridge: public IPs

    So, I have the following configuration for Shorewall:

    /etc/shorewall/zones:

        #ZONE   TYPE        OPTIONS     IN          OUT
        #                               OPTIONS     OPTIONS
        fw      firewall
        inet    ipv4
        road    ipv4

    /etc/shorewall/interfaces:

        #ZONE   INTERFACE   BROADCAST   OPTIONS
        inet    br0         detect      routeback
        road    tun+        detect      routeback

    /etc/shorewall/policy:

        #SOURCE DEST    POLICY  LOG     LIMIT:  CONNLIMIT:
        #                       LEVEL   BURST   MASK
        $FW     all     ACCEPT
        inet    $FW     DROP    info
        road    all     DROP
        inet    road    DROP

    /etc/shorewall/tunnels:

        #TYPE               ZONE    GATEWAY     GATEWAY
        #                                       ZONE
        openvpnserver:1194  inet    0.0.0.0/0

    The problem is that even with Shorewall running, I am able to ping and connect to the virtual machines behind the bridge.

    Read the article

  • How to set up NTFS ACLs with Access Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell Netware to a Windows 2K8 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. Since Netware and Windows are totally different, I want to be sure my thinking is right before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA            <= shared as DATA with Access Based Enumeration
            |
            +-- Folder 1
                +-- Team 1's Folder
                +-- Team 2's Folder
                ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want:

    - The Administrators group has full control top-down.
    - From DATA, ABE lists only the folders a user has access to (e.g.: I'm in group Team 2, so I see Team 2's Folder).

    From what I understand, at DATA I remove all inherited NTFS ACLs (e.g. the Users group), making sure to keep the Administrators group and the SYSTEM user. After that, I grant Full Control (or whatever right is needed) on each folder to the groups or users that need access. Am I wrong? Anything I should take care of? Any help with my understanding will be very appreciated. Regards.
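
    In case it clarifies what I have in mind, this is roughly how I would script it with icacls (group and path names below are placeholders, and I haven't validated the exact switches in our environment):

        rem Sketch with placeholder names: break inheritance at the share root,
        rem keeping the existing entries as explicit ACEs.
        icacls "F:\DATA" /inheritance:d

        rem Remove the broad Users entry; Administrators and SYSTEM stay.
        icacls "F:\DATA" /remove "BUILTIN\Users"

        rem Grant each team modify rights on its own folder only.
        icacls "F:\DATA\Folder 1\Team 1's Folder" /grant "DOMAIN\Team1":(OI)(CI)M
        icacls "F:\DATA\Folder 1\Team 2's Folder" /grant "DOMAIN\Team2":(OI)(CI)M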

    Read the article

  • Download a website that requires log-in with HTTrack Copier

    - by H.Moss
    Hi guys! I have been researching how to download the content of a site that requires a username and password. This is actually harder than I thought it would be. I tried to use HTTrack Copier and followed the instructions below, but it's not working:

        Q: I can not access several pages (access forbidden, or redirect to another location), but I can with my browser, what's going on?

        A: You may need cookies! Cookies are specific data (for example, your username or password) that are sent to your browser once you have logged in certain sites so that you only have to log-in once. For example, after having entered your username in a website, you can view pages and articles, and the next time you will go to this site, you will not have to re-enter your username/password. To "merge" your personnal cookies to an HTTrack project, just copy the cookies.txt file from your Netscape folder (or the cookies located into the Temporary Internet Files folder for IE) into your project folder (or even the HTTrack folder)

    Read the article

  • Why would accessing photos over a network be a problem for Digikam?

    - by Shedeki
    Digikam has always worked nicely for me. I recently set up a Synology DiskStation (DS212+) and moved all my pictures to it, keeping them in an encrypted folder. I mount that folder using CIFS, as a bug prevents eCryptfs and NFS from working together. This has made Digikam incredibly slow. Startup takes a very long time (several minutes for 41779 items, 123.8 GB), but worse is how long it takes Digikam to write files. I like using Digikam's import feature to copy new images from my camera to the hard drive, because it checks for duplicates as well as creating a clear folder structure according to the dates the images were taken. Since I moved to the network drive, Digikam takes about 5 to 10 times as long to import photos as it did before. Saving modified or converted images takes equally long. What I am looking for is a way to help Digikam speed things up, or an alternative piece of software (I have never liked Digikam being so very much KDEish…). There are just so many features that only Digikam seems to combine, e.g.:

    - Batch processing.
    - Respects existing folder structure.
    - Does not mess up files for other applications.
    - *.NEF support.
    - Caches thumbnails in a clean way.

    Read the article

  • access_log item w/out IP. Starts with "::1 - - [<date>]"

    - by Meltemi
    Looking at our Apache log I see normal requests like:

        174.133.xxx.xxx - - [20/May/2010:17:36:44 -0700] "GET /index.html HTTP/1.1" 200 2004

    but every so often I get a cluster of these without an IP address:

        ::1 - - [20/May/2010:18:47:21 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [20/May/2010:18:47:22 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [20/May/2010:18:47:23 -0700] "OPTIONS * HTTP/1.0" 200 -

    What do they mean, and what causes them?
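
    From what I've read so far (not certain this is the whole story), ::1 is the IPv6 loopback address, and these "OPTIONS * HTTP/1.0" hits are typically Apache's own internal wake-up requests to its child processes rather than real clients. If that's right, they could presumably be kept out of the log with something like:

        # Sketch: mark requests from the IPv6 loopback and exclude them from
        # the access log (adjust the log path and format to the actual vhost).
        SetEnvIf Remote_Addr "::1" internal_request
        CustomLog /var/log/apache2/access.log combined env=!internal_request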

    Read the article

  • How do you manage Labeled and All Mail unread mails in Thunderbird with a Gmail IMAP account?

    - by Edward Beach
    I use Thunderbird with Gmail via IMAP, and do so with multiple accounts. On the Gmail side I have filters that automatically assign labels to incoming mail and archive it, moving it out of the inbox. On the Thunderbird side, the new mail appears in the corresponding label folder and in the All Mail folder - that's fine, but my problem is that both copies are marked as unread. Since I have many accounts, I use the unread-mail view in Thunderbird's folder pane, and what I see is both folders highlighted as unread. When I read the message in one folder, the other copy only gets marked as read once I click on it and Thunderbird does another IMAP transaction. Is there a configuration that will recognize the same mail in two different folders automatically?

    Read the article

  • Outlook rules not working together

    - by JBurace
    I have multiple Outlook (2010) rules and these 2 are having issues together: Rule 1: Apply this rule after the message arrives with blahname in the sender's address and move it to the BlahBox folder. Rule 2: Apply this rule after the message arrives from [email protected] move it to the NoReply folder. If I have rule 1 above rule 2, only rule 1 works (noreply emails stay in the Inbox folder). If I swap and have rule 2 above rule 1, only rule 2 works (blahname emails stay in the Inbox folder). What am I doing wrong; how can I fix this so it applies both rules on incoming email? I'm fairly certain the two rules should never intersect (blahname != domain.com). Also I do not have "this computer only" checked on any rules, I avoid client-only rules.

    Read the article

  • php rsync with exec() not working

    - by mojeime
    Why does this:

        rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder

    work from a cronjob, while this:

        exec("rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder");

    doesn't work? I know exec is working because I have a few places in my app that do conversion from PDF to JPG with ImageMagick (exec).

    SOLVED: exec was working OK; it was a permission issue on the remote server. The "local" server is a shared reseller account and the remote server is my first VPS, an Ubuntu 10.10 LAMP box. If only I had a system administrator, since I'm just a software developer forced to do this and I stink at it :) Thank you all!
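
    What would have saved me time (a sketch only, not what I actually ran, and the remote host below is a placeholder) is capturing the output and exit status from exec so the permission error shows up in the log:

        <?php
        // Sketch (placeholder remote host): capture rsync's output and exit
        // status so failures like permission errors become visible.
        $cmd = 'rsync -avz -e ssh /home/userneme/folder user@remotehost:/var/www/folder 2>&1';
        exec($cmd, $output, $status);
        if ($status !== 0) {
            error_log("rsync failed with status $status:\n" . implode("\n", $output));
        }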

    Read the article

  • Problem with custom 404 page

    - by user33687
    Don't know if this question is appropriate here. I created an Alias directed to the root folder of the website (right now running on localhost). In the root folder, I have a custom 404 page, and I've added a .htaccess file in that folder with the "ErrorDocument 404 /404.html" line, but it still doesn't work. I'm pretty new to this stuff (web servers), so I must be missing something obvious. Any suggestions greatly appreciated.
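
    Two guesses from my reading so far (unconfirmed): the path in ErrorDocument is a URL path, so if the site lives under an Alias it needs the alias prefix, and the aliased directory also has to allow .htaccess overrides. Something along these lines, where the alias name and filesystem path are only examples:

        # In the Apache config (example alias and path):
        Alias /mysite/ "/path/to/site/"
        <Directory "/path/to/site/">
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # In /path/to/site/.htaccess - note the URL is relative to the
        # server's URL space, not the filesystem:
        ErrorDocument 404 /mysite/404.html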

    Read the article

  • Copying dovecot maildir to another server with courier maildir

    - by NovumCoder
    Hi all, I just moved all my mailboxes from one server to the new one using rsync. After that I created the folders with Thunderbird, to get the same folder structure as on the old server. Then I copied all the mail files into the folders. Now, when I subscribe to and click on a folder in Thunderbird, it starts downloading the headers of all the mails, but after the download finishes nothing appears in the message list. It's as if the folder is empty, and every time I click on the folder again Thunderbird starts downloading headers again. What is wrong here? I found a solution using a tool called imapsync, but it's not free, so I started doing it by copy & paste. I thought Thunderbird would be able to fix the indexes. :-( Or is there a better way to migrate from a dovecot maildir to a courier maildir?

    Read the article

  • Transmission-daemon not picking up on watch directory

    - by Mild Fuzz
    I'm trying to get my transmission-daemon to pick up files from a Dropbox folder, to make remote starting easier (it's a headless system). As far as I can tell, the settings.json file is as expected, but none of the files I place in the folder get picked up. I have checked that Dropbox is syncing correctly. Here is the whole settings.json file; the relevant lines are:

        "watch-dir": "/home/john/Dropbox/torrents",
        "watch-dir-enabled": true

    Update: It appears to be a permissions issue. From /var/log/syslog:

        Unable to watch "/home/john/Dropbox/torrents": Permission denied (watch.c:79)

    I have tried:

    - stopping the daemon: sudo service transmission-daemon stop
    - changing the permissions of the folder using chown: sudo chown -R john /home/john/Dropbox/torrents
    - restarting the daemon: sudo service transmission-daemon start

    Same result, however.

    Update 2: Permissions for the folder are:

        drwsrwsrwx 2 john debian-transmission 4096 2012-04-09 19:40
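
    Since the watch directory itself looks permissive, my next guess (untested, and it assumes /home/john's group is john) is that the daemon's own user, debian-transmission, can't traverse the parent directories /home/john and /home/john/Dropbox. Roughly what I plan to try:

        # Sketch: let the daemon's user into john's group, make the parent
        # directories group-traversable, then restart the daemon.
        sudo usermod -a -G john debian-transmission
        sudo chmod g+x /home/john /home/john/Dropbox
        sudo service transmission-daemon restart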

    Read the article
