Search Results

Search found 15776 results on 632 pages for 'include paths'.


  • How to edit a semi-plaintext file while maintaining its character structure?

    - by Raul
    I am using software (Groupmail from Infacta) that saves some settings in a specific semi-plaintext file using exact/absolute %PATHS%. This is a really bad idea, because you can't relocate the USER folder, and in my case it stopped working after migrating to a new computer with a different language. For example:

        C:\Documents and Settings\USER\Local Settings\Application Data\Infacta

    is different than

        C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta

    Obviously, the software does not work well. I tried to solve this by find/replacing the new PATH with Notepad++. While the Groupmail software loads and shows settings correctly, it fails when trying to save data to that file. I guess this is because the length or number of replaced characters is different, which corrupts the file. Could you please help me edit this file while maintaining its integrity/structure?
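    A minimal sketch of one way to try this, assuming the file uses length-sensitive fields so any replacement must keep the exact byte count. The file name, the encoding, and the space padding are all guesses about Groupmail's format (NUL bytes may be what it actually expects); swap the two paths to match your migration direction:

        # pad the shorter replacement so the file keeps exactly the same byte length;
        # "settings.dat" stands in for Groupmail's actual settings file
        old = r"C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta".encode("cp1252")
        new = r"C:\Documents and Settings\USER\Local Settings\Application Data\Infacta".encode("cp1252")
        if len(new) > len(old):
            raise SystemExit("replacement is longer than the original; padding cannot help")
        data = open("settings.dat", "rb").read()                      # read as raw bytes
        data = data.replace(old, new + b" " * (len(old) - len(new)))  # keep field width identical
        open("settings.dat", "wb").write(data)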

    Read the article

  • Either a bad nginx+php-fpm config, or nginx+php-fpm cannot handle the load?

    - by The Wolf
    I have WordPress installed on my server, configured (hopefully) with nginx + php-fpm + MariaDB. I am trying to import a 1.5 MB XML file using the WordPress importer. Every time I try to upload it, the request gets cut off, meaning I just get a blank screen as the result. Here are two of the errors from my error log:

        [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
        [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"

    I don't know why it can't process the WordPress export .xml. I already increased max_file_upload etc., but nothing happens. Hope somebody can help me. Here are my configs.

    nginx.conf:

        user nginx;
        worker_processes 8;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            server_tokens off;
            keepalive_timeout 65;
            fastcgi_read_timeout 500;
            #gzip on;
            client_max_body_size 2M;

    php-fpm.conf:

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;
        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.

        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;
        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid

        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log

        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice

        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0

        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0

        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0

        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        daemonize = no

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;
        ; See /etc/php-fpm.d/*.conf

    ps aux:

        [root@host etc]# ps aux
        USER     PID %CPU %MEM    VSZ   RSS TTY    STAT START TIME COMMAND
        root       1  0.0  0.1   2900  1380 ?      Ss   Jun02 0:00 init
        root       2  0.0  0.0      0     0 ?      S    Jun02 0:00 [kthreadd/9308]
        root       3  0.0  0.0      0     0 ?      S    Jun02 0:00 [khelper/9308]
        root     124  0.0  0.0   2464   576 ?      S<s  Jun02 0:00 /sbin/udevd -d
        root     460  0.0  0.1  35976  1308 ?      Sl   Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
        root     474  0.0  0.0   8940  1028 ?      Ss   Jun02 0:00 /usr/sbin/sshd
        root     481  0.0  0.0   3264   876 ?      Ss   Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root     491  0.0  0.1   6268  1432 ?      S    Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
        mysql    584  0.1  6.8 679072 71456 ?      Sl   Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
        root     586  0.0  0.3  12008  3820 ?      Ss   Jun02 0:01 sshd: root@pts/0
        root     629  0.0  0.0   9140   756 ?      Ss   Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root     630  0.0  0.0   9140   520 ?      S    Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root     645  0.0  0.1  12788  1928 ?      Ss   Jun02 0:01 sendmail: accepting connections
        smmsp    653  0.0  0.1  12576  1728 ?      Ss   Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
        root     691  0.0  0.1   7148  1184 ?      Ss   Jun02 0:00 crond
        root     698  0.0  0.1   6272  1688 pts/0  Ss   Jun02 0:00 -bash
        root    1006  0.0  0.0   7828   924 ?      Ss   00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
        nginx   1007  0.0  0.1   8156  1724 ?      S    00:30 0:00 nginx: worker process
        nginx   1008  0.0  0.1   8024  1360 ?      S    00:30 0:00 nginx: worker process
        nginx   1009  0.0  0.1   8020  1356 ?      S    00:30 0:00 nginx: worker process
        nginx   1011  0.0  0.1   8024  1360 ?      S    00:30 0:00 nginx: worker process
        nginx   1012  0.0  0.1   8024  1360 ?      S    00:30 0:00 nginx: worker process
        nginx   1013  0.0  0.1   8024  1360 ?      S    00:30 0:00 nginx: worker process
        nginx   1014  0.0  0.1   8024  1360 ?      S    00:30 0:00 nginx: worker process
        nginx   1015  0.0  0.1   8024  1344 ?      S    00:30 0:00 nginx: worker process
        root    1030  0.0  0.2  25396  2904 ?      Ss   00:30 0:00 php-fpm: master process (/etc/php-fpm.conf)
        apache  1031  0.0  1.9  40700 20624 ?      S    00:30 0:00 php-fpm: pool www
        apache  1032  0.0  2.0  41924 21888 ?      S    00:30 0:01 php-fpm: pool www
        apache  1033  0.0  1.9  41212 20848 ?      S    00:30 0:01 php-fpm: pool www
        apache  1034  0.0  1.9  40956 20792 ?      S    00:30 0:01 php-fpm: pool www
        apache  1035  0.0  2.0  41560 21556 ?      S    00:30 0:02 php-fpm: pool www
        apache  1040  0.0  1.8  39292 19120 ?      S    00:30 0:00 php-fpm: pool www
        root    1125  0.0  0.0   6080  1040 pts/0  R+   01:04 0:00 ps aux

    netstat -l:

        [root@host etc]# netstat -l
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address               Foreign Address  State
        tcp        0      0 *:ssh                       *:*              LISTEN
        tcp        0      0 localhost.localdomain:smtp  *:*              LISTEN
        tcp        0      0 localhost.locald:cslistener *:*              LISTEN
        tcp        0      0 *:mysql                     *:*              LISTEN
        tcp        0      0 *:http                      *:*              LISTEN
        tcp        0      0 *:ssh                       *:*              LISTEN
        Active UNIX domain sockets (only servers)
        Proto RefCnt Flags       Type       State      I-Node   Path
        unix  2      [ ACC ]     STREAM     LISTENING  60575947 /var/run/saslauthd/mux
        unix  2      [ ACC ]     STREAM     LISTENING  60574168 @/com/ubuntu/upstart
        unix  2      [ ACC ]     STREAM     LISTENING  60575873 /var/lib/mysql/mysql.sock

    I hope somebody can help me figure out what the problem is.
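    A hedged reading of these symptoms: nginx is proxying to fastcgi://127.0.0.1:9000, but the netstat output above shows nothing listening on port 9000, which would explain the connection-refused errors if the www pool actually listens somewhere else (or dies under load). A few commands to confirm, assuming the stock pool file location on this distribution:

        [root@host etc]# grep -n '^listen' /etc/php-fpm.d/www.conf   # where does the pool really listen?
        [root@host etc]# netstat -lnt | grep ':9000'                 # empty output = no FastCGI listener on 9000
        [root@host etc]# tail /var/log/php-fpm/error.log             # look for pool startup or respawn failures

    If the pool turns out to listen on a unix socket, nginx has to point at the same place, e.g. fastcgi_pass unix:/var/run/php-fpm/www.sock; instead of fastcgi_pass 127.0.0.1:9000;.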

    Read the article

  • .htaccess www redirect in a parent folder that also applies to its children

    - by ServerChecker
    We were having a problem with the Norton seal not showing up on our affiliate marketing landing pages (landers). It turns out the Norton seal is super picky about the "www." prefix. I had folder paths like /lp/cmpX, where X was a number 1-100 indicating the advertising campaign number. To remedy this, I stuck this in each campaign's .htaccess file right after the RewriteEngine On line:

        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://www.example.com/lp/cmp1/$1 [R=302,QSA,NC,L]

    The trouble is, I had to do that under every campaign folder, changing cmp1 to whatever the folder name was. Therefore, my question is: is there a way I can do this with an .htaccess file under the parent folder (/lp in this case) that will work for each of the children?

    EDIT: Note that I just put an .htaccess file in /lp to test:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://www.example.com/lp/$1 [R=302,QSA,NC,L]

    This had no effect on the /lp/cmpx folders underneath, to my dismay.
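    A hedged sketch of one approach: build the redirect from %{REQUEST_URI} so a single parent rule covers every campaign folder without hard-coding cmp1, cmp2, and so on. One caveat worth knowing: per-directory mod_rewrite rules stop applying once a child folder's own .htaccess enables the engine, so the per-campaign rules would need to be removed (or the children given RewriteOptions Inherit) before the /lp rule can take effect:

        # /lp/.htaccess -- example.com stands in for the real domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=302,L]

    %{REQUEST_URI} already carries the full /lp/cmpX/... path, so nothing needs to be captured from the pattern.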

    Read the article

  • Private addresses in traceroute results

    - by misteryes
    I use traceroute to check paths to a remote host, and I notice that there are some private IPs along the way, like 10.230.10.1:

        bash-4.0# traceroute -T 132.227.62.122
        traceroute to 132.227.62.122 (132.227.62.122), 30 hops max, 60 byte packets
         1  194.199.68.161 (194.199.68.161)  1.103 ms  1.107 ms  1.097 ms
         2  sw-ptu.univ.run (10.230.10.1)  1.535 ms  1.625 ms  2.172 ms
         3  sw-univ-gazelle.univ.run (10.10.20.1)  6.891 ms  6.937 ms  6.927 ms
         4  10.10.5.6 (10.10.5.6)  1.544 ms  1.517 ms  1.518 ms

    Why are there private addresses near the host? What purpose do these private addresses serve? I mean, why would they put the public IP behind private IPs? Thanks!

    Read the article

  • Plesk directory structure problems

    - by johnnietheblack
    I have an entire website with the following directory structure:

        /example.com
            /html (public)
                /css
                /js
                index.php
            /lib
                session.php
                other_lib_files.php
            /views
                index.php
            /models
            /controllers

    As illustrated, the html folder is public, and anything above it is private. My site now needs to move to a new server (Linux with Plesk), which has the following structure (reduced to the problematic parts below):

        /myplesksite.com
            /httpdocs
                /css
                /js
                index.php
            /private
                /lib
                /models
                /views

    What I would THINK is that I should be able to put my /lib, /views, /models, etc. in the directory directly above /httpdocs, the same way I had it on my previous server. Is that possible? Or do I have to put them in /private? I would really love not to have to adjust the internal paths throughout the site if not necessary...
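    A hedged workaround, assuming shell access to the vhost root and that PHP's open_basedir allows it: leave the code where Plesk wants it and recreate the old sibling paths as symlinks, so the relative includes keep resolving unchanged (the vhost path below is an assumption):

        cd /var/www/vhosts/myplesksite.com
        ln -s private/lib lib
        ln -s private/views views
        ln -s private/models models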

    Read the article

  • Configure PEAR on CentOS 6 and PLESK

    - by RCNeil
    I'm hoping to get a little assistance with configuring PEAR to work properly. I have a PHP file that calls PEAR's Mail and Mail_Mime files, and I believe I am missing some steps, because I keep getting the very common:

        Warning: include_once(Mail.php): failed to open stream: No such file or directory
        Warning: include_once(Mail_Mime/mime.php): failed to open stream: No such file or directory

    PEAR is installed:

        Installed packages, channel pear.php.net:
        =========================================
        Package          Version State
        Archive_Tar      1.3.7   stable
        Console_Getopt   1.2.3   stable
        Mail             1.2.0   stable
        Mail_Mime        1.8.3   stable
        PEAR             1.9.4   stable
        Structures_Graph 1.0.4   stable
        XML_RPC          1.5.4   stable
        XML_Util         1.2.1   stable

    According to this TUT, I need to configure it appropriately in each vhost. I have already gone through and adjusted the php.ini file, but when the TUT speaks of adding

        php_admin_value open_basedir "/var/www/vhosts/example.com/httpdocs:/tmp:/usr/share/pear:/local/PEAR"

    to my /var/www/vhosts/example.com/conf/httpd.include file, I kind of get lost. There are several httpd.include files in that directory, all preceded by very long numerical strings. All I want to do is have an email attachment in my form... Any insight or similar experiences shared would be greatly appreciated.
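    A hedged sketch of the usual fix: those warnings mean /usr/share/pear is not on PHP's include_path (or is blocked by open_basedir) for that vhost. On Plesk the numbered httpd.include files are regenerated automatically, so a per-domain override normally goes in conf/vhost.conf instead:

        # /var/www/vhosts/example.com/conf/vhost.conf  (assumed path)
        <Directory /var/www/vhosts/example.com/httpdocs>
            php_admin_value include_path ".:/usr/share/pear"
            php_admin_value open_basedir "/var/www/vhosts/example.com/httpdocs:/tmp:/usr/share/pear"
        </Directory>

    Then rebuild the vhost's Apache config and restart; on older Plesk versions something like /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=example.com does it (the exact command varies by Plesk version).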

    Read the article

  • Why do my backups fail when I target a network share hosted by a Synology DS211 disk station?

    - by Larry
    My backups are failing when I try to use a network share hosted by a Synology DS211 disk station. They work fine if I target a different network share (i.e. \\server1\data\larry). When I run the following command:

        Wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C:

    this is what I get:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.

        Note: The backed up data cannot be securely protected at this destination. Backups stored on a remote shared folder might be accessible by other people on the network. You should only save your backups to a location where you trust the other users who have access to the location or on a network that has additional security precautions in place.

        Retrieving volume information...
        This will back up volume WIN7(C:) to \\diskstation\backup-larry.
        Do you want to start the backup operation?
        [Y] Yes [N] No y

        Note: The list of volumes included for backup does not include all the volumes that contain operating system components. This backup cannot be used to perform a system recovery. However, you can recover other items if the destination media type supports it.

        The backup operation to \\diskstation\backup-larry is starting.
        Creating a shadow copy of the volumes specified for backup...
        Creating a shadow copy of the volumes specified for backup...
        The backup operation stopped before completing.

        Summary of the backup operation:
        ------------------

        The backup operation stopped before completing.
        Detailed error: Access is denied.
        Windows Backup failed to write the file: '<backup location>\WindowsImageBackup\<Computer Name>\MediaId'.
        Access is denied.

    The backup creates the path \\diskstation\backup-larry\WindowsImageBackup\LARRY-MYDOMAIN\ but it's empty. I definitely have read/write access on the target directory (\\diskstation\backup-larry). I have verified this by looking at the permissions and by actually copying files to this location. Any suggestions?
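    A hedged guess worth ruling out first: wbadmin does not necessarily reach the share with the same credentials you tested with, and it supports passing them explicitly (the user name below is a placeholder for whatever account the DiskStation actually knows):

        Wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C: -user:larry -password:YourPassword

    If that still fails, checking whether the Synology share forces guest mapping, or handles the MediaId file's permissions differently from a Windows server share, would be the next thing to look at.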

    Read the article

  • CentOS 6.5 SVN https - Unknown DAV provider: svn

    - by Programster
    I am trying to set up a CentOS 6.5 64-bit server with SVN over HTTPS. Unfortunately, with /etc/httpd/conf.d/subversion.conf configured as follows (paths changed):

        <Location /repos>
            DAV svn
            SVNParentPath /path/to/svn/repos
            # Limit write permission to list of valid users
            <LimitExcept GET PROPFIND OPTIONS REPORT>
                # Require SSL connection for password protection
                SSLRequireSSL
                AuthType Basic
                AuthName "Authorization Realm"
                AuthUserFile /path/to/passwdfile
                Require valid-user
            </LimitExcept>
        </Location>

    I get the following error message when restarting httpd:

        Starting httpd: Syntax error on line 3 of /etc/httpd/conf.d/subversion.conf: Unknown DAV provider: svn

    I have triple-checked that the mod_dav_svn package is already installed:

        Package mod_dav_svn-1.6.11-10.el6_5.x86_64 already installed and latest version

    Is my config wrong, or are there other packages I need to set up?
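    A hedged sketch: "Unknown DAV provider: svn" usually means Apache parses the DAV svn line before the module is loaded. The stock CentOS subversion.conf starts with the LoadModule lines, so making sure they are present (and come before the Location block) is the first thing to try:

        LoadModule dav_svn_module   modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so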

    Read the article

  • Nginx infinite redirect loop

    - by Zachary Burt
    Why is http://compassionpit.com/blog/ going through an infinite redirect loop? Here's my nginx conf file. The site is run by a Node.js server on port 8000, and Apache serves up the blog (WordPress) and the forum (phpBB). The forum is resolving just fine at http://www.compassionpit.com/forum/ ...

        server {
            listen 80;
            server_name www.compassionpit.org;
            rewrite ^/(.*) http://www.compassionpit.com/$1 permanent;
        }

        server {
            listen 80; # your server's public IP address
            server_name www.compassionpit.com;
            index index.php index.html;

            location ~ ^/$ {
                proxy_pass http://127.0.0.1:8000;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            location @blogphp {
                internal;
                root /opt/blog/;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:8080;
            }

            location ~ ^/(forum|blog)/($|.*\.php) {
                root /opt/;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:8080;
            }

            location ~ ^/(forum|blog) {
                root /opt/;
                try_files $uri $uri/ @blogphp;
            }

            location ~ ^/(forum|blog)/ {
                root /opt/;
            }

            location @backend {
                internal;
                proxy_pass http://127.0.0.1:8000;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            location ~ / {
                root /opt/chat/static/;
                try_files $uri $uri/ @backend;
            }
        }
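    A possible first step, hedged: the Location headers usually reveal whether nginx or WordPress is generating the loop (a WordPress siteurl/home set to the bare domain while nginx adds www is a classic cause):

        # follow the loop by hand and watch each redirect target
        curl -sI http://compassionpit.com/blog/ | grep -iE '^(HTTP|Location)'
        curl -sI http://www.compassionpit.com/blog/ | grep -iE '^(HTTP|Location)'

    Note that the failing request is for compassionpit.com without www, and the config only names www.compassionpit.org and www.compassionpit.com, so which server block catches the bare domain (the first one listed, absent a default_server) is worth checking too.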

    Read the article

  • PostgreSQL user authentication against PAM

    - by elmuerte
    I am trying to set up authentication via PAM for PostgreSQL 9.3. I already managed to get this working on an Ubuntu 12.04 server, but I am unable to get it working on a CentOS 6 install. The relevant pg_hba.conf line:

        host    all    all    0.0.0.0/0    pam    pamservice=postgresql93

    The pam.d/postgresql93 file is the default config shipped with the official PostgreSQL 9.3 package:

        #%PAM-1.0
        auth        include        password-auth
        account     include        password-auth

    When a user tries to authenticate, the following is reported in the secure log:

        hostname unix_chkpwd[31807]: check pass; user unknown
        hostname unix_chkpwd[31808]: check pass; user unknown
        hostname unix_chkpwd[31808]: password check failed for user (myuser)
        hostname postgres 10.1.0.1(61459) authentication: pam_unix(postgresql93:auth): authentication failure; logname= uid=26 euid=26 tty= ruser= rhost= user=myuser

    The relevant content of the password-auth config is:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so

        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so

    The problem is with pam_unix.so. It is unable to validate the password, and unable to retrieve the user info (when I remove the auth entry for pam_unix.so). The CentOS 6 install is only 5 days old, so it does not have a lot of baggage. The unix_chkpwd binary is setuid and executable by everybody, so it should be able to check the shadow file (which has no privileges at all?).
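    A hedged explanation that fits these logs: when its caller is not root, unix_chkpwd only agrees to verify the password of the user who invoked it, and here the caller is postgres (uid 26) asking about myuser, hence "check pass; user unknown". One common workaround is giving the postgres account direct read access to the shadow file so pam_unix does not need the helper at all:

        # grant the service account read access to /etc/shadow via an ACL
        setfacl -m u:postgres:r /etc/shadow

    Whether that trade-off is acceptable is a policy call; pam_sss-backed setups avoid it by not consulting the local shadow file.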

    Read the article

  • Route propagation using OSPF

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. I need to use OSPF so that the path to each internal subnet is propagated to all PE nodes. The network topology is given below. To achieve this, I am planning to run the following commands on UOW-TAU and UOW-HAM:

        set protocols ospf area 0.0.0.0 interface ge-0/0/0
        set protocols ospf area 0.0.0.0 interface lo0.0

    Do I need to do any additional configuration on the TAU-PE1 and HAM-PE1 routers for them to receive the OSPF paths? I am a beginner at routing. Any help is appreciated.
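    A hedged sketch: OSPF only forms adjacencies on interfaces where it is explicitly enabled, so the PE routers also need their core-facing interfaces in area 0 before any routes arrive (the interface name below is a placeholder for whichever link faces TAU/HAM):

        set protocols ospf area 0.0.0.0 interface ge-0/0/1
        set protocols ospf area 0.0.0.0 interface lo0.0
        commit

    Afterwards, show ospf neighbor on each PE should list the adjacent routers in Full state.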

    Read the article

  • Find out ESXi host from VM? [duplicate]

    - by Kyle Brandt
    This question already has an answer here: "Determining the name of the VMware host of a VM guest - from the guest" (3 answers)

    I have a monitor that is running on all guests, Windows and Linux. I'd like to send back the parent host of the guest with this monitor. How can I find out the VM host from within an ESXi VM? Ideally I'm looking for a universal way to do this from both Windows and Linux, but if they are separate paths, that's okay. Using something from VMware Tools would be fine, but I'm not currently using anything that calls the vSphere API.

    Read the article

  • Simulating a UNC path with a leading dot

    - by Uwe Keim
    Being a C# .NET Windows Forms developer, some of my customers run our applications on an Apple OS X Mac inside a Parallels virtual machine. Parallels presents host folders to the guest Windows as UNC paths with a leading dot, like:

        \\.psf\Home\Some\More\Folders

    Now one of our applications cannot handle the leading dot correctly when accessing files from this kind of share (it throws an "Invalid URI, cannot analyze host name" exception). I want to debug and fix this issue; unfortunately, I have no Mac and Parallels around here to test with. My question is: is there a way to "simulate" this kind of share on a normal Windows server or client so that I'll be able to debug my application with Visual Studio? What I tried so far: I already tried to edit my HOSTS file to contain an entry like

        # ...
        127.0.0.1 .psf
        # ...

    but Windows just seems to not recognize the share at all.
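    A hedged shortcut that may make the Mac unnecessary: if the failure is in URI parsing rather than in the file access itself, the exception can probably be reproduced with the literal path string and no share at all. A minimal sketch:

        // hypothetical repro: System.Uri rejects ".psf" as a host name, so this
        // line should throw the same "invalid URI" style exception the app reports
        var uri = new Uri(@"\\.psf\Home\Some\More\Folders");

    If that throws, the fix can be developed against plain System.IO calls (which accept such paths syntactically) instead of Uri-based ones, without simulating the share.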

    Read the article

  • Keeping my zsh or bash profile synced up on all my machines.

    - by Joseph Silvashy
    I work on several different machines, all of which are *nix. I have a lot of specific things I like my shell to do, how I want the prompt to look, aliases, etc., etc. I'm sure all of you folks deal with this as well. What do you think is the best way to keep the shells on all my machines acting the same? First off, I'm aware that different machines will need different paths to bins and other differences, so my first inclination is to just include a file at the end of my profile, and this is the one that we'll keep in sync. What is the best way to keep the files synced up? I can put the file on a remote system and perhaps use git to push and then pull my changes every once in a while. However, isn't rsync better suited for this?
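    A minimal sketch of the include-at-the-end idea with git doing the syncing (the repo URL and file names are placeholders):

        git clone git@example.com:dotfiles.git ~/.dotfiles
        echo 'source ~/.dotfiles/common.sh' >> ~/.zshrc                      # shared prompt, aliases
        echo '[ -f ~/.zshrc.local ] && source ~/.zshrc.local' >> ~/.zshrc    # machine-specific paths, kept out of git

    Compared to rsync, git gives history and merges tweaks made on different machines, which is usually why dotfiles end up in version control rather than in a sync job.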

    Read the article

  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to happen every few days, and occasionally up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know to include below. If you need any additional information, I'll be glad to provide it.

        - I can connect through the Hyper-V console.
        - I can't connect to network shares, IIS web apps, using RDP, or using ping.
        - Memory usage seems to be normal (3 of 4 GB).
        - Processor usage seems low.

    We don't know the exact time the server goes down, but the following error appears consistently around the time it goes down:

        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain controller in domain *** due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If this problem persists, please contact your domain administrator.

    Read the article

  • NGINX server_name issues

    - by Unai
    I have the following simple server block in nginx:

        server {
            listen 80;
            listen 8090;
            server_name domain.com;
            autoindex on;
            root /home/docroot;

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/docroot$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    After I add the relevant settings to my hosts file, I get the following (unexpected) behavior:

        1. http://domain.com/ and http://domain.com:8090/ work fine;
        2. http://domain.com:8090/future-cell-phone-technology-01-150x150.jpg works;
        3. http://domain.com/future-cell-phone-technology-01-150x150.jpg - ERROR! "The connection was reset"

    I've been troubleshooting (3) for a couple of hours and I'm unable to identify the culprit. I'm running nginx 1.0.10 (latest stable) on Debian 6.0.2 32-bit. This nginx instance runs another 40 or 50 sites with no problems.
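    A hedged next step: since the same file dies only on port 80, comparing what nginx logs at the exact moment of the reset usually narrows it down (another server block also listening on 80 catching the request is a typical culprit on a box serving 40+ sites):

        tail -f /var/log/nginx/error.log &
        curl -v http://domain.com/future-cell-phone-technology-01-150x150.jpg -o /dev/null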

    Read the article

  • Using symbolic links with git

    - by Alfredo Palhares
    I used to have my system configuration files all in one directory for better management, but now I need to use some version control on it. The problem is that git doesn't understand symbolic links that point outside of the repository, and I can't invert the roles (having the real files in the repository and the symbolic links at their proper paths), since some files are read before the kernel loads. I think I could use unison to sync the files in the repo with their paths, but it's just not practical. And hard links will probably be broken. Any ideas?
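    A hedged alternative that avoids links entirely: keep the git directory somewhere safe and declare / as the work tree, so the live files are tracked in place (paths and file choices below are placeholders):

        git init --bare /root/syscfg.git
        alias syscfg='git --git-dir=/root/syscfg.git --work-tree=/'
        syscfg add /etc/fstab /etc/mkinitcpio.conf
        syscfg commit -m "track live system files in place"

    Since the real files stay at their original paths, anything read at early boot is unaffected, and git never sees an out-of-repo symlink.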

    Read the article

  • grep --color=auto with -i option disables the matching text color, why?

    - by emptyset
    I was messing around with grep and put this in my .zshenv:

        export GREP_OPTIONS="--color=auto"
        export GREP_COLORS='mt=1;34'

    I was bonking my head on the keyboard and changing GREP_COLORS around for a minute, trying to figure out why the folder colors were working but the matching text wasn't. I was doing this:

        $ grep -R -n -i -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *

    The line numbers and file names were set with the default colors, but the matching text wasn't. After spending way too much time, I thought to do this:

        $ grep -R -n -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *

    (I removed the -i option.) That's all it took to get the matching text to correctly show up in bold blue. This is a Cygwin on Vista setup, with rxvt running zsh. Any idea why grep colors would break on specifying a case-insensitive match?

    Update: Under Cygwin 1.7 it's a little bit better - case-insensitive search works correctly, but it only highlights the word that matches the expression exactly. In other words, "FunctionFoo" highlights "FunctionFoo" but not "functionFoo" and vice versa. Probably a grep issue, so I'll be submitting it to that list.
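    One hedged thing to try: mt= is documented as shorthand for setting ms= (selected match) and mc= (context match) together, and some older grep builds only honor the long forms, so spelling them out costs nothing:

        export GREP_COLORS='ms=1;34:mc=1;34'

    The -i-specific breakage itself looks like a bug in the grep build Cygwin shipped at the time, so upgrading grep is the other obvious lever.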

    Read the article

  • nmap installation issue

    - by daasf
    Vanilla CentOS with the latest updates; installed gcc, and after ./configure:

        ....
        Configuration complete.
        Type make (or gmake on some *BSD machines) to compile.

        [root@winxp nmap-5.51]# make
        Makefile:375: makefile.dep: No such file or directory
        g++ -MM -I./liblua -I./libdnet-stripped/include -I./libpcre -I./libpcap -I./nbase -I./nsock/include -DHAVE_CONFIG_H -DNMAP_NAME=\"Nmap\" -DNMAP_URL=\"http://nmap.org\" -DNMAP_PLATFORM=\"x86_64-unknown-linux-gnu\" -DNMAPDATADIR=\"/usr/local/share/nmap\" -D_FORTIFY_SOURCE=2 main.cc nmap.cc targets.cc tcpip.cc nmap_error.cc utils.cc idle_scan.cc osscan.cc osscan2.cc output.cc payload.cc scan_engine.cc timing.cc charpool.cc services.cc protocols.cc nmap_rpc.cc portlist.cc NmapOps.cc TargetGroup.cc Target.cc FingerPrintResults.cc service_scan.cc NmapOutputTable.cc MACLookup.cc nmap_tty.cc nmap_dns.cc traceroute.cc portreasons.cc xml.cc nse_main.cc nse_utility.cc nse_nsock.cc nse_dnet.cc nse_fs.cc nse_nmaplib.cc nse_debug.cc nse_pcrelib.cc nse_binlib.cc nse_bit.cc > makefile.dep
        /bin/sh: g++: command not found
        make: *** [makefile.dep] Error 127

        [root@winxp nmap-5.51]# yum install g++ -y
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.ash.fastserv.com
         * base: centos.mirror.choopa.net
         * extras: mirror.trouble-free.net
         * updates: mirror.nexcess.net
        Setting up Install Process
        No package g++ available.
        Nothing to do
        [root@winxp nmap-5.51]#
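    Almost certainly the fix, the package name being the only thing to know: on CentOS the C++ compiler lives in the gcc-c++ package, not in one called g++:

        yum install -y gcc-c++
        make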

    Read the article

  • change directory automatically on ssh login

    - by Gareth
    Hi, I'm trying to get ssh to automatically change to a particular directory when I log in. I tried to get that behaviour working using the following directives in ~/.ssh/config:

        Host example.net
            LocalCommand "cd web"

    but whenever I log in, I see the following:

        /bin/bash: cd web: No such file or directory

    although there is definitely a web folder in my home directory. Even using an absolute path gives the same message. To be clear, if I type cd web after logging in, I get to the right folder. What am I missing here?

    EDIT: Different combinations of quotes/absolute paths give different error messages:

        LocalCommand "cd web"              ->  /bin/bash: cd web: No such file or directory
        LocalCommand cd web                ->  /bin/bash: line 0: cd: web: No such file or directory
        LocalCommand cd /home/gareth/web   ->  /bin/bash: line 0: cd: /home/gareth/web: Input/output error

    This makes me think that the quotes shouldn't be there, and that there's another error happening.
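    A hedged reading of those errors: LocalCommand runs on the local machine, not the remote one, so no amount of quoting will change the remote shell's directory (the quoted form fails because "cd web" is treated as a single command name). Forcing a remote command with a tty does the cd on the right side instead:

        ssh -t example.net 'cd ~/web && exec $SHELL -l'

    The single quotes matter: they keep $SHELL from expanding locally, so the login shell that replaces the command is the remote one.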

    Read the article

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf:

        <IfDefine WSGI>
            LoadModule wsgi_module modules/mod_wsgi.so
        </IfDefine>
        # vim: ts=4 filetype=apache

    So in my /etc/conf.d/apache I added the WSGI define:

        APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI"

    But when I try to list the loaded modules, mod_wsgi isn't listed:

        root ~ # apache2 -M | grep wsgi
        Syntax OK

    I also know that mod_wsgi isn't loading properly because the Apache configuration file doesn't recognize WSGIScriptAlias. By the way, for Django to work I need to include a custom Apache configuration file. Where should I insert the line below?

        Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"

    I currently have that in the httpd.conf file, but I feel like that file will get reset whenever I upgrade Gentoo or a related package.

    EDIT: it seems the mod_wsgi file is located at /usr/lib64/apache2/modules/mod_wsgi.so. Here are my detailed Apache settings:

        root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V
        Server version: Apache/2.2.21 (Unix)
        Server built: Mar 7 2012 06:52:30
        Server's Module Magic Number: 20051115:30
        Server loaded: APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture: 64-bit
        Server MPM: Prefork
          threaded: no
          forked: yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/usr"
         -D SUEXEC_BIN="/usr/sbin/suexec"
         -D DEFAULT_PIDLOG="/var/run/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="/var/run/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
         -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
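    A hedged explanation for the "missing" module: running the binary by hand bypasses Gentoo's init script, so the APACHE2_OPTS defines from /etc/conf.d are not applied and anything inside <IfDefine WSGI> is skipped. Passing the define manually should show it:

        apache2 -D WSGI -M | grep wsgi

    For the Django Include line, dropping it into a file under /etc/apache2/vhosts.d/ (which Gentoo's default vhost pulls in) is the usual upgrade-safe spot, rather than editing httpd.conf directly.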

    Read the article

  • samba4 not building in Arch

    - by kmplsv
        cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
        cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
        cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
        cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
        for I in manpages/*.8; do \
            /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
        done
        /bin/install: cannot stat `manpages/*.8': No such file or directory
        make: *** [installdocs] Error 1
        Aborting...
        ==> ERROR: Makepkg was unable to build samba4.
        ==> Restart building samba4 ? [y/N]

    Any ideas as to what is causing my build to fail? I assume it's an issue with manpages, but I can't figure out exactly what package it is looking for that I don't have.
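    A hedged guess based on the failing step: make installdocs expects pre-generated man pages in manpages/*.8, and samba's build only produces them when the DocBook XSLT toolchain is present, so installing it before restarting the AUR build is worth a try:

        pacman -S --needed libxslt docbook-xsl
        # then restart the samba4 build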

    Read the article

  • setting up git on cygwin - openssl

    - by Pete Field
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to the iconv.h and openssl files, which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
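    A hedged fix: everything after the first few lines is fallout from the two missing headers, so installing the development packages through Cygwin's setup.exe should clear the whole cascade. A sketch of the procedure (package names are from that era's Cygwin and may have shifted since):

        # re-run Cygwin's setup.exe and select these under the Devel/Libs categories:
        #   libiconv  openssl-devel  zlib-devel
        # then, back in the source directory:
        make && make install

    git's compat layer unconditionally includes iconv.h and the OpenSSL headers when built this way, which is why the plain compiler install is not enough.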

    Read the article

  • Script to list current user's mapped network drives

    - by Dmart
    I have a Windows XP / Server 2003 environment where users have mapped different network drives themselves using arbitrary drive letters. Some of these users do not know how to tell the true UNC path of these drives, and I would like to be able to run a script or program to query those drives and show me the drive letters and the corresponding UNC paths. I would like to see output like "net use" in that user's context, so that I can see what drives THEY have mapped. I would need to do this using my own admin account, which is where the difficulty lies. I understand this information is stored in the HKCU registry? I would love to be able to do this in PowerShell, but a VBScript or even a standalone executable would do. Thanks.
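    A hedged PowerShell sketch: each user's mappings live under HKEY_USERS\<SID>\Network (that hive is HKCU from the user's point of view), so an admin can read them for every loaded profile without being that user:

        # enumerate mapped drives for all loaded user hives; only users with a
        # loaded profile (e.g. currently logged on) will show up
        New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS | Out-Null
        Get-ChildItem HKU:\ | Where-Object { $_.PSChildName -match '^S-1-5-21-' } |
            ForEach-Object {
                $sid = $_.PSChildName
                Get-ChildItem "HKU:\$sid\Network" -ErrorAction SilentlyContinue |
                    ForEach-Object {
                        $unc = (Get-ItemProperty $_.PSPath).RemotePath
                        '{0}  {1}: -> {2}' -f $sid, $_.PSChildName, $unc
                    }
            }

    Mapping each SID back to a user name (e.g. via wmic useraccount) is the one extra step this leaves out.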

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev { $service_env = 'dev' }
        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644
            }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, since the notify statement outputs the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behavior at the template level instead of having several closely named directories. Thanks.
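    A likely culprit, hedged only because it's diagnosed from the snippet alone: Puppet does not interpolate variables inside single-quoted strings, so template() literally receives the file name shell/bash/$service_env.erb. Double quotes (with braces around the variable) should fix it:

        content => template('_global/prefix.erb',
                            'shell/bash/global.erb',
                            "shell/bash/${service_env}.erb"),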

    Read the article
