Search Results

Search found 16797 results on 672 pages for 'directory traversal'.


  • GVIM hangs when saving through GVFS' FTP

    - by Lie Ryan
    I love Gnome's Nautilus FTP integration: being able to mount a remote FTP directory as a regular bookmark/directory and double-click any remote file to open it in any unmodified program. I also love editing text files with GVim. However, if I double-click a file in Nautilus to open it in GVim, saving the file takes about 10 seconds, and GVim hangs for that long. The major irritant is that I cannot continue editing while the editor waits for the write to finish; the delay interrupts my workflow and thought process, and saving becomes painful. The other problem is that I don't think simply uploading a file should take that much time. I'm aware of GVim's internal FTP support, but it is not as well integrated with Nautilus's FTP. So, a few questions:
    - Is there a way to make GVim or GVFS save in the background while I continue editing?
    - Why is GVFS so slow?
    - Is there any way to make GVFS use a single persistent FTP connection instead of creating a new FTP connection each time?
    I'm on Gentoo Linux x86-64.
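
    One workaround worth trying (a sketch, not verified on this setup; the host name and mount point below are placeholders): mount the server with curlftpfs instead of GVFS, which keeps a single persistent FTP connection, and point GVim at the mounted path:

      # mount the FTP server as a local directory over one persistent connection
      curlftpfs ftp://user:password@ftp.example.com /mnt/remote
      gvim /mnt/remote/path/to/file.txt
      # unmount when done
      fusermount -u /mnt/remote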


  • mod_fcgid produces random 500 Errors

    - by DmitrySemenov
    php 5.4.7 via mod_fcgid. When I run the site, sometimes it works and sometimes it crashes with a 500 Internal Error. This is what I see in error.log every time I run the script:
      [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
      [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php
    Any ideas? The vhost config:
      <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot "/home/www/sites/test.com/html/development"
        ServerName test.com
        ServerAlias www.test.com
        ErrorLog "/home/www/sites/test.com/logs/error_log"
        CustomLog "/home/www/sites/test.com/logs/access_log" common
        <IfModule mod_fcgid.c>
          <Directory /home/www/sites/test.com/html/development>
            Options +ExecCGI
            AllowOverride All
            AddHandler fcgid-script .php
            FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php
            Order allow,deny
            Allow from all
          </Directory>
          FcgidMaxRequestLen 1073741824
        </IfModule>
      </VirtualHost>
    The fcgid conf:
      LoadModule fcgid_module modules/mod_fcgid.so
      # Use FastCGI to process .fcg .fcgi & .fpl scripts
      AddHandler fcgid-script fcg fcgi fpl
      # Sane place to put sockets and shared memory file
      FcgidIPCDir /var/run/mod_fcgid
      FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
      IdleTimeout 300
      BusyTimeout 300
      ProcessLifeTime 7200
      IPCConnectTimeout 300
      IPCCommTimeout 7200
      PHP_Fix_Pathinfo_Enable 1
    php-fcgi-starter.php:
      #!/bin/sh
      PHP_CGI=/usr/local/php547/bin/php-cgi
      PHP_INI=/etc/php547-fastcgi.ini
      export PHP_FCGI_TIMEOUT=1200
      #export PHP_FCGI_CHILDREN=6
      export PHP_FCGI_MAX_REQUESTS=1000
      exec $PHP_CGI -c $PHP_INI
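
    One known mod_fcgid/php-cgi interaction that produces exactly this pair of log lines, worth ruling out here (a sketch, not a confirmed diagnosis for this setup): php-cgi exits on its own after PHP_FCGI_MAX_REQUESTS requests, and if mod_fcgid still routes a request to that worker it sees the connection reset. Capping mod_fcgid's own per-process limit at or below the PHP limit avoids the race:

      # match the wrapper's PHP_FCGI_MAX_REQUESTS=1000 so mod_fcgid retires
      # each worker before php-cgi kills itself mid-request
      FcgidMaxRequestsPerProcess 1000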


  • Strange ssh key issue

    - by user55714
    Scenario 1: I am doing this from the /home/deploy directory. I am trying to set up ssh with github for capistrano deployment. This has been an absolute nightmare. When I do ssh [email protected] as the deploy account, I get: Permission denied (publickey). So maybe the key is not being found. If I do ssh-add /home/deploy/.ssh/id_rsa, I get: Could not open a connection to your authentication agent. (I did verify that the ssh-agent was running.) If I do exec ssh-agent bash and then repeat the ssh-add, the key does get added and I can ssh into github. Now I exit from the ssh connection to my server, ssh back in, and I can't ssh into github anymore!
    Scenario 2: If I log in to my remote server, cd into my .ssh directory, and then ssh into github, it all works fine.
    I guess there is a problem with locating the key, and for some reason the agent isn't functioning correctly. Any ideas? Here is a pastie with more details -- my .bashrc, permissions, etc.: http://pastie.org/pastes/1190557/
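
    A sketch of the usual fix for this pattern (assuming the key really lives at /home/deploy/.ssh/id_rsa): tell ssh which key to use via ~/.ssh/config, so no agent is needed at all, and make sure the permissions let ssh read it:

      # /home/deploy/.ssh/config -- "git" is GitHub's standard ssh user
      Host github.com
          User git
          IdentityFile /home/deploy/.ssh/id_rsa

      # ssh ignores keys and config files with loose permissions
      chmod 700 /home/deploy/.ssh
      chmod 600 /home/deploy/.ssh/id_rsa /home/deploy/.ssh/config

    With that in place, each ssh invocation reads the key directly, so ssh-add and the agent disappearing between logins stop mattering.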


  • How to have PHP and mod_wsgi python app on the same domain?

    - by Lazik
    I am using apache with mod_wsgi (python3) on ubuntu 12.04. I have a python app (bottle) at www.mysite.com/. In my python app I have routes like www.mysite.com/abbb?q=blab. I would like the path www.mysite.com/forum to resolve to a php app (Simple Machines Forum). Ideally I would like apache to handle the /forum part and pass it to php (instead of coding it in the python app). I don't know if that's possible; I'm new to this. I have read https://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#The_Apache_Alias_Directive but I don't understand how to use it. Here is my apache conf for the mod_wsgi app; I don't know how to specify the PHP portion:
      <VirtualHost *:80>
        ServerName www.ex.com
        ServerAlias ex.com *.ex.com
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [R=301,L]
        WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
        WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi
        <Directory /var/www/vhosts/ex>
          WSGIProcessGroup ex
          WSGIApplicationGroup %{GLOBAL}
          Order deny,allow
          Allow from all
        </Directory>
      </VirtualHost>
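
    The pattern from the linked wiki page is to carve /forum out of the WSGI-mounted URL space with an Alias, which Apache evaluates ahead of WSGIScriptAlias. A sketch, assuming SMF is unpacked at /var/www/vhosts/ex/forum (a hypothetical path) and that Ubuntu's libapache2-mod-php5 package is installed, so .php files are already handled globally:

      # goes inside the same <VirtualHost *:80> block
      Alias /forum /var/www/vhosts/ex/forum
      <Directory /var/www/vhosts/ex/forum>
        Order deny,allow
        Allow from all
      </Directory>

    Requests under /forum are then served from the filesystem (and handed to PHP) instead of being routed into the bottle app.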


  • Is there a faster way to change default apps associated with file types on OS X?

    - by Lri
    Is there anything more convenient than using RCDefaultApp or Magic Launch, or just repeatedly pressing the Change All buttons in Finder's information panels? I thought about writing a shell script that would modify the CFBundleDocumentTypes arrays in Info.plist files. But each app has multiple keys (sometimes an icon) that would need to be changed. lsregister can't be used to make specific modifications to the Launch Services database:
      $ `locate lsregister` -h
      lsregister: [OPTIONS] [ <path>... ]
                  [ -apps <domain>[,domain]... ]
                  [ -libs <domain>[,domain]... ]
                  [ -all <domain>[,domain]... ]
      Paths are searched for applications to register with the Launch Service database.
      Valid domains are "system", "local", "network" and "user". Domains can also be specified using only the first letter.
      -kill     Reset the Launch Services database before doing anything else
      -seed     If database isn't seeded, scan default locations for applications and libraries to register
      -lint     Print information about plist errors while registering bundles
      -convert  Register apps found in older LS database files
      -lazy n   Sleep for n seconds before registering/scanning
      -r        Recursive directory scan, do not recurse into packages or invisible directories
      -R        Recursive directory scan, descending into packages and invisible directories
      -f        force-update registration even if mod date is unchanged
      -u        unregister instead of register
      -v        Display progress information
      -dump     Display full database contents after registration
      -h        Display this help
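
    One more CLI option (an assumption that it fits this workflow; duti is a third-party tool installable via MacPorts or Homebrew): duti writes default-handler assignments per UTI or extension straight into Launch Services, which scripts well:

      # duti -s <bundle id> <UTI or extension> <role>; the bundle id below is hypothetical
      duti -s com.example.SomeEditor public.plain-text all
      # assignments can also be listed in a file, one per line, and applied in bulk
      duti ~/.duti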


  • 7ZIP - Command Line Compression | Can Never Keep it Simple

    - by OneTwoYou
    I've been Googling for a few hours on how to compress just a file inside a directory, and I can't find anything -- only how to compress a folder in general. Now I want to know how to compress a folder that is inside another folder, along with the files in it. Current code:
      7zG.exe a -tzip "test.zip" dontcompressme/compressme/new.txt
      pause
    As you can see above, I don't want to compress the first folder, only the second and whatever is within that folder. I have 7zG.exe sitting in the main folder, and the files I want are three folders in, but I don't know how to compress only those. Here is my directory list:
      Folder One (don't compress)
        Folder Two (don't compress)
          Folder Three (okay to compress)
            Document One.txt (okay to compress)
            Document Two.txt (okay to compress)
            Index.html (okay to compress)
    Does anyone know how I can do this in the most simple way ever invented by man? Whenever I search with Google, it goes through all these methods for compressing a folder, but not the way I want. It makes me kind of upset that I can't get a simple and straightforward answer. Thank you if you answer my question.
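
    7-Zip stores whatever path you hand it, so the usual trick is to cd into the deepest folder you want stripped from the archive and pass a relative path from there. A sketch using the names from the listing above, with 7zG.exe and test.zip in the main folder:

      cd "Folder One\Folder Two"
      "..\..\7zG.exe" a -tzip "..\..\test.zip" "Folder Three"
      cd ..\..
      pause

    The archive then contains entries starting at "Folder Three\" and nothing above it. To drop "Folder Three" from the stored paths as well, cd one level deeper and archive * instead.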


  • ssh Prompts For Password After Account Unlocked - Despite ssh key?

    - by user1011471
    Here's what happened:
    1. I set up an ssh key so that user could ssh from A to B without a password.
    2. I got user's password wrong in some other context too many times, and user's account got locked out. (IT uses Active Directory here.)
    3. IT unlocked the account.
    4. Concurrent to the unlocking, a script was running, calling something like ssh user@B some-health-check-command every 5 seconds or so -- which seemed to work fine before I caused user to get locked out in step 2.
    5. IT reports user reliably gets locked out a short time after each unlock attempt.
    I thought the ssh key would allow ssh user@B some-command as long as the account is not locked. But it behaves as if, when user gets unlocked, B suddenly asks for a password, and since my command repeatedly runs without supplying a password, the account gets locked out after 5 attempts: Account cannot be accessed at this time. Please contact your system administrator.
    My questions are: Is that what's happening? Or: what's happening? More importantly, how can I reconfigure things so that my script doesn't cause problems? Can I accomplish what I want without having to install Expect? (I don't know if I have permission to do so.)
    Other notes:
    - Not using ssh-agent currently.
    - The ssh command is running on our Jenkins master, a linux box. A and B are Mac OS X.
    - user is managed in Active Directory and normally can sign into all three machines.
    - Other than these things and the ssh key I set up, everything else has the default configuration as far as I know.
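
    Whatever the root cause turns out to be, the health check can be made incapable of burning password attempts by forbidding ssh from ever falling back to interactive authentication (standard OpenSSH client options; whether a failed key-only attempt still counts against the AD lockout policy depends on how sshd/pam is wired on B):

      # fail immediately instead of prompting for (and miscounting) a password
      ssh -o BatchMode=yes -o PasswordAuthentication=no user@B some-health-check-command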


  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't -- especially on machines not under my continuous administration. In those cases I am always challenged with the task of traversing directories with my administrative user via Windows Explorer where regular users do not have "read" permissions. The two possible approaches to this problem so far:
    1. Change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs, just look at the folder's contents.
    2. Use an elevated cmd.exe prompt along with a bunch of command-line utilities -- this usually takes a lot of time when browsing through large and/or complex directory structures.
    What I would love to see is a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need for downloading / installing anything), are very welcome too.
    I have seen this post with a suggestion for altering HKCR -- interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders -- unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token, and re-authentication (and thus the creation of a new token) does not happen when accessing localhost.


  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. I have valid data in hdfs:// ready for processing and wrote a little streaming mapper in Python. When I launch a mapper-only job using:
      hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP
    hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state, and nothing else happens. CPU usage stays at zero. (The job is created, though.) If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues, maybe: ports are enabled explicitly, case by case, in the cluster security group.


  • LDAP groups not applying to filesystem permissions

    - by BeepDog
    The system is Arch Linux, and I'm using nss-pam-ldapd (0.8.13-4) to connect to LDAP. I've got my users and some groups in LDAP:
      [root@kain tmp]# getent group
      <localgroups snipped>
      dkowis:*:10000:
      mp3s:*:15000:rkowis,dkowis
      music:*:15002:rkowis,dkowis
      video:*:15003:transmission,rkowis,dkowis,sickbeard
      software:*:15004:rkowis,dkowis
      pictures:*:15005:rkowis,dkowis
      budget:*:15006:rkowis,dkowis
      rkowis:*:10001:
    And I have some directories that are setgid video so that the video group stays, and they're configured g=rwx so that members of the video group can write to them:
      [root@kain video]# ls -ld /srv/video
      drwxrwxr-x 8 root video 208 Oct 19 20:49 /srv/video
    However, members of that group, say dkowis, cannot write into that directory:
      [root@kain video]# groups dkowis
      mp3s music video software pictures dkowis
    The total number of groups dkowis is in is around 7; I redacted a few here.
      [dkowis@kain wat]$ cd /srv/video
      [dkowis@kain video]$ touch something
      touch: cannot touch 'something': Permission denied
      [dkowis@kain video]$ groups
      dkowis mp3s music video software pictures
    I'm at a loss as to why my groups show up in getent group but my filesystem permissions are not being respected. I've tried making a new directory in /tmp, setting its group permissions to rwx, and then trying to write a file in there: it doesn't work. The only time it does work is if I open it wide up, allowing o=rwx. That's obviously not what I want, and I'm not able to figure out what my missing piece is. Thanks in advance.
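
    One thing worth checking (an assumption from the symptoms, not a confirmed diagnosis): Arch ships a local video group in /etc/group, so the name "video" may resolve to a different numeric GID than the LDAP group 15003. The directory and the session's group token can then disagree even though every name matches. A sketch of how to compare the numbers:

      # does a local group shadow the LDAP one?
      getent group video
      grep '^video:' /etc/group
      # which numeric GIDs does the account actually carry?
      id dkowis
      # which numeric GID does the directory actually carry?
      stat -c '%G %g' /srv/video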


  • Mac OS X: Update Python for Shell

    - by Nathan G.
    So, I see similar questions, but none of the answers work for me. I updated Python to 3.1.3 from 2.6.1. Everything works, except: when I type python into Terminal, I get:
      Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
      [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
      Type "help", "copyright", "credits" or "license" for more information.
      >>>
    So, how do I change the version of Python that runs in the shell? I've tried the script that they provide. It adds their directory to my $PATH, but it still doesn't change the version that's displayed in Terminal. Here's what I get when I echo $PATH:
      /Library/Frameworks/Python.framework/Versions/3.1/bin:/Library/Frameworks/Python.framework/Versions/3.1/bin:/Library/Frameworks/Python.framework/Versions/3.1/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
    It appears that the script has added its directory every time I ran it (I tried it a few times, naturally). Here are links to caps of what is in the other relevant folders it mentions: /Library/Frameworks/Python.framework/Versions/3.1/bin, /usr/local/bin, /usr/bin. Thanks in advance for any ideas!
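
    A plausible explanation (an assumption, not verified against this machine): the python.org 3.x framework installs python3.1 but deliberately no plain python binary, so even with its bin directory first in $PATH, bare python keeps resolving to Apple's 2.6.1. A sketch of how to confirm and work around it:

      # list every matching binary the shell can see, in resolution order
      which -a python python3.1
      ls /Library/Frameworks/Python.framework/Versions/3.1/bin
      # if python3.1 is there, make bare `python` mean 3.1 in interactive shells
      echo 'alias python=python3.1' >> ~/.bash_profile

    The duplicate PATH entries are harmless, but trimming the repeated lines the installer appended to ~/.bash_profile stops them accumulating.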


  • Apache Server Status page on port 8443

    - by batman
    I'm very new to apache. I tried to enable the server-status page. I added status.conf and status.load to the mods-enabled directory, and changed apache2.conf to include the whole mods-enabled directory. This is status.conf (the default settings):
      <IfModule mod_status.c>
        # Allow server status reports generated by mod_status,
        # with the URL of http://servername/server-status
        # Uncomment and change the "192.0.2.0/24" to allow access from other hosts.
        <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1 ::1
          # Allow from 192.0.2.0/24
        </Location>
        # Keep track of extended status information for each request
        ExtendedStatus On
        # Determine if mod_status displays the first 63 characters of a request or
        # the last 63, assuming the request itself is greater than 63 chars.
        # Default: Off
        #SeeRequestTail On
        <IfModule mod_proxy.c>
          # Show Proxy LoadBalancer status in mod_status
          ProxyStatus On
        </IfModule>
      </IfModule>
    I restarted my server. I'm redirecting all ports to 8443, which in turn turns my requests into localhost:8443/server-status -- and that throws a 404 error. Is there any way to get around this? Thanks in advance.
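
    Before changing the config, it may help to see exactly what Apache receives on each port (a sketch; assumes curl is available on the server). If the direct request on 8443 works, the 404 comes from whatever performs the port redirection rewriting the path, not from mod_status:

      # query apache directly, bypassing the port redirection
      curl -v http://localhost:8443/server-status
      # compare with the plain port-80 server
      curl -v http://localhost/server-status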


  • Script to list users' mapped drives not giving results or errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy, but many users have manually mapped drives and we need to find those mappings. I have created a PowerShell script that remotely gets the drive mappings. It works on most computers, but there are many that return no results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that return nothing leave empty files. I can ping these machines. If a machine is not turned on, I do get the error message that the RPC server is not available. My domain user account is in a group that is in the local admin group. I have no idea why some are not working. Here is the script:
      # Load list into variable, which will become an array of strings
      If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
      If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
      If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return }
      If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }
      $computerlist = Get-Content C:\Scripts\Workstations.txt

      # Loop through each item in the array (each computer in the list of computers we loaded into the variable)
      ForEach ($computer in $computerlist) {
          $diskObject = Get-WmiObject Win32_MappedLogicalDisk -computerName $computer | Select Name,ProviderName | Out-File C:\Tester\Computers\$computer.txt -width 200
      }

      Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt
      $strings = Get-Content C:\Tester\KnownMaps.txt
      Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -notmatch -simplematch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
      Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -simplematch | Out-File C:\Tester\Drivemaps-match.txt -Width 200
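
    Two things worth checking (a sketch, not a confirmed diagnosis). First, Win32_MappedLogicalDisk describes drives mapped in the security context of the account making the query, so a remote WMI query under an admin account can legitimately come back empty even though the console user has mappings. Second, the loop above swallows WMI errors; forcing them to throw records the real reason per machine:

      ForEach ($computer in $computerlist) {
          Try {
              Get-WmiObject Win32_MappedLogicalDisk -ComputerName $computer -ErrorAction Stop |
                  Select-Object Name,ProviderName |
                  Out-File C:\Tester\Computers\$computer.txt -Width 200
          } Catch {
              # record the failure instead of leaving an empty file
              "ERROR: $($_.Exception.Message)" | Out-File C:\Tester\Computers\$computer.txt
          }
      }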


  • tomcat 'document base does not exist' error (but it does)

    - by SpliFF
    Gentoo / Tomcat 6.
      INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
      Sep 8, 2009 10:34:51 AM org.apache.catalina.core.StandardContext resourcesStart
      SEVERE: Error starting static Resources
      java.lang.IllegalArgumentException: Document base /www/rivervalley/site does not exist or is not a readable directory
          at org.apache.naming.resources.FileDirContext.setDocBase(Unknown Source)
          at org.apache.catalina.core.StandardContext.resourcesStart(Unknown Source)
          at org.apache.catalina.core.StandardContext.start(Unknown Source)
          at org.apache.catalina.core.ContainerBase.start(Unknown Source)
          at org.apache.catalina.core.StandardHost.start(Unknown Source)
          at org.apache.catalina.core.ContainerBase.start(Unknown Source)
    Oh really? Then how come:
      ls -la /www/rivervalley/site/
      drwxr-xr-x 12 tomcat tomcat 4096 Sep 8 09:56 .
      drwxr-xr-x 16 tomcat tomcat 4096 Jun 29 16:22 ..
      -rwxr--r--  1 tomcat tomcat  520 Jul 3 02:15 Application.cfm
      drwxr-xr-x  2 tomcat tomcat 4096 Sep 8 09:56 WEB-INF
    and...
      tomcat 18916 1.0 5.5 1159188 167892 ? Ssl 10:37 0:11 /opt/sun-jdk-1.5.0.18/bin/java -Djava.util.loggin
    Hell, ANY account can read that directory, so the claim is utter nonsense. What else can cause this? Here's my relevant server.xml section:
      <Host name="rivervalley" appBase="webapps" unpackWARs="false" autoDeploy="false" xmlValidation="false" xmlNamespaceAware="false">
        <Context path="" docBase="/www/rivervalley/site" />
      </Host>
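
    Read permission on the final directory isn't the whole story: the tomcat user also needs execute (search) permission on every ancestor, and /www and /www/rivervalley aren't shown above. A quick way to check the whole chain (namei ships with util-linux):

      # show the permissions of every component along the path
      namei -l /www/rivervalley/site
      # or simply try it as the tomcat user
      su -s /bin/sh tomcat -c 'ls /www/rivervalley/site'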


  • Apache vhosts config: Host Name instead of IP Address

    - by Johe Green
    I have a domain (example.com) hosted at an external provider, and I directed the subdomain sub.example.com to my ubuntu server (12.04 with apache2). On the server I have a vhost set up like this; the rest of the config is basically the apache2 standard:
      <VirtualHost *:80>
        ServerName sub.example.com
        ServerAlias sub.example.com
        ServerAdmin [email protected]
        DocumentRoot /var/www/sub.example.com
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        WSGIScriptAlias / /home/application/sub.example.com/wsgi.py
        <Directory /home/application/sub.example.com>
          <Files wsgi.py>
            Order allow,deny
            Allow from all
          </Files>
        </Directory>
      </VirtualHost>
    When I enter http://sub.example.com in my browser, my application shows up fine, but the domain is replaced by the IP address of my server. Do I have to configure my server somewhere else to deliver all its content under my domain sub.example.com?
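
    Since the vhost itself contains no redirect, it's worth checking how the provider "directed" the subdomain (a sketch; both tools are standard). Many providers implement forwarding as an HTTP redirect to the IP rather than a real DNS record, which would produce exactly this symptom:

      # a real A record prints the server's IP, and the URL bar keeps the name
      dig +short sub.example.com
      # an HTTP "forward" shows up as a Location: header pointing at the bare IP
      curl -sI http://sub.example.com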


  • Implementing an NGINX load balancer

    - by Alaa Alomari
    I have two servers (ServerA 192.168.1.10, ServerB 192.168.1.11), and the DNS for test.mysite.com points to ServerA. On ServerA I have this nginx config:
      upstream lb_units {
        server 192.168.1.10 weight=2 max_fails=3 fail_timeout=30s; # Reverse proxy to BES1
        server 192.168.1.11 weight=2 max_fails=3 fail_timeout=30s; # Reverse proxy to BES2
      }
      server {
        listen 80;                    # Listen on the external interface
        server_name test.mysite.com;  # The server name
        root /var/www/test;
        index index.php;
        location / {
          proxy_pass http://lb_units; # Load balance the URL location "/" to the upstream lb_units
        }
        location ~ \.php$ {
          include /etc/nginx/fastcgi_params;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /var/www/test/$fastcgi_script_name;
        }
      }
    And ServerB runs apache with the following:
      <VirtualHost *:80>
        RewriteEngine on
        <Directory "/var/www/test">
          AllowOverride all
        </Directory>
        DocumentRoot "/var/www/test"
        ServerName test.mysite.com
      </VirtualHost>
    But whenever I try to browse test.mysite.com, it serves me from ServerA. I also tried marking ServerA as down (server 192.168.1.10 down;) in lb_units, and it is still the same, serving me from ServerA. Any idea what I have done wrong?
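
    One structural problem stands out (a sketch of the usual fix; the port 8080 below is an assumption): the upstream's first entry is this same machine's port 80, i.e. nginx itself, and no Host header is forwarded, so the backends can't match the test.mysite.com vhost either. Moving the local backend to its own port and forwarding the host addresses both:

      upstream lb_units {
        server 127.0.0.1:8080  weight=2 max_fails=3 fail_timeout=30s;  # local backend, NOT nginx's own port 80
        server 192.168.1.11:80 weight=2 max_fails=3 fail_timeout=30s;  # ServerB
      }
      server {
        listen 80;
        server_name test.mysite.com;
        location / {
          proxy_pass http://lb_units;
          proxy_set_header Host $host;  # lets apache on ServerB pick the right vhost
        }
      }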


  • Directories shown as files when sharing a mounted cifs drive

    - by Johan Sigfred Abildskov
    I have an issue where a directory is shown as a file when accessing a samba share (on Ubuntu 12.10) from a Windows machine. The output of ls -ll in the folder on the linux box is as follows:
      chubby@chubby:/media/blackhole/_Arkiv$ ls -ll
      total 0
      drwxrwxrwx 0 jv users 0 Jun 18  2012 _20
      drwxrwxrwx 0 jv users 0 Apr 17  2012 _2006
      drwxrwxrwx 0 jv users 0 Apr 17  2012 _2007
      drwxrwxrwx 0 jv users 0 May 12  2011 _2008
      drwxrwxrwx 0 jv users 0 Feb 19 09:53 _2009
      drwxrwxrwx 0 jv users 0 Dec 20  2011 _2010
      drwxrwxrwx 0 jv users 0 May  8  2012 _2011
      drwxrwxrwx 0 jv users 0 Mar  5 11:37 _2012
      drwxrwxrwx 0 jv users 0 Feb 28 10:09 _2013
      drwxrwxrwx 0 jv users 0 Feb 28 11:18 _Mailarkiv
      drwxrwxrwx 0 jv users 0 Jan  3  2011 _Praktikanter
    The entry in /etc/fstab is:
      # Mounting blackhole
      //192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
    When I access the share directly from the NAS on my Windows box, there are no issues. The version of Samba is 3.6.6, but I couldn't find anything relevant in the changelogs. I've tried mounting it in different locations with different permissions, users and groups, but I have not made any progress. Due to my low reputation on serverfault (I'm mostly a stackoverflow user) I'm unable to post a screenshot showing the directories listed as files. If I type the full path in explorer, the directory listing works excellently, except that any subdirectories are then shown as files. Any attack vector for this issue would be greatly appreciated. Please let me know if I have provided insufficient details.
    Edit: The same share, when accessed from OS X, works perfectly, listing the directories as directories. Best regards!
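
    A long shot worth testing (an assumption based on the zero hard-link counts in the listing above, which CIFS mounts with unix extensions are known to report and which can confuse clients of a re-exported share): mount the NAS share without the unix extensions so stat data is synthesized conventionally before Samba re-exports it:

      # /etc/fstab -- same line with nounix,noserverino appended to the options
      //192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777,nounix,noserverino 0 0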


  • Remote access to phpmyadmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf so that the CentOS box listens on ports 80 and 8080:
      Listen 80
      Listen 8080
    I set up phpmyadmin on my CentOS 6.4 recently. I can access and log in to phpmyadmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please?
    config.inc.php:
      $i++;
      /* Authentication type */
      $cfg['Servers'][$i]['auth_type'] = 'http';
      /* Server parameters */
      $cfg['Servers'][$i]['host'] = 'localhost';
      $cfg['Servers'][$i]['connect_type'] = 'tcp';
      $cfg['Servers'][$i]['compress'] = false;
      /* Select mysql if your server does not have mysqli */
      $cfg['Servers'][$i]['extension'] = 'mysql';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;
    phpmyadmin.conf:
      <Directory /var/www/html/phpmyadmin/>
        order allow,deny
        allow from all
      </Directory>
    Furthermore, I can access the web pages stored on the CentOS box from my other computer without problems. After using wireshark and tcpdump, I found that the server keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):
      23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
      23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0
    I have disabled the iptables service on the CentOS box already.


  • Xubuntu stuck after login

    - by viraptor
    How can I debug an issue with Xubuntu 12.04 (fresh install) that just sits idle for about 30 seconds after login? The login screen is displayed without delay. After login, I get my desktop background but no panels or auto-starting apps. It doesn't seem to be an authentication/pam issue, because I can log in without delay at the console while the graphical session is still stuck. There's no disk or cpu activity and no obvious respawning of any process when I look at htop. There's nothing obviously wrong in .xsession-errors. The most interesting errors:
      openConnection: connect: No such file or directory
      cannot connect to brltty at :0
      WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-wFn4VR/pkcs11: No such file or directory
      ...
      (polkit-gnome-authentication-agent-1:2131): polkit-gnome-1-WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
      ** Message: applet now removed from the notification area
      ** Message: using fallback from indicator to GtkStatusIcon
      ...
      (xfce4-indicator-plugin:2176): libindicator-WARNING **: IndicatorObject class does not have an accessible description.
      ...
      (xfce4-indicator-plugin:2176): Indicator-Application-WARNING **: Unable to get application list: Operation was cancelled
    Bootchart seems to end before I log in, so it's not that helpful. Where else can I look for information?


  • Apache: Isn't chmod 755 enough to set up symlink or alias on Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents, like ~/Documents/foo/html, to be http://localhost/foo. The first thing I thought of doing is using Alias as follows:
      Alias /foo /Users/someone/Documents/foo/html
      <Directory "/Users/someone/Documents/foo/html">
        Options Indexes FollowSymLinks MultiViews
        Order allow,deny
        Allow from all
      </Directory>
    This got me 403 Forbidden. In the error_log I got:
      [error] [client ::1] (13)Permission denied: access to /foo denied
    The subfolder in question has chmod 755 access. I've tried specifying files like http://localhost/foo/test.php, but that didn't work either.
    Next, I tried the symlink route. I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has:
      Options Indexes FollowSymLinks MultiViews
    This still got me 403 Forbidden:
      Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo
    What else do I need to set this up?
    Solution:
      $ chmod 755 ~/Documents
    In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.


  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypt()ed passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:
      DBDriver mysql
      DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"
      <Directory />
        AllowOverride None
        Order allow,deny
        Allow from all
        AuthName 'CalDav'
        AuthType Basic
        AuthBasicProvider dbd
        Require valid-user
        AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
      </Directory>
    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
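
    One way to narrow it down (a sketch): Apache's DBD provider understands htpasswd-style hashes (crypt(3), the $apr1$ MD5 variant, and {SHA}). Generating a known-good hash, storing it for a test user, and trying to log in separates "wrong stored format" from "module misconfigured":

      # print user:hash without touching any file; -d = crypt(3), -m = apache MD5
      htpasswd -nbd testuser testpass
      htpasswd -nbm testuser testpass

    If the htpasswd-generated crypt hash authenticates but the existing rows don't, the stored format, not mod_authn_dbd, is the problem.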


  • Unable to install .NET Framework 3.5 on my XP machine

    - by CptanPanic
    I have an application that requires .NET 3.5, but I can't install it. The installer quits, saying it "has encountered a problem during setup". If I look at some of the error logs in the tmp directory, I see: Error occurred while initializing fusion. It seems that I have 2.0 SP1 installed. Any ideas how I can get it to work? I went through the temp directory and found these references to the error:
      [04/17/12,18:55:09] Microsoft .NET Framework 2.0a: [2] Error: Installation failed for component Microsoft .NET Framework 2.0a. MSI returned error code 1603
      [04/17/12,18:55:27] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a is not installed.
      04/19/12 19:08:48 DDSet_Status: Loading fusion.dll using LoadLibraryShim()
      04/19/12 19:08:48 DDSet_Error: Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700
      04/19/12 19:08:48 DDSet_Status: Loading fusion.dll using LoadLibraryShim()
      04/19/12 19:08:48 DDSet_Error: Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700
      MSI (s) (74!08) [19:08:48:062]: Product: Microsoft .NET Framework 2.0 Service Pack 1 -- Error 25007.Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700
      Error 25007.Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700


  • Virtual machine lost after power cut

    - by dannymcc
    We have just had a power issue; our ESX (ESXi 4.1.0) host lost power and then rebooted. All but one of the virtual servers rebooted with no problem; one of them refuses to power up. When I try to power it on, I get the following error:
      File <unspecified filename> was not found
      Reason: The system cannot find the file specified.
      Cannot open the disk '/vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86.vmdk' or one of the snapshot disks it depends on.
      VMware ESX cannot find the virtual disk "/vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86.vmdk". Verify the path is valid and try again.
    I have logged into the ESX host to see if the file is there and have found only the following file that matches the filename:
      /vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86-s001.vmdk
    I notice that the above file has '-s001' after the filename. Is this recoverable? Any help or advice is greatly appreciated!
    Edit: Running ls -l on the directory that contains the file shows this:
      drwxr-xr-t 1 root root 1680 Feb 9 09:49 4e03076e-90834647-b846-001185c38f42
    (Screenshots of the datastore browser's file listing, and of a different directory containing the file that matches the missing one, were attached but are not preserved here.)


  • How can I move an ext3 partition to the beginning of the drive without losing data?

    - by Felipe Alvarez
    I have a 500GB external drive. It had two partitions, each around 250GB. I removed the first partition. I'd like to move the second to the left, so it consumes 100% of the drive. How can this be accomplished without any GUI tools (CLI only)?
    fdisk:
      Disk /dev/sdd: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xc80b1f3d
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd2           29374       60801   252445410   83  Linux
    parted:
      Model: ST350032 0AS (scsi)
      Disk /dev/sdd: 500GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start  End    Size   Type     File system  Flags
       2      242GB  500GB  259GB  primary  ext3         type=83
    dumpe2fs:
      Filesystem volume name:   extstar
      Last mounted on:          <not available>
      Filesystem UUID:          f0b1d2bc-08b8-4f6e-b1c6-c529024a777d
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal dir_index filetype needs_recovery sparse_super large_file
      Filesystem flags:         signed_directory_hash
      Default mount options:    (none)
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              15808608
      Block count:              63111168
      Reserved block count:     0
      Free blocks:              2449985
      Free inodes:              15799302
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8208
      Inode blocks per group:   513
      Filesystem created:       Mon Feb 15 08:07:01 2010
      Last mount time:          Fri May 21 19:31:30 2010
      Last write time:          Fri May 21 19:31:30 2010
      Mount count:              5
      Maximum mount count:      29
      Last checked:             Mon May 17 14:52:47 2010
      Check interval:           15552000 (6 months)
      Next check after:         Sat Nov 13 14:52:47 2010
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      d0363517-c095-4f53-baa7-7428c02fbfc6
      Journal backup:           inode blocks


  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences:
                      Absolute  Relative  File  Directory  UNC
      Symbolic link   yes       yes       yes   yes        yes
      Junction        yes       no        no    yes        no
    Scenario: let's assume we're creating a reparse point to create the redirect C:\SomeDir => D:\SomeDir. Since this scenario only requires local, absolute paths, either a junction or a symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward compatibility (prior to Vista, symlinks are not supported).
    Update: I have found another difference.
    - Symbolic link: the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions.
    - Junction: the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs.
    The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
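
    For reference, both variants of the scenario's C:\SomeDir => D:\SomeDir redirect can be created with the built-in mklink from an elevated cmd.exe (C:\SomeDir must not already exist):

      :: directory symbolic link -- requires the SeCreateSymbolicLinkPrivilege
      mklink /D C:\SomeDir D:\SomeDir
      :: junction -- works without the symlink privilege
      mklink /J C:\SomeDir D:\SomeDir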

