Search Results

Search found 5084 results on 204 pages for 'vhost conf'.


  • Guide To Setting Up Access To PhpMyAdmin On NGINX, Ubuntu 11.04, EC2 Remote MySQL Instance

    - by darkAsPitch
    I have set up a domain name to run on Amazon EC2 running Ubuntu 11.04, nginx and php5-fpm. The domain name works great; I have set up its own sites-available configuration file and symlinked it to sites-enabled. I installed phpMyAdmin via sudo apt-get install phpmyadmin and followed the instructions. I then added this server block to my /etc/nginx/nginx.conf file and restarted nginx: server { listen 80; server_name phpmyadmin.domain.com; location / { root /usr/share/phpmyadmin; index index.php; } #make sure all php files are processed by fast_cgi location ~ \.php { # try_files $uri =404; fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } I have also added the appropriate DNS A records for phpmyadmin.domain.com. phpmyadmin.domain.com just shows a 404 error code. All other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?
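
    A hedged sketch of the usual fix, assuming php5-fpm really is listening on 127.0.0.1:9000: because root is set only inside location /, the ~ \.php location has no document root of its own, so SCRIPT_FILENAME resolves against nginx's compiled-in default root and php-fpm reports the file as not found (a 404). Moving root and index up to the server block lets both locations inherit them:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            root /usr/share/phpmyadmin;   # server-level, so the PHP location inherits it
            index index.php;

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }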

    Read the article

  • Can't get DNS Alias to work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use the DNS Alias to configure one of my domains to point to a specific directory on the server. Here is what I've done: Change the IP address in the domain settings, and it works $ ping www.example.com PING example.com (124.205.62.xxx): 56 data bytes 64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms 64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms ^C --- example.com ping statistics --- 2 packets transmitted, 2 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms Add sites-available and sites-enabled $ ls -l /etc/apache2/sites-available/ total 16 -rw-r--r-- 1 root root 948 2010-04-14 03:27 default -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl -rw-r--r-- 1 root root 365 2010-06-09 18:27 example.com $ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com But it doesn't work, and when I open the browser for www.example.com, it shows a 111 error: The following error was encountered: Connection to 124.205.62.48 Failed The system returned: (111) Connection refused Here is example.com's config: $ cat /etc/apache2/sites-enabled/001-example.com <virtualhost *:80> DocumentRoot "/vhosts/example.com/htdocs/" ServerName www.example.com ServerAlias example.com <Location /> Order Deny,Allow Deny from None Allow from all </Location> #Include /etc/phpmyadmin/apache.conf ErrorLog /vhosts/example.com/logs/error.log CustomLog /vhosts/example.com/logs/access.log combined Could you please tell me how to solve this?
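
    Error (111) Connection refused is returned before Apache ever sees the request, so the vhost contents are probably not the culprit; the excerpt above is also missing its closing </VirtualHost> tag. A few checks, assuming a stock Ubuntu 10.04 Apache 2 install:

        sudo /etc/init.d/apache2 status        # is Apache actually running?
        sudo netstat -tlnp | grep ':80 '       # is anything listening on port 80 on the public IP?
        sudo apache2ctl -S                     # does Apache parse and list the 001-example.com vhost?
        sudo apache2ctl configtest             # syntax check (catches an unclosed <VirtualHost>)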

    Read the article

  • Why is .htaccess not allowed in a directory but is allowed in another?

    - by John Isaacks
    I have apache2 installed on Ubuntu 10.04. Inside my /var/www/ directory I have [among others] cakephp and dvdcatalog directories, each of which has CakePHP 1.3 installed. I can access them both via localhost/cakephp and localhost/dvdcatalog, but the dvdcatalog shows up with no CSS styling. They both have these files: /var/www/cakephp/app/webroot/css/cake.generic.css /var/www/dvdcatalog/app/webroot/css/cake.generic.css But when I go to http://localhost/cakephp/css/cake.generic.css it sees the file, and it does not see the file when I go to http://localhost/dvdcatalog/css/cake.generic.css I think this means the cakephp folder is able to use .htaccess and the dvdcatalog is not. I set up the cakephp directory last month when I was following the blog tutorial. I am setting up the dvdcatalog directory now for a different tutorial, so I am not sure if I am missing a step. In my /etc/apache2/apache2.conf file I have this: <Directory "/var/www/*"> Order allow,deny Allow from all AllowOverride All </Directory> Which I thought gave .htaccess to all. Does anyone have any ideas what the problem is?
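
    A hedged guess at the usual causes, sketched below: CakePHP serves /css/... through the rewrite rules in its webroot .htaccess, so either the .htaccess files were never copied into the new app (dotfiles are easy to miss) or mod_rewrite is not picking them up there. Paths are assumptions based on the question:

        # are the three CakePHP 1.3 .htaccess files present in the new app?
        ls -la /var/www/dvdcatalog/.htaccess /var/www/dvdcatalog/app/.htaccess /var/www/dvdcatalog/app/webroot/.htaccess
        # compare against the working install
        ls -la /var/www/cakephp/.htaccess /var/www/cakephp/app/.htaccess /var/www/cakephp/app/webroot/.htaccess
        # make sure mod_rewrite is enabled, then reload Apache
        sudo a2enmod rewrite && sudo /etc/init.d/apache2 reload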

    Read the article

  • How do I set up dual monitors on Kubuntu 10.04 using the latest nVidia drivers with a 9800M video card

    - by NoCatharsis
    I'm a Linux newb so please try to keep the lingo low-key. I installed the latest nVidia drivers on my laptop using the 9800M card. The laptop is a Gateway P-7805u and I'm connected to the second monitor using VGA. Also, before installing the nVidia drivers (and just using the basic drivers included with Kubuntu 10.04), basic dual monitor support worked, except I could not enable compositing features for some reason. So I thought the proprietary drivers would fix this. Several issues have arisen since installation: 1) I've clicked through all of the display settings to activate the second screen with absolutely no change. 2) When I try to apply settings and Save Configuration as the nVidia help suggests, I am told that I cannot save to the X.conf file. I assume this is due to innate permissions on my user settings, which I have no idea how to properly configure. 3) I have no idea where to go from here, as most of the fixes I found online involve Linux syntax and verbiage, to which I'm totally clueless after spending over half my life with Windows.
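
    On point 2, a hedged note: nvidia-settings saves to /etc/X11/xorg.conf, which only root can write, so the usual workaround on Kubuntu is to launch it with root privileges before saving (the menu path below is approximate):

        kdesudo nvidia-settings    # gksudo or sudo on non-KDE desktops
        # then: X Server Display Configuration -> enable the second display -> Save to X Configuration File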

    Read the article

  • Adjust output Brightness/Gamma/Colors in Gnome

    - by Mikee
    We have a desktop system running Ubuntu 8.04.4, and it is connected to a standard desktop LCD monitor. Unfortunately, in 8.04.4, the brightness of the image is cranked way up. It appears to be a graphics driver issue, and installing a newer driver for this Intel GPU is very difficult to do. So I am looking for a software (or config file?) solution to achieve this. Note: Ubuntu 9.10 and higher do not exhibit this issue, so this is not a hardware problem. Note: VNC-ing to this machine from another does not exhibit this issue either. Also, I installed "DisplayCalibrator.app", and it does not work very well (the app comes up, but the contents of the window are blank). Is there anything that I can add to the xorg.conf file to correct this issue? Also, this solution: http://superuser.com/questions/96539/adjust-contrast-and-brightness-in-ubuntu did not resolve my issue. Thank you all for the help!
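
    Two software-level knobs worth trying, sketched under the assumption that the Intel driver honours them and that the monitor section in xorg.conf uses Ubuntu's default "Configured Monitor" identifier: xgamma for a quick runtime test, or a Gamma entry in the Monitor section to make it permanent.

        # runtime test (values below 1.0 darken the image)
        xgamma -gamma 0.8

        # persistent version in /etc/X11/xorg.conf
        Section "Monitor"
            Identifier "Configured Monitor"
            Gamma      0.8
        EndSection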

    Read the article

  • Multiple logins with pam_mount mean multiple (redundant) mounts ...

    - by Jamie
    I've configured pam_mount.so to automagically mount a cifs share when users log in; the problem is that if a user logs in multiple times simultaneously, the mount command is repeated multiple times. This so far isn't a problem, but it's messy when you look at the output of a mount command. # mount /dev/sda1 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) none on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) none on /dev type devtmpfs (rw,mode=0755) none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) none on /dev/shm type tmpfs (rw,nosuid,nodev) none on /var/run type tmpfs (rw,nosuid,mode=0755) none on /var/lock type tmpfs (rw,noexec,nosuid,nodev) none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand) //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand) //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand) I'm assuming I need to fiddle with either the pam.d/common-auth file or pam_mount.conf.xml to accomplish this. How can I instruct pam_mount.so to avoid duplicate mountings?
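
    A first diagnostic sketch, not a definitive fix: check that pam_mount is not being pulled in by more than one PAM service for the same login path, and look at which sessions pam_mount thinks are open, since it normally keeps a per-user login count and should skip volumes that are already mounted (the track-file location is an assumption and may vary by distribution).

        # which PAM service files invoke pam_mount?
        grep -rn pam_mount /etc/pam.d/
        # pam_mount's own view of open sessions
        ls /var/run/pam_mount/ 2>/dev/null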

    Read the article

  • MySQL Not Turning On

    - by Shalin Shah
    I have an Amazon EC2 instance running the Amazon Linux AMI, and it's a micro instance. I wanted to install Django onto my server, so I entered these commands: wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/go wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/django.conf chmod 744 go ./go After I was done, I ran sudo service httpd restart and sudo service mysqld restart, and this is what came up for mysqld: Stopping mysqld: [ OK ] MySQL Daemon failed to start. Starting mysqld: [FAILED] So I deleted the django files /usr/local/python2.6.8/site-packages/django_registration.egg and I tried finding the error. I found out that in my /etc/my.cnf, for the socket, it said socket=/var/lock/subsys/mysql.sock, so I went to /var/lock/subsys/ and there was no mysql.sock. I tried creating one using vim but it still didn't work. Then I checked the error log and it said Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) So I am pretty much lost right now. I know it has something to do with mysql.sock. If you know a reason why this was caused, could you please let me know? I have a WordPress site on my server, so I kind of need MySQL to work. Thanks!
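
    A hedged sketch of the usual repair on Amazon Linux: the socket path in /etc/my.cnf should point at a directory mysqld can write to (the stock location is /var/lib/mysql/mysql.sock, not /var/lock/subsys/), the [client] section should match, and the socket file itself should not be created by hand, since the daemon recreates it on start.

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock

        [client]
        socket=/var/lib/mysql/mysql.sock

        # then: sudo service mysqld restart, and check /var/log/mysqld.log if it still fails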

    Read the article

  • Synergy: Cannot send media keys from Linux to Mac

    - by CraftyThumber
    I have a Linux Synergy server (Si-Linux) serving just one Mac client (Macbook Pro UK) (SiBook-Pro.local). On my Linux server I am using a USB Apple keyboard with the exact layout of the laptops keyboard (the compact UK aluminium keyboard). I would like to send the media keys to the Mac client at all times and I have attempted the following in my synergy.conf: keystroke(AudioPlay) = keystroke(AudioPlay,SiBook-Pro.local) This did not seem to work so I ran both the server and client as foreground processes and with debugging enabled and observed the following: Server Log: DEBUG1: activate actions DEBUG1: hotkey: keyDown(AudioPlay,SiBook-Pro.local) DEBUG1: onKeyDown id=57523 mask=0x0000 button=0x0000 DEBUG1: send key down to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000 DEBUG1: deactivate actions DEBUG1: hotkey: keyUp(AudioPlay,SiBook-Pro.local) DEBUG1: onKeyUp id=57523 mask=0x0000 button=0x0000 DEBUG1: send key up to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000 Client Log: DEBUG1: recv key down id=0x0000e0b3, mask=0x0000, button=0x0000 DEBUG1: mapKey e0b3 (57523) with mask 0000, start state: 0000 DEBUG1: key e0b3 is not on keyboard DEBUG1: recv key up id=0x0000e0b3, mask=0x0000, button=0x0000 DEBUG1: recv enter, 1279,386 5 2000 As you can see, the client claims the key received is not on keyboard. I don't understand since it is the same key as is on the Macbook's keyboard. I tried to reverse the client-server config to see if I could capture the key being sent if I pressed the Play button on the Macbook but the key doesn't seem to even make it to Synergy. Almost all keyboard presses get logged but the media keys seem to bypass the logs and just execute their function locally. E.g. I press play on the Macbook (with the Macbook as the server) and the key plays music on the Macbook and the key is not logged to the debug log.

    Read the article

  • IP routing Solaris 9 access the internet from local network

    - by help_me
    I am trying to configure the NICS on the Solaris Sparc server. My problem lies in getting out to the "Internet" from the local network. I have requested the NIC to receive a DHCP server address #ifconfig -interface dhcp start. If anyone could guide me as to what I need to do next. I am not able to ping 4.2.2.2 or access the internet. Much appreciated, thank you #uname -a SunOS dev 5.9 Generic_122300-59 sun4u sparc SUNW,Sun-Fire-V210 ifconfig -a lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 10.100.0.3 netmask ffffc000 broadcast 10.100.63.255 bge0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 10.100.0.22 netmask ffffc000 broadcast 10.100.63.255 bge3: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 12 inet 169.14.60.37 netmask fffffe00 broadcast 169.14.61.255 cat /etc/defaultrouter 10.100.0.254 169.14.60.1 cat /etc/resolv.conf nameserver 169.14.96.73 nameserver 169.10.8.4 netstat -rn Routing Table: IPv4 Destination Gateway Flags Ref Use Interface -------------------- -------------------- ----- ----- ------ --------- 169.14.60.37 169.14.60.1 UGH 1 0 169.14.60.0 169.14.60.37 U 1 18 bge3 10.100.0.0 10.100.0.3 U 1 34940 bge0 10.100.0.0 10.100.0.22 U 1 0 bge0:2 224.0.0.0 10.100.0.3 U 1 0 bge0 default 10.100.0.254 UG 1 111 default 169.14.60.1 UG 1 26 127.0.0.1 127.0.0.1 UH 10 59464 lo0 bash-2.05$ sudo ndd -get /dev/ip bge0:ip_forwarding 1 bash-2.05$ sudo ndd -get /dev/ip bge3:ip_forwarding 1 bash-2.05$ sudo ndd -get /dev/ip ip_forwarding 1
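
    A hedged set of next steps, nothing Solaris-exotic: make sure the DHCP-supplied gateway actually answers, decide which of the two default routes should survive, and confirm that /etc/nsswitch.conf hands name lookups to DNS, since Solaris 9 ignores resolv.conf unless the hosts line includes dns.

        ping 169.14.60.1                     # can we reach the DHCP gateway at all?
        ping 10.100.0.254                    # and the original internal router?
        # if the internal router has no path to the internet, drop it as a default:
        # route delete default 10.100.0.254
        grep '^hosts' /etc/nsswitch.conf     # should read something like: hosts: files dns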

    Read the article

  • Nginx/puma rhel unix socket permission error?

    - by Kevin Brown
    When I try to start my puma server, I get the error: /.rvm/gems/ruby-2.1.1/gems/puma-2.9.0/lib/puma/binder.rb:275:in `initialize': Permission denied - connect(2) for "/var/run/nvhbase.sock" (Errno::EACCES) My sites-available/nvhbase.conf file: upstream nvhbase { server unix:/var/run/nvhbase.sock; } server { listen 80 default_server; server_name 207.131.132.219; root /home/vf032500/dev/nvh/public; location / { proxy_pass http://unix:/var/run/nvhbase.sock; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; } } I don't know a lot about unix sockets and everything works fine using tcp/puma default. My rails app is in my user directory. Is that the problem?? socket is starting in /var/run--I can start in /tmp, but I've heard that's bad practice? Provided I start the server in /tmp, I then can't access it via the server's ip--then what? I'm happy to provide any needed info, I just don't know a whole lot about server/nginx/puma.
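
    A sketch of one common layout, assuming the app runs as user vf032500: /var/run itself is writable only by root, so the EACCES comes from puma trying to create its socket there. Giving puma its own subdirectory (recreated at boot if /var/run is cleared on your system) and pointing both ends at it usually resolves this:

        sudo mkdir -p /var/run/puma
        sudo chown vf032500:vf032500 /var/run/puma

        # start puma against the new socket path
        bundle exec puma -b unix:///var/run/puma/nvhbase.sock -e production

        # nginx side: keep the upstream and proxy_pass consistent
        #   upstream nvhbase { server unix:/var/run/puma/nvhbase.sock; }
        #   proxy_pass http://nvhbase;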

    Read the article

  • Moving web files to /home/user/ gives permission denied using apache

    - by Maaz
    I recently created some linux users on my machine and their respective directories were created in the following manner /home/my_user so I decided to treat each user as one of my websites. I moved all my website files over to this directory like so /home/my_user/public_html/. I edited the virtual host in my httpd.conf and changed the root directory folder so this is how that looks <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/home/my_user/public_html" ServerName mywebsite.com ServerAlias www.mywebsite.com ErrorLog "/var/log/httpd/mywebsite/error_log" CustomLog "/var/log/httpd/mywebsite/access_log" common </VirtualHost> Now this virtual host configuration was working perfectly fine with my older document root path that was located at /var/www/html/mywebsite/public_html but after changing that to what it is right now, I am getting a permission denied error. But I followed the instructions here: http://stackoverflow.com/questions/14427808/you-dont-have-permission-error-in-apache-in-centos Even after following the above instructions, when I run the following command: sudo -u apache ls /home/my_user/public_html The server responds with ls: cannot open directory /home/my_user/public_html: Permission denied Even so, I do not get a permissions denied error when I try to access my site any more, however, now I am redirected to the default page of apache instead of my website. I am not exactly sure what's wrong any more, if anyone has an idea, it would be great if you guys could help out!
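
    Two things commonly bite with this layout on a CentOS-style Apache, sketched below as a guess: the home directory itself needs execute (search) permission so the apache user can traverse it, and if SELinux is enforcing it will keep denying access until the content is relabelled (or the homedirs boolean is set). The later fall-through to Apache's default page usually means the request is matching another vhost, which apachectl -S will show.

        chmod 711 /home/my_user                    # let apache traverse, but not read, the home dir
        chmod -R 755 /home/my_user/public_html

        # SELinux: label the content and allow serving from home directories
        chcon -R -t httpd_sys_content_t /home/my_user/public_html
        setsebool -P httpd_enable_homedirs on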

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit. I'm trying to connect remotely to a Postgres instance for purposes of performing a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data: *:5432:*postgres:[mypassword] (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using: pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile] I'm presuming that I need to provide the '-w' switch in order to ignore the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied." Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the postgres prompt, that would work as well. Thanks.
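
    Two details are easy to trip over here: the pgpass line needs five colon-separated fields, in the order hostname:port:database:username:password (the sample above is missing the colon between * and postgres), and on Windows libpq reads the file from %APPDATA%\postgresql\pgpass.conf, i.e. under AppData\Roaming, on the machine running pg_dump (the client), not the server. A sketch of the expected line:

        *:5432:*:postgres:mypassword

    For a batch script, setting PGPASSWORD in the environment before calling pg_dump (set PGPASSWORD=mypassword) is the other documented, if less secure, route.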

    Read the article

  • kde dropping keyboard

    - by shabbychef
    I am having problems with KDE 'dropping' my keyboard. It happens periodically when using my gentoo box directly, but has become much worse when accessing via NX (from a Mac laptop). Some possibly irrelevant clues: it appears to happen more often when the system is under higher CPU load the mouse continues to work, but no windows will accept any kind of keyboard focus. kwin will not accept tabbing between windows. when working on the machine directly, I can ctrl-alt-F1 to get to a shell (obviously this does nothing over NX). so I think it is KDE and not xorg. am running kwin-4.3.5-r1, and KDE-4.3.5 generally. this problem definitely appeared after upgrading to kde-4.x, but I do not remember if it appeared in kde-4.2. sometimes the keyboard will reappear, but sometimes I have to kill my kde session. playing with accessibility options or window-focus-stealing options in system-settings under kde will often make the keyboard responsive again, only to drop it perhaps minutes later. I had read online this might be an evdev problem under X (again, I think this is KDE, not X, but will try anything). as a result, I have fiddled with my xorg.conf endlessly. I even deleted it entirely and let nvidia-xconfig have a stab at it, with no luck I am tearing my hair out over this. I have done emerge -e xorg-server and am right now doing emerge -e kwin, to rebuild all packages that might be relevant. no luck with the xorg-server rebuild. any help appreciated. thanks,

    Read the article

  • installing mod_wsgi giving 403 error

    - by John Smiith
    Installing mod_wsgi is giving a 403 error. In httpd.conf I added the code below: WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py" <Directory "C:/xampp/www/htdocs/wsgi_app/"> AllowOverride None Options None Order deny,allow Allow from all </Directory> wsgi_handler.py: status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] Note: localhost is my virtual host domain and it is working fine, but when I request http://localhost/wsgi/ I get a 403 error. <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "C:/xampp/www/htdocs/localhost" ServerName localhost ServerAlias www.localhost ErrorLog "logs/localhost-error.log" CustomLog "logs/localhost-access.log" combined </VirtualHost> Error log: [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] Options ExecCGI is off in this directory: C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache Note: My Apache is not in c:/xampp/bin/apache, it is in c:/xampp/bin/server-apache/
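
    A hedged observation: mod_wsgi expects the script to expose an application callable, while the handler above runs its statements at module level (where start_response is not even defined). A minimal sketch of the expected shape is below; the "Options ExecCGI is off" error also usually means the request never reached mod_wsgi at all, so it is worth confirming that a LoadModule wsgi_module line is present in the config Apache actually loads.

        def application(environ, start_response):
            status = '200 OK'
            output = 'Hello World!'
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]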

    Read the article

  • Upstart Script on Centos 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a python script on startup. In theory it looks simple enough but I just can't seem to get it to work. I'm using a skeleton script I found here and altered. description "Used to start python script as a service" author "Me <[email protected]>" # Stanzas # # Stanzas control when and how a process is started and stopped # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn # When to start the service start on runlevel [2345] # When to stop the service stop on runlevel [016] # Automatically restart process if crashed respawn # Essentially lets upstart know the process will detach itself to the background expect fork # Start the process script exec su nonrootuser -c "python /usr/local/scripts/script.py" end script The test script I want it to run is currently a simple python script that runs without any issue when run from a terminal. #!/usr/bin/python2 import os, sys, time if __name__ == "__main__": for i in range (10000): message = "shotgunUpstartTest " , i , time.asctime() , " - Username: " , os.getenv("USERNAME") #print message time.sleep(60) out = open("/var/log/scripts/scriptlogfile", "a") print >> out, message out.close() The location/var/log/scripts has permissions 777 The file /usr/local/scripts/script.py has permissions 775 The upstart script /etc/init.d/pythonupstart.conf has permissions 755
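
    Two hedged observations, reflected in the sketch below: on CentOS 6, Upstart only reads job files from /etc/init/*.conf (not /etc/init.d/), and since the Python script stays in the foreground the expect fork stanza should be dropped, otherwise Upstart tracks the wrong PID.

        # /etc/init/pythonupstart.conf
        description "Used to start python script as a service"
        start on runlevel [2345]
        stop on runlevel [016]
        respawn
        exec su nonrootuser -c "python /usr/local/scripts/script.py"

    Then start it with initctl start pythonupstart and check it with initctl status pythonupstart.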

    Read the article

  • Conditionally set an Apache environment variable

    - by Tom McCarthy
    I would like to conditionally set the value of an Apache2 environment variable and assign a default value if none of the conditions is met. This example is a simplification of what I'm trying to do but, in effect, if the subdomain portion of the host name is hr, finance or marketing I want to set an environment var named REQUEST_TYPE to 2, 3 or 4 respectively. Otherwise it should be 1. I tried the following configuration in httpd.conf: <VirtualHost *:80> ServerName foo.com ServerAlias *.foo.com DocumentRoot /var/www/html SetEnv REQUEST_TYPE 1 SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2 SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3 SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4 </VirtualHost> However, the variable is always assigned a value of 1. The only way I have so far been able to get it to work is to replace: SetEnv REQUEST_TYPE 1 with a regular expression containing a negative lookahead: SetEnvIfNoCase Host ^(?!hr.|finance.|marketing.) REQUEST_TYPE=1 Is there a better way to assign the default value of 1? As I add more subdomain conditions the regular expression could get ugly. Also, if I want to allow another request attribute to affect the REQUEST_TYPE (e.g. if Remote_Addr = 192.168.1.[100-150] then REQUEST_TYPE = 5) then my current method of assigning a default value (i.e. using the regular expression with a negative lookahead) probably won't work.
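
    One hedged alternative that avoids the lookahead: SetEnv (mod_env) runs later in the request cycle than SetEnvIf (mod_setenvif), which is why the value always ends up as 1; setting the default with SetEnvIf as well, placed first, lets the more specific matches overwrite it in configuration order. The Remote_Addr line is only a sketch of the later requirement.

        SetEnvIfNoCase Host ^            REQUEST_TYPE=1
        SetEnvIfNoCase Host ^hr\.        REQUEST_TYPE=2
        SetEnvIfNoCase Host ^finance\.   REQUEST_TYPE=3
        SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
        SetEnvIf Remote_Addr ^192\.168\.1\.(1[0-4][0-9]|150)$ REQUEST_TYPE=5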

    Read the article

  • DNSSEC - First Signature

    - by Arancha
    I'm testing DNSSEC with BIND 9.7.2-P2. I have a question regarding the first signature created over a zone that already exists. I'm using dynamic DNS. I create the first two keys: one KSK and one ZSK. According to https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-key-timing/, the first ZSK needs to be published for an interval equal to Ipub before it can be active. I create the ZSK with a Publication date prior to its Activation date. I restart the service and I can see that the key is published at the Publication date, but it's not active later, when the Activation date arrives. This is the configuration of the dnssec.es zone in the named.conf file: zone "dnssec.es" { auto-dnssec maintain; update-policy local; sig-validity-interval 1; key-directory "dnssec/keys_dnssec"; type master; file "dnssec/db.dnssec.es"; }; Any clue?? Regards
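
    A hedged sketch of the usual nudges: with auto-dnssec maintain, BIND 9.7 does not continuously watch the key timing metadata, so a key whose Activation date passes while named is running can sit unused until the zone is prodded. The key file name below is illustrative, not the real one.

        # check the timing metadata named would see
        dnssec-settime -p all dnssec/keys_dnssec/Kdnssec.es.+005+12345
        # ask named to re-read the key directory and re-sign the zone
        rndc sign dnssec.es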

    Read the article

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC which happens to be a pivot monitor. I've already read lots of forums related to my problem but haven't come across a solution - I have the same symptoms as dozens of posts but no matter whatever I try it just doesn't work. I've already changed the xorg.conf file and added in the device section just under Driver "nvidia" the following for my second monitor: Option "RandRRotation" "on" When I save and reboot I try to rotate my screen with the nvidia X server settings by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia settings window and does nothing. I tried within the terminal by typing: xrandr -o right I get the following error: X Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 154 (RANDR) Minor opcode of failed request: 2 (RRSetScreenConfig) Serial number of failed request: 14 Current serial number in output stream: 14 I actually manage to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with this solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable. You can't change the size nor move it, making it useless for reading PDFs, which is the main reason why I bought this second screen to help me write my thesis. Any help is really appreciated. sudo lshw -c video hiram@hiram-linux:~$ sudo lshw -c video *-display description: VGA compatible controller product: nVidia Corporation vendor: nVidia Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff

    Read the article

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use MacOS's built-in Apache and PHP modules. But while uninstalling XAMPP I accidentally deleted /usr/bin/php and other PHP CLI files, and I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unpacked it, and then executed these commands, as mentioned in this guide: ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip' make make install When I check phpinfo(), it's still version 5.4.24. This line from my httpd.conf: LoadModule php5_module libexec/apache2/libphp5.so still loads /usr/libexec/apache2/libphp5.so from the old version, and I couldn't find a libphp5.so for the new version. There is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache? UPDATE: Results of the php -v command: PHP 5.5.12 (cli) (built: May 27 2014 05:17:21) Copyright (c) 1997-2014 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
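
    A hedged way to confirm where the freshly built module went versus where Apache loads it from: --with-apxs2 asks make install to drop libphp5.so into Apache's module directory, and apxs can tell you which directory that is, so a timestamp check shows whether the install step actually replaced the file.

        /usr/sbin/apxs -q LIBEXECDIR           # usually /usr/libexec/apache2
        ls -l /usr/libexec/apache2/libphp5.so  # did 'make install' replace it? check the timestamp
        php -v                                 # the CLI installed by 'make install' (already reports 5.5.12)
        sudo apachectl restart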

    Read the article

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure apache to allow access to /home/src/web with a chcon command granting the 'httpd_sys_content_t' type. But now I am trying to serve the rsyslogd.conf file from the same directory, but every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is, is it possible to grant two applications the ability to access the same directory, while still keeping SELinux enabled? Current perms on /home/src: drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src Audit log message: type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null) -- Edit -- Came across this post, which is sort of what I am trying to accomplish. However when I viewed the list of allowed sebool params, the only relating to syslog was: syslogd_disable_trans (SELinux Service Protection), seems like I can maintain the current SELinux 'type' on the /home/src/ dir, but set the bool on syslogd_disable_trans to false. I wonder if there is a better approach?
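
    One hedged route that keeps SELinux enforcing: rather than hunting for a type both domains share, let audit2allow turn the exact denials into a small local policy module for syslogd_t. The AVC above is only asking for search permission on the directory labelled home_root_t (the /home mount), so the generated rule stays narrow.

        grep rsyslogd /var/log/audit/audit.log | audit2allow -M rsyslogd_home
        semodule -i rsyslogd_home.pp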

    Read the article

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around apache configuration to fix a problem I'm having but after a few hours I've decided to ask here. This is what I've got at the moment: DocumentRoot "/var/www/html" <Directory /> Options None AllowOverride None Deny from all </Directory> <Directory /var/svn> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Directory /opt/hg> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Location /hg> AuthType Digest AuthName "Engage HG" AuthDigestProvider file AuthUserFile /opt/hg/hgweb.users Require valid-user </Location> WSGISocketPrefix /var/run/wsgi WSGIDaemonProcess hg processes=3 threads=15 WSGIProcessGroup hg WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi" <Location /svn> DAV svn SVNPath /var/svn/repos AuthType Basic AuthName "Subversion" AuthUserFile /etc/httpd/conf/users require valid-user </Location> I'm trying to get my head around how it's all laid out and how directories relate to locations/etc For /hg I get asked for a password but to /svn I get a 403 forbidden... the error I get is: [client 10.80.10.169] client denied by server configuration: /var/www/html/svn When I remove the entry it works fine.. I can't figure out how to get it linking to the /var/svn directory
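
    A hedged reading of that error: <Location /svn> maps URL space while <Directory> maps the filesystem, so Apache should never touch /var/www/html/svn as long as mod_dav_svn is answering for /svn; the fact that it does suggests the DAV handler is not engaging in this vhost, which is quick to confirm:

        httpd -M 2>/dev/null | grep -E 'dav_module|dav_svn'   # are mod_dav and mod_dav_svn loaded?
        apachectl configtest                                  # any complaints about the DAV/SVN directives?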

    Read the article

  • bind would not work unless allow-query is "any"

    - by adrianTNT
    I have this in /etc/named.conf; I commented out the default values and set my own under them. My domain would not load in the browser unless I set allow-query to "any". Is this OK, and what should I edit? If it is localhost or 127.0.0.1; 10.0.1.0/24; the domain would not load. I tried the 127.. thing because it was mentioned here: http://wiki.mandriva.com/en/Testing:Bind The BIND version is 9.7.0-P2-RedHat-9.7.0-5.P2.el6_0.1 and the OS is CentOS 6.0. options { // listen-on port 53 { 127.0.0.1; }; listen-on port 53 { any; }; //listen-on-v6 port 53 { ::1; }; listen-on-v6 port 53 { any; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; //allow-query { localhost; }; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; };
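
    For a server that is authoritative for a public domain, allow-query { any; } is the normal setting, since the rest of the internet has to be able to ask for your records; the tighter practice is to leave queries open but restrict recursion to local clients, roughly as below (the LAN range is an assumption):

        options {
            // answer authoritative queries from anywhere
            allow-query     { any; };
            // but only perform recursive lookups for local clients
            allow-recursion { 127.0.0.1; 10.0.1.0/24; };
            recursion yes;
        };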

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphone serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require username and password. I would like to implement a XML-RPC method to provide registration to the system. Obviously, this method should not require username and password. The following is the Apache conf section responsible for basic auth: <Location /RPC2> AuthType Basic AuthName "Login Required" Require valid-user AuthBasicProvider wsgi WSGIAuthUserScript /path/to/auth.wsgi </Location> This is my auth.wsgi: import os import sys sys.stdout = sys.stderr sys.path.append('/path/to/project') os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings' from django.contrib.auth.models import User from django import db def check_password(environ, user, password): """ Authenticates apache/mod_wsgi against Django's auth database. """ db.reset_queries() kwargs = {'username': user, 'is_active': True} try: # checks that the username is valid try: user = User.objects.get(**kwargs) except User.DoesNotExist: return None # verifies that the password is valid for the user if user.check_password(password): return True else: return False finally: db.connection.close() There are two dirty ways to achieve my aim with current situation: Have a dummy username/password to be used when trying to register to the system Have a separate Django/XML-RPC application on another URL (ie: /register) that is not protected by basic auth Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture) Is there a way to unprotect a single XML-RPC call (ie. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?

    Read the article

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share, called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating I don't have the necessary permissions, and so does entering \\webserver manually; using \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf: [global] workgroup = Foobar server string = Webserver wins support = yes ; commenting out these wins server = 192.168.101.3 ; two lines has no effect dns proxy = no guest account = nobody [... snipped some unrelated bits, like logging ...] security = share [... snipped some password-related things ...] domain master = no [intranet] comment = Intranet path = /srv/webserver/contents browseable = yes guest ok = yes guest only = yes read only = yes create mask = 0775 directory mask = 0775
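
    Since the share opens fine by IP, the gap is almost certainly in how the name webserver resolves (or is browsed) on the Windows side rather than in the share definition; a hedged set of checks:

        # on the Samba box
        nmblookup webserver          # what does NetBIOS name resolution return?
        testparm -s | head           # sanity-check smb.conf, including the effective netbios name
        # on a Windows client
        #   nbtstat -a webserver     # NetBIOS lookup
        #   ping webserver           # does DNS/WINS return 192.168.101.2 or the PDC's address?

    If the clients resolve webserver to the PDC's own address (easy to happen when the guest runs on the PDC itself), the permission and path errors above are exactly what you would expect to see.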

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled version) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything with any other programs that require the old libraries. I thought I could do this by adding an entry into /etc/ld.so.conf that pointed to the new libraries, but I think this will conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is that PHP cURL requests to external servers have issues with the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.
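
    A sketch of the usual approach for pinning just Apache to the alternate OpenSSL without touching /etc/ld.so.conf: point httpd's configure at the 1.0.1c tree and bake its library path into the binaries with an rpath (the paths below are assumptions based on the question):

        ./configure --prefix=/usr/local/apache2 \
                    --enable-ssl \
                    --with-ssl=/opt/openssl-1.0.1c \
                    LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && make install

        # confirm which libssl/libcrypto the new httpd actually loads
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'

    PHP, cURL and OpenSSH keep resolving the system 0.9.8e libraries because nothing in their link path changes.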

    Read the article
