Search Results

Search found 33182 results on 1328 pages for 'linux port'.

  • Second DocumentRoot for certain URLs

    - by scrr
    Hello, I have the following setup in my Apache config:

        <VirtualHost 1.2.3.4:80>
            ServerName example.com:80
            ServerAlias www.example.com
            DocumentRoot /var/www/page
            <Location "/blog">
                DocumentRoot /var/www/blog
            </Location>
            RailsBaseURI /
            RailsEnv development
        </VirtualHost>

    However, Apache tells me I am not allowed to have a second DocumentRoot. How can I make "www.example.com/blog" point to "/var/www/blog"? I'm sure this is basic, but I just can't find the proper documentation online.
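
    A hedged sketch of one common fix: Apache forbids a second DocumentRoot, but mod_alias can map a URL path onto another directory. The paths come from the question; the vhost file location and the Apache 2.2-era access directives are assumptions.

        # Sketch only: map /blog onto /var/www/blog instead of a second
        # DocumentRoot. The config file path is hypothetical; the directives
        # belong inside the <VirtualHost> block.
        cat >> /etc/httpd/conf.d/example.conf <<'EOF'
        Alias /blog /var/www/blog
        <Directory /var/www/blog>
            Order allow,deny
            Allow from all
        </Directory>
        EOF
        apachectl configtest && apachectl graceful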

    Read the article

  • Prevent udev / uevents looking up DVD at boot time

    - by Sampo
    Problem: a boot-time delay caused by udev. Early in boot, shortly after the initscripts start, there is a message saying "waiting for uevents to be processed" which delays the boot; it seems udev is checking whether there is a disc in the DVD tray. Once udev has examined the drive, the boot process continues normally. Main question: how can I prevent udev from probing the DVD at boot time? Perhaps there is some way to skip that udev work during boot, let the boot continue, and have udev do the slow part later (a delay in uevent processing, but not in the main boot process, is acceptable once udev is loaded).
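
    A hedged sketch of one approach: on initscript-era distributions this wait usually comes from a "udevadm settle" call in the boot scripts, whose timeout can be shortened; udev still processes the DVD uevent later, just without holding up boot. The file location varies by distro and release, so treat both the path and the timeout value as assumptions.

        # Sketch: find where boot waits for uevents...
        grep -rn "udevadm settle" /etc/rc.d/rc.sysinit /etc/init.d 2>/dev/null
        # ...then lower the wait on that line, for example:
        #   /sbin/udevadm settle --timeout=3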

    Read the article

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network or HTTP access based on a user's unix group? At my company we want to set up an internal Composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com). We have several webservers with many different users on them. Some users should be able to reach the Composer and SVN servers; some should not. The users that should all belong to the same unix group. How can I set up Apache on the Composer and SVN servers so that only users in that group are granted access? Alternatively, can I set up the webservers so that only users in that group can even connect to the Composer and SVN servers? The best thing we have come up with so far is SSL client certificates: place a client certificate on every server that may access Composer and SVN, readable only by the right user group. A bit clunky, but it may work. I'm looking for something better, though.
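
    A hedged sketch of the group-file approach: stock Apache cannot consult unix groups directly, but a cron job can mirror the unix group into an AuthGroupFile that a standard Require group rule checks. The group name "devs", the file paths, and the separate htpasswd-managed AuthUserFile are all assumptions.

        # Sketch: mirror the unix group into an Apache group file
        # ("groupname: member1 member2" format)...
        getent group devs | awk -F: '{ gsub(/,/, " ", $4); print $1 ": " $4 }' \
            > /etc/httpd/composer.groups
        # ...then require it in the vhost (users still need entries in the
        # AuthUserFile, e.g. created with htpasswd):
        cat >> /etc/httpd/conf.d/satis.conf <<'EOF'
        <Location />
            AuthType Basic
            AuthName "Internal packages"
            AuthUserFile /etc/httpd/composer.passwd
            AuthGroupFile /etc/httpd/composer.groups
            Require group devs
        </Location>
        EOF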

    Read the article

  • Apache is running the wrong version of PHP.

    - by The Rook
    I am trying to enable cURL in PHP, and possibly update PHP as well. I have run into a road block where Apache seems to be running the wrong version of PHP. Here is some evidence:

        # lsof | grep php
        httpd 18397 nobody 135w REG 8,3 242 528537 /usr/local/apache/modules/libphp5.so

    I downloaded the latest PHP (5.2.13) and built it from source. I ran service httpd stop, did a make install, and then manually overwrote /usr/local/apache/modules/libphp5.so just to be safe. Using ldd I can see that it is my fresh binary, compiled with cURL (the old one doesn't have it):

        ldd /usr/local/apache/modules/libphp5.so | grep curl
        libcurl.so.4 => /usr/local/lib/libcurl.so.4 (0x0064e000)

    Yet when I execute phpinfo(), 5.2.3 is reported instead of my new 5.2.13, and "curl" is nowhere to be found. What am I doing wrong? Why is curl disabled even though it is linked in? This is a RHEL 5.5 system.
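
    A hedged diagnostic sketch: when phpinfo() reports an older version than the module you installed, the running httpd is usually loading a different libphp5.so, or is a second httpd build entirely. These commands only inspect; the paths follow the question.

        # Which config file does the running httpd actually use?
        /usr/local/apache/bin/httpd -V | grep -i server_config
        # Which module does that config load?
        grep -n "libphp5" /usr/local/apache/conf/httpd.conf
        # Are there stale copies of the module elsewhere on disk?
        find / -name libphp5.so 2>/dev/null
        # A full stop/start (not just restart) ensures the new module is mapped.
        service httpd stop && service httpd start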

    Read the article

  • Application to store/browse/view photos?

    - by amorfis
    I am looking for an application I can install on an Ubuntu server to keep my photos on. Access can be through a web interface or a mounted Samba share. What I require: 1) the ability to add tags to photos; 2) the ability to move photos without losing those tags, e.g. if I set the photo directory to /home/photos and later want to move everything to /home/common/photos, I don't want to lose the tags. I used to use F-Spot and it was great, but it lacked point 2, and I lost everything :(

    Read the article

  • Default gateway is in a different subnet. How to configure this in RHEL 6.2?

    - by Dmytro Leonenko
    I have two subnets routed to my server by my ISP, but only one gateway IP. The gateway is on the same VLAN as my IP address. For example, network 1 is 1.0.0.0/24 and network 2 is 2.0.0.0/24; both are routed to eth0 by my ISP. The gateway is 1.0.0.1, and my host IP is 2.0.0.1/24 (eth0). I can configure the default gateway manually with:

        ip route add default dev eth0
        ip route add default via 1.0.0.1

    and then the internet connection works properly. How do I configure this in /etc/sysconfig/network-scripts/ifcfg-eth0? I tried setting GATEWAY=1.0.0.1, but it doesn't work. I also tried setting GATEWAY and GATEWAYDEV in /etc/sysconfig/network, but that only does what the first command above does.
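
    A hedged sketch: RHEL's network scripts read per-interface route files whose lines are handed to "ip route add", so the two manual commands from the question can be persisted verbatim, bypassing the same-subnet assumption behind GATEWAY=.

        # Sketch: persist the manual commands in a route-eth0 file.
        cat > /etc/sysconfig/network-scripts/route-eth0 <<'EOF'
        default dev eth0
        default via 1.0.0.1
        EOF
        service network restart   # re-run the scripts to verify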

    Read the article

  • I don't have permission on my own machine?

    - by Webnet
    I'm running Ubuntu desktop and I don't have permission to modify some local files on my own computer. For example, within /var/www/ I can't create a new folder unless I sudo. How do I fix it so that I have permission by default, without logging in as the root user?
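
    A hedged sketch of the usual group-based fix: give a web group write access to the tree and join it, instead of working as root. The group name "www-data" and the use of your own login are assumptions about a stock Ubuntu LAMP setup.

        # Sketch: group-writable /var/www with group inheritance.
        sudo chgrp -R www-data /var/www
        sudo chmod -R g+w /var/www
        sudo chmod g+s /var/www            # new entries inherit the group
        sudo usermod -aG www-data "$USER"  # then log out and back in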

    Read the article

  • How to word wrap and justify text like the output of man?

    - by cody
    What is the command that word-wraps and justifies a text file so that the output looks like that of a man page?

        All of these system calls are used to wait for state changes in a child
        of the calling process, and obtain information about the child whose
        state has changed. A state change is considered to be: the child
        terminated; the child was stopped by a signal; or the child was resumed
        by a signal. In the case of a terminated child, performing a wait
        allows the system to release the resources associated with the child;
        if a wait is not performed, then the terminated child remains in a
        "zombie" state (see NOTES below).

    Thanks.
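
    A hedged sketch: fmt (coreutils) handles the wrapping on its own; filling out both margins like man typically needs par, if installed. The par option string is quoted from memory of par(1), so treat it as an assumption.

        # Wrap to 75 columns (ragged right):
        fmt -w 75 input.txt > wrapped.txt
        # Wrap and justify both margins (requires par):
        par 75j < input.txt > justified.txt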

    Read the article

  • How to execute a shell script on startup?

    - by vijay.shad
    I have created a script to start a server (my first question). Now I want it to run at system boot and start the defined server. What should I do to get this done? My findings tell me to put this file in /etc/init.d, and it will execute when the system boots. But I am not able to understand how the first argument at startup comes to be "start". Is it predefined somewhere that $1 is "start"? And if I want a "startall" case that starts all the servers in the script, what are my options? My script is like this:

        #!/bin/bash
        case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        *)
            echo "usage: $0 (start|stop|restart)"
            ;;
        esac
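
    A hedged sketch of the wiring: nothing predefines $1 inside the script; the rc system invokes /etc/init.d/<name> with "start" or "stop" as its first argument via the S##/K## symlinks that update-rc.d creates. The script name below is an example.

        # Sketch (Debian/Ubuntu SysV era): install and register the script.
        sudo cp myserver /etc/init.d/myserver
        sudo chmod +x /etc/init.d/myserver
        sudo update-rc.d myserver defaults
        # At boot, rc effectively runs: /etc/init.d/myserver start
        # (hence $1 == "start"); a "startall)" case would be reached the
        # same way by running: /etc/init.d/myserver startall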

    Read the article

  • Nagios Wouldn't Start, now won't Stop!

    - by Bart B
    I ran an update on a CentOS server running Nagios. After the update, Nagios failed to start. The error in the logs was:

        Failed to obtain lock on file /var/run/nagios.pid: Permission denied

    So I checked, and there was no PID file for Nagios in /var/run. I created one and gave it the following permissions:

        -rwxr--r-- 1 nagios nagios 6 May 31 11:58 nagios.pid

    Nagios then started and seems to be running normally. The only problem is that it now refuses to stop, so I can't restart it to add new servers and services to be monitored! When I issue "service nagios stop", I get [FAILED], but nothing at all is written to the log and the service stays up. Any ideas on how I can get the service to stop? I'm running the RPM version installed via yum from the RPMForge repositories. The server is CentOS 5.5.
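
    A hedged diagnostic sketch: a hand-made PID file often holds a wrong or empty PID, so the init script's stop action kills nothing. Comparing the file against the live process usually exposes the mismatch; <pid> below is a placeholder.

        # What PID is Nagios really running under?
        ps -ef | grep '[n]agios'
        # What does the init script think?
        cat /var/run/nagios.pid
        # If they differ, stop the daemon manually (replace <pid>):
        kill <pid>
        # ...then let Nagios write its own pid file this time:
        service nagios start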

    Read the article

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or on RHEL 5, re-exporting the combined mount to my RHEL 4 server). Expansion: the reason I mentioned LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one. From reading the unionfs docs, it appears that I can't use it over a remote connection; have I misread? More precisely, UnionFS merges the contents of multiple branches and makes them appear as one, but it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion and then write to them, not caring where the data goes, a la LVM.
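
    A hedged sketch of one union-at-the-mountpoint option: mhddfs (a FUSE filesystem) pools already-mounted directories and picks a branch with free space on each write, which is closer to the write-anywhere behaviour described than UnionFS. Mount points are the question's examples; whether mhddfs packages exist for RHEL 4/5 would need checking.

        # Sketch: mount the exports, then pool them with mhddfs.
        mount -t nfs servera:/mnt/export /mnt/a
        mount -t nfs serverb:/mnt/export /mnt/b
        mount -t nfs serverc:/mnt/export /mnt/c
        mhddfs /mnt/a,/mnt/b,/mnt/c /mnt/space -o mlimit=4G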

    Read the article

  • Ubuntu Natty 11.04: turning the wireless switch off switches it off permanently!

    - by ZiGi
    I'm using an HP Pavilion dv2000. I turned the wifi switch off by mistake; the LED turned orange and the wifi got disconnected. Now when I turn the switch back on, the LED stays orange and the wifi still isn't functional. This happened before, and I found a fix that worked by searching Google; it was done via terminal commands and I didn't have to download anything, but I can't find the solution anymore! wlan0 shows up when I use iwconfig:

        $ iwconfig
        # BLA BLA BLA
        # ...
        wlan0  IEEE 802.11abg  ESSID:off/any
               Mode:Managed  Access Point: Not-Associated  Tx-Power=off
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:off

    More results:

        $ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill
        $ rfkill list all
        1: phy0: WirelessLAN
            Soft blocked: yes
            Hard blocked: yes
        $ sudo rfkill unblock all
        $ rfkill list all
        1: phy0: WirelessLAN
            Soft blocked: no
            Hard blocked: yes
        $ sudo ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill

    It's still hard-blocked, even though the switch is turned on; it gives the same result either way. A direction to a page with a working solution is a much appreciated answer!
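
    A hedged sketch: on HP laptops the hardware switch state is reported through the hp-wmi platform driver, and reloading it sometimes makes the kernel re-read the (now on) switch; a full power-off with the battery removed can also reset the radio. The module name is the usual one for this hardware, not verified on this exact machine.

        sudo rfkill unblock all
        sudo modprobe -r hp-wmi && sudo modprobe hp-wmi
        rfkill list all   # check whether "Hard blocked" has cleared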

    Read the article

  • Will these instructions work for turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best
        # ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or any missing steps? Will my boot partition be fit for an SSD after this? Or should I consider switching to ext2, or even reinstall it all and choose ext2 at partitioning time (I'd rather not, though, since I've already configured quite some stuff)?
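
    A hedged note on the step those instructions gloss over: tune2fs can only drop the journal from an unmounted (or read-only) filesystem, so for a root or boot partition this has to run from a live CD or recovery shell. The device name is the guide's example.

        # Run from a live CD, with /dev/sda10 unmounted:
        sudo tune2fs -O ^has_journal /dev/sda10
        sudo e2fsck -f /dev/sda10
        sudo dumpe2fs -h /dev/sda10 | grep -i journal   # should show none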

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them, replacing the ones on the shared partition. When I boot into Windows, the files appear unchanged, exactly as they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance. Edit: command output follows.

    lsblk:

        NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda      8:0    0 465.8G  0 disk
        +-sda1   8:1    0 165.1G  0 part
        +-sda2   8:2    0  21.3G  0 part /
        +-sda3   8:3    0  98.9G  0 part /home
        +-sda4   8:4    0     1K  0 part
        +-sda5   8:5    0   7.8G  0 part [SWAP]
        +-sda6   8:6    0 172.7G  0 part /mnt/shared_data

    /etc/fstab:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

    /etc/mtab:

        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
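
    A hedged guess at the cause, with the usual check: Windows 8's "fast startup" hibernates the kernel session at shutdown, so volume metadata cached before booting Ubuntu can mask changes made from Linux. The standard remedy is shown as comments, since it is a Windows command run from an elevated prompt.

        # From an elevated Windows command prompt:
        #   powercfg /h off     # disables hibernation and fast startup
        # Alternatively, use "Restart" instead of "Shut down" before booting
        # Ubuntu; a restart does not use the fast-startup snapshot.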

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate our request rate will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean: instead of trying to deal with too many connections that all time out, I would like Apache to somehow monitor the average response time, and as soon as the response time from Tomcat rises above some threshold, display an error page. That way, users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I'm looking for suggestions on how I could achieve this. Thanks.
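
    A hedged sketch: mod_proxy_balancer cannot watch average response time, but per-member timeouts plus a custom 503 page give a rough fail-fast, in that requests that cannot be serviced within the threshold get the error page instead of queueing. The member names, AJP port, thresholds, and file paths are all assumptions.

        cat >> /etc/httpd/conf.d/balancer.conf <<'EOF'
        <Proxy balancer://mycluster>
            BalancerMember ajp://tomcat1:8009 timeout=10 retry=30
            BalancerMember ajp://tomcat2:8009 timeout=10 retry=30
        </Proxy>
        ProxyPass / balancer://mycluster/
        ProxyErrorOverride On
        ErrorDocument 503 /overloaded.html
        EOF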

    Read the article

  • Problems with connecting to ASM instance

    - by Rodnower
    I have Oracle 11g with RAC installed on Red Hat 5: two servers with one instance on each. On each server I can successfully connect to the appropriate database instance, but not to the ASM instance. I connect as the oracle11 user and type:

        export ORACLE_SID=+ASM1
        sqlplus "/ as sysdba"

    It connects but reports "Connected to an idle instance", and when I try to access parameters or views I get errors. To confirm that +ASM1 is the SID, I ran ps aux | grep pmon and got asm_pmon_+ASM1. I also tried +ASM, but that was equally unsuccessful. What is wrong here?
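
    A hedged sketch: "Connected to an idle instance" usually means sqlplus was started from the wrong ORACLE_HOME; in 11g the ASM instance lives under the Grid Infrastructure home and is administered as SYSASM. The Grid home path below is hypothetical.

        export ORACLE_HOME=/u01/app/11.2.0/grid   # hypothetical Grid home
        export PATH=$ORACLE_HOME/bin:$PATH
        export ORACLE_SID=+ASM1
        sqlplus / as sysasm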

    Read the article

  • Ubuntu init.d script not being called on startup

    - by Mike
    I've got a script on Ubuntu 9.04 in /etc/init.d that I've set to run on startup with update-rc.d, using update-rc.d init_test defaults 99. All of the symlinks are there and the permissions appear to be correct:

        -rwxr-xr-x 1 root root 642 2010-10-28 16:44 init_test

        mike@xxxxxxxxxx:~$ find /etc -name S99* | grep init_test
        find: /etc/rc5.d/S99init_test
        find: /etc/rc4.d/S99init_test
        find: /etc/rc2.d/S99init_test
        find: /etc/rc3.d/S99init_test

    The script runs fine through source and ./ and behaves correctly. Here is the source of the script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          init test script
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        start() {
            echo "hi"
            echo "start called" >> /tmp/test.log
            return
        }

        stop() {
            echo "Stopping"
        }

        echo "Script called" >> /tmp/test.log

        case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        *)
            echo "Usage: {start|stop|restart}"
            exit 1
            ;;
        esac
        exit $?

    When the machine starts, I don't see "Script called" or "start called" in test.log at all. Is there anything obvious I'm messing up?
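
    A hedged debugging sketch: running the exact symlink rc uses, and capturing everything the script emits at boot, usually shows why a boot-time run differs from a manual one. The paths follow the question.

        # Run exactly what rc runs at boot:
        sudo /etc/rc2.d/S99init_test start
        # The symlink must point back at the real script:
        ls -l /etc/rc2.d/S99init_test   # expect -> ../init.d/init_test
        # Temporarily capture all boot-time output by adding, near the top
        # of the script:
        #   exec >> /tmp/test.log 2>&1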

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and set my public_html to /home/user/public_html/website.com/public, but requests are always served from /usr/local/nginx/html/. How can I change this? nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log /home/user/public_html/website.com/log/error.log;

            location / {
                root /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is: Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'. The server tries to find the file in the Nginx folder and not in my public_html.
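
    A hedged sketch of the usual fix: declare root once at server level and build SCRIPT_FILENAME from $document_root, so neither nginx nor PHP falls back to the compiled-in html/ directory. The paths are the question's own; a sketch, not a drop-in config.

        cat > /usr/local/nginx/sites-available/website.com <<'EOF'
        server {
            listen 80;
            server_name www.website.com;
            root /home/user/public_html/website.com/public;
            index index.php index.html;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }
        EOF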

    Read the article

  • rm -rf not erasing directory

    - by chief
    I am attempting to erase a directory called apps. When I run rm -rf apps, it looks like it erases it for the moment, but when I log back on to the server the directory is still there, highlighted in green:

        drwxrwxrwx 3 user user 4096 2010-04-24 18:33 apps

    This is on Ubuntu 9.10.
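
    A hedged diagnostic sketch: the green highlight only means the directory is other-writable in ls's default colors; the telling question is whether rm actually failed, or whether something recreates the directory (a deploy job, a mount, or a login landing on a different host). Generic checks:

        rm -rfv apps           # -v lists what is really being removed
        echo $?                # non-zero means rm hit an error
        mount | grep -w apps   # is apps a mountpoint?
        ls -ld apps            # does it reappear immediately?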

    Read the article

  • RTNETLINK answers: Invalid argument

    - by LinuxPenseur
    When my system boots up, it shows the following messages:

        Bringing up loopback interface:                                  [ OK ]
        Bringing up interface eth0: RTNETLINK answers: Invalid argument  [ OK ]
        Bringing up interface eth1: RTNETLINK answers: Invalid argument  [ OK ]
        Bringing up interface eth2: RTNETLINK answers: Invalid argument  [ OK ]
        Bringing up interface eth3: RTNETLINK answers: Invalid argument  [ OK ]

    Why is this happening? Normally it does not print "RTNETLINK answers: Invalid argument". I ran ifconfig and the output is:

        eth0  Link encap:Ethernet  HWaddr 00:00:50:6D:56:B4
              inet addr:120.0.10.137  Bcast:120.0.255.255  Mask:255.255.255.0
              inet6 addr: fe80::200:50ff:fe6d:56b4/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:214 (214.0 b)
              Base address:0xa000

        eth1  Link encap:Ethernet  HWaddr 00:00:50:6D:56:B5
              inet addr:121.0.10.137  Bcast:121.0.255.255  Mask:255.255.255.0
              inet6 addr: fe80::200:50ff:fe6d:56b5/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:214 (214.0 b)
              Base address:0xc000

        eth2  Link encap:Ethernet  HWaddr 00:00:50:6D:56:B6
              inet addr:128.0.10.137  Bcast:128.0.255.255  Mask:255.255.255.0
              inet6 addr: fe80::200:50ff:fe6d:56b6/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:14 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1006 (1006.0 b)  TX bytes:396 (396.0 b)
              Interrupt:16

        eth3  Link encap:Ethernet  HWaddr 00:00:50:6D:56:B7
              inet addr:123.0.10.137  Bcast:123.0.255.255  Mask:255.255.255.0
              inet6 addr: fe80::200:50ff:fe6d:56b7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:10 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:728 (728.0 b)  TX bytes:396 (396.0 b)
              Interrupt:17

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:14 errors:0 dropped:0 overruns:0 frame:0
              TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:980 (980.0 b)  TX bytes:980 (980.0 b)

    What could be the reason for the message, and how do I get things back to normal? Thanks.
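
    A hedged observation with a check: the printed broadcast on eth0 (120.0.255.255) does not match a /24 netmask (which implies 120.0.10.255), and an inconsistent BROADCAST/NETMASK/NETWORK trio in the ifcfg files is a classic trigger for "RTNETLINK answers: Invalid argument". The corrected values below are derived from the question's eth0; verify each interface the same way.

        grep -E 'IPADDR|NETMASK|BROADCAST|NETWORK' \
            /etc/sysconfig/network-scripts/ifcfg-eth*
        # For IPADDR=120.0.10.137 with NETMASK=255.255.255.0, the
        # consistent values would be:
        #   NETWORK=120.0.10.0
        #   BROADCAST=120.0.10.255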

    Read the article

  • Automatically mounting windows share in Fedora 12

    - by user15865
    I'm trying to automatically mount a Windows share on a Fedora 12 (FC12) machine. Mounting manually works:

        mount -t cifs //nas01/servers -o username=guest,password=myPassword /mnt/nas01/servers

    I updated /etc/fstab with the following:

        //nas01/servers /mnt/nas01/servers cifs username=guest,password=myPassword 0 0

    but nothing happens after a reboot. The thing that has me baffled: if, after a reboot, I run mount -a, the share is mounted. Any ideas on this? Thank you, Martin
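
    A hedged sketch: the boot-time fstab pass can run before the network is up, which is exactly the "mount -a works later" symptom. Marking the entry _netdev and enabling the netfs service, which mounts network filesystems after networking starts, is the usual Fedora-era fix; everything else mirrors the question's entry.

        # /etc/fstab line with _netdev added:
        #   //nas01/servers /mnt/nas01/servers cifs username=guest,password=myPassword,_netdev 0 0
        chkconfig netfs on
        service netfs start   # mounts the _netdev/cifs/nfs entries now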

    Read the article

  • How to "ignore" username and password prompt in net use

    - by Mattisdada
    I have a logon.cmd script that I'm using to map network drives to the user's profile. It looks like this:

        ::Onboarding
        net use m: /delete
        net use m: \\BOB\onboarding
        ::Bookings
        net use n: /delete
        net use n: \\BOB\bookings
        ::Accounts
        net use j: /delete
        net use j: \\BOB\accounts

    It works fine until it reaches a folder the current user cannot access; it then asks for a username and password instead of erroring out and continuing. Notes: this very script used to work on another Samba PDC network, but I've moved it to another server (still a Samba PDC) and now it breaks. Is there any way for it to ignore the username/password prompt and just continue?
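
    A hedged, untested sketch in the script's own batch syntax: supplying explicit credentials (even a deliberately wrong password) tends to make net use fail with an error rather than prompt, and errorlevel can then skip the mapping. The dummy-password approach is an assumption, not verified against this Samba setup.

        ::Sketch: fail instead of prompting, then continue.
        net use m: /delete /y
        net use m: \\BOB\onboarding dummyPass /user:%USERNAME%
        if errorlevel 1 echo Skipping M: (no access)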

    Read the article
