Search Results

Search found 5109 results on 205 pages for 'specify'.

  • Apache2 default vhost in alphabetical order or override with _default_ vhost?

    - by benbradley
    I've got multiple named vhosts on an Apache web server (CentOS 5, Apache 2.2.3). Each vhost has its own config file in /etc/httpd/vhosts.d, and these vhost config files are included from the main httpd.conf with:

        Include vhosts.d/*.conf

    Here's an example of one of the vhost confs:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.domain.biz
            ServerAlias domain.biz www.domain.biz
            DocumentRoot /var/www/www.domain.biz
            <Directory /var/www/www.domain.biz>
                Options +FollowSymLinks
                Order Allow,Deny
                Allow from all
            </Directory>
            CustomLog /var/log/httpd/www.domain.biz_access.log combined
            ErrorLog /var/log/httpd/www.domain.biz_error.log
        </VirtualHost>

    Now when anyone tries to access the server directly by its public IP address, they get the first vhost specified in the aggregated config (so in my case it's alphabetical order from the vhosts.d directory). I'd like anyone accessing the server directly by IP address to just get a 403 or a 404. I've discovered several ways to set a default/catch-all vhost, and some conflicting opinions:

    - I could create a new vhost conf in vhosts.d called 000aaadefault.conf or something, but that feels a bit nasty.
    - I could have a <VirtualHost> block in my main httpd.conf before the vhosts.d directory is included.
    - I could just specify a DocumentRoot in my main httpd.conf.
    - What about specifying a default vhost in httpd.conf with _default_? http://httpd.apache.org/docs/2.2/vhosts/examples.html#default

    Would having a <VirtualHost _default_:*> block in my httpd.conf before I Include vhosts.d/*.conf be the best way for a catch-all?
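
    A minimal sketch of such a catch-all (my illustration, not from the post; the ServerName and empty DocumentRoot are assumptions), placed in httpd.conf before the Include line so direct-IP requests get a 403:

        # Illustrative names; the directory should exist and be empty.
        <VirtualHost _default_:80>
            ServerName default.invalid
            DocumentRoot /var/www/empty
            <Location />
                Order Allow,Deny
                Deny from all
            </Location>
        </VirtualHost>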

  • Linux NFS create mask and force user equivalent

    - by Mike
    I have two Linux servers:

    - fileserver: Debian 5.0.3 (2.6.26-2-686), Samba 3.4.2
    - apache: Ubuntu 10.04 LTS (2.6.32-23-generic), Apache 2.2.14

    I have a number of Samba shares on fileserver so that I can access files from Windows PCs. I am also exporting /data/www-data to the apache server, where I have it mounted as /var/www. The setup is okay, except for when I come to create files on the NFS mount. I end up with files that cannot be read by Apache, or that cannot be modified by other users on my system. With Samba, I can specify force user, force group, create mask and directory mask, and this ensures that all files are created with suitable permissions for my Apache web server. I can't find a way to do this with NFS. Is there a way to force permissions and ownership with NFS - am I missing something obvious? Although I've spent quite a bit of time with Linux, and am weaning myself off Windows, I still haven't quite got to grips with Linux permissions... If this is not the right way to do things, I am open to alternative suggestions.
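
    One hedged sketch of the closest NFS equivalent (my assumption, not from the post): NFS has no create mask, but an export can squash every client access to a fixed owner via all_squash, which approximates Samba's force user/force group. In /etc/exports on fileserver (33 is www-data on Debian/Ubuntu; the client hostname is illustrative):

        /data/www-data  apache-host(rw,sync,all_squash,anonuid=33,anongid=33)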

  • ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I'm installing Ubuntu 12.04.2 with a preseed file, but there is one buggy problem with the preseed mirror setting. The symptom is that the installation process gets stuck. Tracking down the log file reveals the real problem: the installation is looking for a file that isn't there. This is just one of them; another pops up if I fake that file. This all happens during preseeding, so I believe preseed has something to do with it. Googling "ubuntu preseed mirror" finds a post with this example:

        # If you select ftp, the mirror/country string does not need to be set.
        #d-i mirror/protocol string ftp
        d-i mirror/country string manual
        d-i mirror/http/hostname string archive.ubuntu.com
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
        # CC is the ISO-3166-2 code for the selected country. You can preseed this
        # so that it does so without asking.
        #d-i mirror/http/mirror select CC.archive.ubuntu.com
        # Suite to install.
        #d-i mirror/suite string lucid
        # Suite to use for loading installer components (optional).
        #d-i mirror/udeb/suite string lucid
        # Components to use for loading installer components (optional).
        #d-i mirror/udeb/components multiselect main, restricted

    I wonder about the difference between d-i mirror/http/hostname and d-i mirror/http/mirror - I mean, they both specify a mirror, right? In my preseed file there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo. Here are my questions: Does preseed fetch files/resources from the internet even when I use a local repo? Why is it looking for a file that isn't even there? This has bothered me for quite some time; many thanks in advance to anyone who might give any help.
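
    For reference, a minimal local-mirror stanza might look like this (the hostname and path are my illustrative assumptions, not the poster's values):

        d-i mirror/protocol string http
        d-i mirror/http/hostname string repo.example.local
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string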

  • Exchange 2003 automatically converts text/plain emails to text/html for IMAP retrieval

    - by wfaulk
    When accessing an Exchange 2003 server via IMAP, emails that were sent as text/plain (and ones that had no MIME encoding specified at all) get automatically converted to multipart/alternative with the original text/plain body and a text/html body. This is … stupid. It doesn't even bother to specify a monospaced font. The new MIME part starts like this:

        Content-Type: text/html; charset="iso-8859-1"
        Content-Transfer-Encoding: quoted-printable

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
        <HTML>
        <HEAD>
        <META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; =
        charset=3Diso-8859-1">
        <META NAME=3D"Generator" CONTENT=3D"MS Exchange Server version =
        6.5.7654.12">
        <TITLE>{{subject}}</TITLE>
        </HEAD>
        <BODY>
        <!-- Converted from text/plain format -->
        <BR>
        <P><FONT SIZE=3D2>{{body}}

    (All the "3D" stuff is quoted-printable encoding for an equals sign; there's nothing wrong on that front, surprisingly.) How can I make this stop?

  • Custom Dreamweaver DocTypes

    - by Hugh Guiney
    Dreamweaver CS5 with Dreamweaver HTML5 Pack 1.2.7, Windows 7 x64. When I go to create a new document and select the HTML5 DocType, Dreamweaver gives me the legacy encoding/character-set declaration:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    I want to replace it with the new, abbreviated style:

        <meta charset="utf-8">

    The relevant file seems to be %ProgramFiles(x86)%\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\NewDocuments\Default.html, which has a blank charset that is then apparently replaced with the appropriate character set dynamically:

        <meta http-equiv="Content-Type" content="text/html; charset=">

    I changed it, but then new documents show up like this:

        <meta charset="">
        <title>Untitled Document</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    It seems Dreamweaver added the legacy declaration back in after my modification, and as far as I can tell, there's no way to specify that the charset definition should go in between the quotes, either. Additionally, any modifications to Default.html apply to every DocType, whereas I only want this change to apply to the HTML5 DocType. Is there anything in the configuration files that would allow me to make any of these customizations? If not, is there an extension that does it?

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers. My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues. My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    1. Is there already an option to do this in CUPS that I've missed?
    2. The line I use to add my queue is lpadmin -p MultiPass -E -v multipass -P "Generic PostScript Printer". But the DeviceURI is bad unless I specify a directory, as in -v multipass:/tmp. Why is this?
    3. For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. Logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
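
    For what it's worth, a minimal sketch of a tee-style backend under the documented backend(7) contract (queue names are my illustrative assumptions, and this is untested): backends are invoked as backend job-id user title copies options [file], must advertise themselves when run with no arguments, read the job from stdin when no file argument is given, and exit 0 on success.

        #!/bin/bash
        # No arguments: CUPS is asking the backend to identify itself for discovery.
        if [ $# -eq 0 ]; then
            echo "direct multipass \"Unknown\" \"MultiPass tee backend\""
            exit 0
        fi
        tmp=$(mktemp)                      # buffer the job so it can be printed twice
        cat "${6:-/dev/stdin}" > "$tmp"    # 6th argument is the spool file, if any
        lp -d printer1 "$tmp"              # illustrative destination queue names
        lp -d printer2 "$tmp"
        rm -f "$tmp"
        exit 0

    The script would live as /usr/lib/cups/backend/multipass, owned by root and executable; CUPS only runs backends it finds in that directory.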

  • use ssh tunnel with phpmyadmin

    - by JohnMerlino
    I've been using an ssh tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often, and I want to try phpMyAdmin as an alternative. However, I cannot connect to the remote server using the ssh tunnel in phpMyAdmin, albeit I can connect locally. These are the steps I've tried:

    1. Open a tunnel, listening on localhost:3307 and forwarding everything to xxx.xxx.xxx.xxx:3306 (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the port for the tunnel open, alongside my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address    State
        tcp        0      0 127.0.0.1:3306   0.0.0.0:*          LISTEN
        tcp        0      0 127.0.0.1:3307   0.0.0.0:*          LISTEN
        tcp        0      0 0.0.0.0:80       0.0.0.0:*          LISTEN
        ...

    2. Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly identify 3307 as the port, so traffic forwards to the remote server, and hence it logs me in to the remote server. Unfortunately, the localhost/phpmyadmin login interface doesn't allow you to specify a port option. So I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would then work with port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately, it didn't work. When I use the remote credentials to log in, it gives me the error:

        #1045 Cannot log in to the MySQL server
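
    A hedged aside on the likely snag (my assumption, not from the post): MySQL treats the host name "localhost" as a request for the Unix socket and ignores the TCP port, so a port setting only takes effect when the host is 127.0.0.1. In phpMyAdmin's own config that would be something like:

        $cfg['Servers'][$i]['host'] = '127.0.0.1';
        $cfg['Servers'][$i]['port'] = '3307';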

  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array. At the time of the migration I did not have all 6 drives - more specifically, the fourth and fifth drives (slots 3 and 4) were already in use in the originating array, so I created the RAID6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch - the new drives are placed in new slots, which results in this /proc/mdstat snippet:

        ...
        md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
              25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
        ...

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential sources of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

    UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of said field to change the slot number - I don't suppose there is some standard way to manipulate the RAID superblock?
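
    One commonly cited workaround, offered as a hedged sketch rather than a verified answer: device numbers are assigned at creation time, so the array can be re-created in place over the same members with --assume-clean, listing the devices in the desired slot order. This rewrites the superblocks and is only safe if level, chunk size, metadata version and device order are reproduced exactly, with backups on hand:

        # DANGEROUS if any parameter differs from the original array
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --metadata=1.0 --level=6 \
              --chunk=64 --raid-devices=6 \
              /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1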

  • High speed network configuration

    - by Peter M
    Sorry if this seems to be a stupid question; I'm not sure how to specify what I want to know when checking Google. I will have 2 or 3 devices pumping out data on a 100Base-T port. The combined data rate of all devices is about 15 MB/s, which exceeds the optimal 100Base-T channel capacity (about 12 MB/s) but is well within the realms of a 1000Base-T connection. Each device will be sending a burst of data in the form of an FTP transfer to a common, single host computer in a sequential manner, i.e.:

    - Device A establishes an FTP connection and transfers data
    - Device B establishes an FTP connection and transfers data
    - Device C establishes an FTP connection and transfers data

    It may be that the A&B, B&C and C&A transfers overlap in the time domain to some extent. There will be minimal traffic going back from the computer to each device (in general, whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer. Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features in a switch should I be looking for? Or would it be better to have 3 physical point-to-point 100Base-T dedicated connections between each device and the host computer (thus having at least 3 physical Ethernet interfaces on that computer)? Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help. Peter

  • Ubuntu 9.04: Ripping CDs with grip?

    - by chris
    I tried to rip a CD tonight and couldn't figure out how to configure grip - /dev/cdrom doesn't seem to be the mount point for music CDs any more. How can I configure grip to find CDs?

    Update: /etc/fstab has

        /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

    But there's nothing visible in /media/cdrom0 (or /media/cdrom, which is a symlink to cdrom0). There's an icon on the desktop labeled "Audio Disk", and opening it shows the .wav files on the CD. The location is cdda://sr0/, but grip doesn't like that either. Trying to manually mount /dev/sr0, I get:

        $ sudo mount -t auto /dev/sr0 foo/
        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: you must specify the filesystem type

    Update 2: Tried to change the media handling preferences (from a file browser: Edit > Preferences > Media > CD Audio) to "Do Nothing". CD still doesn't mount.

    Update 3: With an audio CD in the drive:

        $ ls -l /dev/ | grep cd
        lrwxrwxrwx  1 root root      3 2009-09-15 22:13 cdrom1 -> sr0
        lrwxrwxrwx  1 root root      3 2009-09-15 22:13 cdrw1 -> sr0
        drwxr-xr-x  2 root root     60 2009-09-15 22:13 pktcdvd
        lrwxrwxrwx  1 root root      3 2009-09-15 22:13 scd0 -> sr0
        crw-rw----+ 1 root cdrom 21, 2 2009-09-15 22:13 sg2
        brw-rw----+ 1 root cdrom 11, 0 2009-09-15 22:13 sr0
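
    An aside that may explain the mount failures (my assumption, not from the post): audio CDs carry no filesystem, so mount will always refuse them; rippers read audio frames from the raw device instead. That suggests pointing grip at /dev/sr0 directly rather than at a mount point, which is also how the command-line ripper works:

        cdparanoia -d /dev/sr0 -B    # rip all tracks from the raw device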

  • Ubuntu cannot resolve unmet dependency

    - by DisgruntledGoat
    I'm trying to install a package on my Ubuntu 8.10 server. However, I get this message:

        The following packages have unmet dependencies.
          webmin: Depends: apt-show-versions but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I run apt-get -f install, which offers to install apt-show-versions and libapt-pkg-perl. After selecting to install without verification, I get these errors:

        Err http://gb.archive.ubuntu.com intrepid/universe libapt-pkg-perl 0.1.22build1
          404 Not Found
        Err http://gb.archive.ubuntu.com intrepid/universe apt-show-versions 0.13
          404 Not Found
        Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/universe/liba/libapt-pkg-perl/libapt-pkg-perl_0.1.22build1_i386.deb 404 Not Found
        Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/universe/a/apt-show-versions/apt-show-versions_0.13_all.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    I've tried running apt-get update and adding --fix-missing as suggested, but neither works. Where do I go from here?
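
    A hedged note on those 404s (my inference, not part of the question): Ubuntu 8.10 "intrepid" is past end of life, and packages for EOL releases are moved off the country mirrors to old-releases.ubuntu.com, so repointing sources.list there and updating often clears exactly this failure:

        sudo sed -i 's/gb.archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get -f install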

  • How to define nodes from a Hiera file in Puppet?

    - by Pigueiras
    I am using Puppet and the puppet network device management module, and I am trying to build my custom type. With the built-in type for router configuration, you can specify a list of nodes and then the configuration inside each node:

        node "c2950.domain.com" {
            Interface {
                duplex => auto,
                speed  => auto
            }
            interface { "FastEthernet 0/1":
                description => "--> to end-user workstation",
                mode        => access,
                native_vlan => 1000
            }
            # [...] More configuration
        }

    What I am trying to do is move the manifest declaration of the nodes and the configuration of my custom type to a Hiera file like this one:

        nodes:
          - node1
          - node2
        config_device:
          node1:
            custom_parameter: "whatever1"
          node2:
            custom_parameter: "whatever2"

    And then, in the manifest, iterate over the Hiera data, creating each node with its configuration, with something like this (taking as reference this question on Server Fault):

        class my_class {
            $nodes = hiera_array('nodes')
            define hash_extract() {
                $conf_hash = hiera_hash("config_device")
                $custom_parameter = $conf_hash[$name]  ## TRICK lies in $name variable
                node $name {
                    my_custom_device { $name:
                        custom_parameter => $device_conf['custom_parameter']
                    }
                }
            }
            hash_extract { $nodes: }
        }

    But this solution has two problems: I cannot define a node inside a define, and I cannot parameterize a node name. So, is there any way to declare nodes from a Hiera file with their configuration inside?
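
    A hedged sketch of the usual alternative (my suggestion, not the asker's code): skip node blocks entirely and let create_resources fan the Hiera hash out into resources, since the hash keys already behave like resource titles:

        class my_class {
            $devices = hiera_hash('config_device')
            # Each key becomes a my_custom_device title, each value its parameter hash.
            create_resources('my_custom_device', $devices)
        }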

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail/calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail, there is never a good time to back them up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me:

    1. IMAP (at least the implementation in Outlook) is *very slow*. Furthermore, if I move a large number of messages to the IMAP server, it blocks the entire Outlook client, sometimes for hours, which is quite annoying.
    2. I can't use Exchange over HTTP to do mail without launching a VPN session, because the client-side rules I have which organize my mail fail, and Outlook disables a rule if the IMAP server can't be reached.
    3. If I reply to a message from my IMAP store, I have to specify an SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing.

    ... but it has the main advantage of being very easy to back up, with a couple of cron jobs that essentially do an rsync. Short of moving the IMAP server to my local host (which seems like it might have the same file-locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3), though. For problem (2), would it be possible to somehow tell Outlook that the IMAP server is "offline" and have it synchronize my changes during a periodic "send and receive"? If so, I wonder whether it would block the Outlook client, as in problem (1), and whether it would be compatible with the client-only rules I use to sort my mail into folders. I've looked all over the options menu and have not found a way to tell Outlook not to use a certain account for sending mail, which would solve (3). Is anyone else crazy enough to be doing something like this? Any ideas?

  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a 4-year-old Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones, because they have much better low-frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony: for most tasks (movies, music, games) I have my main volume at 3 ticks - drag to 0 with the mouse and press the up key 3 times (the handle does not even rise 1 pixel) - and my wave volume at 50%. I notice that when I do this, I get a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume.

    I experimented in Audition. When I generate a tone of 1000 Hz at -50 dB (all Windows volumes at max), the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 to 100). When I generate such a tone at -80 dB (the minimum I can specify), I can still clearly hear the tone, although it is really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a square-wave tone that is just 1 bit high, and I can still hear it! For most uses this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8-bit hardware), but I'm also in the process of programming a hearing-test program, in which case this problem is a death blow, as the test subjects will hear tones even at the bottom of the theoretical range (lowering the Windows volume is futile, see above). (I cannot update drivers, as Dell has discontinued XP support for my model.)
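
    A quick sanity check of those numbers (my arithmetic, not the poster's): a -50 dB tone has an amplitude of 10^(-50/20) ≈ 0.0032 of full scale, and 0.0032 × 32768 ≈ 104, which matches the observed range of about -100 to 100 - roughly 8 significant bits out of 16.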

  • Can't mount hard drive. Ubuntu 12.04

    - by Sam
    I am trying to recover some pictures on my 320 GB hard disk, so I put in a live Ubuntu CD, and I am in that right now. In the devices list it shows my USB drive, but not my 320 GB hard disk. I can see the disk in Disk Utility (it says it's on /dev/sda), but it's not mounted, and it says it has a few bad sectors but is OK. In Disk Usage Analyzer, it says my maximum capacity is 13.4 GB, so it's definitely not using the 320 GB hard disk. I tried the following:

        sudo mkdir /media/newhd         (worked)
        sudo mount /dev/sda /media/newhd    (didn't work; it says I must specify the filesystem type)

    I then tried:

        fsck.ext4 -f /dev/sda

    That didn't work either. It said "Superblock invalid, trying to backup blocks", then: "Bad magic number in super-block while trying to open /dev/sda. The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock."

    Does anyone have any ideas? The whole problem started when my Windows Vista machine said "Can't find operating system". Any ideas on how I can get onto my hard drive at /dev/sda?
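
    One hedged observation (my reading of the commands shown, not a confirmed diagnosis): both mount and fsck.ext4 were aimed at the whole disk, /dev/sda, while filesystems normally live on a partition such as /dev/sda1 - and a Vista system drive would be NTFS, not ext4. Something along these lines may behave differently:

        sudo fdisk -l /dev/sda                                # list the partitions first
        sudo mount -t ntfs-3g -o ro /dev/sda1 /media/newhd    # read-only, assuming NTFS on sda1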

  • Windows 2008 R2 DHCP Overlapping Scopes

    - by Buska
    We are trying to troubleshoot a scope overlap problem. We have multiple device types, and we wish to give each type a different range of a 16-bit subnet; e.g., we wish to give X devices 192.168.2.1-192.168.2.254/16 and Y devices 192.168.3.1-192.168.3.254/16. We are trying to accomplish this by creating different scopes and using the vendor class identifier (option 60). The problem is DHCP won't allow us to create these scopes with 16-bit masks because of the potential overlap. We aren't overlapping the address pools, so why does DHCP care, and can we work around it? If this isn't possible, how can I assign specific ranges by device type without creating multiple scopes? Any thoughts would be helpful.

    UPDATE:

    - Entire scope is 192.168.0.0/16
    - Gateway is 192.168.1.1/16
    - Device hardware A: 192.168.20.1-192.168.20.254/16
    - Device hardware B: 192.168.26.1-192.168.26.254/16
    - Device hardware C: 192.168.85.1-192.168.85.254/16

    We tried to set up multiple scopes for each device type (A, B, C) but couldn't specify a 16-bit mask, as scope A could technically overlap scope B even though our start and end addresses don't. I hope this makes more sense. Thanks for your thoughts.

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a kernel ID of "Use Default". I then configured my server the way that I wanted to and took a snapshot of it. I then created my own AMI to create new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error:

        EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).

    This appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used by my original server. I have deleted my original server and don't know what it was using. What is the best process to follow in order to not have these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Then should I just document this and always specify it during the deployment of my next servers using my custom AMI? Or should I choose a custom kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
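
    A hedged pointer for the "how do you know which kernel it selected" part (standard EC2 instance metadata on paravirtual instances, though whether it applies to every instance type is my assumption): a running instance can report its own kernel ID from inside, which makes it easy to record before the instance is deleted:

        curl http://169.254.169.254/latest/meta-data/kernel-id    # returns the aki- ID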

  • How to tell Linux to explicitly swap out main memory of a suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory on my laptop, so it is paging and swapping and thrashing all the time, and the loadavg is about 2 (compcache is already in use, with a usual swap partition as well), but it is slowly moving forward (although I'm afraid it will finally try to allocate 2 GB and crash, draining 2 days of thrashing). When I want to use the laptop for something else, I stop the process, then start the X server, Firefox and other programs. The problem is that when I start Firefox, the loadavg jumps to 10 and the system becomes almost entirely unresponsive (a long time to toggle caps lock, slow mouse cursor updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How can I tell Linux to swap this process out entirely (e.g. when I'm not intending to resume it in the short term), possibly waking other data from swap? It would also be useful to be able to specify the exact swap device to swap the given process out to.
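
    A hedged sketch of one approach (my assumption; it needs the cgroup v1 memory controller mounted and swap headroom): lowering a memory cgroup's limit below the process's resident size forces the kernel to reclaim, i.e. swap out, that process's pages immediately:

        mkdir /sys/fs/cgroup/memory/parked
        echo $MKCROMFS_PID > /sys/fs/cgroup/memory/parked/tasks    # placeholder for the stopped PID
        echo 64M > /sys/fs/cgroup/memory/parked/memory.limit_in_bytes

    Picking which swap device receives the pages is not controllable per process this way, though; swap placement follows the global device priorities (swapon -p).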

  • FTP account ownership on vhost directory makes Apache not run website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root, sudo-enabled user. I need an FTP account that is not that sudo-able account, so I created a no-login account just for that purpose. I've set up VSFTPd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-able account and gave ownership of it to the second account. To make it easier to understand: the main account (the one I use to manage my VPS) is called ubuntu, and the FTP user is named ftpuser; I created a directory /home/ubuntu/mywebsite and gave its ownership to ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache is not given any privilege to run the website. How can I make the website run properly, and which way is the most secure? Should I run that virtual host as another user (if that's possible)? Should I force the FTP user to use the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this issue? Any help is appreciated. I'd like to keep the default permissions, so any update won't break anything (because of permissions being reset).
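
    A hedged sketch of the group-based route mentioned in the question (the commands are my assumption, reusing the names above): keep ftpuser as the owner, hand the group to www-data, and set the setgid bit on directories so new uploads inherit that group:

        sudo chown -R ftpuser:www-data /home/ubuntu/mywebsite
        sudo chmod -R g+rX /home/ubuntu/mywebsite
        sudo find /home/ubuntu/mywebsite -type d -exec chmod g+s {} \;

    With the default 755/644 permissions this already lets Apache read the site; group write (775/664) would only be needed where WordPress must write, e.g. wp-content.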

  • Can't install mplayer or vlc on ubuntu

    - by mirko4
    I am trying to install MPlayer or VLC on Ubuntu Feisty, but I can't do it. I tried with apt-get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          mplayer: Depends: libasound2 (> 1.0.16) but 1.0.13-1ubuntu5 is to be installed
                   Depends: libavcodec51 (>= 0.svn20080206-8) but it is not going to be installed or
                            libavcodec-unstripped-51 (>= 0.svn20080206-8) but it is not installable
                   Depends: libavformat52 (>= 0.svn20080206-8) but it is not going to be installed or
                            libavformat-unstripped-52 (>= 0.svn20080206-8) but it is not installable
                   Depends: libavutil49 (>= 0.svn20080206-8) but it is not going to be installed or
                            libavutil-unstripped-49 (>= 0.svn20080206-8) but it is not installable
                   Depends: libcaca0 (>= 0.99.beta14-1) but 0.99.beta11.debian-2build1 is to be installed
                   Depends: libcdparanoia0 (>= 3.10.2+debian) but 3.10+debian~pre0-4build1 is to be installed
                   Depends: libcucul0 (>= 0.99.beta14-1) but 0.99.beta11.debian-2build1 is to be installed
                   Depends: libfaad0 (>= 2.6.1) but it is not going to be installed
                   Depends: libfribidi0 (>= 0.10.9) but 0.10.7-4build1 is to be installed
                   Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                   Depends: libjack0 (>= 0.109.2) but it is not going to be installed
                   Depends: liblzo2-2 but it is not going to be installed
                   Depends: libopenal1 but it is not going to be installed
                   Depends: libpostproc51 (>= 0.svn20080206-8) but it is not going to be installed or
                            libpostproc-unstripped-51 (>= 0.svn20080206-8) but it is not installable
                   Depends: libspeex1 (>= 1.2~beta3-1) but 1.1.12-3 is to be installed
                   Depends: libsvga1
                   Depends: libswscale0 (>= 0.svn20080206-8) but it is not going to be installed or
                            libswscale-unstripped-0 (>= 0.svn20080206-8) but it is not installable
                   Depends: mplayer-skin
          python-apt: Depends: libapt-inst-libc6.7-6-1.1
                      Depends: libapt-pkg-libc6.7-6-4.6
          scim-gtk2-immodule: Depends: libscim8c2a (>= 1.4.6) but 1.4.4-7ubuntu1 is to be installed
          scim-modules-socket: Depends: libscim8c2a (>= 1.4.6) but 1.4.4-7ubuntu1 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    I tried apt-get -f install, but it doesn't work either. What should I do? Please help me!

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04

    /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check whether script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php with nano and pasted <?php system('id'); ?> into it, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thanks.
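
    A hedged observation (my reading of the two configs, not a confirmed fix): the vhost binds .php to the handler application/x-httpd-php, but suphp.conf only defines a handler for application/x-httpd-suphp, so mod_php may still be serving the scripts as www-data. Aligning the names (and disabling mod_php for the vhost) is the usual first step:

        AddHandler application/x-httpd-suphp .php
        suPHP_AddHandler application/x-httpd-suphp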

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g.:

        Section No.                   Pages
        01010 Summary of Work..............5
        01025 Prices.......................2
        01400 Quality Control..............1
        01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as it was usable by our low-tech users.

    Related question: all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would this require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, the approach required too much maintenance for our Wordophobes.
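
    As far as I know there is no built-in TOC field for page totals across files, so here is a hedged VBA starting point (untested; the folder path and output format are illustrative assumptions). It loops over every .doc in a folder and types a "name, page count" line at the cursor:

        Sub BuildSectionTOC()
            Dim f As String, d As Document, pages As Long
            f = Dir("C:\Specs\*.doc")                       ' illustrative folder
            Do While f <> ""
                Set d = Documents.Open("C:\Specs\" & f, ReadOnly:=True, Visible:=False)
                pages = d.ComputeStatistics(wdStatisticPages)   ' total pages in this section file
                d.Close SaveChanges:=False
                Selection.TypeText f & vbTab & pages & vbCrLf
                f = Dir                                      ' next file in the folder
            Loop
        End Sub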

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration, domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to [email protected] should be forwarded to [email protected]).

    My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I leave mydestination blank, then Postfix validates recipients against virtual_mailbox_maps but rejects mail for local aliases of domain1.com.

    /etc/postfix/main.cf:

        myhostname = domain1.com
        mydomain = domain1.com
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = domain1.com
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_mailbox_base = /home/vmail/domains
        virtual_alias_domains = domain2.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        virtual_transport = dovecot

    /etc/postfix/virtual:

        domain1.com          right-hand-content-does-not-matter
        firstname.lastname   user1
        [more aliases..]
        domain2.com          right-hand-content-does-not-matter
        @domain2.com         @domain1.com

    /etc/postfix/vmailbox:

        [email protected]    user1/Maildir
        [email protected]    user2/Maildir

    /etc/aliases:

        root: :include:/etc/postfix/aliases/root
        webmaster: :include:/etc/postfix/aliases/webmaster
        [etc..]

    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
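
    A hedged note pointing in one direction (standard Postfix behaviour, but my suggestion rather than a confirmed fix): virtual_alias_maps is consulted for every address class, so the local-style aliases of a virtual mailbox domain can live there as explicit mappings to local users, which lets the domain stay out of mydestination while unknown recipients are still rejected. In /etc/postfix/virtual that might look like:

        [email protected]         root@localhost
        [email protected]    webmaster@localhost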

  • Upgrading to Java 7u65 breaks my Deployment Rule Set for Oracle applications

    - by Don Atreides
    My company uses an older version of an Oracle application that requires Java 6u45. Naturally we want to be secure, so we use a Deployment Rule Set to specify 6u45 for that internal application and let other applications use 7u60. Now that we're ready to upgrade the Java 7 half to 7u67, the Oracle application breaks with "Deployment Rule Set required version 1.6.0_45 not available." Of course it is available; it just can't find it for some reason. As a test, I specified that JavaTester.org should also use 6u45, and it works fine with no issues. But when I try to use the same configuration (7u67 and 6u45) against the Oracle application, it fails every time. If I downgrade to 7u60, it works; with 7u65 or higher, it breaks. The Oracle application hasn't changed, so it must be something different in how 7u65+ handles Deployment Rule Sets, or pathing, or something. I'm at a complete loss.

    ruleset.xml:

        <?xml version="1.0"?>
        <ruleset version="1.0+">
            <rule>
                <id location="*.mycorp.com"/>
                <action version="1.6.0_45" permission="run"/>
            </rule>
            <rule>
                <id location="http://javatester.org"/>
                <action version="1.6.0_45" permission="run"/>
            </rule>
        </ruleset>
