Search Results

Search found 26189 results on 1048 pages for 'linux guy'.


  • How do I make some files on my machine accessible via HTTP using Apache?

    - by Lazer
    I downloaded the Apache source with wget and built the binaries successfully. What do I need to do now to make some documents accessible over HTTP (start some services?)? Also, do I need to group all the files I want to make accessible into some directory and make that directory and its contents accessible, or can I make the individual documents available? I will be providing these links to my colleagues and do not want them to be down, so I need to make sure that the Apache services come back up automatically after a reboot. Does Apache have built-in support for this?
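
    One way to do both things, sketched under the assumption of a source build installed to /usr/local/apache2 (the paths and directory name are illustrative): expose a directory with an Alias, then hook apachectl into the init system so httpd returns after reboots. Apache shares directories rather than loose files, so gathering the documents under one directory is the usual approach.

        # append a share to httpd.conf (Apache 2.2-style access control)
        cat >> /usr/local/apache2/conf/httpd.conf <<'EOF'
        Alias /docs/ "/home/lazer/docs/"
        <Directory "/home/lazer/docs">
            Order allow,deny
            Allow from all
        </Directory>
        EOF

        /usr/local/apache2/bin/apachectl start   # serve now

        # start at boot (SysV-style; use update-rc.d on Debian/Ubuntu)
        ln -s /usr/local/apache2/bin/apachectl /etc/init.d/httpd
        chkconfig --add httpd && chkconfig httpd on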

    Read the article

  • XAMPP with PostgreSQL

    - by fred smith
    I'm looking for a package like XAMPP, but instead of MySQL it would use PostgreSQL. I've done some searching and haven't turned up anything other than doing a full server setup of both.

    Read the article

  • VirtualBox guest responds to ping but all ports closed in nmap

    - by jeremyjjbrown
    I want to set up a test database on a VM for development purposes, but I cannot connect to the server via the network. I've got an Ubuntu 12.04 guest on a 12.04 host in VirtualBox 4.2.4, set to bridged network mode with promiscuous mode "Allow All". When I ping the guest from any network client I get the expected result:

        PING 192.168.1.209 (192.168.1.209) 56(84) bytes of data.
        64 bytes from 192.168.1.209: icmp_req=1 ttl=64 time=0.427 ms
        ...

    Internet access inside the VM is normal. But when I nmap it I get nothing:

        jeremy@bangkok:~$ nmap -sV -p 1-65535 192.168.1.209
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:39 CST
        Nmap scan report for jeremy (192.168.1.209)
        Host is up (0.0032s latency).
        All 65535 scanned ports on jeremy (192.168.1.209) are closed
        Service detection performed. Please report any incorrect results at http://nmap.org/submit/
        Nmap done: 1 IP address (1 host up) scanned in 0.88 seconds

    ufw and iptables on the VM:

        jeremy@jeremy:~$ sudo service ufw stop
        [sudo] password for jeremy:
        ufw stop/waiting
        jeremy@jeremy:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source        destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source        destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source        destination

    I have scanned around and have no reason to believe that my router is blocking internal ports:

        jeremy@bangkok:~$ nmap -v 192.168.1.2
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:44 CST
        Initiating Ping Scan at 18:44
        Scanning 192.168.1.2 [2 ports]
        Completed Ping Scan at 18:44, 0.00s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 18:44
        Completed Parallel DNS resolution of 1 host. at 18:44, 0.03s elapsed
        Initiating Connect Scan at 18:44
        Scanning 192.168.1.2 [1000 ports]
        Discovered open port 445/tcp on 192.168.1.2
        Discovered open port 139/tcp on 192.168.1.2
        Discovered open port 3306/tcp on 192.168.1.2
        Discovered open port 80/tcp on 192.168.1.2
        Discovered open port 111/tcp on 192.168.1.2
        Discovered open port 53/tcp on 192.168.1.2
        Discovered open port 5902/tcp on 192.168.1.2
        Discovered open port 8090/tcp on 192.168.1.2
        Discovered open port 6881/tcp on 192.168.1.2
        Completed Connect Scan at 18:44, 0.02s elapsed (1000 total ports)
        Nmap scan report for 192.168.1.2
        Host is up (0.0017s latency).
        Not shown: 991 closed ports
        PORT     STATE SERVICE
        53/tcp   open  domain
        80/tcp   open  http
        111/tcp  open  rpcbind
        139/tcp  open  netbios-ssn
        445/tcp  open  microsoft-ds
        3306/tcp open  mysql
        5902/tcp open  vnc-2
        6881/tcp open  bittorrent-tracker
        8090/tcp open  unknown
        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

    Answer: it turns out all of the ports were open to the network. I installed OpenSSH and confirmed it, then edited my DB conf to listen on external IPs and all was well.
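
    A sketch of the check and fix the answer alludes to, assuming a MySQL guest (the question never names the database, so the file path and option below are assumptions): confirm nothing is bound beyond loopback, then widen the bind address.

        ss -tln                       # inside the guest: 127.0.0.1:3306 means loopback only

        sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
        sudo service mysql restart    # now nmap from the host should show 3306 open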

    Read the article

  • Text editor aware of Unicode paragraph ends, line breaks, and breaking/non-breaking spaces

    - by martinr
    I want an editor like that for writing my blog articles. I'm tired of manually converting the breaks in my rough notes to either paragraphs or line breaks for release as HTML, and tired of converting spaces to breaking or non-breaking ones. There are standard Unicode code points for these distinctions. What editor lets me write almost plain ASCII text but with built-in support for and understanding of Unicode paragraph and non-breaking-space characters? And ideally one that will let me save straight to either plain UTF-8 text or to a file of plain HTML paragraphs?

    Read the article

  • ssh_exchange_identification: Connection closed by remote host

    - by Charlie Epps
    First:

        $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
        $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    Connecting to SSH servers gives this message:

        $ ssh -vvv localhost
        OpenSSH_5.3p1, OpenSSL 0.9.8m 25 Feb 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /home/charlie/.ssh/identity type -1
        debug1: identity file /home/charlie/.ssh/id_rsa type -1
        debug3: Not a RSA1 key file /home/charlie/.ssh/id_dsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /home/charlie/.ssh/id_dsa type 2
        ssh_exchange_identification: Connection closed by remote host

    My /etc/hosts.allow is as follows:

        sshd: ALLOW

    /etc/hosts.deny is as follows:

        ALL: ALL: DENY

    I have changed my /etc/ssh/sshd_config as follows:

        ListenAddress 0.0.0.0
        Protocol 2
        # HostKey for protocol version 1
        #HostKey /etc/ssh/ssh_host_key
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile .ssh/authorized_keys
        PasswordAuthentication no
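
    One detail worth a second look (an observation, not a confirmed diagnosis): in TCP wrappers syntax the right-hand side of a hosts.allow rule is a client list, so "sshd: ALLOW" only matches a client literally named ALLOW; every real connection then falls through to "ALL: ALL: DENY", which yields exactly this ssh_exchange_identification error. A sketch of the usual form:

        # /etc/hosts.allow -- format is "daemon_list : client_list"
        sshd: 127.0.0.1 192.168.0.    # permit loopback and the local subnet
        # or, to let sshd itself handle all authentication:
        # sshd: ALL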

    Read the article

  • MaxStartups and MaxSessions configuration parameters for SSH connections?

    - by Webby
    I am copying files from machineB and machineC into machineA, where the script below runs. If a file is not on machineB then it will definitely be on machineC, so I try machineB first and fall back to machineC. I copy the files in parallel with the GNU Parallel library, currently 10 at a time, and that works fine. This is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
          el=$1
          PRIMSEC=$2
          scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait

        echo "All files copied."

    Problem statement: at some point the script starts failing with

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp connections starting at the same time, which leads me to believe that MaxStartups and MaxSessions in /etc/ssh/sshd_config are set too low. But my question is: on which server are they too low, machineB and machineC or machineA? On which machines do I need to increase these numbers? On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
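
    For what it's worth, MaxStartups throttles concurrent unauthenticated connections on the side running sshd, i.e. the machines being connected to. Since machineA is only the scp client here, the limits that matter are on machineB and machineC. A sketch of raising them (the values are illustrative):

        # on machineB and machineC, as root:
        echo 'MaxStartups 50:30:100' >> /etc/ssh/sshd_config  # up from the default 10
        echo 'MaxSessions 50'        >> /etc/ssh/sshd_config
        service sshd reload    # "service ssh reload" on Debian/Ubuntu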

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not, and different environments all have slightly different sudoers. Most of the time 90% of users are the same and 10% vary, so we cannot have a single sudoers file for everything. Right now we are using Puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and keep only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
          "staging1.acme.com" {
            #add dev1,dev2,tester1,tester2 to sudoers file
          }
          "testing2.acme.com" {
            #add tester1, tester3, tester4 to sudoers file
          }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips.

    Update 1: for security reasons we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1. That file would (somehow) generate the sudoers files on the Puppet server and send one customized file to each Puppet client. The point is to be able to search for a username in a single file and remove it more quickly than across 11 files. Adding a user to a bunch of environments won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a /etc/sudoers.d folder, only a single sudoers file.
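
    One common pattern, sketched here with invented module and alias names: keep a single ERB template on the Puppet master and branch on the client's $domain fact inside it, so the master renders one customized sudoers per node and clients never assemble fragments themselves.

        # manifests: render the single template per node
        file { '/etc/sudoers':
          owner   => 'root',
          group   => 'root',
          mode    => '0440',
          content => template('sudo/sudoers.erb'),
        }

        # templates/sudo/sudoers.erb  (newer Puppet prefers @domain)
        User_Alias ADMINS = abe, bob, carol, dave
        <% if domain == 'staging1.acme.com' -%>
        User_Alias EXTRAS = dev1, dev2, tester1, tester2
        <% elsif domain == 'testing2.acme.com' -%>
        User_Alias EXTRAS = tester1, tester3, tester4
        <% end -%>
        ADMINS ALL=(ALL) ALL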

    Read the article

  • MySQL and PostgreSQL on the same hardware

    - by Kamil Kisiel
    We recently bought some new hardware for a database server, which we were intending to dedicate to PostgreSQL. However, we now also have a requirement to run MySQL, as some software we want to use only supports that database. Since the storage in this machine is the most suitable for hosting a DB, and we don't currently have the budget for more hardware, we're thinking of running both of them on the same server. Are there any caveats or best practices we should be aware of?
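
    The biggest practical caveat is memory: with default settings each server sizes its caches as if it owned the box. A sketch of capping both explicitly (the numbers are placeholders to divide your actual RAM between them):

        # /etc/mysql/my.cnf
        [mysqld]
        innodb_buffer_pool_size = 2G

        # postgresql.conf
        shared_buffers = 2GB          # fixed allocation for PostgreSQL's cache
        effective_cache_size = 4GB    # planner hint only, not an allocation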

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing strange memory behavior. Here is a snapshot of a munin graph. As it shows, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of cleanup, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks.

    Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
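
    Two things that may help narrow this down (a sketch; the drop_caches step is safe but briefly costs cache warmth): tmpfs pages live in the page cache and are counted as inactive until the files are deleted, which fits the growing /tmp, and the kernel can be told to drop clean caches to show how much of Inactive is actually reclaimable.

        du -sh /tmp                           # tmpfs contents are held in RAM/swap
        sync                                  # write out dirty pages first
        echo 3 > /proc/sys/vm/drop_caches     # as root: drop page cache + slab
        grep -E 'MemFree|Inactive' /proc/meminfo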

    Read the article

  • GlassFish will not start when SNMP is enabled

    - by edarc
    I have a GlassFish v3 app server running on 64-bit Debian Lenny. Everything is running fine, except that I would like to monitor GF's JVM instance with SNMP. However, every time I try to enable it by adding the following <jvm-options> in domain.xml:

        -Dcom.sun.management.snmp.port=10161
        -Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl
        -Dcom.sun.management.snmp.interface=127.0.0.1

    GlassFish refuses to start:

        $ asadmin start-domain
        Waiting for DAS to start .Error starting domain: default.
        The server exited prematurely with exit code 1.
        Command start-domain failed.
        $

    There is also nothing illuminating (well, really nothing at all) in jvm.log or server.log. The snmp.acl file contains:

        acl = {
          {
            communities = public
            access = read-only
            managers = localhost
          }
        }

    and is chmod 600 (I know this is not the problem, because it will actually fail with an error about the permissions if it is set to anything other than 600).

        $ java -version
        java version "1.6.0_0"
        OpenJDK Runtime Environment (build 1.6.0_0-b11)
        OpenJDK 64-Bit Server VM (build 1.6.0_0-b11, mixed mode)
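
    Since GlassFish swallows the JVM's startup error here, one way to try to surface it (a debugging sketch, not a confirmed fix) is to hand the same system properties to a bare JVM, where any SNMP adaptor failure prints to the console, and to rule out a port conflict:

        java -Dcom.sun.management.snmp.port=10161 \
             -Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl \
             -Dcom.sun.management.snmp.interface=127.0.0.1 \
             -version                       # adaptor errors appear here directly

        netstat -lnu | grep 10161           # is something already on the UDP port?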

    Read the article

  • How to resolve an apt sources.list error in Ubuntu

    - by Bipul
    I am using Ubuntu 9.04. When I run any command like sudo apt-get update, I get the following error message:

        E: Type 'l.com/ubuntu' is not known on line 45 in source list /etc/apt/sources.list
        E: The list of sources could not be read.

    Because of this problem I am not able to download anything. Please help.
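
    The message points to a malformed line 45 in /etc/apt/sources.list; it looks like a "deb http://..." entry that lost its leading words. A sketch of the usual repair: comment the broken line out (or restore it by hand), then update again.

        sudo sed -i '45 s/^/#/' /etc/apt/sources.list   # neutralize the broken line
        # a valid entry for 9.04 looks like:
        #   deb http://archive.ubuntu.com/ubuntu jaunty main restricted
        sudo apt-get update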

    Read the article

  • Command to execute another command while replaying the command on STDOUT

    - by hakre
    It's not easy to formulate this question properly; maybe it helps if I describe what I'd like to do. I want to execute a command and pipe its output into a tool called pastebinit, which uploads the STDOUT output to pastebin. That works very well, but I would also like to send the command itself along with the output, without typing it a second time. Is there some command I can launch "my command" with that will:

    1. print "my command" on STDOUT
    2. execute "my command"

    I have the feeling that something like that exists, but as hard as it is to formulate such a question properly, I was not able to dig it up with Google so far.
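
    A small shell wrapper does exactly this (a sketch; the function name is arbitrary): print the argument list, then execute it, so both travel down the pipe together.

        show() {
            printf '$ %s\n' "$*"    # echo the command line itself
            "$@"                    # then run it, output to the same stdout
        }

        show uname -a | pastebinit  # uploads the command plus its output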

    Read the article

  • Advantages of Ubuntu LTS versions over regular Ubuntu?

    - by Adam Matan
    Do the LTS versions of Ubuntu have any advantages for non-paying customers (who don't get any support)? Going by the tech specs alone, these versions seem outdated in many respects, mainly drivers and installed software versions. For instance, my previous (bounty!) problem regarding the AGN 5100 drivers would have been solved under Ubuntu 9.04.

    Read the article

  • Why isn't MediaWiki loading?

    - by E L
    I recently set up MediaWiki on an Apache server with PostgreSQL. It installed successfully. However, when I try to access the website, I get a blank page. The error log reports the following:

        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Warning: require_once(/var/www/mediawiki-1.19.2/LocalSettings.php): failed to open stream: Permission denied in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134

    I've seen other people with similar problems, and the solutions have involved using chmod on LocalSettings.php to 644, or in other cases 755. Others have said to use chown to make LocalSettings match the Apache user, which is just 'apache' in my case. None of these solutions have worked for me. Does anyone have other suggestions, or maybe I missed something?
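
    Given that chmod and chown were already tried, one remaining usual suspect (an assumption, since the distribution isn't named) is SELinux: a LocalSettings.php written by the installer or copied from elsewhere can carry a label Apache isn't allowed to read even when the Unix permissions are right. A sketch of checking that:

        ls -lZ /var/www/mediawiki-1.19.2/LocalSettings.php      # inspect the label
        sudo restorecon -v /var/www/mediawiki-1.19.2/LocalSettings.php
        sudo grep denied /var/log/audit/audit.log | tail        # any AVC denials?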

    Read the article

  • Configure IPv6 routing

    - by godlark
    I've got IPv6 addresses from SixXS. My host is connected to the SixXS network over an AICCU tunnel (the "sixxs" interface). My host address is 2001:::2; the host on the other end has address 2001:::1. On my host, IPv6 is fully accessible. My problem is configuring IPv6 networking for the VMs. I use VirtualBox, and the VM (Ubuntu) uses tap1, bridged on the host by br0:

        #!/bin/sh
        PATH=/sbin:/usr/bin:/bin:/usr/bin:/usr/sbin
        # create a tap
        tunctl -t tap1
        ip link set up dev tap1
        # create the bridge
        brctl addbr br0
        brctl addif br0 tap1
        # set the IP address and routing
        ip link set up dev br0
        ip -6 route del 2001:6a0:200:172::/64 dev sixxs
        ip -6 route add 2001:6a0:200:172::1 dev sixxs
        ip -6 addr add 2001:6a0:200:172::2/64 dev br0
        ip -6 route add 2001:6a0:200:172::2/64 dev br0

    Host routing table:

        2001:6a0:200:172::1 dev sixxs  metric 1024
        2001:6a0:200:172::/64 dev br0  proto kernel  metric 256
        2001:6a0:200:172::/64 dev br0  metric 1024
        2000::/3 dev sixxs  metric 1024
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev sixxs  proto kernel  metric 256
        fe80::/64 dev br0  proto kernel  metric 256
        fe80::/64 dev tap1  proto kernel  metric 256
        default via 2001:6a0:200:172::1 dev sixxs  metric 1024

    Guest interface eth1 (it is connected to tap1):

        auto eth1
        iface eth1 inet6 static
            address 2001:6a0:200:172::3
            netmask 64
            gateway 2001:6a0:200:172::2

    Guest routing table:

        2001:6a0:200:172::/64 dev eth1  proto kernel  metric 256
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev eth1  proto kernel  metric 256
        default via 2001:6a0:200:172::2 dev eth1  metric 1024

    The guest pings the host, the host pings the guest, and the host pings 2001:6a0:200:172::1, but the guest cannot ping 2001:6a0:200:172::1. When the guest tries to ping, I can capture its packets on the host (with tcpdump), but the host doesn't send them on to 2001:6a0:200:172::1. What have I missed in the configuration?
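
    One thing the script never does is enable IPv6 forwarding on the host, and without it the kernel will receive the guest's packets on br0 (which is why tcpdump sees them) but refuse to relay them out over sixxs. A sketch of checking and enabling it:

        sysctl net.ipv6.conf.all.forwarding             # 0 = host won't route for the VM
        sudo sysctl -w net.ipv6.conf.all.forwarding=1   # enable until reboot
        echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf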

    Read the article

  • When I restart my virtual environment it does not re-bind to the IP address

    - by RoboTamer
    The IP no longer responds to a remote ping after a restart. By restart I mean:

        lxc-stop -n vm3
        lxc-start -n vm3 -f /etc/lxc/vm3.conf -d

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback
            up route add -net 127.0.0.0 netmask 255.0.0.0 dev lo
            down route add -net 127.0.0.0 netmask 255.0.0.0 dev lo

        # device: eth0
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.22.189.58
            netmask 255.255.255.248
            gateway 192.22.189.57
            broadcast 192.22.189.63
            bridge_ports eth0
            bridge_fd 0
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
            post-up ip route add 192.22.189.59 dev br0
            post-up ip route add 192.22.189.60 dev br0
            post-up ip route add 192.22.189.61 dev br0
            post-up ip route add 192.22.189.62 dev br0

    /etc/lxc/vm3.conf:

        lxc.utsname = vm3
        lxc.rootfs = /var/lib/lxc/vm3/rootfs
        lxc.tty = 4
        #lxc.pts = 1024                  # pseudo tty instance for strict isolation
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.name = eth0
        lxc.network.mtu = 1500
        #lxc.cgroup.cpuset.cpus = 0      # security parameter
        lxc.cgroup.devices.deny = a      # Deny all access to devices
        lxc.cgroup.devices.allow = c 1:3 rwm    # dev/null
        lxc.cgroup.devices.allow = c 1:5 rwm    # dev/zero
        lxc.cgroup.devices.allow = c 5:1 rwm    # dev/console
        lxc.cgroup.devices.allow = c 5:0 rwm    # dev/tty
        lxc.cgroup.devices.allow = c 4:0 rwm    # dev/tty0
        lxc.cgroup.devices.allow = c 4:1 rwm    # dev/tty1
        lxc.cgroup.devices.allow = c 4:2 rwm    # dev/tty2
        lxc.cgroup.devices.allow = c 1:9 rwm    # dev/urandom
        lxc.cgroup.devices.allow = c 1:8 rwm    # dev/random
        lxc.cgroup.devices.allow = c 136:* rwm  # dev/pts/*
        lxc.cgroup.devices.allow = c 5:2 rwm    # dev/pts/ptmx
        lxc.cgroup.devices.allow = c 254:0 rwm  # rtc
        # mount points
        lxc.mount.entry=proc /var/lib/lxc/vm3/rootfs/proc proc nodev,noexec,nosuid 0 0
        lxc.mount.entry=devpts /var/lib/lxc/vm3/rootfs/dev/pts devpts defaults 0 0
        lxc.mount.entry=sysfs /var/lib/lxc/vm3/rootfs/sys sysfs defaults 0 0
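
    A hedged debugging sketch, not a confirmed diagnosis: lxc-start only recreates the veth pair, so the guest's own init has to bring eth0 back up, and the upstream gateway may also hold a stale ARP entry for the address. Checking both after a restart:

        lxc-console -n vm3               # attach; inside the guest:
        ip addr show eth0                #   did the address come back?
        ip route show                    #   is the default route present?

        # from the host: refresh the gateway's ARP cache for the container IP
        arping -U -I br0 192.22.189.59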

    Read the article

  • Reset KDE System Monitor (KSysGuard)

    - by Deltik
    Something went wrong while I was attempting to restore a backup, and KDE System Guard ceased to display properly. This is the correct display (command run as root: kdesudo ksysguard): [screenshot]. This is the incorrect display (command: ksysguard): [screenshot]. In the incorrect display, the menu bar is missing and the "Process Table" tab is unclickable. I have already tried removing the directory ~/.kde/share/apps/ksysguard/, but to no avail. My question: how do I restore KSysGuard to factory defaults/normal functionality?
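
    Besides the app-data directory already removed, KDE keeps KSysGuard's UI state (menu bar, tabs) in a separate config file; clearing that is usually what resets the layout. A sketch, assuming KDE4-era paths, which vary by version:

        rm ~/.kde/share/config/ksysguardrc    # per-user UI state
        # some installs use ~/.kde4/share/config/ksysguardrc instead
        kbuildsycoca4 && ksysguard            # rebuild caches, relaunch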

    Read the article

  • Hylafax: "No font metric information" error when trying to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there are no rpm packages available for Fedora 13, I installed HylaFAX myself from the source tarball. Everything seemed fine during compile and install. When I try to send a fax with sendfax, I encounter this error:

        # sendfax -n -d <fax-number> /etc/passwd
        /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold".
        Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps
        Default options: -f Courier -1 -p 11bp -o 0
        Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'"

    It seems there is a problem with fonts, yet I have ghostscript-fonts installed. I can't find hyla.conf in the path /etc/hylafax; there is no /etc/hylafax path in my file system. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
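
    The usage text itself shows textfmt's -F flag for naming font-metric directories, and the HylaFAX client config accepts a FontPath setting for the same purpose. A sketch of pointing it at the ghostscript-fonts metrics (the directory below is the usual Fedora location, but that's an assumption, as is the client config path for a source install):

        ls /usr/share/fonts/default/ghostscript/*.afm    # confirm the metrics exist

        # test the formatter directly against that directory:
        /usr/local/sbin/textfmt -B -f Courier-Bold \
            -F /usr/share/fonts/default/ghostscript /etc/passwd > /tmp/out.ps

        # persist it for sendfax in the client config (create the file if absent):
        echo 'FontPath: /usr/share/fonts/default/ghostscript' >> /var/spool/hylafax/etc/hyla.conf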

    Read the article

  • Using PAM and vsftpd without root access

    - by Zizzencs
    I'm trying to set up a few vsftpd instances on a machine that I have no root access to. The authentication should be done through PAM with pam_listfile, like this:

        pam_listfile.so item=group sense=allow file=/path/filename onerr=fail

    I can ask the administrator to set up a PAM service for me and include that line, but he is not willing to create 6 PAM services for my 6 vsftpd instances, and I really need a different /path/filename for each vsftpd server. Is there a way to solve this problem? Can I somehow avoid using an absolute path as the parameter? (I know the correct solution would be to use one vsftpd instance and set up virtual users properly. Unfortunately, however, I have to work with what I have, and the users already exist in an Active Directory and are authenticated to the system using another PAM service.)

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated; it is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of, now that certain actions on one file could silently affect a number of other files?

    I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files, and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem either; I don't see any unexpected consequences there. I don't think copying hardlinked files is a problem, but I'm not as confident about unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., the inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files; so far this has proven handy. (I made a large number of the hardlinked files read-only.)

    None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link, while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode).

    My proposed solution is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule; in that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know?

    BTW, I'm running Kubuntu 12.04 and using btrfs. (I have 2 SSDs and 1 HDD in the PC, and I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.)

    Since I have more than one drive (with separate filesystems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script:

        sed -i 's/\(.\)/\1/' file1

    Surprisingly, this even unlinks zero-byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use case, the unlink command doesn't really "unlink"; rather, it "removes".
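
    A quick way to audit for this (a sketch): stat's link count reveals whether a file has hardlinked siblings before you edit it, and find can enumerate them on the same filesystem.

        stat -c '%h %n' somefile          # link count > 1 means a shared inode
        find / -xdev -samefile somefile   # list every name for that inode
        cp -p somefile tmp && mv tmp somefile   # unlink in place via copy + rename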

    Read the article

  • ProFTPD error message: Fatal: unknown configuration directive 'DisplayFirstChdir' on line 22 of '/etc/proftpd/proftpd.conf'

    - by LedZeppelin
    Sorry for the newb factor, but I'm trying to set up a server using this guide: http://www.intac.net/build-your-own-server/ I'm at the end of step 5, and when I try to restart ProFTPD I get the following error message:

        me@me-desktop:~$ sudo service proftpd restart
         * Stopping ftp server proftpd        [ OK ]
         * Starting ftp server proftpd
        Fatal: unknown configuration directive 'DisplayFirstChdir' on line 22 of '/etc/proftpd/proftpd.conf'
        [fail]

    Any clues on how to change line 22?
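
    For background: ProFTPD renamed this directive in the 1.3 series, where the old DisplayFirstChdir became DisplayChdir, so the guide is simply older than the installed server. A sketch of the one-line fix:

        sudo sed -i '22 s/DisplayFirstChdir/DisplayChdir/' /etc/proftpd/proftpd.conf
        sudo service proftpd restart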

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using the security groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
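
    Two tokens in the script as pasted look suspicious and are easy to test in isolation (observations, not a confirmed diagnosis): the describe call runs /user/bin/ec2-describe-instances rather than /usr/bin/..., and the cli53 path carries a stray leading $, which tcsh will attempt to expand as a variable. A sketch of exercising the stages by hand (the CNAME value is a placeholder):

        /usr/bin/ec2-describe-instances | head     # stage 1: any output at all?

        # final stage by hand, mirroring what xargs would pass:
        /etc/cron.route53/cli53/cli53.py rrcreate -x 60 mydomain.com \
            myhost CNAME ec2-1-2-3-4.compute-1.amazonaws.com --replace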

    Read the article
