Search Results

Search found 12017 results on 481 pages for 'root'.


  • Trying to install Mumble on CentOS

    - by ErocM
    I am trying to install Mumble on my CentOS VPS, following these directions: http://www.hosting.com/support/linux/install-the-mumble-voip-server-to-redhat-or-centos When I get to this step:

        rpm2cpio mumble-server-1.2.2-3mdv2011.0.x86_64.rpm > file.lzma
        lzma -d file.lzma

    I get this error:

        root@vps-1112788-12524 [/home/~~~~/mumble]# rpm2cpio mumble-1.2.4-0.20120422.1-mdv2012.0.x86_64.rpm > file.lzma
        root@vps-1112788-12524 [/home/~~~~/mumble]# lzma -d file.lzma
        lzma: file.lzma: File format not recognized

    I updated the name of the file, since the RPM the guide linked to was no longer available; I got the new RPM from the same place: http://www.rpmfind.net/linux/rpm2html/search.php?query=mumble+server&submit=Search+... I'm new to Linux, so I have no idea what I'm doing wrong. Can anyone help me out?
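
    A likely cause is that the newer RPM's payload is compressed with xz rather than lzma, so lzma does not recognize it. A minimal sketch of how to check, and what to try instead (assuming rpm, xz and cpio are available on the VPS):

        # See how the payload is actually compressed (often 'xz' for newer Mandriva RPMs)
        rpm -qp --qf '%{PAYLOADCOMPRESSOR}\n' mumble-1.2.4-0.20120422.1-mdv2012.0.x86_64.rpm

        # If it reports xz, decompress with xz instead of lzma
        rpm2cpio mumble-1.2.4-0.20120422.1-mdv2012.0.x86_64.rpm > file.xz
        xz -d file.xz
        cpio -idmv < file

        # If the local rpm2cpio already understands the payload, this works in one step:
        rpm2cpio mumble-1.2.4-0.20120422.1-mdv2012.0.x86_64.rpm | cpio -idmv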

    Read the article

  • Better Method of Opening TTY Permissions

    - by VxJasonxV
    At work, I have a few legacy servers that I log into as root and then su down to a user. I keep running into an issue where, after doing so, I am unable to run screen as that user. I don't want to open screen as root, because then I have to consciously su down to the user in every new shell, and I often forget. The question is: is there an easier resolution to this than the one I'm currently aware of? My current workaround is to find my terminal's pts number and chmod it to 666. I'm looking for something akin to X11's xhost ACL management, if such a thing exists for this situation.
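
    One well-known workaround, sketched below: screen fails here because the pty inherited from su still belongs to root, but the script utility allocates a fresh pseudo-terminal owned by the current user, so screen can open it without any chmod on the pts device.

        su - someuser          # someuser is a placeholder for the target account
        script /dev/null       # allocates a new pty owned by someuser
        screen                 # now starts normally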

    Read the article

  • The clear command does not work

    - by idea_
    I haven't been able to use the clear command in a while; I get the following error:

        root@server:~# clear
        The program 'clear' is currently not installed.  You can install it by typing:
        apt-get install ncurses-bin
        bash: clear: command not found
        root@server:~# apt-get install ncurses-bin
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        ncurses-bin is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    I'm using Ubuntu Server 9.10. I did some development with the ncurses library a while ago, so I've no doubt broken something.
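
    Since ncurses-bin is installed yet bash still reports "command not found", the binary itself is probably missing or shadowed rather than the package. A short diagnostic sketch:

        ls -l /usr/bin/clear          # ncurses-bin ships clear here; confirm it exists
        dpkg -L ncurses-bin | grep clear
        echo $PATH                    # make sure /usr/bin is on the search path
        hash -r                       # flush bash's cached command locations, then retry
        clear

    If /usr/bin/clear is gone, reinstalling should restore it: apt-get install --reinstall ncurses-bin.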

    Read the article

  • Permission problem - users can't access main index.php anymore

    - by JMan
    From /var/www, I executed "chmod -R 774 ." and now none of my .php scripts are accessible. From my browser, when I type in mydomain.com or mydomain.com/test2.php or mydomain.com/test.php, I get the 403 Forbidden error message. So I changed the permissions of three of the .php scripts to 775, but this didn't help either. Here is the output from "ls -la /var/www":

        drwxrwxr--  6 john wheel 4096 2010-09-29 17:38 .
        drwxr-xr-x 14 root root  4096 2010-09-27 21:15 ..
        -rwxrwxr-x  1 john wheel 3353 2010-09-29 05:29 index.php
        -rwxrwxr-x  1 john wheel  124 2010-09-27 23:12 .htaccess
        -rwxrwxr-x  1 john john    34 2010-09-29 17:39 test2.php
        -rwxrwxr-x  1 john john    26 2010-09-28 22:08 test.php

    The .htaccess file does a URL mod_rewrite, so typing in index.php is not needed. Thanks for your help.
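
    The detail that matters here is the directory itself: with 774, "other" has no permissions on /var/www, and the web server (which typically runs as its own user, e.g. www-data or apache, outside the wheel group) needs execute on every directory in the path to traverse it, regardless of the file modes. A sketch of the usual fix:

        # restore world-traverse on directories and world-read on files
        chmod o+x /var/www
        chmod -R o+rX /var/www   # capital X adds execute only to directories
                                 # (and files that are already executable)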

    Read the article

  • OS X mavericks latex issue

    - by Vineet Bafna
    I upgraded to Mavericks and found that pdflatex stopped working. Following some previous discussions, I recreated a link that Mavericks had broken:

        sudo ln -fs /Library/TeX/Distributions/.DefaultTeX/Contents/Programs/texbin texbin

    The error message changed to "Permission denied". I tried to change permissions, but it does not work. Please see below:

        /usr 65: sudo ln -fs /Library/TeX/Distributions/.DefaultTeX/Contents/Programs/texbin texbin
        /usr 66: ls -l texbin
        ls: texbin: Permission denied
        lrwx------  1 root  wheel  63 Aug 21 08:42 texbin
        /usr 67: chmod 755 texbin
        /usr 68: ls -l texbin
        ls: texbin: Permission denied
        lrwx------  1 root  wheel  63 Aug 21 08:42 texbin
        /usr 69:
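
    Unlike Linux, OS X enforces the permission bits on a symlink itself, and a plain chmod follows the link to its target instead of changing the link. A sketch of two things worth trying (chmod's -h flag, which operates on the link itself, exists on OS X):

        sudo chmod -h 755 /usr/texbin
        # or simply remove and recreate the link with sane permissions
        sudo rm /usr/texbin
        sudo ln -s /Library/TeX/Distributions/.DefaultTeX/Contents/Programs/texbin /usr/texbin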

    Read the article

  • How does Windows 7 DNS client work?

    - by Mark Allison
    I am using a local DHCP and DNS server on my home network, on a Linux machine running CentOS 6.3 with dnsmasq 2.48. It is all working fine except for local DNS lookups from the Windows machines. I have a mix of Ubuntu, CentOS and Windows machines on the network, some virtual, some physical. I have a machine called boron and the domain is called localdomain. If I ping boron from any Linux machine, I get:

        [root@lithium lists]# ping -c3 boron
        PING boron.localdomain (10.0.0.5) 56(84) bytes of data.
        64 bytes from boron.localdomain (10.0.0.5): icmp_seq=1 ttl=64 time=0.740 ms
        64 bytes from boron.localdomain (10.0.0.5): icmp_seq=2 ttl=64 time=0.478 ms
        64 bytes from boron.localdomain (10.0.0.5): icmp_seq=3 ttl=64 time=0.458 ms

        --- boron.localdomain ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2000ms
        rtt min/avg/max/mdev = 0.458/0.558/0.740/0.131 ms

    If I do it from my Windows 7 machine, I get:

        Ping request could not find host boron. Please check the name and try again.

    If I try ping boron.localdomain I get:

        Pinging boron.localdomain [67.215.65.132] with 32 bytes of data:
        Reply from 67.215.65.132: bytes=32 time=16ms TTL=57
        Reply from 67.215.65.132: bytes=32 time=188ms TTL=57
        Reply from 67.215.65.132: bytes=32 time=15ms TTL=57
        Reply from 67.215.65.132: bytes=32 time=14ms TTL=57

        Ping statistics for 67.215.65.132:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 14ms, Maximum = 188ms, Average = 58ms

    which is clearly wrong. Why is it going out to the internet? Why can't my Windows machine resolve the boron hostname to a FQDN? My Windows and Linux machines all get their network config from DHCP.

    UPDATE

    If I do ipconfig /all in Windows, it looks as I would expect:

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : lanthanum
           Primary Dns Suffix  . . . . . . . :
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : .localdomain

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . : .localdomain
           Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
           Physical Address. . . . . . . . . : 50-E5-49-38-FC-A2
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 10.0.0.57(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Lease Obtained. . . . . . . . . . : 23 August 2012 13:58:45
           Lease Expires . . . . . . . . . . : 24 August 2012 07:58:48
           Default Gateway . . . . . . . . . : 10.0.0.6
           DHCP Server . . . . . . . . . . . : 10.0.0.6
           DNS Servers . . . . . . . . . . . : 10.0.0.6
                                               208.67.222.222
                                               208.67.220.220
           NetBIOS over Tcpip. . . . . . . . : Enabled

    When I do an nslookup I get:

        Server:  carbon.localdomain
        Address:  10.0.0.6

        *** carbon.localdomain can't find boron: Unspecified error

    However, if I do ifconfig -a in Linux I get:

        [root@nitrogen ~]# ifconfig -a
        eth0      Link encap:Ethernet  HWaddr 00:0C:29:AF:EC:2A
                  inet addr:10.0.0.7  Bcast:10.0.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:187687 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:5857 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:23910700 (22.8 MiB)  TX bytes:712964 (696.2 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:329894 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:329894 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:67153143 (64.0 MiB)  TX bytes:67153143 (64.0 MiB)

    and nslookup:

        [root@nitrogen ~]# nslookup boron
        Server:         10.0.0.6
        Address:        10.0.0.6#53

        Name:   boron
        Address: 10.0.0.5

    Both machines are on the same network using the same DHCP server.

    UPDATE 2

    I thought the issue was resolved, but I am getting intermittent DNS resolution issues, and only on my Windows 7 machine; all my Linux boxes are fine. This is what happens when I ping and nslookup from Windows to a Windows 2008 Server:

        C:\Users\mark>nslookup magnesium
        Server:  carbon.localdomain
        Address:  10.0.0.6

        Name:    magnesium.localdomain
        Address:  10.0.0.12

        C:\Users\mark>ping magnesium

        Pinging magnesium.localdomain [67.215.65.132] with 32 bytes of data:
        Reply from 67.215.65.132: bytes=32 time=267ms TTL=57
        Reply from 67.215.65.132: bytes=32 time=162ms TTL=57
        Reply from 67.215.65.132: bytes=32 time=510ms TTL=57
        Reply from 67.215.65.132: bytes=32 time=146ms TTL=57

        Ping statistics for 67.215.65.132:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 146ms, Maximum = 510ms, Average = 271ms

    And from Linux:

        [root@beryllium ~]# ping -c4 magnesium
        PING magnesium.localdomain (10.0.0.12) 56(84) bytes of data.
        64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=1 ttl=128 time=0.176 ms
        64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=2 ttl=128 time=0.634 ms
        64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=3 ttl=128 time=0.685 ms
        64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=4 ttl=128 time=0.263 ms

        --- magnesium.localdomain ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3002ms
        rtt min/avg/max/mdev = 0.176/0.439/0.685/0.223 ms

        [root@beryllium ~]# nslookup magnesium
        Server:         10.0.0.6
        Address:        10.0.0.6#53

        Name:   magnesium.localdomain
        Address: 10.0.0.12

    UPDATE 3

    I stopped the Windows DNS Client service on my Windows 7 machine with net stop dnscache and it is now working fine. It would be nice to get DNS working with the DNS Client service on, but I might be OK without it. What do you think?
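
    One observation worth adding: 67.215.65.132 is an OpenDNS address, and the DHCP lease above hands out 208.67.222.222/208.67.220.220 (OpenDNS) alongside 10.0.0.6. Windows will happily fall back to those secondary servers, which then "resolve" unknown names to OpenDNS's redirect page. A hedged dnsmasq sketch that keeps localdomain lookups local and stops advertising the external resolvers to clients (standard dnsmasq options; adjust to your setup):

        # /etc/dnsmasq.conf on 10.0.0.6
        domain=localdomain                        # DHCP clients get this as their search domain
        expand-hosts                              # qualify bare hostnames from /etc/hosts
        local=/localdomain/                       # never forward localdomain queries upstream
        dhcp-option=option:dns-server,10.0.0.6    # advertise only the local DNS server
        server=208.67.222.222                     # let dnsmasq itself forward everything else
        server=208.67.220.220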

    Read the article

  • Find files containing a string on the whole filesystem

    - by Fabio
    I need to find all the instances of a given string in the whole filesystem, because I don't remember in which configuration files, scripts or other programs I put it, and I need to update that string with a new one. I tried the following command:

        grep -nr 'needle' / --exclude-dir=.svn | mail [email protected] -s 'References on xxx'

    If I run this command on a small directory, it gives me the output I need, in the form:

        /path1/:nn:line containing needle
        /path2/:nn:line containing needle

    where /path1 is the full path of the file, nn is the row containing the needle, and the last field is the content of the line. However, when I run the command on the root directory, the grep process hangs after a while. I ran this script about 8 hours ago, and even on a small filesystem (less than 5GB) it doesn't end; if I run top or ps, the process seems to be sleeping:

        root 24909 0.0 0.1 3772 1520 pts/1 S+ Feb10 0:15 grep -nr needle / --exclude-dir=.svn

    Why doesn't it end? Is there a better way to do this? (It's a one-time job; I don't need to execute it more than once.) Thanks.
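
    The usual reason a filesystem-wide grep never finishes is that it descends into /proc, /sys and /dev, where reading certain pseudo-files and device nodes blocks forever. A sketch of a safer invocation (GNU grep options; the brace expansion assumes bash):

        grep -rnI -D skip \
             --exclude-dir={proc,sys,dev,.svn} \
             'needle' / 2>/dev/null
        # -I skips binary files, -D skip skips device/FIFO/socket nodes,
        # --exclude-dir keeps grep out of the pseudo-filesystems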

    Read the article

  • Changing startup parameters for MySQL

    - by RN
    I need to remove skip-networking from the MySQL startup parameters. I am running MySQL on CentOS Linux on a VPS. Can someone please tell a newbie how to do this? I suppose that to start and stop the MySQL server, I have to do something like this:

        /etc/init.d/mysqld stop
        /etc/init.d/mysqld start

        ps -ef|grep 'mysql'
        root 11331 20220 0 10:53 pts/0 00:00:00 grep mysql
        root 32452 1 0 Apr02 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking
        mysql 32504 32452 0 Apr02 ? 00:00:18 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking
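
    Options that show up on mysqld_safe's command line usually come from the [mysqld] or [mysqld_safe] sections of /etc/my.cnf (or from whoever last started mysqld_safe by hand). A short sketch:

        # find where the options are set
        grep -n 'skip-networking\|skip-grant-tables' /etc/my.cnf
        # comment those lines out (prefix with #), then restart
        /etc/init.d/mysqld restart
        # verify: mysqld should now be listening on its TCP port
        netstat -lnt | grep 3306

    Note that skip-grant-tables is also on that command line; it disables all password checking, so you probably want it gone too.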

    Read the article

  • Use apt-get source on a debian repo without using /etc/apt/source.list

    - by Erwan Queffélec
    I'm trying to use apt-get source as a regular user on a Debian squeeze system. I want to retrieve the sources for cyrus-imapd-2.4 from the testing/wheezy repository. apt-get source works without root privileges; however, there seems to be no way to get apt-get to fetch anything from a repository that is not in /etc/apt/sources.list. Is there any command-line option, alternate sources.list file, or environment variable that will get apt to work with a custom repository? I do have root access, so I could change /etc/apt/sources.list, but I really do not want to do that, for a number of reasons.
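
    apt's file locations can be overridden per invocation with -o. A hedged sketch (the deb-src line and paths are examples, and apt may need further directories redirected; treat this as a starting point rather than a finished recipe):

        mkdir -p lists/partial
        echo 'deb-src http://ftp.debian.org/debian wheezy main' > wheezy-src.list

        apt-get -o Dir::Etc::SourceList=./wheezy-src.list \
                -o Dir::Etc::SourceParts=/dev/null \
                -o Dir::State::Lists=./lists \
                update
        apt-get -o Dir::Etc::SourceList=./wheezy-src.list \
                -o Dir::Etc::SourceParts=/dev/null \
                -o Dir::State::Lists=./lists \
                source cyrus-imapd-2.4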

    Read the article

  • Copy a bootable partition

    - by Dima
    I have a disk image with 3 partitions. The first partition, (hd0,0), is bootable, with the following GRUB 1 configuration file:

        default=0
        timeout=5
        title Bank A
            root (hd0,1)
            chainloader +1
        title Bank B
            root (hd0,2)
            chainloader +1

    The partitions (hd0,1) and (hd0,2) are also bootable. I'm trying to clone partition (hd0,1) to (hd0,2) by creating a device map with kpartx and copying the whole partition with dd. The problem is that after cloning, the cloned partition does not boot (but all its files are OK). What is wrong? I need both partitions to be identical; I'm using them for fail-over purposes on an embedded device.
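
    A couple of things a raw dd clone does not take care of, sketched below (device names assume kpartx mapped the image to loop0): the target must be at least as large as the source, and any configuration inside the cloned partition that names its own partition (an fstab entry, a grub root line) will still refer to the second partition rather than the third.

        kpartx -av disk.img
        blockdev --getsz /dev/mapper/loop0p2    # source size in 512-byte sectors
        blockdev --getsz /dev/mapper/loop0p3    # must be >= the source
        dd if=/dev/mapper/loop0p2 of=/dev/mapper/loop0p3 bs=4M conv=fsync
        # then mount loop0p3 and fix anything that still points at the old
        # partition number or UUID (e.g. /etc/fstab, an embedded grub.conf)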

    Read the article

  • rc.local is not executed at boot on Ubuntu

    - by Alexander
    I'm on Ubuntu 10.04. I want to execute a script at system boot, so I added it to rc.local. If I execute rc.local manually, it works fine. If I boot the system in recovery mode (the second entry in the boot menu), it also works fine. But if I boot normally, it is not executed. I added sleep 20 to my script, and there is indeed a pause at the end of the boot process, but nothing after it is executed. I don't think it depends on the contents of the script, but here it is anyway:

        #!/bin/sh -e
        sleep 20
        sudo service ssh start
        su -c 'service pgsql start' postgres
        sudo svnserve -d
        su -c 'hamachi start' root
        su -c 'hamachi login' root
        exit 0

    Thanks
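
    One detail worth checking: the script runs under sh -e, so the first command that returns non-zero silently aborts everything after it. rc.local already runs as root, so sudo is unnecessary, and service ssh start will fail with "Job is already running" if Upstart has already started ssh; under -e that kills the rest of the script. The sleep happening but nothing else is consistent with that. A sketch with logging added to confirm where it dies:

        #!/bin/sh
        exec >> /var/log/rc.local.log 2>&1   # capture all output for post-boot inspection
        set -x                               # trace each command
        sleep 20
        service ssh start || true            # tolerate "already running"
        su -c 'service pgsql start' postgres
        svnserve -d
        su -c 'hamachi start && hamachi login' root
        exit 0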

    Read the article

  • Starting xsp4 at boot on Ubuntu

    - by Chin Ye
    When I type the command in a terminal, it works fine:

        root@syscomp:/var/www/WebSite2# xsp4
        xsp4
        Listening on address: 0.0.0.0
        Root directory: /var/www/WebSite2
        Listening on port: 9000 (non-secure)
        Hit Return to stop the server.

    But I am starting it from a script in /etc/init/GPS_WebSite.conf, and while the script runs fine, it does not keep running in the background: it runs once and then exits, which is why my Mono server is not running all the time. This is my GPS_WebSite.conf script:

        start on login-session-start
        script
            exec > /tmp/debug-my-script.txt 2>&1
            sleep 10
            cd /var/www/WebSite2
            xsp4
        end script

    What do I need to change for it to keep running in the background forever?
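
    The likely culprit is the "Hit Return to stop the server" prompt: xsp4 exits as soon as its stdin closes, and an Upstart job has no terminal attached. xsp4's --nonstop option exists for exactly this case. A sketch of a revised job (the start/stop triggers and port are assumptions based on the output above):

        # /etc/init/GPS_WebSite.conf
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn                    # restart the server if it ever dies

        script
            cd /var/www/WebSite2
            exec xsp4 --nonstop --port 9000
        end script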

    Read the article

  • How to port-forward port 7300 from server A to server B

    - by Patrick van Hout
    Hi, we are using Stunnel, but we want to replace it with an iptables entry if possible: 192.168.123.122:7300 needs to be forwarded to 192.168.123.188:7300. So in iptables I set these two entries:

        [root@dev ~]# iptables -t nat -A PREROUTING -p tcp --dport 7300 -j DNAT --to-destination 192.168.123.188:7300
        [root@dev ~]# iptables -A FORWARD -m state -p tcp -d 192.168.123.188 --dport 7300 --state NEW,ESTABLISHED,RELATED -j ACCEPT

    But it isn't working. I did check that /proc/sys/net/ipv4/conf/eth0/forwarding has the value "1". Any tips or hints? Thanks, Patrick
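
    Two pieces that are commonly missing in this setup, sketched below: the global forwarding switch (net.ipv4.ip_forward, not just the per-interface flag), and a source NAT so that when the client and 192.168.123.188 sit on the same subnet, replies flow back through the box doing the DNAT instead of going straight to the client (which makes the connection appear dead).

        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -p tcp -d 192.168.123.122 --dport 7300 \
                 -j DNAT --to-destination 192.168.123.188:7300
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.123.188 --dport 7300 \
                 -j MASQUERADE    # needed when clients are on the same LAN as .188
        iptables -A FORWARD -p tcp -d 192.168.123.188 --dport 7300 \
                 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT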

    Read the article

  • Set up a symbolic link that users can access over FTP

    - by Dan Shields
    I have a folder on a server where a client of mine has a bunch of folders into which they upload images and other assets for a site; I symlink those folders into the root of the website. This way I can give them FTP access to upload whatever they need without giving them access to the root level of the website. I have another folder that I can't set up as a symbolic link to their folder, which contains images they need to upload to. I know that if I create the symbolic link the other way around, with the link inside their folder, they can't access it through FTP. There has to be a way, without creating two separate FTP accounts, to give a user the ability to upload to a directory outside of their home directory. I see that this is FTP-specific and that there are some settings that can be changed, but I haven't seen any clear-cut answers for the best way to handle this.
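
    The standard answer is a bind mount rather than a symlink: most FTP daemons refuse to follow symlinks that leave the user's (chrooted) tree, but a bind mount grafts the external directory into it at the filesystem level, so the FTP server sees an ordinary directory. A sketch with placeholder paths:

        mount --bind /var/www/site/images /home/client/ftp/images
        # to make it survive reboots, add to /etc/fstab:
        # /var/www/site/images  /home/client/ftp/images  none  bind  0  0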

    Read the article

  • Need help writing a Puppet module for sssd.conf using Hiera

    - by mr.zog
    I need to build a module to manage /etc/sssd/sssd.conf on our Red Hat VMs. The sssd modules published on the Forge don't seem to do what I want, nor do I feel like forking any of them. I want to keep all the configuration data in Hiera's common.yaml file. Below is my sssd.conf file:

        [sssd]
        config_file_version = 2
        services = nss, pam
        domains = default

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3
        entry_cache_timeout = 300
        entry_cache_nowait_percentage = 75

        [pam]

        [domain/default]
        auth_provider = ldap
        ldap_id_use_start_tls = True
        chpass_provider = ldap
        cache_credentials = True
        ldap_search_base = dc=ederp,dc=com
        id_provider = ldap
        ldap_uri = ldaps://lvldap1.lvs01.ederp.com/ ldaps://lvldap2.lvs01.ederp.com/
        ldap_tls_cacertdir = /etc/openldap/cacerts

    What is the best, most economical way to build the sssd.conf file? Should I have multiple .pp files, such as domain.pp, pam.pp, etc., or should all the lines of configuration land in init.pp?
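
    For a single flat file, splitting into domain.pp/pam.pp buys little; one class plus a template fed from Hiera is usually enough. A minimal sketch (the Hiera key name and template path are assumptions, and hiera() is the Puppet 3-era lookup function):

        # modules/sssd/manifests/init.pp
        class sssd {
          $settings = hiera('sssd::settings')   # a hash mirroring sssd.conf, kept in common.yaml

          file { '/etc/sssd/sssd.conf':
            ensure  => file,
            owner   => 'root',
            group   => 'root',
            mode    => '0600',                  # sssd refuses to start on looser modes
            content => template('sssd/sssd.conf.erb'),
            notify  => Service['sssd'],
          }

          service { 'sssd':
            ensure => running,
            enable => true,
          }
        }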

    Read the article

  • post-receive hook permission denied "unable to create file" error

    - by ThomasReggi
    I just got gitolite installed on my web server and am trying to get a post-receive hook that can point the git dir in Apache's direction. This is what my post-receive hook looks like (the script comes from the "Using Git to manage a web site" article):

        #!/bin/sh
        echo "post-receive example.com triggered"
        GIT_WORK_TREE=/srv/sites/example.com/public git checkout -f

    This is the error response I'm getting back from git push origin master on my local workstation (these are files from within my repository):

        remote: post-receive example.com triggered
        remote: error: unable to create file .htaccess (Permission denied)
        remote: error: unable to create file .tm_sync.config (Permission denied)
        remote: fatal: cannot create directory at 'application': Permission denied

    Permissions of public:

        drwxr-xr-x 5 root root 4096 Jun 26 17:23 public
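
    The hook runs as the user gitolite runs under (often a dedicated git user), and that user has no write access to a directory owned root:root with mode 755. A sketch of the usual fix, assuming the gitolite user is named git:

        # hand the work tree to the gitolite user...
        chown -R git /srv/sites/example.com/public
        # ...or keep root ownership but grant the git user's group write access
        chgrp -R git /srv/sites/example.com/public
        chmod -R g+w /srv/sites/example.com/public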

    Read the article

  • Traffic shaping for certain (local) users

    - by JMW
    Hello, I'm using Ubuntu 10.10. I have a local backup user called "backup", and I would like to limit this user to a bandwidth of 1Mbit, no matter which software it uses to connect to the network. This solution doesn't work:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        iptables -t mangle -A POSTROUTING -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        tc qdisc del dev eth0 root
        tc qdisc add dev eth0 root handle 2 htb default 1
        tc filter add dev eth0 parent 2: protocol ip pref 2 handle 50 fw classid 2:6
        tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
        tc qdisc show dev eth0
        tc class show dev eth0
        tc filter show dev eth0

    Does anyone know how to do it? Thanks a lot in advance.
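
    Two details in that configuration stand out: the fw filter matches firewall mark 50 (handle 50 fw) while iptables sets mark 12, so no packets ever land in class 2:6; and the default class 2:1 is never defined, so unmarked traffic goes unshaped by accident rather than by design. A corrected sketch:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12

        tc qdisc del dev eth0 root 2>/dev/null
        tc qdisc add dev eth0 root handle 2: htb default 1
        tc class add dev eth0 parent 2: classid 2:1 htb rate 1000Mbit   # default class for everyone else
        tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
        tc filter add dev eth0 parent 2: protocol ip pref 2 handle 12 fw classid 2:6

    Note that the owner match already catches locally generated packets in the OUTPUT chain, so the POSTROUTING rule is redundant here.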

    Read the article

  • Linux WD30EZRX WD Green HDD & Blacx Duet 5G USB dock

    - by Adam
    I have connected a WD30EZRX WD Green HDD to a Thermaltake Blacx Duet 5G USB dock on Ubuntu 12.04. Everything seems fine, except that after the HDD idles for a while I get the error:

        ls: reading directory .: Input/output error

    and it is only fixed by unmounting and remounting the drive as root. I have the following line in /etc/fstab:

        UUID=AAF670E9F670B6E3 /media/3TB ntfs defaults,user,auto 0 0

    I have noticed that it alternates between the /dev/sdc2 and /dev/sdd2 devices on remount. I did copy 1TB last night without issue in one sitting, but after some minutes of idle the remount issue returns. Any tips or suggestions on how to proceed would be appreciated; I spent most of the night googling and all it's done is made me sad.

    Edit (tried as suggested):

        root@mediaserver:/media/3TB# sudo hdparm -B 255 -S 253 /dev/sdd2

        /dev/sdd2:
         setting Advanced Power Management level to disabled
         HDIO_DRIVE_CMD failed: Input/output error
         setting standby to 253 (vendor-specific)
         APM_level      = not supported

    So that didn't help with this particular drive.
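
    The HDIO_DRIVE_CMD failure is a hint that the USB bridge in the dock doesn't pass hdparm's ATA commands through, so the drive's power settings never changed. Two things worth trying, sketched below: disabling the kernel's USB autosuspend (a frequent cause of docks dropping off after idle), and idle3ctl from the idle3-tools package, which targets the WD Green's vendor-specific head-parking timer directly (though it, too, may need the drive on a direct SATA port if the bridge blocks vendor commands):

        # stop the kernel from autosuspending USB devices (runtime setting)
        echo -1 | sudo tee /sys/module/usbcore/parameters/autosuspend

        # read and disable the WD idle3 timer (whole device, not the partition)
        sudo idle3ctl -g /dev/sdd
        sudo idle3ctl -d /dev/sdd    # power-cycle the drive afterwards to take effect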

    Read the article

  • Kernel panic error

    - by cioby23
    We have a dedicated server with software RAID 1, and one of the disks failed recently. The disk was replaced, but after rebuilding the array and rebooting, the server freezes with a kernel panic:

        No filesystem could mount root, tried: reiserfs ext3 ext2 cramfs msdos vfat iso9660 romfs fuseblk xfs
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,1)

    The filesystem on both disks is ext4. It seems the kernel can't load ext4 support. Is there any way to add ext4 support, or do I need to compile a new kernel? The interesting point is that before the disk replacement all was fine. The kernel is a stock kernel, bzImage-2.6.34.6-xxxx-grs-ipv6-64, from our provider OVH. Kind regards,
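
    Two things are visible in that panic: ext4 is indeed absent from the list of filesystems the kernel tried, and unknown-block(9,1) is the md1 RAID device, so the array itself was found. OVH's stock kernels boot without an initrd, which means the root filesystem driver must be compiled in; if the rebuild somehow switched which kernel image boots, that would explain it working before. A sketch of what to check from the rescue system (paths assume the old system is mounted under /mnt):

        # confirm which kernel image grub actually boots
        grep -A2 'title' /mnt/boot/grub/menu.lst
        # if a config file was shipped alongside the kernel, check for built-in ext4
        grep CONFIG_EXT4_FS /mnt/boot/config-2.6.34.6-xxxx-grs-ipv6-64   # want =y, not =m

    If ext4 is missing, either install an OVH kernel build that includes it or boot a distro kernel with an initrd that carries the ext4 module.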

    Read the article

  • Why no multiple instances of Firefox on Linux as on Windows?

    - by Jack
    On Windows, if I run Firefox as user jack and then try to start another instance of Firefox, I will be unable to, as one is already running. If I choose to run Firefox as administrator, then I can have two instances of Firefox side by side, separate from each other, because they run under different user accounts. This does not seem to be true on Linux. As user jack, if I start Firefox, then like on Windows I am unable to start a new instance. If I open a terminal, change to root, set XAUTHORITY to jack's .Xauthority, and try to start Firefox as root, I get the error that Firefox is already running. Why is this? Please don't spare any technical details in your answers. Thank you.
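
    The difference is that Firefox's "already running" check is not per Unix user: on X11, a starting Firefox looks for an existing instance advertised on the X display (and root here is attached to jack's display via jack's .Xauthority) and hands the request over to it. The -no-remote switch skips that check, and a separate profile avoids the profile lock. A sketch:

        # as root, on jack's display
        firefox -no-remote -ProfileManager    # create a separate profile for root first
        firefox -no-remote -P root-profile    # then launch it (root-profile is a placeholder name)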

    Read the article

  • Added user to CentOS, Updated sshd_config with AllowUsers, Login denied

    - by Gregg
    CentOS 5.3. I can SSH into the system as root just fine. I added a user and set their password. They have shell access (/bin/bash). I can su to the account from root just fine. I updated /etc/ssh/sshd_config with:

        AllowUsers myNewUser

    and restarted sshd:

        /etc/init.d/sshd restart

    When trying to ssh into the server with the new user, I get a permission denied. And yes, I've double- and triple-checked that I am using the correct password. Any help is appreciated.
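
    Two quick things worth checking, sketched below: sshd logs the actual denial reason to /var/log/secure on CentOS, and AllowUsers is an exhaustive whitelist, so once it exists, every account that should log in (root included) must be listed or it will be locked out:

        # on the server, watch the log while retrying the login
        tail -f /var/log/secure

        # in /etc/ssh/sshd_config, list every permitted account:
        AllowUsers root myNewUser

    Also confirm PasswordAuthentication yes is set if the new user logs in by password rather than by key.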

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and set my public_html to /home/user/public_html/website.com/public, but it always redirects to /usr/local/nginx/html/. How can I change this? nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log /home/user/public_html/website.com/log/error.log;

            location / {
                root /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is:

        Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'

    The server tries to find the file in the Nginx folder and not in my public_html.
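
    One thing that stands out: nginx.conf only includes sites-enabled/*, but the website.com vhost is shown living in sites-available, so unless it is symlinked into sites-enabled, every request falls through to the default server (whose root is the stock html directory, matching the error path). A sketch:

        ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
        /usr/local/nginx/sbin/nginx -t         # check the configuration parses
        /usr/local/nginx/sbin/nginx -s reload

    It is also worth moving the root directive up to the server level (rather than inside location /) so that error pages and the PHP location inherit the same document root.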

    Read the article

  • Connecting FreeNAS 8 to Mac OS X Lion LDAP Server

    - by Absolution
    I currently have Mac OS X Lion Server running on a Mac mini and want to use it purely as an LDAP server for authenticating FreeNAS 8. I have FreeNAS set up and running in a VM, with all features working correctly and as expected; however, I cannot connect to my LDAP server (the Mac mini). The error message is:

        nss_ldap: could not search LDAP server - server is unavailable

    For the LDAP service settings in FreeNAS, I know my Hostname and Base DN are correct (exact copies of what I set originally, and what is shown in the server's Open Directory overview), but I am unsure what to enter for the Root bind DN, password, and suffixes. I have researched where to find these, and other than following the FreeNAS examples there appears to be a way to view them in Workgroup Manager specific to my settings; however, that function is unavailable to me and cannot be ticked for viewing, for some strange reason. Some forums explain that the Root bind DN should be uid=admin,dc=... and others cn=admin,dc=...; I'm rather confused and would appreciate your help or advice with this.
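
    Two hedged pointers: an LDAP server will usually reveal its base DN to an anonymous query, and on OS X Server the directory administrator account created during Open Directory setup is diradmin by default, living under cn=users. A sketch (hostname is a placeholder):

        # ask the server for its naming context (base DN); no bind required
        ldapsearch -x -H ldap://macmini.example.com -s base -b '' namingContexts

        # typical Lion Server bind DN, assuming the default directory admin name:
        #   uid=diradmin,cn=users,dc=macmini,dc=example,dc=com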

    Read the article

  • How to securely store and update backup on remote server via ssh/rsync

    - by Sergey P. aka azure
    I have about 200 GB of pictures (say about 1 MB/file, 200k files) on my desktop, and I have access (including root access) to a remote Linux server. I want to keep an updateable backup of my pictures on the remote server, and rsync seems to be the right tool for this kind of job. But other people also have access (including root access) to this server, and I want to keep my pictures private. So the question is: what is the best way to keep private files securely on a remote "shared" Linux server?
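
    Since other people have root on the destination, file permissions cannot protect the data; it has to be encrypted before it leaves the desktop. One approach that still preserves rsync's incremental, per-file updates is EncFS's reverse mode, which presents an encrypted view of a plaintext directory. A sketch:

        # present ~/Pictures as an encrypted filesystem (nothing is duplicated on disk)
        encfs --reverse ~/Pictures /tmp/pictures-crypt

        # sync the ciphertext; changed files still transfer individually
        rsync -az --delete /tmp/pictures-crypt/ user@server:backup/pictures/

        fusermount -u /tmp/pictures-crypt

    The trade-off of whole-archive alternatives (e.g. a gpg-encrypted tarball) is that any change forces re-uploading the entire archive.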

    Read the article

  • postfix/postdrop Issue with Solaris 10 (sparc) - permissions

    - by Zayne
    I am trying to get Postfix (installed from Blastwave) working on a Solaris 10 server, but only root is allowed to send mail. The problem appears to be permission-related, in postdrop:

        postdrop: warning: mail_queue_enter: create file maildrop/905318.27416: Permission denied

    I've checked that /var/opt/csw/spool/postfix/maildrop and /var/opt/csw/spool/postfix/public are both in the 'postdrop' group. main.cf contains setgid_group = postdrop. Running ppriv on postdrop as a non-root user reports:

        postdrop[27336]: missing privilege "file_dac_write" (euid = 103, syscall = 5) needed at ufs_iaccess+0x110

    I'm at a loss as to what to do next. I don't have much experience with Solaris; I use Linux daily. Any suggestions?
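
    The euid in that ppriv output suggests postdrop is running without the setgid bit that normally lets it write to the group-owned maildrop directory. A sketch of what to verify (the binary path is an assumption for a Blastwave install; adjust as needed):

        ls -l /opt/csw/sbin/postdrop
        # want: -r-xr-sr-x ... root postdrop  (note the 's' in the group execute slot)
        chgrp postdrop /opt/csw/sbin/postdrop
        chmod 2755 /opt/csw/sbin/postdrop

        ls -ld /var/opt/csw/spool/postfix/maildrop   # want mode 1730, group postdrop
        chmod 1730 /var/opt/csw/spool/postfix/maildrop

        # Postfix can also repair all of this itself:
        postfix set-permissions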

    Read the article
