Search Results

Search found 32568 results on 1303 pages for 'linux pwns mac'.


  • OSSEC agent behind NAT

    - by Eric
    I am working on an OSSEC deployment where I will have multiple agents behind one public IP. An example of the setup:
    Private network: OSSEC-Agent1 (192.168.1.10), OSSEC-Agent2 (192.168.50.33), OSSEC-Agent3 (10.10.10.1)
    Those IPs NAT to one public IP (1.1.1.1), and 1.1.1.1 talks to the public OSSEC server on 2.2.2.2.
    I've read some OSSEC documentation talking about NAT here, but it doesn't tell me exactly what I need to know: its example uses an entire /24 subnet, while mine will mainly have multiple agents behind a single public IP. With the setup so far, Agent1 came online fine and is communicating with the OSSEC server. However, Agent2 keeps failing to connect to 2.2.2.2, even though when I added its key I had the correct name for it, so I know it talked to the server at least once for that information. I'm assuming the server is just getting confused by multiple keys behind one public IP. I basically want to know whether this is possible and/or whether I'm just overlooking something simple. Any help would be greatly appreciated.
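
    A hedged note for readers hitting the same thing: OSSEC matches an agent against the source IP recorded with its key, so several keys tied to the same public IP can collide. One workaround (a sketch only; the IDs, names and key values below are placeholders) is to register the agents with "any" as their address, so traffic arriving from 1.1.1.1 can match any of them:

      # /var/ossec/etc/client.keys on the OSSEC server (placeholder key values)
      001 OSSEC-Agent1 any 8aefb29c...
      002 OSSEC-Agent2 any 53c7d91e...
      003 OSSEC-Agent3 any f01ab772...

    After changing the entries, restart the OSSEC server and re-import the matching keys on each agent.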

    Read the article

  • Free space on Dedi' in CentOS

    - by Trance84
    It may sound stupid, but I need to figure out how much disk space I have on my dedicated server. It runs CentOS 6; the last command I issued was this:
    [root@ks34900 ~]# df -h
    Filesystem   Size   Used   Avail  Use%  Mounted on
    rootfs       9.7G   6.4G   2.9G    69%  /
    /dev/root    9.7G   6.4G   2.9G    69%  /
    none         1000M  288K   1000M    1%  /dev
    /dev/sda2    914G   200M   868G     1%  /home
    But again, stupid as it may sound, I can't figure out how much space I have in the "/" (root) filesystem. And is it possible that "/usr" has separate space (its own partition)?
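
    For readers with the same question, the df output above already contains the answer; a hedged way to query a single mount point directly:

      df -h /        # the root filesystem: 9.7G total, 2.9G still available here
      df -h /usr     # if /usr had its own partition it would show a different device;
                     # it is not listed separately above, so it shares the 9.7G of /
      df -h /home    # /home is separate: /dev/sda2, 914G with 868G available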

    Read the article

  • List all symbolic links on a directory

    - by Mathias
    Hey, a short question: is it possible to list all symbolic links pointing to a directory, other than by running a find over the whole filesystem? Background: I have a directory containing a lot of different versions of a library, and I'd like to do some cleanup work and delete the versions that aren't used in any project. Thanks, Mathias
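
    There is no reverse index for symlinks, so some searching is unavoidable; a hedged sketch that at least limits the search to where the projects live (the paths are hypothetical):

      # list links under /home/projects whose stored target begins with the library dir
      find /home/projects -type l -lname '/opt/libs/*' -printf '%p -> %l\n'
      # note: -lname matches the literal stored target, so links created with
      # relative targets need a relative pattern instead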

    Read the article

  • Increasing file descriptor limit on Debian does not work! Help!

    - by Aco
    I am running Debian 6 and I am trying to increase the file descriptor limit but it does not want to work. This is what I have done: I edited /etc/sysctl.conf by adding fs.file-max = 64000 at the end and applied the changes using sysctl -p. I then edited /etc/security/limits.conf and added the following lines: * soft nofile 64000 and * hard nofile 64000. Now when I execute ulimit -Hn and ulimit -Sn I still see 1024. I rebooted the server and I still get the same result. What have I failed to do?
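
    A hedged checklist for this symptom: fs.file-max is only the system-wide ceiling, while the per-process limit from limits.conf is applied by PAM at login, so it only shows up in a new session and only if pam_limits is enabled:

      grep pam_limits /etc/pam.d/common-session /etc/pam.d/common-session-noninteractive
      # if nothing is found, add this line to those files:
      #   session required pam_limits.so
      # then log out, log back in, and re-check:
      ulimit -Hn; ulimit -Sn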

    Read the article

  • /var/log/secure user activity; also, httpd cannot start without two users

    - by user52869
    Hello, I found some strange information in the /var/log/secure file:
    Feb 10 02:02:04 server2364 usermod[30750]: unlock user `username1' password
    Feb 10 02:02:04 server2364 usermod[30811]: lock user `username2' password
    Feb 10 02:05:16 server2364 usermod[30992]: unlock user `username2' password
    Feb 10 02:05:18 server2364 usermod[31114]: unlock user `username1' password
    username1 and username2 are two accounts on the system that have no ability to log in. Entries like these show up in /var/log/secure every night at 02:02. One more thing: the files /etc/shadow and /etc/shadow have timestamps of 02:05. What could be the cause of this? Also, if I remove those two accounts (username1 and username2), I cannot start the web server. Can you help me with some ideas? Am I hacked?
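
    The fixed 02:02/02:05 pattern looks more like a scheduled job than an interactive attacker; a hedged place to start looking for whatever runs usermod at that time:

      sudo grep -r usermod /etc/cron* /var/spool/cron 2>/dev/null
      sudo grep '02:0' /var/log/cron | grep CMD      # assumes cron logs to /var/log/cron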

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    I'm in need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. Something like the well-known www.heise.de/download/h2testw, or at least something commonly found in repositories. (h2testw writes a specific data string over and over onto the medium, then reads it back to verify it was written correctly, and reports write/read time and speed.) Please no dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k, since it won't verify that everything was written correctly; it only tests whether reading from and writing to the device succeed at all. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow, I don't know exactly what it writes, and I don't know whether it accounts for wear-leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
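
    A minimal h2testw-style sketch, under stated assumptions: the mount point is passed as the first argument, the number of 1 GiB test files as the second, and wear-leveling on flash media is not accounted for:

      #!/bin/bash
      # write pseudo-random files, record checksums, then re-read and verify them
      set -e
      target=$1; count=$2
      for i in $(seq 1 "$count"); do
          dd if=/dev/urandom of="$target/testfile.$i" bs=1M count=1024 conv=fsync
          sha1sum "$target/testfile.$i" >> "$target/checksums.sha1"
      done
      sync
      sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # force real reads from the medium
      sha1sum -c "$target/checksums.sha1" && echo "verification OK"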

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2 GB of "cached" memory.
    free:
                 total       used       free     shared    buffers     cached
    Mem:          3945       3893         51          0         28       1216
    -/+ buffers/cache:       2648       1296
    Swap:         3895        857       3038
    I've read that cached memory is essentially free memory the kernel puts to use and releases again when it is needed. Is the cached value an indicator that I need more RAM?
    cat /proc/meminfo, one day after flushing the cache:
    MemTotal:        4040048 kB
    MemFree:           32844 kB
    Buffers:           18956 kB
    Cached:          1249092 kB
    SwapCached:       161576 kB
    Active:          3611328 kB
    Inactive:         189104 kB
    SwapTotal:       3989496 kB
    SwapFree:        2894200 kB
    Dirty:             20520 kB
    Writeback:             0 kB
    AnonPages:       2523496 kB
    Mapped:           217744 kB
    Slab:              70940 kB
    SReclaimable:      36756 kB
    SUnreclaim:        34184 kB
    PageTables:        99648 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    CommitLimit:     6009520 kB
    Committed_AS:    6401716 kB
    VmallocTotal:  34359738367 kB
    VmallocUsed:       18852 kB
    VmallocChunk:  34359719439 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    top:
    top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48
    Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 4040048k total, 3953108k used, 86940k free, 16348k buffers
    Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
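
    The cached figure by itself is not a sign that RAM is short, since the page cache shrinks on demand; a hedged way to read the numbers on this version of free (newer procps prints an "available" column instead):

      free -m | awk '/buffers\/cache/ {print $4 " MB effectively available"}'
      free -m | awk '/^Swap/ {print $3 " MB currently swapped out"}'   # sustained swap use is the better warning sign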

    Read the article

  • Amazon S3 tools for Debian?

    - by Jonik
    I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that? (I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
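
    A hedged sketch using s3cmd, which is packaged in Debian (if it is missing from your release it can be installed from upstream); the bucket name and paths are placeholders:

      apt-get install s3cmd
      s3cmd --configure                                  # stores the AWS access and secret keys
      s3cmd put target/app.ear s3://my-deploy-bucket/releases/app.ear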

    Read the article

  • Elinks and flash

    - by bajki
    Hello everybody, is there a possibility to "use" Flash-based objects with elinks? I mean, I have an online Flash-based multiplayer game (http://haxball.appspot.com) and I want to connect to the game with elinks, which is installed on my shell server, to create an always-present game room. To do that, I need a terminal-based web browser with Flash support. elinks is installed there, so it would be great if such a possibility existed in it. Any ideas? Thanks, Mike

    Read the article

  • Moving from Ubuntu desktop to Ubuntu Server via SSH

    - by Daniel Elessedil Kjeserud
    So a little while ago I installed regular Ubuntu for a home server, but that gave me a lot of extra packages. What I should have done was install Ubuntu Server, since I don't even own a screen to connect to it. Does anybody know of a way to convert my Ubuntu machine into an Ubuntu Server machine in one big swoop? It has to be done over SSH, since, as I said, I don't have a screen to connect to it. It's currently running 9.10, about to be upgraded to 10.04.
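
    A hedged sketch of the usual approach over SSH; ubuntu-desktop is only a metapackage, so removing it does not remove the GUI packages themselves, and the package names below are assumptions to be checked against the release in use:

      sudo apt-get remove --purge ubuntu-desktop
      sudo apt-get install linux-image-server openssh-server   # server kernel flavour, if available
      sudo apt-get autoremove --purge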

    Read the article

  • NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v4: cannot bind: Address already in use

    - by Francesco
    I have an issue with sendmail on my server (Ubuntu 12.10) with PHP, MySQL and WordPress installed. Basically I want to create a contact form on my blog so that visitors' emails land directly in my Gmail account, but it doesn't work! I created a PHP file called testmail.php so I can call it from the browser:
    <?php
    $to = '[email protected]';
    $subbject = 'TEST MAIL';
    $msg = 'test test test test test test test test test test test test test test test';
    $isMailed = mail($to, $subbject, $msg, 'From:me <[email protected]>');
    if($isMailed) echo 'mail has been send to: ' . $to;
    else echo 'mail has NOT been send..';
    ?>
    But I don't receive anything! /var/log/mail.log says:
    NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v4: cannot bind: Address already in use
    What am I doing wrong? Where do I need to check? What more info do you need? I also checked the spam folder; nothing. Thank you!
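
    A hedged first check: that bind error usually means another MTA already owns the SMTP port (assumed to be 25 here, which is what sendmail's MTA-v4 daemon listens on by default):

      sudo netstat -tlnp | grep ':25 '
      # if postfix or exim4 shows up, stop it before restarting sendmail, e.g.
      sudo service postfix stop        # hypothetical: only if postfix appeared above
      sudo service sendmail restart
      tail -f /var/log/mail.log        # re-run testmail.php and watch for a queue entry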

    Read the article

  • Socat and rich terminals (with Ctrl+C/Ctrl+Z/Ctrl+D propagation)

    - by Vi
    Running
    socat - exec:'bash -li',pty,stderr,ctty
    gives
    bash: no job control in this shell
    What options should I use to get a fully fledged shell, like the one I get with ssh/sshd? I want to be able to connect the shell to everything socat can handle (SOCKS 5, UDP, OpenSSL), but also to have a nice shell which correctly interprets all keys, the various Ctrl+C/Ctrl+Z, tab completion, and up/down keys (with remote history). Update: found the "setsid" socat option. It fixes "no job control". Now trying to fix Ctrl+D. Update 2: socat file:`tty`,raw,echo=0 exec:'bash -li',pty,stderr,setsid,sigint,sane. Now it handles Ctrl+D/Ctrl+Z/Ctrl+C well, I can start Vim inside it, and remote history is OK.

    Read the article

  • F3-F5 keys incorrectly behaving as audio keys

    - by obvio171
    I don't know if this is a configuration issue or a hardware issue, but I have a Kinesis Advantage USB keyboard and for some reason the F3-F5 keys aren't responding as they used to. They don't respond to anything and, when I tried using F5 on Emacs, it said <XF86AudioNext> is undefined, so I guess it's a weird mapping problem. Any idea how I could remap them to the original meaning?
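
    A hedged per-session workaround while the root cause is unclear: read the keycode the key actually sends, then map it back to the F key you expect (the keycode 71 below is only an illustration):

      xev | grep -A2 --line-buffered KeyPress   # press the key and note the keycode it reports
      xmodmap -e 'keycode 71 = F5'              # replace 71 with the keycode xev printed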

    Read the article

  • How do I remove Xen kernel and put normal kernel on RHEL 5

    - by yan bellavance
    I have 3 identical machines (hardware-wise) that all have RHEL 5.3 installed. Two of those machines have the Xen kernel and one doesn't. I cannot install the NVIDIA drivers on the ones that have the Xen kernel, so I was wondering how I ended up with it and how to replace the Xen kernels with normal ones. Could this have happened at install time, for example when I was asked which components to install (development, virtualization, web server)?
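
    A hedged sketch for RHEL 5, using the package names Red Hat ships:

      yum install kernel              # the regular, non-Xen kernel
      # reboot, pick the non-xen entry in GRUB, and confirm with: uname -r
      yum remove kernel-xen           # only once the regular kernel boots cleanly
      # then rebuild/reinstall the NVIDIA driver against the running kernel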

    Read the article

  • Why doesn't my symbolic link work?

    - by orokusaki
    I'm trying to better understand symbolic links... and not having very much luck. This is my actual shell output, with username/host changed:
    username@host:~$ mkdir actual
    username@host:~$ mkdir proper
    username@host:~$ touch actual/file-1.txt
    username@host:~$ echo "file 1" > actual/file-1.txt
    username@host:~$ touch actual/file-2.txt
    username@host:~$ echo "file 2" > actual/file-2.txt
    username@host:~$ ln -s actual/file-1.txt actual/file-2.txt proper
    username@host:~$ # Now, try to use the files through their links
    username@host:~$ cat proper/file-1.txt
    cat: proper/file-1.txt: No such file or directory
    username@host:~$ cat proper/file-2.txt
    cat: proper/file-2.txt: No such file or directory
    username@host:~$ # Check that the actual files do in fact exist
    username@host:~$ cat actual/file-1.txt
    file 1
    username@host:~$ cat actual/file-2.txt
    file 2
    username@host:~$ # Remove the links and go home :(
    username@host:~$ rm proper/file-1.txt
    username@host:~$ rm proper/file-2.txt
    I thought a symbolic link was supposed to operate transparently, in the sense that you could operate on the file it points to as if you were accessing it directly (except, of course, for rm, where the link itself is simply removed).
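
    What trips this up is that the target string is stored verbatim and resolved relative to the directory containing the link, so "actual/file-1.txt" inside proper/ is looked up as proper/actual/file-1.txt. A hedged fix is to use paths that are valid from proper/:

      ln -s ../actual/file-1.txt proper/file-1.txt
      ln -s "$(pwd)/actual/file-2.txt" proper/file-2.txt   # or an absolute target
      cat proper/file-1.txt                                # now prints: file 1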

    Read the article

  • Virtualmin Webmin does not respond

    - by Miranda
    I have installed Virtualmin on a CentOS remote server, but it does not seem to work (https://115.146.95.118:10000/); at least the Webmin page does not work. I have opened these ports:
    http ALLOW 80:80 from 0.0.0.0/0 ALLOW 443:443 from 0.0.0.0/0
    ssh ALLOW 22:22 from 0.0.0.0/0
    virtualmin ALLOW 20000:20000 from 0.0.0.0/0 ALLOW 10000:10009 from 0.0.0.0/0
    And restarting Webmin does not solve it:
    /etc/rc.d/init.d/webmin restart
    Stopping Webmin server in /usr/libexec/webmin
    Starting Webmin server in /usr/libexec/webmin
    I have also tried Amazon EC2 this time and still couldn't get it to work: http://ec2-67-202-21-21.compute-1.amazonaws.com:10000/
    [ec2-user@ip-10-118-239-13 ~]$ netstat -an | grep :10000
    tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN
    udp 0 0 0.0.0.0:10000 0.0.0.0:*
    [ec2-user@ip-10-118-239-13 ~]$ sudo iptables -L -n
    Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:20 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:21 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:20000 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:10000 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:993 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:143 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:995 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:20 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:587 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
    Chain FORWARD (policy ACCEPT) target prot opt source destination
    Chain OUTPUT (policy ACCEPT) target prot opt source destination
    Since I need more than 10 reputation to post an image, you can find the screenshots of the security group settings at the Webmin Support Forum. I have tried:
    sudo iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
    It did not change anything.
    [ec2-user@ip-10-118-239-13 ~]$ sudo yum install openssl perl-Net-SSLeay perl-Crypt-SSLeay
    Loaded plugins: fastestmirror, priorities, security, update-motd Loading mirror speeds from cached hostfile * amzn-main: packages.us-east-1.amazonaws.com * amzn-updates: packages.us-east-1.amazonaws.com amzn-main | 2.1 kB 00:00 amzn-updates | 2.3 kB 00:00 Setting up Install Process Package openssl-1.0.0j-1.43.amzn1.i686 already installed and latest version Package perl-Net-SSLeay-1.35-9.4.amzn1.i686 already installed and latest version Package perl-Crypt-SSLeay-0.57-16.4.amzn1.i686 already installed and latest version Nothing to do
    [ec2-user@ip-10-118-239-13 ~]$ nano /etc/webmin/miniserv.conf
    GNU nano 2.0.9 File: /etc/webmin/miniserv.conf
    port=10000 root=/usr/libexec/webmin mimetypes=/usr/libexec/webmin/mime.types addtype_cgi=internal/cgi realm=Webmin Server logfile=/var/webmin/miniserv.log errorlog=/var/webmin/miniserv.error pidfile=/var/webmin/miniserv.pid logtime=168 ppath= ssl=1 env_WEBMIN_CONFIG=/etc/webmin env_WEBMIN_VAR=/var/webmin atboot=1 logout=/etc/webmin/logout-flag listen=10000 denyfile=\.pl$ log=1 blockhost_failures=5 blockhost_time=60 syslog=1 session=1 server=MiniServ/1.585 userfile=/etc/webmin/miniserv.users keyfile=/etc/webmin/miniserv.pem passwd_file=/etc/shadow passwd_uindex=0 passwd_pindex=1 passwd_cindex=2 passwd_mindex=4 passwd_mode=0 preroot=virtual-server-theme passdelay=1 sessiononly=/virtual-server/remote.cgi preload= mobile_preroot=virtual-server-mobile mobile_prefixes=m. mobile. anonymous=/virtualmin-mailman/unauthenticated=anonymous ssl_cipher_list=ECDHE-RSA-AES256-SHA384:AES256-SHA256:AES256-SHA256:RC4:HIGH:MEDIUM:+TLSv1:!MD5:!SSLv2:+SSLv3:!ADH:!aNULL:!eNULL:!NULL:!DH:!ADH:!EDH:!AESGCM
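
    A hedged way to separate a firewall problem from a Webmin problem, given that miniserv.conf shows ssl=1 on port 10000: test from the server itself first, then test plain reachability of the port from outside:

      curl -vk https://localhost:10000/      # run on the server; should return the Webmin login page
      nc -vz 115.146.95.118 10000            # run from your own machine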

    Read the article

  • "TCP Sweep" - What is it? How am I causing it?

    - by Stephen Melrose
    Hi there, I've just had an email from my hosting company telling me I'm in violation of their Acceptable Use Policy. They forwarded me an email from another company complaining about something to do with a "TCP sweep of port 22". They included a snippet from their logs, 20:29:43 <MY_SERVER_IP> 0.0.0.0 [TCP-SWEEP] (total=325,dp=22,min=212.1.191.0,max=212.1.191.255,Mar21-20:26:34,Mar21-20:26:34) (USI-amsxaid01) Now, my server knowledge is limited at best, and I've absolutely no idea what this is or what could be causing it. Any help would be greatly appreciated! Thank you
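
    A hedged starting point for finding what on the box is scanning port 22:

      sudo netstat -ntp | grep ':22 '            # outbound SSH connections and the owning PIDs
      sudo ls /var/spool/cron/ /etc/cron.*/      # unexpected scheduled jobs
      ps auxww --sort=-%cpu | head -20           # unfamiliar processes
      last -20                                   # recent logins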

    Read the article

  • One server, Two APC UPS on redundant power supplies : How to trigger shutdown ?

    - by Falken
    I have a server racked, with its redundant power supplies plugged into two APC Smart-UPS 3000 XLM units. Each UPS is connected to a different mains power source. Two instances of apcupsd are running, each one connected to its own UPS. They can both detect when a UPS is on battery, and each UPS can then trigger a shutdown of the server. The question is: how do I avoid shutting down when ONLY ONE UPS runs out of battery? Note: the Smart-UPS 3000 XLM has a "Power Sync" function that can connect to its peer and detect its status, but when I pulled the plug on one of them, the shutdown order was sent anyway. I'm thinking about modifying the shutdown scripts to check with "apcaccess" whether the other UPS is down. Any experience with this would be appreciated!
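
    A hedged sketch of that idea for the apcupsd doshutdown hook: ask the other apcupsd instance whether its UPS is still on mains and, if so, veto the shutdown. The address localhost:3552 is an assumption (adjust to wherever the second instance answers NIS queries), and the exit-code-99 convention should be verified against your apccontrol version:

      #!/bin/bash
      peer=$(apcaccess status localhost:3552 2>/dev/null | awk '/^STATUS/ {print $3}')
      if [ "$peer" = "ONLINE" ]; then
          logger -t apcupsd "Peer UPS still on mains, vetoing shutdown"
          exit 99     # tells apccontrol not to run its default action
      fi
      exit 0          # both UPSes on battery: let the shutdown proceed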

    Read the article

  • puppet master REST API returns 403 when running under Passenger, but works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • rTorrent, too low memory usage !?

    - by Claudiu
    I would like to know from more experienced rTorrent users how to tweak .rtorrent.rc so that rTorrent caches disk reads and writes (the same way uTorrent does). I have set max_memory_usage = 1GB, but this amount is not used. I run 6 rTorrent instances on a quad-core, 8 GB RAM machine, and the total used memory reported by htop is only ~500 MB. I need to use memory buffers because disk I/O activity is very high.

    Read the article

  • SVN Authz - Any Subfolder permission or List contents

    - by Jaspa Jones
    Goal Basically I would like SVN users to be able to browse through a directory containing a lot of subfolders without allowing them to read its subfolders. [/] * = r [/Projects] * = # Allow viewing contents, but not reading. At least to be able to see Project1. [/Projects/Project1] my_group = rw Problem The problem is that there are a lot of projects. I could add every other project and make them disappear for the user, but that would be a lot of work to maintain. It would look like this: [/] * = r [/Projects] * = r [/Projects/Project1] my_group = rw [/Projects/Project2] * = [/Projects/Project3] * = [/Projects/Project4] * = [/Projects/Project5] * = It would be nice if I could use this: [/Projects/*] * = Any ideas? Thanks in advance, Jaspa Jones

    Read the article

  • XTerm and a bold text

    - by user610378
    This is my XTerm config:
    XTerm*saveLines: 512
    XTerm*reverseVideo: false
    XTerm*reverseWrap: true
    XTerm*fullCursor: true
    XTerm*scrollTtyOutput: on
    XTerm*scrollKey: on
    XTerm*eightBitInput: false
    XTerm*pointerColor: white
    XTerm*pointerShape: left_ptr
    XTerm*charClass: 37:48,45-47:48,58:48,64:48,126:48
    XTerm*cursorColor: rgb:aa/aa/aa
    XTerm*cursorColor2: black
    XTerm*color0: rgb:71/71/71
    XTerm*color1: rgb:cd/00/00
    XTerm*color2: rgb:b4/cd/00
    XTerm*color3: rgb:cd/cd/00
    XTerm*color4: rgb:71/71/71
    XTerm*color5: rgb:cd/00/cd
    XTerm*color6: rgb:00/cd/cd
    XTerm*color7: rgb:e5/e5/e5
    XTerm*color8: rgb:4c/4c/4c
    XTerm*color9: rgb:ff/00/00
    XTerm*color10: rgb:55/ac/55
    XTerm*color11: rgb:ff/ff/00
    XTerm*color12: rgb:46/82/b4
    XTerm*color13: rgb:ff/00/ff
    XTerm*color14: rgb:00/ff/ff
    XTerm*color15: rgb:ff/ff/ff
    XTerm*colorBD: white
    XTerm*colorUL: SkyBlue
    XTerm*colorBDMode: on
    XTerm*colorULMode: on
    XTerm*underLine: on
    XTerm*background: rgb:30/0a/24
    XTerm*foreground: white
    XTerm*font: -*-monospace-medium-r-normal-9-140-*-*-m-*-*
    XTerm*font1: 5x7
    XTerm*font2: 6x10
    XTerm*font3: fixed
    XTerm*font4: 9x15
    XTerm*ScrollBar.Background: gray
    XTerm*ScrollBar.thickness: 0
    XTerm*ScrollBar.foreground: gray
    XTerm*ScrollBar: false
    XTerm*ScrollBar.DrawBorder: false
    XTerm*loginShell: true
    XTerm*faceName: Mono
    XTerm*faceSize: 9
    Could anyone tell me whether it is possible to make some text bold when its color is, e.g., color1 from my config? I've tried XTerm*color1: rgb:cd/00/00 bold, but this doesn't work.
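
    Bold is an attribute the application requests, not something an Xresources palette entry can force, so appending "bold" to color1 has no effect; colorBD/colorBDMode only control how bold text is rendered once it is requested. A quick demonstration of color1 with and without the bold attribute:

      printf '\033[1;31mbold color1 text\033[0m   \033[31mnormal color1 text\033[0m\n'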

    Read the article

  • Samba - permission issue

    - by user88432
    I am trying to get Samba to work properly... I have a "Movies" share (//server/Movies), and I want only the root account to be able to upload and delete. Guests should be able to view the "Movies" share without a password/login, but they must not be able to delete or update (view only).
    [Movies]
    path = /mnt/user/Movies
    browsable = yes
    public = yes
    writable = no
    write list = root
    guest ok = yes
    I can access the Movies share as guest, but when I try to add a new file I get an error saying: "You need permission to perform this action". I expected a username/password prompt to pop up, but it didn't. How do I fix this?
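
    With guest ok = yes the client is let in as guest immediately, so no password prompt ever appears; to write you have to connect explicitly as root, and root needs a Samba password on the server. A hedged check from another machine:

      sudo smbpasswd -a root                               # on the server: give root a Samba password
      smbclient //server/Movies -N -c 'ls'                 # anonymous: browse only
      smbclient //server/Movies -U root -c 'put test.mkv'  # authenticated as root: write allowed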

    Read the article
