Search Results

Search found 14544 results on 582 pages for 'ssh config'.

Page 373/582 | < Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >

  • SVN: how to change hostname?

    - by elon
    I'd like to set up an SVN repo on a local machine, but we already have Apache running under localhost. When I use the installer from the Subversion site with the Apache option, it installs a second Apache, and when I type "localhost" in the browser I see this new Apache (not the old one). The question is how to run this new Apache under another host name. The installer asks about it, so I set a different name, but it still answers under localhost (nothing changes). I'd like to access SVN via a URL such as "svnrepo", not "localhost". What can I do about it? Which lines of the config should be changed (and/or what else needs to change)? Another way I'm thinking of solving this is to integrate the SVN Apache module into my existing Apache, but I don't really know how to do that either (my Apache is 2.2.6).
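
    A minimal sketch of one way this is often handled: map the name "svnrepo" to the loopback address and run the second Apache on its own port with that ServerName. The port, paths, and hosts-file location below are assumptions, not taken from the question:

        # In the hosts file (C:\Windows\System32\drivers\etc\hosts on Windows,
        # /etc/hosts elsewhere), point the name at the loopback address:
        #   127.0.0.1   svnrepo
        #
        # In the second Apache's httpd.conf, bind it to a port the first
        # Apache is not using, and give it the desired name (assumes mod_dav
        # and mod_dav_svn are loaded, e.g.
        # LoadModule dav_svn_module modules/mod_dav_svn.so):
        Listen 8080
        ServerName svnrepo:8080
        <Location /svn>
            DAV svn
            SVNParentPath C:/svn/repos   # assumed repository parent path
        </Location>

    With that in place the repository would be reachable at http://svnrepo:8080/svn/... while the original Apache keeps port 80. Folding the DAV block into the existing Apache instead (the second idea in the question) needs mod_dav_svn loaded there, which Apache 2.2.6 supports.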

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER:
          Invalid parameter identifier at
          /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40
          on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
          include template::common
          include template::puppetagent
          include template::lamp
          User::Create <| |>
          sudo::conf { 'joe':
            priority => 60,
            content  => 'joe ALL=(ALL) NOPASSWD: ALL',
            require  => User::Create['joe'],
          }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
          include myfirewall
          Firewall <| |>   # two specific firewall rules were realized here;
                           # the exact titles were lost to markup garbling
          class { 'apache': }
          class { 'apache::mod::php': }
          class { 'apache::mod::ssl': }
          class { 'mysql::server': }
        }

    (Server Fault markup garbled my Puppet realize statements; the User::Create and Firewall lines are just realizing a user and 2 firewall rules.) I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and is the same MD5 as the server's copy. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
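
    Two things sometimes checked for this class of error (assumptions, not a confirmed fix): that pluginsync is on for both agent and master, and that the master has been restarted since the module update, since a running master can keep a stale copy of a custom type in memory:

        # /etc/puppet/puppet.conf (both agent and master)
        [main]
        pluginsync = true

        # then restart the master so it reloads custom types
        service puppetmaster restart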

    Read the article

  • run two apache servers on one computer

    - by harry_T
    I would like to run two XAMPP Apache servers (plus MySQL) on one Windows computer. My first idea was to run one under the directory XAMPP and the other under XAMPP_B. Why, you ask? I have two applications that each have to live in the "root" directory of localhost. Both servers do not have to be active at the same time, so I don't think I will have any conflicts. I will have to modify my.cnf in MySQL, httpd.conf, apache_start, and maybe other config files as well. Or maybe someone can suggest a better way...
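
    If both copies ever do need to run at once, a sketch of the edits that keep them from colliding (all paths and port numbers here are assumptions):

        # XAMPP_B\apache\conf\httpd.conf -- move the second Apache off port 80:
        Listen 8080
        ServerName localhost:8080

        # XAMPP_B\mysql\bin\my.cnf (or my.ini) -- move the second MySQL off 3306:
        [mysqld]
        port = 3307

        [client]
        port = 3307

    The second application would then be the "root" of http://localhost:8080/ while the first keeps http://localhost/.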

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends in a certain storage resource's utilization (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it reviews the last N entries in an RRD file generated by a custom Cacti data source/template. What is the "right" way to handle the Nagios notification config/implementation for this? The problem is that the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or 3 polling periods later, if I look at 3 polling periods' worth of data). I guess the question is: should I just write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server, or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter, I'm open to them (I can think of a few ways to possibly implement it).
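
    For the "manual intervention" variant, a sketch of the usual mechanism: a script writes a PROCESS_SERVICE_CHECK_RESULT line into the Nagios external command file. The host name, service name, and command-file path below are placeholders:

        #!/bin/sh
        # Submit an OK passive result to clear the trend alert by hand.
        CMDFILE=/usr/local/nagios/var/rw/nagios.cmd   # assumed location
        NOW=$(date +%s)
        printf '[%s] PROCESS_SERVICE_CHECK_RESULT;storage01;StorageTrend;0;OK - cleared manually\n' \
            "$NOW" > "$CMDFILE"

    The service would then be defined so its active checks only ever raise the alarm, leaving this script (behind a URL or a login) as the way back to OK.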

    Read the article

  • Caching DNS server (BIND 9.4.2) CPU usage is extremely high

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

        Xeon dual-core 2.8GHz, 4GB of RAM
        CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
        bind 9.4.2

        rndc status: recursive clients: 666/4900/5000

    About 300 new queries (not in cache) per second. BIND always uses 100% of one core on a single-threaded config. After I recompiled it multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't find any info about how BIND uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

        cat /proc/meminfo
        MemTotal:   4147876 kB
        MemFree:    1863972 kB
        Buffers:     143632 kB
        Cached:      372792 kB
        SwapCached:       0 kB
        Active:     1916804 kB
        Inactive:    276056 kB

    I've set max-cache-size to 0 to make sure BIND can use as much RAM as it wants, but it always stops at ~2GB. Since every second we get queries that aren't cached, theoretically RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk
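
    Two hedged observations rather than a confirmed fix: on a 32-bit PAE kernel a single process tops out at roughly 2-3GB of address space no matter what max-cache-size says, which would explain the ~2GB ceiling; and the worker-thread count can be pinned at startup with named's -n flag (e.g. named -n 2) to see how CPU use scales. Capping the cache explicitly is also worth a try:

        // named.conf sketch: an explicit cap instead of 0 (unlimited);
        // the value is a placeholder to experiment with
        options {
            max-cache-size 1536m;
        };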

    Read the article

  • Grub loading. The symbol ' ' not found. Aborted. Press any key...

    - by John
    Hi there, I have a dual-boot Dell XPS 9000 with Windows 7 and Ubuntu. After I performed a system backup on it, as requested by Windows 7, I am no longer able to boot the computer; instead, right after the BIOS I get the following message:

        Grub loading.
        The symbol ' ' not found.
        Aborted. Press any key...

    I tried changing the BIOS boot config to start with the hard drive, and it still returned the same message. The Windows boot disk only asks me to do another system backup or threatens to delete my hard drive completely. The only solution I have so far is to reinstall Ubuntu, but that leaves 2 additional copies of Ubuntu on my computer. Is there a simpler way to fix the situation so I can actually boot into Windows? Thanks so much.
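
    A sketch of the usual non-reinstall repair: boot an Ubuntu live CD and reinstall GRUB onto the disk's MBR from the existing install. The device names below are assumptions; the actual partition has to be checked first:

        # From the live CD session:
        sudo fdisk -l                       # find the Ubuntu root partition
        sudo mount /dev/sda5 /mnt           # assumed partition; adjust
        sudo grub-install --root-directory=/mnt /dev/sda

    After a reboot, the existing GRUB menu (including the Windows 7 entry) should come back without reinstalling anything.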

    Read the article

  • How to get Atheros ar242x wireless adapter working under Debian Linux?

    - by Mark
    Does anybody know how to get the Atheros ar242x wireless adapter working under Debian Linux (5.0.2 and/or 5.0.3)? My Debian live CDs and install CDs both don't like this card at all. Curiously, it seems to work on other, Debian-based, Linuxes. Is this a free/non-free driver issue? I know Debian gets mardy about that. Although, for what it's worth, the live CD doesn't seem to detect my wired LAN connection either... Specifically, this is on a Samsung R610 laptop (some versions of which seem to have an Intel wireless adapter - this one definitely doesn't!). I've tried all sorts of things, but obviously on a live CD installing software is limited. I've also tinkered with network config files, kernel modules, etc., but to no avail.
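
    A hedged starting point: on hardware of this era the AR242x is typically driven by the in-kernel ath5k module, so the first check is whether it is present and bound. These are generic diagnostics, not a confirmed fix for lenny's stock kernel:

        lspci -nn | grep -i atheros    # confirm the chip is seen
        modprobe ath5k                 # try the expected driver
        dmesg | tail                   # look for ath5k/firmware errors
        iwconfig                       # does a wireless interface appear?

    If ath5k in lenny's stock kernel proves too old for this chip, a newer kernel from lenny-backports is the usual next step.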

    Read the article

  • How do you enable multi-core virtualization in Windows 8 Pro?

    - by Greg B
    I've just got a new Dell Vostro 470 with a quad-core (8-thread) i7 3770, and I'm trying to run virtual machines on it, which works fine, except that I can't assign multiple cores to a VM. I've checked the BIOS, which states "Intel Virtualization Technology [Enabled]", but both Hyper-V and VirtualBox will only allow me to assign a single core. If I run the Intel Processor Identification Utility on the host OS, it tells me that Intel Virtualization Technology isn't supported by the processor, but according to the Intel website, it is. So what's going on? Have Dell clipped the i7's wings? Is there some config in Windows I need to change?
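
    One hedged observation: once the Hyper-V role is enabled it owns VT-x, so tools running in the host OS (including Intel's utility and VirtualBox, which needs VT-x for multi-core guests) can report the extensions as unavailable even though the CPU has them. Some checks under that assumption (the VM name is a placeholder):

        # Elevated PowerShell: what does Windows itself think?
        systeminfo | Select-String "Virtualization"

        # Hyper-V assigns vCPUs per VM, not globally:
        Set-VMProcessor -VMName "TestVM" -Count 4

    If VirtualBox must see VT-x directly, disabling the Hyper-V role (and rebooting) is the usual test.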

    Read the article

  • Kerberos: Running an app with a parameter using krenew

    - by Mihai Todor
    I need to run an application with krenew, but the application also needs to receive a parameter via the command line, and I need to send its output to a file. From the documentation, it looks like this should do the trick:

        krenew -t -- sh -c 'compute-job > /afs/local/data/output'

    but, unfortunately, when I run the command below:

        krenew -s -- sh -c './my_app config.xml > results/test.txt &'

    the application just dies after a while, and I can see from the output of ps aux that krenew is not running alongside my_app. I am not sure what the parameter -t does, and as far as I can see, if I run krenew -s ./my_app, it works properly. I hope someone can clarify this.
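
    A hedged reading of what's happening: the trailing & makes sh exit immediately after forking my_app, so krenew sees its child finish and stops renewing, after which the detached my_app dies when the ticket expires. Under that assumption, the job should stay in krenew's foreground and krenew itself be backgrounded:

        # -b backgrounds krenew; -t runs aklog to refresh AFS tokens on renewal
        krenew -b -t -- sh -c './my_app config.xml > results/test.txt'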

    Read the article

  • FreeBSD Ports: How can I see all dependencies for a port, and all subdependencies for those dependencies?

    - by Stefan Lasiewski
    I'm trying to build a port which depends on apache-ant. I thought I could run make build-depends-list to see all dependencies required by this port:

        # make build-depends-list
        /usr/ports/devel/apache-ant
        /usr/ports/java/jdk16
        /usr/ports/math/gmp

    But after installing everything, the port had a dependency list a mile long:

        apache-ant-1.8.1 desktop-file-utils-0.15_2 gamin-0.1.10_4 gettext-0.18.1.1
        gio-fam-backend-2.26.1 glib-2.26.1_1 gmp-5.0.1 inputproto-2.0
        javavmwrapper-2.3.5 kbproto-1.0.4 libX11-1.3.3_1,1 libXau-1.0.5
        libXdmcp-1.0.3 libXext-1.1.1,1 libXi-1.3,1 libXtst-1.1.0 libiconv-1.13.1_1
        libpthread-stubs-0.3_3 libxcb-1.7 pcre-8.12 perl-5.10.1_3 pkg-config-0.25_1
        python26-2.6.6 recordproto-1.14 unzip-6.0 xextproto-7.1.1 xproto

    How can I see all dependencies, and all subdependencies of those dependencies, for a port?
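
    A sketch of the targets usually reached for here; both exist in the ports framework, though output format varies with the ports-tree version:

        cd /usr/ports/devel/apache-ant
        make all-depends-list      # every dependency, recursively
        make run-depends-list      # direct run-time dependencies only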

    Read the article

  • Clean URLS on Hiawatha

    - by Botto
    I am using the Hiawatha web server and running Drupal on a FastCGI PHP server. The Drupal site uses ImageCache, which requires either private files or clean URLs. The issue I am having with clean URLs is that requests for files are being rewritten to index.php as well. My current config is:

        UrlToolkit {
            ToolkitID = drupal
            RequestURI exists Return
            Match (/files/*) Rewrite $1
            Match ^/(.*) Rewrite /index.php?q=$1
        }

    The above does not work. Drupal's Apache setup is:

        <Directory /var/www/example.com>
            RewriteEngine on
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </Directory>
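
    A hedged observation: the Apache rules send every request for a file that does not exist (including not-yet-generated ImageCache derivatives) to index.php, while the "Match (/files/*) Rewrite $1" line above stops /files/ requests from ever falling through. A closer translation might simply be:

        UrlToolkit {
            ToolkitID = drupal
            RequestURI exists Return
            Match ^/(.*) Rewrite /index.php?q=$1
        }

    so that existing files are served as-is and everything else, files paths included, reaches Drupal.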

    Read the article

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool made of 4 combined drives. Now the filesystem cannot be mounted:

          pool: tank
         state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-3C
          scan: none requested
        config:

            NAME       STATE     READ WRITE CKSUM
            tank       UNAVAIL      0     0     0  insufficient replicas
              c10t0d0  ONLINE       0     0     0
              c8t0d0   UNAVAIL      0     0     0  cannot open
              c8t1d0   ONLINE       0     0     0
              c10t1d0  ONLINE       0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of the files that were there?

    Read the article

  • FreeBSD Nginx installation error

    - by Asaf Nevo
    I have a VPS which has the Apache web server installed. I'm trying to install Nginx on it, since my new server will need to handle a large number of simultaneous connections. I used this install guide and did:

        cd /usr/ports/www/nginx
        make install clean

    However, I get this error:

        adding module in /usr/ports/www/nginx/work/arut-nginx-dav-ext-module-0e07a3e
        ./configure: error: no /usr/ports/www/nginx/work/arut-nginx-dav-ext-module-0e07a3e/config was found
        ===> Script "configure" failed unexpectedly.

    I'm pretty new to FreeBSD, and I am used to controlling my server with DirectAdmin. What shall I do next?
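
    A hedged suggestion: the failure is in the optional third-party dav-ext module, so one path forward is to rebuild the port without it. The option name is an assumption based on the module's name; the dialog shows the exact spelling:

        cd /usr/ports/www/nginx
        make rmconfig        # discard the saved option set
        make config          # untick the DAV_EXT (WebDAV extensions) option
        make install clean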

    Read the article

  • buggy mouse click after a while

    - by sputnick
    The cursor of my mouse moves fine across my TwinView setup, but after a WHILE - or if I start KDE via runlevels (I don't know why, but startx works better) - I can't click anything. I have replaced the mouse with another one, but the problem is the same, so it's not a hardware problem. The problem occurs in both GNOME and KDE. My config:

        PC Dell Optiplex 780
        Arch Linux x86_64 (up to date)
        xf86-input-evdev 2.6.0-4
        xorg-server 1.11.2-2
        kernel linux-3.1.1-1-ARCH

    My Xorg log doesn't contain the "EE" error marker. Any clue?
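
    Some generic diagnostics for the moment the clicks die (a sketch; the device id varies):

        xinput list                   # find the pointer's device id
        xinput test <id>              # are button events still delivered?
        xinput query-state <id>       # is a button stuck "down"?

    A button that xinput still sees points at the desktop environment (e.g. a stuck grab); silence at this level points back at evdev or the X server.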

    Read the article

  • How can I enable anonymous access to a Samba share under ADS security mode?

    - by hemp
    I'm trying to enable anonymous access to a single service in my Samba config. Authorized user access works perfectly, but when I attempt a no-password connection, I get this message:

        Anonymous login successful
        Domain=[...] OS=[Unix] Server=[Samba 3.3.8-0.51.el5]
        tree connect failed: NT_STATUS_LOGON_FAILURE

    The message log shows this error:

        ... smbd[21262]: [2010/05/24 21:26:39, 0] smbd/service.c:make_connection_snum(1004)
        ... smbd[21262]:   Can't become connected user!

    smb.conf is configured thusly:

        [global]
           security = ads
           obey pam restrictions = Yes
           winbind enum users = Yes
           winbind enum groups = Yes
           winbind use default domain = true
           valid users = "@domain admins", "@domain users"
           guest account = nobody
           map to guest = Bad User

        [evilshare]
           path = /evil/share
           guest ok = yes
           read only = No
           browseable = No

    Given that I have "map to guest = Bad User" and "guest ok" specified, I don't understand why it is trying to "become connected user". Should it not be trying to "become guest user"?
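
    A hedged observation: "valid users" in [global] applies to every share, and the guest account nobody belongs to neither AD group, so the guest tree connect is refused. One sketch of a fix is to scope the restriction to the authenticated shares and leave the guest share out of it ([secureshare] is a placeholder name for the existing authenticated shares):

        [global]
           security = ads
           guest account = nobody
           map to guest = Bad User
           # (no global "valid users" line)

        [secureshare]
           path = /secure/share
           valid users = "@domain admins", "@domain users"

        [evilshare]
           path = /evil/share
           guest ok = yes
           read only = no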

    Read the article

  • Dynamically loading Chef recipes from URLs

    - by andy
    I'm deploying a web app on AWS. I intend to use Chef to build AMIs which I'll then put into production. I want Chef to monitor a URL stored in SimpleDB. The URL would point to a tarball in S3. There would be different URLs: one for a config tarball, one for a code tarball. When I update the URL in SimpleDB, I want Chef to spot this and pull in and apply the configs/deploy the code. Is this possible? Has anything like this been done before, or would I need to roll my own code? I think Chef can monitor URLs, but what would be the best way of getting it to load that URL from SimpleDB?

    Read the article

  • How to stop Nginx sending static file requests to the CakePHP app controller when running Cake in a subfolder

    - by Throlkim
    I'm trying to run a CakePHP app from within a subfolder on Nginx, but the static files are not being found and are instead passed to the app controller. Here's my current config:

        location /uniquetv {
            index index.php index.html;
            if (-f $request_filename) {
                break;
            }
            if (!-e $request_filename) {
                rewrite ^/uniquetv(.+)$ /uniquetv/webroot/$1 last;
                break;
            }
        }

        location /uniquetv/webroot {
            index index.php;
            if (!-e $request_filename) {
                rewrite ^/uniquetv/webroot/(.+)$ /uniquetv/webroot/index.php?url=$1 last;
                break;
            }
        }

    Any ideas? :)
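
    A hedged rewrite of the same intent using try_files (available since nginx 0.7.27), which avoids the if/rewrite interplay that often causes exactly this symptom; the paths are assumptions taken from the config above:

        location /uniquetv/ {
            # check the file as given, then inside webroot, then fall back
            # to the front controller
            try_files $uri /uniquetv/webroot$uri /uniquetv/webroot/index.php?url=$uri;
        }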

    Read the article

  • Too much free space on FreeNAS - ZFS

    - by Guillaume
    I have a FreeNAS server with 3 x 2 TB disks in raidz1. I would expect to have about 4 TB of space available, but when I run zpool list I get:

        [root@freenas] ~# zpool list
        NAME          SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
        main_volume  5.44T  3.95T  1.49T  72%  ONLINE  /mnt

    I was expecting a size of 4 TB. Also, the used space reported by zpool list does not match what du reports:

        [root@freenas] ~# du -sh /mnt/main_volume/
        2.6T    /mnt/main_volume/

    There are quite a few things I don't yet completely understand about ZFS, but at the moment I am mostly worried that I misconfigured my system and don't actually have any storage redundancy. How can I make sure I did not make a horrible mistake? For the sake of completeness, here is the output of zpool status:

        [root@freenas] ~# zpool status
          pool: main_volume
         state: ONLINE
         scrub: none requested
        config:

            NAME                                            STATE     READ WRITE CKSUM
            main_volume                                     ONLINE       0     0     0
              raidz1                                        ONLINE       0     0     0
                gptid/d8584e45-5b8a-11d9-b9ea-5404a6630115  ONLINE       0     0     0
                gptid/d8f7df30-5b8a-11d9-b9ea-5404a6630115  ONLINE       0     0     0
                gptid/d9877cc3-5b8a-11d9-b9ea-5404a6630115  ONLINE       0     0     0

        errors: No known data errors
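
    A reassuring check (this is a known zpool/zfs reporting difference, though the exact numbers are the poster's): zpool list counts raw capacity including parity, which is why 3 x 2 TB shows up as ~5.44T, while zfs list subtracts the raidz1 parity and shows usable space:

        zfs list main_volume    # USED/AVAIL here is after raidz1 parity

    The raidz1 line in the zpool status output confirms the pool does have single-disk redundancy.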

    Read the article

  • Why do I have to manually 'Restart Management Network' on vSphere 5 host after reboot to get networking available?

    - by growse
    I've got a couple of vSphere 5.0 hosts in a small lab environment here and I've noticed a strange behaviour. When one of the hosts gets rebooted, it is unresponsive on the network until I log into the ESXi console, press F2 to customize, and select "Restart Management Network". Once this is done, the networking works perfectly as expected. Each host has two NICs which are trunked together using EtherChannel to a Cisco 3750. The link is also a .1q VLAN trunk; the management network is configured on VLAN 121, with VM traffic on VLAN 118. Why would the host be completely dead to the world until I physically kick it?

    Edit - sample switch config for the trunk:

        interface Port-channel2
         description Blade 1 EtherChannel Trunk
         switchport trunk encapsulation dot1q
         switchport mode trunk
        end
        !
        interface GigabitEthernet4/0/1
         description Bladecenter1 CPM 1A
         switchport trunk encapsulation dot1q
         switchport mode trunk
         speed 1000
         duplex full
         channel-group 2 mode on
        end

    (The vSwitch teaming settings and management port group settings were attached as screenshots.)
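
    A hedged pointer: a static EtherChannel (channel-group ... mode on) only works when the vSwitch load-balancing policy is "Route based on IP hash" on every port group, the management one included; with the default "originating virtual port ID" policy, the switch and host disagree about which uplink owns the management MAC until something (like restarting the management network) reannounces it. One way to set the policy from the host, assuming a standard vSwitch named vSwitch0 and the 5.0 esxcli namespace:

        esxcli network vswitch standard policy failover set \
            --vswitch-name=vSwitch0 --load-balancing=iphash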

    Read the article

  • Ubuntu and MySQL server. Something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings: http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088 Now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server from within the VM, but I cannot reach MySQL from within the VM using mysqladmin --protocol=tcp --host=self_ip ping. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access it with no problem as well. It looks like Ubuntu does not have a firewall either (I figured this out before), so I am stumped as to why the server isn't allowing my connection. Apparently the config file works on another person's machine: http://www.pastie.org/742545 I am using Ubuntu 6.06 LTS just for 'support' reasons. So hopefully this will be 'easy'?
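
    Two quick checks worth running on a MySQL of that vintage (a sketch; the paths assume the stock Ubuntu package):

        # Is mysqld listening on TCP at all, or only on the Unix socket?
        sudo netstat -ltnp | grep 3306

        # The old my.cnf default "skip-networking" disables TCP entirely,
        # even with bind-address commented out:
        grep -nE 'skip-networking|bind-address' /etc/mysql/my.cnf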

    Read the article

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for an SVN repository that's being served using mod_dav_svn. Here is my Apache config for the directory:

        <Location /svn>
            DAV svn
            SVNParentPath /var/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dev.passwd
            Require valid-user
        </Location>

    I can use svn with the projects under /var/svn/repos, so I know the DAV is working, but when I do svn updates or commits (or anything), Apache doesn't ask for any authentication... It does exactly the same thing whether the Auth directives are there or not. The permissions on the repository directory (and all subdirectories/files) only give access to www-data (the Apache2 user/group). I have also ensured that all relevant modules are enabled (in particular mod_auth is enabled, as are all mod_dav* modules). Any ideas why svn commands aren't authenticating? Thanks in advance.
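
    Generic checks for silent Basic-auth failure (a sketch, assuming a Debian-style Apache 2.2 layout):

        sudo a2enmod auth_basic authn_file    # the 2.2 module names behind "mod_auth"
        apache2ctl -t                         # does the config actually parse?
        apache2ctl -S                         # which vhost is really serving /svn?
        sudo /etc/init.d/apache2 restart

    A classic cause is the <Location> block living in one vhost or conf file while requests are served by another; apache2ctl -S makes that visible.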

    Read the article

  • Why can't I add a hot spare in FreeBSD? Can anybody help me fix it?

    - by hamlet
    Why can't I add a hot spare? Can anybody help me fix it?

        # mfiutil add e1:s1 mfid0
        mfiutil: Drive 1 is not available

    My mfi status:

        # mfiutil show config
        mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
            array 0 of 2 drives:
                drive 0 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWHSB4C> SAS enclosure 1, slot 0
                drive 1 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWJ3AEC> SAS enclosure 1, slot 1
            volume mfid0 (136G) RAID-1 64K OPTIMAL spans:
                array 0

        # mfiutil show events
        1468 (boot + 25s/BATTERY/WARN) - Battery removed
        1475 (boot + 52s/DRIVE/WARN) - PD 00(e1/s0) is not a certified drive
        1478 (boot + 52s/DRIVE/WARN) - PD 01(e1/s1) is not a certified drive
        1480 (boot + 64s/BATTERY/WARN) - BBU disabled; changing WB virtual disks to WT

        # mfiutil show volumes
        mfi0 Volumes:
          Id     Size    Level   Stripe  State   Cache   Name
         mfid0 ( 136G) RAID-1      64K OPTIMAL Disabled
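
    A hedged reading of the output: e1:s1 is drive 1 of the existing RAID-1 array (state ONLINE), and a drive that is already an array member cannot double as a hot spare, hence "not available". A spare needs a third, unconfigured disk:

        mfiutil show drives          # list every drive the controller sees
        # then, with a free drive at (say) enclosure 1 slot 2:
        mfiutil add e1:s2 mfid0      # the slot number is a placeholder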

    Read the article

  • Keepalived takes several minutes to recover in a particular situation

    - by NathanE
    I've set up Keepalived for a master-slave style virtual IP, and it seems to work well. Both nodes are hosted in almost identical VMs. If I "pause" the VM running the master, the slave takes over, as expected, almost instantly. However, if I then "unpause" the master VM, the virtual IP stops responding to pings, and it takes a good 4 or 5 minutes to start pinging again. It seems to be getting desynchronised due to the nature of the way I'm testing it (by pausing/unpausing the VMs). I admit that pausing and unpausing VMs is a slightly dodgy way to test this, but it has raised a concern for me that there could be other scenarios that cause the same undesirable behaviour. Is this expected / by design? Is there anything I can do to the config to improve it? Thanks.
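
    One config knob sometimes used for this (a sketch; every value below is a placeholder, not taken from the question): have both nodes start as BACKUP and delay preemption, so a returning master sits out until it has fully settled:

        vrrp_instance VI_1 {
            state BACKUP              # even on the preferred node
            interface eth0
            virtual_router_id 51
            priority 150              # higher on the preferred node
            preempt_delay 300         # seconds to wait before reclaiming
            virtual_ipaddress {
                192.0.2.10
            }
        }

    Note that preempt_delay only takes effect when the initial state is BACKUP.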

    Read the article

  • What requirements does an IT department work space need?

    - by Rob
    Hello all, I need to provide a list of workspace requirements to the IT director for my network operations team. So far I've got:

        - A secure workspace, so nothing gets stolen and people can't come up to us asking for support (they need a ticket from the helpdesk)
        - A quiet area, so that we can work without being disturbed by the loud project managers who play soccer in the office sometimes
        - A large table or desk where we can set up and/or configure systems and servers if needed

    What else do we need? Thanks in advance.

    Read the article

  • What is the maximum number of virtualhosts Apache can handle?

    - by FractalizeR
    Hello. What is the maximum number of VirtualHosts Apache can handle on a single machine? (I don't mean anything related to load; let's suppose that's irrelevant to the question. And we're taking Apache alone, without any proxying layer like nginx in front.) I am asking because on one forum a guy reported that his Apache became unstable with more than 400 sites on a single machine. If you have a config that handles more than 400, please tell me here. Thanks.
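
    For context, a hedged note: Apache itself has no fixed vhost limit; what usually breaks first at a few hundred vhosts is the per-process file-descriptor limit, since each vhost's log files hold descriptors open. Two common mitigations (the directives and format string are standard; the numbers and paths are placeholders):

        # Raise the descriptor limit in the shell that starts Apache:
        ulimit -n 8192

        # Or share one log across all vhosts (%v prefixes the vhost name)
        # and split it offline afterwards:
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O" vhostlog
        CustomLog /var/log/apache2/other_vhosts_access.log vhostlog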

    Read the article
