Search Results

Search found 13404 results on 537 pages for 'george host'.


  • Oracle Error ORA-12560 TNS:Protocol Adapter error?

    - by David Basarab
    I am using Oracle Database 10g. Both servers are Windows 2003. I have an Oracle database set up on one server. Here is the tnsnames.ora from the server with the database:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL.VIRTUALHOLD.COM =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    The environment variables on the server are:

        ORACLE_HOME = C:\oracle\product\10.2.0\db_1
        ORACLE_SID = orcl

    I am trying to connect to it from another box that has the Oracle client installed. Here is the tnsnames.ora on that client machine:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\client_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
            )
          )

    Its environment variables are:

        ORACLE_HOME = C:\oracle\product\10.2.0\client_1
        ORACLE_SID = orcl

    Locally on the database server I can connect through sqlplus with no issues. On the client machine I keep getting the error:

        ORA-12560: TNS:protocol adapter error

    What am I missing? Does the client tnsnames.ora need to be different?
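
    A minimal first check from the client box (hedged: this assumes the standard 10g client tools are on the PATH, and scott/tiger is a placeholder login). On Windows, ORA-12560 frequently means the TNS alias was never used at all: with ORACLE_SID set, a bare sqlplus login attempts a local connection to a database that does not exist on the client.

        REM hedged sketch (cmd.exe on the Windows client)
        REM 1) does the alias resolve and reach port 1521?
        tnsping ORCL
        REM 2) force the TNS route by naming the alias explicitly:
        sqlplus scott/tiger@ORCL
        REM 3) with ORACLE_SID=orcl set, "sqlplus scott/tiger" (no @alias) tries
        REM    a local bequeath connection and fails with ORA-12560; clear it:
        set ORACLE_SID=
        sqlplus scott/tiger@ORCL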

  • tomcat 'document base does not exist' error (but it does)

    - by SpliFF
    Gentoo / Tomcat 6.

        INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
        Sep 8, 2009 10:34:51 AM org.apache.catalina.core.StandardContext resourcesStart
        SEVERE: Error starting static Resources
        java.lang.IllegalArgumentException: Document base /www/rivervalley/site does not exist or is not a readable directory
            at org.apache.naming.resources.FileDirContext.setDocBase(Unknown Source)
            at org.apache.catalina.core.StandardContext.resourcesStart(Unknown Source)
            at org.apache.catalina.core.StandardContext.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)
            at org.apache.catalina.core.StandardHost.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)

    Oh really? Then how come:

        ls -la /www/rivervalley/site/
        drwxr-xr-x 12 tomcat tomcat 4096 Sep 8 09:56 .
        drwxr-xr-x 16 tomcat tomcat 4096 Jun 29 16:22 ..
        -rwxr--r-- 1 tomcat tomcat 520 Jul 3 02:15 Application.cfm
        drwxr-xr-x 2 tomcat tomcat 4096 Sep 8 09:56 WEB-INF

    and ...

        tomcat 18916 1.0 5.5 1159188 167892 ? Ssl 10:37 0:11 /opt/sun-jdk-1.5.0.18/bin/java -Djava.util.loggin

    Hell, ANY account can read that directory, so the claim is utter nonsense. What else can cause this? Here's my relevant server.xml section:

        <Host name="rivervalley" appBase="webapps"
            unpackWARs="false" autoDeploy="false"
            xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="/www/rivervalley/site" />
        </Host>
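
    One thing that ls alone cannot rule out (a hedged check, assuming a "tomcat" service user): reading a directory also requires execute (search) permission on every parent of the path, and ls -la only shows the target directory itself.

        # namei (util-linux) prints the permissions of each path component:
        namei -l /www/rivervalley/site
        # or simply attempt the access as the actual tomcat user:
        sudo -u tomcat ls /www/rivervalley/site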

  • How can I set up nginx to serve virtual hosts with Rails (unicorn/passenger) and php-fpm

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using sites like this to guide me. I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;
            location / {
                index index.html index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;
            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a unicorn Rails server running via rails s -p 3000. Yet no sites come up for either site1.com or site2.com, although I can get to the Rails site at www.site2.com:3000. What is wrong? I've spent 2 days (nearly 30 hours) trying many different blogs, SO/SF questions, etc. Please share your insight or answer.

    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.
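
    Two hedged checks, given that nothing reaches the access logs: first confirm nginx itself answers on port 80 with these vhosts loaded, then worry about DNS (curl and the paths below are the usual defaults):

        nginx -t && nginx -s reload      # syntax-check, then re-read conf.d/*.conf
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: site1.com' http://127.0.0.1/
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: site2.com' http://127.0.0.1/
        # if both answer locally but browsers still fail, site1.com/site2.com
        # probably don't resolve to this instance yet; an empty access log means
        # the requests never arrive. Check with: dig +short site1.com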

  • Contacts in Outlook 2003/2007, some questions

    - by Ernst
    If I create a distribution list and then select members, can I see different fields than the default ones? In 2007 there are radio buttons for 'name only' and 'more columns', but the latter seems to only return no results at all, regardless of which address book I choose. In 2003 there is no such thing.

    Is there a plug-in that will break up the recipients (whether they be To, CC, or BCC) into groups of X, and send as many mails as required? Our host allows only 50 recipients per mail and only 300 total recipients per 5 minutes. I know the email client blat has exactly this functionality, but it does not seem to be able to connect to the Exchange server to get the contacts needed. Could I maybe set Outlook to send to blat, which then does the breaking up as necessary?

    Can I (or is there a plug-in for this) export only part of the contacts instead of all of them?

    Note that we send mail outside our organisation via our web host, where we've got a few mailboxes, and we use our Exchange (2000) server only internally; the few people that can send email to the outside world have an external mailbox as well as their Exchange account defined. I might be able to convince our general boss that we can simply give (some) people the ability to send outside via Exchange, but I might just as well not succeed. Alternatively, is there another program that can connect to Exchange to get the contacts (selected based on categories) and then send via SMTP in groups, with delays between the mails?

  • Random and Selective ARP blindness in VMWare ESXi 4.1

    - by Peter Grace
    We have multiple VMware ESX servers spread out amongst our company, doing various tasks. One particular ESXi host is exhibiting very peculiar behavior. We detect it when our monitoring system (Orion) notifies us that it can no longer ping the box. Upon jumping on the local console of the guest in question, we see that it cannot ping any new addresses that aren't already in its ARP table.

    At first we thought that the problem was just related to a single guest, as it seemed to always happen to DevRedis. However, this afternoon the problem swapped and started happening on ApacheBox rather than DevRedis. When I have been fortunate enough to catch the problem, I have run tcpdump on both sides of the connection (one side being VMware, the other side being a physical webserver) and have noticed the following course of events:

    1. Guest ApacheBox sends an ARP request for the physical address of server WindowsBeast.
    2. WindowsBeast tenders an ARP is-at reply back to the network indicating its physical MAC address.
    3. ApacheBox never sees the ARP is-at response.

    The ESX host in question is running VMware ESXi, 4.1.0, 348481. The two guests (DevRedis and ApacheBox) are both running CentOS 6.3; however, they are running two separate kernel versions (2.6.32-279.9.1.el6.x86_64 and 2.6.32-279.el6.x86_64), so I'm not entirely sure it's a CentOS problem. Does anyone have any thoughts on what might cause this? Has anyone run into it before?
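
    A hedged way to catch it in the act from the guest side (tcpdump and iproute2 assumed on the CentOS guests; 203.0.113.7 is a placeholder for WindowsBeast's IP):

        # terminal 1: watch ARP resolution live (-e prints MAC addresses)
        tcpdump -n -e -i eth0 arp
        # terminal 2: drop the cached entry and force a fresh lookup
        ip neigh flush dev eth0
        ping -c1 203.0.113.7
        # if the request leaves here but the reply, visible on the physical
        # switch port, never arrives in the guest, the frame is being dropped
        # inside the vSwitch/port group; comparing the port group's security
        # policy (MAC address changes / forged transmits) and moving the port
        # group to a different vmnic uplink are reasonable next experiments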

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps

    After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for VPS. So I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor; however, that starts at $599 monthly.

    Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month costs:   $599
        First month revenue: 6 x $16.95 = $101.70
        Loss in first month: $497.30

        First year costs:    $599 x 12 = $7,188
        First year revenue:  6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
        Loss in first year:  $5,459.10

    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the amount of sites we build?

  • Debian server won't reboot from script

    - by Littlejon
    I have a script that is run to back up a server via rsync; after that script is run I want the server to reboot. My script is run as root from the crontab at 3am in the morning.

        #!/bin/bash
        HOST="email"
        RSYNC_OPTS="-a -v -v --progress --stats --delete"
        RSYNC_DEST="10.0.0.10::$HOST"
        BACKUP_LIST="/etc /home /root"
        TIMESTAMP="/timestamp-bkup-start.chk"
        TIMESTAMP2="/timestamp-bkup-stop.chk"

        touch $TIMESTAMP
        rsync $RSYNC_OPTS $TIMESTAMP $RSYNC_DEST

        for BACKUP_ITEM in $BACKUP_LIST; do
            rsync $RSYNC_OPTS $BACKUP_ITEM $RSYNC_DEST
        done

        /etc/init.d/zimbra stop
        sleep 60s
        rsync $RSYNC_OPTS /opt $RSYNC_DEST

        touch $TIMESTAMP2
        rsync $RSYNC_OPTS $TIMESTAMP2 $RSYNC_DEST

        echo `date +%Y%m%d%H%M` >> /var/log/reset
        reboot

        # $# shows number of args passed
        # $1 to access first variable
        #if [ $# -eq 1 ]; then
        #    if [ $1 = "withreboot" ]; then
        #        echo "rebooting...";
        #        echo `date +%Y%m%d%H%M` >> /var/log/reset
        #        /sbin/reboot
        #    fi
        #fi

    I have tried using init 6 rather than reboot. I have tried /sbin/reboot. I also have another basic script that just echoes to the reset log and runs reboot without issue. It is just with the script above that the server won't restart. If anyone has any theories that would be great, as I have run out of ideas. Thanks, Jon
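
    A hedged debugging step, since a bare reboot works from a trivial script: capture everything the cron job prints and confirm the script ever reaches the reboot line; an rsync against a busy module, or an /etc/init.d/zimbra stop that never returns, would quietly stall it first (the script path below is a placeholder).

        # root's crontab: log stdout+stderr of the whole run
        0 3 * * * /root/backup.sh >> /var/log/backup-debug.log 2>&1
        # inside the script: bound the steps that can hang (GNU coreutils 7.0+)
        timeout 3600 /etc/init.d/zimbra stop
        timeout 7200 rsync $RSYNC_OPTS /opt $RSYNC_DEST
        /sbin/shutdown -r now    # explicit alternative to the bare "reboot"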

  • VMWare use of Gratuitous ARP REPLY

    - by trs80
    I have an ESXi cluster that hosts several Windows Server VMs and around 30 Windows workstation VMs. Packet captures show a high number of ARP replies of the form:

        sender_ip:  VM IP
        sender_mac: VM virtual MAC
        target_ip:  0.0.0.0
        target_mac: switch interface MAC

    The specific addresses aren't really a concern; they're all legitimate, and we're not having any problems with communications (most of the questions surrounding GARP and VMware have to do with ping issues, a problem we don't have). I'm looking for an explanation of the traffic pattern in an environment that functions as expected. So the question is: why would I see a high number of unsolicited ARP replies? Is this a mechanism VMware uses for some purpose? What is it? Is there an alternative?

    EDIT: Quick diagram:

        [esxi]--[switch vlan]--[inline IDS]--[fw]--(rest of network)

    The IDS is complaining about these unsolicited ARPs. Several IDS vendors trigger on ARP replies without a prior request, or on ARP replies that have a target IP of 0.0.0.0. The target MAC in these replies is the VLAN interface on the switch. Capture points:

    - The IDS grabs the offending packets.
    - The FW can see the same ones.
    - A VM on the ESXi host does not see these, although there is an ARP request for a specific IP on the ESXi host that has source_ip=0.0.0.0 and source_mac=[switch vlan interface].

    I can't share the captures, unfortunately. Really I'm interested in finding out if this is normal for an ESXi deployment.
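
    A hedged way to quantify and isolate exactly these frames at any of the capture points (standard tcpdump filter syntax; the byte offsets follow the ARP header layout, with the opcode at offset 6 and the target IP at offset 24):

        # ARP replies (opcode 2) whose target IP is 0.0.0.0:
        tcpdump -n -e -i eth0 'arp[6:2] = 2 and arp[24:4] = 0'
        # tallying the sender MACs over a few minutes shows whether one VM,
        # many VMs, or the switch SVI itself originates them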

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where one of the two replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening: the clients should fail over to the brick that is still online, but this hasn't been the case. I suspect that this is due to a configuration issue. Here is a description of the system:

    - 2 gluster servers on dedicated hardware (gfs0, gfs1)
    - 8 client servers on VMs (client1, client2, client3, ..., client8)

    Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs0
        end-volume

        volume datavol-client-1
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs1
        end-volume

        volume datavol-replicate-0
          type cluster/replicate
          subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
          type cluster/distribute
          subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
          type performance/write-behind
          subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
          type performance/read-ahead
          subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
          type performance/io-cache
          subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
          type performance/quick-read
          subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
          type performance/md-cache
          subvolumes datavol-quick-read
        end-volume

        volume datavol
          type debug/io-stats
          option count-fop-hits on
          option latency-measurement on
          subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.
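
    Some hedged checks (the gluster CLI on the servers and the native FUSE client on the VMs assumed):

        gluster volume info datavol    # on gfs0: confirm Type: Replicate, both bricks listed
        gluster peer status
        # on a client, the mount log (named after the mount point) records why
        # the mount dropped:
        tail -f /var/log/glusterfs/data.log
        # with cluster/replicate the client holds connections to BOTH bricks at
        # once; losing one should cause at most a network.ping-timeout pause
        # (42s by default), not an unmount. The window can be shortened to test:
        gluster volume set datavol network.ping-timeout 10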

  • What is the reason for this DNSSEC validation failure of dnsviz.net?

    - by grifferz
    On trying to resolve dnsviz.net from a host using an Unbound resolver that is configured to do DNSSEC validation, the result is "no servers could be reached":

        $ dig -t soa dnsviz.net

        ; <<>> DiG 9.6-ESV-R4 <<>> -t soa dnsviz.net
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    Nothing is logged by Unbound to suggest why this is the case. Here is the /etc/unbound/unbound.conf:

        server:
          verbosity: 1
          interface: 192.168.0.8
          interface: 127.0.0.1
          interface: ::0
          access-control: 0.0.0.0/0 refuse
          access-control: ::0/0 refuse
          access-control: 127.0.0.0/8 allow_snoop
          access-control: 192.168.0.0/16 allow_snoop
          chroot: ""
          auto-trust-anchor-file: "/etc/unbound/root.key"
          val-log-level: 2

        python:

        remote-control:
          control-enable: yes

    If I add:

        module-config: "iterator"

    (thus disabling DNSSEC validation) then I am able to resolve this host normally. The domain and its DNSSEC check out fine according to http://dnscheck.iis.se/, so there must be something wrong with my resolver configuration. What is it, and how do I go about debugging that?
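
    Hedged debugging steps (unbound-host and unbound-control ship with Unbound; drill comes with ldns). Signed responses for a DNSSEC-heavy zone are large, so a path that drops big EDNS/UDP answers is a classic culprit alongside a stale trust anchor.

        # validate using the same config unbound runs with, verbosely:
        unbound-host -C /etc/unbound/unbound.conf -v -t soa dnsviz.net
        # temporarily log the full validation path (remote-control is enabled):
        unbound-control verbosity 4
        # does the chain itself verify from this network?
        drill -TD dnsviz.net soa
        # do large answers traverse your path at all? (tests the network, not unbound)
        dig +dnssec +bufsize=4096 soa dnsviz.net @8.8.8.8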

  • How to get http requests details in a tcpdump?

    - by tucson
    I am trying to get a tcpdump trace of some HTTP requests. Here is what I got so far (I replaced the real IP addresses with REMOTE and LOCAL):

        C:\>Windump -na -i 3 ip host REMOTE and ip src LOCAL and tcp port 80
        Windump: listening on \Device\NPF_{8056BE5E-BDBB-44E6-B492-9274B410AD66}
        13:13:34.985460 IP LOCAL.4261 > REMOTE.80: . 1784894764:1784894765(1) ack 1268208398 win 65535
        13:13:38.589175 IP LOCAL.4302 > REMOTE.80: F 3708464308:3708464308(0) ack 982485614 win 65535
        13:13:38.589285 IP LOCAL.4303 > REMOTE.80: F 890175362:890175362(0) ack 2462862919 win 65535
        13:13:38.589330 IP LOCAL.4304 > REMOTE.80: F 1838079178:1838079178(0) ack 156173959 win 65535
        13:13:38.589374 IP LOCAL.4305 > REMOTE.80: F 3952718843:3952718843(0) ack 2209231545 win 65535
        13:13:38.589413 IP LOCAL.4306 > REMOTE.80: F 446105750:446105750(0) ack 3141849979 win 65535
        13:13:38.590265 IP LOCAL.4302 > REMOTE.80: . ack 2 win 65535
        13:13:38.590403 IP LOCAL.4304 > REMOTE.80: . ack 2 win 65535
        13:13:38.590429 IP LOCAL.4303 > REMOTE.80: . ack 2 win 65535
        13:13:38.590484 IP LOCAL.4305 > REMOTE.80: . ack 2 win 65535
        13:13:38.590514 IP LOCAL.4306 > REMOTE.80: . ack 2 win 65535

    But I do not get the following level of details:

        Request URL: http://domain.com/index.php
        Request Method: POST
        Status Code: 200 OK

        POST /index.php HTTP/1.1
        Host: domain.com
        Connection: keep-alive
        Content-Length: 151
        Cache-Control: max-age=0
        etc

    How can I get this level of data?
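
    A hedged sketch (WinDump takes the same flags as tcpdump; -A needs a build based on tcpdump 3.8 or later): the trace above only prints TCP headers because the default snap length truncates each packet before the HTTP payload.

        REM -s 0 captures entire packets instead of the first few dozen bytes;
        REM -A prints payloads as ASCII, making request lines, headers and POST
        REM bodies readable (-X prints hex+ASCII instead):
        Windump -na -i 3 -s 0 -A ip host REMOTE and tcp port 80
        REM or write a full capture and use Wireshark's "Follow TCP Stream":
        Windump -na -i 3 -s 0 -w http.pcap ip host REMOTE and tcp port 80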

  • SSH Keys Authentication keeps asking for password

    - by Rhyuk
    I'm trying to set up access from ServerA (SunOS) to ServerB (some custom Linux with keyboard-interactive login) with SSH keys. As a proof of concept I was able to do it between two virtual machines. Now in my real-life scenario it isn't working. I created the keys on ServerA, copied them to ServerB, and chmod'd the .ssh folders to 700 on both ServerA and ServerB. Here is the log of what I get:

        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: Peer sent proposed langtags, ctos:
        debug1: Peer sent proposed langtags, stoc:
        debug1: We proposed langtags, ctos: en-US
        debug1: We proposed langtags, stoc: en-US
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: dh_gen_key: priv key bits set: 125/256
        debug1: bits set: 1039/2048
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'XXX.XXX.XXX.XXX' is known and matches the RSA host key.
        debug1: Found key in /XXX/.ssh/known_hosts:1
        debug1: bits set: 1061/2048
        debug1: ssh_rsa_verify: signature correct
        debug1: newkeys: mode 1
        debug1: set_newkeys: setting new keys for 'out' mode
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: newkeys: mode 0
        debug1: set_newkeys: setting new keys for 'in' mode
        debug1: SSH2_MSG_NEWKEYS received
        debug1: done: ssh_kex2.
        debug1: send SSH2_MSG_SERVICE_REQUEST
        debug1: got SSH2_MSG_SERVICE_ACCEPT
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /XXXX/.ssh/identity
        debug1: Trying public key: /xxx/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /xxx/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:
        Password:

    ServerB has pretty limited options since it's a custom proprietary Linux. What could be happening?

    EDIT WITH ANSWER: The problem was that I didn't have those settings enabled in sshd_config (refer to the accepted answer), AND that while pasting the key from ServerA to ServerB it would be interpreted as 3 separate lines. What I did, in case you can't use ssh-copy-id like I couldn't: paste the first line of your key into ServerB's authorized_keys file WITHOUT the last 2 characters, then type the missing characters from line 1 yourself plus the first one from line 2; this prevents a "new line" being added between the first and second lines of the key. Repeat with the third line.
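
    A hedged way to sidestep the paste mangling entirely: an authorized_keys entry must be a single line, so produce a newline-free copy of the public key before transferring it (OpenSSH client tools on ServerA assumed).

        # on ServerA: print the public key as one guaranteed line
        tr -d '\n' < ~/.ssh/id_rsa.pub; echo
        # on ServerB: append it and fix permissions (sshd is picky about both)
        cat >> ~/.ssh/authorized_keys    # paste the single line, then Ctrl-D
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
        # the sshd_config settings the answer alludes to are typically:
        #   PubkeyAuthentication yes
        #   AuthorizedKeysFile .ssh/authorized_keys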

  • OpenVPN access to a private network

    - by Gior312
    There are many similar topics about my issue; however, I cannot figure out a solution for myself.

    There are three hosts: A, without a routable address but with Internet access; server S, with a routable Internet address; and host B, behind NAT in a private network. What I've managed to do is an OpenVPN connection between A and B via S. Everything works fine so far, according to this manual: VPN Setup.

    What I want to do is to connect A to B's private network 10.A.B.x. I tried this manual but had no luck. So A has the VPN address 10.9.0.10, B's VPN address is 10.9.0.6, and B's private network is 10.20.20.0/24. When, at the server, I try to add a route to B's private network like this:

        sudo route add 10.20.20.0 netmask 255.255.255.0 gw 10.9.0.6 dev tun0

    it says:

        route: netmask 000000ff doesn't make sense with host route

    but I don't know how to tell the server to look for a private network in a different way. Do you know how I can make it right?
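
    A hedged sketch of the fix: without -net, route parses the address as a host route, which is exactly what the netmask complaint means. And with OpenVPN a kernel route alone is not enough; the server must also map the subnet to client B via iroute.

        # correct route syntax (either form):
        route add -net 10.20.20.0 netmask 255.255.255.0 gw 10.9.0.6 dev tun0
        ip route add 10.20.20.0/24 via 10.9.0.6 dev tun0
        # in the OpenVPN server config on S:
        #   route 10.20.20.0 255.255.255.0
        #   client-config-dir ccd
        # in ccd/<client-B-common-name>:
        #   iroute 10.20.20.0 255.255.255.0
        # and on B, allow forwarding between tun0 and the LAN:
        sysctl -w net.ipv4.ip_forward=1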

  • Virtual machine lost after power cut

    - by dannymcc
    We have just had a power issue and our ESX (ESXi 4.1.0) host lost power and then rebooted. All but one of the virtual servers have rebooted with no problem; however, one of them refuses to power up. I try to power it on and I get the following error:

        File <unspecified filename> was not found
        Reason: The system cannot find the file specified.
        Cannot open the disk '/vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86.vmdk' or one of the snapshot disks it depends on.
        VMware ESX cannot find the virtual disk "/vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86.vmdk". Verify the path is valid and try again.

    I have logged into the ESX host to see if the file is there and have found only the following file that matches the filename:

        /vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack/turnkey-lamp-11.3-lucid-x86-s001.vmdk

    I notice that the above file has '-s001' after the filename. Is this recoverable? Any help or advice is greatly appreciated!

    EDIT: Running ls -l on the directory that contains the file shows this:

        drwxr-xr-t 1 root root 1680 Feb 9 09:49 4e03076e-90834647-b846-001185c38f42

    The datastore browser's file listing, and the matching file in a different directory, were shown in screenshots (not reproduced in this listing).
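
    A hedged note on the '-s001' file: that is a data extent; what the error complains about is the small plain-text descriptor .vmdk that lists the extents. If the data survived, the descriptor can usually be recreated; VMware's KB article on recreating a missing virtual machine disk descriptor file (KB 1002511, if I recall the number correctly) walks through it. First steps from the ESXi shell, with paths taken from the error message:

        cd /vmfs/volumes/4e03076e-90834647-b846-001185c38f42/LAMP-Stack
        ls -lh    # confirm exactly which files survived, and the extent's size
        # "-s001" split/sparse extents are the hosted (Workstation/Fusion) disk
        # format; once a descriptor exists again, clone to a native ESXi disk:
        vmkfstools -i turnkey-lamp-11.3-lucid-x86.vmdk -d thin recovered.vmdk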

  • Where do these mysterious DNS lookups come from and why are they slow?

    - by Hongli
    I have recently obtained a new dedicated server which I'm now setting up. It's running 64-bit Debian 6.0. I have cloned a fairly large git repository (177 MB including working files) onto this server. Switching to a different branch is very, very slow: on my laptop it takes 1-2 seconds, on this server it can take half a minute. After some investigation it turns out to be some kind of DNS timeout. Here's an exhibit from strace -s 128 git checkout release:

        stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=132, ...}) = 0
        socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5
        connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("213.133.99.99")}, 16) = 0
        poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])
        sendto(5, "\235\333\1\0\0\1\0\0\0\0\0\0\35Debian-60-squeeze-64-minimal\n\17happyponies\3com\0\0\1\0\1", 67, MSG_NOSIGNAL, NULL, 0) = 67
        poll([{fd=5, events=POLLIN}], 1, 5000) = 0 (Timeout)

    This snippet repeats several times per 'git checkout' call. My server's hostname was originally Debian-60-squeeze-64-minimal. I had changed it to shell.happyponies.com by running hostname shell.happyponies.com, editing /etc/hostname and rebooting the server.

    I don't understand the DNS protocol, but it looks like Git is trying to look up the IP for Debian-60-squeeze-64-minimal as well as for happyponies.com. Why does Debian-60-squeeze-64-minimal come back even though I've already changed the hostname? Why does Git perform DNS lookups at all? Why are these lookups so slow? I've already verified that all DNS servers in /etc/resolv.conf are up and responding slowly, yet Git's own lookups time out. Changing the hostname back to Debian-60-squeeze-64-minimal seems to fix the slowness.

    Basically I just want to fix whatever DNS issues my server has, because I'm sure they will cause more problems than just slowing down git checkout. But I'm not sure what the problem exactly is and what these symptoms mean.
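
    A hedged reading of the strace, plus a local fix to test: the query is for the machine's own hostname with the happyponies.com search domain appended by the resolver. Git asks glibc for the local host's fully qualified name (for its ident strings), and if that name isn't answerable from /etc/hosts, every lookup goes out to DNS and waits out the 5000 ms poll timeout shown above. Mapping the hostname locally should make checkouts fast regardless of remote DNS health.

        hostname --fqdn    # what does glibc resolve the local name to right now?
        # map both the new and the old name in /etc/hosts (Debian convention
        # uses 127.0.1.1 for the machine's own name):
        echo '127.0.1.1 shell.happyponies.com shell Debian-60-squeeze-64-minimal' >> /etc/hosts
        getent hosts shell.happyponies.com    # should now answer instantly, no DNS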

  • ldap-authentication without sambaSamAccount on linux smb/cifs server (e.g. samba)

    - by umlaeute
    I'm currently running samba-3.5.6 on a Debian/wheezy host to act as the fileserver for our department's w32 clients. Authentication is done via OpenLDAP, where each user DN has an objectclass:sambaSamAccount that holds the SMB credentials and an objectclass:shadowAccount/posixAccount for "ordinary" authentication (e.g. PAM, Apache, ...).

    Now we would like to dump our department's user DB and instead authenticate against the user DB of our upstream organisation. Those user accounts are managed in a Novell eDirectory, which I can already use to authenticate via PAM (e.g. for SSH logins, on another host). Our upstream organisation provides SMB/CIFS-based access (via some Novell service) to some directories, which I can access from my Linux client via smbclient.

    What I currently can't manage to do is use the upstream LDAP (the eDirectory) to authenticate our institution's Samba. I configured my Samba server to auth against the upstream LDAP server:

        passdb backend = ldapsam:ldaps://ldap.example.com

    but when I try to authenticate a user, I get:

        $ smbclient -U USER \\\\SMBSERVER\\test
        Enter USER's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.6]
        tree connect failed: NT_STATUS_ACCESS_DENIED

    The log files show:

        [2012/10/02 09:53:47.692987, 0] passdb/secrets.c:350(fetch_ldap_pw)
          fetch_ldap_pw: neither ldap secret retrieved!
        [2012/10/02 09:53:47.693131, 0] lib/smbldap.c:1180(smbldap_connect_system)
          ldap_connect_system: Failed to retrieve password from secrets.tdb

    I see two problems I'm having:

    1. I don't have any administrator password for the upstream LDAP (and most likely, they won't give me one). I only want to authenticate my users; write access is not needed at all. Can I get away with that?
    2. The upstream LDAP does not have any Samba-related attributes in the DB. I was under the impression that those attributes are required for Samba to authenticate, as SMB/CIFS uses some trivial hashing which is not compatible with the usual posixAccount hashes. Is there a way for my department's Samba server to authenticate against such an LDAP server?
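
    A hedged note on the two log lines: ldapsam needs a bind DN whose password Samba reads from secrets.tdb, which is exactly what the log says is missing; storing one is a single command. That alone will not solve problem 2, though: SMB authentication verifies NT hashes, which plain posixAccount entries don't carry, so ldapsam against an eDirectory without sambaSamAccount attributes cannot work by itself (the DN below is a placeholder).

        # smb.conf needs a read-capable bind DN alongside the passdb line:
        #   ldap admin dn = cn=reader,o=upstream
        # then store that DN's password into secrets.tdb:
        smbpasswd -w 'bind-password-here'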

  • Connecting to SVN server from a computer outside of my LAN

    - by Tom Auger
    I've got a Fedora server running Subversion and svnserve on port 3690. My repo is at /var/svn/project_name. I have my router forwarding port 3690 to the local server (as well as ports 80, 21, 22 and a few others).

    When I connect locally to svn://192.168.0.2/project_name it works great. When I connect from an external server to svn://my.static.ip/project_name I get a timeout connecting to the host. However, if I browse to http://my.static.ip there is no problem, so port forwarding is working (at least for port 80). I don't want to run WebDAV or svn via HTTP/S; I'd like it to work using svnserve, as documented in the svn book. What have I misconfigured?

    EDIT: Here is the last part of my iptables dump. I'm not an expert, but it looks OK to me:

        ACCEPT tcp  -- anywhere anywhere state NEW tcp dpt:svn
        ACCEPT udp  -- anywhere anywhere state NEW udp dpt:svn
        ACCEPT tcp  -- anywhere anywhere state NEW tcp dpts:6680:6699
        ACCEPT udp  -- anywhere anywhere state NEW udp dpts:6680:6699
        REJECT all  -- anywhere anywhere reject-with icmp-host-prohibited

    EDIT 2: Results from sudo netstat -tulpn:

        tcp 0 0 0.0.0.0:3690 0.0.0.0:* LISTEN 1455/svnserve
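
    Hedged isolation steps: netstat already shows svnserve bound to all interfaces, so the remaining suspects are the router's forward rule and the test method itself.

        # from a genuinely external host, test the raw TCP path first:
        nc -vz my.static.ip 3690    # or: telnet my.static.ip 3690
        # if that times out while port 80 connects, the 3690 forward rule (or an
        # ISP filter) is the problem, not svn. Also beware of testing the public
        # IP from INSIDE the LAN: many consumer routers don't do NAT loopback,
        # so svn://my.static.ip can fail from inside even when it works outside.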

  • iis not listening on port 80

    - by Holian
    Hello. We have Server 2003 and ISA 2004 with IIS 6 on the same machine. Everything worked well until yesterday, when we tried to make a new rule in ISA... but this is a long story...

    Unfortunately something happened to our intranet site. Our site is on port 80, but if we try to open it on the client machines we get an error page (the error page is our provider's): 403 Forbidden; "Remote host not listening, the remote host is not prepared to accept the connection request."

    On the server I can open the site on port 80. If I change the port number in IIS and try to open the site with that port, it works well. I tried shutting down IIS and starting Apache with a simple page: on the server it works well, but on the clients the problem is the same, so I think this is not an IIS-related problem. In ISA we have a web publishing rule, with port 80, no auth.

    I'm pulling out my hair, please help.
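
    A hedged check for the usual culprit: with ISA 2004 on the same box, ISA's firewall/web proxy service tends to own port 80 on the external interface, so inbound requests reach ISA (which answers according to its publishing rules) rather than IIS or Apache; that would match "works on the server, fails from clients" for both web servers.

        REM on the server (cmd.exe): which process owns port 80?
        netstat -ano | findstr :80
        REM map the PID (1234 is a placeholder from the line above):
        tasklist /fi "pid eq 1234"
        REM wspsrv.exe = ISA firewall service; inetinfo.exe / w3wp.exe = IIS.
        REM if ISA owns :80, the edited web publishing rule is what clients hit.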

  • unicorn and nginx, went wrong

    - by achempion
    I deployed my app via Capistrano. The deploy completes, but when I start nginx and open my site in the browser I see "We're sorry, but something went wrong.", which is bad. I use unicorn; see my configs: https://gist.github.com/3904032

    I tried starting the server via rails s -e production and it works! I think the error may be because I can't restart nginx:

        root@li272-194:~# /etc/init.d/nginx restart
        Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        configuration file /etc/nginx/nginx.conf test is successful
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: still could not bind()

    Any ideas? nginx log:

        2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194"
        2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"
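
    Hedged first steps, matching the two log lines: an old nginx master is still holding port 80 (hence the bind errors, and why config changes never take effect), and nginx cannot find unicorn's socket.

        # who owns :80? kill the stale master, then start cleanly:
        netstat -tulpn | grep :80
        kill 1234    # placeholder PID taken from the output above
        /etc/init.d/nginx start
        # does unicorn actually create the socket nginx expects?
        ls -l /srv/zarcon/shared/unicorn.sock
        # if it's missing, check that unicorn's "listen" line uses that exact
        # path and that unicorn was started in the production environment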

  • nginx redirect what is not coming from load balancing

    - by dawez
    I have nginx on SERVER1 acting as a load balancer between SERVER1 and SERVER2. On SERVER1 the upstreams for the load balancing are defined as:

        upstream de.server.com {
            # similar upstreams defined also for other languages
            # SELF, SERVER1
            server 127.0.0.1:8082 weight=3 max_fails=3 fail_timeout=2;
            # other, SERVER2
            server otherserverip:8082 max_fails=3 fail_timeout=2;
        }

    The load-balancing config on SERVER1 is this one:

        server {
            listen 80;
            server_name ~^(?<LANG>de|es|fr)\.server\.com;
            location / {
                proxy_pass http://$LANG.server.com;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                # trying to pass a variable in the header to SERVER2
                proxy_set_header Is-From-Load-Balancer 1;
            }
        }

    Then on SERVER2 I have:

        server {
            listen 8082;
            server_name localhost;
            root /var/www/server.com/public;
            # test output values
            add_header testloadbalancer $http_is_from_load_balancer;
            add_header testloadbalancer2 not_load_bal;
            ## other stuff here to process the request
        }

    I can see that the "testloadbalancer" response header is set to 1 when the request comes through the load balancer, and it is not present on a direct access (SERVER2:8082). I would like to bounce back to SERVER1 all the direct requests that are sent to SERVER2, but keep the ones from the load balancer. So this should forbid direct access to SERVER2:8082 and redirect it to SERVER1:80.
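
    A hedged sketch for SERVER2's vhost: since nginx already exposes the custom header as $http_is_from_load_balancer (as the add_header test shows), direct hits can be redirected back to the balancer. Note that a header is trivially forgeable; matching on the source address is the sturdier variant.

        server {
            listen 8082;
            # ... existing directives ...
            # bounce anything that didn't come through the balancer:
            if ($http_is_from_load_balancer != 1) {
                return 301 http://de.server.com$request_uri;
            }
        }
        # alternative without the marker header: only the balancer may connect
        #   allow SERVER1_IP;    # placeholder address
        #   deny all;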

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports.

    Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments and something like 2h, 3h, ... would be preferable. However, I cannot seem to change the time interval in the Customize Performance Chart dialog (screenshot not reproduced in this listing). I am also running on a trial key at the moment.

    How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet, as these are long-running experiments)? Any help, or pointers to documentation which deals with the above (I've already looked but did not find much), would be greatly appreciated.

  • Some Apps don't start on Windows 8 Release Preview

    - by Exa
    I recently installed the Release Preview of Windows 8 in a virtual machine. Some apps do not work: when I open them (by clicking their tile on the Start screen) I see a splash screen and nothing else happens. Sometimes the app crashes after 30 seconds, sometimes it just keeps on loading. Good examples are the "Map" app from Windows 8 and the app "Cookbook" by Bewise.

    I installed Cookbook, and when I had a look at the Task Manager I saw that it was the 32-bit version running, but I have an x64 Windows 8... Could this be a problem? Shouldn't the Windows Store download the correct version?

    This is the setup of my virtual machine:

    - Windows 8 Release Preview x64
    - Oracle VirtualBox
    - 4 of 8 cores from the host system
    - 8 of 16 GB RAM from the host system
    - 256 MB graphics memory
    - guest additions installed
    - resolution 1920 x 1080

    Do you need further information? Unfortunately there is no error message... I just see the start screen of the app with its logo, and it keeps loading but nothing happens. Other apps (like Mail, Video, Social, etc.) work fine.

  • Automatic o/s reset on a dedicated internet browsing Windows 7 pc.

    - by camelCase
    I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only pc on a home network. My original plan was to install one virtual PC for family browsing, another for remote web based server administration and ban browser use from the host Windows 7 o/s. The idea was that I could recover to a fresh VHD image once a week to eliminate any build up of malware inside the browser VMs. However now I am looking for alternative solutions since the Intel Atom cpu does not have hardware VT support which Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host o/s wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery would be browser bookmarks but these could be exported to remote service. Edit 1: I am thinking the o/s reset could be done via some disk image recovery process. Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome O/S. I have just seen a video of the Google Chrome o/s booting off a usb pen drive in seconds.

  • 400 error with nginx subdomains over https

    - by aquavitae
    Not sure what I'm doing wrong, but I'm trying to serve gunicorn/django through nginx using only HTTPS. Here is my nginx configuration:

        upstream app_server {
            server unix:/srv/django/app/run/gunicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443;
            server_name app.mydomain.com;

            ssl on;
            ssl_certificate /etc/nginx/ssl/nginx.crt;
            ssl_certificate_key /etc/nginx/ssl/nginx.key;

            client_max_body_size 4G;
            access_log /srv/django/app/logs/nginx-access.log;
            error_log /srv/django/app/logs/nginx-error.log;

            location /static/ {
                alias /srv/django/app/data/static/;
            }
            location /media/ {
                alias /wrv/django/app/data/media/;
            }
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $http_host;
                proxy_pass http://app_server;
            }
        }

    I get a 400 error on app.mydomain.com, but the app is published on mydomain.com. Is there an error in my configuration?
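
    Hedged checks: a bare "400 Bad Request" on an SSL vhost usually means either plain HTTP arrived on the HTTPS port or the Host header doesn't match what the backend expects; the error log and a verbose request narrow it down quickly. (Django 1.5+ also returns 400 for Host values missing from ALLOWED_HOSTS, worth checking if that applies here.)

        tail -f /srv/django/app/logs/nginx-error.log
        curl -vk https://app.mydomain.com/    # -k allows a self-signed cert
        curl -v  http://app.mydomain.com/     # should answer with the 301
        # confirm no other vhost wins the match for :443 or the bare domain:
        grep -rn 'listen\|server_name' /etc/nginx/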
