Search Results

Search found 52112 results on 2085 pages for 'name length'.

  • Require and Include Not Functioning with Nginx FPM/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP runs under each individual user, and I've set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include, and include_once do not function; or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my nginx server block:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The problem, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy (file doesn't exist)          --> index.php?name=$1
        domain.com/admin/anyfile.php (file DOES exist) --> admin/anyfile.php?$args
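
    One possible direction, sketched but untested against this setup: nginx's try_files can route missing files to the front controller directly, which avoids the if/rewrite inside the PHP location (if-inside-location is a known source of surprising behavior). The port and parameter name below are carried over from the config above.

        location / {
            # Existing files and directories are served as-is; anything
            # else becomes a pretty-URL hit on the front controller.
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            # Real .php files (e.g. /admin/anyfile.php) are executed;
            # missing ones fall back to the front controller.
            try_files $uri /index.php?name=$uri;
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }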

  • Sendmail not working in CentOS 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4 box, but it does not work. My knowledge about servers is quite limited, so I hope someone can help me. Here is what I did.

    First I tried to send an email using the "mail" command, but it was not in the OS, so I installed it:

        # yum install mailx

    After that, I tried sending an email using the "mail" command, but it did not send anything. I checked on the internet and realized I needed an e-mail server like sendmail, so I installed it:

        # yum install sendmail sendmail-cf sendmail-doc sendmail-devel

    After that, I configured it following some tutorials. First, the sendmail.mc file:

        # vi /etc/mail/sendmail.mc

    I commented out the DAEMON_OPTIONS line:

        BEFORE: DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl
        AFTER:  dnl DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl

    I checked that the following lines are correct:

        FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
        ...
        FEATURE(use_cw_file)dnl
        ...
        FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl

    Then I updated sendmail.cf:

        # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

    I opened port 25 by adding the proper line to the iptables file:

        # vi /etc/sysconfig/iptables
        -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT

    And restarted iptables and sendmail:

        # service iptables restart
        # service sendmail restart

    So I thought that would be OK, but then I tried:

        # mail '[email protected]'
        Subject: test subject
        test content
        .

    I checked the mail log:

        # vi /var/log/maillog

    And this is what I found:

        Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578:
        to=<[email protected]>, ctladdr=<[email protected]> (0/0),
        delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp, pri=2460500,
        relay=alt4.gmail-smtp-in.l.google.com., dsn=4.0.0,
        stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.

    I do not understand why there is a connection timeout. Am I missing something? Can anyone help me, please? Thank you.
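
    A hedged diagnostic sketch: a deferred "Connection timed out" to Gmail's MX hosts is very often outbound port 25 being blocked upstream (many ISPs and cloud providers do this), not a sendmail misconfiguration. Testing the connection directly from the server distinguishes the two cases:

        # If this also hangs, the block is on the network path, and the
        # usual fix is to relay through the provider's smarthost instead.
        telnet alt4.gmail-smtp-in.l.google.com 25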

  • Arrays in Puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing the files:

        dir1/
            fileA
            fileB
        dir2/
            fileA
            fileB
            fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to postfix the file names with the directory name:

        $file_names   = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
            path   => 'dir1',
            ensure => present,
        }

        file { $file_names_2:
            path   => 'dir2',
            ensure => present,
        }

    This won't work, because the file resource titles clash. So I need to append e.g. the dir name to the file title; however, this causes the array of files to be concatenated and not treated as multiple files... arghh:

        file { "${file_names}-dir1":
            path   => 'dir1',
            ensure => present,
        }

        file { "${file_names_2}-dir2":
            path   => 'dir2',
            ensure => present,
        }

    How do I solve this problem without repeating the file resource itself? Thanks
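
    A minimal sketch of one way around the title clash (hedged; "/tmp" is a hypothetical prefix, since Puppet requires absolute paths). A file resource's path defaults to its title, so embedding the directory in the title keeps every title unique without repeating the resource body:

        $dir1_files = ['/tmp/dir1/fileA', '/tmp/dir1/fileB']
        $dir2_files = ['/tmp/dir2/fileA', '/tmp/dir2/fileB', '/tmp/dir2/fileC']

        file { ['/tmp/dir1', '/tmp/dir2']:
            ensure => directory,
        }

        # Titles double as paths, so no 'path' parameter is needed and
        # no two resources share a title.
        file { $dir1_files:
            ensure => present,
        }

        file { $dir2_files:
            ensure => present,
        }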

  • PEAP validating a secondary domain suffix

    - by sam
    The title is probably a little bit confusing; let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately, someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by our company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename. We already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporate managed clients, not BYOD (smartphones, tablets, laptops...). Is there a way to add a secondary domain name like "company2.ch", issue a public certificate, join the RADIUS server to that secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own domain company2.ch, connected with some kind of trust to the company.ch domain? Sorry, I'm not a client/server guy... hopefully you get my drift!?

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page?

    Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible.

    I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. If I just issue the second wget command, then that page (the same file, really, under an alternate name) opens up fine. Fixing that would also help streamline the process.
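
    A hedged sketch of the usual answer: wget's -R/--reject option takes a comma-separated list of file name patterns to skip, so the known boilerplate images can be excluded up front instead of deleted afterwards (as I read the manual, a file must match the -A list and not match the -R list to be kept):

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    The reject list is truncated here for readability; the remaining images from the rm line would be appended the same way.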

  • SSL Connection Error

    - by toffee.beanns
    I have purchased a Comodo SSL cert and have submitted the Certificate Signing Request (CSR) generated by my server to the SSL management site. It returned three files:

        AddTrustExternalCARoot.crt
        PositiveSSLCA2.crt
        www_mydomainname_com.crt

    I have uploaded them to my /etc/ssl/ssl-certs folder, updated my virtual host in sites-available, and restarted accordingly:

        NameVirtualHost 107.167.120.195:80   #sample ip address
        NameVirtualHost 107.167.120.195:443  #sample ip address

        .........
        #normal http virtual host (working well)

        <VirtualHost 107.167.120.195:443>
            ServerAdmin [email protected]
            ServerName mydomainname.com
            ServerAlias www.mydomainname.com
            DocumentRoot /var/www/mydomainname
            SSLEngine on
            SSLCertificateFile /etc/ssl/ssl-certs/www_mydomainname.com.crt
            SSLCertificateKeyFile /etc/ssl/ssl-certs/server.key
            SSLCertificateChainFile /etc/ssl/ssl-certs/PositiveSSLCA2.crt
        </VirtualHost>

    I have also run 'a2enmod ssl' and it's enabled. This is the error I get when I access the page over https in Chrome:

        SSL connection error
        Error code: ERR_SSL_PROTOCOL_ERROR
        Unable to make a secure connection to the server. This may be a
        problem with the server, or it may be requiring a client
        authentication certificate that you don't have.

    I have also checked my Apache log files, and there seems to be an error saying that the Common Name (CN) is not the same as the server name:

        RSA server certificate CommonName (CN) `www.mydomainname.com' does NOT match server name!?

    and

        Invalid method in request \x16\x03\x01

    What should I do?
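
    A few checks worth running (a sketch; paths follow the config above). The "Invalid method in request \x16\x03\x01" line usually means a TLS handshake arrived at a vhost speaking plain HTTP, i.e. Apache may not actually be serving this <VirtualHost> for port 443 (check that "Listen 443" is present, usually via ports.conf). Also note that the file name in SSLCertificateFile (www_mydomainname.com.crt) differs from the file the CA returned (www_mydomainname_com.crt), which is worth double-checking. openssl can confirm what is really served and whether the cert and key belong together:

        # Which certificate (and CN) is actually being served?
        openssl s_client -connect www.mydomainname.com:443 < /dev/null | openssl x509 -noout -subject

        # Do the certificate and private key share the same modulus?
        openssl x509 -noout -modulus -in /etc/ssl/ssl-certs/www_mydomainname_com.crt | openssl md5
        openssl rsa  -noout -modulus -in /etc/ssl/ssl-certs/server.key | openssl md5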

  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both tiers run behind IIS. The application is NTLM-capable when IIS is configured to authenticate via Kerberos, and it has been working so far without a glitch. Now I'm trying to bring in two F5 switches, one in front of the web servers and another in front of the application servers. The two F5 instances (say IPs 185 and 186) are sitting on a Linux host. F5-to-F5 traffic goes through a NAT IP (say IPs 194, 195 and 196). I created DNS entries for all IPs, including the NAT addresses, and ran a SETSPN command to register the IIS service account to be trusted at the HTTP, HOST and domain level. With the web F5 turned on, and with each web server connecting to a cardinal app server, when the user connects to the web F5 domain name, trust works and the user authenticates without a problem. However, when the app load balancer is turned on and the web servers are pointed to the new F5 app domain name, the user gets a 401. The IIS log shows no authenticated username and a 401 status. Wireshark does show the negotiate ticket header being passed into the system. Any ideas or suggestions are much appreciated. Please advise.
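
    One hedged line of investigation: Kerberos tickets are issued for the SPN matching the host name the client actually requests, so the F5 virtual-server name itself needs an SPN registered against the account the IIS application pool runs as. A sketch, with app-f5.example.com and DOMAIN\svc-iis as hypothetical stand-ins:

        setspn -S HTTP/app-f5.example.com DOMAIN\svc-iis

    If the SPN is registered to a different account than the one the pool runs as, the KDC issues a ticket the app servers cannot decrypt, and the request falls through to a failing 401.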

  • Bouncing between a 502 and 503 error

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client who purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, and Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
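
    A quick way to rule the DNS side in or out (a sketch; the domain below is a placeholder, since the real one isn't given): compare what the name actually resolves to against the server IP that works directly. A 502/503 from a host that never reaches your logs usually means the A record still points at the registrar's own infrastructure, which is proxying or parking the name.

        dig +short clientdomain.example A
        dig +short clientdomain.example NS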

  • Authority Information Access local path being ignored

    - by Kevin
    I have a CA set up in Server 2008 R2, and generally it is working, but I can't control the local path/filename it writes its own certificate to for Authority Information Access publishing. Here's a screen shot of the dialog I'm trying to set this on (screenshot not shown). From these settings I would expect to get the file:

        C:\Windows\system32\CertSrv\CertEnroll\DAMNIT.crt

    But instead I get:

        C:\Windows\system32\CertSrv\CertEnroll\SERVER.domain.com_My Issuing Authority(1).crt

    Of course, the actual change shown wouldn't be very useful, but it's illustrative: no matter what path/filename I use, it always ends up in the same place and with the same name. I actually wanted to change the name from <ServerDNSName>_<CaName><CertificateName>.crt to <CaName><CertificateName>.crt, since the latter corresponds to the HTTP URL whereas the former does not. Admittedly, I haven't set up many CAs, so perhaps I'm just deluded as to what this dialog is supposed to be setting, but if so, this is notoriously bad UI design. (Incidentally, I have a couple of other complaints about the same dialog.) What's going on here, and is there some way to get the filename pattern I want?
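
    For inspecting what the CA will actually do, the registry values behind that dialog can be dumped directly (a sketch; certutil ships with the CA role):

        certutil -getreg CA\CACertPublicationURLs

    As I understand it (treat this as a hypothesis to verify, not settled fact), the local file entry in that list only controls where the CA publishes, while the CertEnroll file name pattern for the CA's own certificate is fixed; it is the HTTP URL entries that are meant to be adjusted to match the files, not the other way around.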

  • Enrich a dataset of POIs with OpenStreetMap

    - by zero
    Update: thanks to hints from users (e.g. Oliver Salzburg and slhck) I became aware of gis.stackexchange.com, so I have moved the topic there myself. Please can you, or somebody who has the permission, close this question, since we do not need this topic on two sites? Thanks for your work, and keep up the service here; Stack sites rock.

    I have a list of POIs, some with a full description and some with only a few data entries, like the following:

        6.9441000 50.9242000 [50677] (Ital) Casa di Biase [Köln]
        6.9373600 50.9291800 [50674] (Ital) Al Setaccio [Köln]

    However, I need the full dataset. Can I get this somewhere? If I have all the position data, is it possible to find the rest?

        a. name of the street
        b. name of the town

    So, for example, the data should finally look like this:

        10.5346100 52.1613600 [38300] (Chin) Wanbao Kommissstr.9 [Wolfenbüttel]
        13.2832500 52.4422600 [14167] (Ital) LaPergola Unter den Eichen 84d [Berlin]
        13.3177700 52.5062900 [10625] (Chin) Good Friends Kantstr.30 [Berlin]

    Can I do this with OpenStreetMap? Should I parse OpenStreetMap data? Or use OpenBabel?
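
    What the question describes is reverse geocoding, and OpenStreetMap's Nominatim service exposes exactly that over HTTP (a sketch; note the columns above are longitude then latitude, and Nominatim's usage policy limits request rates, so bulk lookups should be throttled or run against a local Nominatim install):

        curl "https://nominatim.openstreetmap.org/reverse?format=json&lat=50.9242&lon=6.9441"

    The JSON response contains the street and town fields needed to fill out each POI row.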

  • Windows 8 Pro Homegroup Not Allowing Access to Shared Files

    - by Jack Herr
    I have two PCs and a laptop. With all three running Windows 7, Homegroup worked perfectly, with all computers able to access the files on the others exactly as one would expect. I installed Windows 8 Pro, which I downloaded recently from the MSDN website, on the laptop. From the info on the MSDN website, I believe the version downloaded is the official release version, not a release candidate. The install preserved all settings, apps, and files.

    I cannot connect to the Homegroup from the laptop. Well, not exactly: I joined the Homegroup during the install, and later left and rejoined the one advertised as being offered by one of the PCs (not sure why that one was picked and not the other, or both). This Homegroup appears in my "Computer" folder in the file explorer, offering files from that PC, but it excludes documents (only the music, pictures, videos, and playlists folders on the one computer appear). I can actually access those files from the laptop. But the Homegroup folder, which shows all the computers by account and computer name and includes documents from both computers, does not open those folders when clicked. The computers listed by name in the Network folder give me a login prompt that doesn't work: it asks for a username and password, and no combination works. Further, the following error message appears before the login is even attempted: "The system cannot contact a domain controller to service the authentication request. Please try again later." The two Windows 7 PCs continue to share all files, including documents, seamlessly. I can also get to the laptop's shared files, which I shared during setup, from either PC. Any ideas?

  • What is proper relationship between /etc/hosts and DNS A records for a Linux server?

    - by MountainX
    I have an Ubuntu server. It is going to be a web server with a URI of www.example.com. I have a DNS A record pointing www.example.com to the server's IP address. Let's say I pick "trinity" as the hostname for this server. I want to set up the DNS records correctly. I need reverse DNS for www.example.com, so a CNAME for www.example.com doesn't seem appropriate.

    Here's my question: is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?) If so, would the following be a proper /etc/hosts file?

        $ cat /etc/hosts
        127.0.1.1       trinity.local trinity
        99.100.101.102  trinity.example.com trinity www.example.com

    This server is a Linode, and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section; the key line for my question is the last one, about the FQDN needing an A record:

        Update /etc/hosts

        Next, edit your /etc/hosts file to resemble the following example,
        replacing "plato" with your chosen hostname, "example.com" with your
        system's domain name, and "12.34.56.78" with your system's IP address.
        As with the hostname, the domain name part of your FQDN does not
        necessarily need to have any relationship to websites or other
        services hosted on the server (although it may if you wish). As an
        example, you might host "www.something.com" on your server, but the
        system's FQDN might be "mars.somethingelse.com."

        File: /etc/hosts
        127.0.0.1   localhost.localdomain localhost
        12.34.56.78 plato.example.com plato

        The value you assign as your system's FQDN should have an "A" record
        in DNS pointing to your Linode's IP address. For more information on
        configuring DNS, please see our guide on configuring DNS with the
        Linode Manager.
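
    For what it's worth, the finished setup is easy to verify from outside (a sketch using the example names above):

        dig +short www.example.com A         # should print 99.100.101.102
        dig +short trinity.example.com A     # should print 99.100.101.102
        dig +short -x 99.100.101.102         # PTR: pick ONE canonical name here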

  • Rails app returns HTTP 422 for new ServerAlias - Internet Explorer only

    - by Snips
    I have a long-standing Rails app running on Mac OS X (apache2). The set-up uses Apache virtual hosts and Passenger. The Rails app also uses HTTP Basic Authentication. I need to migrate the app from one domain to another, with some overlap during which both domain names are accessible simultaneously. To do this, I've added the new domain name as a ServerAlias of the existing domain name in the Passenger virtual host config. I can now browse the Rails app using both the legacy URL and the new URL from any of Safari, Chrome, Firefox, or Internet Explorer. I can also post updates to the Rails app using Safari, Chrome, or Firefox. All good. Except that attempts to post updates from Internet Explorer result in the Rails app rejecting the update with HTTP 422; the Rails log contains the message:

        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):

    I have other domains and aliases working just fine on this same machine. Any suggestions as to what is causing the Rails app to reject posts from IE would be appreciated.

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this stuff, so maybe the answer will be really obvious. What I need is to be able to send a mail from somewhere else to a Domino user and have it redirected to his account. On the Domino server, I also have hMailServer installed on port 25; I configured Domino to use port 26. I followed these steps to get where I am now:

        - I set the fully qualified Internet host name to "preview.notes".
        - I changed the SMTP Listener task to Enabled, to turn on the
          listener so that the server can receive messages routed via SMTP
          routing.
        - I set up SMTP routing within the local Internet domain
          (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument).
        - I modified the person document to use the [email protected] address.
        - I'm using hMailServer (which has the local "preview.local" domain
          name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Would using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.

  • IIS 7.5 returning 404 for unknown host names

    - by WaldenL
    This just doesn't seem correct to me, so I'm looking for someone to tell me how I've misconfigured IIS... The configuration is IIS 7.5 (2008 R2), without SP1. I have IIS 7.5 configured with several sites. ALL sites have host names defined in their bindings; there is NO site without a host name. However, if I request an unknown host name from the server, IIS (technically Microsoft-HTTPAPI/2.0) returns a 404 error, not a 400. I would expect a 400 (or some other major error) rather than a lowly 404. This causes a problem when I have nginx in front of multiple IIS machines and want to stop a site so nginx takes it out of rotation: since IIS still returns a 404 for the request even when there is no active site for that name, nginx doesn't know the server is dead. NB: IIS returns the 404 regardless of whether there is a matching site that is stopped, or no such site at all. Thoughts? Solutions?

    Additional info: OK, I added a site on a port other than 80 (5000) and then, on a connection to that port, asked for a site that doesn't exist, and I get the expected error 400 (Invalid hostname). So, while IIS isn't listening for generic (no host name) connections on port 80, it would seem that something is. Any ideas how to get HTTP.sys to dump the list of what it's listening for?
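
    HTTP.sys can report its registrations directly (a sketch; both commands are part of the stock netsh http context):

        netsh http show servicestate
        netsh http show urlacl

    The first lists every URL group actually registered with HTTP.sys per request queue, which should reveal whatever wildcard listener on port 80 is answering for unknown host names.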

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it is an errant redirect rule or some other server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even whether they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:

        - Site resolving in browser, both IP and domain name. No problems here.
        - Site not being crawled by Google (gets a 302 it doesn't seem to
          follow); it is not due to robots.txt or meta tags.
        - Ping is not working for the IP address. This is very odd, because
          again, the IP address seems to work fine in the browser.
        - Our thoughts: either a redirect rule issue, a domain pointing issue,
          or possibly some errant code, or some combination of the three.
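
    Two quick observations, with a sketch for checking the redirect. Ping failing while the site loads fine usually just means ICMP is filtered at the firewall, which is common and harmless, so the 302 is the real lead. Fetching the page with a crawler-like client shows where the redirect points (the domain below is a placeholder):

        curl -sIL -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://clientsite.example/

    If the Location header differs depending on the User-Agent, that is the errant server-side rule.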

  • PTR record not valid for all domains

    - by charnley
    We have an issue sending emails to certain domains, namely Time Warner and Cox. Last week, we decommissioned our Exchange 2003 server, and now our Exchange 2010 server is doing all of the transport for our domain. We run our own authoritative name servers, so we are in charge of the DNS and have modified our PTR record to reflect the new server. All mail flow is working except for these two domains. When I telnet on port 25 to the mail servers for Cox and Time Warner, I receive errors. For Cox the error is:

        554... rejected - no rDNS

    And when I telnet to port 25 to the Time Warner mail server, we get this:

        554 5.7.1 - Connection refused. IP name lookup failed for x.x.x.x

    I have run through the outbound SMTP test on the Microsoft Remote Connectivity Analyzer and get 100% successful results. MXToolbox comes up with all successful tests on SMTP as well, showing a correct reverse banner check and no blacklisting. DNSQueries.com shows a valid reverse DNS entry for us as well. Outbound emails to these two domains continue to sit in the queue. Any ideas or advice would be greatly appreciated. Thanks!
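
    Worth checking from the outside (a sketch; the IP and host name below are documentation placeholders): some receivers insist on forward-confirmed reverse DNS, meaning the PTR must name a host whose A record points back at the sending IP, and they may also cache stale PTR data for quite a while after a change.

        dig +short -x 203.0.113.25            # PTR: should name your mail host
        dig +short mail.yourdomain.example A  # and that name should return 203.0.113.25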

  • Can't access one directory via HTTPS + public FQDN

    - by Justin James
    Hello - I have the strangest IIS error that I've ever seen in my life. I have an application/directory on an IIS server that throws an error 500 when accessing ANY of the content in it, including HTML documents, when accessed via HTTPS AND the machine's FQDN. When I access it with "localhost" it works fine. When I added a bogus entry for the NIC's IP in the hosts file, it worked fine. When I access it with the machine's name and HTTP, it works fine. Here's a chart (the machine's name is "lofn.titaniumcrowbar.com"):

        http  - lofn.titaniumcrowbar.com:                  works
        https - lofn.titaniumcrowbar.com:                  broken
        https - localhost:                                 works
        https - temp.titaniumcrowbar.com (in hosts file):  works

    I set up tracing, and I got some useless information: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)". This would make sense, except that it happens when pulling up static content: while the directory may be an "application", all of the content in it is static. Any and all suggestions, no matter how strange, are VERY appreciated. Thanks! J.Ja

  • DNS in a small network with router and AD domain

    - by Felix
    I have a small office network with a router (running OpenWRT), a Windows domain controller (used to be 2008 R2; I just backed it up and upgraded to 2012), about a dozen AD clients (3 servers and Windows workstations) and several non-AD clients (network printer, PBX). The problem is that the clients can't access servers by name, only by IP. I have tried all kinds of permutations. Right now the domain controller runs the DNS server for all desktops, but unless I put an entry in the hosts file, I can only get to a server by IP. I have the router as DHCP server (since not all devices are in AD); and except for the domain controller, all IP addresses, including the "static" ones, are assigned by the router. Most frustrating, some servers sometimes just work! For example, I can often get to the Linux box by name (it is part of the domain using BeyondTrust Integration Services), but I can never get to the SQL Server box. It seems like non-domain devices see more names than domain members... This network should be fairly typical, but I couldn't get any guidance about how to set up the DNS/DHCP services to make all nodes happy. The closest is this question, but it's still different! Thanks
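
    A hedged first diagnostic: check which DNS server each client actually received from DHCP, and whether the DC's zone even contains the missing records (the host and IP below are placeholders):

        ipconfig /all | findstr /C:"DNS Servers"
        nslookup sqlbox.example.local 192.168.1.2   # query the DC's DNS directly

    If clients are handed the router as their DNS server, they only see whatever names the router gleans from DHCP, not the AD zone; pointing the DHCP scope's DNS option at the DC (and letting the DC forward unknown names upstream) is the usual arrangement.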

  • SNMP Access on Ubuntu

    - by javano
    I am trying to use SNMP to monitor a machine both locally and remotely. This is the snmpd.conf (Ubuntu 8.04.1):

        #        sec.name  source   community
        com2sec  readonly  1.2.3.4  nicenandtight
        com2sec  readonly  5.6.7.8  reallysafe

        group MyROGroup v1   readonly
        group MyROGroup v2c  readonly
        group MyROGroup usm  readonly

        view all    included  .1
        view system included  .iso.org.dod.internet.mgmt.mib-2.system

        access MyROGroup "" any noauth exact all none none

        syslocation my house
        syscontact me <[email protected]>

        exec .1.3.6.1.4.1.2021.7890.1 distro /usr/bin/distro
        smuxpeer .1.3.6.1.4.1.674.10892.1
        includeAllDisks 95%

    1.2.3.4 is the local machine's IP, and everything is working locally. 5.6.7.8 is the remote machine, and initially I am just trying to touch snmpd with snmpwalk from the remote machine:

        snmpwalk -v 2c -c reallysafe 1.2.3.4
        Timeout: No Response from 1.2.3.4

    I have added to iptables, as the very first rule:

        -A INPUT -p udp -m udp --dport 161 -j ACCEPT

    With such a loose iptables rule I can't see why I can't even touch snmpd on that Ubuntu machine. There are more specific rules further down the table, but as I couldn't connect, I added the above. tcpdump shows the UDP packets coming in. What could be going wrong here?
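
    One hedged suspect specific to Ubuntu/Debian of that era: the default snmpd startup options bind the daemon to localhost only, so remote packets arrive (visible in tcpdump) but are never answered. Check /etc/default/snmpd for something like:

        SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'

    Removing the trailing 127.0.0.1 (or replacing it with the LAN address) and restarting snmpd makes it listen on all interfaces; netstat -lnu should then show 0.0.0.0:161 rather than 127.0.0.1:161.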

  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports, and I am trying to map ports to PIDs. The problem is that lsof is not telling me which ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an | grep LISTEN | grep 80
        *.80    *.*    0    0    49152    0    LISTEN

    But when I try to map port 80 to a PID, I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific PID is using, I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID  USER  FD   TYPE  DEVICE         SIZE/OFF  NODE  NAME
        libhttpd. 31   0     15u  IPv4  0x6002d970b80  0t0       TCP   *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions:

        lsof v4.77 on Solaris 10 SPARC
        lsof v4.72 on RedHat 4.2
        etc.

    I know that on Linux I can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and does not show me the expected data.
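
    On Solaris 10 specifically, a hedged alternative when lsof comes up empty: the bundled pfiles utility prints the bound address and port for each of a process's file descriptors, so it can do the port-to-PID mapping natively:

        # As root; 31 is the PID from the lsof output above.
        pfiles 31 | grep 'port:'

    (pfiles briefly stops the target process while inspecting it, which is worth knowing before pointing it at something latency-sensitive.)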

  • Fedora: dhcpd fails to start

    - by soxs060389
    History: I got a tiny shiny plug server which I want to connect to my ADSL router on one end (eth0); on the other end (eth1) I want to run a DHCP server for my LAN. At the moment I am stuck with getting the LAN to work. The OS is Fedora 12. I configured my /etc/dhcp/dhcpd.conf like this:

        #
        # DHCP Server Configuration file.
        #   see /usr/share/doc/dhcp*/dhcpd.conf.sample
        #   see 'man 5 dhcpd.conf'
        #
        option domain-name "unknown.org";
        option domain-name-servers 192.168.44.1;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.44.255;
        default-lease-time 86400;
        max-lease-time 172800;

        subnet 192.168.44.0 netmask 255.255.255.0 {
            host fedorabigbox {
                hardware ethernet 00:19:66:8E:61:74;
                fixed-address 192.168.44.21;
            }
            #host mobile
            #{
            #    hardware ethernet ***;
            #    fixed-address 192.168.44.22;
            #}
            range 192.168.44.100 192.168.44.110;
            option routers 192.168.44.1;
        }

        # This is just a dummy: many howtos suggest adding a
        # "subnet ... netmask ..." block for each interface.
        subnet 192.168.33.0 netmask 255.255.255.0 {
            range 192.168.33.100 192.168.33.110;
            option routers 192.168.33.1;
        }

    But the server fails to start when I try to start it via:

        /etc/init.d/dhcpd start

    In general it would be nice if someone could point me to a detailed explanation of how this works; I am pretty new to this stuff. More concrete questions: how do I point one subnet to eth1 and the other to eth0? And does someone see any errors or flaws? The syntax should be correct; I already checked it with the dhcpd syntax check.
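
    Two hedged pointers for Fedora. First, dhcpd will only start if, for every interface it listens on, dhcpd.conf contains a subnet block matching that interface's own IP network, so eth1 must actually hold an address in 192.168.44.0/24 (check with ip addr). Second, which interfaces dhcpd serves is set outside dhcpd.conf:

        # /etc/sysconfig/dhcpd (a sketch, assuming eth1 faces the LAN)
        DHCPDARGS="eth1"

    With that in place the dummy 192.168.33.0 block becomes unnecessary, and the startup failure (details visible via tail /var/log/messages) should go away if it was the usual "no subnet declaration for eth0" complaint.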

  • Configure IIS site to work with host header & hosts file entry

    - by HarveySaayman
    I'm a bit of an IIS/web noob (I'm a C# backend service / WinForms dev), so please bear with me :-) I've set up a site in IIS on my local dev machine. In the bindings section of the site I've added four bindings, all for http:

        Host Name                   Port   IP Address
        blog.sourcecube.co.za       26581  *
        www.blog.sourcecube.co.za   26581  *
        blog.sourcecube.co.za       26581  127.0.0.1
        www.blog.sourcecube.co.za   26581  127.0.0.1

    In my hosts file (drivers\etc\hosts), I've added the following entries:

        127.0.0.1 blog.sourcecube.co.za
        127.0.0.1 www.blog.sourcecube.co.za

    When I ping my domain name from the command line, it does in fact resolve to the loopback address, 127.0.0.1. So what I'm expecting to happen when I navigate to blog.sourcecube.co.za in my browser is for it to resolve to 127.0.0.1, and when the request hits IIS, it should know which site to serve because of the host header. But when I navigate to blog.sourcecube.co.za, I get an "Unable to connect, Firefox can't establish a connection to the server at blog.sourcecube.co.za" error. What am I doing wrong?

    UPDATE: Navigating to blog.sourcecube.co.za:26581 from my browser works... I'd like to get it working without specifying the port number, though.
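
    The behavior in the update is the whole story: a browser URL without an explicit port always means port 80, so the binding has to listen there. A sketch of the fix with appcmd ("Blog" is a hypothetical site name):

        %windir%\system32\inetsrv\appcmd set site /site.name:"Blog" ^
            /+bindings.[protocol='http',bindingInformation='*:80:blog.sourcecube.co.za']

    The same change can be made in the IIS Manager bindings dialog by setting Port to 80; the hosts-file entries stay as they are.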

  • No Homegroup Computers, Network Troubleshooter Fails

    - by Mokubai
    I have a problem with my Windows 7 Homegroup, between two Windows 7 Home Premium machines. On one machine I get this (error screenshot not shown). The other machine in the Homegroup is perfectly happy and is able to see and browse this faulty machine as if nothing is wrong. The Network and Sharing Center shows that I am joined to a Homegroup on my "Home" network, and nothing is out of the ordinary. I have tried leaving the Homegroup and rejoining/recreating it several times, and that does nothing at all. Normal browsing to machine names and looking through folders seems to work, but it's a much clunkier way to get at things compared to the convenience of the Homegroup facilities.

    Starting the troubleshooter detects some problems with a "Peer Networking" (PNRP or something like that) service not starting, but fails to fix anything. Sure enough, when I go to view the services via Control Panel - Administrative Tools - Services, I see that both the "Peer Name Resolution Protocol" and "Peer Networking Grouping" services are stopped. Attempting to start "Peer Networking Grouping" gives an error that a dependency service will not start; the only service it depends on is "Peer Name Resolution Protocol", and trying to start that gives an error saying that the "service could not start due to error 0x80630801".

    This has happened before, and I fixed it then by using System Restore, restoring the machine to a week earlier when I knew it had all worked. This time, though, I cannot remember when I last used the Homegroup from this machine, and I've installed quite a bit since, so I don't want to go fumbling through restore points trying to find one that works... Can anyone tell me if there is a way to reset things so that this machine is able to use the Homegroup again?
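
    A commonly reported fix for PNRP error 0x80630801 (hedged: collected from forum reports of the same symptom, so treat it as something to try rather than a guarantee) is deleting the machine's peer-networking identity store and letting Windows recreate it. From an elevated command prompt:

        net stop p2psvc
        del "%windir%\ServiceProfiles\LocalService\AppData\Roaming\PeerNetworking\idstore.sst"
        net start PNRPsvc
        net start p2psvc

    The idstore.sst file is rebuilt on the next service start.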
