Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.

  • How does Subnetting Work?

    - by Kyle Brandt
    How does subnetting work, and how do you do it by hand or in your head? Can someone explain it both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself. What is classless routing, and why is class-based routing obsolete? If I have a network, how do I figure out how to split it up? If I am given a netmask, how do I know what the network range is for it? Sometimes there is a slash followed by a number; what is that number? Sometimes there is a subnet mask but also a wildcard mask; they seem like the same thing, but they are different? Someone mentioned something about needing to know binary for this? What is NAT (Network Address Translation)? I'm not looking for links to other sites (unless maybe you have one post with a bunch of good ones). I already know how to subnet; I just thought it would be nice if Server Fault had a generic subnetting answer.
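
    As a quick worked example of the netmask/range question: 192.168.1.100/26 means a 26-bit netmask (255.255.255.192), so the block containing .100 runs from 192.168.1.64 to 192.168.1.127, with usable hosts .65 through .126. On RHEL/CentOS the bundled ipcalc utility will do the math; flags and output format vary on other distributions, so treat this as a sketch:

        $ ipcalc -n -b -m -p 192.168.1.100/26
        NETMASK=255.255.255.192
        PREFIX=26
        BROADCAST=192.168.1.127
        NETWORK=192.168.1.64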

    Read the article

  • Exchange 2010 - Certificate error on internal Outlook 2013 connections

    - by Lorenz Meyer
    I have an Exchange 2010 server and Outlook 2003 clients. The Exchange server has a wildcard SSL certificate installed, *.domain.com (for use with autodiscover.domain.com and mail.domain.com). The local FQDN of the Exchange server is exch.domain.local. With this configuration there is no problem. Now I have started upgrading all Outlook 2003 clients to Outlook 2013, and I consistently get a certificate error in Outlook: "The name on the security certificate is invalid or does not match the name of the site." I understand why I get that error: Outlook 2013 is connecting to exch.domain.local while the certificate is for *.domain.com. I was ready to buy a SAN (Subject Alternative Names) certificate containing the three names exch.domain.local, mail.domain.com and autodiscover.domain.com, but there is a hindrance: the certificate provider (in my case GoDaddy) requires that the domain be validated as our property. That is not possible for an internal domain that is not accessible from the internet, so this turns out not to be an option. Creating a self-signed SAN certificate with an Enterprise CA is another option, but it is barely viable: there would be a certificate error on every access to webmail, and I would have to install the certificate on all Outlook clients. What is a recommended, viable solution? Is it possible to disable certificate checking in Outlook? Or how could I change the Exchange server configuration so that the public domain name is used for all connections? Or is there another solution I'm not thinking of? Any advice is welcome.
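
    For the third option (making clients use the public name), the approach usually suggested for Exchange 2010 is to repoint the internal service URLs at mail.domain.com so Outlook never learns exch.domain.local, combined with split DNS so mail.domain.com resolves to the internal address on the LAN. A rough Exchange Management Shell sketch; the server name EXCH and the URLs are placeholders to adapt, not a tested recipe:

        Set-ClientAccessServer -Identity EXCH -AutoDiscoverServiceInternalUri https://mail.domain.com/Autodiscover/Autodiscover.xml
        Set-WebServicesVirtualDirectory -Identity "EXCH\EWS (Default Web Site)" -InternalUrl https://mail.domain.com/EWS/Exchange.asmx
        Set-OABVirtualDirectory -Identity "EXCH\OAB (Default Web Site)" -InternalUrl https://mail.domain.com/OAB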

    Read the article

  • How to remotely connect using perfmon?

    - by user36914
    Surprised there isn't a ton of information on Google when I search for this, but there isn't. Lots of people ask the question, but none of them get any good answers. I have a remote computer running Hyper-V (server) hosting a Windows 7 x64 guest (guest). Occasionally I won't be able to remote desktop to guest. I will then remote to server and see that the guest instance is constantly using about 25% of the CPU. When I try to connect directly from server, I will get the login screen, but as soon as I type the password in it will just stay at the Windows 7 login screen; the account names disappear and it will not log in. It responds to pings, though. I don't know how else to diagnose this other than trying to run perfmon remotely. It only happens about every 3 weeks, and I run it 24/7. So I'm trying to run perfmon remotely. I tested this out on a local VM I have running under VMware. When I try to connect to my local VM using perfmon, I get this error: "When attempting to connect to the remote computer, the following system error occurred: the network path was not found." I found in another post to start the Remote Registry service, and when I start the service I get this error: "No such interface supported". Anyway, how do I remotely connect to another machine with perfmon, or if anyone has a better idea how I can diagnose the problem above, let me know.

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git projects (the same server running Apache) with two subsites, test.FQDN.com and live.FQDN.com. What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits to the new feature branch would deploy to test.FQDN.com. Once the features have been tested and merged into the master branch, it would deploy to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to deploy to the test.FQDN.com site; however, that only pulls the master branch and not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
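
    For what it's worth, a bare "git checkout -f" in a hook deploys whatever HEAD points at (normally master), which is why the feature branch never shows up. A post-receive hook can instead look at which ref was pushed and check each branch out into its own work tree. A rough sketch; the paths and the feature-branch name are assumptions to adapt:

        #!/bin/bash
        # hooks/post-receive in the bare repository on the server
        while read oldrev newrev refname; do
            branch=${refname#refs/heads/}
            case "$branch" in
                master)    GIT_WORK_TREE=/var/www/live.FQDN.com  git checkout -f master ;;
                feature-x) GIT_WORK_TREE=/var/www/test.FQDN.com  git checkout -f feature-x ;;
            esac
        done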

    Read the article

  • SharePoint crawl not indexing main site

    - by user22215
    Guys, I'm having some strange search issues going on with my main portal application. First off, let me give you a little background on the problem web app. Our SharePoint environment was originally set up by a consultant who did not follow best practices. She used one web app to house our company's intranet site, SSP, and My Sites. Since then I have provisioned a new SSP that I have segmented correctly, and I moved all of our other sites over to the new SSP without any problems. However, I could not assign the main portal app to the new SSP, since the portal app housed the SSP site collection. So I deleted the SSP site collection; after that I deleted the SSP and assigned the portal app to my new SSP. Now this is where the problem starts: when I attempt to crawl this application, the crawl starts, then stops 5 seconds later with a status of success, and it reports that 1 item was successfully crawled. The funny thing is the main portal app has nearly 30000 items. I have tracked the problem down to the web app: if I create a test web app and restore the content, I have no problem crawling all 30000 items. Also, all of my other web apps that use the same SSP have no problem completing crawls. I don't see anything in the ULS logs or Server 2003's Event Viewer. Also, I'm using a separate dedicated index server that's configured to crawl itself via a hosts file entry. I would like to fix this problem without having to recreate our main portal site, due to the fact that we have several custom code modifications where DLLs were registered to the IIS bin folder; I don't even want to get into the Silverlight mods that were done. Any help with this problem is much appreciated. Same problem as mine: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/MS-SharePoint/Q_23885820.html

    Read the article

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box. The node.js version is:

        $ node -v
        v0.10.28

    Here's my app.js:

        // Include the http module,
        var http = require("http"),
            // and the url module, which is very helpful in parsing request parameters.
            url = require("url");

        // Show a message at the console.
        console.log('Node.js app is running.');

        // Create the server.
        http.createServer(function (request, response) {
            request.resume();
            // Attach a listener on the end event.
            request.on("end", function () {
                // Parse the request for arguments and store them in the _get variable.
                // This function parses the url from the request and returns an object representation.
                var _get = url.parse(request.url, true).query;
                // Write headers to the response.
                response.writeHead(200, { 'Content-Type': 'text/plain' });
                // Send data and end the response.
                response.end('Here is your data: ' + _get['data']);
            });
        // Listen on the 8080 port.
        }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't reach it in my browser at http://123.456.78.9:8080/?data=123; the browser returned ERR_CONNECTION_TIMED_OUT. The same app.js code runs fine on my local machine. Is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
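
    Given that the app works locally and the host answers pings, the usual culprit on a stock CentOS install is the default iptables ruleset, which typically accepts little besides SSH. A hedged set of checks and a fix (CentOS 6 style commands):

        # is node actually listening on 8080?
        netstat -tlnp | grep 8080

        # does the INPUT chain reject everything except port 22?
        iptables -L -n

        # if so, open 8080 and persist the rule
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        service iptables save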

    Read the article

  • Remote access to phpMyAdmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf so that Apache on CentOS listens on ports 80 and 8080:

        Listen 80
        Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 box recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please?

    config.inc.php:

        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'http';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = false;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    phpmyadmin.conf:

        <Directory /var/www/html/phpmyadmin/>
            order allow,deny
            allow from all
        </Directory>

    Furthermore, I can access a web page stored on the CentOS box from my other computer without problems. Using wireshark and tcpdump, I found that the server (the CentOS box) keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):

        23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
        23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have already disabled the iptables service on the CentOS box.

    Read the article

  • Google Apps: MX records for zonefile

    - by 23tux
    Hi everybody, I have a question about using Google Apps for handling email. I don't want to set up an entire mail system on my server, so I decided to use Google Apps. The ownership of my domain has been verified, and now I'm trying to change the MX records in the zone file of my domain. But I think I'm doing it wrong; it doesn't work. I want to use mail.mydomain.com as the address of the mail server for POP, SMTP and IMAP. My zone file looks like this:

        $TTL 86400
        @          IN SOA   ns1.first-ns.de. postmaster.robot.first-ns.de. (
                            2011011700   ; serial
                            14400        ; refresh
                            1800         ; retry
                            604800       ; expire
                            86400 )      ; minimum
        @          IN NS    robotns3.second-ns.com.
        @          IN NS    robotns2.second-ns.de.
        @          IN NS    ns1.first-ns.de.
        @          IN A     111.111.111.111
        localhost  IN A     127.0.0.1
        www        IN A     111.111.111.111
        ftp        IN CNAME www
        loopback   IN CNAME localhost
        mail       IN CNAME @
        relay      IN CNAME www
        @          IN MX 10 ALT1.ASPMX.L.GOOGLE.COM.
        @          IN MX 10 ASPMX3.GOOGLEMAIL.COM.
        @          IN MX 10 ASPMX2.GOOGLEMAIL.COM.
        @          IN MX 10 ASPMX.L.GOOGLE.COM.
        @          IN MX 10 ALT2.ASPMX.L.GOOGLE.COM.

    I hope someone can figure out what's wrong with this configuration. When I ping mail.mydomain.org I get an answer from 111.111.111.111 and not from the Google server ALT1.ASPMX.L.GOOGLE.COM. thx, tux
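
    Two things may be worth noting here. The MX records only control where other servers deliver mail for the domain, so the ping result is expected: "mail IN CNAME @" points mail.mydomain.com at your own A record (111.111.111.111), not at Google, and for POP/IMAP/SMTP the clients would normally talk to Google's own hostnames anyway. Once the zone change has propagated, the inbound side can be checked with something like:

        # should list the ASPMX/ALT Google hosts
        dig +short MX mydomain.com

        # check what mail.mydomain.com actually resolves to
        dig +short mail.mydomain.com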

    Read the article

  • 4.4.1 Timeout in 10 minute intervals SMTP on batch email jobs

    - by TEEKAY
    I am running a job that uses SMTP and can run in excess of an hour, emailing the entire time. It's not my code but a workflow-based app, so I just get a form to configure the mail server, subject, message, etc., and can't see its implementation. I know it is .NET and SmtpClient. I have been seeing 4.4.1 timeouts every 10 minutes, reported by the application as the response from the server. The number of emails in those 10-minute sessions varies, between roughly 100 and 150, which leads me to ask about the 10-minute timeout specifically. I have found there are several Exchange properties (though I don't know what version they are running) that set timeout limits (http://technet.microsoft.com/en-us/library/bb232205%28v=exchg.150%29.aspx). Would the values for ConnectionInactivityTimeOut and ConnectionTimeout be controlling these timeouts? And finally, would Exchange treat the persistent connection(s) it keeps receiving from the same source as one continuous connection, and so time it out every 10 minutes? I am connecting to the mail server by its static IP. Thanks if anyone can shed any light on my problem. EDIT - It is my belief that the library is just keeping the connections around and isn't wrapped in any cleanup code or using statement. That said, I still haven't made any progress on this issue in the last year and just requeue the failed ones as I see them.

    Read the article

  • Ubuntu 8.04 won't reboot from script

    - by Littlejon
    I have a script that is run to back up a server via rsync; after that script runs, I want the server to reboot. My script is run as root from the crontab at 3am in the morning.

        #!/bin/bash
        HOST="email"
        RSYNC_OPTS="-a -v -v --progress --stats --delete"
        RSYNC_DEST="10.0.0.10::$HOST"
        BACKUP_LIST="/etc /home /root"
        TIMESTAMP="/timestamp-bkup-start.chk"
        TIMESTAMP2="/timestamp-bkup-stop.chk"

        touch $TIMESTAMP
        rsync $RSYNC_OPTS $TIMESTAMP $RSYNC_DEST

        for BACKUP_ITEM in $BACKUP_LIST; do
            rsync $RSYNC_OPTS $BACKUP_ITEM $RSYNC_DEST
        done

        /etc/init.d/zimbra stop
        sleep 60s

        rsync $RSYNC_OPTS /opt $RSYNC_DEST

        touch $TIMESTAMP2
        rsync $RSYNC_OPTS $TIMESTAMP2 $RSYNC_DEST

        echo `date +%Y%m%d%H%M` >> /var/log/reset
        reboot

        # $# shows number of args passed
        # $1 to access first variable
        #if [ $# -eq 1 ]; then
        #    if [ $1 = "withreboot" ]; then
        #        echo "rebooting...";
        #        echo `date +%Y%m%d%H%M` >> /var/log/reset
        #        /sbin/reboot
        #    fi
        #fi

    I have tried using init 6 rather than reboot. I have tried /sbin/reboot. I also have another basic script that just echoes to the reset log and runs reboot without issue. It is just with the script above that the server won't restart. If anyone has any theories, that would be great, as I have run out of ideas. Thanks, Jon
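
    One way to narrow this down is to stop discarding whatever error the reboot call produces, and to call the binary by its full path so the cron environment is not a factor. A minimal debugging sketch that could replace the last two lines of the script (the log path is the one already used above):

        {
            echo "backup finished: $(date +%Y%m%d%H%M)"
            /sbin/shutdown -r +1 "rebooting after nightly backup"
            echo "shutdown exit status: $?"
        } >> /var/log/reset 2>&1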

    Read the article

  • GPO - Setting not applied, although policy is applied

    - by Kenny Bones
    This is rather strange. In our domain we have several terminal servers, and this morning a user reported that no drives are mapped when he logs on to the terminal server. So I checked Group Policy Results and compared two users. Both users have the exact same policies applied. But for this particular user, the Scripts section under User Configuration - Policies - Windows Settings is just not there. For the other user, for whom this is working fine, the Scripts section says the winning GPO is Terminal2008, which is the GPO that contains the script. And the Terminal2008 GPO is applied to both users. Also, loopback processing is set to Replace. What could be the cause of this? I've never seen this particular issue before. I mean, both users are in the same OU, they log on to the same terminal server, and the same policies are applied to both. They do not, however, have the exact same group memberships, but should that matter? It's not stated that the script should run only if the user is a member of a certain group, and I'm not sure that could even be done through that specific setting. All I know is, the very same policies are applied to both users, in the same OU and on the same computer. Meaning, the same settings should be applied? Edit: I just ran Group Policy Results on one of the other terminal servers, which is also in the same OU, and the Scripts section is there! This means that this particular user doesn't get this setting when he's logged onto this particular server. What could be the cause of this?

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:

    - Single m1.small instance (web server)
    - Single RDS m1.small db
    - Joomla 1.5

    Generally, the site is performant, but it is fairly low-traffic: say around 50-100 visits/hour. However, at peak time we see about double that traffic. During peak time, pretty much every day:

    - CPU usage on the web server slowly climbs to 100%
    - CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15
    - Database connections shoot up to about 140, from a normal average of about 2 or 3

    The site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the mysql process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate. UPDATE: At least, can someone confirm that I'm definitely right in saying that the level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
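
    To get something more concrete than CPU graphs, it may help to snapshot the process list during a peak and look specifically for queries waiting on locks. A hedged one-liner (endpoint and credentials are placeholders):

        mysql -h your-instance.rds.amazonaws.com -u admin -p \
              -e "SHOW FULL PROCESSLIST" | grep -i -E 'locked|waiting'

    RDS does also allow slow-query logging via a DB parameter group (slow_query_log = 1 with log_output = TABLE writes entries to the mysql.slow_log table), which avoids needing file-level access to the instance.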

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256

        Document Path:          /hello.html
        Document Length:        29 bytes

        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347  74.3    333    716
        Processing:     0   14  24.0      1    166
        Waiting:        0   11  21.6      0    165
        Total:        191  361  80.8    345    716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
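
    The connect times (a mean around 350 ms against near-zero processing time) suggest almost all of the cost is a full TLS handshake repeated for every request: the DHE-RSA-AES256-SHA suite with 4096-bit parameters is expensive, and with KeepAlive off and no session cache nothing gets reused. A hedged starting point (cache size and path are guesses to adjust) is to enable the SSL session cache and keep-alive, then re-run ab with -k so connections and sessions are reused:

        # httpd.conf / ssl.conf
        SSLSessionCache         shmcb:/var/run/ssl_scache(512000)
        SSLSessionCacheTimeout  300
        KeepAlive               On

        # benchmark again with keep-alive enabled on the client side
        ab -k -c 5 -n 1000 https://127.0.0.1/hello.html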

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to more than one server, and we need to share the sessions between the servers. We have been looking into different solutions; one is memcached, using memcached as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine and let all webservers access all the other servers' memcached instances, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet; that's a later project. We need this running, and we need it running now, and we will load-balance with DNS for a start.) But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever reason, I don't want the users to just lose their sessions and have to start from the beginning. That's why we need some kind of replication, which memcached does not support. Then I found http://repcached.lab.klab.org/ which has multi-master replication of memcached, which is great, and is what I want. But does it work with 2 machines? Say 3, 5, 10? For future scaling. I also looked into Redis (http://redis.io/), which also seems great, but is a bit more "shaky" with the PHP session-handler support, and has no multi-master replication. The thing is that I'd like to use memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
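
    For reference, pointing PHP's session handler at several memcached instances is only a php.ini change. A minimal sketch assuming the PECL memcached extension and two hypothetical hosts web1/web2 (the older memcache extension uses a different handler name and tcp:// URLs):

        session.save_handler = memcached
        session.save_path    = "web1:11211,web2:11211"

    Note that this gives distribution, not replication: each session still lives on exactly one of the instances, which is precisely the gap repcached (or a replicated Redis setup) is meant to fill.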

    Read the article

  • krenew command not working: Permission Denied

    - by prathmesh.kallurkar
    I am using a Linux server to perform my simulations. The login and the file system of the server are protected using Kerberos, and the file system is served over NFS. Since my simulations take a lot of time to run, my ssh sessions used to hang regularly, so I have started running my simulations in byobu (similar to screen). In order to make sure that my Kerberos session remains active, I am using the krenew command. I have entered the following commands in my .bash_profile file (I am sure that it is called for every login):

        killall -9 krenew 2> /dev/null
        krenew -b -t -K 10

    So every time I ssh to the server, I kill the existing krenew process. Then I spawn a new krenew command with -b (which runs it in the background), -t (I forgot why I was using this option!), and -K 10 (it must run every 10 minutes and refresh the Kerberos cache). When I run the simulations, they run for 14 hours and then suddenly I start getting "Permission denied" errors when reading files. Is the command that I am running incorrect?
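
    Two things may be worth checking here. First, krenew can only renew a ticket up to its maximum renewable lifetime (klist shows this as "renew until ..."); if the simulations outlive that limit, the failure after roughly 14 hours would be expected no matter what -K is set to, and the fix is a longer renewable lifetime (or authenticating from a keytab). Second, tying the renewal to the job itself makes failures much easier to see. A hedged sketch, where ./run_simulation.sh stands in for the actual job:

        # how long can the current ticket be renewed for?
        klist

        # run the job under krenew so renewal lasts exactly as long as the job does
        krenew -t -K 10 ./run_simulation.sh

    (For what it's worth, -t asks krenew to also run aklog to refresh AFS tokens after each renewal; it should be harmless if AFS is not in use.)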

    Read the article

  • Restrict access to one SVN repository (overwrite default)

    - by teel
    I'm trying to set up our SVN server so that by default the group developers will have access to all repositories, but I want to override that setting on certain repositories where I want to allow access only to specific users (or separate groups). The current configuration is SVN + WebDAV on Apache2. All my repositories are located under /var/lib/svn/. In dav_svn.authz I currently have:

        [/]
        @developers = rw
        @users = r

    Now I want to add one repository (let's call it secret_repo) that would only allow access to one user, who is also a member of the developers group. I tried:

        [secret_repo:/]
        * =
        secret_user = rw

    where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. Currently the server is using Apache's LDAP module to authenticate users from our Active Directory domain, and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repositories freely with any web browser, which I'd like to block. The second problem is that I have WebSVN on the server, which is using Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide this secret_repo from the WebSVN listing. It's currently configured with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide?
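
    Both symptoms (the per-repository override not taking effect and repositories being browsable anonymously) are consistent with the dav_svn <Location> block either not pointing at the authz file or leaving authentication optional. A rough sketch of a location block that forces authentication and then lets the authz file decide per repository (the LDAP details are placeholders; reuse whatever AuthLDAPURL you already have):

        <Location /svn>
            DAV svn
            SVNParentPath /var/lib/svn
            AuthzSVNAccessFile /etc/apache2/dav_svn.authz

            AuthType Basic
            AuthName "Subversion repositories"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName" NONE
            Require valid-user
        </Location>

    With that in place, the [secret_repo:/] section should behave as intended, provided the section name matches the repository directory name under /var/lib/svn exactly (it is case-sensitive).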

    Read the article

  • Apache + Tomcat: Which one should handle SSL? IP-based proxy forwarding?

    - by delirial
    We currently have a Tomcat application running with SSL on port 443. Right now we have an Apache server that accepts http requests on port 80 and redirects to the Tomcat instance:

        <VirtualHost *:80>
            ServerName domain.com
            ServerAlias domain.com
            <LocationMatch "/">
                Redirect permanent / https://domain.com/
            </LocationMatch>
        </VirtualHost>

    Tomcat is handling SSL, because there's no proxy, just a simple redirect to the SSL port:

        <Connector port="443" maxThreads="200"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="/app/ssl/domain_com.jks" keystorePass="ourpassword"
                   clientAuth="false" sslProtocol="TLS"/>

    We want to begin using the Apache web server as a proxy and, additionally, do per-IP redirects to certain apps that should only be used by hosts in a pre-determined IP range. We would also like to redirect IPs that don't match the pre-determined list to a static html page hosted on the Apache server. My first question is: should I continue to handle SSL on Tomcat's end, or should I use Apache with SSL while forwarding to an "unprotected" Tomcat port? Is there any way to redirect to different apps (and potentially hosts) depending on the incoming IP? thanks, del
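
    If SSL is terminated at Apache, the per-IP logic becomes ordinary Apache configuration, since mod_proxy and mod_authz_host see the client address directly. A rough sketch for Apache 2.2 (certificate paths, the Tomcat port, the /internal-app path and the 10.0.0.0/8 range are all placeholders; Tomcat would then expose a plain HTTP connector instead of the SSL one):

        <VirtualHost *:443>
            ServerName domain.com
            SSLEngine on
            SSLCertificateFile    /app/ssl/domain_com.crt
            SSLCertificateKeyFile /app/ssl/domain_com.key

            # serve the fallback page from Apache itself, proxy everything else to Tomcat
            DocumentRoot /var/www/html
            ProxyPass /restricted.html !
            ProxyPass        / http://127.0.0.1:8080/
            ProxyPassReverse / http://127.0.0.1:8080/

            # restrict one app to an approved range; everyone else gets the static page
            <Location /internal-app>
                Order deny,allow
                Deny from all
                Allow from 10.0.0.0/8
                ErrorDocument 403 /restricted.html
            </Location>
        </VirtualHost>

    Note that Apache's mod_ssl wants PEM certificate and key files rather than the JKS keystore Tomcat uses, so the certificate would need to be exported or re-issued in that format.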

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

    - Our main file server is a Linux machine.
    - On the LAN, users simply access the files using SMB.
    - Each user has an account on the file server and his/her own access rights.
    - User accounts are simple passwd/group security accounts, not NIS/LDAP.

    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course; users would be expected to log in, and the connection to the server should be encrypted. Does anyone have pointers to neat solutions? Any experiences? Edit: the client machines are Windows only.

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.

    Read the article

  • How to extract attachments from Exchange 2003 database

    - by John
    I have an ancient Exchange 2003 server that I'm getting ready to retire. All user accounts have been migrated to Google Apps for Business, so no new mail is being sent or received on the server. There are less than 50 accounts on the server, but some are very large so that the whole Exchange database is between 10 and 20 GB. The largest account has over 100,000 messages. I believe that in the migration to Gmail, some attachments were not migrated. For peace of mind, I'd like to get the attachments out of the Exchange database. The only way I know of to do this is to set up a 2nd computer with Outlook on it, set up one of the accounts, and then sync the whole mail history and get the attachments out that way. Is there something simpler that I can do? Here are two possibilities: An Exchange attachment retrieval tool/script that pulls attachments for all accounts directly out of the Exchange database. An Exchange PST exporter tool/script that will export PST files for all accounts so that I can just load the PST files into Outlook at will.

    Read the article

  • Virtualbox Networking: XP Guest, Ubuntu Host: Connecting to Windows servers & local network?

    - by user51833
    Here's what I have: Windows XP running in VirtualBox 3.0.8_OSE r53138; Host OS = Ubuntu 9.10 "Karmic Koala"; Windows network in my office with smb fileservers; Guest OS is connected to the internet and is sharing folders with Host OS; Limited networking expertise. Here's what I actually need to do: Use MS Outlook in my XP guest with all its calendar-sharing features and stuff (if this is all done through the internet then great) - or find a Linux app that can do the same stuff; Map Windows network servers, eg. smb://server01/ in my XP guest (I can already access these in Ubuntu. Here's what I've tried with no luck: Entering the server address (example above) in my XP guest windows explorer address bar (got a "could not access the file, path or drive" error message - maybe if I could enter login/pass information? But I don't know how); Mapping the server as a network drive (Windows could not find the path); Mounting the server as one of my shared folders (I couldn't find it through the shared folders browser in VirtualBox - is there somewhere in the Linux filesystem that Ubuntu keeps links to mounted servers?).

    Read the article

  • How to set up apache to catch a proxy_pass from nginx?

    - by Paté
    I have a working Apache vhost setup such as:

        <VirtualHost localhost:10006>
            DocumentRoot "/home/pate/***/git/kohana_site/public/site/"
        </VirtualHost>

        <VirtualHost *:10006>
            ServerName api.*
            DocumentRoot "/home/pate/***/git/kohana_site/public/api/"
            LogLevel debug
        </VirtualHost>

    If I point to localhost:10006 I get my website, and at api.localhost:10006 I get my API. Then I have haproxy set up on top of that, running on port 10010, and both localhost:10010 and api.localhost:10010 behave as expected. Now I have nginx set up on port 80 with this configuration:

        server {
            listen 10000;
            server_name api.*;

            location / {
                proxy_pass http://legacy_server;
            }
        }

        server {
            listen 10000 default;
            server_name _;

            location /nginx_status {
                stub_status on;
                access_log off;
            }

            # images are accessed via the CDN over HTTP (not https)
            location /n/image {
                proxy_pass http://image_caching_server;
            }

            location / {
                return 301 https://$host:10014$request_uri;
            }
        }

        upstream legacy_server {
            server localhost:10010 fail_timeout=0;
        }

    The problem is that Apache does not recognize the vhost properly and redirects api.localhost to the website instead of the API. I tried playing with set_proxy_header Host $host, but it doesn't seem to do anything.
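
    For what it's worth, the nginx directive is proxy_set_header (not set_proxy_header), and it needs to sit in (or above) the location that does the proxy_pass so the original Host header survives the hop through haproxy to Apache's name-based vhosts; without it nginx sends the upstream name (Host: legacy_server). A sketch of the api server block with that change:

        server {
            listen 10000;
            server_name api.*;

            location / {
                proxy_set_header Host            $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://legacy_server;
            }
        }

    It may also help to define both Apache vhosts on *:10006 (with NameVirtualHost *:10006) rather than mixing localhost:10006 and *:10006, since name-based matching only happens among vhosts that share the same address specification.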

    Read the article

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set a low TTL for example.com, but some people will still hit the old IP for a while after I change the IP address of the domain. Both machines run CentOS 5.8 with iptables, and nginx as the webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
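
    One likely culprit: the PREROUTING rule matches on the source address (-s 1.1.1.1), but packets arriving from visitors have 1.1.1.1 as their destination, so the rule never fires. A hedged corrected version, plus making sure the FORWARD chain lets the redirected traffic through:

        # DNAT packets *destined for* the old address, not sourced from it
        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

        # allow the forwarded traffic if the FORWARD policy/rules would otherwise drop it
        /sbin/iptables -A FORWARD -d 2.2.2.2 -p tcp --dport 80 -j ACCEPT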

    Read the article

  • Permission denied (publickey,gssapi-with-mic,password) ssh error

    - by zentenk
    Heads up: I'm a noob with Linux and networking. I set up an Ubuntu server, and I have a static IP for my network. When I try to connect to the server from home (externally), it prompts me to log in. Whether I supply the correct password or an incorrect one, I get the error "Permission denied, please try again.", and after 3 attempts I get "Permission denied (publickey,gssapi-with-mic,password)". I am, however, able to connect with SSH from another computer in the same network with ssh <internal ip of server>. I'm connecting from Mac OS X and my ssh config file is vanilla. Note: during installation of Ubuntu it said I didn't have a default route or something while doing automatic network configuration, but I ignored it and continued the installation; could this be the problem? EDIT: I have tried the suggestions below. I have nothing in hosts.allow, and iptables shows the port that I have allowed, which is 22. I checked auth.log, and there is nothing when I connect remotely (even when it says permission denied). When I connect internally, the expected authentication log entries show up. Any idea what's wrong?
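
    The detail that auth.log stays silent on the external attempts, while you still get a password prompt, suggests the external address may not be reaching this server at all (for example, the router answering SSH itself, or a port-forward pointing at another machine). A quick hedged check is to compare the host key each connection presents with the server's own key:

        # what key does each endpoint present?
        ssh -v user@<external ip> 2>&1 | grep -i 'host key'
        ssh -v user@<internal ip> 2>&1 | grep -i 'host key'

        # fingerprint of the server's actual key, run on the server itself
        ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub

    If the external fingerprint does not match, the problem is port forwarding rather than sshd configuration.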

    Read the article

  • Spam or Exchange issue?

    - by John
    I am getting a delivery-failure message addressed to an unknown user on my domain. I would like to know whether this is just phishing/spam, or whether it was really sent from our domain. I have changed our domain name to OURDOMAIN.COM below. I have Exchange 2010 installed. The body of the email is:

        Delivery has failed to these recipients or distribution lists:

        sales

        The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator.

        Sent by Microsoft Exchange Server 2007

        Diagnostic information for administrators:

        Generating server: murraygroup.local

        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:

        Received: from ironport.mih.co.uk (10.10.29.9) by mih-exca-01.murraygroup.local (10.10.29.133) with Microsoft SMTP Server id 8.3.106.1; Fri, 29 Jun 2012 12:36:12 +0100
        Received: from glamf04.netintelligence.com (HELO mailfilter.iomart.com) ([62.128.193.114]) by ironport.mih.co.uk with SMTP; 29 Jun 2012 12:42:48 +0100
        Received: from glamta4.netintelligence.com(localhost.localdomain[127.0.0.1]) by mailfilter.iomart.com ; Fri, 29 Jun 2012 12:37:18 BST
        Received: from [195.43.137.66] ([195.43.137.66]) by glamta4.netintelligence.com (8.13.1/8.12.8) with ESMTP id q5TBbH4j022142 for <[email protected]>; Fri, 29 Jun 2012 12:37:18 +0100
        Date: Fri, 29 Jun 2012 12:37:17 +0100
        Message-ID: <20120629145229.4C2A817231D8A7958044@SONW>
        From: Ines Hampton <[email protected]>
        To: sales <[email protected]>
        Reply-To: Marguerite Soto <[email protected]>
        Subject: User sales
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: 7bit
        Return-Path: [email protected]

        Reporting-MTA: dns;murraygroup.local
        Received-From-MTA: dns;ironport.mih.co.uk
        Arrival-Date: Fri, 29 Jun 2012 11:36:12 +0000
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.1.1
        Diagnostic-Code: smtp;550 5.1.1 RESOLVER.ADR.RecipNotFound; not found
        X-Display-Name: sales

    Read the article
