Search Results

Search found 71537 results on 2862 pages for 'virtual com port'.

Page 252 of 2862

  • Two domains hosted on the same server with different root folders show the same homepage

    - by emaillenin
    I have two domains, registered at GoDaddy and hosted on a Linode VPS: mobiletoast.com and lesseltechnologies.com. Though the latter site has a separate index folder, whenever I navigate to it I get the homepage of mobiletoast.com. The strange thing is that I see the expected page ("It works") when I open the site from my mobile phone, but when I open it from my PC (any browser, no cache, hard refresh) I get the homepage of mobiletoast.com. The Linode support team says they see the correct "It works" page, but I am not able to see that page. This is the output of the command apache2ctl -S:

        root@li339-83:~# apache2ctl -S
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80 is a NameVirtualHost
                default server mobiletoast.com (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost mobiletoast.com (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost blog.mobiletoast.com (/etc/apache2/sites-enabled/blog.mobiletoast.com:1)
                port 80 namevhost lesseltechnologies.com (/etc/apache2/sites-enabled/lesseltechnologies.com:1)
                port 80 namevhost mobiletoast.com (/etc/apache2/sites-enabled/mobiletoast.com:1)
        Syntax OK
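
    In the apache2ctl -S output above, mobiletoast.com is both the default server (from 000-default) and a namevhost in its own file, while lesseltechnologies.com has one entry of its own; when a request's Host header matches no ServerName or ServerAlias, Apache answers with the default (first) vhost, which is how a second domain can end up showing the first domain's homepage. A minimal name-based layout, sketched with assumed paths and www aliases:

        <VirtualHost *:80>
            ServerName lesseltechnologies.com
            ServerAlias www.lesseltechnologies.com
            DocumentRoot /srv/www/lesseltechnologies.com/public_html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mobiletoast.com
            ServerAlias www.mobiletoast.com
            DocumentRoot /srv/www/mobiletoast.com/public_html
        </VirtualHost>

    Since the mobile phone and Linode support both see the correct page, the PC that still shows the wrong site is more likely looking at stale DNS or a local hosts-file entry than at a server-side problem.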

    Read the article

  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a directory directly after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very high by itself and it doesn't get the boost it ought to be getting from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my rankings. I'd like to know the most painless way of resolving this dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory ranking results I'm getting from la.truxmap.com? EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Thus, Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!
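
    The 301 the asker mentions would, on a server where mod_rewrite is available, look roughly like the sketch below; on Google App Engine the redirect has to be issued by the application itself, so this is purely illustrative of the host-to-path mapping.

        RewriteEngine On
        # la.truxmap.com/anything  ->  truxmap.com/la/anything  (sketch only)
        RewriteCond %{HTTP_HOST} ^la\.truxmap\.com$ [NC]
        RewriteRule ^(.*)$ http://truxmap.com/la$1 [R=301,L]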

    Read the article

  • mod_rewrite adds .html when redirecting

    - by user12093810293812031
    I have a redirect situation where the site is part dynamic and part generated .html files. For example, mysite.com/homepage and mysite.com/products/42 are actually static html files, whereas other URLs are dynamically generated, like mysite.com/cart. Both mysite.com and www.mysite.com point to the same place; however, I want to redirect all of the traffic from mysite.com to www.mysite.com. I'm so close, but I'm running into an issue where Apache is adding .html to the end of my URLs for anything where a static .html file exists - which I don't want. I want to redirect this: http://mysite.com/products/42 to this: http://www.mysite.com/products/42. But Apache is making it this instead (because 42.html is an actual html file): http://www.mysite.com/products/42.html. I don't want that - I want it to redirect to www.mysite.com/products/42. Here's what I started with:

        RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

    I tried making the parameters and the .html optional, but the .html is still getting added on the redirect:

        RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
        RewriteRule ^(.*)?(\.html)?$ http://www.mysite.com/$1 [R=301,L]

    What am I doing wrong? Really appreciate it :)
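
    The trailing .html is usually added by content negotiation (MultiViews) or a later rewrite mapping /products/42 onto 42.html before the redirect target is built. Building the redirect from THE_REQUEST, which always holds the client's original request line and is never touched by other rewrites, is a common way around it; a sketch, assuming the rules sit in the main config or an .htaccess:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
        # %1 captures the originally requested path, before any .html was mapped onto it
        RewriteCond %{THE_REQUEST} \s/+([^\s?]*)
        RewriteRule ^ http://www.mysite.com/%1 [R=301,L,NE]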

    Read the article

  • Thin web server - single or multiple instances per IP address:port?

    - by wchrisjohnson
    I'm deploying a rack/sinatra/websocket app onto several servers and will use thin as the web server (http://code.macournoyer.com/thin/). There are almost no views to show, so I am not front-ending it with a traditional web server like Apache or nginx. In general, thin is started from a config file that specifies the number of server instances to start (say 3) and the first port to listen on (say 5000); so in my example, when thin starts, it brings up three instances on a range of ports beginning at 5000. If I have a series of virtual machines (say 3, 6, 9, etc.) that I treat as a cluster, should I start a single thin instance on each VM, or multiple instances on each VM? Why? Thanks - Chris
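
    For reference, a thin configuration of the kind described might look like the sketch below (file name and values are assumptions); each VM would then run something like thin -C myapp.yml start. A common rule of thumb is one thin instance per CPU core on the VM, since each instance is a separate OS process.

        ---
        # myapp.yml (hypothetical)
        address: 0.0.0.0
        port: 5000          # first port; with servers: 3, thin binds 5000, 5001, 5002
        servers: 3
        environment: production
        daemonize: true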

    Read the article

  • Slash after domain in URL missing for Rails site

    - by joshee
    After redirecting users in a Rails app, for some reason the slash after the domain is missing. Generated URLs are invalid and I'm forced to manually correct them. The problem only occurs on a subdomain; on a different primary domain (same server), everything works OK. For example, after logging out, the site redirects to https://www.sub.domain.comlogin/ rather than https://www.sub.domain.com/login. I suspect the issue has something to do with the vhost setup, but I'm not sure. Here are the broken and working vhosts:

    BROKEN SUBDOMAIN

        <VirtualHost *:80>
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            Redirect permanent / https://www.sub.domain.com
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin email@email.com
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            RailsEnv production
            # SSL Engine Switch
            SSLEngine on
            # SSL Cipher Suite:
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            # Server Certificate
            SSLCertificateFile /path/to/server.crt
            # Server Private Key
            SSLCertificateKeyFile /path/to/server.key
            # Set header to indentify https requests for Mongrel
            RequestHeader set X_FORWARDED_PROTO "https"
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            DocumentRoot /home/usr/www/www.sub.domain.com/current/public/
            <Directory "/home/usr/www/www.sub.domain.com/current/public">
                AllowOverride all
                Allow from all
                Options -MultiViews
            </Directory>

    WORKING PRIMARY DOMAIN

        <VirtualHost *:80>
            ServerName www.diffdomain.com
            ServerAlias diffdomain.com
            Redirect permanent / https://www.diffdomain.com
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin email@email.com
            ServerName www.diffdomain.com
            ServerAlias diffdomain.com
            ServerAlias *.diffdomain.com
            RailsEnv production
            # SSL Engine Switch
            SSLEngine on
            # SSL Cipher Suite:
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            # Server Certificate
            SSLCertificateFile /path/to/server.crt
            # Server Private Key
            SSLCertificateKeyFile /path/to/server.key
            # Set header to indentify https requests for Mongrel
            RequestHeader set X_FORWARDED_PROTO "https"
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            DocumentRoot /home/usr/www/www.diffdomain.com/current/public/
            <Directory "/home/usr/www/www.diffdomain.com/current/public">
                AllowOverride all
                Allow from all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    Please let me know if there's anything else I could provide that would help determine what's wrong here. UPDATE: I tried adding a trailing slash to the redirect command, but still no luck.
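
    For what it's worth, https://www.sub.domain.comlogin/ is exactly what Redirect permanent / https://www.sub.domain.com produces when the target lacks a trailing slash, since the request path is appended directly to it; the asker reports having tried the slash, so a hedged alternative for the *:80 block is to build the redirect with mod_rewrite, which makes the path handling explicit:

        RewriteEngine On
        # in server/vhost context, $1 already carries the leading slash of the request path
        RewriteRule ^(.*)$ https://www.sub.domain.com$1 [R=301,L]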

    Read the article

  • How can I port forward over a VPN NAT?

    - by Charlie
    I have a multi-site VPN currently running on pfSense boxes, currently using OpenVPN; however, I can change the OS and VPN type if need be. The main router has a 10.13.0.0/16 subnet and a series of public IPs. A branch, for example, has a 10.12.1.0/24 subnet. How can I port forward (NAT) traffic arriving on a public IP of the main router to a server behind the NAT of the second router? So, for instance, port 95 on a public IP assigned to the main router forwards to 10.12.1.102 behind the other router. Is this even possible? Currently my setup works great, but only for internal traffic.
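
    This is generally doable as long as the branch subnet is reachable over the tunnel and replies route back the same way. On pfSense it is an ordinary port-forward (NAT) rule on the main site's WAN whose target happens to be an address in the remote 10.12.1.0/24 network; on a plain Linux router the equivalent would look roughly like the sketch below (the public IP and the main router's internal address are placeholders):

        # forward public port 95 to the host at the far end of the VPN (hypothetical addresses)
        iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 95 -j DNAT --to-destination 10.12.1.102:95
        # source-NAT toward the branch host so its replies return via this router,
        # in case the branch does not send internet-bound traffic back through the tunnel
        iptables -t nat -A POSTROUTING -d 10.12.1.102 -p tcp --dport 95 -j SNAT --to-source 10.13.0.1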

    Read the article

  • DNS no longer works after server reboot

    - by Burning the Codeigniter
    Strangely enough, when I reboot my Ubuntu 12.04 server the DNS no longer works, which makes the domain, and therefore my site, unreachable. Normally DNS should still work after a reboot, but this no longer happens. I use nginx to serve content, and nginx is already configured to work with my domains. What are the typical things I should check after a reboot, and how can I solve this issue? BIND, networking and resolvconf are already set to start when the server boots up. This is my output with dig:

        ; <<>> DiG 9.8.1-P1 <<>> mysite.com
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    This is my zone file:

        $ttl 38400
        mysite.com.       IN  SOA    ns1.mysite.com. webmaster.mysite.com. ( 1055026205 6H 1H 5D 20M )
        mysite.com.       IN  A      xx.xx.xx.xx   # Server IP
        *.mysite.com.     IN  A      xx.xx.xx.xx   # Server IP
        www.mysite.com.   IN  CNAME  mysite.com.
        ns1.mysite.com.   IN  A      xx.xx.xx.xx   # Server 2nd IP
        ns2.mysite.com.   IN  A      xx.xx.xx.xx   # Server 3rd IP
        mysite.com.       IN  NS     ns1.mysite.com.
        mysite.com.       IN  NS     ns2.mysite.com.
        mail.mysite.com.  IN  MX 1   mysite.com.

    This is the contents of /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 85.17.150.123
        nameserver 85.17.96.69
        nameserver 62.212.64.122
        search localdomain

    Querying each of those nameservers directly gives:

        ; <<>> DiG 9.7.3-P3 <<>> @85.17.150.123 mysite.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 24847
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available
        ;; QUESTION SECTION:
        ;mysite.com.  IN  A
        ;; Query time: 2145 msec
        ;; SERVER: 85.17.150.123#53(85.17.150.123)
        ;; WHEN: Mon Nov 5 16:31:32 2012
        ;; MSG SIZE rcvd: 30

        ; <<>> DiG 9.7.3-P3 <<>> @85.17.96.69 mysite.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 27879
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available
        ;; QUESTION SECTION:
        ;mysite.com.  IN  A
        ;; Query time: 949 msec
        ;; SERVER: 85.17.96.69#53(85.17.96.69)
        ;; WHEN: Mon Nov 5 16:32:59 2012
        ;; MSG SIZE rcvd: 30

        ; <<>> DiG 9.7.3-P3 <<>> @62.212.64.122 mysite.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 29293
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available
        ;; QUESTION SECTION:
        ;mysite.com.  IN  A
        ;; Query time: 825 msec
        ;; SERVER: 62.212.64.122#53(62.212.64.122)
        ;; WHEN: Mon Nov 5 16:33:39 2012
        ;; MSG SIZE rcvd: 30

    With Google DNS (8.8.8.8):

        ; <<>> DiG 9.7.3-P3 <<>> @8.8.8.8 mysite.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 38498
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;mysite.com.  IN  A
        ;; Query time: 3982 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Mon Nov 5 16:37:27 2012
        ;; MSG SIZE rcvd: 30
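
    All three listed resolvers answer REFUSED (recursion not available) and Google's resolver returns SERVFAIL for the zone itself, so there are two separate things to check after the reboot: whether the local BIND instance actually came back up and serves the zone, and whether resolv.conf points at a resolver that will do recursion for this host. A few hedged checks (the service may be called bind9 or named depending on how BIND was installed):

        # did named come back up, and is anything listening on port 53?
        service bind9 status
        netstat -tulpn | grep :53

        # does the local server answer authoritatively for the zone?
        dig @127.0.0.1 mysite.com A +short

        # look for BIND's own complaints during startup
        grep named /var/log/syslog | tail -n 20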

    Read the article

  • Why is connecting to external FTP sites slow on virtual 2008 R2 and not on host?

    - by subkamran
    I have Windows Server 2008 R2 installed as a virtual guest in my Windows 7 host via VirtualBox 4.0. I did this to move my development activity to a controlled environment that doesn't affect my host OS when I don't want to develop. The problem I have is that when I try to connect to my shared hosting FTP, it's slow as hell on the virtual OS but perfectly fast on the host. I tried:
        - disabling Windows Firewall
        - several different FTP clients
    Anyone else have this issue?
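
    If the guest uses VirtualBox's default NAT adapter, sluggish FTP control connections are often a NAT or DNS-lookup artifact rather than an FTP problem, and active-mode FTP in particular does not survive NAT well. Two hedged things to try from the host (the VM name and adapter name are placeholders; the options exist in VirtualBox 4.x):

        # let the NAT engine use the host's DNS resolver instead of proxying lookups itself
        VBoxManage modifyvm "Win2008R2-dev" --natdnshostresolver1 on

        # or skip NAT entirely and bridge the guest onto the LAN
        VBoxManage modifyvm "Win2008R2-dev" --nic1 bridged --bridgeadapter1 "Local Area Connection"

    Switching the FTP client to passive mode is also worth checking while the guest sits behind NAT.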

    Read the article

  • What is the recommended way of cloning virtual machines in VirtualBox?

    - by Sanoj
    Is there any recommended way of cloning virtual machines in VirtualBox? I would like to install an operating system once and then make several clones of it. I have tried export and import appliance, but I ran into some problems doing it that way; see Internet connection fails in Ubuntu on VirtualBox when virtual machine is created from “Import appliance”.
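
    Besides export/import of an appliance, VBoxManage can clone either the disk alone or, in newer releases, the whole machine; a sketch with assumed names:

        # clone just the virtual disk, then attach the copy to a freshly created VM
        VBoxManage clonehd "UbuntuBase.vdi" "UbuntuClone.vdi"

        # VirtualBox 4.1 and later can clone the entire machine in one step
        VBoxManage clonevm "UbuntuBase" --name "UbuntuClone" --register

    The networking failure described in the linked question is often Ubuntu's persistent-net udev rule pinning eth0 to the old MAC address; deleting /etc/udev/rules.d/70-persistent-net.rules in the clone and rebooting usually clears it.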

    Read the article

  • Apache with multiple domains, single IP, VirtualHost is catching the wrong traffic

    - by apuschak
    I have a SOAP web service I am providing on an Apache web server. There are 6 different clients (IPs) that request data, and 3 of them are hitting the wrong domain. I am trying to find a way to log which domain name the requests are coming from. Details:

        ServerA is the primary
        ServerB is the backup
        domain1.com - the domain the web service is on
        domain2.com - a separate domain that serves separate content on ServerB

    ServerA is standalone for now, with its own IP and DNS for domain1.com; this works for everyone. ServerB is a backup for the web service, but it already hosts domain2.com. I added entries into the Apache configuration file like:

        <VirtualHost *:443>
            ServerName domain2.com
            DocumentRoot /var/www/html/
            CustomLog logs/access_log_domain2443 common
            ErrorLog logs/ssl_error_log_domain2443
            LogLevel debug
            SSLEngine on
            ... etc SSL directives ...
        </VirtualHost>

    I have these for both 80 and 443 for domain1 and domain2, with domain1 being second. The problem is that when we switch DNS for domain1 from ServerA to ServerB, 3 out of the 6 clients show up in the debug logs as hitting domain2.com instead of domain1.com and fail their web service request, because domain2.com is first in the Apache configuration file and catches all requests that don't match other virtualhosts, namely domain1.com. I don't know if they are hitting www.domain1.com, domain1.com (although I added entries for both), the external IP address, or something else. Is there a way to see which URL they are hitting, not just the page request, or some other way to see why the first domain is catching traffic meant for the second listed domain? In the meantime, I've put domain1.com higher in the Apache configuration than domain2.com. Now it catches the requests for all clients and works; however, I don't know what it is catching and would like to make domain2.com the first entry again, with a correct entry for domain1.com for however they are hitting it. Thank you for your help! Andrew
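
    Two hedged suggestions. First, log what the clients actually send by recording the Host header next to the vhost that answered; a sketch:

        # client IP, the vhost Apache chose, and the Host: header the client sent
        LogFormat "%h vhost=%V host=\"%{Host}i\" \"%r\" %>s" vhostdiag
        CustomLog logs/vhost_diag_log vhostdiag

    Second, since these are SOAP clients on port 443, bear in mind that older client stacks (Java 6 and earlier, for example) send no SNI at all, and without SNI Apache always hands the connection to the first *:443 vhost regardless of which hostname was dialed; if that is what the three failing clients are doing, reordering the vhosts, as the asker has done, or giving the web service its own IP is the practical fix.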

    Read the article

  • Why Are SPF Records Failing?

    - by robobobobo
    OK, I've been going through various sites, resources and topics here trying to figure out what is wrong with my SPF records, but no matter what I do they don't seem to pass. Here's what I have:

        "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"

    I've tried multiple different tools to check my SPF records; some give me a pass, some don't. But I can't send mail to certain Google Apps accounts - it just bounces back all the time, which is very annoying. Anyone got any ideas? I have noticed that the source IP address is not one of the IPv4 addresses I've defined, but cPanel wouldn't let me add that address. Here's the result of the tests I'm getting back from port25.com. I'm running WHM, by the way, and have enabled SPF and DKIM.

        Summary of Results
          SPF check:          fail
          DomainKeys check:   neutral
          DKIM check:         pass
          Sender-ID check:    fail
          SpamAssassin check: ham

        Details:
        HELO hostname: server1.viralbamboo.com
        Source IP: 2a01:258:f000:6:216:3eff:fe87:9379
        mail-from: ###@viralbamboo.com

        SPF check details:
        Result: fail (not permitted)
        ID(s) verified: smtp.mailfrom=###@viralbamboo.com
        DNS record(s):
            viralbamboo.com. SPF (no records)
            viralbamboo.com. 13180 IN TXT "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"
            viralbamboo.com. AAAA (no records)
            viralbamboo.com. 13180 IN MX 0 viralbamboo.com.
            viralbamboo.com. AAAA (no records)

        DomainKeys check details:
        Result: neutral (message not signed)
        ID(s) verified: header.From=###@viralbamboo.com
        DNS record(s):

        DKIM check details:
        Result: pass (matches From: ###@viralbamboo.com).
        ID(s) verified: header.d=viralbamboo.com

        Canonicalized Headers:
            content-type:multipart/alternative;'20'boundary="4783D1BE-5685-41CF-B91B-1F15E91DD1E3"'0D''0A'
            date:Mon,'20'1'20'Jul'20'2013'20'21:30:47'20'+0000'0D''0A'
            subject:=?utf-8?Q?test?='0D''0A'
            to:"[email protected]?="'20''0D''0A'
            from:=?utf-8?Q?Rob_Boland_-_Viralbamboo?='20'<###@viralbamboo.com'0D''0A'
            mime-version:1.0'0D''0A'
            dkim-signature:v=1;'20'a=rsa-sha256;'20'q=dns/txt;'20'c=relaxed/relaxed;'20'd=viralbamboo.com;'20's=default;'20'h=Content-Type:Date:Subject:To:From:MIME-Version;'20'bh=CJMO7HYeyNVGvxttf/JspIMoLUiWNE6nlQUg5WjTGZQ=;'20'b=;
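
    The port25.com report above already shows the cause: the message left the server over IPv6 (Source IP 2a01:258:f000:6:216:3eff:fe87:9379), the domain has no AAAA records for the a mechanism to match, and the record only authorizes two IPv4 addresses, so -all rejects it. Either stop sending outbound mail over IPv6 or add the server's IPv6 address to the record; with the exact address from the report it would look roughly like this (confirm the address, or use the server's assigned IPv6 range instead):

        v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 +ip6:2a01:258:f000:6:216:3eff:fe87:9379 -all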

    Read the article

  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting, so that all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable). I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem but I'm mentioning here for completeness. My httpd.conf looks roughly like this:

        <VirtualHost *:80>
            ServerName external.example.com
            ServerAlias example.com

            # Serve all local content directly, reverse-proxy all unknown URIs.
            RewriteEngine On
            RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [L]
            RewriteRule ^/~ - [L]
            RewriteRule ^(.*)$ http://internal.example.com$1 [P]

            # Standard header rewriting.
            ProxyPassReverse / http://internal.example.com/foo/
            ProxyPassReverseCookieDomain internal.example.com external.example.com
            ProxyPassReverseCookiePath /foo/ /

            # Strip any Accept-Encoding: headers from the client so we can process the pages
            # as plain text.
            RequestHeader unset Accept-Encoding

            # Use mod_proxy_html to fix URLs in text/html content.
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://internal.example.com/foo/ /
            ProxyHTMLURLMap http://internal.example.com/foo /
            ProxyHTMLURLMap /foo/ /

            ## Use mod_substitute to fix URLs in CSS and Javascript
            #<Location />
            #    AddOutputFilterByType SUBSTITUTE text/css
            #    AddOutputFilterByType SUBSTITUTE text/javascript
            #    Substitute "s|http://internal.example.com/foo/|/|nq"
            #</Location>

            # Use mod_ext_filter to fix URLs in CSS and Javascript
            ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls"
            ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls"
            <Location />
                SetOutputFilter fixurlcss;fixurljs
            </Location>
        </VirtualHost>

    The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session:

        $ wget -O /dev/null -S http://external.example.com/include/jquery.js
        --2013-11-01 11:36:36--  http://external.example.com/include/jquery.js
        Resolving external.example.com (external.example.com)... 192.168.0.1
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 200 OK
          Date: Fri, 01 Nov 2013 15:36:36 GMT
          Server: Apache
          Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT
          ETag: "1d60026-187b8-4e9e0ec273e35"
          Accept-Ranges: bytes
          Vary: Accept-Encoding
          X-UA-Compatible: IE=edge,chrome=1
          Content-Type: text/javascript;charset=utf-8
          Connection: close
          Transfer-Encoding: chunked
        Length: unspecified [text/javascript]
        Saving to: `/dev/null'

            [ <=> ] 100,280     --.-K/s   in 0.005s

        2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success). Retrying.

        --2013-11-01 11:36:38--  (try: 2)  http://external.example.com/include/jquery.js
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 416 Requested Range Not Satisfiable
          Date: Fri, 01 Nov 2013 15:36:38 GMT
          Server: Apache
          Vary: Accept-Encoding
          Content-Type: text/html;charset=utf-8
          Content-Length: 260
          Connection: close

        The file is already fully retrieved; nothing to do.

    Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs; upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)

    Read the article

  • Safe to use high port numbers? (re: obscuring web services)

    - by sofakng
    I have a small home network and I'm trying to balance the need for security against convenience. The safest way to secure internal web servers is to only connect over a VPN, but this seems overkill just to protect a DVR's remote web interface (for example). As a compromise, would it be better to use very high port numbers (e.g. five digits, up to 65531)? I've read that port scanners typically only scan the first 10,000 ports, so using very high port numbers is a bit more secure. Is this true? Are there better ways to protect web servers (i.e. web GUIs for applications)?
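
    High ports only defeat the laziest scans; sweeping the whole 65,535-port range is quick for anyone actually interested, so this is obscurity rather than security. As a rough illustration with nmap (whose default profile covers only the most common ports):

        # default scan: roughly the 1,000 most common ports
        nmap dvr.example.home
        # full TCP range: one flag away
        nmap -p- dvr.example.home

    Restricting which source IPs may reach the port at the router, keeping the DVR firmware patched, and strong passwords buy more than a high port number; the VPN remains the robust option.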

    Read the article

  • .htaccess with addondomain and https ssl

    - by admon
    I have a main domain and an addon domain.

    1) When surfing to ftp.addondomain.com or mail.addondomain.com, for some reason it goes to the main domain. (Normally this should not be a problem, but I still want complete separation.) Do you know the syntax to redirect, in the .htaccess file, (.*).addondomain.com to addondomain.com, and where do I put the code: in the addon domain's .htaccess or in the main domain's .htaccess? I.e. any_words.addondomain.com should be forwarded to addondomain.com, so all of these: dsdhf.addondomain.com, ftp.addondomain.com, mail.addondomain.com, ... will be forwarded to addondomain.com (i.e. without the prefix).

    2) Same question for https://. The main domain has SSL; the addon domain does not. For some reason, when surfing to https:// addondomain.com you get http:// maindomain.com (the address bar shows https:// addondomain.com, but the pages you see are the main domain's). I would like that if a user surfs to https:// addondomain.com then (since there is no SSL for the addon domain) the user gets http:// addondomain.com, or alternatively gets an error message. I do not want them redirected to the main domain. Please, if you can, write what to add to the .htaccess and where to put it: in the addon domain's .htaccess or in the main domain's .htaccess. Thanks.
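
    For question 1, a hedged sketch of the wildcard-subdomain redirect; because the stray hostnames are currently being answered out of the main domain's document root, the rules generally need to live in the .htaccess of whichever docroot actually serves those requests (on many cPanel setups that is the main domain's), and the condition leaves every other hostname alone:

        RewriteEngine On
        # anything.addondomain.com -> addondomain.com, keeping the requested path
        RewriteCond %{HTTP_HOST} ^(.+)\.addondomain\.com$ [NC]
        RewriteRule ^(.*)$ http://addondomain.com/$1 [R=301,L]

    Question 2 usually cannot be solved in .htaccess alone: an https:// request for a name with no certificate of its own is answered by the server's default SSL vhost (and its certificate) before any .htaccess is read, so the wrong-site content or browser warning happens earlier in the chain.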

    Read the article

  • Mac Outlook showing all links in smart quotes?

    - by user2727128
    I was given the task of fixing my friend's email today and really don't know what the problem is. When an email is sent from his laptop (Mac) from Outlook, the email address link in the signature shows exactly like this: [email protected]<mailto:[email protected]>. Additionally, the website link displays like this: www.website.com<http://www.website.com>. And lastly, the image comes through as cid:randomstringofnumbers. When I sent him an email and he sent one back, it converted my signature to the same weird formatting. Plus, even in the header where it shows our emails, they are displaying the same way: [email protected]<mailto:[email protected]>. And the weirdest thing is that this problem seems to be "compounding". So when I scroll down to the last, most recent email in the thread I see this:
        www.website.com<http://www.website.com>
    the next email shows this:
        www.website.com<http://www.website.com><http://www.website.com>
    and the next this:
        www.website.com<http://www.website.com><http://www.website.com><http://www.website.com>
    This is happening to the emails too, everywhere. I'm thinking this might be something to do with smart quotes and the auto formatting, but I'm not sure. Could this be the problem? And if so, how do I fix it?

    Read the article

  • How do I change the space I allocated to my virtual hard drive in VirtualBox?

    - by Guest
    Hi, I have a Win7 x64 virtual machine running inside VirtualBox. When I first set up the system I gave the virtual hard drive 20 GB of space to work with, but I also set it to dynamically expand (or so I thought). Unfortunately I have run out of space and the drive is not expanding or changing, and I can't find a way to alter its size. Is there anything I can do in this situation? Thanks in advance.
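
    A dynamically allocated disk only grows up to the maximum chosen when it was created (here 20 GB); it never expands past that on its own. The usual route is to raise that maximum with VBoxManage and then extend the partition inside Windows; a sketch with an assumed file name and target size:

        # raise the disk's maximum size to 40 GB (the value is in MB); works on dynamically
        # allocated VDI images in VirtualBox 4.0 and later
        VBoxManage modifyhd "Win7x64.vdi" --resize 40960

    Afterwards, extend the C: volume from Disk Management inside the Win7 guest. For fixed-size disks or other formats, cloning to a new, larger disk with VBoxManage clonehd is the fallback.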

    Read the article

  • Should I be using www. when setting up virtual hosts on apache?

    - by MAZUMA
    Does it matter whether or not I include the www. subdomain in the file name when creating new virtual hosts on Apache? So, is this: /etc/apache2/sites-available/www.example.com better than this: /etc/apache2/sites-available/example.com? I would assume I'd need to a2ensite either www.example.com or example.com, depending on which method is used. This might be a fairly basic question, but I have no one else to ask and I want to do it right.
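
    The file name under sites-available is only a label; Apache picks a vhost by the ServerName and ServerAlias directives inside it, so either naming works as long as the vhost covers both hosts. A hedged sketch (paths are placeholders):

        # /etc/apache2/sites-available/example.com   (the file name itself is arbitrary)
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/example.com/public_html
        </VirtualHost>

    Then enable it using the same name you gave the file: a2ensite example.com followed by an Apache reload.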

    Read the article

  • VPS with Debian Squeeze cannot forward email - Name service error for name=gmail.com type=MX: Host not found, try again

    - by Domagoj
    I have Postfix set up on my Debian VPS. I can:
        - send emails
        - receive emails on my server
    But forwarding emails from my server to Gmail does not work! I configured Google's DNS through /etc/resolv.conf; I can ping google.com, and with dig I can also find Gmail's MX records. But when my server tries to forward email to Gmail (set up with /etc/aliases), I get the following error:

        postfix/smtp[20280]: 825E117BA8A80: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=40, delays=0/0.01/40/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)

    What am I missing? Any help will be greatly appreciated!
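
    On Debian, Postfix's smtp client normally runs chrooted (see the chroot column in /etc/postfix/master.cf), so it resolves names using the copy of resolv.conf inside the chroot rather than the freshly edited /etc/resolv.conf; that mismatch produces exactly this kind of "type=MX: Host not found" deferral. A hedged check and fix:

        # does the chroot copy match the real file?
        diff /etc/resolv.conf /var/spool/postfix/etc/resolv.conf

        # if not, copy it in and restart so the smtp client picks it up
        cp /etc/resolv.conf /var/spool/postfix/etc/resolv.conf
        /etc/init.d/postfix restart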

    Read the article

  • VM Virtual guest machine disk defrag improves performance, myth or reality?

    - by jafin
    When running a virtual VMware or Hyper-V guest, the advice typically given is to defragment the host and the virtual disk images in order to improve performance. Something like vmware-vdiskmanager -d <file.vmdk> works great. Yet I can't find any qualitative evidence suggesting that defragmenting inside the guest VM improves performance. Does anyone have advice, or evidence that doesn't come from a commercial defragger's whitepaper, suggesting that defragmenting inside the guest helps?

    Read the article

  • Is there any way to change the VirtualBox "snapshot" folder for an existing virtual machine?

    - by Richard J Foster
    I have a virtual machine which is currently using a folder on the C: drive to store its snapshots. I have copied the contents of the "Snapshots" folder to an alternate drive, but whenever I go into the General / Advanced settings section for that virtual machine and change the snapshot folder to the new location it resets back to the original location. What do I need to do to get VirtualBox to recognize the new location for the snapshot files?
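
    The GUI commonly refuses to move the snapshot folder while the machine still has snapshots or is running, which matches the "resets back" behaviour described. Two hedged alternatives, with the VM powered off and the VirtualBox GUI closed (the VM name and path are placeholders):

        # set the snapshot folder from the command line
        VBoxManage modifyvm "MyVM" --snapshotfolder "D:/VirtualBox/Snapshots/MyVM"

    Alternatively, edit the machine's .vbox file and change the snapshotFolder attribute on the Machine element, then make sure the existing snapshot files really are in the new location before starting the VM again.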

    Read the article

  • Get the Google Analytics page views for a directory [closed]

    - by Michael Morisy
    I have a blog network set up with the following schema.

    Blogs:
        example.com/blog-1/
        example.com/blog-2/
        example.com/blog-3/

    Posts:
        example.com/blog-1/great-post.html
        example.com/blog-1/cool-post.html
        example.com/blog-1/alright-post.html
        example.com/blog-2/awesome-post.html
        example.com/blog-2/interesting-post.html
        example.com/blog-2/dull-post.html
        example.com/blog-3/another-post.html
        example.com/blog-3/favorite-post.html

    I'm trying to get active page views in Google Analytics for each blog, i.e. everything under example.com/blog-1/*. To do this, I created an advanced segment in Analytics: Page starts with /blog-1/. This works, but it also pulls in any page on my site that links to that blog. Any suggestions for getting just the pages within those blogs?

    Read the article
