Search Results

Search found 60513 results on 2421 pages for 'parse com'.


  • Vacation scheduler/viewer

    - by Norfeldt
    I'm looking for a solution that lets multiple people plan and announce their vacation by putting it in their electronic calendar and inviting a dedicated "robot" email address. On the other side I should be able to get a quick overview of each person's vacation and make a printout that I can put on a board. Example: John puts his winter vacation for week 7 into his calendar and invites vacation@planner.com. Ben does the same thing for weeks 4 and 5 and invites vacation@planner.com. Dilbert hosts the vacation@planner.com mailbox and prints out an overview for the next 3 months. Each person's vacation is identified by name and/or colour on the printout. I would like to do this with standard business software like Outlook 2010, without installing too much extra software, but at the same time it should be quick and easy to make the printouts without too much fiddling. Am I dreaming?
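
    One way to prototype the "robot mailbox" side, independent of Outlook, is a small script that reads the .ics invitations sent to vacation@planner.com and prints one line per person for the board. A minimal sketch in Python (standard library only); the invites/ folder and the idea that each invitation has been saved as a plain .ics file are assumptions for illustration:

        import glob, re

        # Tiny ICS reader: pull ORGANIZER, DTSTART and DTEND out of each saved
        # invitation and print one overview line per vacation entry.
        def read_ics(path):
            text = open(path, encoding="utf-8", errors="ignore").read()
            def field(name):
                m = re.search(r"^" + name + r"[^:]*:(.+)$", text, re.MULTILINE)
                return m.group(1).strip() if m else "?"
            return field("ORGANIZER"), field("DTSTART"), field("DTEND")

        for path in sorted(glob.glob("invites/*.ics")):   # hypothetical folder of saved invites
            who, start, end = read_ics(path)
            print(f"{who:40s} {start} -> {end}")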

    Read the article

  • Ubuntu firefox: some web-pages stuck on loading

    - by kristaps.skujins
    Just installed Ubuntu 10.04 beta; the installation went fine and all programs run smoothly, except for one weird network problem. Some sites simply never finish loading. For example, google.com and youtube.com work well, but ubuntu.com and many others do not open at all, or only load partly. One thing I noticed is that on all of those pages the Firefox status bar keeps showing a "Looking up www.google-analytics.com" (or similar remote resource) message (even on this page, although it somehow loaded and works). I should mention that I tried opening those pages under Windows on this same machine and they opened without problems, so I am guessing it has to be some sort of network configuration problem on Ubuntu. What could cause such a problem?
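
    A quick way to test whether the hang really is the google-analytics.com lookup is to time that DNS resolution and, as a temporary workaround only, short-circuit the host locally. A rough sketch (this hides the symptom, it is not a fix for the underlying DNS problem):

        # how long does the lookup actually take?
        time nslookup www.google-analytics.com

        # temporary workaround: resolve the host locally so pages stop waiting on it
        echo "127.0.0.1 www.google-analytics.com" | sudo tee -a /etc/hosts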

    Read the article

  • PHP does not allow https connections

    - by FunkyChicken
    Hey guys, I'm running PHP 5.4.0 and I cannot use cURL nor file_get_contents() for HTTPS connections. Using curl in a PHP script shows:
        [root@ns1]# /opt/php/bin/php -q test.php
        * About to connect() to www.google.com port 443
        * Trying 74.125.225.210... * connected
        * Connected to www.google.com (74.125.225.210) port 443
        * successfully set certificate verify locations:
        * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
        Segmentation fault
    Using file_get_contents() shows:
        Warning: file_get_contents(): Unable to find the wrapper "https" - did you forget to enable it when you configured PHP? in /test.php
    OpenSSL and OpenSSL-devel are installed, and PHP is also configured with cURL support for SSL connections. See: http://i.imgur.com/ExAIf.png Any idea what might be going wrong? Further info: CentOS 5.8 (64-bit) with Nginx 1.2.4
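
    The "Unable to find the wrapper https" warning usually means this particular PHP build was compiled without OpenSSL support, so one avenue is to reconfigure and rebuild. A sketch of the relevant flags, assuming the /opt/php prefix used above (keep whatever other options the current build had):

        cd php-5.4.0
        ./configure --prefix=/opt/php --with-openssl --with-curl
        make && make install

        # verify: should print bool(true) once the https wrapper is registered
        /opt/php/bin/php -r 'var_dump(in_array("https", stream_get_wrappers()));'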

    Read the article

  • Forward requests to IIS Application/Folder to Apache server on another port

    - by TheGwa
    I have found many questions and answers for ways of doing this using ISAPI filters or ARR and URL Rewrite, but none are clear and concise, and I am sure many people have this issue. I am looking for a best-practice, step-by-step solution to the following scenario: I have a development server accessible externally via a specific port for testing, e.g. rnd.domain.com:8888, so there is one port in and out of this machine accessible to the world. On this server I have a number of Apache or other servers using specific ports such as 8080. IIS is bound to port 80 locally as well as 8888 to receive external requests, and works perfectly. I would like to use an application (folder) in IIS such as rnd.domain.com:8888/mapserver to map to the local Apache server in both directions. The same solution must apply in production, where the domain is mapped to port 80, e.g. production.domain.com/mapserver maps to 8080 on the production server.
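
    For the ARR + URL Rewrite route, the core of it is a proxy rewrite rule on the IIS site that forwards everything under /mapserver to the local Apache port (ARR must be installed and its proxy feature enabled). A rough sketch of the web.config fragment; the rule name and the 8080 backend port are assumptions:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="mapserver-proxy" stopProcessing="true">
                <match url="^mapserver/(.*)" />
                <!-- forward to the local Apache instance -->
                <action type="Rewrite" url="http://localhost:8080/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>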

    Read the article

  • Apache mod_proxy to another server

    - by trobrock
    I am using the proxy_balancer in Apache2 to proxy requests for a Rails application to the Rails server on the port the application is running on. This is how it's set up:
    Rails server: Mongrel running on port 8000; when accessing the URL directly at http://rails_server:8000 the site loads fine.
    Apache server: conf file for the site:
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName myserver.com
            ServerAlias application.myserver.com
            <Proxy balancer://application_cluster>
                Allow from localhost
                BalancerMember http://ip.to.server:8000 retry=10
            </Proxy>
            ProxyPass / balancer://application_cluster
        </VirtualHost>
    The problem I am having is that going to http://rails_server:8000 works fine, but going to http://application.myserver.com loads the right content yet displays all the HTML as text instead of rendering it.
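
    HTML rendered as plain text is usually a Content-Type problem, so a useful first check is to compare the header returned by Mongrel directly with the one returned through the Apache balancer. A quick diagnostic sketch:

        # what the Rails server sends directly
        curl -sI http://rails_server:8000/ | grep -i content-type

        # what arrives through the Apache proxy
        curl -sI http://application.myserver.com/ | grep -i content-type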

    Read the article

  • Rewrite for robots.txt and favicon.ico [closed]

    - by BHare
    I have set up some rules so that subdomains (my users) default to where I have located the robots.txt, favicon.ico and crossdomain.xml. So if a user creates a site, say testing.mywebsite.com, and they don't make their own favicon.ico at testing.mywebsite.com/favicon.ico, it will use the favicon.ico I have in /misc/favicon.ico. This works perfectly, but it doesn't work for the main website: if you go to mywebsite.com/favicon.ico it checks whether "/" exists, which it does, and then never redirects to /misc/favicon.ico. How can I get both cases to redirect to /misc/favicon.ico? (One possible adjustment is sketched after the rules below.)
        # If crossdomain.xml (openpalace file), favicon.ico or robots.txt doesn't exist on their
        # side, then redirect to the site's own copy just to have something to serve.
        RewriteCond %{REQUEST_URI} crossdomain.xml$
        RewriteCond ^(.+)crossdomain.xml !-f
        RewriteRule ^(.*)$ /misc/crossdomain.xml [L]
        RewriteCond %{REQUEST_URI} favicon.ico$
        RewriteCond ^(.+)favicon.ico !-f
        RewriteRule ^(.*)$ /misc/favicon.ico [L]
        RewriteCond %{REQUEST_URI} robots.txt$
        RewriteCond ^(.+)robots.txt !-f
        RewriteRule ^(.*)$ /misc/robots.txt [L]
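
    One common way to express "serve /misc/favicon.ico unless the requested file itself exists" is to test %{REQUEST_FILENAME} instead of trying to match the pattern in the RewriteCond, which also covers the bare mywebsite.com/favicon.ico case. An untested sketch, shown for favicon.ico only (the other two files follow the same pattern):

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^favicon\.ico$ /misc/favicon.ico [L]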

    Read the article

  • Files downloaded via Firefox and curl have different sizes

    - by Arash Mousavi
    When I download a file from this link with Firefox its size is 74580 B, but when I download it with curl, sending exactly the same headers Firefox sent, its size is 79891 B (I copied all the headers from Firefox and pasted them into the curl command). What is the problem? If you need any additional data, ask me in a comment. My curl command:
        curl --header 'Host: members.tsetmc.com' \
             --header 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0' \
             --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
             --header 'Accept-Language: en-US,en;q=0.5' \
             --header 'Referer: http://www.tsetmc.com/Loader.aspx?ParTree=15131F' \
             --header 'Cookie: ASP.NET_SessionId=pwzbckbdpjlzqj45vcdbd455' \
             --header 'Connection: keep-alive' \
             'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0' \
             -o 'MarketWatchPlus-1393_3_14.xlsx' -L
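
    One difference that often produces mismatching sizes is compression handling: Firefox sends Accept-Encoding and transparently decodes the response, while curl writes out exactly the bytes the server returned. A hedged thing to try is repeating the same command (with all the headers above) plus curl's --compressed flag, sketched here in shortened form:

        # ask for gzip/deflate and let curl decode it before writing the file
        curl --compressed -L \
             'http://members.tsetmc.com/tsev2/excel/MarketWatchPlus.aspx?d=0' \
             -o 'MarketWatchPlus-1393_3_14.xlsx'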

    Read the article

  • Redirecting to Login page in apache

    - by Shailesh Sutar
    I am working on OTRS and want the OTRS login page to be served at otrs.mydomain.com. The machine runs CentOS release 6.2 (Final). Currently I am accessing it using otrs.mydomain.com/otrs/customer.pl for the customer login and otrs.mydomain.com/otrs/index.pl for the admin login. I changed DocumentRoot to /opt/otrs but it is not working as it should. OTRS is installed in /opt/otrs/ and I am using Apache (Server version: Apache/2.2.15 (Unix)). Now I am stuck.
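
    Rather than changing DocumentRoot, a dedicated virtual host that keeps the standard OTRS aliases and simply redirects the bare hostname to customer.pl is often enough. A rough sketch, assuming the stock OTRS paths under /opt/otrs (check them against your scripts/apache2-httpd.include.conf):

        <VirtualHost *:80>
            ServerName otrs.mydomain.com
            ScriptAlias /otrs/     "/opt/otrs/bin/cgi-bin/"
            Alias       /otrs-web/ "/opt/otrs/var/httpd/htdocs/"
            # send visitors of the bare hostname straight to the customer login
            RedirectMatch ^/$ /otrs/customer.pl
        </VirtualHost>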

    Read the article

  • How can I cache more than one web site on the same backend (web) server with Varnish?

    - by Kerberos
    I have one IIS web server sitting behind Varnish. There are several web sites on IIS, each identified by its host header, and all of them are published on port 80. Can I cache all of the web sites with Varnish using code like the one below?
        backend cacheWebSite { .host = "192.168.0.1"; .port = "80"; }
        sub vcl_recv {
            if (req.http.host == "www.example1.com") { set req.backend = CacheWebSites; }
            if (req.http.host == "www.example2.com") { set req.backend = CacheWebSites; }
            if (req.http.host == "www.example3.com") { set req.backend = CacheWebSites; }
        }
    I can't test this code; it is just a scenario. Thank you in advance for your help.
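
    Since all three sites live on the same IIS instance and are distinguished only by host header, a single backend is enough and Varnish passes the Host header through to IIS unchanged. A sketch with the backend name made consistent (Varnish 2.x/3.x syntax, untested):

        backend iis {
            .host = "192.168.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.http.host == "www.example1.com" ||
                req.http.host == "www.example2.com" ||
                req.http.host == "www.example3.com") {
                set req.backend = iis;
            }
        }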

    Read the article

  • Sendmail Alias for Nonlocal Email Account

    - by Mark Roddy
    I admin a server which runs a number of web applications for a software dev team (source control, bug tracking, etc.). The server has sendmail running solely as a transport to the departmental email server, over which I have no control. We have someone who is still in the department but no longer on the dev team, so I need to configure the transport agent to redirect all outgoing email for them (which would be coming from these applications) to the person who has taken their place. I added an entry in /etc/aliases like this:
        olduser@nonlocalhost.com: newuser@nonlocalhost.com
    But when I run /etc/init.d/sendmail newaliases I get the following error:
        /etc/mail/aliases: line 32: olduser@nonlocalhost.com... cannot alias non-local names
    So clearly I'm doing something I shouldn't. Is there a way to get aliases to work with non-local names, or alternatively is there a way to accomplish my goal of redirecting outgoing mail for this user to another one? Technical specs if they matter: Ubuntu 6.06, sendmail 8.13 (Ubuntu-provided package).
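
    The aliases database only accepts local names on the left-hand side, so one approach that may fit here is to key the alias on the bare local part; sendmail then matches it for mail addressed to that user at any domain it treats as local. A sketch (run newaliases afterwards; full address-to-address mapping would need the virtusertable feature instead):

        # /etc/aliases -- left side must be a local name, right side may be remote
        olduser:    newuser@nonlocalhost.com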

    Read the article

  • How can I copy a link from Google without opening the link and without the "Google stuff" in the URL? [closed]

    - by John Isaiah Carmona
    I want to copy a link in Google without opening that link and without the "Google stuff". When I use my browser by right-clicking the link and selecting Copy Link Location, it copies a very long link because of the Google stuff. http://www.google.com.ph/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CBwQFjAA&url=http%3A%2F%2Fdownload.microsoft.com%2Fdownload%2FC%2F0%2FA%2FC0AEF0CC-B969-406D-989A-4CDAFDBB3F3C%2FWin8_UXG_RTM.pdf&ei=1bWHULCyEZGQiQfl_IGIDA&usg=AFQjCNEtK1uai68ZKixTovFm2bwe7C9LGg&sig2=cPFFl4ARTTr7xHTHcr5k8A I just want the download.microsoft.com/.../C/0/A/.../Win8_UXG_RTM.pdf URL, but I can't see it in my browser even after opening the site with Google.
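
    If copying the long link is unavoidable, the real target is carried in the url query parameter and can be decoded mechanically. A small sketch in Python:

        from urllib.parse import urlparse, parse_qs

        google_link = "http://www.google.com.ph/url?sa=t&url=http%3A%2F%2Fdownload.microsoft.com%2F...%2FWin8_UXG_RTM.pdf&ei=xyz"
        # parse_qs splits the query string and percent-decodes each value
        target = parse_qs(urlparse(google_link).query)["url"][0]
        print(target)   # the direct download.microsoft.com URL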

    Read the article

  • IE8/IE7/IE6/IE5 on WinXP Use The Wrong Certificate

    - by Marco Calì
    For some reason IE8/IE7/IE6/IE5 on Windows XP, instead of using the certificate listed in the nginx config for the website, uses another certificate that belongs to other websites. Checking the nginx config file for the website, everything is fine, and confirming this, all the other browsers (Chrome/Firefox/Safari/IE9) use the correct certificate. This is the nginx configuration for the app:
        server {
            listen 80;
            listen 443 ssl;
            server_name mydomain.com;
            ssl_certificate     /root/certs/mydomain.com/mydomain.bundle.crt;
            ssl_certificate_key /root/certs/mydomain.com/mydomain.key;
            access_log /opt/webapps/cs_at/logs/access.log;
            location / {
                add_header P3P 'CP="CAO PSA OUR"';
                proxy_pass http://127.0.0.1:20004;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
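
    Windows XP's TLS stack (used by every IE version on XP) does not send SNI, so nginx can only hand those clients the certificate of whichever server block is the default for port 443 on that IP. A sketch of making this site the default, assuming no other site on the same IP needs to be the default:

        server {
            listen 80;
            listen 443 ssl default_server;   # non-SNI clients (IE on XP) now get this certificate
            server_name mydomain.com;
            ssl_certificate     /root/certs/mydomain.com/mydomain.bundle.crt;
            ssl_certificate_key /root/certs/mydomain.com/mydomain.key;
            # location block as in the configuration above
        }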

    Read the article

  • Which Excel formats are (most) compatible with LibreOffice and Google Docs?

    - by iconoclast
    I use Excel (and occasionally Numbers), but I want to be able to share with users of Google Docs and LibreOffice (and I may want to switch in the future). What's the most compatible format to save my Excel spreadsheets in? I'm asking as a question here rather than merely Googling for a list of formats that LibreOffice and GoogleDocs support (although I'm doing that too, and will post the answer if no one else does) because there are likely to be hidden "gotchas" that only someone who has experience using all of the above applications is going to know about. Answers that include personal experience will be preferred over those that only post a link to the relevant facts on google.com and libreoffice.com. Oh, and of course the other reason I'm asking the question is because it's good to have this info readily available on SuperUser.com for anyone else who wants to know the same thing.

    Read the article

  • 301 redirect, canonical question

    - by Dave
    I've designed my own 'latest news' page for my site and I'm trying to keep the URLs clean. For example, a URL should look like this: http://www.domain.com/21/this-is-a-clean-url/ When someone links to the article, they sometimes mess it up and use: http://www.domain.com/21/this-is-a-clean-url/#random-hash-tag So what I have been doing is looking for "http://www.domain.com/21", 301 (Moved Permanently) redirecting to the proper URL, and adding a canonical meta tag for it. Is this going overboard? Should I instead be using a 302 (Found) header and just let the canonical tag tell search engines what the proper URL for the article is? What is the best way of handling this?
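
    For reference, a URL fragment (#random-hash-tag) is never sent to the server, so only genuinely different paths ever need the 301; the canonical tag itself is cheap to emit on every article page regardless of which redirect policy is chosen. A one-line sketch of what each article's <head> would carry:

        <link rel="canonical" href="http://www.domain.com/21/this-is-a-clean-url/" />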

    Read the article

  • Running PHPmyAdmin on Nginx, port 8080 passed to varnish not working well!

    - by amrnt
    I installed Nginx, Varnish and PHP-FPM, then installed phpMyAdmin and made a virtual host for it:
        server {
            listen 8080;
            server_name phpmyadmin.Domain.com;
            access_log /var/log/phpmyadmin.access_log;
            error_log  /var/log/phpmyadmin.error_log;
            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include /opt/nginx/conf/fastcgi_params;
            }
        }
    When I go to phpmyadmin.Domain.com it works as expected, but after submitting the username/password it redirects me to phpmyadmin.Domain.com:8080/index.php?... and I get a "page cannot be found" response as well. What can I do?
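
    phpMyAdmin builds its post-login redirect from what it believes its own URL is, so when the public hostname and the backend port differ it can help to pin that URL explicitly. A sketch for config.inc.php (the option name is from the phpMyAdmin 3.x documentation; adjust if your version differs):

        <?php
        // force login redirects to the public-facing URL instead of the :8080 backend
        $cfg['PmaAbsoluteUri'] = 'http://phpmyadmin.Domain.com/';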

    Read the article

  • Apache trailing slash added to files problem

    - by Francisc
    Hello! I am having a problem with Apache. What it does is this: take an /index.php file containing an <img> tag whose src is set to the relative path myimg.jpg, both in the root of my server. So www.mysite.com would show the image, as would www.mysite.com/index.php. However, if I access www.mysite.com/index.php/ (with a trailing slash) it does the odd thing of executing the index.php code as if it were inside an index.php folder (e.g. /index.php/index.php), thus not showing the image anymore. This is a simple example that's easy to solve with absolute addressing etc.; the problem I am getting from this is a security one that's not so easily fixed. So, how can I get Apache to return a 403 or 404 when files are accessed "as folders"? Thank you.
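
    What you are describing is Apache's PATH_INFO handling: /index.php/whatever still maps to /index.php, with the trailing part passed along as extra path info. Apache can be told to refuse such requests with a 404; a sketch for the server/vhost config (or .htaccess, if AllowOverride permits it):

        # reject requests that tack extra path segments onto an existing file
        AcceptPathInfo Off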

    Read the article

  • disable browser localization

    - by broiyan
    How do I get the websites I visit to stop localizing their language, presumably according to my IP location? This is a website-specific issue because, for example, economist.com and superuser.com do not do it, but Google Checkout and craigslist.org do. Is there a way to set up Ubuntu and Firefox so that English will always be used for all web pages displayed? Edit: Of course many web pages have a link to an English version, but sometimes they don't. Such links usually appear on the root resource, but sometimes I see non-English languages on child resources where such links do not appear. Example: most Blogger.com blogs appear in English, but when I go to the blogger's profile ("view my complete profile") it appears in another language that matches my geographic location.
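
    Sites that honour the browser's Accept-Language header (rather than IP geolocation) can at least be pinned to English via Firefox's language preference. A sketch of the user.js / about:config entry; sites that localize purely by IP will ignore it:

        // ~/.mozilla/firefox/<profile>/user.js
        user_pref("intl.accept_languages", "en-US, en");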

    Read the article

  • Mail refused on port 25

    - by shantanuo
    When I try to send a mail from my Linux (CentOS) server, the exit status is 0 but the mail never reaches its destination. The /var/log/maillog file has an entry something like this:
        Mar 18 06:33:01 app11 postfix/qmgr[22454]: F18FD9F6074: to=<[email protected]>, relay=none, delay=0.01, delays=0/0/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to alt4.gmail-smtp-in.l.google.com[74.125.45.27]: Connection refused)
    Am I blocked by Google? I tried to send a mail to some other mail server and got a similar result:
        Mar 18 06:33:01 app1 postfix/smtp[15460]: connect to acsinet11.xxx.com[111.222.333.444]: Connection refused (port 25)
    How do I correct this problem?
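
    "Connection refused" to more than one unrelated destination on port 25 usually points at outbound filtering (by the host's firewall or the provider) rather than a Google-side block, so a quick probe from the server helps narrow it down. A sketch:

        # does anything answer on port 25 from this host?
        nc -vz gmail-smtp-in.l.google.com 25
        nc -vz aspmx.l.google.com 25
        # if every destination is refused or times out, check iptables and the
        # provider's policy on outbound port 25 before suspecting the remote servers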

    Read the article

  • How to set up WordPress high availability

    - by Ketam
    I have installed Galera Cluster on 3 cluster nodes + 1 management node. I want to set things up like this: Server1: home (www.domain.com); Server2: bbPress/forum (the Forum tab menu forwards to forum.domain.com); Server3: BuddyPress activity (the Social tab menu forwards to social.domain.com). The purpose of doing this is to distribute my resources and load-balance at the same time. However, I am having difficulty setting up Apache load balancing/mod_proxy/clustering or anything else suitable for highly available WordPress. Any suggestion or solution for making WordPress highly available, or pointers on how to do it? Another question: I tried copying the whole set of WordPress files & folders to Server2, connecting to the local database (same data inside, since it is already in the Galera Cluster), but the page is blank. Any advice? OS: CentOS 6.2. Thanks in advance.
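
    For the front-end piece, Apache's mod_proxy_balancer can spread requests across the three web nodes while Galera keeps the database consistent. A rough sketch of the balancer definition (hostnames are placeholders, and WordPress additionally needs wp-content/uploads shared or synced between the nodes):

        <Proxy balancer://wpcluster>
            BalancerMember http://server1.internal:80
            BalancerMember http://server2.internal:80
            BalancerMember http://server3.internal:80
        </Proxy>
        ProxyPreserveHost On
        ProxyPass        / balancer://wpcluster/
        ProxyPassReverse / balancer://wpcluster/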

    Read the article

  • SSH authentication works unless run from a script?

    - by awright418
    I have set up my server to allow key-pair authentication by following instructions similar to what is found in this post. As far as I can tell, that is working correctly. If I do the following, for example, it works correctly:
        ssh myusername@mydomain.com
    It will NOT prompt me for a password, which is what I want to happen. However, if I write a small bash script like this:
        #!/bin/bash -x
        ssh myusername@mydomain.com
    and execute it with:
        sudo ./mytestscript.sh
    ...it will prompt me with:
        myusername@mydomain.com's password:
    What am I doing wrong? I need to be able to log in from within my script without being prompted for a password!
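
    Under sudo the script runs as root, so ssh looks for keys in root's ~/.ssh rather than in yours, which is why the password prompt comes back. Two hedged ways around it, sketched:

        #!/bin/bash -x
        # option 1: run the ssh step as the user who invoked sudo
        sudo -u "$SUDO_USER" ssh myusername@mydomain.com

        # option 2: point the root-run ssh at your private key explicitly
        ssh -i /home/myusername/.ssh/id_rsa myusername@mydomain.com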

    Read the article

  • How to receive mail in Qmail?

    - by Ivan
    I have a server that uses qmail. It is installed by default and is supposed to work. I've created a new domain and a new user (vadddomain + vadduser) without problems, but when I send an email from Gmail to webmaster@domain.com (the address I've created) it simply disappears. Yet if I connect to the SMTP server directly (telnet domain.com 25) and post an email, it arrives in the user's queue. What's happening?! Note: if I try to access my user through telnet domain.com 110, it says my password is not correct, even though it's the same one I used when creating the user with vadduser.
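
    Since direct SMTP on port 25 works, mail from Gmail is probably never reaching this box at all, which makes the domain's MX record the first thing to check. A quick sketch:

        # where does the outside world deliver mail for this domain?
        dig +short MX domain.com

        # and does that MX name resolve to this server's public IP?
        dig +short $(dig +short MX domain.com | awk '{print $2}')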

    Read the article

  • How to block spam site republishing my content

    - by Fo.
    I noticed today that the Google search results show some spam copies of one of my sites. The URL looks something like this: http://[subdomain].spamsite.com/www.example.com ...where example.com is my site. In my Apache access logs I'm noticing several lines like the following whenever I load the above URL:
        127.0.0.1 - - [219/Oct/2012:19:27:34 +0000] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"
    The spammer's site shows an exact, up-to-date copy of my site, so I think they are pulling in live data. Any idea how I can block this traffic?
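
    If the copy really is pulled live, one hedged countermeasure is to refuse any request whose Host header is not your own domain, which stops naive proxies and scrapers that fetch the site by IP or by their own hostname. A sketch for the site's Apache config or .htaccess:

        RewriteEngine On
        # refuse requests that do not ask for our own hostname
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
        RewriteRule ^ - [F]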

    Read the article

  • Installing MFC (vs2005) application on Windows 2008 R2 64

    - by olich
    I've built an application that runs on Windows 2003; it is an old-style MFC application. Today I need to install the application on a Windows 2008 R2 64-bit system. I get failures during installation and the application does not run. The application is built with Visual Studio 2005 and uses COM objects. The MSI registers the objects but fails with the error code HRESULT -2147010895. Any idea why the COM registration fails? I've tried to install the "Microsoft Visual C++ 2005 Redistributable Package (x86)" but it doesn't help. I've tried to register the COM objects with regsvr32 after the installation but sadly that doesn't help either. I've tried to install the application on Windows 2008 R2 32-bit, and it works perfectly. I am quite new to 64-bit systems, so any help will be appreciated. tia olich
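
    On 64-bit Windows the 32-bit and 64-bit COM registrations are kept separately, and a 32-bit COM DLL has to be registered with the 32-bit regsvr32 from SysWOW64; running the default 64-bit one fails. A sketch worth trying from an elevated command prompt (the DLL path is a placeholder):

        rem register the 32-bit COM server with the 32-bit registrar
        C:\Windows\SysWOW64\regsvr32.exe "C:\Program Files (x86)\MyApp\MyComServer.dll"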

    Read the article

  • Bash: Quotes getting stripped when a command is passed as argument to a function

    - by Shoaibi
    I am trying to implement a dry-run mechanism for my script and am facing the issue of quotes getting stripped when a command is passed as an argument to a function, resulting in unexpected behavior.
        dry_run () {
            echo "$@"
            #printf '%q ' "$@"
            if [ "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }
        email_admin() {
            echo " Emailing admin"
            dry_run su - $target_username -c "cd $GIT_WORK_TREE && git log -1 -p|mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }
    Output is:
        su - webuser1 -c cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' user@domain.com
    Expected:
        su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' user@domain.com"
    With printf enabled instead of echo:
        su - webuser1 -c cd\ /home/webuser1/public_html\ \&\&\ git\ log\ -1\ -p\|mail\ -s\ \'Git\ deployment\ on\ webuser1\'\ user@domain.com
    Result:
        su: invalid option -- 1
    That shouldn't be the case if the quotes had remained where they were inserted. I have also tried using "eval", without much difference. If I remove the dry_run call in email_admin and then run the script, it works great.
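
    The quotes are consumed by the shell at the call site, so they can never be recovered inside the function; what is preserved is the argument boundaries, and "$@" executes them correctly even though echo's output looks unquoted. A sketch of a dry_run that prints a copy-pasteable rendering with printf %q and still executes the untouched arguments:

        dry_run () {
            # print a re-usable rendering of the command, one quoted word per argument
            printf '%q ' "$@"; printf '\n'
            [ -n "$DRY_RUN" ] && return 0
            "$@"            # the original argument boundaries are intact here
        }

        dry_run su - "$target_username" -c \
            "cd $GIT_WORK_TREE && git log -1 -p | mail -s '$mail_subject' $admin_email"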

    Read the article

  • Make Exchange 2007 use the correct SSL certificate

    - by Neil
    I have an SBS 2008 server, contososerver.contosodomain.local, which is externally accessible at the domain remote.contoso.com, with an SSL certificate for the external domain that we installed using the SBS 2008 wizard. This works great for OWA because IIS serves the remote.contoso.com certificate. I also want to turn on external POP3/IMAP4/SMTP; however, when I try, I get served the internal certificate that SBS generated automatically (using its internal CA), which has the alternate names remote.contoso.com, contososerver.contosodomain.local and contososerver. I tried removing this certificate from Exchange, but it won't let me because it needs it for its internal receive connector. So how do I tell Exchange 2007 to use the real certificate for external POP3/IMAP4/SMTP?
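
    In Exchange 2007 the services a certificate is bound to are assigned per certificate with Enable-ExchangeCertificate, so the remote.contoso.com certificate can be enabled for POP/IMAP/SMTP while the self-signed one keeps satisfying the internal receive connector. A sketch from the Exchange Management Shell; the thumbprint is a placeholder you would read from the first command:

        Get-ExchangeCertificate | fl Thumbprint,Subject,Services
        Enable-ExchangeCertificate -Thumbprint <thumbprint-of-remote.contoso.com-cert> `
            -Services POP,IMAP,SMTP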

    Read the article
